Ferment AI comes out of artificial intelligence work at UKAI Projects. If you have been added to this mailing list in error, we apologize; please unsubscribe and/or let us know.
Artificial intelligence (AI) refers to a broad field devoted to researching and developing automated heuristics for solving problems. AI has greatly expanded the range of questions we can answer and the speed at which we can do so.
An algorithm is a set of instructions or processes to follow when performing a task or making a decision, and algorithms offer a promise of certainty in a world sorely lacking in it. These recipes, as acted upon by ever more advanced forms of AI and machine learning, are present in almost all aspects of life today and have had a profound effect on health, transportation, finance, food, entertainment, and anything else you might imagine. We are only just beginning to understand their tremendous power, as well as the opportunities and threats they pose.
We once wrote algorithms containing instructions that a human could understand and explain. Over time, the capacity of human-written algorithms to order the world was overwhelmed by the data created and the complexity of the systems involved. Identifying one fraudulent transaction out of millions, recommending the right video for a user, or regulating shipping traffic is beyond the ability of human beings to administer effectively at the speed and scale to which we have grown accustomed.
Several advances have increased the ability of algorithms to administer our world. Access to huge caches of data—captured through sensors and other tracking techniques, only some of it voluntarily recorded—serves as source material for new machine-learning methods, enabled by faster and cheaper computers. Algorithms are evolving well beyond the capacity of human beings to unravel, and their pace is only accelerating.
For some, this is good news. Algorithmic solutions will let us outrun the problems we create because of our collective commitment to growth and efficiency. Others note that biases in society, and therefore in the data, will create solutions that exacerbate systemic inequality.
A less appreciated concern is how the hegemony of this technology will shape and limit the experiences we have as human beings. Our culture is being profoundly affected by the increasing reach of AI. This will have consequences for the ideas we can generate to solve problems as they emerge.
[Image: Prometheus having his liver eaten, on a Greek postage stamp]
An initial optimism about AI has been met by calls for more critical reflection on what these algorithms do, for whom they do it, and how data is gathered and used in the process. In the future we can expect even greater gains in medicine, art, the creative industries, efficiency, safety, and elsewhere. There are also grave concerns with AI, particularly around inequality, privacy, bias, safety, and security. The world is facing a historic shift, and AI must be developed ethically and responsibly to ensure an equitable and accessible implementation for everyone. There are vocal calls to get AI out of the academy and to reach out to affected communities to generate guidelines for development and implementation. Artists are playing a central and vital role in this work.
To deal with the acceleration of information, we have turned to algorithms to answer an increasing number of important questions and to inform a widening array of decisions. What are the implications of this shift in how we understand human existence, identity, and culture?
Knowledge is created and evolves as events are compared to our existing ideas, acquired through experience or socialization. When events line up with what we think we already know about what is true or real or desirable, our confidence in our knowledge of the world is enhanced. Confusion and discomfort can emerge when events are incompatible with the ideas we have. However, by stepping into that discomfort, we can extend the reach of our knowledge and be better prepared to process new facts as our ideas expand and are arranged to accommodate them. Artists have long played an important role in inviting us to step into this discomfort to make sense of the changes happening around us.
The facts on which we build our algorithms reflect their designers’ understanding of the world and historical patterns of inclusion and exclusion. A self-driving car trained on data that excludes Black faces will lack the ability to respond appropriately when it encounters one. The ideas that inform the design of our automated systems have far-reaching consequences. When we value scale, efficiency, and growth, our solutions will tend to favor these characteristics: more ad revenue, more clicks, more purchases. There are ethical dimensions to both the events that we interpret and the ideas that we elevate. But if our social, economic, political, and cultural systems start to view us simply as biological machines in need of regulation and control, then what will become of the very ideas that we used to make sense of the world and develop these systems in the first place?
In The Power of the Powerless, dramatist and politician Václav Havel describes the experience of living within a post-totalitarian system where everyone must “live within a lie.” He observes, “They need not accept the lie. It is enough for them to have accepted their life with it and in it. For by this very fact, individuals confirm the system, fulfill the system, make the system, are the system.” Havel describes a culture where, because of fear of reprisal by the agents and institutions of the system, true desires are not expressed. An effective response to this form of control is what he terms “living in truth”: having and performing an awareness of the dissonance between the facts as they are publicly endorsed and the ideas we privately hold about life and community.
Algorithmic culture removes this potential dissonance by selecting against facts that complicate the ideas we already hold. “Living in truth” is only possible when we possess ideas that might encounter events with which they cannot be reconciled. In digital spaces, ideas confront other ideas frequently and violently. That these ideas be ‘eventful’ or meaningful is not a requirement for the interactions to occur.
[Image: creators Raad Seraj, Tyreek Phillips, and Heran Genene working on an AI game for participants in national workshops (not shown: Bryan Depuy, Alexandra Lord, Jerrold McGrath)]
Uncomfortable facts ask us to be critical of both the evidence and our ideas about what it means. Algorithms, though, ensure that uncomfortable facts are removed before we are unsettled by them. The algorithm protects us from the possibility of encountering experiences that might reshape our theories of the world.
Without new experiences we are left alone with our ideas, which can themselves be exploited. The algorithms have learned that exposure to different ideas can fuel a moral outrage that keeps us clicking, engaging, and posting. Neither nuance nor concreteness is profitable. Facts are elided, and extreme versions of different ideas are presented to us to ensure that we remain emotionally engaged with a particular platform or community. Those responsible will argue that this is not the intent, but intention is not really the issue. The algorithm tests and tests and tests and finds approaches that work.
Our ideas about what things signify are assembled from experiences. We are not born with the skills to organize the facts of the world but develop them as we interact with others and the broader culture. What happens, then, when we assign more of our decision-making to systems that are optimized for efficiency and have no interest in providing us with a diverse and potentially unsettling set of experiences?
Convenience becomes a trap. Consistent stories, intolerant of other ways of seeing the world, start to shape what we notice and how we experience things. Different moral worlds are marginalized. Moral agency becomes harder to realize. Living life out in the open means that those who might speak out against the dominant narrative avoid doing so for fear of criticism or of becoming the next algorithmically selected source of outrage.
A deceleration in the development of ideas starves our culture of the necessary diversity to assess or categorize events as they emerge. This trend relies on a positivist account of science and personal knowledge; the idea that truth and human knowledge can be restated or discovered by algorithms. Algorithms committed to efficiency and scale will select against experiences that do not contribute to the kind of engagement the system wants to see. Confusion, disorder, discomfort are negative outcomes in the tests that the algorithm administers. We stop stumbling into things.
What happens to beauty in such a world? Algorithms have been designed to deliver agreeableness, an aesthetic of comfort and monotony. Yet an aesthetic generally demands a kind of reflective judgment, a contemplative distance. An aesthetic of the smooth asks nothing. Renaissance painters were famous for the colors they made; increasingly, we select from the same mass-produced colors and shades. The aggregated data about the behaviors of billions is used to predict our future decisions and tastes, guiding us toward profitable choices in service of efficiency. Our past actions will determine the facts of our future. We are witness to the gradual automation of aesthetic decisions.
next: Responses and a framework for talking about algorithmic culture
Through this newsletter, I’d like to share a few interesting pieces around AI, algorithmic culture, and the things that we can do in response. Today’s list is short but will grow as the weeks unfold.
Concordia scholar Jason Lewis recently presented the Marshall McLuhan lecture at transmediale. I think this excerpt from Making Kin with the Machines gives a sense of what to expect in this amazing talk: “We believe that Indigenous epistemologies are much better at respectfully accommodating the non-human. We retain a sense of community that is articulated through complex kin networks anchored in specific territories, genealogies, and protocols. Ultimately, our goal is that we, as a species, figure out how to treat these non-human kin respectfully and reciprocally.”
Technologies for Liberation, a new report by the Astraea Lesbian Foundation for Justice, explores how governments and corporations use technology to police and surveil BIPOC communities, and the powerful ways organizers are responding.
An older piece (from April 2020), but pressing given events around the planet. Human rights watchdog Amnesty International commented that human rights restrictions are spreading almost as quickly as coronavirus itself. Why is COVID a crisis for digital rights?