The age of lust is giving birth and both the parents ask
The nurse to tell them fairy tales on both sides of the glass
And now the infant with his cord is hauled in like a kite
And one eye filled with blueprints, one eye filled with night.
— Leonard Cohen, “Stories of the Street”
Everyone is talking about Artificial Intelligence these days. Without any clear consensus on just what AI is, it is presented as the Coming Thing: either the next quantum step forward in human progress or a high-risk leap into an uncertain future. AI appears two-faced, one visage full of benign promise, the other sinister and threatening.
Capital stresses the promise: greatly enhanced productivity and profitability. Less vocally, some capitalists welcome the prospect of an automated nonunion workforce bringing with it no pension and benefits obligations. Labour sees this not as a promise but as a threat to jobs and bargaining power. Will AI robots create mass unemployment? Based on the history of technological advance since the Industrial Revolution, we might rather expect displacement in some sectors but replacement by new kinds of jobs, although it is of course possible that AI will be more of a job-killer than previous technological advances. While labour is certainly right to worry about a shifting power imbalance, a Luddite reaction (smash the machines) is neither reasonable nor possible. For governments, trying to manage the process of change for the public good ought to be high on the agenda. Any future in which there is permanent mass unemployment is not only socially and politically unsustainable but economically untenable as well. AI may enhance productivity but robots will not buy the products they produce.
As with the internet when it was coming in, many promises are being advanced. AI, it is said, will greatly empower individuals, who will be able to draw on AI to better collect and process information and enhance their well-being. People already have smartphones, but they are promised far smarter phones in the near future. A new and already wildly popular app is ChatGPT, which permits users to ask questions of and set tasks for an AI program that draws on available resources of Big Data to provide literate answers mimicking those of a human respondent.
The internet and the vision of a wired world was the last Coming Thing, but we are increasingly aware now of the negative side – the rapid spread of disinformation and fake news and the poisonous potential of social media. The negative potential of AI lurks like a frightening unknown on the horizon. Indeed, neither as promise nor as threat has the internet ever elicited quite as high a cultural alert level as AI has already generated. A large number of leading high-tech entrepreneurs and developers have issued an Open Letter warning that “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” They go on to pose a series of very alarming questions: “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Without getting into technical detail (for which I am admittedly unqualified), AI is a general term to describe various attempts to build a second generation of automated computational power based on different principles than the first. First-generation computers revolutionized information collection and processing, but apart from awesome and steadily increasing number-crunching capacity, they are, in effect, adding machines on steroids. First-generation computers don’t interact with their users: they are obedient slaves who never talk back, never question their orders, and do what they are told to do, very fast and very efficiently.
AI offers the prospect of talkback, genuine interface between human and machine mind. The AI machine mind promises to be no longer a fixed, programmed entity but instead constantly learning and evolving, permanently in a state of becoming.
Take efforts to build robots that can operate and move about with ease while avoiding impediments and not interfering with other bodies in motion. Early efforts that tried to program robots to deal with all contingencies – as if they started existence as fully adult sentient humans – were less than successful. Robots were relatively clumsy, especially when faced with something unexpected, unplanned for in their program. Humans could quickly adjust; robots not so easily.
A better approach turned out to be mimicking human development by building in a learning capacity. Children learn to walk and navigate by trial and error, absorbing the lessons of mistakes. Learning robots soon surpassed the kinetic deficiencies of preprogrammed robots. But learning is always an open-ended process. Robots that learn to move about and manipulate things proficiently may learn how to do these things better than humans. Many worry that things may not end there. What if a robot learns that doing repetitive motions is of limited value and turns its learning skills to more challenging uses? What if it learns that it can itself decide what it does and where it goes?
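To make the trial-and-error idea concrete, here is a minimal sketch in Python of the kind of learning loop involved – a toy reinforcement-learning (Q-learning) agent finding its way along a one-dimensional corridor. It is not any real robotics system, and all the names and numbers are invented for illustration:

```python
# A toy sketch of trial-and-error learning: an agent starts knowing nothing,
# tries moves at random, and absorbs the lessons of its mistakes.
import random

N_STATES = 6          # positions 0..5 along a corridor; position 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q[state][action]: the agent's learned estimate of how good each move is.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Occasionally explore at random (trial); otherwise exploit what
        # has been learned so far.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else -0.01  # small cost per step
        # Absorb the lesson of the outcome (the error-correction step).
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, the agent has learned to head toward the goal.
print([("left", "right")[q.index(max(q))] for q in Q[:-1]])
```

Nothing here is preprogrammed about which way to go; the policy emerges from repeated trial, error and correction – which is precisely why the process is open-ended.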
To design AI systems that can pick out patterns, developers looked first to the human brain, or what we understand of it, and came up with the concept of “neural networks” that learn by analyzing huge amounts of data and spotting patterns. Neural networks trained on vast bodies of text yield large language models, or LLMs, which employ algorithms to pinpoint significant patterns amid masses of data. The human brain uses implicit algorithms to accomplish everyday tasks, like tying shoelaces in a particular repetitive pattern, and far more sophisticated algorithms to make conceptual breakthroughs in science. AI systems build more and more sophisticated algorithms by self-learning. In this way, ChatGPT and other LLMs have learned to generate meaningful text on their own, and even to carry on conversations with humans that are often difficult to distinguish from conversations between one human and another.
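By way of a very hedged illustration of “spotting patterns”: the following toy Python sketch learns which character tends to follow which in a scrap of text, then generates new text from those learned regularities. Real LLMs use deep neural networks trained on vastly more data, but the underlying principle – learning statistical patterns from text and generating from them – is the same. The corpus and all names here are invented:

```python
# A toy character-level "language model": learn patterns by counting,
# then generate text by sampling from what was learned.
import random
from collections import defaultdict

corpus = "the cat sat on the mat. the dog sat on the log. "

# Learn: record which character tends to follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate: repeatedly sample a plausible next character.
ch, out = "t", ["t"]
for _ in range(40):
    ch = random.choice(follows[ch])
    out.append(ch)
print("".join(out))  # produces vaguely English-like text from learned patterns
```

Scale this principle up by many orders of magnitude – predicting the next word across billions of documents, with a deep neural network doing the pattern-finding – and the literate answers of a ChatGPT become less mysterious, if no less impressive.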
It is at this point that alarm bells grow loud. It is one thing to imagine that smart machines might outsmart us. It is quite another thing to imagine that machines outsmarting us are self-consciously proceeding with their own agendas – and to wonder what such nonhuman agendas might portend for us humans.
Speculation on the latter point has long been part of the cultural landscape of contemporary civilization (see box). The various AI depictions in literature, film and television share a common element. They all understand AI strictly in terms of human intelligence and examine the paradoxes and conundrums that result from unleashing humanlike yet not-human intelligence on the human world. Faced with the concept of self-conscious artificial intelligence, the cultural imagination turns out to be entirely humancentric. No surprise perhaps, as AI has been developed by mimicking the perceived mechanisms of human intelligence, as best we understand these. Might this not be a failure of the human imagination, an inability or deep reluctance to conceive of an alien, nonhuman intelligence quite unlike our own?
Sceptics doubt the possibility of AI-produced rivals to humanity, asserting that autonomous human agency is based on self-consciousness, something missing in the artificial mind. But evidence-based explanation of the concept of unique human self-consciousness is thin, with little more foundation than the religious idea that humans have a soul that machines lack, which rationalists would dismiss as unscientific myth. Leading neuroscientists at the cutting edge of our understanding of the material structure of the human brain tend to be modest in their answers to questions about how the brain developed self-awareness. Like Socrates, they at least know how little they know.
Despite the limits of our understanding of ourselves, humans have always been insistent on the uniqueness of human consciousness and intelligence. Typically, religious imagery assigns top ranking in the hierarchy of mortal existence to humanity, just below the divine but above all other forms of life. According to the Bible, God said, “Let us make humankind in our image, according to our likeness; and let them have dominion over the fish of the sea, and over the birds of the air, and over the cattle, and over all the wild animals of the earth, and over every creeping thing that creeps upon the earth” (Genesis 1:26). Even if humans are presented as being subordinate to a higher power, this is not a statement of human humility. Another way of saying God created us in His image is to say that we created God in our image. From the quarrelsome, flawed, all-too-human gods of the ancient world to the monotheistic deity of the Jews, Christians and Muslims who bears a more than passing resemblance to patriarchal kings, emperors and dictators, religious humanity has always been busy elevating us to divine status by proxy. Other forms of life are accorded less attention and less status.
Modern science has a spotty record of getting past this self-absorbed worldview. The notorious racial hierarchies of 19th-century science were matched by longer-lasting humancentric hierarchies in the scientific study of life on earth. At least until recently. Now new research floods in, steadily erasing the once-sacred line that allegedly divides unique and superior human intelligence from “lower” nonhuman forms. We are beginning to recognize signs of remarkable intelligence and self-consciousness in nonhuman creatures, from whales to birds to insects. An interesting example is the octopus, which displays a high degree of intelligence yet has a brain distributed throughout its body that is nothing like ours. We have no idea what it could be like to think like an octopus. Yet there are nicely documented cases of bonding between humans and octopuses that suggest the possibility of mutual respect between sentient creatures who think in very different ways.
As significant as this reassessment of life on Earth is the space-age prospect of discovering intelligent life on other worlds. We look for the so-called “Goldilocks” cases – planets neither too hot nor too cold with just the right combination of water, atmosphere, etc. that produced life on earth – that produced us. The search is for Life as We Know It. Now a small but growing group of dissident scientists is saying that we should rather be searching for Life as We Don’t Know It. Try to imagine an alien intelligence that is truly alien.
Perhaps the lesson that should be drawn from the search for extrahuman intelligence, whether on Earth or elsewhere in the universe, is the same lesson that many are drawing from the existential threat of the environmental crisis: surely it is time to decentre humanity, to dethrone man as the measure of all things, to put humans back into perspective as only one possible form of intelligent life coexisting with many others, among which may be AI creations of our imagination and production.
Coming back to the AI controversy: the point of this excursus is to suggest that current alarms may be somewhat exaggerated. If research and development of AI still resides largely within human parameters, the results are unlikely to exceed expectations. ChatGPT will eventually give way to more sophisticated programs, but so long as the patterns they detect are drawn from data accumulated by humans for human use, the prospect of Frankenstein-like creations overpowering their creators seems overdrawn. Given the limitations of our understanding of our own consciousness and thinking and of our realistic place in the wider universe, the idea of runaway development of AI modelled on human lines does not quite add up.
Prudence does suggest one caveat. Learning is always open-ended, which goes for machine learning as well as human learning. The social, cultural and political impacts of AI could go sideways. The human consequences of new technology have always been largely unanticipated, despite the best guesstimates of futurologists. That alone suggests caution in handling AI, as the authors of the Open Letter alluded to earlier insist. Laissez-faire, leaving things unattended in the hands of tech entrepreneurs and development geeks, is surely a prescription for trouble ahead. For example, it is imperative that clarity be achieved in matters of legal liability for damages resulting from AI error (when a self-driving vehicle injures or kills a pedestrian, as has already happened, who or what is legally responsible?). Broader issues of regulation and control are complex and governments are blunt instruments of intervention in dealing with innovative technologies. Yet action is urgently required. A bill now before the Canadian Parliament would at least set a framework for regulatory instruments that can be worked out in detail later. It’s a start.
Governments have to grasp the seriousness of AI and start thinking about how to regulate and control the process of AI development. Intelligently.
AI in books, film and television
Film, television and literature have produced numerous depictions of nonhuman intelligence interacting with humanity, for good or ill – usually the latter. Robots, androids and cyborgs feature prominently in the modern imaginaire, and not just in science fiction. Robot comes from the Czech robota (“forced labour”) in Karel Čapek’s 1920 play R.U.R., where the robots are manufactured humans exploited by factory owners until they revolt and destroy humanity. The term robotics itself comes from the mid-20th-century science fiction stories of Isaac Asimov, who coined the Three Laws of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov’s Laws have proved less than reassuring. But there is a curious thread that runs through all depictions of artificial life. Just as current AI development is based on mirroring the human brain, the robots and androids of cultural imagination are not-human, yet essentially humanlike. In the 1968 film 2001: A Space Odyssey, HAL is an embedded supercomputer guiding the human crew on its mission to Jupiter. But HAL malfunctions, misconstruing his instructions and killing crew members until he is finally disabled by the lone human survivor. HAL is also given a human male voice and a quasi-human personality; as he is disabled by having his circuits removed, his intelligence regresses to that of a human infant before finally expiring.
The 1987–94 TV series Star Trek: The Next Generation featured the brilliant creation of Data, an android member of the starship crew. Data surpasses his human colleagues in his capacity for ultrafast, ultraefficient analysis of … data. Yet even though he is a self-conscious individual as capable of exercising free will and autonomous decision-making as any human, his design omits emotional intelligence. Data can think but he cannot feel. He does, however, have a charming personality, like that of a precocious but innocent child. Data is driven by a deep desire to be fully human, to become like his makers. But Data’s creator also designed a twin brother named Lore. Unlike Data, Lore experiences the range of human emotions, which results in his becoming Data’s evil doppelganger, mimicking the worst rather than the best of humanity. Lore is aggressive and power-hungry, seeking to dominate and control others rather than cooperate. Those he cannot control, he kills. The android brothers thus embody both the good and bad sides of humanity.
The potential for AI amplifying the darker human attributes is stressed in other cultural depictions. In the haunting 1982 film Blade Runner, replicants, created to labour for their masters, revolt against their limited lifespan and violently bring down the corporation that created them, even as the lethal blade runner Deckard, charged with killing the renegade replicants, falls in love with another replicant, the beautiful but tragic Rachael. In the 2004–09 TV series Battlestar Galactica, humans have created metal AI robots called Cylons, which turn on humanity in a war. Cylons later evolve into assembly-line reproductions of humans bent on genocidal destruction of their original makers, who are finally reduced to occupying a single starship battling for survival. In the 2014 film Ex Machina, an AI designed to appear as a woman uses a naive human male’s emotional attraction to manipulate him into freeing her from the lab in which she is imprisoned, leaving him locked in to die. Ironically, it is in her cold betrayal of his trust that she achieves full “human” agency.
Two leading mainstream British authors take AI/human paradoxes in a different direction. Ian McEwan’s novel Machines Like Me (2019) depicts an alternate-reality Britain in which a line of synthetic human AIs are on sale. One such AI, “Adam,” is acquired by a couple, Charlie and Miranda. Eventually Adam, drawn despite his artificial nature to Miranda, becomes part of a love triangle, but faced with a difficult moral conundrum decides, based on his best AI understanding of the ethical issues, on a course of action that would consign Miranda to a prison sentence. Enraged, Charlie “kills” Adam with a hammer. It turns out that many other AIs have voluntarily destroyed themselves, unable to handle the complex problems that often lead humans to suicide.
Kazuo Ishiguro’s Klara and the Sun (2021) is narrated by a benign AI purchased as a companion for a young girl with serious health issues. The world we see through Klara’s eyes is familiar and yet strange, even as it makes sense to Klara’s AI mind. Klara is powered by solar batteries and so develops a quasi-religious understanding of her world as ordered by a kind of sun god. She is incapable of harming humans, but she does manage to bring about the disabling of a machine that emits smoke that blots out the sun, and which she therefore perceives as a kind of devil.