In this year’s BBC Reith Lectures, “Living With Artificial Intelligence,” which concluded on Wednesday, Professor Stuart Russell weighed the benefits of AI but warned of its dangers.
A world-leading expert on AI based at the University of California, Berkeley, he quoted the 17th-century philosopher and statesman, Francis Bacon: “The mechanical arts may be turned either way, and serve as well for the cure as for the hurt.”
For Russell, the hurt of AI includes racial and gender bias, disinformation, deep fakes, cybercrime, and deadly weapons.
One problem is that AI pursues its pre-programmed objective with maximal efficiency, without considering the consequences or allowing inconvenient human emotions, such as compassion, to get in the way. A greater problem is that once AI outsmarts humans, it may become unstoppable: a frightening thought if we are dealing with killer robots.
But do not despair. Russell thinks we can head off the danger if we start working on solutions now. The primary goal should be for humans to retain control.
In making his case, Russell explains that, while current AI focuses on achieving specific objectives, the holy grail of AI research is Artificial General Intelligence (AGI): the creation of an intelligent agent that, like a human, can understand and learn to perform any task.
But what are our collective objectives, and what do we want AI to achieve? These are difficult questions. For example, AI programmed to end suffering or bring about world peace may eradicate humanity. Yet, most of us would reject such an outcome.
This is because objectives often conflict. They are also shaped by our values. But values are even harder than objectives to articulate, let alone programme. Values also change over time, are held with varying degrees of conviction, and differ between individuals and societies. Which values should AI be programmed to reflect?
It seems that before we finally tackle AGI, we have some serious soul-searching to do. If AI is to reflect the collective wisdom of humankind, we would also do well to engage all traditions in this effort. And so, I have written a book looking at Judaism in the age of artificial intelligence.
In it, I approach the subject of AI by first considering the idea of the “singularity”, the moment when computers overtake humans, made famous by the futurist Ray Kurzweil in his 2005 book, The Singularity Is Near. Kurzweil describes how, post-singularity, there will be “no distinction… between human and machine or between physical and virtual.”
Instead, intelligence will saturate the physical universe and reorganise “matter and energy to provide an optimal level of computation.” It will then “spread out from its origin on Earth” to turn the universe into a vast super-intelligence. This is a strange fantasy.
The 2014 science fiction film, Transcendence, features similar ideas. In it, a leading AI researcher, Dr Will Caster (played by Johnny Depp), is uploaded to a computer, after being shot by an anti-AI protester, and attempts to take over the world.
A world remade in the image of Johnny Depp is a terrifying thought, but the desire for eternal oneness is perfectly understandable, and animates much of politics and religion, including Judaism, for which, after all, “God is one.” But is such a desire positive?
In my book, Staying Human: A Jewish Theology for the Age of Artificial Intelligence, I show how Jewish tradition espouses the idea of God as everything, the singularity, the network of networks. This is particularly so among Jewish mystics, who saw no place as being free of God and found allusions to such ideas hidden within the biblical text.
But Jewish tradition is also keen to stress that God is wholly distinct from human beings, despite His unity. We exist in relationship to God and, by extension, to other people. The desire for absolute unity is to be curbed. Think of the Tower of Babel.
Indeed, the danger of the extreme version of the singularity is the levelling of all existence, and the disappearance of the creative and moral space in which individuated beings encounter one another. It seems that a balance is required between a focus on the All and on the particular. These orientations are relevant, too, to AI’s ambitions, which should work to connect people while respecting human difference.
Judaism also has something to contribute to the question of human values. In the 1950s, AI researchers considered that “humans are intelligent to the extent that our actions can be expected to achieve our objectives”. But our intelligence goes beyond this definition.
There is intelligence which results from failing to reach our objectives, in dealing with the unexpected, in the ability to feel the pain of another. We also evince numerous qualities which go beyond intelligence, such as love, care, awe and trust. Religion speaks to these qualities, and ritual (particularly halachah) mediates them through an encounter with the real world.
Jacques Ellul (1912-1994), the French theologian, saw technology as aiming at “absolute efficiency” in human affairs. Halachah is not about efficiency but about meaning and relationship in the world of objects. It also brings into play the past, present, and future, which is to be worked towards steadily, not with an eye to conquering the universe but with reverence, humility, and gratitude, and in the knowledge of our own temporality.
Professor Russell’s call for humans to remain in control makes good practical sense but, as he indicates, the discussions around these issues go much wider and deeper.
Staying Human: A Jewish Theology for the Age of Artificial Intelligence is published by Cascade Books at £25.
Dr Bor will be speaking about the book at the Limmud Festival — which runs from Friday till Tuesday — on Monday, December 27. Tickets are available from limmudfestival.org, with a special JC discount code 5M9YKV