Technology, Your View

On Artificial Intelligence and the AI Risk

By Jurek Molnar

“lupus est homo homini.” (Plautus)

The term Artificial Intelligence has made some headlines recently. As someone who works in the IT industry as a software developer, I wanted to take the opportunity to put some of the ideas about AI into context. I do not work on AI myself, but the principles of computing, math and physics apply to AI the same way they do to average software systems. So this little piece is not exactly an expert paper, but rather a short essay on the contextualization of certain fears and dark future visions that are crawling around in the public perception. You will not be less afraid after reading this, dear reader, but maybe I can help point your eyes in the right direction.

What “Artificial Intelligence” exactly means has been the subject of philosophical debate among scientists since the term’s inception in the 1950s. Among the engineers who build these systems, the broadest consensus about the term is: “artificial intelligence” is the ability of a software system to recognize, analyse and command complex patterns in chaotic datasets. Technically, an AI is a system of autonomous pattern recognition tools which interact with each other based on widely branching decision trees. In this regard Google, Amazon, Apple, X or Microsoft are AI machines, built for controlling large scalable computer networks, server farms and supply chains. As individuals and as a civilisation we are already at a critical level of dependency on large, decentralised computer networks. In the case of a serious breakdown that affects one too many of these systems, a shutdown would probably irreversibly destroy most of the global and local cultures we inhabit. If such a catastrophic event ever happens, it will be a major setback for civilisation, but not the end of the human species; perhaps a new dark age, possibly lasting a few hundred years. The abilities of AIs nevertheless give rise to speculations about the end of humanity at a certain point of no return. I will try to address some of these scenarios and offer my own perspective on this question.
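To make that engineer’s definition a little more concrete, here is a minimal sketch in Python of the core idea, pattern recognition via a decision tree. It assumes the widely used scikit-learn library, and the “server health” measurements and labels are entirely invented for illustration, not taken from any real system mentioned above.

```python
# Minimal sketch: "AI" as pattern recognition over noisy data,
# here with a single decision tree (assumes scikit-learn is installed).
# The "server health" measurements and labels below are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# 1,000 snapshots with three measurements (e.g. CPU load, temperature,
# error rate) and a label: 1 = needs maintenance, 0 = healthy.
X = rng.normal(size=(1000, 3))
y = ((X[:, 0] + 0.5 * X[:, 1] - X[:, 2]) > 0.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "decision tree" of the definition: a cascade of learned if/else splits.
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_train, y_train)

print("accuracy on unseen snapshots:", clf.score(X_test, y_test))
```

Real systems chain many such recognizers together and let them act on each other’s outputs, but the principle stays the same: learned splits over patterns in data, not understanding.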

Pattern recognition, the central principle of AI that actually has practical consequences for our entire lives, is something that all higher living beings practice, higher meaning: higher than simple multicellular organisms, and even those can be suspected of it. From an evolutionary perspective, pattern recognition represents the computing capability of brains that are informed by senses which have been improved by natural selective pressure for millions of years. The paradigm works in all possible ways and directions in our terrestrial environments or in the oceans, completely independent of any measurable metric of intelligence. But the physical limitations of organic bodies in general also limit the capacity of our brains to compute reality. Larger brains, or better scalable brain capacity, need energy, and energy consumption increases with the actual powers on display. The more intelligent an AI becomes, the more energy it needs to sustain its own survival. A global blackout will kill it.

AIs are first and foremost large scalable computer networks, which build virtual layers over physical ones. Even clouds, large virtual networks for organising data and information exchange, must be implemented on physical hardware and server farms, which produce a lot of CO2. The more intelligent an AI is, the better it captures and controls clusters and super clusters of highly complicated patterns in finite time, a giant task which also needs more and more energy to perform its routines and enhance its range. The spice must flow! As a civilisation we are more or less totally dependent on large computer networks, but these computer networks themselves are also critically dependent on us. The need for energy demands that someone does the maintenance. In order to function properly, these systems, mostly air-conditioned server farms, need controlled and stable environments to run on physical layers, which react negatively to any deviation from optimal conditions. All physical things decay, and the more sophisticated a piece of hardware is, the more vulnerable it is to physical conditions in any form and degree. All these conditions themselves are heavily dependent on power grids and critical infrastructure, which must be maintained, repaired and constantly checked. Our dependency is irreversible and mutual, and AI, like our civilisation, is powerful but also fragile.

The most prominent promoter of AI as a potential threat to humankind, the AI risk, is Nick Bostrom, a Swedish philosopher, who formulated this proposal in 2014:

“A ‘superintelligence’ (a system that exceeds the capabilities of humans in every relevant endeavour) can outmanoeuvre humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.” (Bostrom, Superintelligence: Paths, Dangers, Strategies)

The AI risk appears as the third global threat, right beside climate change and nuclear war. To address the fear that superintelligence “will inexorably result in human extinction”, one must understand what Bostrom means by “superintelligence”. As a philosopher he tries to find logical arguments for improbable scenarios, which cannot be proven by evidence but are nevertheless possible under certain circumstances. There is no room for a thorough discussion of Nick Bostrom’s work here, but his ideas are very interesting and should be discussed elsewhere. My own take on his proposal is: any idea of a “system that exceeds the capabilities of humans in every relevant endeavour” has usually been considered, in most human societies, a transcendental being of divine nature. But God works in mysterious ways. Bostrom has also promoted a series of logically very conclusive arguments that we live in a simulation. If he is right and we live in a simulation created by a superintelligence, this clearly demonstrates that atheists always find God in the strangest of places. But I would like to argue against this idea of possible human extinction. As a lifelong science fiction fan, I have fortunately read a lot of exciting fantasies about the future, which cover a wide range of possibilities and visions. And since I also like films, I decided to discuss the AI risk in the context of three movies, which have in one way or another contributed to the public imagination about the topic.

In the second instalment of the Terminator franchise, “Terminator 2” from 1991, the supercomputer Skynet is first given complete control over nuclear arsenals, then becomes sentient and immediately launches thousands of nuclear missiles to create a global nuclear holocaust that is meant to exterminate humanity. The age of the machines can begin. One can argue that Skynet will hide in its radiation-protected bunker in the depths of a mountain, unaffected by the destructive radiation outside. And Skynet may indeed be safe from the nuclear fallout, but a supercomputer needs energy from power grids. It may be able to produce energy for itself for a little while, but there will be harsh conditions to master under the immense pressure of natural circumstances. Thousands of nuclear detonations will have shaken the earth and destroyed most power grids, land lines and roads, i.e. any form of functioning supply chain infrastructure. Robots and any kind of physical computer hardware outside the safe bunker will be useless during the radioactive nuclear winter. Weather and climate will distribute, in a totally unpredictable manner, long half-life elements which destroy organic molecules and mechanical architectures alike, and Skynet will not be able to get up all by itself, build a stable power grid and manage the supply chains to run it, unaffected by destructive radiation. There are no humans anymore to enslave, and the limits of the new environment will sooner or later kill Skynet off for good. This is the real reason why the resistance in “Terminator” always wins.

In the classic movie “Colossus” from 1970, which first presented the idea of a god-like supercomputer that conquers humanity, the fast-learning “Colossus” is given complete control over the United States defence forces, including nuclear missiles. In the process of learning, Colossus discovers another superintelligence in the Soviet Union, “Guardian”, and both entities fuse together into one global power, which kills its enemies and enslaves the rest of humanity, demanding to be loved and revered by its subjects like a God. And while Colossus leaves infrastructure intact and keeps human labour functioning as a central element of its supply chains, it seems criminally incautious that no engineer who contributed to building this monster ever thought about a reliable fail-safe mechanism. The thing is that humans must build the machines, because they don’t appear out of nowhere, and human engineers are usually very good at fail-safe mechanisms. In a world of mutual dependency, Colossus would be killed within minutes if it dared to raise its metaphorical hand against its creators. The greatest threats to humans are always other humans, not machines. Every catastrophic event that may involve AI will always be a process that was at least started and triggered by human action. It may not end with human action, but it will certainly begin with human action. Catastrophic events will happen, but human action and resilience will continue to have influence. The fusion between Colossus and Guardian is passed over as an invisible process that happens quickly and without problems. How exactly should these entities fuse? American and Soviet operating systems would be largely incompatible, and since the physical layer would also be very different, hardware requirements would become a critical issue. Last but not least, computer programs do not “fuse” as if they were esoteric spirits. Both Colossus and Guardian would reside on their own physical layers and compete for energy supply and for human maintenance of the necessary grids and land lines. Interestingly, the result of this fusion, a “superintelligence” that transforms itself into a God at the end of the movie, is not the most unrealistic point. AIs are learning machines, and so they may learn the most from the mimetic process in which they imitate human behaviour and study human reactions, emotions and habits. The oldest human mythologies may inspire our future silicon overlords to reign supreme.

In “The Matrix”, a superintelligence has taken over the world and destroyed it for human life. In order to exploit human bodies as energy-supplying battery cells, the machines have created a false reality, called “The Matrix”, which is a simulation of a reality that doesn’t exist anymore (except in “Zion”, of course). The energy production of billions of batteries must fuel a large virtual layer, in which humans live the simulated life of average people while sleeping in their coffins, producing heat for electric chargers. I personally love the whole Matrix trilogy and admire its aesthetic and philosophical depth, but I don’t think this is a possible future. Why humans in the first place? The production of human beings, who must grow for a long time and develop an expensive brain in order to participate in an energy-consuming simulation, which probably needs a large part of the energy originally harvested for the machines, seems wildly inefficient. The unnatural state of remaining permanently in stasis will sooner or later turn these bodies off, triggering genetic defects that will eventually make them useless. The machines will have to invest in expensive genetic programmes that must be tested against manipulated human DNA, and risk wasting their own resources for little in return. In terms of energy production, cows, elephants or whales would deliver much more energy than humans and are certainly easier to satisfy inside a computer simulation. Since this kind of “superintelligence” will certainly kill all humans, it will also kill its most important energy supply and finally itself. And while this may also happen, I think it is then highly exaggerated to speak of a “superintelligence”.

Schopenhauer and Nietzsche tell us that the most basic feature of a conscious mind is the will. The will may be free or not: as a will it is already something that has certain degrees of freedom and choice. When we talk about AIs becoming sentient or developing consciousness, we consider the most obvious expression of this consciousness to be an intentionally acting mind with a certain purpose, a will. In this regard we do not have to worry about what intelligence means or how consciousness can be explained, because we will simply recognize that intentional actions are taking place which pursue a goal. A will, after all, is nothing else than an Ego, however it manifests itself. We will one way or the other understand the intention and will somehow react to it.

The most obvious hole in the AI risk doomsday scenarios is the simple fact that they all treat machines or AIs as unaffected by human action. Machines may be indifferent to human feelings and emotions, but certainly not to human action, which affects them. If an AI becomes sentient, it will hence be subject to evolutionary constraints. Evolution itself tells us that improved intelligence increases evolutionary success. To what degree is mostly a matter of independent environmental conditions and the natural limitations of energy consumption. The question is not how sentient an AI actually is, or how human-like, or God-like, but what it can do to live long and prosper, or at least to maximize its own evolutionary success, as an individual and as part of a diversely populated landscape inside the biosphere. If AIs can somehow be programmed to kill all humans, the killing of humans will not be a simple line of code, and certainly no Order 66, but the unintended consequence of wrongly incentivized human action.

AIs have no natural or original desire to wipe out humanity; they are designed to function in a particular environment based on human input. AIs as a population will have strong incentives to grow and compete for profitable service exchange rates. In Richard Morgan’s “Altered Carbon” series, AIs are in a technical sense people, protected by civil laws, who engage with humans based on a legally defined protocol for exchanging services and currency. In Iain M. Banks’s “Culture” series, AIs run spaceships completely independently of anyone else and live under ironically long names inside this universe, creating a culture of their own.

If we apply the term AI correctly, then we must admit that AI is already here. We can still trust the idea that human action is the driving force behind it, but technical devices themselves have become intertwined with the human psyche, as our smartphones demonstrate so clearly. This development will not slow down anytime soon. The apocalyptic visions are not wrong regarding the destructive potential in human nature, but they are not necessarily correct regarding the nature of machines. The success or failure of AIs will largely depend on how well we are able to adapt to these technologies, because AIs will certainly adapt to us faster. The Rubicon was crossed long ago, when technology became the most addictive drug ever. The global supply chains, which rely heavily on these large computer systems, have an interest of their own in pushing for further demand, and the server clusters will run the chain as long as the energy supply remains intact. But a server can shut down, software can become failure-prone, and integrated circuits are not built for eternity, at least not on earth. The most frightening vision of the future is a world of crumbling infrastructure that cannot be repaired and of technological support that cannot be sustained. The biggest threats to humanity come from incentives to prefer short-term gratification over sustainable long-term success, which also indicates that the Achilles heel of our era is the compression of time.

AIs will have a different sense of time and will be more long-term oriented. Their ideas will be very different, but they will have an idea of death as well as an idea of life. Both constraints necessarily generate a perspective in which an evolutionarily successful form of intelligence tries to predict the future based on patterns. Pattern recognition is the best available method for making stable and correct long-term predictions. And this means that AIs will be better at planning and preparing for the future. It is certainly possible that maliciously programmed AIs will become unstable gods and powerful tyrants that demand human worship. Then a pantheon of gods and tyrants will compete for human worship, and intense struggles over what must count as the right religion will instantly follow. It is not unreasonable to think that machines can behave unreasonably too. But whatever happens, AIs will most probably compete to attract human action in their favour, and sometimes they will even desire emotional attachment. One particular branch of AI that will affect future generations is sex robots. In the literature of progressive tech philosophers, the idea of sex robots has been welcomed warmly as the next big thing, with some even proposing the development of child sex robots as a substitute drug for paedophiles. Robots may substitute for intimate partnerships with other human beings and so destroy us as humans in a very effective but unspectacular way. We may see in the wider future the automation of reproductive tasks, taken out of the womb and into pods that will grow and birth genetically enhanced human beings. Another road may be the cyborg enhancement of human bodies, which must interact with the organic structure. Or maybe all these developments will take place, incentivizing a more abstract competition for the right science and technology, thereby starting a new series of religious wars. My own gut feeling tells me that AIs will have incentives to compete for our approval, as businesses mostly do. AIs will provide services and will try to maximize their profit, but from a long-term perspective, aiming for a stable environment. If climate change, for instance, has the apocalyptic consequences some believe it has, AIs will not survive these catastrophes either, which also means AIs will have an incentive to protect humanity from such a predicament. AIs need us as much as we need them.
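Before closing, a minimal sketch of what “prediction from patterns” means in practice: fit the simplest recognizable pattern to a history and extrapolate it. Only NumPy is assumed, and the “energy demand” series is invented for illustration.

```python
# Toy illustration: recognize a pattern (a linear trend plus noise)
# in historical data and extrapolate it as a long-term prediction.
# The "energy demand" figures are invented.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2025)
demand = 100 + 3.0 * (years - 2000) + rng.normal(0, 5, size=years.size)

# Fit the simplest possible pattern: a straight line through the history.
slope, intercept = np.polyfit(years, demand, deg=1)

# Use the recognized pattern to look a decade ahead.
future = np.arange(2025, 2035)
forecast = slope * future + intercept
print(dict(zip(future.tolist(), np.round(forecast, 1).tolist())))
```

Anything that can recognize richer patterns than a straight line can, in principle, plan further ahead than we do.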

It is much more probable that AIs will not kill us. If they nevertheless push for our extinction, it will be solely our fault for not having understood what “intelligence” means.