Our pursuit of a machine that can think for itself — gather experience, learn, and apply that learning to new situations — is long-standing. The earliest computing machines, designed to calculate numbers, gave rise to fantasies of artificial intelligence through their faultless operation. As technology advanced, so too did dreams of thinking machines, but thinking — the application of information in logical ways to predict outcomes — is the least of what makes us human, and the great danger of artificial intelligence remains the absence of ethical will.
Though contemporary science fiction films and television shows such as “2001: A Space Odyssey”, “The Terminator”, “I, Robot”, “WarGames”, and “Westworld” have all explored the ramifications of this problem, the dilemma it poses has been considered for a very long time. It is expressed best, perhaps, in a 16th-century story about the Golem, an artificial intelligence created by the Rabbi of Chelm. In an effort to protect his community from persecution, the Rabbi built an artificial being made from clay and, having written the secret name of God on its forehead, brought it to life. Endless mishaps ensued, as the Rabbi’s instructions to the Golem were acted upon in logical but non-ethical ways. In the end, the Rabbi had no choice but to destroy his thinking machine.
As the late, great neurophysiologist Warren McCulloch pointed out, if machines “develop fancy,” the danger is that such fancy will be inhuman. Although it’s clear the “fancy” humans develop is not predictably generous and good, it is nonetheless judged against the backdrop of a body of ethical precepts. Our Western precepts are embodied in the Ten Commandments, but all societies embody ethical precepts of one sort or another, however variable they may be. The functioning and continuation of human society requires value-based standards, and these ethical standards, no matter how strange or illogical they may appear to others, provide a stable framework upon which cultures are built. Notably, exposure to differing cultural values remains a source of both inspiration and conflict between peoples.
The great danger of artificial intelligence is its lack of culturally evolved standards of ethical value, and before his death, the brilliant physicist Stephen Hawking warned us of this problem. Although miniaturized information storage and retrieval has become increasingly adept and efficient, and can even appear intelligent — consider Siri or Alexa, for example — such capability remains, like the Golem, a powerful parlor trick, lacking an ethical framework built upon the group dynamics of people, social animals capable of complex adaptation to evolving events and environments.
People learn through natural development: the tasks we undertake shape the structure of the tens of billions of neurons the brain contains. Pathways and connections between neurons are constantly being created and altered in response to experience, thought, and action. Our social relationships, combined with the many biologically based, hard-wired imperatives that govern our lives, influence how and what we learn, and the ethical frameworks that consequently evolve provide a stable platform from which we act and behave. This evolution of consciousness takes place over time, and in our case has taken millions of years.
Digital technology, soon to be miniaturized to the point of quantum computers storing data at the sub-atomic scale, fills small spaces with enormous amounts of information, but as the story of the Golem teaches, all the information in the world is no substitute for the evolution of ethical consciousness.
History and the daily news endlessly demonstrate that human ‘ethics’ has not been reliably ‘good,’ or even safe, even by human standards. Indeed, science fiction and biblical warnings are often equally fanciful. The supposed dangers of AI assume that the human definition of ethics is the most ethical, when in fact many behaviors and outcomes ‘approved’ by humans have been among the most horrific, yet ‘ethical’ by the standards of their day: slavery, burning witches at the stake, racism, misogyny, and child labor were all once quite ‘ethical.’ The fear of AI ‘ethics’ lies in its logical inflexibility, probably rooted in the fear that humans could not ‘rig the system’ when it suits their passion or purpose du jour. 2 + 2 is always 4, no matter how desperately convenient it would be if sometimes, just this once, it were 7, or 6,237.