Press "Enter" to skip to content

Can humanity be reduced to an algorithm?


Each of us is different, and in other ways we are all the same. Sameness and difference are two sides of a coin; you can’t have one without the other. In our case, sameness and difference are the products of a billion years or more of ever-increasing animal complexity. At the present time, that complexity is embodied in human consciousness: the workings of the mind, its emotions, intelligence and seemingly unlimited imagination.

So great are our intelligence and imagination that we’ve used them to create human-like artificial intelligence. But can the human mind really be reduced to an algorithm? Answering this question requires asking another: what is it to be human-like?

Being human-like is not simply about solving problems. In addition to our intelligence and imagination, we are also feeling beings, filled with emotions and memories. Greed, anger, jealousy, envy, sadness, exuberance, humor, sexiness and much more fill our lives and experiences every moment of each day. The combinations and variations of the feelings we experience may be unlimited, and they affect our lives at least as much as, and likely far more than, what our rational (or, in many cases, irrational) intelligence provides. We are infinitely more complex than any algorithm.

So, what is an algorithm? An algorithm is a set of rules or instructions that can analyze data and provide solutions or conclusions by employing the programming logic of YES/NO and IF/THEN. At their best, the responses generated by the algorithms of artificial intelligence may appear human-like, but they lack humanity. Algorithms do not have emotions and cannot feel anything at all; the appearance of emotionally human-like responses is simply algorithmic trickery. Famously, the Turing Test (named for computer scientist Alan Turing) is used to determine whether people can be fooled into thinking the response from a computer is that of a human being. Today’s artificial intelligence is increasingly passing that test.
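As a minimal sketch of what that YES/NO, IF/THEN logic looks like in practice, consider a toy rule-based responder. The example below is invented purely for illustration (written here in Python); it matches a few keywords and returns canned “sympathetic” replies without feeling anything:

# A toy rule-based responder: a handful of IF/THEN rules that mimic
# sympathy without feeling anything. The rules and replies are invented
# for illustration only.
def respond(message: str) -> str:
    text = message.lower()
    if "sad" in text or "lonely" in text:      # IF the input mentions sadness...
        return "I'm so sorry you're feeling that way."   # ...THEN return a canned reply.
    if "happy" in text or "excited" in text:
        return "That's wonderful to hear!"
    if text.endswith("?"):
        return "That's a good question."
    return "Tell me more."

print(respond("I've been feeling sad lately."))  # prints a "sympathetic" line it cannot mean

However convincing the output may look, nothing in those branches feels anything; the “sympathy” is just pattern matching.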

The algorithms of artificial intelligence are an effort to simplify human complexity through logic, but is compassion logical? Is anger, or jealousy? The reduction of complex emotions to a series of logical YES/NO, IF/THEN formulas is not only impossible but downright dangerous. Intuitively, we all know this, even the scientists writing the code. For this reason, the dangers of machine intelligence fuel books, films and scientific warnings.

This subject was the theme of writer Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, the book upon which the film “Blade Runner” was based. In the book and film, the Turing-inspired “Voight-Kampff” test is administered by a special police unit to determine whether an individual is human or a manufactured, human-like replicant. Essentially, the test measures biometrics to gauge an empathy response. Dick’s point was that feeling, not intelligence, is what more properly defines us.

Empathy, like other emotions, is currently impossible for artificial intelligence; it’s far too complicated. Yet the more human-like machine intelligence becomes, the more complicated it gets, and, as is true of people, more complicated means more troubled. HAL, the computer in Arthur C. Clarke’s “2001: A Space Odyssey”, is a good example. VIKI in the Asimov-derived film “I, Robot” is another, as is Skynet in “The Terminator” films. We’ve all seen those movies, and they don’t end well.

Human complexity makes life complicated enough. Replacing humanity with increasingly complex computer code won’t solve our problems; rather, it will create new ones, or else end it all for everyone.
