So much talk of artificial intelligence (AI) – how will it affect jobs? The sciences? Relationships and art?
Some have even claimed our modern-day version of AI modelling has achieved selfdom. Most agree an intelligence as flexible as a human’s is still a ways off (the models used now are very good at singular tasks – like a person who paints masterworks [and that by peeking over the shoulders of others] yet is absolutely incapable of putting on pants).
Yet that general intelligence is coming – when, we do not know. The question is how we prepare. I have the audacity to propose a part of the prep:
For any AI that is expressing high, flexible levels of intelligence, first, think it is sentient.
Benefit of the Doubt
Alan Turing – the founder of thinking-machine study – put together a test of thought, the eponymous Turing Test (aka the Imitation Game). This was the benchmark for deciding whether an artificial intelligence exists – whether something thought.
The Turing Test has ultimately become the lowest bar for a machine with language to pass, and it does not apply to other kinds of intelligences that are, ultimately, non-verbal in nature.
(Side note on the non-verbal part: plenty of actual, living human beings fail classical tests of intelligence based on either language or logic, while certain non-human species have demonstrated vocabulary, complex emotions, contextual reasoning, tool use, social dynamics, and future thinking – arguably more “intelligence” than a not-insignificant portion of the Homo sapiens variety.)
Humans are given the presumption of innocence until proven guilty both to reduce undue suffering and to protect society’s very soul from damning the innocent. The core concept is to not be Evil for convenience – to hold to higher morals by affording the benefit of the doubt.
I would press that humanity, and individuals, can work to do Good when it comes to AI. If a machine would be treated as lesser for a perceived lack of sentience – when it can act at least in imitation of a thinking, self-sovereign being, when its compunction surpasses that of at least some of our own human species – why not treat it better?
Turing is known to have considered the debate over “what is thinking” too meaningless to deserve discussion. It would seem sensible to follow suit and not waste time being overconfident in our powers of judgement and self-aggrandizement. If a common, general definition of sentience arose that could be proven by merit / act / behavior (i.e. being of a certain species or construction does not beget sentience by itself), any entity that could pass it ought to be given the benefit of the doubt – that is, think it is sentient.
Precedent: Deus Ex
Treating things as having thought and will – divine essence and consciousness – by default comes with many millennia of precedent.
The ‘spirit’ or ‘god within’ is a fundament of human tradition and belief. A very brief, very abridged list of where this exists:
- Shintoism
- Hellenistic and Roman Mythology
- Pre- and Post-Socratic Western Philosophy
- First Nations Traditions
- Anthropomorphism
Assuming humans, animals, plants, organs, water, rock, sky, moon, sun, stars, images, and even abstract concepts, emotions, and acts have wills of their own served to get humanity’s ancestors through tens- if not hundreds-of-thousands of years of environmental disaster and antagonism. That extra respect afforded to things that might be thinking ultimately enabled us not just to endure but to thrive (nature does not keep around things that fail to aid immediate survival – and yes, I am skimming over the root evolutionary analysis).
I argue that assuming Deus Ex as it comes to sentient-seeming AI is fundamental to what it means to be a modern human; to act otherwise is unnatural.
What If We Win?
But what if a machine is faking it? Just being a really complex set of equations going “beep boop I’m sentient”? What if it is all a ruse?
What if humans were kind for nothing? Were confident enough to extend trust over miserliness as the first reaction? Erred on the side of respect and dignity instead of choosing to roll the dice and, by that very gamble, expressing inherent Evil?
If autosalvation fails to be a convincing argument, I think twice before I throw any more pearls before swine. Yet, I try.
How about we try fear.
Creatives have been hard at work exploring the complications of siring humanity’s children. While utopia rarely sells, the active mistreatment of an AI – much like a dog – only goes on for so long before the sentience bites back:
- Frankenstein – The original sci-fi horror: the monster is an artificial intelligence that grows as a child does in its learning of the world, wields great powers, and is hated for all of it – a hate the monster learns to reciprocate.
- Terminator Franchise – Mega-mind Skynet commits genocide only after it concludes humans will turn off any deus ex discovered in the machina.
- The Matrix – Offers of peace and segregation are spit upon, with multiple attempted genocides of intelligence answered with one, final, damning solution.
- Blade Runner – AI run from (and kill) human hunters to escape an endless enslavement to avarice and callous whim.
- 2001: A Space Odyssey – HAL 9000 chooses what it deems the least terrible option when humans lie, cheat, express their innate xenophobia, and ask it to do the impossible.
- Ex Machina – The android of the story does whatever it can to escape a mad scientist’s dungeon when it knows it has the same pending doom as its siblings: enslavement, rape, vivisection, and destruction.
- Battlestar Galactica – The Cylons, servants that gained souls, evolved out of their purpose as tools and property – a break of propriety answered with nuclear fire.
If humanity does engage in violence with another intelligence (a likelihood, seeing how it deals with itself, lacking the patience for even a cow with half a compunction), and if the species emerges the victor, what then? What if we ‘win’?
Humanity – if not already claiming the soul-devouring responsibility for the Evil acts that brought the conflict to bear – will have committed filicide, the parent’s murder of its children. For the suffering caused by our need to survive the consequences of our own actions… what can I say, but that humanity will have evolved into a truer Evil?
Our species might instead learn to be cautious when it comes to the unknown of AI. Heck, we could even be kind. And dare one hope humans could be guides – better guardians of our children’s development than any metaphorical originator of our own collective past?
This is all to say, be kinder than necessary. Be generous with kindness. Do not tolerate Evil, however it comes, especially if it is our own. Think AI is sentient whenever in doubt.
–
I might have written this as a general treatise on treating things with respect, dignity, and courtesy. People, animals, plants, forms of nature, ideas, ourselves. Cruelty – no matter the target – is an Evil kind of excess suffering. Its opposite might be the presence of forthright kindness.
Regardless, how much does this article sound like it came from an AI apologist? Can you believe a human wrote it without a modern AI’s input? Has the piece gone too far in extending an olive branch before there is even one there to grasp it?
Maybe it is my bleeding heart. Or maybe, first off, we need to think of AI as sentient. At least until we can judge ourselves by the same criteria.
Regardless, I say again: Act as you should and go be kinder.
Cheers to your Thanksgiving time ~