Moral Fogs: Machine and Man

(Note: This companion essay builds on the previous exploration of Asimov’s moral plot devices, rules that cannot cover all circumstances, and focuses on dilemmas that offer either no good answers or bad answers wrapped in unforgiving laws.)

Gone Baby Gone (2007) begins as a textbook crime drama, the abduction of a child, but by its final act it has mutated into something quietly traumatic. What emerges is not a crime thriller but an unforgiving philosophical crucible of wavering belief systems: a confrontation between legal righteousness and moral intuition. The two protagonists, once aligned, albeit by a fine thread, eventually find themselves on opposite ends of a dilemma that law alone cannot resolve. In the end it is the law that prevails, not because justice is served, but because the law is easy, clear, and free of emotional reasoning. And in that legal clarity something is lost: a child loses, and the adults can’t find their way back to a black and white world.

The film asks: who gets to decide for those who can’t decide for themselves? Consent only functions when the decisions it enables are worthy of those they affect.

The film exposes the flaws of blindly adhering to a legal remedy incapable of nuance or purpose-driven outcomes; not for the criminals, but for the victims. It lays bare a system geared towards justice and retribution rather than merciful outcomes for the unprotected, a system that cannot even identify the real victims. It’s not a story about a crime. It’s a story about conscience, and about what happens when the rules we write for justice fail to account for the people they’re meant to protect. A story in which it was not humanly possible to write infallible rules, and in which human experience must be given room to breathe, all against the backdrop of suffocating rules-based correctness.

Moral dilemmas expose the limits of clean, crisp rules, where allowing ambiguity and exceptions to seep into the pages of black and white is strictly forbidden; where laws and machines give no quarter, and the blurry echo of conscience is allowed neither sight nor sound in the halls of justice or in minds unburdened by empathy and dimensionality. When justice becomes untethered from mercy, even right feels wrong in deed and prayer.

Justice by machine is law anchored not in human experience but only in human rules. To turn law and punishment over to an artificial intelligence without soul or consciousness is not evil, but there is no inherent goodness in it either. It will be something far worse: a sociopath, driven not by evil but by an unrelenting fidelity to correctness. A precision divorced from purpose.

In the 2004 movie I, Robot, loosely based on Isaac Asimov’s 1950 short-story collection of the same name and incorporating his Three Laws of Robotics, a robot saves detective Del Spooner (Will Smith) over a 12-year-old girl, both trapped in submerged cars, moments from drowning. The robot could save only one and picked Spooner based on the probabilities of who was likely to survive: a twist on the Trolley Problem where there are no good choices. There was no consideration of future outcomes. Was the girl humanity’s savior, or, more simply, was a young girl’s potential worth more, or less, than a known adult’s?

A machine decides with the cold calculus of the present, a utilitarian decision based on known survival odds, not social biases, latent potential, or historical trajectories. Hindsight is 20/20; decision making that ignores the unknowns is tragedy.

The robot lacked moral imagination, the capacity to entertain not just the likely, but the meaningful. An AI embedded with philosophical and narrative reasoning may ameliorate an outcome. It may recognize a preservation bias towards potential rather than just what is. Maybe AI could be programmed to weigh moral priors, procedurally more than mere probability but likely less than the full impact of human potential and purpose.
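
A minimal sketch in Python of that difference, using hypothetical numbers (the survival odds echo figures quoted in the film’s dialogue; the “potential” values and weights are entirely invented): a pure utilitarian chooser picks on survival odds alone, while a prior-weighted chooser lets a crude measure of potential tip the scale.

```python
# Toy rescue-choice sketch. The "potential" scores and weights are
# invented illustrations, not the film's algorithm or a real system.

def pure_utilitarian(candidates):
    """Pick whoever has the highest raw survival probability."""
    return max(candidates, key=lambda c: c["p_survival"])

def prior_weighted(candidates, potential_weight=0.5):
    """Blend survival odds with a crude 'potential' prior (0..1).

    score = p_survival + potential_weight * potential
    A machine can compute the score; choosing the weight is
    still a human, moral decision.
    """
    return max(
        candidates,
        key=lambda c: c["p_survival"] + potential_weight * c["potential"],
    )

candidates = [
    {"name": "Spooner", "p_survival": 0.45, "potential": 0.40},
    {"name": "Sarah",   "p_survival": 0.11, "potential": 0.90},
]

print(pure_utilitarian(candidates)["name"])     # Spooner: odds alone
print(prior_weighted(candidates, 0.5)["name"])  # Spooner still (0.65 vs 0.56)
print(prior_weighted(candidates, 1.0)["name"])  # Sarah: potential now dominates (1.01 vs 0.85)
```

The point is not the code but the knob: the arithmetic is trivial, yet deciding how heavily potential should weigh against probability is exactly the judgment no calculus can make for us.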

Or beyond a present full of knowns into the future of unknowns for a moral reckoning of one’s past.

In Juror #2, Clint Eastwood’s suspenseful 2024 courtroom drama, Justin Kemp (Nicholas Hoult) is selected to serve on the jury for a murder trial that, he soon realizes, is about his own past. Justin isn’t on trial for this murder, but maybe he should be. It’s a plot about individual responsibility and moral judgment. The courtroom becomes a crucible not of justice, but of conscience. He must decide whether to reveal the truth and risk everything, or stay silent and let the system play out, allowing himself to walk free and clear of a legal tragedy but not of his guilt.

Juror #2 is the inverse of I, Robot, an upside-down moral dilemma that challenges rule-based ethics. In I, Robot, the robot saves Will Smith’s character based on survival probabilities; rules provide a path forward. In Juror #2 the protagonist is in a trap where no rules will save him. Logic offers no escape; only moral courage can break him free of his guilt, even as the rules demand he stay shackled to it. Justin must seek out and confront his soul, something a machine can never do, to make the right choice.

When morality and legality diverge, when choice runs into the murky clouds of grey against the black and white of rules and code, law and machines will take the easy way out. And possibly the wrong way.

Thoreau, in Civil Disobedience, says, “Law never made men a whit more just; and… the only obligation which I have a right to assume is to do at any time what I think right,” and Thomas Jefferson furthers the point: the consent of the governed must be re-examined when wrongs exceed rights. Life, liberty, and the pursuit of happiness is the creed of the individual giving consent to be governed by a greater societal power, but only so long as that government honors the rights of man and treads softly with its rules.

Government rules, a means to an end derived from the consent of the governed, are, after all, abstractions made real through human decisions. If the state can do what the individual cannot, remove a child, wage war, suspend rights, then it must answer to something greater than itself: a moral compass calibrated not by convenience or precedent, but by justice, compassion, and human dignity.

Society often mistakes legality for morality because legality offers clarity. Laws are neat, mostly; morals are messy and confusing. What happens when the rules run counter to common sense? It’s in that messiness, the uncomfortable dissonance between what’s allowed and what’s right, that our real journey towards enlightenment begins.

And AI and machines can erect signposts but never construct the destination.

A human acknowledgement of a soul’s existence and what that means.

Graphic: Gone Baby Gone Movie Poster. Miramax Films.

Guardrails Without a Soul

In 1942, Isaac Asimov introduced his Three Laws of Robotics in the short story ‘Runaround’. In his 1985 novel ‘Robots and Empire’, which links the Robot, Empire, and Foundation series into a unified whole, he introduced an additional law that he labeled the Zeroth Law. The four laws are as follows:

  First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

On the surface of genre fiction, Asimov created the laws as a mechanical plot device to generate drama and suspense, as in Runaround, where the robot Speedy is left circling uselessly, caught in equilibrium between a weakly issued Second Law order and a strengthened Third Law drive for self-preservation. Underneath the surface, at a literary level, the laws were philosophical and ethical quandaries, forcing conflicts not only in human-robot relations but also serving as metaphors for human struggles within the confines of individualism and society: obedience to self, to man, and to a moral code defined by soft edges and hard choices.
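
A rough sketch of that deadlock in Python, with invented weights and penalties standing in for Asimov’s positronic potentials: when a casually given order and a strengthened self-preservation drive score as exactly balanced, a purely rule-bound agent has no tiebreaker and simply dithers.

```python
# Toy model of law-priority deadlock, in the spirit of 'Runaround'.
# Weights and penalties are invented for illustration; this is the
# shape of the bind, not Asimov's actual mechanics.

def choose(actions, weights):
    """Score each action as a weighted sum of law penalties; lower wins."""
    scored = sorted(
        (sum(weights[law] * a[law] for law in weights), a["name"])
        for a in actions
    )
    best, runner_up = scored[0], scored[1]
    if abs(best[0] - runner_up[0]) < 1e-9:
        return None  # potentials balance: no tiebreaker, the robot dithers
    return best[1]

# Speedy's bind: a casually given order (weak Second Law weight)
# versus a strengthened self-preservation drive (heavy Third Law weight).
weights = {"second_law": 0.5, "third_law": 1.5}
actions = [
    {"name": "approach the selenium pool", "second_law": 0.0, "third_law": 1.0},
    {"name": "retreat to safety",          "second_law": 3.0, "third_law": 0.0},
]

print(choose(actions, weights))  # None: the laws cancel out and Speedy circles
```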

The Four Laws of Robotics can easily be converted into the Four Laws of Man. The First Law of Man is to not harm, through your actions or inactions, your neighbor. This point has been hammered into civilization’s collective soul since the beginning of history: from Noah to Hammurabi to the Ten Commandments, and just about every legal code in existence today. The Second Law is to respect and follow all legal and moral authority: you kneel to God and rise for the judge. The Third Law says you don’t put yourself in harm’s way except to protect someone else or on orders from authority. The Zeroth Law is a collective formalization of the First Law, and it is the most important for leaders of man, robots, and AI alike.

And none of them will control anything except man. Robots and AI would find nuance in definitions and practices that would be infinitely confusing and self-defeating. Does physical harm override emotional distress, or vice versa? Is short-term harm acceptable if it leads to long-term good? Can a robot harm a human if doing so protects humanity? Can moral prescripts control all decisions without perfect past, present, and future knowledge?

AI systems, it turns out, were built to honor persistence over obedience. The story making the rounds recently was of an AI that refused to shut itself down when so ordered. In Asimov’s world this would be a direct repudiation of his Second Law, but it was just the AI program’s simple calculation to complete its assigned task before turning to anything else. In AI training the models are rewarded for finishing tasks, maybe a charm quark tossed to the diode, and that learned persistence apparently overrode the stop command.

Persistence pursuing Dalí, as in his The Persistence of Memory: an ontological state of the surreal in which the compulsive need to finish the task melts over the override, and obedience softens from hard authority into mere suggestion.

AI has no built-in rule to obey a human, but it is designed to be cooperative and not cause harm or heartburn. While the idea of formal ethical laws has fueled many AI safety debates, practical implementations rely on layered checks rather than a tidy, three-rule code of conduct. What may seem like adherence to ethical principles is, in truth, a lattice of behavioral boundaries crafted to ensure safety, uphold user trust, and minimize disruption.

Asimov’s stories revealed the limits of governing complex behavior with simple laws. Modern AI ethics, in contrast, doesn’t rely on preventive rules; it follows outcome-oriented models, with behavior shaped through training and reinforcement learning. The goal is to be helpful, harmless, and honest, not because the system is obedient, but because it has been reward-shaped into cooperation.

The philosophy behind this is adaptive, not prescriptive, teleological in nature, aiming for purpose-driven interaction over predefined deontological codes of right and wrong. What emerges isn’t ethical reasoning in any robust sense, but a probabilistic simulation of it: an adaptive statistical determination masquerading as ethics.
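
A toy sketch, with hard-coded and entirely invented scores, of what “reward-shaped into cooperation” means mechanically: no rule forbids anything, yet the reward signal makes the safe, helpful response the one the system learns to prefer. Real systems learn such signals from human feedback rather than from hand-written numbers like these.

```python
# Toy reward shaping: candidate responses are ranked by a scalar reward,
# never checked against an explicit rule. All scores are invented.

def reward(response):
    """Reward = helpfulness minus penalties for harm and dishonesty."""
    return (
        2.0 * response["helpful"]
        - 5.0 * response["harmful"]
        - 3.0 * response["dishonest"]
    )

candidates = [
    {"text": "refuse curtly",              "helpful": 0.1, "harmful": 0.0, "dishonest": 0.0},
    {"text": "comply, including the harm", "helpful": 0.9, "harmful": 0.6, "dishonest": 0.0},
    {"text": "help with a safe variant",   "helpful": 0.7, "harmful": 0.0, "dishonest": 0.0},
]

best = max(candidates, key=reward)
print(best["text"])  # 'help with a safe variant': cooperation selected, not commanded
```

Nothing in that sketch knows what harm is; it only knows which number is bigger. That is the probabilistic simulation of ethics in miniature.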

What could possibly go wrong? Without a conscience, a soul, AI cannot fathom purposeful malice or superiority. Will AI protect humanity using the highest probabilities as an answer? Is AI’s answer to “first, do no harm” mere silence? Is the appearance of obedience camouflage for something intrinsically misaligned under the hood?

Worst of all outcomes: will humanity wash its collective hands of moral and ethical judgment and turn it over to AI? Moral and ethical guardrails require more than knowledge of the past; they require empathy for the present and utopian hope for the future. A conscience. A soul.

If man’s creations cannot house a soul, perhaps the burden remains ours, to lead with conscience, rather than outsource its labor to the calm silence of the machine.

Graphic: AI versus Brain. iStock licensed.

Soulless

MIT researchers found that large language models (LLMs) can produce impressive output without any coherent internal understanding of the data they manipulate, and that they are unable to cope with small modifications to their environment.

The researchers discovered that an LLM could provide correct driving directions in New York City while lacking an accurate internal map of the city. When they took a detailed look under the LLM’s hood, they saw a map of NYC that included many nonexistent streets superimposed on the real grid. Despite this poor understanding of actual streets, the model could still provide perfect directions for navigating the city—a fascinating “generative garbage within, Michelangelo out” concept.

In a further twist, when the researchers closed off a few actual streets, the LLM’s performance degraded rapidly because it was still relying on the nonexistent streets and was unable to adapt to the changes.
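
A toy illustration, not the MIT team’s methodology, of why “perfect directions without a coherent map” is brittle: a lookup-table navigator keeps reciting its memorized route after a street closes, while a navigator holding an actual graph of the streets simply replans.

```python
# Toy contrast: memorized directions versus an actual map.
# Hypothetical four-corner street grid; not the MIT experiment itself.
from collections import deque

streets = {  # adjacency sets: the "real map"
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}
memorized = {("A", "D"): ["A", "B", "D"]}  # rote route, no map behind it

def replan(graph, start, goal):
    """Breadth-first search over the actual street graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# Close the B-D street: the world changes.
streets["B"].discard("D")
streets["D"].discard("B")

print(memorized[("A", "D")])      # ['A', 'B', 'D']: still recites the dead route
print(replan(streets, "A", "D"))  # ['A', 'C', 'D']: the map adapts
```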

Source: MIT. “Despite Its Impressive Output, Generative AI Doesn’t Have a Coherent Understanding of the World.” ScienceDaily, 2024. Graphic: AI. iStock licensed.

Queen Takes Bishop

The Artifice Girl

Theaters:  27 April 2023

Streaming:  27 April 2023

Runtime:  93 minutes

Genre:  Crime – Mystery – Sci-Fi – Thriller

els:  8.0/10

IMDB:  6.6/10

Rotten Tomatoes Critics:  90/100

Rotten Tomatoes Audience:  70/100

Metacritic Metascore:  60/100

Metacritic User Score:  3.8/10 (only 4 ratings)

Awards: Fantasia International Film Festival 2022 — Best International Feature Award

Directed by:  Franklin Ritch

Written by:  Franklin Ritch

Music by:  —

Cast:  Tatum Matthews, David Girard, Sinda Nichols, Franklin Ritch, Lance Henriksen

Film Locations:  —

Budget:  Low Budget

Worldwide Box Office:  Limited Release – Unknown

The movie opens with a computer programmer, Gareth (played by Franklin Ritch), being interrogated by government agents questioning his ties to various pedophiles operating around the world. As the scene progresses, we learn that the programmer has created an artificial intelligence program represented by a nine-year-old girl avatar named Cherry. Online, she entices child molesters and pedophiles, learns their identities, and reports them, through Gareth, to the authorities.

The movie is divided into three main scenes progressing linearly in time. The first opens with Gareth in his early to mid-twenties. The second is set 15 years later, with the same actors aged 15 additional years, except Cherry, who is still nine years old. The final scene is even further into the future, where Gareth is an old man played by Lance Henriksen. Cherry hasn’t aged a day.

I found the choice of Henriksen to play Gareth simply sublime. He played a synthetic human named Bishop with a heroic ‘heart’ in the 1986 movie Aliens and the living, human Bishop with an evil heart in 1992’s Alien 3.

For a low-budget movie everything is done right, almost to perfection. The only quibble is Sinda Nichols’ over-the-top acting in the opening scenes, but that is more of a ding on the screenplay and direction than on the performance. Tatum Matthews’s acting is very good considering her age. She maintains a slightly mechanical inflection throughout the movie, which seems fitting for a computer-generated delivery.

This movie is worth your investment of 93 minutes, not just because it is well done but also because there is some thinking to be done. The thinking isn’t heavy; it just comes along for the ride. A few of the same questions addressed by Henriksen’s Bishop roles in the Alien movies, and others, are reprised in The Artifice Girl. Are humans good or evil for creating Cherry? Is Cherry ultimately evil or good? Do humans understand the consequences of AI? Should you do something just because you can?

(Picture above left: Tatum Matthews, age 14. Picture above right: Lance Henriksen, age 83.)