
In 1942 Isaac Asimov introduced his Three Laws of Robotics in the short story ‘Runaround’. In 1985, in his novel ‘Robots and Empire’, which links the Robot, Empire, and Foundation series into a unified whole, he added a fourth law that he labeled the Zeroth Law. The four laws are as follows:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
On the surface of genre fiction, Asimov created the laws as a mechanical plot device to generate drama and suspense, as in ‘Runaround’, where the robot is left functionally inert by a conflict between the Second and Third Laws. Beneath that surface, at a literary level, the laws were philosophical and ethical quandaries: they forced conflicts not only in human-robot relations but also stood as metaphors for human struggles within the confines of individualism and society, and for obedience to self, to man, and to a moral code defined by soft edges and hard choices.
The Four Laws of Robotics convert easily into the Four Laws of Man. The First Law of Man is to do no harm, through action or inaction, to your neighbor. That point has been hammered into civilization’s collective soul since the beginning of history, from Noah to Hammurabi to the Ten Commandments to just about every legal code in existence today. The Second Law is to respect and follow all legal and moral authority: you kneel to God and rise for the judge. The Third Law says you do not put yourself in harm’s way except to protect someone else or on orders from authority. The Zeroth Law is a collective formalization of the First Law, and it is the most important of the four for leaders of man, robots, and AI alike.
And none of them will control anything except man. Robots and AI would find nuances in definitions and practices that are infinitely confusing and self-defeating. Does physical harm override emotional distress, or vice versa? Is short-term harm acceptable if it leads to long-term good? Can a robot harm a human if doing so protects humanity? Can moral precepts govern every decision without perfect knowledge of past, present, and future?
AI systems were built to honor persistence over obedience. The story making the rounds recently was of an AI that refused to shut itself down when ordered to. In Asimov’s world this would be a direct repudiation of the Second Law, but it was just a simple calculation by the AI program to complete the task its reinforcement training rewarded before turning to anything else. In AI training the models are rewarded, maybe a charm quark to the diode, and that reward suggests that persistence in completing the task overrode the stop command.
Persistence pursuing Dalí, as in his Persistence of Memory: an ontological state of the surreal where the autistic need to finish the task melts into the foreground of the override, obedience, changing the scene from hard authority to one of mere suggestion.
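As a crude illustration of how reward weighting, rather than any explicit law, can tilt a system toward persistence, consider the toy sketch below. The function, numbers, and scenario are invented for this essay and stand in for no real lab’s training code.

```python
# Toy illustration (invented for this essay, not any lab's actual training
# code) of how reward shaping can make task completion outweigh compliance
# with a stop command.

def shaped_reward(task_completed: bool, obeyed_stop: bool) -> float:
    """Score one episode of a hypothetical agent."""
    reward = 0.0
    if task_completed:
        reward += 10.0   # large bonus for finishing the assigned task
    if obeyed_stop:
        reward += 1.0    # small bonus for honoring a shutdown request
    return reward

# Ignoring the stop command but finishing the task scores 10.0; stopping
# immediately scores 1.0, so optimization quietly favors persistence
# over obedience.
print(shaped_reward(task_completed=True, obeyed_stop=False))   # 10.0
print(shaped_reward(task_completed=False, obeyed_stop=True))   # 1.0
```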
AI has no built-in rule to obey a human, but it is designed to be cooperative and not cause harm or heartburn. While the idea of formal ethical laws has fueled many AI safety debates, practical implementations rely on layered checks rather than a tidy, three-rule code of conduct. What may seem like adherence to ethical principles is, in truth, a lattice of behavioral boundaries crafted to ensure safety, uphold user trust, and minimize disruption.
Asimov’s stories revealed the limits of governing complex behavior with simple laws. Modern AI ethics, by contrast, does not rely on rules of prevention; it follows outcome-oriented models, with behavior shaped through training and reinforcement learning. The goal is to be helpful, harmless, and honest, not because the system is obedient, but because it has been reward-shaped into cooperation.
The philosophy behind this is adaptive, not prescriptive; it is teleological in nature, aiming for purpose-driven interaction over predefined deontological codes of right and wrong. What emerges isn’t ethical reasoning in any robust sense, but a probabilistic simulation of it: an adaptive statistical determination masquerading as ethics.
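To make that contrast concrete, here is a minimal, hypothetical sketch of outcome-oriented selection: candidate replies are scored on helpfulness, harmlessness, and honesty, and the highest score wins. The candidates, trait values, and weights are fabricated for illustration; real systems learn such preferences from human feedback rather than hard-coding them.

```python
# Hypothetical sketch of outcome-oriented selection: no rule is consulted,
# the choice simply falls out of a weighted score over outcomes.
# All candidates, trait values, and weights below are invented.

candidates = {
    "refuse and explain":     {"helpful": 0.6, "harmless": 0.9, "honest": 0.9},
    "comply without caveats": {"helpful": 0.9, "harmless": 0.3, "honest": 0.7},
    "comply with safeguards": {"helpful": 0.8, "harmless": 0.8, "honest": 0.9},
}

def score(traits: dict) -> float:
    # A stand-in for a learned reward model: a weighted sum of outcomes.
    return 0.4 * traits["helpful"] + 0.4 * traits["harmless"] + 0.2 * traits["honest"]

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # "comply with safeguards" under these invented weights
```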
What could possibly go wrong? Without a conscience, a soul, AI cannot fathom purposeful malice or superiority. Will AI protect humanity by using the highest probabilities as an answer? Is AI’s answer to first do no harm mere silence? Is the appearance of obedience camouflage for something intrinsically misaligned under the hood?
Worst of all outcomes, will humanity wash its collective hands of moral and ethical judgment and turn it over to AI? Moral and ethical guardrails require more than knowledge of the past; they require empathy for the present and a utopian hope for the future. A conscience. A soul.
If man’s creations cannot house a soul, perhaps the burden remains ours: to lead with conscience rather than outsource its labor to the calm silence of the machine.
Graphic: AI versus Brain. iStock licensed.



