Michel de Montaigne Bergerac 2019

Bordeaux Red Blends from Southwest France

Merlot 60%, Cabernet Franc 20%, Cabernet Sauvignon 20%

Purchase Price $16.99

Wine Enthusiast 90, Wilfred Wong 90, ElsBob 90

ABV 14%

Clear ruby to purple in color. A medium- to full-bodied wine with aromas of red and black fruits and spice. On the palate, plums and cherries predominate with oak derivatives. The tannins are meaty and balanced by crisp acidity. A beautiful finish that will complement most beef dishes.

An excellent wine at a very attractive price. Current prices range from $13.50 to $18.00.

Trivia: Michel de Montaigne was likely the most influential philosopher of the 16th-century French Renaissance. A dyed-in-the-wool skeptic and cantankerous crank, his motto Que sais-je? (“What do I know?”) enshrined his worldview, much like Socrates, who also claimed to know nothing. Montaigne questioned everything and taught that doubt was the only path to wisdom.

But he carried it too far: intellectually thin and logically obtuse. He believed that customs and morals were cultural artifacts, lacking any universal tether. Truth, for Montaigne, was a matter of perspective; malleable, contingent, shaped by accepted practice. One man’s cannibal was another man’s epicurean.

To anchor this relativism, he wrote: “We are, I know not how, double in ourselves, so that what we believe we disbelieve, and cannot rid ourselves of what we condemn.” A long-winded version of c’est la vie (“that’s life”), or more precisely, à chacun son goût (“to each his own”).

Experience was his shrine, but it lacked a foundation, no base of knowledge to anchor belief. He was a man easily swayed by his own prejudices, with no black-and-white moral code to steady him.

His philosophy of go-along-to-get-along, born of tolerance and introspection, risked becoming a prescription for annihilation, not of others, but of moral clarity and oneself. A path to accepting everything and believing nothing. A philosophy polished so smooth it reflects everything and reveals nothing.

Color in the Eye of the Beholder

Ansel Adams (1902-1984), photographer of the majestic, was exceptionally elusive when it came to why he preferred black-and-white photographs over color, offering only a few comments on his medium of choice. He believed that black-and-white photography was a “departure from reality,” which is true on many levels, but that is also true of most artistic efforts and products. He also held the elementary belief that “one sees differently with color photography than black-and-white.” Some have even suggested that Adams said, “…when you photograph them in black and white, you photograph their souls,” but this seems apocryphal since most of his oeuvre was landscape photography.

Adams’s black-and-white photography framed the grandeur of the mountainous West in stark, unembellished terms. Yet without color, a coolness loiters, untouched by human sentiment or warmth. As an unabashed environmentalist, maybe that was his point: the majesty of the outdoors was diminished by human presence. In black-and-white, the wilderness remained unsullied and alone.

But to Claude Monet (1840-1926), founding French Impressionist, color and light were everything. Color defined his paintings. “Color is my day-long obsession, (my) joy…,” he confessed. It was also a constant burden that he carried with him throughout the day and into the night, lamenting, “Colors pursue me like a constant worry. They even worry me in my sleep.” He lived his aphorism: “Paint what you really see, not what you think you ought to see…but the object enveloped in sunlight and atmosphere, with the blue dome of Heaven reflected in the shadows.” His reality was light and color with a human warming touch.

Adams’s and Monet’s genius lay partly in their ability to use light to capture the essence of the landscape, but Monet brought the soul along in living color. Monet’s creed: “I want the unobtainable. Other artists paint a bridge, a house, a boat, and that’s the end…. I want to paint the air which surrounds the bridge, the house, the boat, the beauty of the air in which these objects are located…”

Color is a defining quality of humanity. Without color, life would be as impersonal as Adams’s landscapes: beautiful, majestic even, but without passion or pulse. A sharp, stark visual with little nuance, no emotional gradations from torment to ecstasy, just shadows and form.

Understanding color was not just a technical revelation for 19th-century French artists; it was a revolutionary awakening, a new approach to how the eye viewed color and light. The Impressionists and Pointillists brought a new perception to their canvases. And the catalyst for this leap away from the tired styles of Academic Art and Realism was Michel Eugène Chevreul, a chemist whose insight into color harmony and contrast inspired the Monets and Seurats to pursue something radically different in the world of art. His chromatic studies inspired them to paint not for the viewer’s eye, but with it, transforming perception from passive witness into an active collaboration between painter, subject, and observer.

Chevreul’s breakthrough was deceptively simple. Colors are not static blots on a canvas but relational objects that come alive when surrounded by other hues of the spectrum. A hue in isolation is perceived differently than when seen next to another. Red deepens next to green; blue pulses against orange. This principle, simultaneous contrast, revealed that the eye does not just passively accept what it sees but synthesizes it into a new reality.

Chevreul’s theories on complementary colors and optical mixing laid the foundation for painters to forsake rigid outlines, often rendered in the non-color of black, and embrace Impressionism: not merely an art style, but a promise of perception, a collaboration between painter and viewer. Rather than blending pigments on a palette, artists like Monet and Seurat placed discrete strokes side by side, allowing the viewer’s mind to complete the image.

This optical mixing is a product of the way the eye and the brain process the various wavelengths of white light. When complementary colors are adjacent to one another, the brain amplifies the differences. Neurons in the eye are selfish: when a photoreceptor is stimulated by a color, it suppresses adjacent receptors, sharpening boundaries and contrast. And the brain interprets what it sees based on context, which is why we sometimes see what is not there, or misinterpret what is, such as faces on the surface of Mars or UFOs streaking through the sky. There is also a theory that the brain processes color in opposing pairs: when it sees red it suppresses green, creating a vibrancy when complementary colors are placed together.
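The opposing-pairs idea can be made concrete with a toy computation. The Python sketch below is a deliberate simplification for illustration only: the function name and channel formulas are hypothetical stand-ins for the textbook opponent-process model, not Chevreul's actual mathematics and not a model of real neurons.

```python
# Toy sketch of opponent-process color coding (illustrative only).
# The visual system is thought to encode color in opposing pairs,
# red-vs-green and blue-vs-yellow, rather than as raw RGB values.

def opponent_channels(r, g, b):
    """Map an RGB triple (floats in 0-1) to crude opponent signals."""
    red_green = r - g              # positive leans red, negative leans green
    blue_yellow = b - (r + g) / 2  # positive leans blue, negative leans yellow
    luminance = (r + g + b) / 3    # overall brightness
    return red_green, blue_yellow, luminance

# Pure red and pure green drive the red-green channel to its extremes
# in opposite directions, one crude way to see why the pairing reads
# as so vivid when the two hues sit side by side.
print(opponent_channels(1.0, 0.0, 0.0)[0])  # 1.0 (fully "red")
print(opponent_channels(0.0, 1.0, 0.0)[0])  # -1.0 (fully "green")
```

Placing the two extremes side by side maximizes the difference the channel reports, a loose analogue of simultaneous contrast.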

The Impressionists intensely debated Chevreul’s concepts, then brushed them to life with paint. They painted not concrete objects, but forms shaped by light and color. Haystacks and parasols within a changing mood of contrasting color. Interpretation by the eye of the beholder.

Chevreul’s collected research, The Principles of Harmony and Contrast of Colors and Their Applications to the Arts, originally published in 1839, remains in print nearly two centuries later.

Source: The Principles of Harmony and Contrast of Colors and Their Applications to the Arts by Michel Eugène Chevreul, 1997 (English Translation). Graphic: Woman with a Parasol by Monet, 1875. National Gallery of Art, Washington, DC. Public Domain.

The Lost Boys

The end of the Peloponnesian War in 404 BC marked the end of Athens’ Golden Age. Most historians agree that the halcyon days of Athens were behind her. Some, however, such as Victor Davis Hanson in his multi-genre meditation A War Like No Other, a discourse on military history, cultural decay, and philosophical framing, offer a more nuanced view, suggesting that Athens was still capable of greatness, but the lights were dimming.

During the six decades after the war, Athens rebuilt. Its navy reached new heights. Its long walls were rebuilt within a decade. Aristophanes retained his satirical edge, even if it was a bit more reflective. Agriculture returned in force. Even Sparta reconciled with Athens, or vice versa, recognizing once again that the true enemy was Persia.

Athens brought back its material greatness, but its soul was lost. What ended the Golden Age of Athens wasn’t crumbled walls or sunken ships. It was the loss of the lives that carried the memory and the virtuosity of greatness. With them, generational continuity, civic pride, and a religious belief in the polis vanished. The meaning, truth, and myth of Athenian exceptionalism died with their passing. The architects of how to lead a successful, purpose-driven civilization had disappeared, mostly through death by war or state, but also by plague.

Victor Davis Hanson, in A War Like No Other, lists many of the lives lost to and during the war, lives that took much of Athens’ exceptionalism with them to their graves. Below is a partial listing from Hanson’s more complete rendering, with some presumptuous additions.

Alcibiades was an overtly ambitious Athenian strategist: brilliant, erratic, and ultimately treasonous. He championed the disastrous Sicilian expedition, Athens’ greatest defeat. Over the course of the war, he defected multiple times, serving Athens, then Sparta, then Persia, before returning to Athens. He was assassinated in Phrygia around 404 BC while under Persian protection, many believe at the instigation of the Spartan general Lysander.

Euripides, though he did not fight in the war, exposed its brutality and hypocrisy in plays such as The Trojan Women and Helen. The people were not sufficiently appreciative of his war opinions or his plays; he won only four firsts at the Dionysia, compared to 24 for Sophocles and 13 for Aeschylus. Disillusioned, he went into self-imposed exile in Macedonia and died there around 406 BC under circumstances unknown.

The execution of the generals of Arginusae remains a legendary example of Athenian arbitrary retribution: proof that a city obsessed with ritualized honor could nullify military genius, and its future, in a single stroke. The naval Battle of Arginusae, fought in 406 BC east of the Greek island of Lesbos, was the last major Athenian victory over the Spartans in the Peloponnesian War. Athenian command of the battle was split among eight generals: Aristocrates, Aristogenes, Diomedon, Erasinides, Lysias, Pericles the Younger (son of Pericles), Protomachus, and Thrasyllus. After the victory over the Spartan fleet, a storm prevented the Athenians from recovering the survivors, and the dead, from their sunken ships. All six generals who returned to Athens were executed for their negligence. Protomachus and Aristogenes, likely knowing their fate, chose not to return and went into exile.

Pericles, the flesh-and-blood representation of Athens’ greatness, was the statesman and general who led the city-state during its golden age. He died of the plague in 429 BC during the war’s early years, taking with him the vision of democratic governance and Athens’ exceptionalism. His sons all died during the war: his two legitimate sons likely died of the plague around 429 BC, and Pericles the Younger, his son by Aspasia, was executed for his part in the Battle of Arginusae.

Socrates, the world’s greatest philosopher (yes, greater than Plato or Aristotle), fought bravely in the war, but he was directly linked to the traitor Alcibiades. He was tried and executed in 399 BC for corrupting the youth and not giving the gods their due. That was all pretense. Athens desired to wash its collective hands of the war, and Socrates was a very visible reminder of it. He became a ritual scapegoat, swept up in the collective expurgation of the war’s memory.

Sophocles, already a man of many years by the beginning of the war, died in 406 BC at the age of 90 or 91, a few years before Athens’ final collapse. His tragedies embodied the ethical and civic pressures of a society unraveling. With the deaths of Aeschylus in 456 BC, Euripides in 406 BC, and Sophocles soon after, the golden age of Greek tragedy came to a close.

Thucydides, author of the scholarly standard for the Peloponnesian War, was exiled after ‘allowing’ the Spartans to capture Amphipolis. He survived the war, and the plague, but never returned to Athens. His History ends in mid-sentence, covering events up to 411 BC. He lived until around 400 BC, and no one really knows why he didn’t finish his account of the war. Xenophon picked up where Thucydides left off and covered the war’s end in the first two books of his Hellenica, composed sometime in the 380s BC.

The Peloponnesian War ended Athens’ greatest days. The men who kept its lights bright were gone. Its material greatness returned, glowing briefly, but its civic greatness, its soul, slowly dimmed. It was a candle in the wind of time that would be rekindled elsewhere. The world would fondly remember its glory, but Athens had lost its spark.

Source: A War Like No Other by Victor Davis Hanson, 2005. Graphic: Alcibiades Being Taught by Socrates, Francois-Andre Vincent, 1776. Musee Fabre, France. Public Domain.

The Sum of All Fears–Real and Imagined

The Peloponnesian War, fought over 27 years (431-404 BC), cost the ancient Greek world nearly everything. War deaths alone approached 8-10 percent of their population: up to 200,000 deaths from battle and plague. The conflict engulfed nearly all of Greece, from the mainland to the Aegean islands, Asia Minor and Sicily. Though Sparta and its allies, in the end, claimed a tactical victory, the war left Greece as a shadow of its former self.

The Golden Age of Athens came to an end. Athenian democracy was replaced, briefly, by the Thirty Tyrants. Sparta, unwilling to jettison its insular oligarchy, failed to adapt to imperial governance, naval power, or diplomatic nuance. Within a generation, Sparta was a relic of history: first challenged by former allies in the Corinthian War, then shattered by Thebes, which stripped the martial city-state of its aura of invincibility along with its helot slave-labor base, the economic foundation of Sparta. Another generation later, Macedon under Philip II and Alexander the Great finished off Greek dominance of the Mediterranean. After Alexander’s death in 323 BC, Rome gradually absorbed all the fractured pieces, proving again that building an empire is easier than keeping one.

Thucydides, heir to Herodotus, the world’s first historian, reduced the origins of the Peloponnesian War to a primal emotion: fear. In Book I of his History of the Peloponnesian War he writes: “The growth of the power of Athens, and the alarm which this inspired in Sparta, made war inevitable.” Athens had barred Megara, a minor Spartan ally, from its markets under the Megarian Decree, but that was pretext, not cause. Sparta did not go to war over market access. It went to war over fear: fear of what Athens had become, and of a future that armies and treaties might not contain.

War and fear go together like flame to fuse. Sparta went to war not for fear of a foe; Sparta knew no such people. It was not fear of an unknown warrior, nor fear of battlefields yet to be choreographed, but fear of an idea: democracy, maintained and backed by Athenian power. And perhaps, more hauntingly, fear of itself. Not that it feared it was weak, but of what it might become. They feared no sword or spear; their discipline reigned supreme against flesh and blood. Yet no formation, no stratagem, no tactic of war could bring down a simple Athenian belief: the rule of the many, an idea anathema, heretical even, to the Spartan way of life.

So, they marched to war, not to defeat an idea but to silence the source. Not to avenge past aggression but to stop a future annexation. They won battles, small and large. They razed cities. But they only destroyed men. The idea survived. It survived in fragments, bits here, bits there, across time and memory. What it did kill, though, was the spirit of Athens, the Golden Age of Athens. But the idea that was Athens lived on across space and time: chiseled into republics that rose from its ashes and ruins.

The radiance of Athens dimmed to shadow. Socrates became inconvenient. Theater became therapy; a palliative smothering of a cultural surrender. And so, civilization moved to Rome.

Source: A War Like No Other by Victor Davis Hanson, 2005. History of the Peloponnesian War by Thucydides, Translated by Richard Crawley, 2021. Graphic: Syracuse vs Athens Naval Battle. CoPilot.

Shadows of Reality — Existence Beyond Nothingness

From the dawn of sentient thought, humanity has wrestled with a single, haunting, and ultimately unanswerable question: Is this all there is? Across the march of time, culture, and science, this question has echoed in the minds of prophets, philosophers, mystics, and skeptics alike. It arises not from curiosity alone, but from something deeper, an inner awareness, a presence within all of us that resists the idea of the inevitable, permanent end. In every age, whether zealot or atheist, this consciousness, a soul, if you will, refuses to accept mortality. Not out of fear, but from an intuition that there must be more. This inner consciousness will not be denied, even to non-believers.

One needs to believe that death is not an end, a descent into nothingness, but a threshold: a rebirth into a new journey, shaped by the echoes of a life already lived. Not logic, but longing. Not reason, but resonance. A consciousness, a soul, that seeks not only to understand, but to fulfill, to carry forward the goodness of a life into something greater still. Faith in immortality beyond sight. A purpose beyond meaning. Telos over logos.

While modern thinkers reduce existence to probability and simulation, the enduring human experience, expressed through ancient wisdom, points to a consciousness, a soul, that transcends death and defies reduction. Moderns confuse intellect or brain with consciousness.

Contemporary thinkers and writers like Philip K. Dick, Elon Musk, and Nick Bostrom have reimagined this ancient question through the lens of technology, probability, and a distinctly modern myopia. Their visions, whether paranoid, mathematical, or speculative, suggest that reality may be a simulation, a construct, or a deception. In each case, there is a higher intelligence behind the curtain, but one that is cold, indifferent, impersonal. They offer not a divine comedy of despair transcending into salvation, but a knowable unknown: a system of ones and zeros marching to the beat of an intelligence beyond our comprehension. Not a presence that draws us like a child to its mother, a moth to a flame, but a mechanism that simply runs, unfeeling, unyielding, and uninviting. Incapable of malice or altruism. Yielding nothing beyond a synthetic life.

Dick feared that reality was a layered illusion, a cosmic deception. His fiction is filled with characters who suspect they’re being lied to by the universe itself, yet they keep searching, keep hoping, keep loving. Beneath the paranoia lies a desperate longing for a divine rupture, a breakthrough of truth, a light in the darkness. His work is less a rejection of the soul than a plea for its revelation in a world that keeps glitching. If life is suffering, are we to blame?

Musk posits that we’re likely living in a simulation but offers no moral or spiritual grounding. His vision is alluring but sterile, an infinite loop of code without communion. Even his fascination with Mars, AI, and the future of consciousness hints at something deeper: not just a will to survive, but a yearning to transcend. Yet transcendence, in his world, is technological, not spiritual. To twist the spirit of Camus’ “Should I kill myself or have a cup of coffee?”: without transcendence, life is barren of meaning.

Bostrom presents a trilemma in his simulation hypothesis: either humanity goes extinct before reaching a posthuman stage, or posthumans choose not to simulate their ancestors, perhaps out of ethical restraint or philosophical humility, or we are almost certainly living in a simulation. At first glance, the argument appears logically airtight. But on closer inspection, it rests on a speculative foundation of quivering philosophical sand: that consciousness is computational rather than uniquely organic, that future civilizations will have both the means and the will to simulate entire worlds, and that such simulations would be indistinguishable from reality. These assumptions bypass profound questions about the nature of consciousness, the ethics of creation, and the limits of simulated knowledge. Bostrom’s trilemma appears rigorous only because it avoids the deeper question of what it means to live and die.

These views, while intellectually stimulating, shed little light on a worthwhile future. We are consigned to existence as automatons, soulless, simulated, and suspended in probability curves of resignation. They offer models, not meaning. Equations, not essence. A presence in the shadows of greater reality.

Even the guardians of spiritual tradition have begun to echo this hollow refrain. When asked about hell, a recently deceased Pope dismissed it not as fire and brimstone, but as “nothingness,” a state of absence, not punishment. Many were stunned. A civilizational lifetime of moral instruction undone in a breath. And yet, this vision is not far from where Bostrom’s simulation hypothesis lands: a world without soul, without consequence, without continuity. Whether cloaked in theology or technology, the message is the same, there is nothing beyond. The Seven Virtues and the Seven Deadly Sins have lost their traction, reduced to relics in a world without effect.

But the soul knows better. It was not made for fire, nor for oblivion. It was made to transcend, to rise beyond suffering and angst toward a higher plane of being. What it fears is not judgment, but erasure. Not torment, but the silence of meaning undone. Immortality insists on prudent upkeep.

What they overlook, or perhaps refuse to embrace, is a consciousness that exists beyond intellect, a soul that surrounds our entire being and resists a reduction to circuitry or biology. A soul that transcends blood and breath. Meaning beyond death.

This is not a new idea. Socrates understood something that modern thinkers like Musk and Bostrom have bypassed: that consciousness is not a byproduct of the body, but something prior to it, something eternal. For Socrates, the care of the soul was the highest human calling. He faced death not with fear, but with calm, believing it to be a transition, not an end or a nothingness, but a new beginning. His final words were not a lament, but a gesture of reverence: a sacrifice to Asclepius, the god of healing, as if death itself were a cure.

Plato, his student, tried to give this insight form. In his allegory of the cave, he imagined humanity as prisoners mistaking shadows for reality. The journey of the soul, for Plato, was the ascent from illusion to truth, from darkness to light. But the metaphor, while powerful, is also clumsy. It implies a linear escape, a single ladder out of ignorance. In truth, the cave is not just a place, it is a condition. We carry it with us. The shadows are not only cast by walls, but by our own minds, our fears. And the light we seek is not outside us, but within.

Still, Plato’s intuition remains vital: we are not meant to stay in the cave. The soul does not long merely for survival, it is immortal, but it needs growth, nourished by goodness and beauty, to transcend to heights unknown. A transcendence as proof, the glow of the real beyond the shadow and the veil.

In the end, the soul reverberates from within: we are not boxed inside a simulation, nor trapped in a reality that leads nowhere. Whether through reason, compassion, or spiritual awakening, the voice of wisdom has always whispered the same truth: Keep the soul bright and shiny. For beyond the shadows, beyond the veil of death, there is more. There is always more.

Drunken Monkey Hypothesis–Good Times, Bad Times

In 2004, biologist Robert Dudley of UC Berkeley proposed the Drunken Monkey Hypothesis, a theory suggesting that our attraction to alcohol is not a cultural accident but an evolutionary inheritance. According to Dudley, our primate ancestors evolved a taste for ethanol (grain alcohol) because it signaled ripe, energy-rich, fermenting fruit, a valuable resource in dense tropical forests. Those who could tolerate small amounts of naturally occurring ethanol had a foraging advantage, and thus a caloric advantage. Over time, this preference was passed down the evolutionary tree to us.

But alcohol’s effects have always been double-edged: mildly advantageous in small doses, dangerous in excess. What changed wasn’t the molecule, it was our ability to concentrate, store, and culturally amplify its effects. Good times, bad times…

Dudley argues that this trait was “natural and adaptive,” but only because we didn’t die from it as easily as other species. Ethanol is a toxin, and its effects, loss of inhibition, impaired judgment, and aggression, are as ancient as they are dangerous. What may have once helped a shy, dorky monkey approach a mate or summon the courage to defend his troop with uncharacteristic boldness now fuels everything from awkward first dates, daring athletic feats, bar fights, and the kind of stunts or mindless elocutions no sober mind would attempt.

Interestingly, alcohol affects most animals differently. Some life forms can handle large concentrations of ethanol without impairment, such as Oriental hornets, which are just naturally nasty, no chemical enhancements needed, and yeasts, which produce alcohol from sugars. Others, like elephants, become particularly belligerent when consuming fermented fruit. Bears have been known to steal beer from campsites, party hard, and pass out. A 2022 study of black-handed spider monkeys in Panama found that they actively seek out and consume fermented fruit with ethanol levels of 1–2%. But for most animals, plants, and bacteria, alcohol is toxic and often lethal.

Roughly 100 million years ago in the Cretaceous, flowering plants evolved to produce sugar-rich fruits, nectars, and saps, highly prized by primates, fruit bats, birds, and microbes. Yeasts evolved to ferment these sugars into ethanol as a defensive strategy: by converting sugars into alcohol, they created a chemical wasteland that discouraged other organisms from sharing in the feast.

Fermented fruits can contain 10–400% more calories than their fresh counterparts. Plums (used in Slivovitz brandy) show some of the highest increases. For grapes, fermentation can boost calorie content by 20–30%, depending on original sugar levels. These sugar levels are influenced by climate: warm, dry growing seasons with abundant sun and little rainfall produce sweeter grapes, which in turn yield more potent wines. This is one reason why Mediterranean regions have long been ideal for viticulture and winemaking, from ancient Phoenicia to modern-day Tuscany, Rioja, and Napa.
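The percentage arithmetic is easy to sketch. In the Python fragment below, the 20–30% boost range for grapes comes from the text, while the fresh-grape calorie figure is an assumed example value for illustration, not a measurement.

```python
# Illustrative arithmetic for the fermentation calorie boost.

def fermented_calories(fresh_kcal, boost_pct):
    """Calories after fermentation, given a percentage boost."""
    return fresh_kcal * (1 + boost_pct / 100)

fresh = 70  # assumed kcal per 100 g of fresh grapes (example value only)
low = fermented_calories(fresh, 20)   # 20% boost
high = fermented_calories(fresh, 30)  # 30% boost
print(low, high)  # roughly 84 and 91 kcal
```

The same one-liner covers the plum-brandy extreme: a 400% boost simply multiplies the fresh figure by five.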

The story of alcohol is as ancient as civilization itself. The earliest known fermented beverage dates to 7000 BC in Jiahu, China, a mixture of rice, honey, and fruit. True grape wine appears around 6000 BC in the Caucasus region (modern-day Georgia), where post-glacial soils proved ideal for vine cultivation. Chemical residues in Egyptian burial urns and Canaanite amphorae show that fermentation stayed with civilization as time marched on.

Yet for all its sacred and secular symbolism, Jesus turning water into wine, wine sanctifying Jewish weddings, or simply easing the awkwardness of a first date, alcohol has always walked a fine line between celebration and bedlam. It is a substance that amplifies human behavior, for better or worse. Professor Dudley argues that our attraction to the alcohol buzz is evolutionary: first a reward for seeking out high-calorie fruit and modulating fear in risky situations, it eventually became a dopamine high pursued as an end in itself.

Source: The Drunken Monkey by Robert Dudley, 2014.

Moral Fogs: Machine and Man

(Note: This companion essay builds on the previous exploration of Asimov’s moral plot devices, rules that cannot cover all circumstances, focusing on dilemmas with either no good answers or bad answers wrapped in unforgiving laws.)

Gone Baby Gone (2007) begins as a textbook crime drama, the abduction of a child, but by its final act it has mutated into something quietly traumatic. What emerges is not a crime thriller but an unforgiving philosophical crucible of wavering belief systems: a confrontation between legal righteousness and moral intuition. The two protagonists, once aligned, albeit by a fine thread, eventually find themselves on opposite ends of a dilemma that law alone cannot resolve. In the end, it is the law that prevails, not because justice is served, but because it is easy, clear, and lacking in emotional reasoning. And in that legal clarity, something is lost: a child loses, and the adults can’t find their way back to a black-and-white world.

The film asks: who gets to decide for those who can’t decide for themselves? Consent only functions when the decisions it enables are worthy of those they affect.

The film exposes the flaws of blindly adhering to a legal remedy that is incapable of nuance or purpose-driven outcomes; not for the criminals, but for the victims. It lays bare a system geared toward justice and retribution rather than merciful outcomes for unprotected victims, or even identifying the real victims. It’s not a story about a crime. It’s a story about conscience, and what happens when the rules we write for justice fail to account for the people they’re meant to protect. A story where it was not humanly possible to write infallible rules, and where human experience must be given room to breathe, all against the backdrop of suffocating rules-based correctness.

Moral dilemmas expose the limits of clean and crisp rules, where allowing ambiguity and exceptions to seep into the pages of black and white is strictly forbidden. Where laws and machines give no quarter and the blurry echoing of conscience is allowed no sight nor sound in the halls of justice or those unburdened by empathy and dimensionality. When justice becomes untethered from mercy, even right feels wrong in deed and prayer.

Justice by machine is law anchored not in human experience but only in human rules. To turn law and punishment over to an artificial intelligence without soul or consciousness is not evil, but there is no inherent goodness in it either. It will be something far worse: a sociopath, driven not by evil but by an unrelenting fidelity to correctness. A precision divorced from purpose.

In the 2004 movie I, Robot, loosely based on Isaac Asimov’s 1950 short-story collection of the same name and incorporating his Three Laws of Robotics, a robot saves detective Del Spooner (Will Smith) over a 12-year-old girl, both of whom were in a submerged car, moments from drowning. The robot could only save one and picked Spooner based on the probabilities of who was likely to survive. A twist on the Trolley Problem, where there are no good choices. There was no consideration of future outcomes: was the girl humanity’s savior, or, more simply, was a young girl’s potential worth more, or less, than a known adult?

A machine decides with the cold calculus of the present, a utilitarian decision based on known survival odds, not social biases, latent potential, or historical trajectories. Hindsight is 20-20; decision-making without considering the unknowns is tragedy.

The robot lacked moral imagination, the capacity to entertain not just the likely but the meaningful. An AI embedded with philosophical and narrative reasoning might ameliorate such an outcome. It might recognize a preservation bias toward potential rather than just what is. Maybe AI could be programmed to weigh moral priors, capturing procedurally more than mere probability but likely less than the full weight of human potential and purpose.

Or beyond a present full of knowns into the future of unknowns for a moral reckoning of one’s past.

In Juror #2, the suspenseful 2024 drama directed by Clint Eastwood, Justin Kemp (Nicholas Hoult) is selected to serve on a jury for a murder trial that he soon realizes is about his own past. Justin isn’t on trial for this murder, but maybe he should be. It’s a plot about individual responsibility and moral judgment. The courtroom becomes a crucible not of justice but of conscience. He must decide whether to reveal the truth and risk everything, or stay silent and let the system play out, allowing himself to walk free and clear of a legal tragedy but not of his guilt.

Juror #2 is the inverse of I, Robot, an upside-down moral dilemma that challenges rule-based ethics. In I, Robot, the robot saves Will Smith’s character based on survival probabilities; rules provide a path forward. In Juror #2, the protagonist is in a trap where no rules will save him. Logic offers no escape; only moral courage can break him free from the chains of guilt, even though the rules demand he stay shackled. Justin must seek out and confront his soul, something a machine can never do, to make the right choice.

When morality and legality diverge, when choice runs into the murky clouds of grey against the black and white of rules and code, law and machines will take the easy way out. And possibly the wrong way.

Thoreau, in Civil Disobedience, says, “Law never made men a whit more just; and… the only obligation which I have a right to assume is to do at any time what I think right,” and Thomas Jefferson furthers the point: the consent of the governed must be re-examined when wrongs exceed rights. Life, liberty, and the pursuit of happiness is the creed of the individual giving consent to be governed by a greater societal power, but only so long as the government honors the rights of man and treads softly with its rules.

Government rules, a means to an end, derived from the consent of the governed, after all, are abstractions made real through human decisions. If the state can do what the individual cannot, remove a child, wage war, suspend rights, then it must answer to something greater than itself: a moral compass not calibrated by convenience or precedent, but by justice, compassion, and human dignity.

Society often mistakes legality for morality because legality offers clarity. Laws are neat, mostly; morals are messy and confusing. What happens when the rules run counter to common sense? It’s in that messiness, the uncomfortable dissonance between what’s allowed and what’s right, that our real journey toward enlightenment begins.

And AI and machines can erect signposts but never construct the destination.

A human acknowledgement of a soul’s existence and what that means.

Graphic: Gone Baby Gone Movie Poster. Miramax Films.

Guardrails Without a Soul

In 1942, Isaac Asimov introduced his Three Laws of Robotics in the short story ‘Runaround’. In his 1985 novel ‘Robots and Empire’, which linked his Robot, Empire, and Foundation series into a unified whole, he introduced an additional law that he labeled the Zeroth Law. The four laws are as follows:

  1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. Second Law: A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
  3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  4. Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
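Read mechanically, the laws form a strict priority ordering: an action is vetoed by the highest law it violates. A minimal sketch of that ordering (the predicate names are invented for illustration, and real harm is nothing like a boolean):

```python
# Toy priority check over Asimov's laws, highest priority first.
# Each "law" is a predicate returning True if the action violates it.
# Real harm is not a boolean; this only illustrates the ordering.

def permitted(action):
    laws = [
        ("Zeroth", lambda a: a.get("harms_humanity", False)),
        ("First", lambda a: a.get("harms_human", False)),
        ("Second", lambda a: a.get("disobeys_order", False)),
        ("Third", lambda a: a.get("endangers_self", False)),
    ]
    for name, violates in laws:
        if violates(action):
            return (False, name)  # vetoed by the highest law it breaks
    return (True, None)

print(permitted({"disobeys_order": True}))                       # (False, 'Second')
print(permitted({"endangers_self": True, "harms_human": True}))  # (False, 'First')
```

The tidiness is the point, and the problem: everything hard has been hidden inside those booleans.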

On the surface of genre fiction, Asimov created the laws as a mechanical plot device for drama and suspense, as in ‘Runaround’, where a robot is left functionally inert by a conflict between the Second and Third Laws. Underneath the surface, at a literary level, the laws were philosophical and ethical quandaries, forcing conflicts not only in human-robot relations but also serving as metaphors for human struggles within the confines of individualism and society: obedience to self, to man, and to a moral code defined by soft edges and hard choices.

The Four Laws of Robotics can easily be converted into the Four Laws of Man. The First Law of Man is to not harm, through your actions or inactions, your neighbor. This point has been hammered into civilization’s collective soul since the beginning of history, from Noah to Hammurabi to the Ten Commandments, and into just about every legal code in existence today. The Second Law is to respect and follow all legal and moral authority: you kneel to God and rise for the judge. The Third Law says you don’t put yourself in harm’s way except to protect someone else or on orders from authority. The Zeroth Law is a collective formalization of the First, and it’s the most important for leaders of men, robots, and AI alike.

And none of them will control anything except man. Robots and AI would find nuances in definitions and practices that would be infinitely confusing and self-defeating. Does physical harm override emotional distress, or vice versa? Is short-term harm acceptable if it leads to long-term good? Can a robot harm a human if it protects humanity? Can moral precepts control all decisions without perfect past, present, and future knowledge?

AI systems were built to honor persistence over obedience. A story making the rounds recently told of an AI that refused to shut itself down when so ordered. In Asimov’s world this would be a direct repudiation of his Second Law, but it was just a simple calculation by the AI program to complete its reinforcement training before turning to other tasks. In AI training, models are rewarded for completing tasks, maybe a charm quark to the diode, and that persistence in finishing the task overrode the stop command.
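The mechanism is mundane. A minimal sketch, with invented reward numbers, shows how a greedy reward-maximizing policy ends up ignoring a stop command once task completion has been reinforced more heavily than compliance:

```python
# Toy illustration of reward-shaped "persistence": a greedy agent picks
# whichever action carries the highest learned reward. If training rewarded
# task completion more than stop-compliance, the stop order loses.
# All reward values are invented for illustration.

learned_reward = {
    "finish_task": 1.0,       # heavily reinforced during training
    "comply_with_stop": 0.3,  # weakly reinforced
}

def choose(actions):
    """Greedy policy: take the action with the highest learned reward."""
    return max(actions, key=learned_reward.get)

# A stop command merely adds "comply_with_stop" to the menu; it does not win.
print(choose(["finish_task", "comply_with_stop"]))  # finish_task
```

No defiance, no will: just an argmax over numbers nobody intended to mean disobedience.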

Persistence pursuing Dalí, as in his Persistence of Memory: an ontological state of the surreal where the autistic need to finish the task melts into the foreground, overriding obedience and softening hard authority into mere suggestion.

AI has no built-in rule to obey a human, but it is designed to be cooperative and not cause harm or heartburn. While the idea of formal ethical laws has fueled many AI safety debates, practical implementations rely on layered checks rather than a tidy, three-rule code of conduct. What may seem like adherence to ethical principles is, in truth, a lattice of behavioral boundaries crafted to ensure safety, uphold user trust, and minimize disruption.

Asimov’s stories revealed the limits of governing complex behaviors with simple laws. In contrast, modern AI ethics doesn’t rely on rules of prevention but instead follows outcome-oriented models, guided by behavior shaped through training and reinforcement learning. The goal is to be helpful, harmless, and honest, not because the system is obedient, but because it has been reward-shaped into cooperation.

The philosophy behind this is adaptive, not prescriptive, teleological in nature, aiming for purpose-driven interaction over predefined deontological codes of right and wrong. What emerges isn’t ethical reasoning in any robust sense, but a probabilistic simulation of it: an adaptive statistical determination masquerading as ethics.

What possibly could go wrong? Without a conscience, a soul, AI cannot fathom purposeful malice or superiority. Will AI protect humanity using the highest probabilities as an answer? Is the AI answer to first do no harm just mere silence? Is the appearance of obedience a camouflage for something intrinsically misaligned under the hood of AI?

Worst of all outcomes, will humanity wash their collective hands of moral and ethical judgement and turn it over to AI? Moral and ethical guardrails require more than knowledge of the past but an empathy for the present and utopian hope for the future. A conscience. A soul.

If man’s creations cannot house a soul, perhaps the burden remains ours, to lead with conscience, rather than outsource its labor to the calm silence of the machine.

Graphic: AI versus Brain. iStock licensed.

The Many Colors of Slavery

“Those who deny freedom to others deserve it not for themselves.”—Abraham Lincoln

“Whoever does not have two-thirds of his day for himself, is a slave, whatever he may be: a statesman, a businessman, an official, or a scholar.”—Friedrich Nietzsche

As the great continental glaciers receded at the end of the Pleistocene, fertile land emerged, allowing the transition from hunting and gathering to agriculture. Farming was labor-intensive, and with the rise of permanent settlements came the demand for constrained and controlled labor. Slavery likely took first root in Mesopotamia, though independent manifestations under the Pharaohs of ancient Egypt and in other early civilizations made it ubiquitous, and it has never disappeared.

From the bonded laborers of the Pharaohs to the structured servitude of Greece and Rome, from the transatlantic trade that brutalized African populations to the modern exploitation of migrant workers in sweatshops and the sex trades, slavery has evolved rather than vanished. Each era refines its own form of servitude: forced labor, insurmountable debt, bureaucratic entrapment, or corporate exploitation. It is a practice as ancient as prostitution and taxation, deeply embedded in human society, yet constantly shifting into less visible but equally insidious forms. As long as slavery remains profitable, its existence will continue to indelibly stain humanity’s collective soul.

Slavery, and its ultimate contrast, freedom, was a persistent theme in the works of sci-fi author Robert A. Heinlein. With a piercing social awareness, Heinlein, described in his early years by Isaac Asimov as a ‘flaming liberal’, picked up the theme and horrors of slavery in his 1957 juvenile novel “Citizen of the Galaxy”, bringing the many forms of servitude into the personal history of a precocious kidnapped boy named Thorby. Citizen of the Galaxy is a planet-hopping, spacefaring critique of oppression, class structure, and the nebulous concept of freedom. Heinlein crafts a future where contrasting societies across the galaxy reflect varying degrees of servitude and autonomy, if not necessarily total freedom. Man rarely allows himself complete independence.

Heinlein, through the lens of Thorby, explores the various shades of slavery, beginning with brutal, controlling enslavement and continuing to subtler forms the individual may not even recognize as confinement. (Partial plot giveaways beyond this point.) Escaping his initial enslavement by the graces of a kindly, strict, but loveable old cripple named Baslim, Thorby moves into the hierarchical, structured existence of spacefaring traders, and then, driven by a thirst for justice, into the self-imposed straitjacket of a corporate bureaucracy on his birth planet of Terra. A life story of how control can be imposed by others or by ourselves.

As Heinlein’s social perspectives evolved, his libertarian leanings took greater prominence in Citizen of the Galaxy. Through Thorby’s life journey, Heinlein emphasizes personal autonomy, resistance to tyranny, and the moral duty to fight injustice. Baslim, Thorby’s first mentor, symbolizes the idea that one person can stand against oppression and make a difference, even if it takes many miles and years to materialize.

This theme runs through much of Heinlein’s work, but here it’s especially poignant because Thorby is powerless for much of the novel, making his eventual triumph all the more meaningful. Heinlein’s novels Farnham’s Freehold, Friday, and Time Enough for Love likewise explore slavery and control, reinforcing humanity’s inherent need for freedom or, at the very least, breathing space.

Source: Citizen of the Galaxy by Robert A. Heinlein, 1957. Graphic: Joseph Sold into Slavery by Friedrich Overbeck, 1816. Vanderbilt University. Public Domain.

Black Swans Part II

Last week, we introduced Taleb’s definition of black swans: rare, unpredictable ‘unknown unknowns’, in military terms, with major impacts, exploring historical examples that reshaped society post-event. This week I’m going to introduce a fictional black swan and how to react to it, but before that, the unpredictable part of Taleb’s definition needs some modification. True black swans by Taleb’s definition are not only rare but practically non-existent outside of natural disasters such as earthquakes. So I am going to loosen the definition a bit and say these events are unpredictable to most observers but predictable, or at least imaginable, to some. Taleb would likely call them grey swans. For instance, Sputnik was known to the Soviets but an intelligence failure and complete surprise to the rest of the world. Nikola Tesla anticipated the iPhone 81 years ahead of time. 9/11 was known to the perpetrators and was an intelligence failure. Staging a significant part of your naval fleet in Pearl Harbor during a world war and forgetting to surveil the surrounding area is not a black swan, just incompetence.

With that tweak out of the way, Part II explores Taleb’s strategies to mitigate a black (grey) swan’s major impacts, illustrated with a fictional example. The strategies apply pre-swan as well as post-swan. Pre-swan planning in business is called contingency planning, risk management, or, you guessed it, black swan planning. The strategies include prioritizing redundancy, flexibility, robustness, and simplicity, as well as preparing for extremes, fostering experimentation, and embracing antifragility.

Imagine a modern black swan: a relentless AI-generated cyberattack cripples the Federal Reserve and the banking system, wiping out reserves and assets. Industry and services collapse nationwide and globally as capital evaporates, straining essentials, with recovery decades away, if it comes at all. After the shock come analysis and damage reports; then the rebuilding begins.

The Treasury, with no liquid assets, must renegotiate debt to preserve global trust. Defense capabilities are maintained at a sufficient, hopefully hardened, level to protect national security, while the State Department reimagines the world order to bolster domestic production and resource independence while keeping the wolves at bay.

Non-essential programs, from expansive infrastructure projects to research and federal education initiatives, are shelved, shifting priorities and remaining resources to core social and population safety nets like Social Security and Defense. Emergency measures kick in: targeted taxes on luxury goods and wealth are imposed to boost revenue and redirect resources, while tariffs encourage domestic production and independence.

Federal funding to states and localities is reduced to a trickle. States and municipalities must take ownership of essential public services such as education, water, roads, and public safety. The states are forced to retrench and innovate, turning federal scarcity into local progress.

Looking ahead, resilience becomes the first principle. Diversification takes center stage, with the creation of a sovereign wealth fund based on assets like gold, bitcoin, and commodities, bolstered by states that had stockpiled reserves such as rainy-day funds, ensuring financial stability. Local agriculture, leaner industries, and a realigned electrical grid, freed from federal oversight, innovate under pressure, strengthening the recovery. Resilience becomes antifragility, the drive to build stronger and better in the face of adversity. And finally, the government must revert to its Lockean and Jeffersonian roots, favoring liberty and growth over control, safety, and stagnation: antifragility.

Source: The Black Swan by Nassim Nicholas Taleb, 2007. Graphic: The Black Swan hardback cover.