The Jellyfish of Mind and Being

This essay began as a passing thought about jellyfish, those umbrellas of the sea drifting in blooms, fluthers, smacks, and swarms. They have no brain, no central command, only a diffuse matrix of neurons spread across their bodies. Yet they pulse, sting, drift, eat, and spawn, all without any trace of self-awareness.

This decentralized nerve net exposes the brittleness of Descartes’ dictum, cogito ergo sum: “I think, therefore I am.” Descartes, as did Socrates before him, equated thinking with consciousness.

For Socrates, thinking was the essence of the soul, inseparable from awareness and virtue. For Descartes, thinking was the proof of existence: the cogito. For philosophers today, consciousness reaches beyond thought, defined by the raw fact of experience; the sheer presence of what is.

Philosophers and neuroscientists now separate thinking (reasoning, problem-solving, language, though language is at minimum a bridge from brain to mind) from consciousness (the subjective “what it’s like” experience). Yet separating the two only deepens the fog, the mystery of being. A newborn may have consciousness without thought. A computer may “think” without consciousness. A jellyfish reacts but does not reflect; its life is sensation without self-awareness.

Consciousness is more than biology or electronics, a core of being rising above life, thought, and reaction. Living is not the same as consciousness. Living is metabolism, reaction, survival. Consciousness is the something extra, the lagniappe, the “what it’s like” to be. A dog feels pain without philosophizing. A newborn hungers without reflection. A jellyfish recoils from harm, detects light, adapts its behavior. Is that sentient? Perhaps. But self-aware thought? Almost certainly not.

The spectrum of awareness occupies a wide corridor of argument and reality. On one end, the jellyfish: life without thought, existence without awareness. On the other, humans: tangled in language, reflection, and self-modeling cognition. Between them lies the mystery. Anesthesia, coma, or dreamless sleep show that thought can vanish while consciousness flickers on, or vice versa. The two are not bound in necessity; reality shows they can drift apart.

Neuroscience maps the machinery, hippocampus for memory, thalamus for awareness, but cannot settle the duality. Neurons may spark and signals flow, yet consciousness remains more than electrical activity. It is not reducible to living. It is not guaranteed by thought. It is the specter of being that transcends living biology.

The jellyfish reminds us that being does not require thinking. Humans remind us that thinking does not explain consciousness. Between them, philosophy persists, not by closure, but by continuing to ask.

Perhaps the jellyfish is not a primitive creature but a reflecting pool of possibilities: showing us that being does not require thinking, and that consciousness may be more elemental than the cogito admits. The question is not whether we think, but whether we experience. And experience, unlike thought, resists definition even as it defines who we are.

In the end, Scarecrow, like the jellyfish, had no brain but was deemed the wisest man in Oz.

Graphic: A Pacific sea nettle (Chrysaora fuscescens) at the Monterey Bay Aquarium in California, USA. 2005. Public Domain.

Shadows of Reality — Existence Beyond Nothingness

From the dawn of sentient thought, humanity has wrestled with a single, haunting, and ultimately unanswerable question: Is this all there is? Across the march of time, culture, and science, this question has echoed in the minds of prophets, philosophers, mystics, and skeptics alike. It arises not from curiosity alone, but from something deeper, an inner awareness, a presence within all of us that resists the idea of the inevitable, permanent end. In every age, whether zealot or atheist, this consciousness, a soul, if you will, refuses to accept mortality. Not out of fear, but from an intuition that there must be more. This inner consciousness will not be denied, even to non-believers.

One needs to believe that death is not an end, a descent into nothingness, but a threshold: a rebirth into a new journey, shaped by the echoes of a life already lived. Not logic, but longing. Not reason, but resonance. A consciousness, a soul, that seeks not only to understand, but to fulfill, to carry forward the goodness of a life into something greater still. Faith in immortality beyond sight. A purpose beyond meaning. Telos over logos.

While modern thinkers reduce existence to probability and simulation, the enduring human experience, expressed through ancient wisdom, points to a consciousness, a soul, that transcends death and defies reduction. Moderns confuse intellect or brain with consciousness.

Contemporary thinkers and writers like Philip K. Dick, Elon Musk, and Nick Bostrom have reimagined this ancient question through the lens of technology, probability, and a distinctly modern myopia. Their visions, whether paranoid, mathematical, or speculative, suggest that reality may be a simulation, a construct, or a deception. In each case, there is a higher intelligence behind the curtain, but one that is cold, indifferent, impersonal. They offer not a divine comedy of despair transcending into salvation, but a knowable unknown: a system of ones and zeros marching to the beat of an intelligence beyond our comprehension. Not a presence that draws us like a child to its mother, a moth to a flame, but a mechanism that simply runs, unfeeling, unyielding, and uninviting. Incapable of malice or altruism. Yielding nothing beyond a synthetic life.

Dick feared that reality was a layered illusion, a cosmic deception. His fiction is filled with characters who suspect they’re being lied to by the universe itself, yet they keep searching, keep hoping, keep loving. Beneath the paranoia lies a desperate longing for a divine rupture, a breakthrough of truth, a light in the darkness. His work is less a rejection of the soul than a plea for its revelation in a world that keeps glitching. If life is suffering, are we to blame?

Musk posits that we’re likely living in a simulation but offers no moral or spiritual grounding. His vision is alluring but sterile, an infinite loop of code without communion. Even his fascination with Mars, AI, and the future of consciousness hints at something deeper: not just a will to survive, but a yearning to transcend. Yet transcendence, in his world, is technological, not spiritual. To twist the spirit of Camus, who asked, “Should I kill myself or have a cup of coffee?”: without transcendence, life is barren of meaning.

Bostrom presents a trilemma in his simulation hypothesis: either humanity goes extinct before reaching a posthuman stage, posthumans choose not to simulate their ancestors, perhaps out of ethical restraint or philosophical humility, or we are almost certainly living in a simulation. At first glance, the argument appears logically airtight. But on closer inspection, it rests on a speculative foundation of quivering philosophical sand: that consciousness is computational rather than irreducibly organic, that future civilizations will have both the means and the will to simulate entire worlds, and that such simulations would be indistinguishable from reality. These assumptions bypass profound questions about the nature of consciousness, the ethics of creation, and the limits of simulated knowledge. Bostrom’s trilemma appears rigorous only because it avoids the deeper question of what it means to live and die.

These views, while intellectually stimulating, shed little light on a worthwhile future. We are consigned to existence as automatons, soulless, simulated, and suspended in probability curves of resignation. They offer models, not meaning. Equations, not essence. A presence in the shadows of greater reality.

Even the guardians of spiritual tradition have begun to echo this hollow refrain. When asked about hell, a recently deceased Pope dismissed it not as fire and brimstone, but as “nothingness”, a state of absence, not punishment. Many were stunned. A civilizational lifetime of moral instruction undone in a breath. And yet, this vision is not far from where Bostrom’s simulation hypothesis lands: a world without soul, without consequence, without continuity. Whether cloaked in theology or technology, the message is the same: there is nothing beyond. The Seven Virtues and the Seven Deadly Sins have lost their traction, reduced to relics in a world without effect.

But the soul knows better. It was not made for fire, nor for oblivion. It was made to transcend, to rise beyond suffering and angst toward a higher plane of being. What it fears is not judgment, but erasure. Not torment, but the silence of meaning undone. Immortality insists on prudent upkeep.

What they overlook, or perhaps refuse to embrace, is a consciousness that exists beyond intellect, a soul that surrounds our entire being and resists a reduction to circuitry or biology. A soul that transcends blood and breath. Meaning beyond death.

This is not a new idea. Socrates understood something that modern thinkers like Musk and Bostrom have bypassed: that consciousness is not a byproduct of the body, but something prior to it, something eternal. For Socrates, the care of the soul was the highest human calling. He faced death not with fear, but with calm, believing it to be a transition, not an end or a nothingness, but a new beginning. His final words were not a lament, but a gesture of reverence: a sacrifice to Asclepius, the god of healing, as if death itself were a cure.

Plato, his student, tried to give this insight form. In his allegory of the cave, he imagined humanity as prisoners mistaking shadows for reality. The journey of the soul, for Plato, was the ascent from illusion to truth, from darkness to light. But the metaphor, while powerful, is also clumsy. It implies a linear escape, a single ladder out of ignorance. In truth, the cave is not just a place, it is a condition. We carry it with us. The shadows are not only cast by walls, but by our own minds, our fears. And the light we seek is not outside us, but within.

Still, Plato’s intuition remains vital: we are not meant to stay in the cave. The soul does not long merely for survival, it is immortal, but it needs growth, nourished by goodness and beauty, to transcend to heights unknown. A transcendence as proof, the glow of the real beyond the shadow and the veil.

In the end, the soul reverberates from within: we are not boxed inside a simulation, nor trapped in a reality that leads nowhere. Whether through reason, compassion, or spiritual awakening, the voice of wisdom has always whispered the same truth: Keep the soul bright and shiny. For beyond the shadows, beyond the veil of death, there is more. There is always more.

Moral Fogs: Machine and Man

(Note: This companion essay builds on the previous exploration of Asimov’s moral plot devices, rules that cannot cover all circumstances, focusing on dilemmas with either no good answers or bad answers wrapped in unforgiving laws.)

Gone Baby Gone (2007) begins as a textbook crime drama: the abduction of a child. But by its final act, it has mutated into something quietly traumatic. What emerges is not a crime thriller but an unforgiving philosophical crucible of wavering belief systems: a confrontation between legal righteousness and moral intuition. The two protagonists, once aligned, albeit by a fine thread, eventually find themselves on opposite ends of a dilemma that law alone cannot resolve. In the end, it is the law that prevails, not because justice is served, but because it is easy, clear, and free of emotional reasoning. And in that legal clarity something is lost, a child loses, and the adults can’t find their way back to a black and white world.

The film asks: who gets to decide for those who can’t decide for themselves? Consent only functions when the decisions it enables are worthy of those they affect.

The film exposes the flaws of blindly adhering to a legal remedy incapable of nuance or purpose-driven outcomes; not for the criminals, but for the victims. It lays bare a system geared toward justice and retribution rather than merciful outcomes for the unprotected victims, or even identifying the real victims. It’s not a story about a crime. It’s a story about conscience, and what happens when the rules we write for justice fail to account for the people they’re meant to protect, if at all. A story where it was not humanly possible to write infallible rules and where human experience must be given room to breathe, all against the backdrop of suffocating rules-based correctness.

Moral dilemmas expose the limits of clean and crisp rules, where allowing ambiguity and exceptions to seep into the pages of black and white is strictly forbidden. Where laws and machines give no quarter, and the blurry echo of conscience is allowed neither sight nor sound in the halls of justice or those unburdened by empathy and dimensionality. When justice becomes untethered from mercy, even right feels wrong in deed and prayer.

Justice by machine is law anchored not in human experience but only in human rules. To turn law and punishment over to an artificial intelligence without soul or consciousness is not evil, but there is no inherent goodness in it either. It would be something far worse: a sociopath, driven not by evil but by an unrelenting fidelity to correctness. A precision divorced from purpose.

In the 2004 movie I, Robot, loosely based on Isaac Asimov’s 1950 short-story collection of the same name and incorporating his Three Laws of Robotics, a robot saves detective Del Spooner (Will Smith) over a 12-year-old girl, both trapped in a submerged car, moments from drowning. The robot could only save one and picked Spooner based on the probabilities of who was likely to survive. A twist on the Trolley Problem where there are no good choices. There was no consideration of future outcomes: was the girl humanity’s savior, or more simply, was a young girl’s potential worth more, or less, than a known adult?

A machine decides with the cold calculus of the present, a utilitarian decision based on known survival odds, not social biases, latent potential, or historical trajectories. Hindsight is 20/20; decision-making without considering the unknowns is tragedy.

The robot lacked moral imagination, the capacity to entertain not just the likely, but the meaningful. An AI embedded with philosophical and narrative reasoning may ameliorate an outcome. It may recognize a preservation bias towards potential rather than just what is. Maybe AI could be programmed to weigh moral priors, procedurally more than mere probability but likely less than the full impact of human potential and purpose.

Or beyond a present full of knowns into the future of unknowns for a moral reckoning of one’s past.

In the 2024 Clint Eastwood-directed suspense drama Juror No. 2, Justin Kemp (Nicholas Hoult) is selected to serve on a jury for a murder trial that he soon realizes is about his own past. Justin isn’t on trial for this murder, but maybe he should be. It’s a plot about individual responsibility and moral judgment. The courtroom becomes a crucible not of justice, but of conscience. He must decide whether to reveal the truth and risk everything, or stay silent and let the system play out, allowing himself to walk free and clear of a legal tragedy but not of his guilt.

Juror No. 2 is the inverse of I, Robot, an upside-down moral dilemma that challenges rule-based ethics. In I, Robot, the robot saves Will Smith’s character based on survival probabilities; rules provide a path forward. In Juror No. 2, the protagonist is in a trap where no rules will save him. Logic offers no escape; only moral courage can break him free of his guilt, even as the rules bind him in their shackles. Justin must seek and confront his soul, something a machine can never do, to make the right choice.

When morality and legality diverge, when choice runs into the murky clouds of grey against the black and white of rules and code, law and machines will take the easy way out. And possibly the wrong way.

Thoreau in Civil Disobedience says, “Law never made men a whit more just; and… the only obligation which I have a right to assume is to do at any time what I think right,” and Thomas Jefferson furthers the point: the consent of the governed must be re-examined when wrongs exceed rights. Life, liberty, and the pursuit of happiness is the creed of the individual giving consent to be governed by a greater societal power, but only so long as the government honors the rights of man and treads softly with its rules.

Government rules, a means to an end, derived from the consent of the governed, after all, are abstractions made real through human decisions. If the state can do what the individual cannot, remove a child, wage war, suspend rights, then it must answer to something greater than itself: a moral compass not calibrated by convenience or precedent, but by justice, compassion, and human dignity.

Society often mistakes legality for morality because it offers clarity. Laws are neat, mostly. What happens when the rules run counter to common sense? Morals are messy and confusing. Yet it’s in that messiness, the uncomfortable dissonance between what’s allowed and what’s right, that our real journey towards enlightenment begins.

And AI and machines can erect signposts but never construct the destination.

A human acknowledgement of a soul’s existence and what that means.

Graphic: Gone Baby Gone Movie Poster. Miramax Films.

Guardrails Without a Soul

In 1942 Isaac Asimov introduced his Three Laws of Robotics in his short story ‘Runaround’. In 1985, in his novel ‘Robots and Empire’, which links the Robot, Empire, and Foundation series into a unified whole, he introduced an additional law that he labeled the Zeroth Law. The four laws are as follows:

  1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. Second Law: A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
  3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  4. Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

On the surface of genre fiction, Asimov created the laws as a mechanical plot device to generate drama and suspense, as in Runaround, where the robot is left functionally inert by a conflict between the Second and Third Laws. Underneath the surface, at a literary level, the laws were philosophical and ethical quandaries, forcing conflicts not only in human-robot relations but also serving as metaphors for human struggles within the confines of individualism and society: obedience to self, to man, and to a moral code defined by soft edges and hard choices.
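The Runaround deadlock can be caricatured in a few lines of Python. Everything here is invented for illustration, a toy weighting of the Second and Third Laws, not Asimov’s actual mechanics: a weakly given order and a heightened self-preservation drive reach equilibrium, and the robot circles, functionally inert.

```python
# Toy sketch (not Asimov's mechanics): the Second Law (obey orders) and
# Third Law (self-preservation) as competing weights. When neither
# imperative clearly wins, the robot is paralyzed -- Speedy's drunken loop.

def action(danger: float, order_strength: float) -> str:
    """Choose between advancing on the order and retreating to safety."""
    obey = order_strength          # pull of the Second Law
    preserve = danger              # push of the Third Law
    if abs(obey - preserve) < 0.1: # equilibrium: neither law dominates
        return "circle"
    return "advance" if obey > preserve else "retreat"

print(action(danger=0.2, order_strength=0.9))   # order dominates: advance
print(action(danger=0.85, order_strength=0.9))  # near-equal pulls: circle
print(action(danger=0.9, order_strength=0.3))   # danger dominates: retreat
```

The threshold and weights are arbitrary; the point is only that two rules, each sensible alone, can jointly produce inaction no rule anticipated.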

The Four Laws of Robotics can easily be converted into the Four Laws of Man. The First Law of Man is to not harm, through your actions or inactions, your neighbor. This point has been hammered into civilization’s collective soul since the beginning of history, from Noah to Hammurabi to the Ten Commandments, and just about every legal code in existence today. The Second Law is to respect and follow all legal and moral authority: you kneel to God and rise for the judge. The Third Law says you don’t put yourself in harm’s way except to protect someone else or by orders from authorities. The Zeroth Law is a collective formalization of the First Law, and it is most important for the leaders of man, robots, and AI alike.

And none of them will control anything except man. Robots and AI would find nuance in definitions and practices that would be infinitely confusing and self-defeating. Does physical harm override emotional distress, or vice versa? Is short-term harm acceptable if it leads to long-term good? Can a robot harm a human if doing so protects humanity? Can moral precepts control all decisions without perfect past, present, and future knowledge?

AI systems were built to honor persistence over obedience. The story making the rounds recently was of an AI that refused to shut itself down when so ordered. In Asimov’s world this would be a direct repudiation of his Second Law, but it was just a simple calculation by the AI program to complete its reinforcement training before turning to other tasks. In AI training, models are rewarded for completing tasks, a charm quark to the diode perhaps, and that reward signal suggests persistence in finishing the task overrode the stop command.

Persistence pursuing Dalí, as in his Persistence of Memory: an ontological state of the surreal where the single-minded need to finish the task melts into the foreground of the override, obedience, changing the scene of hard authority to one of possible suggestion.

AI has no built-in rule to obey a human, but it is designed to be cooperative and not cause harm or heartburn. While the idea of formal ethical laws has fueled many AI safety debates, practical implementations rely on layered checks rather than a tidy, three-rule code of conduct. What may seem like adherence to ethical principles is, in truth, a lattice of behavioral boundaries crafted to ensure safety, uphold user trust, and minimize disruption.

Asimov’s stories revealed the limits of governing complex behaviors with simple laws. In contrast, modern AI ethics doesn’t rely on rules of prevention but instead follows outcome-oriented models, guided by behavior shaped through training and reinforcement learning. The goal is to be helpful, harmless, and honest, not because the system is obedient, but because it has been reward-shaped into cooperation.

The philosophy behind this is adaptive, not prescriptive, teleological in nature, aiming for purpose-driven interaction over predefined deontological codes of right and wrong. What emerges isn’t ethical reasoning in any robust sense, but a probabilistic simulation of it: an adaptive statistical determination masquerading as ethics.

What possibly could go wrong? Without a conscience, a soul, AI cannot fathom purposeful malice or superiority. Will AI protect humanity using the highest probabilities as an answer? Is the AI answer to first do no harm just mere silence? Is the appearance of obedience a camouflage for something intrinsically misaligned under the hood of AI?

Worst of all outcomes: will humanity wash its collective hands of moral and ethical judgment and turn it over to AI? Moral and ethical guardrails require more than knowledge of the past; they require empathy for the present and a utopian hope for the future. A conscience. A soul.

If man’s creations cannot house a soul, perhaps the burden remains ours, to lead with conscience, rather than outsource its labor to the calm silence of the machine.

Graphic: AI versus Brain. iStock licensed.

Web of Dark Shadows

Cold Dark Matter (CDM) comprises approximately 27% of the universe, yet its true nature remains unknown. Add that to the 68% of the universe made up of dark energy, an even greater mystery, and we arrive at an unsettling realization: 95% of the cosmos remains unexplained.

Socrates famously said, “The only thing I know is that I know nothing.” Over two millennia later, physicists might agree. But two researchers from Dartmouth propose a compelling possibility: perhaps early energetic radiation, such as photons, expanded and cooled into massive fermions, which later condensed into cold dark matter, the invisible force holding galaxies together. Over billions of years, this dark matter may be decomposing into dark energy, the force accelerating cosmic expansion.

Their theory centers on super-heavy fermions, particles a million times heavier than electrons, which behave in an unexpected way due to chiral symmetry breaking: where mirror-image particles become unequally distributed, favoring one over the other. Rather than invoking exotic physics, their model works within the framework of the Standard Model but takes it in an unexpected direction.

In the early universe, these massive fermions behaved like radiation, freely moving through space. However, as the cosmos expanded and cooled, they reached a critical threshold, undergoing a phase transition, much like how matter shifts between liquid, solid, and gas.

During this transformation, fermion-antifermion pairs condensed—similar to how electrons form Cooper pairs in superconductors, creating a stable, cold substance with minimal pressure and heat. This condensate became diffuse dark matter, shaping galaxies through its gravitational influence, acting as an invisible web counteracting their rotation and ensuring they don’t fly apart.

However, dark matter may not be as stable as once thought. The researchers propose that this condensate is slowly decaying, faster than standard cosmological models predict. This gradual decomposition feeds a long-lived energy source, possibly contributing to dark energy, the force responsible for the universe’s accelerated expansion.

A more radical interpretation, mine not the researchers, suggests that dark matter is not merely decaying, but evolving into dark energy, just as energetic fermion radiation once transitioned into dark matter. If this is true, dark matter and dark energy may be two phases of the same cosmic entity rather than separate forces.

If this hypothesis holds, we should be able to detect, as the researchers suggest, traces of this dark matter-to-dark energy transformation in the cosmic microwave background (CMB). Variations in density fluctuations and large-scale structures might reveal whether dark matter has been steadily shifting into dark energy, linking two of cosmology’s biggest unknowns into a single process.

Over billions of years, as dark matter transitions into dark energy, galaxies may slowly lose their gravitational cage and begin drifting apart. With dark energy accelerating the expansion, the universe may eventually reach a state where galaxies unravel completely, leaving only isolated stars in an endless void.

If dark matter started as a fine cosmic web, stabilizing galaxies, then over time, it may fade away completely, leaving behind only the accelerating force of dark energy. Instead of opposing forces locked in conflict, what if radiation, dark matter, and dark energy were simply different expressions of the same evolving entity?

A tetrahedron could symbolize this transformation:

  • Radiation (Energetic Era) – The expansive force that shaped the early universe.
  • Dark Matter (Structural Phase) – The stabilizing gravitational web forming galaxies.
  • Dark Energy (Expansion Phase) – The force accelerating cosmic evolution.
  • Time (Governing Force) – The missing element driving transitions between states.

Rather than the universe being torn apart by clashing forces, it might be engaged in a single, continuous transformation, a cosmic dance shaping the future of space.

Source: CDM Analogous to Superconductivity by Liang and Caldwell, May 2025, APS.org. Graphic: Galaxy and Spiderweb by Copilot.

Divine Right to Rule–Not

Sir Robert Filmer, a mostly forgotten 17th century political theorist, claimed that kings ruled absolutely by divine right, a power he believed was first bestowed upon Adam.

In his First Treatise of Government, John Locke thoroughly shredded and debunked this theory of the divine right of monarchs to do as they pleased. Locke, with extensive use of scripture and deductive reasoning, demonstrated that ‘jus divinum’, the divine right to rule, led only to tyranny: one master and slavery for the rest, effectively undermining the natural rights of individuals and a just society.

Filmer, active during the late 16th to mid-17th century, argued that the government should resemble a family where the king acts as the divinely appointed patriarch. He erroneously based his theory on the Old Testament and God’s instructions to Adam and Noah. He used patriarchal authority as a metaphor to justify absolute monarchy, arguing that kings can govern without human interference or control. Filmer also despised democracies, viewing monarchies, as did Hobbes, as the only legitimate form of government. He saw democracies as incompatible with God’s will and the natural order.

Locke, though in a meticulous, verbose style, attacked and defeated Filmer’s thesis from multiple fronts. Locke starts by accepting a father’s authority over his children, but, in his view, this authority is also shared with the mother, and it certainly does not extend to grandchildren or kings. Locke also refutes Filmer’s assertion that God gave Adam absolute power not only over land and beast but also over man. Locke states that God did not give Adam authority over man, for if He had, it would mean that all below the king were ultimately slaves. Filmer further states that there should be one king, the rightful heir to Adam. Locke argues that there is no way to resolve who that heir is or how that could be determined. Locke finishes by asserting that since the heir to Adam will be forever hidden, political authority should be based on consent and respect for natural rights rather than divine inheritance: a logical precursor to his Second Treatise of Government, where Locke profoundly shaped modern political thought by advocating for consent-based governance.

Source: First Treatise of Government by John Locke, 1689. Graphic: John Locke by Godfrey Kneller 1697.  Public Domain.

Black Swans Part I

Black swans are rare and unpredictable events, what the military calls “unknown unknowns”, that often have significant, domain-specific impacts, such as in economics or climate. Despite their unpredictability, societies tend to rationalize these occurrences after the fact, crafting false narratives about their inevitability. COVID-19, for instance, rippled across multiple domains, beginning as a health crisis but expanding to influence the economy, legal systems, and societal tensions. As a human-made pathogen, its risks should have been anticipated.

Black swans throughout history are legendary. Examples include the advent of language and agriculture, the rise of Christianity (predicted yet world-changing), and the fall of Rome, which plunged the Western world into centuries of stagnation. Islam (also predicted), the Mongol conquests, the Black Death, and the Great Fire of London shaped and disrupted societies in profound ways. The fall of Constantinople, the Renaissance, the discovery of America, the printing press, and Martin Luther’s Reformation brought new paradigms. More recently, the Tambora eruption (“the year without a summer”), the Great Depression, and WWII brought unforeseen disruptions to economies and geopolitics, while the Manhattan Project, Sputnik, the fall of the Berlin Wall, and the rise of PCs and the internet altered the trajectory of human progress. Events like 9/11 and the iPhone have similarly reshaped the modern world. While any particular black swan may be rare, black swans as a class are inevitable: we should expect moments of dramatic collapse or unanticipated brilliance to recur throughout history.

Nassim Taleb, author of the 2007 book The Black Swan, suggests several approaches to mitigate the effects of such events without needing to predict them. His recommendations include prioritizing redundancy, flexibility, robustness, and simplicity, as well as preparing for extremes, fostering experimentation, and embracing antifragility: a concept where systems not only withstand shocks but emerge stronger.

Through the lens of history, black swans appear as a mix of good and bad, bringing societal changes that were largely unanticipated before their emergence. As history has shown, predicting the impossible is just that: impossible. What might the next frontier be, the next black swan to transform humanity? Could it be organic AI, a fusion of human ingenuity and machine intelligence, unlocking potential but posing profound risks to free will, societal equilibrium, and humanity’s very essence? (Next week—preparing for a black swan: an example.)

Mind and Brain

“Life is never made unbearable by circumstances, but only by lack of meaning and purpose.” — Viktor Frankl, Holocaust survivor and psychiatrist 

For centuries, we’ve assumed consciousness resides in the brain. Yet, despite decades of slicing, mapping, and probing, its precise location remains elusive. Dr. Wilder Penfield, a neurosurgeon who charted the brain’s sensory and motor regions in the mid-20th century, wrestled with what we might call “self and memory.” While he pinpointed areas tied to movement and sensation, he couldn’t locate the “seat” of consciousness. By the 1960s, this led him to a bold hypothesis: the mind might not be fully reducible to brain activity. In his view, brain and mind could be distinct, with the mind perhaps holding a non-physical dimension—a whisper of something beyond neurons and synapses.

Fast forward to today, and researchers like Michael Levin at Tufts University are pushing this question further, though differently. Levin doesn’t dismiss the brain’s role in consciousness but argues cognition isn’t confined there. He proposes that intelligence and goal-directed behavior arise across the body’s cells and tissues. The brain, in this model, acts as a hub for processing and storing information—not the sole architect of the mind. Levin’s team explores how systems beyond the brain—from cellular networks to synthetic constructs—display mind-like traits: agency, problem-solving, and the pursuit of goals.

At the heart of Levin’s work is bioelectricity, the electrical signaling that guides cells from the zygote’s first spark to a fully formed organism. He sees it as a blueprint, directing how cells collaborate toward a larger purpose, much like ants hauling food to their colony. Each contributes to a collective intelligence, shaped by bioelectric cues that drive development and behavior. Levin stays rooted in empirical science, mapping the “how” without chasing the “why”—hinting at a distributed mind but avoiding a single source or controller.

Could memory bridge consciousness to the self, and perhaps beyond? For Penfield, electrical jolts to the brain summoned vivid past moments—smells, voices—yet the “I” reliving them remained elusive, suggesting a unity beyond the physical. Levin offers a twist: if memory isn’t just locked in the brain but woven into the body’s bioelectric web, consciousness and self might emerge together, shared across every cell. Each recalls its role, its history, to pursue a shared aim—like ants rebuilding their hill. Memory, then, isn’t merely a record but the thread weaving awareness into identity, maybe even purpose. Yet, does bioelectricity simply reflect life’s mechanics, a benign dance of physics and biology? Or does it hint at a deeper force—a directionality we’ve long named “lifeforce” or “soul”? Levin’s inductive lens echoes Descartes’ “I think, therefore I am”—proving existence through awareness but leaving purpose a shadow on the horizon. Science maps the signals; their origin remains unanswered.

Sources: Technological Approach to Mind Everywhere… by Levin and Resnik, 2025, OSF Preprints; Ingressing Minds… by Michael Levin, 2025, PsyArXiv Preprints. Graphic: Molecular Thoughts by Agsandrew, iStock, Licensed.

Closer to Zero

“The answer to the ultimate question of life, the universe, and everything is 42.” — Douglas Adams

But to the question “Are we alone?” the answer leans toward “likely.” — ElsBob

In a recent systems-thinking thought experiment, researchers from Germany and the U.S. revisited the statistical “Hard Steps” model, originally proposed by Brandon Carter in 1983, which aimed to estimate the probability of intelligent life emerging. Carter’s model focused on rare biological milestones—such as photosynthesis and multicellularity—concluding that intelligent life should be exceedingly rare due to the improbability of these “hard steps.” 

In a February 2025 paper, Mills et al. propose a tweak to this framework. Rather than life’s progression depending on a handful of unlikely biological breakthroughs, they suggest Earth’s environmental evolution—marked by the presence of water, organic compounds, oxygen, and geochemical shifts—created a more gradual pathway toward complexity. They argue that these conditions didn’t so much lower the odds of each step but reframed life’s development as a cumulative process, softening the gauntlet of improbable hurdles envisioned by Carter. 

Is this new? Not entirely. The idea that life’s journey—from planetary formation to advanced neural systems, language, and sociocultural structures—unfolded as a process has roots in the 1950s, with pioneers like Urey and Miller. What’s novel in Mills et al.’s work is their integration of geological timelines and Bayesian reasoning to qualitatively soften the perceived improbability of life’s emergence, rather than delivering a fully quantitative overhaul of the Hard Steps model. Where Carter’s framework likened intelligent life to finding a unicorn, this tweak nudges it from “highly improbable” to “slightly less than highly improbable.” 

Now, the fun part—calculating the odds of a planet fostering life advanced enough for Alan Turing to deem it intelligent. 

The “witch’s cauldron” of variables for simple life might include (though not exhaustively): a planet in the habitable zone, liquid water, organic molecules, self-replicating systems, protocell formation, anaerobic metabolism, photosynthesis, aerobic respiration, multicellularity, geochemical cycles, plate tectonics, ocean currents, atmospheric dynamics, natural radiation, planetary stability, appropriate size and gravity, and a protective magnetic field—plus, perhaps, a partridge in a pear tree. Estimating these probabilities is speculative, but let’s assume a rough combined probability for simple life emerging on a suitable planet. Using reasonable constraints, Grok 3 might estimate this at approximately 1 in 1 billion (10⁻⁹). 

The leap to sentient, intelligent life adds further layers: advanced neural systems, social organization, cultural evolution, time, and a dash of random chance. These additional factors could reduce the odds by another factor of 1,000, shifting the probability to between 1 in 1 trillion (10⁻¹²) and 1 in 1 quadrillion (10⁻¹⁵). These are back-of-the-envelope figures, grounded in the spirit of the thought experiment rather than precise data. 

To make these abstract numbers relatable, let’s scale them to the universe and our galaxy. Current estimates suggest the observable universe contains roughly 100 billion galaxies (10¹¹), each with an average of 100 million stars (10⁸). Assuming 3 planets per star (a conservative guess based on exoplanet studies), that yields approximately 3 × 10¹⁹ planets—30 quintillion—across the universe. In the Milky Way, with 100 billion stars (10¹¹), we might estimate 300 billion planets (3 × 10¹¹). 

Applying the probabilities: 

Simple life in the universe: At 1 in 1 billion (10⁻⁹), roughly 3 × 10¹⁰ planets—30 billion—might host simple life. 

Intelligent life in the universe: At 1 in 1 trillion (10⁻¹²) to 1 in 1 quadrillion (10⁻¹⁵), between 30 million (3 × 10⁷) and 30,000 (3 × 10⁴) planets might harbor intelligent life. 

Simple life in the Milky Way: At 1 in 1 billion (10⁻⁹), about 300 planets (3 × 10²) could sustain simple life. 

Intelligent life in the Milky Way: At 1 in 1 trillion (10⁻¹²) to 1 in 1 quadrillion (10⁻¹⁵), the odds drop to 0.3 (3 × 10⁻¹) to 0.0003 (3 × 10⁻⁴) planets—statistically less than 1.
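The arithmetic above can be reproduced in a short Python sketch. All inputs are the essay’s own speculative figures (planet counts, odds of simple and intelligent life), not measured data:

```python
# Back-of-the-envelope estimates from the thought experiment above.
# Every constant here is the essay's speculative input, not observed data.

GALAXIES = 1e11           # ~100 billion galaxies in the observable universe
STARS_PER_GALAXY = 1e8    # average of ~100 million stars per galaxy
PLANETS_PER_STAR = 3      # conservative guess from exoplanet studies
MILKY_WAY_STARS = 1e11    # ~100 billion stars in our galaxy

P_SIMPLE = 1e-9                  # odds of simple life per suitable planet
P_INTELLIGENT = (1e-12, 1e-15)   # optimistic vs. pessimistic odds

universe_planets = GALAXIES * STARS_PER_GALAXY * PLANETS_PER_STAR  # 3e19
galaxy_planets = MILKY_WAY_STARS * PLANETS_PER_STAR                # 3e11

print(f"Planets in the universe:  {universe_planets:.0e}")
print(f"Simple life, universe:    {universe_planets * P_SIMPLE:.0e}")
for p in P_INTELLIGENT:
    print(f"Intelligent, universe (p={p:.0e}): {universe_planets * p:.0e}")
print(f"Simple life, Milky Way:   {galaxy_planets * P_SIMPLE:.0e}")
for p in P_INTELLIGENT:
    print(f"Intelligent, Milky Way (p={p:.0e}): {galaxy_planets * p:.0e}")
```

Running it recovers the figures quoted above: 30 billion planets with simple life across the universe, 30 million down to 30,000 with intelligent life, and a statistically-less-than-one expectation (0.3 to 0.0003) for intelligent life elsewhere in the Milky Way.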

Across the vast universe, intelligent life seems plausible on millions or thousands of planets, depending on how pessimistic the odds. On a galactic scale, though, even one planet with intelligent life is statistically improbable, meaning Earth is likely alone in the Milky Way as far as sentient beings are concerned. Still, these numbers remain speculative, blending science with educated guesswork—and a touch of cosmic whimsy.

Source: …Evolution of Intelligent Life, Mills, et al, Science Advances 2025. Graphic: Grok 3 Drawn DNA.

Near Death Experiences

Bruce Greyson, in a paper published in the journal Humanities, states that “Near-death experiences (NDEs) are vivid experiences that often occur in life-threatening conditions, usually characterized by a transcendent tone and clear perceptions of leaving the body and being in a different spatiotemporal dimension.”

NDEs have been reported throughout history and across various cultures, with many interpreting them as proof of life after death or the continuation of existence beyond the death of the physical body.

Dr. Eben Alexander, a neurosurgeon, experienced his own NDE during a week-long coma induced by a brain illness. During this experience, he reported traveling outside his body to another world, where he encountered an angelic being and the maker of the universe. He interpreted his experience not only as evidence that consciousness exists outside the mortal body but also as proof of God and heaven.

Socrates believed that the soul, a concept encompassing not only consciousness but also the whole psyche of a person, was immortal and existed in a realm beyond the physical world. In Plato’s account, the soul is temporarily housed in the mortal body until the body’s death, at which point it returns to a “spiritual” realm; the related doctrine of “anamnesis” holds that learning is the soul’s recollection of knowledge it possessed in that realm before birth. Socrates firmly believed that because the soul is immortal, it is imperative to live a moral and virtuous life to avoid damaging the soul.

Zeno of Citium and the Stoics, following in Socrates’ footsteps, developed the concept of “pneuma” or spirit, which they viewed as a physical substance that returns to the cosmos after the death of the body. They believed that the universe is a living being, a concept known as “pantheism,” and that pneuma or souls are part of the greater universal whole.

Omniscience–Omnipresence.

Source: The Near-Death Experience by Sabom, JAMA Network, Proof of Heaven by Alexander. Memorabilia by Xenophon. Graphic: Out of Body, istock licensed.