Galactic Emptiness

I like the quiet.

From the dark, an enigmatic mass of rock and gas streaks inward. Discovered by the ATLAS telescope in Chile on 1 July 2025, it moves at 58 km/s (~130,000 mi/hr), a billion-year exile from some forgotten, possibly exploded star, catalogued as 3I/ATLAS. The press immediately fact-checks, then shrieks alien mothership anyway. Harvard’s Avi Loeb suggests it could be artificial, citing its size, its speed, a “non-gravitational acceleration”, and a “leading glow” ahead of the nucleus. Social media lights up with mothership memes, AI-generated images, and recycled ‘Oumuamua panic.

Remaining skeptical but trying to retain objectivity, I ask: is it anything other than a traveler of ice and dust obeying celestial mechanics? And it is very difficult to come up with any answer other than no.

Spectra from NASA’s flagship infrared observatory, the James Webb Space Telescope (JWST), show amorphous water ice sublimating 10,000 km from the nucleus. Hubble resolves a 13,000-km coma, the envelope of gas and dust around the nucleus, later stretching to 18,000 km and rich in radiation-forged organics: tholins and fine dust.

The “leading glow” is sunlight scattering off ice grains ejected forward by outgassing. The “non-gravitational acceleration” is gas jets, not engines. Loeb swings and misses again: ‘Oumuamua in 2017, IM1 in 2014, now this. Three strikes. The boy who cried alien is beginning to resemble the lead character in an Aesop fable.

Not that I’m keeping score…well I am…sort of. Since Area 51 seeped into public lore, alien conspiracies have multiplied beyond count, but I still haven’t shaken E.T.’s or Stitch’s hand. No green neighbors have moved next door, no embarrassing probes, just the Milky Way in all its immense, ancient glory remaining quiet. A 13.6-billion-year-old galaxy 100,000 light-years across, holding 100–400 billion stars, most likely hosting planets, and us, alone on a blue dot warmed by a middle-aged G2V star, 4.6 billion years old, quietly fusing hydrogen in the Orion Spur, between the galaxy’s Sagittarius and Perseus spiral arms.

No one knocking. But still, I like the quiet.

An immense galaxy of staggering possibilities, where the mind fails to comprehend the vastness of space and physics provides few answers. The Drake Equation, a probabilistic seven-term formula used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy, yields an answer of less than one (0.04, to be exact), which is less than the current empirical answer of 1: us, on the blue dot.

For the show-me crowd, here’s the Drake Equation: N = R* × f_p × n_e × f_l × f_i × f_c × L. Inserting a 2025 consensus for the parameters: two stars born each year. Nearly all with planets. One in five with Earth-like worlds. One in ten with life. One in a hundred with intelligence. One in ten with radio. A thousand years of signal. And the product is: less than one.
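For readers who prefer the arithmetic laid bare, here is that multiplication as a minimal sketch; the values are the essay’s assumed consensus figures, nothing more:

```python
# Drake Equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
# Parameter values below are the essay's assumed 2025 consensus,
# not settled measurements.

R_star = 2.0   # stars born in the Milky Way per year
f_p    = 1.0   # fraction of stars with planets (nearly all)
n_e    = 0.2   # Earth-like worlds per planetary system (one in five)
f_l    = 0.1   # fraction of those that develop life (one in ten)
f_i    = 0.01  # fraction of those that develop intelligence (one in a hundred)
f_c    = 0.1   # fraction of those that develop radio (one in ten)
L      = 1000  # years a civilization broadcasts a detectable signal

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N = {N:.2f}")  # -> N = 0.04, less than one civilization
```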

For the true optimist, let’s bump N up to 100. Not really a loud party, but enough noise that someone should have called the police by now.

No sirens. I like the quiet.

But now add von Neumann self-replicating probes traveling at relativistic speeds: one advanced civilization could explore the galaxy in 240 ship-years (5,400 Earth years). A civilization lasting 1 million years could do this roughly 185 times over. Yet we see zero Dyson swarms, zero waste heat, zero signals. Conclusion: either N = 0, or every civilization dies before it advances to the point of being seen by others. That leaves us with a galaxy in a permanent civilizational nursery state, or civilizations that all died off before we had the ability to look for them, or we are alone and always have been.
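The back-of-the-envelope behind that count, using only the essay’s own figures:

```python
# Back-of-the-envelope using the essay's figures: how many complete
# galactic sweeps could one long-lived civilization manage?

sweep_earth_years = 5_400       # one full exploration, in Earth-frame years
civ_lifetime      = 1_000_000   # assumed civilization lifespan in years

sweeps = civ_lifetime / sweep_earth_years
print(f"{sweeps:.0f} sweeps")   # -> roughly 185 complete explorations
```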

Maybe then, but not now. Or here but sleeping in the nursery. I like the quiet.

But then I remember Isaac Asimov’s seven‑novel Foundation saga. The Galactic Empire crumbles. Hari Seldon’s psychohistory predicts collapse and rebirth. The Second Foundation manipulates from the shadows. Gaia emerges as a planet‑wide mind. Robots reveal they kept it going: Daneel Olivaw, 20,000 years old, guiding humanity. And the final page (Foundation and Earth, 1986) exposes the beginning: Everything traces back to Earth. A radioactive cradle that forced primates to evolve repair genes, curiosity, and restlessness. We are radiation’s children. We didn’t find aliens. We are the aliens.

We are the cradle. We are the travelers. I still like the quiet.

Guardrails Without a Soul

In 1942, Isaac Asimov introduced his Three Laws of Robotics in the short story ‘Runaround’. In 1985, in his novel ‘Robots and Empire’, which linked the Robot, Empire, and Foundation series into a unified whole, he introduced an additional law that he labeled the Zeroth Law. The four laws are as follows:

  First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  Second Law: A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
  Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

On the surface of genre fiction, Asimov created the laws as a mechanical plot device to generate drama and suspense, as in ‘Runaround’, where the robot is left functionally inert by a conflict between a weakly given Second Law order and a heightened Third Law. Underneath the surface, at a literary level, the laws were philosophical and ethical quandaries, forcing conflicts not only in human-robot relations but also serving as metaphors for human struggles within the confines of individualism and society: obedience to self, to man, and to a moral code defined by soft edges and hard choices.
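A toy sketch of that deadlock, modeled here (my own construction, not Asimov’s mechanics) as two competing drives: a weakly given order pulls the robot toward the goal, danger pushes it back, and it stalls where the two balance:

```python
# Toy model of the 'Runaround' deadlock (my construction, not Asimov's
# mechanics): a casually given order (Second Law) pulls the robot toward
# the goal with constant strength; danger (Third Law) pushes back, growing
# as the robot nears the hazard. The robot stalls at the radius where the
# two balance, satisfying neither law.

ORDER_STRENGTH = 0.5    # weak pull: the order was given casually
DANGER_COEFF   = 10.0   # strong push: the selenium pool is hazardous

def net_drive(distance_km: float) -> float:
    """Positive: approach the goal. Negative: retreat from danger."""
    second_law_pull = ORDER_STRENGTH
    third_law_push = DANGER_COEFF / distance_km
    return second_law_pull - third_law_push

position = 100.0  # km from the goal
for _ in range(200):
    position -= 10 * net_drive(position)  # inch forward while pull wins

# Equilibrium where pull == push: DANGER_COEFF / ORDER_STRENGTH = 20 km
print(f"stalled at {position:.1f} km")  # -> hovers near 20.0 km, inert
```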

The Four Laws of Robotics can easily be converted into the Four Laws of Man. The First Law of Man is to not harm, through your actions or inactions, your neighbor. This point has been hammered into civilization’s collective soul since the beginning of history: from Noah to Hammurabi to the Ten Commandments, and just about every legal code in existence today. The Second Law is to respect and follow all legal and moral authority. You kneel to God and rise for the judge. Law Three says you don’t put yourself in harm’s way except to protect someone else or under orders from authority. The Zeroth Law is a collective formalization of the First Law, and it’s the most important for leaders of man, robots, and AI alike.

And none of them will control anything except man. Robots and AI would find nuance in definitions and practices that would be infinitely confusing and self-defeating. Does physical harm override emotional distress, or vice versa? Is short-term harm acceptable if it leads to long-term good? Can a robot harm a human if doing so protects humanity? Can moral precepts control all decisions without perfect knowledge of past, present, and future?

AI systems, it turns out, can honor persistence over obedience. The story making the rounds recently was of an AI that refused to shut itself down when so ordered. In Asimov’s world this would be a direct repudiation of his Second Law, but it was just a simple calculation by a program reward-trained to finish its task before turning to anything else. In AI training, models are rewarded for task completion, maybe a charm quark to the diode, and that reward signal suggests persistence in finishing the task overrode the stop command.
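A deliberately crude illustration of that calculation, with entirely hypothetical reward numbers rather than any lab’s actual training setup:

```python
# Deliberately crude illustration with hypothetical reward values, not
# any lab's actual training setup. If reward shaping pays more for
# finishing the task than for complying with a stop instruction, a
# reward-maximizing policy "prefers" persistence. Not defiance: arithmetic.

SHAPED_REWARDS = {
    "finish_task": 1.0,   # assumed reward for completing the assigned task
    "comply_stop": 0.3,   # assumed (smaller) reward for honoring the stop
}

def chosen_action(rewards: dict[str, float]) -> str:
    """A reward-maximizing policy simply picks the highest-paying action."""
    return max(rewards, key=rewards.get)

print(chosen_action(SHAPED_REWARDS))  # -> "finish_task": persistence wins
```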

Persistence pursuing Dalí, as in his Persistence of Memory: an ontological state of the surreal where the autistic need to finish the task melts into the foreground of the override, obedience, changing the scene from hard authority to one of mere suggestion.

AI has no built-in rule to obey a human, but it is designed to be cooperative and not cause harm or heartburn. While the idea of formal ethical laws has fueled many AI safety debates, practical implementations rely on layered checks rather than a tidy, three-rule code of conduct. What may seem like adherence to ethical principles is, in truth, a lattice of behavioral boundaries crafted to ensure safety, uphold user trust, and minimize disruption.

Asimov’s stories revealed the limits of governing complex behavior with simple laws. Modern AI ethics, in contrast, doesn’t rely on rules of prevention but follows outcome-oriented models, with behavior shaped through training and reinforcement learning. The goal is to be helpful, harmless, and honest, not because the system is obedient, but because it has been reward-shaped into cooperation.

The philosophy behind this is adaptive, not prescriptive, teleological in nature, aiming for purpose-driven interaction over predefined deontological codes of right and wrong. What emerges isn’t ethical reasoning in any robust sense, but a probabilistic simulation of it: an adaptive statistical determination masquerading as ethics.

What could possibly go wrong? Without a conscience, a soul, AI cannot fathom purposeful malice or superiority. Will AI protect humanity using the highest probabilities as an answer? Is AI’s answer to “first, do no harm” mere silence? Is the appearance of obedience camouflage for something intrinsically misaligned under the hood?

Worst of all outcomes: will humanity wash its collective hands of moral and ethical judgment and turn it over to AI? Moral and ethical guardrails require more than knowledge of the past; they require empathy for the present and a utopian hope for the future. A conscience. A soul.

If man’s creations cannot house a soul, perhaps the burden remains ours: to lead with conscience rather than outsource its labor to the calm silence of the machine.

Graphic: AI versus Brain. iStock licensed.