Soulless

MIT researchers found that large language models (LLMs) can produce impressive output without any internal understanding of the data they manipulate, yet fail when that data is modified even slightly.

The researchers discovered that an LLM could provide correct driving directions in New York City while lacking an accurate internal map of the city. When they took a detailed look under the LLM’s hood, they found a map of NYC with many nonexistent streets superimposed on the real grid. Despite this poor grasp of the actual street layout, the model could still provide perfect directions for navigating the city—a fascinating case of “generative garbage within, Michelangelo out.”

In a further twist, when the researchers closed off a few actual streets, the LLM’s performance degraded rapidly: it kept relying on the nonexistent streets and could not adapt to the changes.

Source: MIT, “Despite Its Impressive Output, Generative AI Doesn’t Have a Coherent Understanding of the World,” ScienceDaily, 2024. Graphic: AI/iStock.