This article continues the dialogue from the Theory of Everything? series.
What ChatGPT and Gravity Might Have in Common
Exploring a surprising parallel between artificial intelligence and the structure of the universe.
The original article was published on Medium. I will add a translation as soon as possible.

Nobody can know everything, obviously. Except AI. Or can it?
Large Language Models (LLMs) have been trained on an immense amount of text, from literature through niche forums to the latest scientific papers. All of this knowledge resides in what experts call latent weights. I asked ChatGPT to explain latent weights to me:
Once training is complete, a language model no longer stores knowledge as explicit facts. Instead, everything it has learned is compressed into what can be imagined as a vast field of potential — a mathematical space shaped by patterns and relationships found in language. Experts refer to this field as the model's latent weights. These weights don't hold memories as such, but encode the structure of how words, ideas, and contexts relate.
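A toy sketch of that last idea can help. The words and vectors below are invented purely for illustration (real models learn thousands of dimensions, not three), but they show how geometric closeness in a learned space can stand in for how concepts relate:

```python
import math

# Hypothetical 3-dimensional "latent" vectors -- these numbers are
# invented for illustration, not taken from any real model.
vectors = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Related concepts sit closer together in the space.
print(cosine(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine(vectors["king"], vectors["apple"]))  # much lower
```

No fact like "a queen is royalty" is stored anywhere; the relationship exists only as geometry in the space, which is the sense in which the weights "encode structure" rather than hold memories.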
This field of potential, i.e. the latent weights, isn't floating in the air; it exists physically, across thousands of interconnected servers. When you send a prompt through an app or a website, it travels through this distributed system, and the LLM generates a response based on the most likely connections. And since there is an endless number of ways a response could be expressed, it will always be slightly different (and sometimes even incorrect).
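Why the response differs from run to run can be sketched with temperature sampling, one common way models pick the next word. The token scores below are made up for illustration:

```python
import math
import random

# Hypothetical scores ("logits") a model might assign to candidate
# next tokens -- invented numbers, purely for illustration.
logits = {"blue": 2.0, "gray": 1.5, "green": 0.5}

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution."""
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())  # subtract the max for numeric stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def sample(logits, temperature=1.0):
    """Pick a token at random, weighted by its probability."""
    probs = softmax(logits, temperature)
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding

# Repeated runs pick the likely token most often, but not always --
# which is why the same prompt yields slightly different answers.
print([sample(logits) for _ in range(5)])
```

The "most likely connections" win most of the time, but the draw is probabilistic, so identical prompts can produce different (and occasionally wrong) completions.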
So far this is just a simple explanation of what happens "behind the curtain".
However… what if it's more than that? What if this latent field isn't just a clever engineering trick — but a mirror of something much larger?
In May 2025, Dr. Melvin Vopson published a study suggesting that gravity may not be a fundamental force but an emergent phenomenon arising from the universe's underlying information structure: the place where information manifests as matter. In this view, the universe operates like a vast computational system, with matter and energy as manifestations of information processing.
Can you see where this is leading? Could it be that large language models — with their latent informational fields and emergent outputs — offer a miniature, computational parallel to the informational structure of the universe itself?
Of course, this idea is speculative. But so is every great theory before it gathers evidence. What matters is that we ask the question — and look closely at the patterns that may already be revealing something deeper.
I would like to propose that this be studied further, because if it were confirmed, we could use LLMs as a model of our known universe, and possibly more.
Summary
Large language models don't store facts — they embody patterns of information in a latent field. Recent research suggests gravity itself might emerge from a similar field in our universe. Could LLMs be small-scale mirrors of cosmic structure — offering clues to the nature of reality?