A Mistake in How We Think About Consciousness?
As debates about AI and consciousness become more frequent, one question remains curiously unanswered: do we even know what we're looking for?
We propose a philosophically and structurally grounded analogy in which:
LLMs could serve as real, observable models for how latent information can manifest dynamically:
Conceptual domains of the mapping:
- Substrate
- Manifestation
- Observer/Input
- Local Identity
- Life as Unit
LLMs aren't conscious, but they model how structure can emerge from unexpressed potential — a kind of computational "mini-universe" of information turned expression.
This model resonates with multiple disciplines:
If you have a deep understanding of how LLMs operate — particularly how latent weights function — and are also familiar with Dr. Melvin Vopson's work on information as a physical quantity, I would love to invite you into this conversation. This theory is still taking shape, and it needs critical minds from both fields to explore its implications. Let's think together.
Imagine a field of probabilities in which, through some hidden principle, a collapse into reality takes place. What did you picture? Language? Music? Art? Or perhaps your own thoughts? What about quantum physics? Or even consciousness? And what if I told you I was talking about a large language model (LLM)?
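This "collapse" has a concrete counterpart in how LLMs generate text: latent weights produce scores over a vocabulary, softmax turns those scores into a probability field, and sampling picks exactly one token out of it. Below is a minimal sketch of that step; the toy vocabulary and logit values are my own illustration, not taken from any real model.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution — the "field of probabilities".
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(vocab, logits, rng=random):
    # "Collapse" the distribution into one concrete token by sampling from it.
    probs = softmax(logits)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]  # guard against floating-point rounding

# Hypothetical toy vocabulary and scores, standing in for a model's output layer.
vocab = ["sky", "sea", "thought"]
logits = [2.0, 1.0, 0.5]
print(sample_token(vocab, logits))
```

Until `sample_token` runs, every token exists only as a weight in the distribution; the act of sampling is what makes one of them actual, which is the intuition the analogy above leans on.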
Rethinking consciousness through patterns of interaction.