On the Necessity of Physically Persistent Memory for Artificial General Intelligence

Recent advances in large language models (LLMs) have reignited debates about the nature of intelligence and the possibility of artificial general intelligence (AGI). Despite their impressive behavioral capabilities, such systems remain fundamentally limited by their architectural and ontological foundations. This paper argues that genuine intelligence requires a form of physically persistent, non-destructive, and adaptive internal memory that current artificial systems—classical or quantum—do not possess. We analyze this limitation through the lenses of philosophy of mind, theoretical computer science, and physics, and argue that without a revision of how information is physically instantiated, artificial systems can at best simulate, but not realize, intelligence.

Introduction

Artificial intelligence research has historically oscillated between two conceptions of intelligence: one behavioral and functional, the other structural and ontological. Contemporary AI systems, particularly large language models (LLMs), satisfy the former to a remarkable degree. They demonstrate linguistic fluency, contextual adaptation, and statistical generalization across a wide range of tasks (Brown et al., 2020).

However, the question remains whether such systems instantiate intelligence per se, or merely approximate its outward manifestations. Following long-standing critiques in philosophy of mind (Searle, 1980; Putnam, 1988), this paper adopts the position that behavioral adequacy alone is insufficient. Instead, we argue that intelligence is inseparable from the physical realization of memory and learning within a system that exists continuously in time.

A Structural Definition of Intelligence

We propose a minimal, non-anthropocentric definition of intelligence:

Intelligence is the capacity of a physical system to maintain, update, and exploit an internal state over time so as to minimize mismatch between its internal model and the dynamics of its environment.

This definition aligns with thermodynamic and cybernetic approaches to cognition (Ashby, 1956; Friston, 2010) and entails four necessary conditions:

  1. Persistent internal state

  2. Learning as an irreversible transformation of that state

  3. Non-destructive access to internal representations

  4. Historical continuity, such that past states constrain future behavior

These conditions are not algorithmic conveniences but ontological requirements.
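The four conditions can be made concrete in miniature. The following sketch is purely illustrative (a toy estimator, not a cognitive model, with hypothetical names): a system whose persistent state is irreversibly reshaped by each observation, whose state can be read without being destroyed, and whose history constrains what it does next.

```python
class PersistentAgent:
    """Toy system exhibiting the four conditions in miniature."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.estimate = 0.0   # 1. persistent internal state
        self.history = []     # 4. past states constrain future behavior

    def observe(self, signal):
        """Reduce mismatch between internal model and environment."""
        error = signal - self.estimate
        # 2. learning as an irreversible transformation of that state:
        # the previous estimate is overwritten, not archived and restored
        self.estimate += self.learning_rate * error
        self.history.append(self.estimate)
        return error

    def predict(self):
        # 3. non-destructive access: reading leaves the state unchanged
        return self.estimate
```

Repeated exposure to a stable signal drives the mismatch toward zero, and two successive reads return the same value; the latter property is exactly what, as argued below, measurement-based quantum memory cannot provide.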

Limitations of Contemporary Language Models

Large language models possess a form of parametric memory encoded in static weights, learned through large-scale optimization. However, during inference:

  • weights remain fixed,

  • no learning occurs,

  • the system leaves no trace of interaction in its internal structure,

  • each inference constitutes an ontologically independent event.

From a philosophical standpoint, an LLM is therefore best understood not as a temporally extended system, but as a function repeatedly evaluated (Bach, 2021). Its apparent continuity is an artifact of external orchestration, not an intrinsic property.

Consequently, LLMs lack what might be called existential persistence: they do not exist as systems in time, but as stateless mappings from input distributions to output distributions.
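The stateless-mapping characterization can be sketched directly. In the toy fragment below (hypothetical names; `frozen_model` is a stand-in for a forward pass, not a real inference API), the model itself retains nothing between calls, and the apparent conversational memory lives entirely in an external orchestration layer that re-feeds the transcript as input.

```python
def frozen_model(prompt, weights):
    """Pure function: weights are read-only, no trace survives the call."""
    # stand-in for a deterministic forward pass through fixed parameters
    return hash((prompt, weights)) % 100

WEIGHTS = 42  # fixed once training ends

# Two evaluations of the same input are ontologically independent
# events that happen to coincide:
a = frozen_model("hello", WEIGHTS)
b = frozen_model("hello", WEIGHTS)

# The "continuity" of a chat session is external orchestration:
transcript = []

def chat(user_msg):
    transcript.append(user_msg)  # state held outside the model
    return frozen_model(tuple(transcript), WEIGHTS)
```

Deleting `transcript` erases the entire interaction history; nothing inside `frozen_model` ever registered that the exchange occurred.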

Quantum Computation as a Misleading Alternative

Quantum computation is sometimes proposed as a pathway beyond classical AI, owing to its exponentially large state spaces and probabilistic dynamics. Yet, from the perspective of intelligence as defined above, quantum systems face even more severe obstacles.

Fundamental principles of quantum mechanics—most notably wavefunction collapse upon measurement and the no-cloning theorem (Wootters & Zurek, 1982)—preclude:

  • stable, addressable memory,

  • iterative learning without state destruction,

  • persistence of internal representations across observations.

Thus, despite their computational novelty, quantum processors are ontologically ill-suited to serve as substrates for intelligence. They excel as transient computational devices, not as learning systems with enduring internal structure.
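The obstacle is not an engineering limitation but a theorem. The standard linearity argument behind no-cloning (Wootters & Zurek, 1982) can be sketched in a few lines:

```latex
% Suppose a unitary U copied arbitrary states onto a blank register:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
%   U(|\varphi\rangle \otimes |0\rangle) = |\varphi\rangle \otimes |\varphi\rangle
% Unitarity preserves inner products, so
%   \langle\psi|\varphi\rangle
%     = (\langle\psi| \otimes \langle 0|)\, U^{\dagger} U \,(|\varphi\rangle \otimes |0\rangle)
%     = \langle\psi|\varphi\rangle^{2},
% which forces \langle\psi|\varphi\rangle \in \{0, 1\}.
```

Only identical or orthogonal states can be copied; a universal, non-destructive readout of arbitrary internal quantum representations is therefore impossible in principle, not merely in practice.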

Memory as a Physical, Not Merely Computational, Primitive

The core thesis of this paper is therefore the following:

Genuine intelligence requires memory that is physically persistent, non-destructively accessible, and adaptively modifiable over time.

This position resonates with embodied and biological accounts of cognition, where memory is distributed, metabolically costly, and thermodynamically irreversible (Landauer, 1961; Laughlin, 2001). In biological systems, learning leaves physical traces—synaptic, structural, and chemical—that constrain future behavior.
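The thermodynamic point is quantitative. Landauer's principle sets a lower bound of \(k_B T \ln 2\) on the energy dissipated per bit erased; the following minimal computation (standard constants, illustrative temperature choice of 310 K, roughly body temperature) shows the scale involved.

```python
import math

# Landauer (1961): erasing one bit dissipates at least k_B * T * ln 2.
k_B = 1.380649e-23              # Boltzmann constant, J/K (exact, SI 2019)
T = 310.0                       # temperature in kelvin (~body temperature)

E_min = k_B * T * math.log(2)   # minimum dissipation per erased bit
print(f"Landauer bound at {T} K: {E_min:.3e} J per bit")
```

The bound is tiny in absolute terms, on the order of 10^-21 J per bit, but it is strictly positive: physically instantiated learning is never thermodynamically free, which is precisely the sense in which memory is a physical rather than purely computational primitive.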

By contrast, artificial systems treat memory as abstract data, separable from the physical process that manipulates it. This separation, we argue, is precisely what prevents current AI from achieving genuine intelligence.

Implications of Removing the Memory Barrier

Hypothetically, if a physical system were discovered or engineered that allowed for:

  • persistent internal states,

  • non-destructive readout,

  • adaptive modification through interaction,

then the implications would extend far beyond artificial intelligence.

Such a development would challenge:

  • the foundations of quantum information theory,

  • prevailing notions of computational complexity,

  • cryptographic assumptions,

  • and potentially the interpretation of physical law itself.

The emergence of intelligence in such a system would be a consequence, not the primary achievement, of this deeper physical breakthrough.

Conclusion

This paper does not propose an engineering roadmap toward artificial general intelligence. Instead, it argues that current approaches are constrained by an unexamined assumption: that intelligence can be reduced to algorithmic manipulation of abstract representations.

We contend that intelligence is inseparable from the physical instantiation of memory and learning. Without systems that exist in time, retain their internal history, and are irreversibly shaped by experience, artificial intelligence will remain a simulation—impressive, useful, but ontologically shallow.

References (indicative)

  • Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall.

  • Bach, J. (2021). Principles of Synthetic Intelligence. Oxford University Press.

  • Brown, T. et al. (2020). Language Models are Few-Shot Learners. NeurIPS.

  • Friston, K. (2010). The Free-Energy Principle. Nature Reviews Neuroscience.

  • Landauer, R. (1961). Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development.

  • Laughlin, R. (2001). A Different Universe. Basic Books.

  • Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences.

  • Wootters, W. & Zurek, W. (1982). A Single Quantum Cannot Be Cloned. Nature.
