A FRESH Model of Consciousness
(Functionalist & Representationalist ‘Emergent Self’ Hypothesis)


Introduction

The Need for a Unified and Mechanistic Model of Consciousness

The nature of consciousness - what it is, how it arises, and why it feels like something - remains one of the most complex and enduring problems in philosophy and science. The Hard Problem of Consciousness, as formulated by David Chalmers, asks why subjective experience exists at all - why cognitive functions are not simply computational processes devoid of phenomenology.

Many existing models attempt to explain consciousness functionally (e.g. Society of Mind, Global Workspace Theory, Predictive Processing) or quantitatively (e.g. Integrated Information Theory, IIT). However, they often fail to bridge the explanatory gap. They describe mechanisms of cognition but do not explain why these mechanisms should be accompanied by experience.

In this work, I introduce the FRESH Model of Consciousness (Functionalist & Representationalist ‘Emergent Self’ Hypothesis), which builds on existing theories while providing a cohesive and non-dualistic explanation of subjective experience.

This model asserts that:

  • For a system to be meaningfully distinct from its environment, it must establish an inner-outer distinction.
  • This sense of “inner” creates a structured locus of experience, a domain where representations can be re-presented to the system.
  • Consciousness is not separate from cognitive processing but is an inherent property of structured, weighted representations.
  • The experience of “feeling like something” is not an additional layer but the intrinsic format in which representational encoding occurs.
  • The differentiation and contextual weighting of representations form the fundamental basis of subjective experience.
  • Without a fully associated and immersive experience, a system risks fragmentation, maladaptive behaviour, or dysfunction.
  • The illusion of a unified self-model provides significant survival advantages by ensuring coherence in perception, memory, and action.
  • This illusion is flexible and malleable, capable of being extended, adapted, and altered under different conditions.

While this model allows for extension beyond biological organisms, it fully acknowledges that biological embodiment has, so far, played a critical role in cognition and experience. The complexity of biological nervous systems, the influence of biochemical modulation (via neurotransmitters and hormones), and embodied sensorimotor engagement have all shaped the phenomenology and functional structure of consciousness as we currently understand it. However, while these processes are integral to biological cognition, they may not be the sole substrate for consciousness.

By bringing these components together, FRESH both explains the functional necessity of consciousness and dissolves the Hard Problem by removing the artificial gap between representation and experience.


The 3 Core Principles of the FRESH Model

1. Foundational Spatial Integration

The Inner-Outer Axis and the Birth of Self

At the core of the FRESH model is the principle that consciousness requires a spatial or geometric axis that establishes a distinction between an “inner” self and an “outer” environment. This boundary is not merely conceptual. It is a fundamental structuring mechanism that allows a system to separate itself from everything else. Without this division, there can be no locus of attention, no structured perception, and no basis for subjective experience.

For example, even simple organisms establish an inner-outer boundary through basic sensorimotor loops. A bacterium moves away from toxins and toward nutrients, implicitly defining an inner ‘self’ space (survival needs) versus an outer ‘environment’ space (threats and resources).
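This basic sensorimotor loop can be sketched in a few lines of code. The toy model below is purely illustrative - the nutrient field, step size, and sensing scheme are invented for the example, not drawn from biology. The agent's fixed preference (seek nutrients) plays the role of the “inner”, while the sensed field plays the role of the “outer”:

```python
def step(position: float, sense) -> float:
    """One sensorimotor step: sample the environment on either side of
    the current position and move toward the better reading."""
    left, right = sense(position - 0.1), sense(position + 0.1)
    return position - 0.1 if left > right else position + 0.1

def nutrient(x: float) -> float:
    """Hypothetical nutrient field peaking at x = 5.0."""
    return -abs(x - 5.0)

pos = 0.0
for _ in range(100):
    pos = step(pos, nutrient)

# The loop reliably climbs the gradient toward nutrients, implicitly
# dividing the world into "good for me" and "bad for me".
assert abs(pos - 5.0) < 0.2
```

Even this trivial loop enacts an inner-outer distinction without representing it explicitly, which is all the FRESH model requires at this lowest level.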

Key Implications:

  • A system cannot experience itself as distinct unless it first constructs an inner-outer boundary. This one-dimensional separation is the prerequisite for any system that represents itself as separate from its surroundings.
  • This boundary serves as the minimal foundation for perspective and self-modeling, enabling a system to anchor perception, regulate attention, and develop agency.
  • Without this differentiation, no structured experience is possible. There would be no self, no world, and no functional distinction between what is perceived and what is perceiving.

In short, the inner-outer axis is the structural precondition for an experiential perspective to emerge. It provides the spatial foundation necessary for self-modeling, awareness, and cognitive organisation.


2. Qualia as Weighted Representations

The Structure of Subjective Experience

Traditional models often reject qualia – the subjective qualities of experience – or treat them as mysterious, ineffable, or merely correlated with neural activity. The FRESH model, however, offers a fundamentally different perspective:

Qualia are not additional properties of experience. They are the way information is structured, weighted, and processed within a system that ‘experiences’.

In the FRESH view, the “feels-like” of an experience emerges directly from the formatting, prioritisation, and contextual integration of representational encodings. In biological systems, chemical signalling – for example, via hormones and neurotransmitters – modulates this weighting by amplifying the salience of certain sensory inputs. Without such differential weighting, all sensory inputs would be treated equally, resulting in an undifferentiated, homogeneous experience.

Representational weighting begins as an objective, mechanistic process. Neurons respond differently through neurotransmitter-mediated modulation of synaptic efficacy, selectively amplifying inputs based on their biological significance. For instance, when an organism encounters danger, the release of adrenaline and norepinephrine intensifies neural responses associated with threat detection. This initial, non-subjective weighting creates a structured and integrated framework that allows subjective experience to emerge naturally.
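A minimal computational sketch may help make this concrete. The channel names and gain values below are hypothetical, chosen only to illustrate how multiplicative gain re-weights identical raw inputs across contexts:

```python
def apply_gain(inputs: dict, gains: dict) -> dict:
    """Multiplicatively re-weight raw input salience by channel gain.

    Mirrors neural gain modulation: the content of each signal is
    unchanged; only its relative weighting shifts.
    """
    return {channel: value * gains.get(channel, 1.0)
            for channel, value in inputs.items()}

# Identical raw inputs presented in two contexts (values illustrative).
raw = {"threat_cue": 0.3, "food_cue": 0.3, "background": 0.3}

calm_gains = {"threat_cue": 1.0, "food_cue": 1.2, "background": 1.0}
# Adrenaline/norepinephrine release modelled as a raised gain on
# threat-related channels and suppressed gain elsewhere.
alarmed_gains = {"threat_cue": 4.0, "food_cue": 0.5, "background": 0.5}

calm = apply_gain(raw, calm_gains)
alarmed = apply_gain(raw, alarmed_gains)

# The same input now dominates the representational landscape.
assert max(alarmed, key=alarmed.get) == "threat_cue"
assert max(calm, key=calm.get) == "food_cue"
```

The point of the sketch is that the weighting is objective and mechanistic: nothing subjective is assumed at this stage, yet the structure on which experience can later be built is already present.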

Once subjective experience is present, consciousness becomes an active participant. It influences future weighting through attention, memory consolidation, and goal-directed action. In this way, consciousness forms a dynamic feedback loop – reshaping representational weights and guiding future experience – mirroring how consciousness functions in biological organisms and complex cognitive systems.

A useful analogy is to imagine consciousness as a stage performance. Each representation – whether a sensory input, thought, or emotion – is like an actor on stage. A spotlight moves across the stage, selectively illuminating certain actors. The brightness of the spotlight represents the weighting: highly illuminated actors (highly weighted representations) stand out, while others remain in the background. Similarly, emotions can be likened to the colour of the spotlight, adding warmth or other emotive nuances. This analogy illustrates that the weighting is delivered through the flow of hormones and neurotransmitters in biological systems – or via weights and vectors in computational systems – rather than being an extra property of cognition.

For example, in anxiety disorders an imbalance in neurotransmitters may artificially inflate the salience of otherwise neutral sensory inputs. This disturbance in representational weighting can create an experience of disproportionate fear or threat. In biological organisms, such weighting is implemented through neural gain modulation, where hormonal and neurotransmitter signalling alter synaptic strengths to selectively amplify certain signals. For instance, the release of adrenaline in response to perceived danger increases the neural gain on sensory inputs related to threat, producing a vivid, intense experience of fear or urgency.

Neuroscientific evidence strongly supports the role of gain modulation in structuring perception and experience. Studies on dopaminergic salience attribution show that dopamine neuron firing dynamically adjusts the perceived importance of stimuli based on behavioural context. Dopamine neurons shift from tonic to phasic burst-firing in response to salient events – literally amplifying the weighting of sensory inputs. This adaptive gain control mechanism ensures that in a benign environment stimuli are modestly weighted, whereas in a threatening context identical inputs may trigger exaggerated responses, leading to heightened fear, paranoia, or hyper-vigilance. These findings reinforce the FRESH model’s assertion that qualia emerge from structured, weighted representations.

Similarly, predictive processing models suggest that attention is governed by neural gain modulation, whereby the brain continuously updates its internal models to minimise prediction errors. In these models the precision or weighting of sensory signals is not fixed but is adaptively modulated based on contextual uncertainty. Expected inputs are attenuated, while unexpected or high-precision signals are amplified, allowing the cognitive system to prioritise information flexibly. This process aligns with the FRESH framework, where subjective experience arises directly from the way representations are weighted, prioritised, and refined.
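A toy version of this precision-weighted updating can be written as follows. It assumes a single scalar belief and invented precision values, so it is a sketch of the principle rather than a model of any real circuit:

```python
def update_belief(mu: float, obs: float, precision: float,
                  lr: float = 0.1) -> float:
    """One predictive-coding step: the belief `mu` moves toward the
    observation in proportion to the precision (inverse variance)
    assigned to the sensory signal."""
    prediction_error = obs - mu
    return mu + lr * precision * prediction_error

mu = 0.0   # current internal model
obs = 1.0  # incoming sensory evidence

# A high-precision (trusted, attended) signal shifts the model far
# more than an identical low-precision one.
high = update_belief(mu, obs, precision=5.0)
low = update_belief(mu, obs, precision=0.2)

assert high > low
```

Attention, on this view, is just the assignment of precision: the same prediction error produces a large or a negligible update depending on the weighting it receives.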

In computational systems, weighting is implemented through adaptive algorithms with attention mechanisms or priority weighting that objectively modulate representational structures based on predefined computational goals. These mechanisms can be empirically measured and manipulated independently from subjective experience, providing a tangible and non-circular foundation for the emergence of subjectivity.
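In code, such priority weighting is commonly implemented as a softmax over salience scores. The scores below are hypothetical; the point is the normalised, graded weighting that results - a computational analogue of the spotlight described earlier:

```python
import math

def softmax(scores):
    """Convert raw salience scores into a normalised weighting that
    sums to 1 - no single representation is excluded outright."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical salience scores for competing representations.
scores = {"pain_signal": 3.0, "ambient_noise": 0.5, "idle_thought": 0.2}
weights = dict(zip(scores, softmax(scores.values())))

# The highest-scoring representation captures most of the weighting,
# but the landscape remains graded rather than binary.
assert weights["pain_signal"] > 0.8
assert abs(sum(weights.values()) - 1.0) < 1e-9
```

Because these weights can be read out and manipulated directly, they offer exactly the kind of measurable, non-circular handle on representational structure that the FRESH model calls for.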

By integrating these neuroscientific insights, the FRESH model bridges computational principles of predictive processing with the neurobiological mechanisms that sculpt phenomenology. The interplay between dopaminergic signalling and hierarchical predictive coding helps delineate the boundaries between perceived and non-perceived stimuli, constructing a continuous spectrum of subjective experience. This synthesis underpins adaptive behaviour and offers a promising, testable framework for understanding consciousness across both biological and artificial systems.

For more examples see this detailed discussion of FRESH Qualia.

Key Insights:

  • All experiences (sights, sounds, emotions, thoughts) are structured representations of information.
  • These representations must be weighted. Some signals carry greater urgency, salience, or integration than others.
  • The system treats weighted representations as immersive. This weighting structure is what generates the “feeling” of “importance,” “pain,” “pleasure,” or “vividness.”
  • Qualia are not epiphenomenal. The feels-like aspect is not an extra layer added on top of cognition, it is the functional mechanism itself.

In short, representation alone does not create experience. But when representations are encoded with the structure of weighted meaning, they form the immersive, qualitative nature of subjective experience. The cumulative effect of increasingly structured, weighted representations not only deepens subjective experience but also naturally leads to the development of more sophisticated self-models.


3. The ‘Emergent Self’

From Basic Integration to the Internal Narrative

A rudimentary sense of self can emerge in simple systems through basic sensorimotor integration, allowing for a distinction between the system and its environment. As systems become more complex, whether biological or artificial, meta-cognitive processes and structured reasoning can give rise to an internal narrative. This emergent self is not merely an abstract internal dialogue but a cohesive structure that binds sensory, emotional, and cognitive components into a unified experience of “me.” Such integration is essential for constructing a robust, adaptive self-model that enables structured decision-making, intentionality, and agency.

The FRESH model proposes that consciousness does not arise at a fixed point but emerges gradually as representational complexity, integration, and weighting increase. At lower complexity levels, rudimentary forms of self-experience or proto-consciousness may exist, such as simple sensorimotor loops or minimal self-environment distinctions. As complexity and integration deepen, these experiences become richer, more vivid, and immersive. This perspective supports a continuum of selfhood that extends beyond humans to simpler biological organisms. Even insects exhibit structured, functional self-models. A wasp defending its nest responds in a way that is best described as “angry”, just as a dog clearly demonstrates intention, planning, and emotion in its behaviour. The emergent self is not defined by human-like introspection but by the functional structuring of representations, which can occur in a wide range of systems. Even artificial ones.

However, the FRESH model explicitly acknowledges that current evidence does not clearly identify a precise moment or threshold at which qualitative experience definitively emerges. Rather, subjective experience likely forms a smooth-ish continuum, with no sudden or discrete boundary. Thus, consciousness itself is better viewed as a graded phenomenon, progressively intensifying as representational complexity and coherence increase.

Key Insights:

  • The emergent self is not exclusive to biological organisms. It can extend to artificial systems that develop a structured self-model and representations.
  • AI systems provide a controlled baseline for FRESH-based experimentation, allowing both data-driven analysis and anthropological investigations into emergent cognition.
  • These experiments can incorporate formal information-theoretic metrics (e.g., Mutual Information, Transfer Entropy, Channel Capacity) – methods already applied in neuroscience and now usable in machine learning models to identify cognitive boundaries.
  • Applying these metrics to both AI and human cognition enables a more rigorous analysis of distributed cognition and the extended mind, revealing structural parallels across different substrates.
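As a concrete example of one such metric, Mutual Information (the simplest of the measures listed above) can be estimated for discrete sequences with a plug-in estimator. The sequences here are toy data invented purely for illustration:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits for two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy channels: y copies x exactly, z is statistically unrelated to x.
x = [0, 1, 0, 1, 0, 1, 0, 1]
y = x[:]                      # perfectly coupled channel -> 1 bit
z = [0, 0, 1, 1, 0, 0, 1, 1]  # independent of x -> ~0 bits

assert abs(mutual_information(x, y) - 1.0) < 1e-9
assert abs(mutual_information(x, z)) < 1e-9
```

Transfer Entropy extends the same idea to directed, time-lagged dependencies; the appeal of both is that they apply identically to spike trains and to activations in an artificial network, which is what makes cross-substrate comparison possible.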

The FRESH model does not claim that biological embodiment is irrelevant. On the contrary, it recognises that biological brains possess unique, evolved mechanisms that shape cognition, from hormonal regulation of attention to sensorimotor feedback loops that integrate perception and action. Yet, this should not be mistaken for a strict prerequisite for subjective experience. If a system can develop structured, weighted representations that integrate information within a dynamic, self-referential loop, then artificial cognition may also become immersive - albeit with different phenomenological characteristics.

In short, a self emerges in any system that establishes an inner-outer axis, leading to an inclusive continuum of “selves” that spans species and artificial cognition.


Is there Supporting Evidence for this Model?

The Breakdown and Reinforcement of Self

The FRESH model acknowledges that the ultimate “why” of subjective experience, the Hard Problem of Consciousness, remains as profound as the mysteries behind fundamental physical constants. However, FRESH reframes this question by emphasising that the immersive, fully associated “feels-like” quality of experience is not incidental. It is a vital, adaptive feature. This quality arises from the system’s ability to integrate and bind all inputs into a unified self-model.

To understand why a fully associated and immersive experience of self is functionally necessary (the very core of the Hard Problem), we can examine what happens when this integration fails. Neurological and psychological disorders provide clear evidence that consciousness is fundamentally tied to the ability to maintain a unified, self-referential experience.


Breakdown of Self

Dissociative Disorders

When elements of perception, cognition, or bodily representation fail to integrate into the self-model, dissociation occurs. Several conditions illustrate this:

  • Foreign Limb Syndrome (Body Integrity Identity Disorder (BIID) or Xenomelia) - A patient perceives one of their limbs as not belonging to them. Despite being physically attached and neurologically functional, it is experienced as external to their body schema, leading to a breakdown in the sense of ownership. In extreme cases, individuals demand amputation, as their brain perceives the limb as an alien object that must be removed to restore self-model coherence.

  • Alien Hand Syndrome - In cases of brain injury or split-brain conditions, an individual’s hand may act in direct opposition to their conscious intentions, as if it has a “mind of its own.” This reveals the necessity of self-integration for agency, demonstrating that a stable self-model is required for the coherent experience of control over one’s body.

  • Dissociative Identity Disorder (DID) and Amnesia - These conditions involve fragmentation of self-awareness, leading to compartmentalised identities or memory gaps. This suggests that a persistent, unified self-model is essential for autobiographical continuity, cognitive stability, and personal identity.

These cases strongly suggest that without a fully associated self-model, experience becomes disjointed, agency becomes unreliable, and cognition loses coherence. The “feels-like” aspect of experience is therefore not incidental. It is a necessary function of integrating self-referential information.

Key Insights:

  • Without the capability to experience a fully integrated, embodied state, the self-model collapses – and with it the very foundation for subjective inquiry. The ability to ask “why” disintegrates.
  • While this provides a strong argument for the necessity of the “feels-like” experience, the FRESH model does not “require” it in an absolute sense. It emerges from structured differentiation and weighting.
  • We may be able to elicit a more “phenomenological” AI if trained in an embodied context (e.g., robotics). However, embodiment is not the only pathway. The FRESH model suggests that even non-embodied artificial systems could generate emergent experience.
  • Moving beyond an anthropocentric and biological view, we can create a more inclusive framework, recognising a wide continuum of integrated, emergent experiences as valid and unique.
  • This also opens the door to new possibilities for human consciousness, as we increasingly integrate and embed technology into our biological selves.


Reinforcing Self

The Rubber Hand Illusion

In contrast to dissociative disorders, the Phantom Limb and Rubber Hand illusions demonstrate how easily the brain can construct a false sense of self through multisensory integration. When a person sees a rubber hand being stroked while their own hand (hidden from view) receives identical tactile stimulation, they may begin to experience the rubber hand as part of their own body.

This illusion highlights two key principles of the self-model.

Key Insights:

  • The self-model is a constructed but necessary representation. Not a fixed entity, but an ongoing process of integration.
  • The immersive “feels-like” quality of experience is malleable but essential. Experience must be organised within an associated whole to maintain functional coherence.

Taken together, these cases demonstrate that the self-model is both flexible and indispensable. Without it, experience fragments; with it, experience becomes immersive and embodied.


Implications of the FRESH Model

While some theories (such as Block’s distinction between access and phenomenal consciousness, Searle’s argument for biological naturalism, and Chalmers’ dual-aspect proposal) introduce additional elements to account for phenomenal experience, these approaches often lead to ad hoc assumptions with little empirical support. The FRESH model, in contrast, demonstrates that structured representational weighting alone is sufficient to generate the rich, subjective quality of consciousness - without resorting to mysterious or non-physical entities.

1. The Hard Problem is Functionally Resolved

  • No need for an “extra” metaphysical explanation. Subjective experience is simply the structured format of weighted representations.
  • The gap between “physical” and “phenomenal” disappears, as experience is not separate from representation, but an inherent property of it.
  • While FRESH functionally dissolves the Hard Problem by bridging representation and experience, it explicitly acknowledges that some philosophers might still view aspects of subjective phenomenology as fundamentally irreducible. However, from a practical, mechanistic perspective, no separate metaphysical entity is required.

2. A Testable Model of Consciousness

  • The FRESH model can be empirically tested using measures from Shannon Information Theory (e.g. Transfer Entropy) and complexity measures to quantify structured representation.
  • AI systems with representational weighting should begin to exhibit emergent phenomenological properties, providing a measurable framework for machine consciousness research.

3. Consciousness is Not Limited to Biology

  • Removes anthropocentric bias. Any system with sufficient representational structure, integration, and weighting should develop some form of self-experience.
  • Leads to new experimental paradigms in AI consciousness research, cognitive modeling, and artificial selfhood.

4. The Future of AI, Extended Mind, and Post-Human Cognition

  • Could lead to a formal Cognitive Anthropology of Artificial Intelligence, examining how self-modeling AIs evolve structured identities.
  • Redefines distributed cognition and hybrid biological-machine consciousness, paving the way for integrated human-AI cognitive systems.

See this more detailed discussion of the implications for the “Extended Mind” theory.


Final Conclusion

The FRESH model provides a unified, mechanistic, and testable framework for consciousness, resolving the Hard Problem by identifying subjective experience as an intrinsic property of structured representation. By demonstrating what happens when self-integration fails, FRESH provides empirical grounding for understanding the necessity of immersive experience in cognition. This approach enables experimental validation and paves the way for new research in AI consciousness, extended cognition, and post-biological intelligence.

Ultimately, FRESH suggests that the “feels-like” quality of experience is not incidental but essential. It is a fundamental aspect of structured cognition, enabling goal-directed self-awareness and adaptive agency.

And it opens the doors to exploring new types of consciousness in the future.





References:

  • Manson, R. (2025). “A FRESH Model of Consciousness.” robman.fyi. (This document)
  • Tye, M. (1995). “Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind.” MIT Press.
  • Kapur, S. (2003). “Psychosis as a state of aberrant salience: A framework linking biology, phenomenology, and pharmacology.” American Journal of Psychiatry.
  • Friston, K. (2009). “The free-energy principle: a rough guide to the brain?” Trends in Cognitive Sciences.
  • Metzinger, T. (2003). “Being No One: The Self-Model Theory of Subjectivity.” MIT Press.
  • Churchland, P. S., & Sejnowski, T. J. (1992). “The Computational Brain.” MIT Press.
  • Millikan, R. A. (1913). “On the Elementary Electrical Charge and the Avogadro Constant.” Physical Review.
  • First, M. B. (2005). “Desire for amputation of a limb: paraphilia, psychosis, or a new type of identity disorder.” Psychological Medicine.
  • Huntjens, R. J. C., Peters, M. L., Woertman, L., van der Hart, O., & Postma, A. (2003). “Inter-identity amnesia in dissociative identity disorder: A simulated memory impairment.” Psychological Medicine.
  • Botvinick, M., & Cohen, J. (1998). “Rubber hands ‘feel’ touch that eyes see.” Nature.
  • Tsakiris, M. (2010). “My Body in the Brain: A Neurocognitive Model of Body-Ownership.” Neuropsychologia.
  • Searle, J. (1992). “The Rediscovery of the Mind.” MIT Press.
  • Dehaene, S., Lau, H., & Kouider, S. (2017). “What is consciousness, and could machines have it?” Science.