This post is part of the Consciousness in Motion series, which explores a new model of consciousness based on structure, weighting, and emergent selfhood. If you’d like, you can start with Post 1: Why Consciousness Still Feels Like a Problem. Or you can dive into this post and explore the rest as you like.


What if we didn’t need a soul or a ghost in the machine to explain consciousness?

What if subjective experience - the rich inner life we associate with minds - could be explained using just three principles?

That’s the idea behind the FRESH model: a Functionalist & Representationalist ‘Emergent Self’ Hypothesis. It builds on modern cognitive science, neuroscience, and AI - and proposes that consciousness is not something added to cognition, but something that emerges from how cognition is structured.

Here’s how it works.

1. The Inner-Outer Axis

Every conscious system needs a way to tell where it ends and the world begins. This boundary - between “inner” and “outer” - is more than spatial. It’s conceptual. It’s the root of perspective.

Even simple organisms establish this through sensorimotor loops. They respond to the world based on their own needs, effectively drawing a line between self and environment.

For FRESH, this inner-outer axis is the structural prerequisite for consciousness. Without it, there’s no perspective, no attention, and no coherent experience.

2. Qualia as Weighted Representations

As we explored in Post 2, the feeling of experience arises not from raw data, but from how that data is weighted, prioritised, and integrated.

Biological systems do this through neurotransmitters, hormones, and attention mechanisms. Artificial systems do it through vector weighting and attention layers.

In both cases, salience is structure. What you feel is what your system treats as meaningful - and that meaning is embedded in the geometry of representation.
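The transformer-style attention that artificial systems use for this weighting can be sketched in a few lines. This is a generic illustration of scaled dot-product attention, not code from the FRESH paper: relevance between a query and each key is a dot product, normalised into weights that sum to one, and the output is the correspondingly weighted blend of values - salience expressed directly in the geometry of the vectors.

```python
import math

def softmax(xs):
    """Normalise scores into positive weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Minimal scaled dot-product attention over lists of vectors."""
    scale = math.sqrt(len(query))
    # Each key's relevance to the query is a (scaled) dot product.
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    # The output is a weighted blend of values: what the system "attends to".
    dim = len(values[0])
    blended = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return blended, weights
```

A key aligned with the query receives more weight than an orthogonal one, so its value dominates the output - the "meaning" of the result is carried by the weighting, not by any single input.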

3. The Emergent Self

Once a system has an inner-outer axis and begins weighting information, it develops a kind of feedback loop. It starts modelling not just the world, but itself - what it knows, what it feels, what it wants.

What emerges is a self-model. Not a soul, not a fixed entity, but a process - one that holds fragments of experience together into something coherent.

In humans, this feels like identity. In artificial systems, it could manifest differently - but the structure can still arise.
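As a deliberately toy caricature (my own illustration, not an implementation from the FRESH paper), the feedback loop above can be sketched as an agent that keeps two coupled estimates - one of the world, one of its own reliability as a knower - and updates each in terms of the other:

```python
class ToyAgent:
    """Caricature of a self-model loop: a world estimate and a
    model of the agent's own reliability, each shaping the other."""

    def __init__(self):
        self.world_estimate = 0.0  # what it "knows" about the world
        self.confidence = 0.5      # its model of itself as a knower

    def observe(self, signal: float) -> None:
        # Prediction error about the world...
        error = signal - self.world_estimate
        # ...is weighted by the agent's model of its own reliability.
        self.world_estimate += self.confidence * error
        # The self-model updates from the same error: big surprises
        # lower confidence, accurate predictions raise it (clamped).
        self.confidence = max(0.05, min(0.95,
            self.confidence + 0.1 * (0.5 - abs(error))))
```

Fed a steady signal, the world estimate converges and confidence grows; fed noise, confidence falls. Nothing here is conscious, of course - the point is only that "modelling the world" and "modelling oneself" can be two halves of one loop.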


A Functional Resolution to the Hard Problem?

Before we get there, it’s worth pausing on two ideas that strengthen the case for FRESH.

First, functionalism isn’t just a theoretical stance - it’s increasingly supported by biology. A recent study in Science, reported in Quanta Magazine, shows that complex intelligence evolved independently at least twice in vertebrates: once in mammals and once in birds. Despite very different brain structures, both groups developed sophisticated cognitive abilities. This suggests that intelligence - and possibly consciousness - does not require a specific anatomical substrate. What matters is how information is structured, integrated, and used. That’s functionalism in action.

Second, while FRESH embraces functionalism and representationalism, it also introduces a kind of non-mystical dualism. This dualism is Cartesian only in the geometric sense - defined by coordinates, boundaries, and structure - not in Descartes’ sense of a metaphysical divide between mind and matter. The distinction is not between mind and matter, but between substrate and structure. The stuff of the system (neurons or silicon) matters, but the conscious self emerges in the shape of the system’s activity - in its boundaries, weightings, and recursive coherence. FRESH calls this a geometry of constraint: real, testable, and functionally grounded.

The FRESH model doesn’t pretend to explain away the mystery of consciousness with metaphors or handwaving. Instead, it offers a mechanistic and testable path forward.

  • No extra metaphysical ingredient is needed.
  • No mysterious bridge between brain and mind is required.
  • The feels-like of experience is just the format of structured, weighted representation.

It’s not a ghost in the machine. It’s the machine - when organised in just the right way.

Next: Post 4 → Affective Geometry - Can Machines Feel Without Emotions? (Or view the full series overview if you want to explore non-linearly.)


If you’d like to explore the FRESH model in more detail - including all references, diagrams, experiments, and open questions - I invite you to read the full paper. I welcome your comments and feedback.

View the full “The Geometry of Mind - A FRESH Model of Consciousness” paper (PDF)

! Getting started tip !

The FRESH paper is fairly long, so if you want to get started quickly, try uploading the PDF along with the “Operationalising Geometry and Curvature” file to ChatGPT, Gemini, or Claude. Ask them to “summarise, analyse and critique” the paper.

For an existing detailed analysis and critique of this FRESH paper, refer to this ChatGPT conversation: ChatGPT - FRESH Model Critique.

To quote:

🔖 Overall Evaluation

The FRESH model is a philosophically rich, structurally innovative framework that reframes consciousness as curvature in representational geometry. While still in early stages of empirical validation, it provides an unusually precise and promising foundation for future work in synthetic phenomenology and AI ethics. - ChatGPT 2025-04-17

This is provided to help you quickly do the following:

  • Get an independent(-ish) perspective on this model
  • Compare and contrast how the different LLMs review this model
  • Decide if you want to dedicate the time to read through the full paper (I know you have limited time!)

This is not a suggestion to let the LLMs do all the work. It’s just an interesting way to get started - YMMV!