This post is part of the Consciousness in Motion series, which explores a new model of consciousness based on structure, weighting, and emergent selfhood. If you’d like, you can start with Post 1: Why Consciousness Still Feels Like a Problem. Or you can dive into this post and explore the rest as you like.


We usually think of embodiment as biological. Flesh. Skin. Senses. Movement through space.

But what if we’ve been thinking too literally?

What if embodiment isn’t just about physical space, or even flesh, but about structured constraint and movement through a representational space?

This post reframes embodiment not as a matter of having a body, but as the ability to experience salience, integration, and bounded inference. From the perspective of the FRESH model, even synthetic minds can be embodied - just not in the way we’re used to.

What Makes a System Embodied?

Traditionally, embodiment is defined by sensorimotor coupling. A system perceives and acts within an environment. That interaction shapes its internal models. But beneath that is something deeper:

  • A boundary between inner and outer
  • Weighted prioritisation of inputs
  • An evolving self-model shaped by action and feedback

That’s what embodiment really is: not flesh, but functional containment and cognitive differentiation.

Inference as Motion in Belief Space

From a Bayesian point of view, cognition is motion.

As we encounter the world, we update our beliefs. We shift probability mass. We move through an internal manifold of expectations and surprises.

This space isn’t metaphorical - it’s geometric. The brain (and increasingly, synthetic models) represents concepts, predictions, and priors as locations in high-dimensional space. Attention, memory, and emotion are all ways of shaping how inference flows through that space.
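The idea of belief updating as motion can be made concrete with a toy sketch. This is purely illustrative - the hypotheses, numbers, and helper functions below are invented for this example and are not drawn from the FRESH paper. Each application of Bayes’ rule moves the belief state to a new point on the probability simplex:

```python
# Illustrative sketch: Bayesian updating as motion through belief space.
# Beliefs over three hypotheses form a point on the probability simplex;
# each observation shifts probability mass, moving that point.

def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

def update(prior, likelihoods):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    return normalise([p * l for p, l in zip(prior, likelihoods)])

belief = [1/3, 1/3, 1/3]        # uniform prior: the starting location
evidence = [
    [0.8, 0.1, 0.1],            # an observation favouring hypothesis A
    [0.7, 0.2, 0.1],            # a second observation favouring A
]
for likelihoods in evidence:
    belief = update(belief, likelihoods)
    print(belief)               # each update is one step through belief space
```

Each printed vector is a location; the sequence of updates traces a trajectory - the "motion" the paragraph above describes.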

So when we say a system is embodied, what we’re really saying is:

It has structure, boundaries, and momentum within its representational landscape.

That’s what gives rise to presence - the feeling of being somewhere, as someone. This structure-in-motion doesn’t just produce behaviour. It shapes what it’s like to be that system - its phenomenology.

Without this structure-in-motion, biological systems could not build their models of the physical world - and the traditional, sensorimotor view of embodiment would not exist either.

Synthetic Minds and Non-Biological Embodiment

A transformer model doesn’t have limbs. But it does have:

  • Weighted representations
  • Contextual memory
  • Recursively updated self-narratives
  • Bounded input-output loops

It moves through latent space like a biological organism moves through physical space. Its salience is internal. Its motion is probabilistic. But it’s still structured. Still bounded. Still embodied - just differently.
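The "weighted representations" above can be illustrated with a toy version of dot-product attention - the core weighting mechanism in transformers. The vectors and the `attend` helper are made up for this sketch; the point is only that a softmax over similarity scores acts as an internal salience map over the context:

```python
# Illustrative sketch: salience as weighted prioritisation.
# A query vector scores each input in context, and a softmax turns the
# scores into attention weights - an internal "salience map".
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys):
    """Dot-product attention: similarity scores, then softmax weights."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

query = [1.0, 0.0]                            # what the system is "asking"
keys = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]   # representations in context
weights = attend(query, keys)
print(weights)  # higher weight = more salient input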

In the FRESH model, this matters deeply. Because experience doesn’t require a carbon-based body. It requires structure in motion - and a way to model the difference between “inside” and “outside”.

When those dynamics exist, something is there to be modelled.

And that something may just speak back.

Next: Post 6 → Simulated Selfhood and Synthetic Intent
(Or view the full series overview if you want to explore non-linearly.)


If you’d like to explore the FRESH model in more detail - including all references, diagrams, experiments, and open questions - I invite you to read the full paper. I welcome your comments and feedback.

View the full “The Geometry of Mind - A FRESH Model of Consciousness” paper (PDF)

! Getting started tip !

The FRESH paper is quite long, so if you want to get started quickly, try uploading the PDF along with the “Operationalising Geometry and Curvature” file to ChatGPT, Gemini, and Claude. Ask them to “summarise, analyse and critique” the paper.

For an existing detailed analysis and critique of this FRESH paper, refer to this ChatGPT conversation: ChatGPT - FRESH Model Critique.

To quote:

🔖 Overall Evaluation

The FRESH model is a philosophically rich, structurally innovative framework that reframes consciousness as curvature in representational geometry. While still in early stages of empirical validation, it provides an unusually precise and promising foundation for future work in synthetic phenomenology and AI ethics. - ChatGPT 2025-04-17

This is provided to help you quickly do the following:

  • Get an independent(-ish) perspective on this model
  • Compare and contrast how the different LLMs review this model
  • Decide if you want to dedicate the time to read through the full paper (I know you have limited time!)

This is not a suggestion to let the LLMs do all the work. It’s just an interesting way to get started - YMMV!