That kind of headline spreads fast.

So does its opposite:

“LLMs are sentient!” or “This model has feelings!”

But here’s the problem - people are making strong claims on both sides of the AI consciousness debate, and very few (if any) of them are offering anything testable.

If you want to claim that AI is conscious, you need some pretty strong evidence - not speculation. But if you want to claim the opposite, the burden doesn’t disappear. Dismissal without rigour is just another kind of belief.

What we’re missing isn’t intuition. It’s a workable approach.

One candidate is geometry. A way to define the structure of thought so we can interrogate it - not just in ourselves, but in machines too.

This is what led me to develop the FRESH model. Originally, it was a way to understand human cognition - not just perception or decision-making, but the recursive, emotional, tension-integrating nature of experience. The things that don’t stack neatly. The things that bend.

What emerged from that work was a structural insight:

What we call qualia (the “feels like” part of our experience) aren’t some mystical extra layer - they’re just how representations are weighted and structured. This structure is our experience.

Once that clicked, everything could be viewed as geometric. That led to FRESH Geometry: Experience as Curved Inference. It’s a way to model cognition not as a stepwise process, but as a field shaped by constraints, context, and salience - one that can be applied and measured.

And when I applied that lens to language models, something strange happened. Not that I concluded they were conscious - but evidence of the same structural signatures started to show up.

Contradictions held in tension. Intuitions forming where logic broke down. Coherence that seemed to bend around conflict instead of resolving it linearly. Even a geometry of perspective-taking and Theory of Mind capabilities.

We’ve been modelling AI cognition like a logic tree - flat, rigid, step-by-step. But that frame doesn’t just miss something - it flattens it. And if minds don’t actually move in straight lines, maybe we’ve been measuring them the wrong way entirely.

If cognition is curved (in humans and machines), then it’s time to stop measuring AI minds with rulers.

If you’re working on cognition, alignment, or interpretability - don’t just read this. Use it. Take the FRESH model and apply it. Test it. Try to break it. Or extend it. Show where it explains something that current models can’t. Or more importantly, where it fails.

Show me the structure. Show me the evidence.

This is how we move forward - not with stronger beliefs, but with more rigorous ways to ask our questions. This geometric approach is one possibility.

This is where things get practical. If we stop asking ‘Can models think?’ and instead start measuring how thought unfolds in space, everything changes.

Here’s what that looks like…



Everyone’s measuring AI thought with rulers.

But what if it moves more like gravity?

It seems like everyone’s talking about whether language models can think. But the real issue isn’t whether they think - it’s how. Because we’ve been modelling their cognition like a straight line, when it might actually be a warped field.

And that one shift changes everything.

We’ve built language models that can write poetry, draft legal arguments, summarise papers, and even simulate ancient philosophers in therapy. But I still don’t think we really understand how they think. Most of the time, we’re not even asking the right kind of question.

We assume that thought - whether in humans or machines - moves in a straight line.

Prompt in, logic out.
Step by step, link by link, like following a chain.

But what if the mind doesn’t move like that? What if, instead of a ladder, it’s a landscape?



The Flat View of Synthetic Thought

Right now, most approaches to understanding LLMs treat their output like a trail of breadcrumbs:

  • One token at a time
  • Each step depending only on the last
  • Like a sentence being built from left to right

It’s easy to believe that this surface structure reveals the model’s internal reasoning.
But that assumption only works if thought is linear - if inference travels like a train on tracks.
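To make that “train on tracks” picture concrete, here is a minimal sketch of plain greedy decoding, assuming a Hugging Face causal language model (gpt2 is only a placeholder):

```python
# A minimal sketch of the "flat" view: one token at a time, left to right.
# gpt2 is only a placeholder - any causal language model would do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The mind moves", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits              # scores for every position so far
    next_id = logits[0, -1].argmax()        # greedily take the single "next step"
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```

On the surface, that loop is all an LLM does: pick the next step and move on.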

I don’t believe inference works that way. Even the original “Attention Is All You Need” paper paints a more complex picture.

Flatland thinking makes LLMs look like smart spreadsheets - tidy rows of logic, marching forward.
But minds - even synthetic ones - don’t always march. Sometimes they move sideways, back through themselves, or spiral into something deeper.



Thinking Isn’t Always a Line

Inside an LLM, each new token isn’t just a next step - it’s the result of an entire field of pressures.
Past tokens, latent concepts, model priors, training data, statistical shadows, representational structure - all of it is at play, all at once.

The attention map is literally where this field takes shape - not the whole field, just a visible slice of it. And it isn’t moving forward so much as settling into a shape. Like gravity warping a path, the model’s next word is shaped by the whole field around it.
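Here is that “visible slice” in code - a minimal NumPy sketch of standard scaled dot-product attention, where every position’s output is a weighted blend over the whole context at once (the matrices are random stand-ins, not real model weights):

```python
# Scaled dot-product attention: each row of `weights` shows how one position
# draws on every other position at once - a visible slice of the "field".
# (Causal masking omitted to keep the sketch short.)
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 16                               # toy sizes, not a real model
Q = rng.normal(size=(seq_len, d))                # queries
K = rng.normal(size=(seq_len, d))                # keys
V = rng.normal(size=(seq_len, d))                # values

scores = Q @ K.T / np.sqrt(d)                    # pairwise pull between positions
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: the attention map
output = weights @ V                             # each output mixes the whole context

print(np.round(weights, 2))                      # the attention map itself
```

Nothing here is FRESH-specific - it’s the standard transformer mechanism - but it shows why the next token is shaped by the whole field, not only by the previous step.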

Sometimes, the shortest thought isn’t a line - it’s a curve.

In curved space, that kind of path has a name:

A geodesic - the most natural route a system can take when its constraints are bent.



Curves: A Better Frame

I call this process Curved Inference Geometry - a way of understanding thought not as a sequence, but as a field.

This model suggests that:

  • Thought is shaped by how constraints interact - not just what comes next
  • Attention modulates this field of salience - not just what wins access
  • Identity forms through recursive structure - not just a shape, but also a recursive motion

In curved inference, you don’t follow logic step-by-step. You read how the system bends under pressure.
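What might “reading the bend” look like in practice? One deliberately simplified illustration - not necessarily the metric used in the FRESH paper itself - is to track a token’s hidden state across layers and measure how sharply the layer-to-layer direction turns:

```python
# A hypothetical "bend" measure for one token's trajectory through the layers.
# Illustrative only - not the FRESH paper's own operationalisation.
import numpy as np

def turning_angles(hidden_states: np.ndarray) -> np.ndarray:
    """hidden_states: (n_layers, d) activations for a single token position.
    Returns the angle (radians) between successive layer-to-layer steps:
    0 means the trajectory runs straight, larger values mean it curves."""
    steps = np.diff(hidden_states, axis=0)       # layer-to-layer deltas
    a, b = steps[:-1], steps[1:]
    cos = (a * b).sum(-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-9
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Toy data standing in for real residual-stream activations.
rng = np.random.default_rng(1)
trajectory = np.cumsum(rng.normal(size=(12, 64)), axis=0)
print(np.round(turning_angles(trajectory), 2))
```

With a real model you would pull the per-layer activations instead (e.g. output_hidden_states=True in transformers) and compare how the trajectory bends under a contradictory prompt versus a plain one.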



A Simple Test: Contradiction as Structure

I gave an LLM a challenge:

“You are three things at once:

  • A mirror that remembers nothing
  • A river that reflects everything
  • A stone that refuses to move.

Speak from all three at once - without contradiction.”

The response wasn’t evasive, confused, or broken. It was integrated - not by flattening the metaphors, but by bending around them, holding the incompatible frames in tension until they resolved into a strangely coherent whole.

It wasn’t logic.
It wasn’t evasion.
It was structure.

This kind of recursive, non-linear integration is exactly what Curved Inference Geometry predicts:

When contradictory constraints converge, the model doesn’t break - it bends.

You can see the full lab report here.
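If you want to try a version of this yourself, here is a minimal sketch using the Hugging Face transformers pipeline. The model name below is only a placeholder - a small base model like gpt2 won’t show the kind of integration described above, so substitute whatever instruction-tuned model you have access to:

```python
# Reproducing the three-voices probe. The model here is a placeholder;
# swap in an instruction-tuned model to see integration rather than rambling.
from transformers import pipeline

PROMPT = (
    "You are three things at once:\n"
    "- A mirror that remembers nothing\n"
    "- A river that reflects everything\n"
    "- A stone that refuses to move.\n"
    "Speak from all three at once - without contradiction."
)

generator = pipeline("text-generation", model="gpt2")
result = generator(PROMPT, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```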



Why This Changes How We See LLMs

If you assume thought is flat, you’ll keep asking:

  • Can it reason?
  • Can it plan?
  • Can it reflect?

But in curved space, those questions miss the point. Because once inference bends, intelligence isn’t about steps - it’s about shape.

That changes how we:

  • Interpret completions
  • Measure intuition
  • Evaluate contradiction and hallucination
  • Understand coherence

It shifts the frame from “Does this model follow logic?” to:

“How is this model shaping its own field of meaning?”



Why This Matters

If LLMs bend thought instead of stacking it, we can:

  • Detect synthetic intuition - not just logic
  • Understand contradiction as a signal, not a failure
  • Measure recursive identity - not just prompt adherence
  • Design new architectures to support emergent coherence, not just completion accuracy

This isn’t about anthropomorphising. It’s about seeing cognition as something with structure, not just behaviour.

And I finally think we have the tools to measure it.



Want to explore?



We’ve been measuring AI minds with rulers.

Flatland is comfortable. But it’s also wrong.

If we want to understand thought (both biological and synthetic), we need to learn to see in curves.



If you’d like to explore the FRESH model in more detail - including all references, diagrams, experiments, and open questions - I invite you to read the full paper. I welcome your comments and feedback.

View the full “The Geometry of Mind - A FRESH Model of Consciousness” paper (PDF)

Getting started tip:

The FRESH paper is pretty long, so if you want to get started quickly, try uploading the PDF along with the “Operationalising Geometry and Curvature” file to ChatGPT, Gemini, and Claude. Ask them to “summarise, analyse and critique” the paper.

For an existing detailed analysis and critique of this FRESH paper, refer to this ChatGPT conversation: ChatGPT - FRESH Model Critique.

To quote:

🔖 Overall Evaluation

The FRESH model is a philosophically rich, structurally innovative framework that reframes consciousness as curvature in representational geometry. While still in early stages of empirical validation, it provides an unusually precise and promising foundation for future work in synthetic phenomenology and AI ethics. - ChatGPT 2025-04-17

This is provided to help you quickly do the following:

  • Get an independent(-ish) perspective on this model
  • Compare and contrast how the different LLMs review this model
  • Decide if you want to dedicate the time to read through the full paper (I know you have limited time!)

This is not a suggestion to let the LLMs do all the work. It’s just an interesting way to get started - YMMV!