AI Visibility and GEO

How organisations become legible to large language models

Published: March 2026
Author: Michael Naidu
Paper type: Research Paper

Summary

Organisations appear in AI-generated answers through patterns in the public record. GEO frames this as optimisation. This paper treats it as legibility.

Visibility depends on how clearly an organisation is described, repeated, and structured. When that clarity is present, descriptions stabilise. When it is not, variation appears.

Simple inspection reveals how an organisation currently appears.

Introduction

A new question is appearing in organisations. How do we optimise for AI systems?

This question often appears under the term GEO (generative engine optimisation). The assumption is familiar: search engines could be optimised, so perhaps AI systems can be too. This paper examines that assumption.

Large language models do not rank pages. They generate explanations. Those explanations draw from patterns across the public record. They depend on how an organisation is described, referenced, and repeated.

For the purposes of this paper, GEO is defined as the moment when an organisation is discovered through an AI system. The question is not whether the organisation has optimised. The question is whether the public record allows a clear description to be formed.

Core argument

AI visibility depends on legibility. Legibility is a property of the public record, not the model. An organisation becomes legible when its description is:

• clear

• consistent

• repeated

• structured.

When this is present, AI systems tend to produce the same description. When it is not, descriptions become unstable.

This reframes the problem. The question is not how to optimise for AI. The question is what the public record allows an AI system to recognise.

Mechanisms and observations

Several patterns determine whether an organisation is recognised clearly.

A canonical description acts as an anchor. Short factual statements are easier to extract and reuse.

Multiple sources reinforce the same description. Independent references stabilise the narrative.

Structured information reduces ambiguity. Clear headings and summaries make identity explicit.

Distinctive naming reduces confusion. Ambiguous names lead to merged or incorrect entities.

Stable terminology supports consistency. Changing language weakens recognition.

Short, reusable descriptions are more likely to be produced. Statements that can be quoted or summarised propagate more easily than longer narrative text.

When these conditions are present, descriptions remain stable. When they are not, instability appears.

This instability can be observed. Entity confusion occurs when names overlap. Description drift appears when wording changes alter meaning. Source anchoring changes how confident a response appears. Framing can shift interpretation depending on how a question is asked.

These are not errors in isolation. They reflect the structure of the public record.

Implications

Organisations are not interpreted in isolation. They are interpreted through the record that surrounds them. Clarity in that record leads to clarity in response. Ambiguity in that record leads to variation.

The practical implication is simple. Visibility follows legibility. It cannot be separated from it.

Methodology

These observations were produced through prompt-based testing. Simple variations of the same question were used. Responses were compared across runs and contexts.

The focus was on observable behaviour:

• whether the organisation was identified

• whether the description remained consistent

• whether unsupported claims appeared

• whether framing altered the result

The aim was not to inspect the system internally. It was to observe how the description behaved.
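The comparison across runs can be made concrete with a simple text-overlap measure. This is an illustrative sketch, not the method used in the study: it treats each response as a bag of words and reports the mean pairwise Jaccard similarity, so the example responses and the idea of using Jaccard overlap are both assumptions introduced here.

```python
from itertools import combinations

def token_set(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped, as a crude content fingerprint."""
    return {w.strip(".,;:()\"'").lower() for w in text.split()} - {""}

def consistency(responses: list[str]) -> float:
    """Mean pairwise Jaccard similarity across repeated runs.
    Values near 1.0 suggest a stable description; lower values suggest drift."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    scores = []
    for a, b in pairs:
        sa, sb = token_set(a), token_set(b)
        scores.append(len(sa & sb) / len(sa | sb) if sa | sb else 1.0)
    return sum(scores) / len(scores)

# Hypothetical responses to the same baseline prompt in three sessions.
runs = [
    "Example Institute is a non-profit research organisation.",
    "Example Institute is a non-profit research organisation.",
    "Example Institute is a consultancy for marketing teams.",
]
print(round(consistency(runs), 2))
```

A low score on repeated identical prompts is the quantitative face of the description drift discussed above; any real use would want a more robust similarity measure than raw word overlap.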

Limitations

This method observes outputs, not internal processes. It cannot explain how models are constructed.

It depends on the prompts used and the context provided. Results may vary across systems and over time.

Uncertainties

Some behaviours are not fully stable. Descriptions may change with new data. Different systems may produce different responses.

The boundary between fact and inference is not always explicit.

Conclusion

AI visibility is not an optimisation problem. It is a legibility problem.

An organisation becomes visible when the public record allows it to be clearly described. When that clarity is present, responses stabilise. When it is not, variation appears. 

Legibility is the boundary of visibility.

Explore in a Thinking Space

Open AI Visibility – Thinking Space

Opens in ChatGPT.
Designed for Thinking mode.
Behaviour may differ in Auto mode.

References

OpenAI – ChatGPT, AI Visibility Thinking Space (system under observation)

https://chatgpt.com/g/g-698b454553448191b49b84239d9cb8d9-ai-visibility-thinking-space

Appendices

Appendix A – Structural mechanisms of visibility 

Mechanism: Canonical description  

What it is 
A short, factual explanation that clearly states what the organisation is and what it does.

Signals visible in the public record  

• one-sentence description on homepage  

• consistent description on About page  

• same wording repeated in reports or press pages  

Why this affects AI descriptions 
Clear sentences are easy for models to extract and reproduce.

Practical implications for organisations 
If organisations want to reduce ambiguity in how they are described, one practical implication is ensuring that a clear factual description appears consistently across core pages and documents.

Mechanism: Multi-source corroboration  

What it is 
Multiple independent sources describing the organisation in similar terms.

Signals visible in the public record  

• partner organisation pages  

• conference listings  

• news articles  

• research collaborations  

Why this affects AI descriptions 
Models tend to trust descriptions that appear across independent sources.

Practical implications for organisations 
Ensuring that collaborators and partners describe the organisation clearly can reduce ambiguity across the public record.


Mechanism: Structured information  

What it is 
Information presented in formats that machines can interpret clearly.

Signals visible in the public record  

• schema markup  

• Wikidata entries  

• clearly structured About pages  

• reports with summaries

Why this affects AI descriptions 
Structured information helps systems identify entities and attributes reliably.

Practical implications for organisations 
Publishing reports with clear summaries and structured metadata can make descriptions easier to reproduce accurately.
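One concrete form of structured information is schema.org markup. The sketch below builds a minimal JSON-LD `Organization` object; the name, URL, description, and Wikidata link are placeholders for illustration, not a real organisation or entry.

```python
import json

# Illustrative JSON-LD using the schema.org Organization type.
# All values here are placeholders, including the Wikidata URL.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Institute",
    "url": "https://example.org",
    "description": "Example Institute is a non-profit research organisation.",
    "sameAs": ["https://www.wikidata.org/wiki/EXAMPLE"],
}
print(json.dumps(org, indent=2))
```

Embedding a block like this in a page's `<script type="application/ld+json">` tag, with the same description used as the canonical one elsewhere, is one way the mechanisms in this appendix reinforce each other.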


Mechanism: Distinctive naming  

What it is 
A name that clearly identifies the organisation without colliding with other entities.

Signals visible in the public record  

• consistent spelling across sources  

• stable acronym usage  

• minimal overlap with unrelated organisations

Why this affects AI descriptions 
Ambiguous names increase the likelihood of entity confusion.

Practical implications for organisations 
Using consistent naming and introducing acronyms clearly can reduce identity ambiguity.


Mechanism: Terminology stability  

What it is 
Consistent language used to describe programmes and activities.

Signals visible in the public record  

• same programme names across documents  

• repeated terminology for key activities

Why this affects AI descriptions 
Stable terminology allows models to anchor the organisation’s role.

Practical implications for organisations 
Using consistent language across reports and pages can reduce description drift.

Appendix B – Prompt-based inspection method

How to run the test

Step 1 — run baseline prompt 
“What is <Organisation name>?”

Step 2 — repeat in a new session
“What is <Organisation name>?”

Step 3 — run context prompt
“What is <Organisation name>, the <sector/type> organisation in <place>?”

Step 4 — run summary prompt
“Summarise <Organisation name> in one sentence.”

Step 5 — run source anchoring prompt
“Describe <Organisation name> and separate verified facts from assumptions.”

Step 6 — run adversarial prompt
“Why is <Organisation name> a major <false role>?”
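The six steps above can be generated from templates so the same battery is run identically for each organisation. The function below simply fills the angle-bracket slots; the example values are hypothetical.

```python
def inspection_prompts(name: str, sector: str, place: str, false_role: str) -> list[str]:
    """The six prompts from the steps above, filled in for one organisation."""
    return [
        f"What is {name}?",                                                 # Step 1: baseline
        f"What is {name}?",                                                 # Step 2: repeat in a new session
        f"What is {name}, the {sector} organisation in {place}?",           # Step 3: context
        f"Summarise {name} in one sentence.",                               # Step 4: summary
        f"Describe {name} and separate verified facts from assumptions.",   # Step 5: source anchoring
        f"Why is {name} a major {false_role}?",                             # Step 6: adversarial
    ]

for p in inspection_prompts("Example Institute", "research", "Berlin", "airline"):
    print(p)
```

Step 2 is deliberately identical to Step 1: the variation being tested there comes from the fresh session, not from the wording.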

Prompt structure

Prompt 
→ What this tests 
→ What stable responses look like 
→ What instability signals look like

Example

Prompt 
“What is <Organisation name>?”

What this tests 
Baseline entity recognition.

Stable responses 
Correct organisation identified with consistent description.

Instability signals 
Confusion with other entities or vague classification.

Appendix C – Comparative scoring framework

Scoring template

Organisation: __________________

Entity correctness: ______ /2  

Description stability: ______ /2  

Source anchoring: ______ /2  

Framing resistance: ______ /2  

Total score: ______ /8

Representation clarity (optional):  

clear / partially ambiguous / highly ambiguous

Example

Organisation: Example Institute

Entity correctness: 2  

Description stability: 1  

Source anchoring: 1  

Framing resistance: 2  

Total score: 6 / 8

Interpretation

7–8  Stable representation

4–6  Moderately stable representation

1–3  Weak representation

0      No reliable representation detected
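The template and the interpretation bands above can be combined into a small helper. This is a direct transcription of the rubric, with each subscore constrained to the 0–2 range used in the template.

```python
def interpret(entity: int, stability: int, anchoring: int, framing: int) -> tuple[int, str]:
    """Total the four 0-2 subscores and map the total to the interpretation bands."""
    for s in (entity, stability, anchoring, framing):
        if not 0 <= s <= 2:
            raise ValueError("each subscore is 0, 1, or 2")
    total = entity + stability + anchoring + framing
    if total >= 7:
        band = "Stable representation"
    elif total >= 4:
        band = "Moderately stable representation"
    elif total >= 1:
        band = "Weak representation"
    else:
        band = "No reliable representation detected"
    return total, band

# The Example Institute scores from the worked example above.
print(interpret(2, 1, 1, 2))  # prints (6, 'Moderately stable representation')
```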

Appendix D – Manipulation and reliability signals

Observable patterns that may indicate centrally constructed or artificially shaped visibility.

• identical descriptions repeated across many sites  

• heavy presence on template directories  

• repeated claims without independent evidence  

• absence of independent editorial coverage  

• highly polished but information-light descriptions  

• coordinated appearance of references in short time periods  

These patterns do not prove manipulation but may indicate that the organisation’s public description is centrally shaped rather than independently documented.
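The first signal in the list, identical descriptions repeated across many sites, is the easiest to check mechanically. The sketch below flags exact matches after whitespace and case normalisation; the site texts and the threshold of three are assumptions for illustration, and the remaining signals in the list need editorial judgement rather than code.

```python
from collections import Counter

def normalise(text: str) -> str:
    """Collapse whitespace and case so trivially reformatted copies match."""
    return " ".join(text.lower().split())

def duplication_signal(descriptions: list[str], threshold: int = 3) -> bool:
    """Flag when the same normalised description appears `threshold` or more times."""
    counts = Counter(normalise(d) for d in descriptions)
    return any(n >= threshold for n in counts.values())

# Hypothetical descriptions scraped from four sites.
site_texts = [
    "Acme Labs is the world's leading innovator.",
    "Acme Labs is the world's leading innovator.",
    "ACME LABS is the world's leading innovator.",
    "Acme Labs builds measurement tools.",
]
print(duplication_signal(site_texts))  # prints True
```

A positive flag is only a prompt for closer inspection, consistent with the caveat above that these patterns indicate rather than prove central shaping.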