HERE Technologies


24 April 2026

Why LLMs understand language but not space

Navigating the city

True spatial reasoning requires computation over real-world data, something large language models (LLMs) were never designed to do.

Large language models (LLMs) have become remarkably capable. They write code, analyze documents, summarize research and answer complex questions across many domains.

Because of this, it is tempting to assume that they also understand the physical world.

But when language models are asked about locations, routes, distances, or real-world spatial relationships, they frequently fail in subtle but important ways.

The reason is simple. Large language models are built to predict text, not to perform spatial calculations or evaluate how the real world actually works.

Consider a simple query:


“Find the closest bakery to Gunnersbury Park in London”

A language model can search the web, retrieve reviews and suggest a nearby bakery. But when we validate the answer on an actual map, the recommendation may not be the closest location at all.

This is happening because the model is inferring proximity from textual descriptions, not calculating distance using geographic coordinates and road networks. Asking an LLM to compute spatial relationships is like asking someone to navigate a city using only restaurant reviews and travel blogs, instead of a map and a compass.

To determine the real answer, a system must:

  • identify the park’s coordinates

  • identify candidate bakeries

  • compute distances across the road network

  • select the closest reachable location

This is geospatial computation, not language inference.
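The steps above can be sketched in a few lines of Python. Everything here is illustrative: the coordinates and bakery names are made up, and straight-line (haversine) distance stands in for the true road-network distance a real routing engine would compute:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Step 1: the park's coordinates (hypothetical values).
park = (51.5047, -0.2882)

# Step 2: candidate bakeries (names and coordinates are invented).
bakeries = {
    "Bakery A": (51.5101, -0.2755),
    "Bakery B": (51.4950, -0.3100),
    "Bakery C": (51.5203, -0.2500),
}

# Steps 3-4: compute distances and select the closest candidate.
closest = min(bakeries, key=lambda name: haversine_km(*park, *bakeries[name]))
print(closest)
```

Even this toy version makes the point: the answer falls out of coordinates and a distance function, not out of text about the bakeries.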

Navigation: where text falls apart

Navigation-related queries expose this limitation even more clearly. When asked:

“Navigate me from William Huskisson Memorial to Haynes Fine Art London.”


A language model may produce plausible step-by-step directions. But when we compare those instructions with the actual road network, the generated path often:

  • cuts through streets incorrectly

  • misses available routes

  • produces inefficient or impossible paths

  • miscalculates distance

This is because navigation requires computing a route across a graph of millions of road segments, not predicting a sequence of sentences.
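To make the contrast concrete, here is a minimal Dijkstra shortest-path search over a toy road graph. Real routing engines run this kind of search over millions of segments, with turn restrictions and live conditions; the junction names and segment lengths below are invented:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over an adjacency dict {node: [(neighbor, metres), ...]}."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    # Walk back from the goal to reconstruct the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Toy road graph: junctions and segment lengths in metres (all invented).
roads = {
    "memorial": [("junction1", 300), ("junction2", 500)],
    "junction1": [("junction3", 400)],
    "junction2": [("gallery", 450)],
    "junction3": [("gallery", 200)],
}

path, metres = shortest_path(roads, "memorial", "gallery")
print(path, metres)
```

A language model predicts the next sentence; this search exhaustively compares alternatives over the graph, which is why it can guarantee the returned path is valid and shortest.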

Spatial relationships are not linguistic

Another example illustrates a deeper issue.

Query:

“Find an Italian restaurant halfway between Notting Hill and Westminster.”


A language model may recommend a popular restaurant in the city center, often based on training data that may already be outdated. Modern GenAI systems augment this with tools to retrieve fresher information, reducing obvious errors such as suggesting places that no longer exist.

But even with up-to-date data, the harder problem remains: determining whether a place is actually relevant in space, whether it is truly “nearby,” meaningfully “on the way,” or feasible to reach under real-world conditions. This is where language understanding ends and spatial reasoning begins.

Determining whether a location is equally distant from two points requires:

  • geocoding both locations

  • calculating their midpoint

  • evaluating candidate restaurants against that midpoint


Without these computations, the model is simply guessing based on text patterns. An LLM is a language-pattern predictor: it reasons over language representations of space, not over space itself. Distance is a geometric property, not a linguistic one.
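Those three steps can be sketched as a small computation, assuming the geocoding has already been done. All names and coordinates below are illustrative, and a flat-earth approximation stands in for proper geodesic math:

```python
import math

def midpoint(p, q):
    """Approximate midpoint of two nearby (lat, lon) points."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def approx_km(p, q):
    """Equirectangular distance approximation, adequate at city scale."""
    km_per_deg_lat = 111.32
    km_per_deg_lon = km_per_deg_lat * math.cos(math.radians((p[0] + q[0]) / 2))
    return math.hypot((p[0] - q[0]) * km_per_deg_lat,
                      (p[1] - q[1]) * km_per_deg_lon)

# Step 1: geocoded anchor points (illustrative coordinates).
notting_hill = (51.5090, -0.1963)
westminster = (51.4995, -0.1248)

# Step 2: the midpoint between the two anchors.
mid = midpoint(notting_hill, westminster)

# Step 3: rank candidate restaurants by distance to the midpoint.
candidates = {
    "Trattoria A": (51.5060, -0.1600),
    "Trattoria B": (51.5150, -0.1400),
    "Trattoria C": (51.5000, -0.1900),
}
best = min(candidates, key=lambda name: approx_km(candidates[name], mid))
print(best)
```

A production system would use road-network travel time rather than straight-line distance, since "halfway" along a route can differ from the geometric midpoint.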

The dynamic world problem

Even when questions involve real-time information, the same issue appears.

“Given current traffic conditions, what would be the ETA from Luton to London?”


A language model may produce an answer based on typical travel times it has seen during training or retrieved via search tools. However, these reflect historical or generic conditions. They do not account for real-time constraints such as roadworks, temporary closures, or traffic disruptions, nor do they guarantee that the computed route is actually valid under those conditions.

But the real answer depends on:

  • current traffic flow

  • road incidents

  • temporary restrictions

  • route optimization under current conditions

Without access to live spatial signals and routing engines, the model can only approximate.
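One way to see why live data changes the answer: in a routing engine, current conditions enter as travel-time weights on road segments. The sketch below recomputes an ETA when a live speed factor drops on one segment; the route, segment names, lengths and speeds are all invented for illustration:

```python
def eta_minutes(segments, live_factors=None):
    """Sum per-segment travel times into an ETA.

    segments: list of (segment_id, length_km, free_flow_kmh)
    live_factors: optional {segment_id: speed multiplier from live traffic}
    """
    live_factors = live_factors or {}
    total_hours = 0.0
    for seg_id, length_km, free_kmh in segments:
        speed = free_kmh * live_factors.get(seg_id, 1.0)
        total_hours += length_km / speed
    return total_hours * 60

# Invented Luton-to-London route: three segments with free-flow speeds.
route = [
    ("M1-south", 30.0, 100.0),
    ("A406", 10.0, 60.0),
    ("inner", 8.0, 30.0),
]

print(round(eta_minutes(route)))                    # ETA under free-flow speeds
print(round(eta_minutes(route, {"M1-south": 0.4})))  # ETA with congestion on the M1
```

The same route yields a very different ETA once one segment's live speed factor changes, which is exactly the signal a text-trained model has no access to.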

The illusion of spatial knowledge

These failures often surprise people because LLMs appear to know a lot about the world. Language models can identify cities and landmarks, recall approximate distances, describe neighborhoods and suggest popular locations.

But this knowledge comes from textual associations learned at training time, not from spatial computation.

The model knows that: “Big Ben is near Westminster.”

It does not know the coordinates, topology or road network required to compute a route there.

The physical world is not just information, it is structure

Language models operate on tokens and probabilities, not on spatial graphs, live data, or deterministic algorithms. As in mathematics, they approximate outcomes based on patterns rather than performing exact computation. But unlike math, spatial reasoning must also account for dynamic, real-world conditions, making purely language-based approaches fundamentally insufficient.

When LLMs work and when they don’t

LLMs are best at processing, generating and interpreting human language.

But they struggle with tasks that require logical reasoning (routing and geometric calculations) and real-time factual accuracy (dynamic spatial content), and they often falter at basic math, logic and long-term memory. These are foundational elements of spatial intelligence, and they are fundamentally computational problems, not linguistic ones.

What this means for location-aware agentic AI

As agentic AI systems move from answering questions to performing real-world tasks, this distinction becomes critical.

Planning journeys, coordinating logistics, understanding traffic, finding locations based on spatial constraints all require reliable spatial reasoning.

And spatial reasoning requires something different from language prediction.

LLMs are exceptional at understanding what a user wants, but determining how that request maps onto the physical world requires deterministic computation over real location data.


Aleksandra Kovacevic

Sr. Director, Head of Responsible AI
