AI at HERE Technologies: our code of ethics
Giovanni Lanfranchi — 23 April 2021
6 min read

Large language models (LLMs) have become remarkably capable. They write code, analyze documents, summarize research and answer complex questions across many domains.
Because of this, it is tempting to assume that they also understand the physical world.
But when language models are asked about locations, routes, distances, or real-world spatial relationships, they frequently fail in subtle but important ways.
The reason is simple. Large language models are built to predict text, not to perform spatial calculations or evaluate how the real world actually works.
Consider a simple query:

“Find the closest bakery to Gunnersbury Park in London”
A language model can search the web, retrieve reviews and suggest a nearby bakery. But when we validate the answer on an actual map, the recommendation may not be the closest location at all.
This is happening because the model is inferring proximity from textual descriptions, not calculating distance using geographic coordinates and road networks. Asking an LLM to compute spatial relationships is like asking someone to navigate a city using only restaurant reviews and travel blogs, instead of a map and a compass.
To determine the real answer, a system must:
identify the park’s coordinates
identify candidate bakeries
compute distances across the road network
select the closest reachable location
This is geospatial computation, not language inference.
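The last two steps can be sketched in a few lines. This is a minimal illustration, not a production geocoder or router: the coordinates below are hypothetical geocoding results, and it uses straight-line (haversine) distance where a real system would measure distance along the road network.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical geocoded coordinates for the park and candidate bakeries.
park = (51.4946, -0.2887)
bakeries = {
    "Bakery A": (51.4990, -0.2750),
    "Bakery B": (51.4900, -0.3100),
    "Bakery C": (51.5100, -0.2600),
}

# Select the candidate minimizing distance to the park.
closest = min(bakeries, key=lambda name: haversine_km(*park, *bakeries[name]))
```

No amount of review text can substitute for this arithmetic: the ranking falls out of coordinates and a distance metric, not of descriptions.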
Navigation-related queries expose this limitation even more clearly. When asked:
“Navigate me from William Huskisson Memorial to Haynes Fine Art London.”

A language model may produce plausible step-by-step directions. But when we compare those instructions with the actual road network, the generated path often:
cuts through streets incorrectly
misses available routes
produces inefficient or impossible paths
miscalculates distance
This is because navigation requires computing a route across a graph of millions of road segments, not predicting a sequence of sentences.
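The difference can be made concrete with a shortest-path computation. Below is a minimal Dijkstra search over a toy graph with hypothetical junction names and segment lengths; a production router does the same thing over millions of real road segments with turn restrictions and live speeds.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph {node: [(neighbor, metres), ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessors back from the goal to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Toy road graph: hypothetical junctions, segment lengths in metres.
roads = {
    "Memorial": [("Junction1", 300), ("Junction2", 500)],
    "Junction1": [("Junction3", 400)],
    "Junction2": [("Junction3", 100)],
    "Junction3": [("Gallery", 200)],
}

route, metres = shortest_path(roads, "Memorial", "Gallery")
```

An LLM predicting the next sentence has no mechanism that enforces the invariant Dijkstra guarantees: every returned edge exists, and no shorter path does.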
Another example illustrates a deeper issue.
Query:
“Find an Italian restaurant halfway between Notting Hill and Westminster.”

A language model may recommend a popular restaurant in the city center, often based on training data that may already be outdated. Modern GenAI systems augment this with tools to retrieve fresher information, reducing obvious errors such as suggesting places that no longer exist. But even with up-to-date data, the harder problem remains: determining whether a place is actually relevant in space, whether it is truly “nearby,” meaningfully “on the way,” or feasible to reach under real-world conditions. This is where language understanding ends and spatial reasoning begins.
Determining whether a location actually lies halfway between two points requires:
geocoding both locations
calculating their midpoint
evaluating candidate restaurants against that midpoint

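The midpoint step above is pure geometry. A minimal sketch, using 3-D unit-vector averaging on a spherical Earth model (the coordinates are hypothetical geocodes; a real system would then rank candidate restaurants by road distance to this point):

```python
from math import radians, degrees, sin, cos, atan2, sqrt

def midpoint(lat1, lon1, lat2, lon2):
    """Geographic midpoint of two (lat, lon) points via unit-vector averaging."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    # Average the two points as 3-D vectors on the unit sphere.
    x = (cos(lat1) * cos(lon1) + cos(lat2) * cos(lon2)) / 2
    y = (cos(lat1) * sin(lon1) + cos(lat2) * sin(lon2)) / 2
    z = (sin(lat1) + sin(lat2)) / 2
    # Convert the averaged vector back to latitude/longitude.
    return degrees(atan2(z, sqrt(x * x + y * y))), degrees(atan2(y, x))

# Hypothetical geocoded coordinates for Notting Hill and Westminster.
notting_hill = (51.5090, -0.1963)
westminster = (51.4975, -0.1357)

mid = midpoint(*notting_hill, *westminster)
```

Nothing in a model's training text encodes this point; it exists only as the output of a calculation over coordinates.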
Without these computations, the model is simply guessing based on text patterns. An LLM is a language-pattern predictor: it reasons over language representations of space, not over space itself. Distance is a geometric property, not a linguistic one.
Even when questions involve real-time information, the same issue appears.
“Given current traffic conditions, what would be the ETA from Luton to London?”

A language model may produce an answer based on typical travel times it has seen during training or retrieved via search tools. However, these reflect historical or generic conditions. They do not account for real-time constraints such as roadworks, temporary closures, or traffic disruptions, nor do they guarantee that the computed route is actually valid under those conditions.
But the real answer depends on:
current traffic flow
road incidents
temporary restrictions
route optimization under current conditions
Without access to live spatial signals and routing engines, the model can only approximate.
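The shape of that computation is simple even though the inputs are not: an ETA is a sum of per-segment traversal times, each adjusted by current conditions. The sketch below uses made-up segments and congestion factors standing in for live traffic feeds; a real system ingests probe data, incidents and closures continuously.

```python
# Hypothetical route segments: (length_km, free_flow_kmh, live_congestion_factor)
# where a factor of 1.0 means free flow and higher values mean slower traffic.
segments = [
    (20.0, 110.0, 1.0),   # motorway stretch, clear
    (8.0, 110.0, 2.5),    # motorway stretch, heavy traffic
    (5.0, 50.0, 1.8),     # urban approach, roadworks
]

def eta_minutes(segments):
    """Travel time in minutes, with each segment slowed by its live factor."""
    total_hours = sum(
        length / (speed / factor) for length, speed, factor in segments
    )
    return total_hours * 60

eta = eta_minutes(segments)
```

The congestion factors are the part no static model can predict: they change minute by minute, so the sum must be recomputed from live signals, not recalled from training data.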
These failures often surprise people because LLMs appear to know a lot about the world. Language models can identify cities and landmarks, recall approximate distances, describe neighborhoods and suggest popular locations.
But this knowledge comes from textual associations learned at training time, not from spatial computation.
The model knows that: “Big Ben is near Westminster.”
It does not know the coordinates, topology or road network required to compute a route there.
Language models operate on tokens and probabilities, not on spatial graphs, live data, or deterministic algorithms. As in mathematics, they approximate outcomes based on patterns rather than performing exact computation. But unlike math, spatial reasoning must also account for dynamic, real-world conditions, making purely language-based approaches fundamentally insufficient.
LLMs are best at processing, generating and interpreting human language.
But they struggle at tasks requiring logical reasoning (routing and geometric calculations) and real-time factual accuracy (dynamic spatial content), and they often falter at basic math, logic and long-term memory. All of these are foundational elements of spatial intelligence, and all are fundamentally computational problems, not linguistic ones.
As agentic AI systems move from answering questions to performing real-world tasks, this distinction becomes critical.
Planning journeys, coordinating logistics, understanding traffic, finding locations based on spatial constraints all require reliable spatial reasoning.
And spatial reasoning requires something different from language prediction.
LLMs are exceptional at understanding what a user wants, but determining how that request maps onto the physical world requires deterministic computation over real location data.

Aleksandra Kovacevic
Sr. Director, Head of Responsible AI