Debate worth having: do self-driving cars need maps to operate?

Maps. They’re essential for guidance. But how important are they for advanced driver assistance systems (ADAS) and autonomous vehicles?
These days, it’s easy to take the humble map for granted. It’s in our smartphones and our cars, helping us find and discover new places. We often overlook its convenience, but without a map we would, quite literally, be lost.
However, beyond A-to-B navigation, mapping and location intelligence are playing an important role in helping cars become more situationally aware, from ADAS features such as adaptive cruise control, automatic emergency braking and lane keep assist, to autonomous vehicles (AVs) that take over the driving completely.
While most of these systems rely on cameras, sensors and lidar to function, maps are being used to enhance their capability, helping us understand the road ahead, what the speed limits and road rules are, and the positioning of other vehicles and road users.
But as cars become smarter and more self-aware, is the map in danger of redundancy? Do self-driving cars still need detailed, constantly updated maps? Or will artificial intelligence (AI) be capable of navigating the world the same way humans do, by observing, reasoning and reacting in real time?
To find out, we spoke to two experts about how essential maps are to automated driving systems.

Without maps, we’re driving blind
“For us, ADAS starts from SAE Level 0 but goes up to Level 2. After that, we call it highly automated driving, so Level 3 onward,” said Sjoerd Spaargaren, Product Marketing Manager, Automated Driving at HERE.
“When we talk about ADAS for SAE Level 0, that’s where we say you need to have a map or map information. Some car companies aren’t in favor of having a map to make certain decisions. But in the end, they still do need a map, even if only for basic navigation purposes.
“When you go into these higher-level use cases, so as you go up the SAE Levels, car companies realize more and more that they also need a proper, advanced map with relevant attributes like lane and localization information.
“On one side, you have the sensor stack, which is really good at getting quick, dynamic information: other road users, detected objects, and where the car is in reference to other things on the road or road furniture, as well as its GPS position.
“The sensors can model that live environment. But the map can be used for contextualization and guidance and help the car see beyond. You can provide information like traffic, range-aware route optimization, road rules like speed limits and conditional signs, which sensors can pick up but don’t necessarily understand how to interpret.
“For example, things like school-zone signs, or signs marking the entrance to a village. They mean the speed limit automatically goes down, but it’s not what we call an explicit speed limit, which would show an actual value; these implicit limits are difficult for sensors to interpret.
“If you want proper localization of objects or change detection, you really need that fusion of the sensor data and the map. The map then acts as a canvas where you project everything, and then the sensor data can be put on top.
“If you navigate around the corner of a building, for instance, the sensors wouldn’t be able to see beyond that. Or if you’re reversing across a cycle lane, the map knows it’s there and can provide a warning to the driver.
“But we don’t think that having either a map or sensor data on its own is the best way forward. We really see that combining them is the best of both worlds.
“Then there’s sensor redundancy. If the sensors get wet, or are obscured by dust, fog, mud or snow, they don’t always work. With the map, we can act as a backup for what the sensors can’t see.
“I think there will always be a point where the map is required. The sensors cannot detect everything. The only Level 3 production vehicles currently on the road actually use both map and sensor data.
“Maps aren’t just an add-on. They’re becoming foundational infrastructure for automated driving. Without them, we’re driving blind.”
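The fusion-and-fallback idea described above can be sketched in a few lines of code. This is purely illustrative: all function names, parameters and thresholds here are hypothetical and do not correspond to any vendor's actual API; it simply shows how a map-stored value can serve as a backup when a live sensor reading is missing or unreliable.

```python
# Illustrative only: a toy sketch of the "map as backup" idea.
# All names and thresholds are hypothetical, not any vendor's actual API.

def effective_speed_limit(map_limit_kph, sensor_reading, sensor_confidence,
                          confidence_threshold=0.8):
    """Fuse a map-stored speed limit with a camera-detected speed sign.

    When the sensor reading is missing or low-confidence (e.g. the camera
    is obscured by fog or mud), fall back to the map value.
    """
    if sensor_reading is not None and sensor_confidence >= confidence_threshold:
        return sensor_reading  # trust a confident live detection
    return map_limit_kph       # otherwise the map acts as the backup


# Clear weather: a confident camera detection of a 30 km/h sign wins.
print(effective_speed_limit(50, sensor_reading=30, sensor_confidence=0.95))  # 30
# Camera obscured: the map's stored 50 km/h limit is used instead.
print(effective_speed_limit(50, sensor_reading=None, sensor_confidence=0.0))  # 50
```

A production system would fuse many more attributes than a single speed limit, but the pattern is the same: the map supplies a prior, and live sensor data overrides it only when it is available and trustworthy.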

Maps aren’t required at Level 5
“Autonomous vehicles have three main pillars,” said Michael Frenkel, Research and Innovation Engineer and Lecturer, ex-BMW Group and Mobileye. “The first pillar is the hardware: we need the relevant sensors, so lidar, radar, cameras and so on.
“The second pillar is the AI brain: trying to build an AI that mimics human thinking, that knows how to negotiate the road and be aggressive enough.
“In an ideal world, if we could build that very smart, truly human AI, that would be it. We would just need the sensors, like eyes, but in electronic form. The AI brain would figure it out.
“Practically, however, this is not where the AV challenge stands today. The problems, and not just the technical challenges but also the safety requirements, mean the AI would need to be so close to perfect that it is simply not practical.
“Sensors contribute a huge amount of data. Each lidar produces gigabytes, and the same goes for cameras, so the data demands grow exponentially. Practically, we cannot even process everything in a timely manner for the AV requirements. This is where the third pillar comes into play: the map. Its main role is basically to ease the computational load on the other two pillars and make something like AVs practical.
“Only when we have all three pillars can they interact and accomplish some degree of highly automated driving. If we get to Level 5, then we don’t need maps, because the AI is so smart it can figure out all the challenges. But I don’t think we’ll ever reach autonomous driving as it is defined today. We will reach a certain level, but we can already see that progress is very slow compared to the hype of the previous decade.
“Therefore, my definition of Level 5 is to overcome those AI challenges: to build those neural networks, those large language models (LLMs), those AIs.
“Because even when we, as humans, arrive at a T-junction, for instance, those occlusions affect us in the same way. We don’t have X-ray vision, but we are smart enough to figure it out.
“But until we have that AI capability, maps are absolutely critical. There is even a very big question as to whether that definition of Level 5 is achievable at all.”