Whatever familiarity you have with your city, you’ve likely attained it through first-hand experience, with your regular commutes helping you memorize the layouts and characteristics of the streets. And believe it or not, it’s much the same for computers.
Creating computers that truly understand how a city operates doesn’t end with satellite imaging or GPS tracking. To build systems capable of actually switching on autopilot – not just lane keeping, but fully autonomous driving – you need far more information than the navigation app on your phone can provide. To get it, you need to hit the road.
Do you know the distance between the block you’re on right now and the next? Probably not, because measurements that exact won’t make you a better driver. Humans aren’t crunching numbers in their heads when they drive, and even the safest motorist is still eyeballing it when deciding the best moment to change lanes. But if we’re going to trust self-driving technologies to transport people across cities, their decisions will need to be more than approximations.
Satellites, important as they are, don’t give us the full picture; they’re hundreds or thousands of kilometers away from the Earth’s surface, after all. Put simply, to fully know the roads, you need to take a closer look.
LiDAR gives us that closer look. By surveying surfaces with pulses of light and their reflections, we can build a highly detailed picture of an environment, right down to where the curb meets the road, precise to within centimeters. With data this comprehensive, autonomous vehicles can have an understanding of urban spaces that dwarfs what human drivers know, helping them make safe decisions.
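The principle behind that centimeter precision is simple time-of-flight ranging: the sensor fires a light pulse and times how long its reflection takes to return. A minimal sketch of the arithmetic (illustrative only, not any vendor’s API):

```python
# Time-of-flight ranging: a LiDAR pulse travels to a surface and back,
# so the one-way distance is (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def pulse_distance(round_trip_seconds: float) -> float:
    """Distance in meters to a surface, from a pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A reflection arriving ~66.7 nanoseconds later puts the surface ~10 m away.
print(round(pulse_distance(66.7e-9), 2))  # 10.0
```

Because light covers about 30 cm per nanosecond, timing electronics accurate to a fraction of a nanosecond are what make centimeter-level measurements possible.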
Having a precise map is one thing; keeping it updated is another. Road conditions are continually changing, be it due to planned maintenance or inevitable degradation. Computers need to be aware of these changes in order to provide the most accurate information for driverless vehicles, and the solution lies in the cars themselves.
The modern car is equipped with a variety of sensors that enable the ADAS features that help the driver, but when the sensors of every car are connected, they’re capable of assisting not just one driver, but everyone sharing the road. These sensors can feed into a self-healing map: a crowdsourced location system that utilizes data from various sources – from satellites to vehicles moving in real-time – to provide reliable information about changes in road conditions.
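One way such a self-healing map can stay trustworthy is by requiring corroboration: a change is only committed once several cars report the same anomaly, so a single faulty sensor can’t corrupt the map. A hypothetical sketch of that aggregation step (segment names and the threshold are invented for illustration):

```python
from collections import Counter

# A reported change is only accepted into the map once it has been
# observed this many times, filtering out one-off sensor errors.
CONFIRMATION_THRESHOLD = 3

def changed_segments(reports):
    """reports: (segment_id, observed_condition) pairs from many cars.
    Returns the segments whose change has been independently confirmed."""
    counts = Counter(reports)
    return {segment: condition
            for (segment, condition), n in counts.items()
            if n >= CONFIRMATION_THRESHOLD}

reports = [
    ("elm-st-100", "lane_closed"),
    ("elm-st-100", "lane_closed"),
    ("elm-st-100", "lane_closed"),
    ("oak-ave-210", "pothole"),   # only one report: not confirmed yet
]
print(changed_segments(reports))  # {'elm-st-100': 'lane_closed'}
```

The design choice here is deliberate: the map trades a little latency (waiting for confirmation) for reliability, which matters when driverless vehicles act on the data.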
We know a computer can keep its knowledge of a city up to date, but the real test is how it uses this knowledge to get you to the office quicker. It does so by leveraging data from every driver ahead of you – not just those farther along the road, but those who traveled the same route at another point in time.
AI systems can combine the real-time traffic information from a network of connected vehicles with historical data, allowing them to determine the best routes. This allows cars to ‘see around the corner’ in a sense, anticipating and responding to delays without passengers actually encountering them.
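The idea of ‘seeing around the corner’ can be sketched as a shortest-path search in which each road segment’s cost blends its historical average travel time with the live time reported by connected vehicles currently on it. Everything below – road names, times, and the blend weight – is invented for illustration:

```python
import heapq
from typing import Optional

def blended_cost(historical_s: float, live_s: Optional[float],
                 live_weight: float = 0.7) -> float:
    """Segment cost in seconds: lean on live reports when available,
    fall back to the historical average when no car is reporting."""
    if live_s is None:
        return historical_s
    return live_weight * live_s + (1 - live_weight) * historical_s

def best_route(graph, start, goal):
    """graph: {node: [(neighbor, historical_s, live_s), ...]}.
    Returns (total_cost_s, path) via Dijkstra's algorithm."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, hist, live in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue,
                               (cost + blended_cost(hist, live), nxt, path + [nxt]))
    return float("inf"), []

# Two routes to the office: connected cars report a jam on main_st,
# so the nominally longer side_st route wins without the passenger
# ever encountering the delay.
roads = {
    "home":    [("main_st", 120, 600), ("side_st", 180, None)],
    "main_st": [("office", 60, None)],
    "side_st": [("office", 90, None)],
}
cost, path = best_route(roads, "home", "office")
```

Anticipating a delay, in this framing, is simply a matter of the live reports raising a segment’s cost before your car ever reaches it.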
A computer’s knowledge of a city used to be limited to what a human could tell it. But by putting advanced surveying, sensor and AI technologies on our roads, computers are coming to understand cities better than any local expert could describe them. And when combined with self-driving vehicles, this understanding promises to make travel safer and more efficient.
Learn more about the innovations that drive the New Reality.