
Enabling a smarter future with the HERE Reality Index

This article was originally published on LinkedIn by CTO Brian Lent.

Think about all the objects around us in terms of information: a street address, the hours for a coffee shop, the history of a building, the inventory of a store. All of this information can be thought of as intelligence points in the world around us, and the Reality Index connects the dots.

That’s a relatable metaphor, but in reality, what we’re talking about here extends way, way beyond individual points of information. The Autonomous World, and all its connected parts, needs a near real-time geospatial index of places, objects, entities, and knowledge over time.
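To make that idea a little more concrete, here’s a minimal sketch in Python of what a geospatial index can look like; the GeoIndex class, its fields, and its grid scheme are illustrative assumptions, not HERE’s actual implementation. Entities are bucketed into fixed lat/lon grid cells, and every record is timestamped so the index can answer both “what is near here?” and “how fresh is this?”

```python
import time
from collections import defaultdict

class GeoIndex:
    """Toy geospatial index: entities bucketed into fixed lat/lon grid
    cells, with each record timestamped so queries can ask both
    'what is near here?' and 'how fresh is this record?'."""

    def __init__(self, cell_deg=0.01):   # ~1 km cells near the equator
        self.cell_deg = cell_deg
        self.cells = defaultdict(dict)   # (row, col) -> {entity_id: record}

    def _cell(self, lat, lon):
        return (int(lat / self.cell_deg), int(lon / self.cell_deg))

    def upsert(self, entity_id, lat, lon, attrs):
        record = {"lat": lat, "lon": lon, "attrs": attrs, "ts": time.time()}
        self.cells[self._cell(lat, lon)][entity_id] = record

    def nearby(self, lat, lon):
        """Return every entity in the query cell and its 8 neighbours."""
        row, col = self._cell(lat, lon)
        hits = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                hits.extend(self.cells.get((row + dr, col + dc), {}).items())
        return hits

index = GeoIndex()
index.upsert("cafe-42", 40.7527, -73.9772, {"type": "cafe", "hours": "6-18"})
print(index.nearby(40.7530, -73.9770))
```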

To put it mildly, that’s a lot of data, and we’re collecting it. We’re collecting it from sources we’ve talked about before, like GPS, cameras, and ultrasonic, lidar, and radar sensors in vehicles on a global scale. We’re also gathering geo-location data from websites, and public information from social media sites.

We’re also planning to utilize data from systems that leverage our cloud processing. For example, a drone flying over a crop field might use our cloud system to analyze spectral data, or an augmented reality smartphone app might need location data to position custom objects in space. Even something as simple as a connected thermostat or home automation IoT device can provide relevant and useful anonymized location information (and benefit from this data too).

All of these inputs feed into the HERE Reality Index, which maintains and updates a highly accurate view and understanding of the physical world.

You’re already aware that we’re applying AI to recognize objects in the world – our Computer Vision and Deep Learning Models can already spot stop signs and roadside lane markers with an extremely high degree of accuracy. On the web, our Natural Language Understanding algorithms can tell the difference between the subway [station] on 14th Street, and the Subway [sandwich shop] on 14th Street.
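As a toy illustration of that kind of disambiguation (a sketch only; our production NLU models are far more sophisticated, and the candidate entities and cue words below are invented for the example), here’s how surrounding context can pick between two entities that share a surface form:

```python
# Two candidate entities that share the surface form "subway" on 14th Street.
CANDIDATES = [
    {"name": "14th Street subway station", "kind": "transit",
     "cues": {"train", "station", "ride", "platform", "uptown"}},
    {"name": "Subway sandwich shop on 14th Street", "kind": "restaurant",
     "cues": {"sandwich", "lunch", "eat", "footlong", "order"}},
]

def disambiguate(query: str) -> dict:
    """Pick the candidate whose context cues overlap the query the most."""
    tokens = set(query.lower().split())
    return max(CANDIDATES, key=lambda c: len(tokens & c["cues"]))

print(disambiguate("where do I ride the subway on 14th street")["name"])
print(disambiguate("grab a sandwich at subway on 14th street")["name"])
```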

AI, semantic understanding, and your brain

Identifying objects is powerful, but things grow even more complex as we use AI to semantically understand the objects and entities around us. To put that in perspective, consider this: your brain is the most advanced neural network in the world. You have an innate ability to instantaneously recognize objects, and that is amazing.

When you look out at the road ahead of you, you might see a hole.  Is it a large hole, or a small hole?  We’ll call it medium.  Are the edges rough and crackled, or are they smooth?  Right, rough and jagged. Is it circular, or irregular? Irregular.  Is there water in the bottom?  Yes.

You’ve likely got an image in your mind now. It’s a pothole. 

But if you saw that object in the road ahead of you while you were driving, you wouldn’t have had to ask yourself any of those questions. You would have known, instantly, that it was a pothole. Or, if you were French, that it was a nid de poule; in Germany, a Schlagloch.

Here’s another example of semantic understanding: if you get into a cab in New York, you might ask the driver to take you to 15 Vanderbilt Ave. Or you might ask them to take you to ‘that secret bar above Grand Central Station’. You might ask them to take you to the place that used to be called The Campbell Apartment. A good driver will know that all of those names refer to the same place.

All of these objects, their places, their characteristics, their contents, their various names, and deep meta information form what we call a Knowledge Graph. It’s a teachable system, using advanced AI and Machine Learning, that understands the world in human terms.
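Here’s a minimal sketch of that idea (the node IDs, alias table, and schema below are illustrative assumptions, not our real Knowledge Graph): every surface name a passenger might use resolves to the same canonical node, so all three requests land at the same place.

```python
# Toy knowledge graph: nodes keyed by canonical IDs, plus an alias table
# mapping every surface name back to its node.
GRAPH = {
    "place:campbell": {
        "address": "15 Vanderbilt Ave, New York, NY",
        "type": "bar",
        "formerly": "The Campbell Apartment",
    },
}

ALIASES = {
    "15 vanderbilt ave": "place:campbell",
    "that secret bar above grand central station": "place:campbell",
    "the campbell apartment": "place:campbell",
}

def resolve(name: str) -> dict:
    """All the names for a place resolve to the same graph node."""
    return GRAPH[ALIASES[name.lower()]]

for name in ("15 Vanderbilt Ave",
             "That secret bar above Grand Central Station",
             "The Campbell Apartment"):
    print(name, "->", resolve(name)["address"])
```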

Enabling a new generation of services

When the Reality Index contains this data-rich picture of the world, applications and devices are going to be able to offer services that we’re only just beginning to understand.

Trucking companies and logistics managers can save money and time by using AI to plan out the optimal routes for the trucks in their fleets — routes that take into account all the factors of the real world like opening & closing times for each destination and hours of service for their drivers.
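Here’s a deliberately tiny sketch of that kind of planning (real fleet routing uses proper solvers and live travel times; the stops, windows, and flat travel cost below are invented for the example): brute-force the stop order that finishes earliest while honouring each destination’s opening window and the driver’s shift limit.

```python
from itertools import permutations

# Illustrative stops: (name, service_min, opens, closes), in minutes
# after midnight. A flat travel time stands in for real road estimates.
STOPS = [("warehouse", 30, 8 * 60, 12 * 60),
         ("store", 20, 9 * 60, 10 * 60),
         ("customer", 15, 7 * 60, 17 * 60)]
TRAVEL = 45            # minutes between any two stops, for simplicity
SHIFT_END = 14 * 60    # driver's hours-of-service limit

def plan(start=7 * 60):
    """Try every stop order; keep the feasible one that finishes first."""
    best = None
    for order in permutations(STOPS):
        t, route = start, []
        for name, service, opens, closes in order:
            arrive = max(t + TRAVEL, opens)   # wait if the stop isn't open yet
            if arrive + service > min(closes, SHIFT_END):
                route = None                  # window or shift limit violated
                break
            t = arrive + service
            route.append((name, arrive))
        if route and (best is None or t < best[0]):
            best = (t, route)
    return best

print(plan())  # (635, [('warehouse', 480), ('store', 555), ('customer', 620)])
```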

Automated vehicles and drones in the sky can communicate with each other to share information and accurately navigate urban areas across multiple ‘air layers’ designated for drone traffic.

We can also start to look at the world via location and context for ephemeral events and objects. For example, a food truck might locate itself in a different place every day of the week. With our real-time, contextualized knowledge graph, you could search for “the best taco truck in New York”. The app could then return the highest-rated taco truck, its operating hours, and its current location as of right now. As data access continues to grow, the app could also report today’s specials, items they’re running low on, and how long the line currently is.
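A sketch of what answering that query could look like against a live index (the trucks, ratings, hours, and locations here are made up; a real system would pull them from the kinds of real-time sources described above):

```python
from datetime import datetime

# Toy live-index entries for food trucks.
TRUCKS = [
    {"name": "Taco Tornado", "cuisine": "taco", "rating": 4.8,
     "open": (11, 21), "location": "23rd St & 5th Ave"},
    {"name": "Taqueria Rodante", "cuisine": "taco", "rating": 4.6,
     "open": (10, 15), "location": "Wall St & Broad St"},
    {"name": "Noodle Nomad", "cuisine": "noodles", "rating": 4.9,
     "open": (11, 22), "location": "14th St & 8th Ave"},
]

def best_truck(cuisine: str, now: datetime):
    """Highest-rated truck of the requested cuisine that is open right now."""
    open_now = [t for t in TRUCKS
                if t["cuisine"] == cuisine
                and t["open"][0] <= now.hour < t["open"][1]]
    return max(open_now, key=lambda t: t["rating"], default=None)

hit = best_truck("taco", datetime(2018, 1, 9, 12, 30))
print(hit["name"], "@", hit["location"])  # Taco Tornado @ 23rd St & 5th Ave
```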

At HERE, our goal is to build these technologies to enable the next generation of services, and a smarter future, through a highly collaborative process. We’re demonstrating some of the technologies we’ve built, as well as some brand-new systems, at CES 2018 this year. I hope you will come by and see us to learn more about the Reality Index and how it’s impacting the world of automotive, developer, and IoT innovation.

Brian Lent

Chief Technology Officer, HERE
