
AI at HERE Technologies: our code of ethics


To guide the proper use of AI at HERE, we follow a code of ethics that aligns with the fundamental requirements for trustworthy AI – this is how we put these principles to work.

Artificial intelligence is integral to our vision of the future. At HERE, we apply machine learning to automate map creation and make our location services more useful. We also run AI systems with a sophisticated capacity to learn and adapt independently of human design. These systems train on the biggest dataset of all – the physical world around us.

Our goal is to deploy AI that assists our customers in the pursuit of solutions to complex challenges like urban congestion, supply chain disruption and climate change.

Undoubtedly, AI has the potential to profoundly improve our lives. This motivates me and my HERE colleagues in our work. However, we recognize that such powerful technology must be used responsibly. Even with good intentions, AI systems can have unforeseen consequences. A key limitation is their lack of objectivity; algorithms make decisions based on large datasets that might be loaded with human biases and inaccuracies. 

To guide the proper use of AI at HERE, we follow a code of ethics that aligns with the fundamental requirements for trustworthy AI as formulated by the EU and referenced in the bloc’s proposed new regulation on AI.

Below, I outline the code’s seven principles, before explaining how we operationalize them. I conclude with a few examples of how we use AI at HERE today.

Seven principles guiding the development and application of AI at HERE

Human agency and oversight

HERE believes that AI systems must not undermine human autonomy. We continually evaluate risks and take appropriate steps to ensure that people have oversight of algorithmic decisions, as well as the ability to overrule them.

Technical robustness and safety

HERE aims to develop AI systems that meet high standards of accuracy and reliability in a wide range of real-world situations.

Privacy and data governance

HERE extends robust privacy and data security measures to cover all aspects of operations and processing where we use AI systems, from ideation, ingestion and enrichment through to model training, application simulation and delivery.

Transparency

HERE aims to provide meaningful explanation and traceability for its AI systems. We recognize the importance of tracking which data sources an algorithm or heuristic uses, understanding the processes behind its decision-making, and communicating its potential limitations to users.

Diversity, non-discrimination and fairness

HERE aims for fairness and the avoidance of bias in the training and implementation of AI systems, recognizing the risks of inadvertent or unintended discrimination.

Societal and environmental well-being

HERE is committed to developing and using AI systems that benefit people, our environment, and wider society. Location-aware AI systems have an important role to play in, among other things, sustainable resource usage, smart cities, and more efficient transportation networks.

Accountability

HERE is committed to implementing AI systems that work responsibly, as well as ensuring that any possible negative impacts are redressed in accordance with the company’s high standards of corporate governance.

Putting our principles to work

These principles for trustworthy AI are empty platitudes unless properly applied. So how do we do that?

First and foremost, we operate a privacy-by-design approach to ensure that both privacy and ethics are baked in from the ideation phase and thereafter addressed throughout the entire development cycle.

Second, we provide guidelines to AI developers and checklists to product managers. Our teams assess the risks associated with each AI system: is it explainable and understood? What would be the impact on individuals and society if something were to go wrong? The resulting risk impact assessment guides how we monitor, evaluate and mitigate those risks.
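To make this concrete, here is a minimal, hypothetical sketch of the kind of triage such a checklist might encode. The scoring scale, the explainability bonus, and the mitigation tiers are all invented for illustration; they are not HERE's actual assessment process.

```python
# Hypothetical risk-triage sketch: score a system by impact and likelihood,
# add extra scrutiny for opaque models, and map the score to a mitigation tier.
# All scales and thresholds here are illustrative assumptions.

def risk_tier(explainable: bool, impact: int, likelihood: int) -> str:
    """impact and likelihood are on a 1-5 scale; returns a mitigation tier."""
    score = impact * likelihood
    if not explainable:
        score += 5  # opaque systems get extra scrutiny in this sketch
    if score >= 15:
        return "high: human oversight and fallback required"
    if score >= 8:
        return "medium: periodic monitoring and evaluation"
    return "low: standard review"
```

The point of such a function is not precision but consistency: every AI system passes through the same questions, and the answer determines how closely it is watched.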

Consider the example of an AI system trained to capture and process real-world road hazards, which are then fed to the computer of an automated vehicle. Inaccurate readings produced by the system could affect the safety of road users. Equally, if certain geographies were underrepresented in the system’s training imagery, there could be a greater risk of errors in hazard detection in those regions. Some groups of people would then be at greater risk than others. That would not be a fair and trustworthy system, and a strong fallback plan would be needed.

In this case, a fix might involve setting boundary conditions and having the system fall back to rule-based AI. We would also look at how we can improve the quality of the training data as well as examine what data is used and how decisions are made. As Explainable AI (XAI) is still an emerging field, we are taking a pragmatic "crawl-walk-run" approach to providing a meaningful explanation of an AI system proportional to the potential impact it could have on individuals or society.
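The boundary-condition fallback described above can be sketched roughly as follows. The detector, the confidence floor, and the hand-written rules are hypothetical stand-ins for illustration, not HERE's implementation.

```python
from dataclasses import dataclass

# Illustrative only: the Detection type, confidence floor, and fallback
# rules below are invented for this sketch.

@dataclass
class Detection:
    hazard: str
    confidence: float  # model confidence in [0, 1]

CONFIDENCE_FLOOR = 0.85  # boundary condition: below this, don't trust the model

def rule_based_fallback(sensor_reading: dict) -> Detection:
    """Conservative hand-written rules used when the learned model is unsure."""
    if sensor_reading.get("visibility_m", 1000) < 50:
        return Detection("low_visibility", 1.0)
    return Detection("none", 1.0)

def detect_hazard(model_output: Detection, sensor_reading: dict) -> Detection:
    # Fall back to explainable rules whenever the model's confidence
    # drops below the floor, so edge-case decisions stay auditable.
    if model_output.confidence < CONFIDENCE_FLOOR:
        return rule_based_fallback(sensor_reading)
    return model_output
```

The design choice is that the fallback path is deliberately simple and conservative: it trades detection coverage for behavior that a human reviewer can fully trace.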

Last but not least, the development of beneficial and trustworthy AI does not happen in a vacuum. We also look to the outside world for insight and inspiration. Dialogue with those who may be affected by an AI system is a must if we are to fully understand its promise and impact. So too are collaboration and co-innovation with industry, government, and research institutions like IARAI. An AI system that HERE develops may prove useful in solving a specific mobility problem. But could partners apply the same system in other areas, and if so, might further ethical problems arise?
Urban planning offers lots of examples of where things could go wrong. Take an AI system that trawls through historical mobility data to help city planners determine where best to build a new road or transit line. If it is trained to make recommendations based on mobility trends, might it neglect poorer parts of the city where people could have smaller mobility footprints?

Inequality in transport exists and this is an example of where AI may not only predict the future but make the future. Only by creating a continual learning loop with all stakeholders can we get closer to understanding the full impact of AI on individuals and innovate in this increasingly complex and challenging field. 

Where HERE Technologies uses AI today

I am often asked about how we use AI at HERE. We apply AI systems of varying levels of sophistication across our technology stack. Below, I list a few examples.

Automating the detection and processing of real-world change in our map
The HERE map is an evolving and expanding canvas that serves different platform capabilities, products, and applications used by our customers. HERE uses numerous AI systems to create and update this map, including its 3D and HD elements. Our models are trained to extract and process information from imagery, LiDAR, and live vehicle sensor data.

Information such as road geometry, signage, and traffic rules helps vehicles of different levels of automation operate safely and efficiently, while building footprints and venue maps support use cases such as last-mile delivery, emergency response, and targeted advertising.

Real-time detection of hazards
To help make driving safer, HERE uses AI systems to detect and notify drivers of potential hazards and changes in the road network, such as road closures.

Understanding road traffic dynamics
HERE location services and products leverage AI systems to deliver more useful routing and search calculations. We also deploy algorithms focused on specific problems, such as predicting traffic congestion following an incident.

Enhancing privacy
HERE uses AI systems to help protect people’s privacy. We apply blurring filters to vehicle registration numbers and faces in the street-level imagery we collect.

For our traffic service, we suppress revealing data and automatically randomize and rotate the ID numbers associated with the vehicles and devices supplying traffic probe data. We also provide an anonymization pipeline for customers to process their real-time location data on the HERE platform.

This service removes personal information while preserving the utility of the data for various use cases. HERE is also developing a privacy diagnostic tool that organizations can use to test how well their anonymized datasets stand up to a reconstruction attack.
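The random re-assignment and rotation of probe IDs mentioned above can be illustrated with a minimal sketch. The class, the epoch handling, and the pseudonym format are hypothetical assumptions for illustration, not the actual HERE pipeline.

```python
import secrets

# Hypothetical sketch of per-epoch pseudonym rotation for probe IDs.

class PseudonymTable:
    """Maps real device IDs to random pseudonyms, re-drawn each epoch."""

    def __init__(self):
        self._table = {}

    def pseudonym(self, device_id: str) -> str:
        # Assign a fresh random ID on first sight within the current epoch;
        # the same device keeps one pseudonym only within that epoch.
        if device_id not in self._table:
            self._table[device_id] = secrets.token_hex(8)
        return self._table[device_id]

    def rotate(self):
        # Discard all mappings so the next epoch's pseudonyms cannot be
        # linked back to the previous epoch's trajectories.
        self._table = {}
```

Rotation is what limits linkability: any single epoch still yields useful traffic statistics, but long-term trajectories cannot be stitched together across epochs from the pseudonyms alone.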

Location enrichment
Customers working on our platform use our own internally developed AI tooling for common location enrichment use cases, such as map matching, geospatial clustering, and semantic segmentation. The pre-trained models can be applied and adapted by customers for specific uses. For example:
  • A software technology company conflates its data with HERE data and third-party data (from the HERE Marketplace) and develops and operationalizes a road friction machine learning algorithm.
  • A large mining company applies heuristics to its data and visualizes truck speed in the context of the private map it has developed on the HERE platform.
  • A large automotive company develops and tests its algorithms at scale on the HERE platform as part of its driver assistance program.
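Map matching, one of the enrichment use cases listed above, can be illustrated with a toy nearest-segment matcher: snap a GPS fix to the closest road segment. Production map matching is far richer, typically combining vehicle heading with probabilistic models over candidate paths; the planar geometry below is purely illustrative.

```python
import math

# Toy nearest-segment map matching on a planar approximation.

def _dist_point_segment(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp to its endpoints.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def map_match(point, segments):
    """Return the road segment closest to the GPS fix."""
    return min(segments, key=lambda s: _dist_point_segment(point, s[0], s[1]))
```

Even this naive version shows why enrichment matters: a raw GPS trace only becomes useful for speed, friction, or routing analysis once each fix is tied to a specific road segment.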

Location-aware AI
Lately, the role of AI at HERE has expanded into the realm of ‘location-aware AI’. This is where we get into more sophisticated implementations of AI.

We use the term location-aware AI to signify the shift to a new computing paradigm where models are the outcome of autonomous rather than automated data processing. We believe it will offer transformative opportunities for our customers.
One area of our research focuses on how an agent can interact in an environment without any explicit prior information about it and learn how to best ‘navigate’ through it. This will be useful for a large spectrum of use cases such as autonomous driving, last-mile delivery, and private mapping.

Because location-aware AI agents understand the properties of spatial information, they can also uncover new patterns in data as well as understand how different location objects relate to one another.

Such linkages will prove useful in simulations of future real-world states. They also lend greater spatial awareness to adaptive applications. In practice, that means giving vehicles, machines, smartphones, wearables and a myriad of sensors the ability to better understand, predict, and react to their surroundings.

As we design and deploy AI systems, I believe HERE and its customers are only at the beginning of a long and exciting journey of discovery. The intersection of AI and location technology can help unlock solutions to some of the world’s biggest problems.

However, delivering on AI’s promise also means we must comprehend, mitigate and be transparent about the risks associated with its use. Our code of ethics, coupled with robust operationalization of its underlying principles, will guide our approach.

Giovanni Lanfranchi


Senior Vice President Development and Chief Technology Officer
