Release notes

HERE Workspace & Marketplace 2.18 release



Ensure business continuity with Multi-region setup for Pipelines 

For a pipeline that requires minimal downtime, enable the Multi-region option when creating the pipeline version so that, if the primary region fails, the pipeline version is automatically transferred to the secondary region. Just switch on the Multi-region option within the Web Portal or use the "--multi-region" flag via the CLI.

Note: For a pipeline to work successfully in the secondary region, the input and output catalogs used by the pipeline should be available in the secondary region.

Note: For a Stream pipeline to successfully utilize the multi-region option, it's important to enable Checkpointing within the code to allow Flink to take a periodic Savepoint while running the pipeline in the primary region. When the primary region fails, the last available Savepoint is used to restart the pipeline in the secondary region.
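
For a Flink-based Stream pipeline, checkpointing is enabled in the job code itself. A minimal sketch is shown below; the 60-second interval and exactly-once mode are example choices, not platform requirements, so tune them to your pipeline's latency and state size:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedStreamJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 60 seconds so a consistent state
        // snapshot exists to restart from in the secondary region.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // ... define sources, transformations, and sinks here ...

        env.execute("checkpointed-stream-pipeline");
    }
}
```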

Note: For a Batch pipeline, the Spark History details are also transferred during a region failure.

Note: The state of an on-demand Batch pipeline is not transferred and it will need to be manually re-activated.


Use locally-stored data to speed up development and testing cycles while reducing cost 

Workspace now makes it easy to develop and test your applications with local data. Using either the platform CLI or a new feature of the Data Client Library, you can run a local instance of the Data API and copy data from the cloud Data API to this local instance. Via the CLI, you can also visually inspect the local data in the same way as in the portal. (The latter feature will not be available in China.)

This makes your development cycles faster and your automated tests independent from one another, as each test can easily run against its own copy of the same data set. You will also save cost during the development and testing phase. To learn more, have a look at the tutorial on local development and testing. Note: Local catalogs are intended for development and test purposes; production use cases are not supported.
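
A local development session could then look roughly like the following. The subcommand names below are illustrative assumptions, not the exact CLI syntax; consult the CLI documentation for the real commands:

```
# Start/check a local instance of the Data API (hypothetical commands).
olp local status

# Create a local catalog to hold the copied data.
olp local catalog create my-test-catalog "My test catalog"

# Copy data from a cloud catalog into the local instance so tests
# can run against a private, repeatable snapshot.
olp local catalog copy hrn:here:data::OrgID:my-catalog my-test-catalog
```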


Customize GeoJSON rendering plugins for HERE Map Content and Traffic layers to visualize what you care about most for your use cases

The Schemas for the HERE Map Content layers Topology & Geometry, Cartography, and Building Footprints, as well as the HERE Real Time Traffic Flow layer, now include a GeoJSON rendering plugin. As with all other Schemas that contain such a plugin, you can edit the plugin in the browser to change the data visualization to highlight aspects that you care about as part of your development work. To learn more about writing GeoJSON plugins, see our developer guide.


Use a new GeoJSON property called 'featureTag' to provide custom data filters to your visualization

Although catalog layers typically contain only one specific category of data, these often fall into different subcategories. For example, cartographic data contains parks, rivers, woodland, and many other features. Data Filters enable you to deselect those categories that are not relevant for your use-case-oriented assessment of a dataset.

The Data Inspector automatically shows a 'Data Filters' panel if the selected tile has GeoJSON features with a property named 'featureTag'. For each unique 'featureTag' value, a toggle is shown that controls the visibility of the features with that tag value. This works for GeoJSON layers as well as protobuf layers that have been transformed via a GeoJSON plugin. Check out the Cartography layer of HERE Map Content to see this in action.
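
To illustrate the mechanism, here is a small Python sketch (not the Data Inspector's actual implementation) that derives one visibility group per unique 'featureTag' value — the same grouping the 'Data Filters' toggles expose:

```python
from collections import defaultdict

def collect_feature_tags(geojson):
    """Group GeoJSON features by their 'featureTag' property value.

    Features without the property fall under None and would stay
    visible regardless of the filter toggles.
    """
    groups = defaultdict(list)
    for feature in geojson.get("features", []):
        tag = feature.get("properties", {}).get("featureTag")
        groups[tag].append(feature)
    return dict(groups)

# A toy tile with two tag values, mimicking cartographic subcategories.
tile = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"featureTag": "park"}, "geometry": None},
        {"type": "Feature", "properties": {"featureTag": "river"}, "geometry": None},
        {"type": "Feature", "properties": {"featureTag": "park"}, "geometry": None},
    ],
}

groups = collect_feature_tags(tile)
# One visibility toggle per unique tag value.
print(sorted(groups))       # ['park', 'river']
print(len(groups["park"]))  # 2
```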

To learn how to apply 'featureTag' to enable 'Data Filters' for your data, please refer to the Data Inspector's Developer Guide.


New Services added to the status page

System status for Read Schemas, Write Schemas, and the HERE Dynamic Content Services, including On-Street Parking, Off-Street Parking, Fuel Prices, Safety Cameras, and Traffic Connected Client (TPEG), is now accessible on the status page.

(Traffic Connected Client (TPEG) status has also been added for China and South Korea.)


Compact Index layer data to save storage cost and to query indexed data more efficiently

A new "Index Compaction Library" has been added to the Index storage tool set and is available in the HERE Data SDK for Java and Scala. Continually indexing data from small stream messages creates metadata references to many tiny files, growing the size of Index storage and degrading query performance. With this library, you can compact your Index layer metadata and data to greatly reduce your index storage size, improve query performance, and optimize big data processing by working with larger files. Further, you can use RSQL queries to submit criteria to compact your data within certain time windows, or by other attributes you've defined, to optimize the compaction process, all while you continue to index incoming data. Learn more in the Index Compaction Library documentation.
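
As an illustration, a time-window compaction criterion in RSQL syntax might look like the following, where `ingestionTime` and `eventType` stand in for attributes you have defined on your own Index layer:

```
ingestionTime=ge=1596240000000;ingestionTime=le=1598918400000;eventType==sensor-log
```

Here `;` combines conditions with a logical AND, `=ge=`/`=le=` are the RSQL greater-or-equal and less-or-equal operators, and `==` tests equality.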


Backup region support on the status page

Events on the status page now include the region impacted in the outage details view. This is to support Data API users who are replicating data in a backup region. Events from 13 August forward will include the region field.



Use Map Creator with platform credentials

Platform credentials can now be used to access Map Creator. Log in to the platform and you'll see the Map Creator link in the launcher. With Map Creator you can edit the HERE map and help keep the world up to date. This includes editing places of interest, updating addresses and their exact locations, and updating roads, cycle paths, and walkways. Existing Map Creator users should note that when logging in with platform credentials, edits and feedback from previous accounts that are not associated with your platform organization will not carry over.
(Not available on the Platform in China.)


Changes, Additions and Known Issues


SDKs and tools

Go to the HERE platform changelog to see details of all changes to our CLI, the Data SDKs for Python, TypeScript, C++, Java and Scala as well as to the Data Inspector Library.


Web & Portal

Changed: Support portal and documentation links have been removed from the launcher and now reside in the support menu, designated with its icon.

Changed: Links to documentation can now be found within the top navigation of each module of the platform portal (data, pipelines, marketplace, access manager). These links were previously found in the launcher menu.

Changed: The profile menu (found in the top right of the portal navigation, shown with an icon or your initials) has been updated. Links to apps and keys and to the plugin installer have been removed from this menu, and a link to notification preferences has been added. Account settings have also been consolidated into one page, eliminating the previous profile page.

Changed: For the custom run-time configuration of a Pipeline Version, the character limit for the property name has been increased to 256 and the character limit for the property value has been increased to 1024.

Issue: Pipeline Templates can't be deleted from the Portal UI.
Workaround: Use the CLI or API to delete Pipeline Templates.

Issue: In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.
Workaround: Refresh the Jobs and Operations pages to see the latest job or operation in the list.


Projects & Access Management

Issue: A finite number of access tokens (~250) are available for each app or user. Depending on the number of resources included, this number may be smaller.
Workaround: Create a new app or user if you reach the limitation.

Issue: A finite number of permissions are allowed for each app or user in the system across all services. This number is reduced depending on the resources included and the types of permissions granted.

Issue: All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status, but not start and stop a pipeline.
Workaround: Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue: When updating permissions, it can take up to an hour for changes to take effect.

Issue: Projects and all resources in a Project are designed for use only in Workspace and are unavailable for use in Marketplace. For example, a catalog created in a Platform Project can only be used in that Project. It cannot be marked as "Marketplace ready" and cannot be listed in the Marketplace.
Workaround: Do not create catalogs in a Project when they are intended for use in both Workspace and Marketplace.



Issue: The changes released with 2.9 (RoW) and 2.10 (China) to add OrgID to Catalog HRNs, and with 2.10 (Global) to add OrgID to Schema HRNs, could impact any use case (CI/CD or other) where comparisons are performed between HRNs used by various workflow dependencies. For example, requests to compare the HRNs that a pipeline is using versus those a group, user, or app has permissions to will result in errors if the comparison expects results to match the old HRN construct. With this change, Data APIs return only the new HRN construct, which includes the OrgID (e.g. olp-here…), so a comparison between an old HRN and a new HRN will be unsuccessful.

  • Reading from and writing to Catalogs using old HRNs is not broken and will continue to work until October 30, 2020.
  • Referencing old Schema HRNs is not broken and will work in perpetuity.

Workaround: Update any workflows comparing HRNs to perform the comparison against the new HRN construct, including OrgID.
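
One transitional approach is to normalize both HRNs before comparing, ignoring the OrgID (realm) field. The Python sketch below assumes the six-field HRN layout visible in the examples later on this page (hrn:partition:service:region:realm:resource); it is an illustration, not platform-provided code:

```python
def hrn_without_org(hrn: str) -> tuple:
    """Split an HRN of the form hrn:partition:service:region:realm:resource
    and return every field except the realm (OrgID)."""
    parts = hrn.split(":")
    # parts[4] is the realm (OrgID); skip it.
    return (parts[1], parts[2], parts[3], parts[5])

def same_catalog(hrn_a: str, hrn_b: str) -> bool:
    """Compare two catalog HRNs while ignoring the OrgID field, so an
    old-style HRN still matches its new OrgID-qualified form during
    the transition period."""
    return hrn_without_org(hrn_a) == hrn_without_org(hrn_b)

# Old construct (no OrgID) vs. new construct (with OrgID):
print(same_catalog("hrn:here:data:::my-catalog",
                   "hrn:here:data::olp-here:my-catalog"))     # True
print(same_catalog("hrn:here:data:::my-catalog",
                   "hrn:here:data::olp-here:other-catalog"))  # False
```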

Issue: Searching for a schema in the Portal using the old HRN construct will return only the latest version of the schema.  The Portal will not show older versions tied to the old HRN.

Workaround: Search for schemas using the new HRN construct OR lookup older versions of schemas by old HRN construct using the OLP CLI.

Issue: Visualization of Index layer data is not yet supported.



Fixed: For Stream pipeline versions activated with the high-availability mode, the failure in selecting the primary Job Manager is now fixed with the Stream-3.0.0 run-time environment.

Deprecated: pipeline_jobs_canceled metric used within the Pipeline Status Dashboard is now deprecated. See the details in the Deprecation table at the bottom of this page.

Issue: A pipeline failure or exception can sometimes take several minutes to be reported.

Issue: Pipelines can still be activated after a catalog is deleted.
Workaround: The pipeline will fail when it starts running and will show an error message about the missing catalog. Re-check the missing catalog or use a different catalog.

Issue: If several pipelines consume data from the same Stream layer and belong to the same group (pipeline permissions are managed via a group), each of those pipelines receives only a subset of the messages from the stream. This is because, by default, the pipelines share the same Application ID.
Workaround: Use the Data Client Library to configure your pipelines to consume from a single stream. If your pipelines/applications use the direct Kafka connector, specify a Kafka consumer group ID per pipeline/application; if the Kafka consumer group IDs are unique, the pipelines/applications can consume all the messages from the stream.
If your pipelines use the HTTP connector, we recommend creating a new group for each pipeline/application, each with its own Application ID.
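
As a sketch, a pipeline's Data Client Library configuration for a unique Kafka consumer group might resemble the fragment below. The exact key names here are assumptions, so consult the Data Client Library documentation for the authoritative settings:

```
here.platform.data-client {
  stream {
    # Use the direct Kafka connector (assumed key names).
    connector.consumer = "kafka-connector"
    # Give each pipeline/application its own consumer group
    # so all of them receive the full stream.
    kafka.consumer.group-id = "my-unique-pipeline-group"
  }
}
```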

Issue: The Pipeline Status Dashboard in Grafana can currently be edited by users, but any changes made will be lost when updated dashboards are published with future releases; user editing of this dashboard will be disabled in a future release.
Workaround: Duplicate the dashboard or create a new dashboard.


Map Content

Additions to existing layers in the HERE Map Content catalog:

  • Added AffiliationAttribute and OfficeTypeAttribute to the "Places-All Categories" layer
  • Added new fields for elevation (elevation_value) and curvature (curvature_value) in the "Roads-ADAS Attributes" layer. These attributes are nullable.
  • ADAS coverage in HMC is now global, which in total includes:
    • Eastern Europe
    • Western Europe
    • North America
    • South America
    • Middle East, Africa
    • APAC
    • Australia
    • India
    • Taiwan
    • Hong Kong.


Marketplace (Not available in China)

Issue: There is no throttling for the beta version of the External Service Gateway. When the system is overloaded, service will slow down across the board for all consumers who are reading from the External Service Gateway.

Workaround: Contact HERE technical support for help.

Issue: Users do not receive stream data usage metrics when reading or writing data from Kafka Direct.
Workaround: When writing data into a Stream layer, you must use the Ingest API to receive usage metrics. When reading data, you must use the Data Client Library, configured to use the HTTP connector type, to receive usage metrics and read data from a Stream layer.

Issue: When the Technical Accounting component is busy, the server can lose usage metrics.
Workaround: If you suspect you are losing usage metrics, contact HERE technical support for assistance rerunning queries and validating data.

Issue: Projects and all resources in a Project are designed for use only in Workspace and are unavailable for use in Marketplace. For example, a catalog created in a Platform Project can only be used in that Project. It cannot be marked as "Marketplace ready" and cannot be listed in the Marketplace.
Workaround: Do not create catalogs in a Project when they are intended for use in the Marketplace.


Summary of active deprecation notices across all components


Feature Summary

Deprecation Period Announced (Platform Release)

Deprecation Period Announced (Month)

Deprecation Period End


OrgID added to Catalog HRN (RoW)

2.9 (ROW)

2.10 (China)

November 2019

October 30, 2020


Deprecation Summary:

Catalog HRNs without OrgID will no longer be supported in any way after October 30, 2020.

  • Referencing catalogs and all other interactions with REST APIs using the old HRN format without OrgID, or by CatalogID, will stop working after October 30, 2020.
    • Please ensure all HRN references in your code are updated to use Catalog HRNs with OrgID before October 30, 2020 so your workflows continue to work.
  • HRN duplication to ensure backward compatibility of Catalog version dependencies resolution will no longer be supported after October 30, 2020.
  • Examples of old and new Catalog HRN formats:
    • Old (without OrgID/realm): hrn:here:data:::my-catalog
    • New (with OrgID/realm): hrn:here:data::OrgID:my-catalog


Spark-ds-connector replaced by SDK for Java and Scala Spark Connector


February 2020

August 19, 2020


Deprecation Summary:

The spark-ds-connector is deprecated and will be supported for six months from this release, until August 19, 2020. Please upgrade to the latest SDK for Python version before then to get the latest SDK for Java and Scala Spark Connector.


Batch-2.0.0 run-time environment for Pipelines


February 2020

August 19, 2020


Deprecation Summary:

Batch-2.0.0 run-time environment for Batch pipelines is now deprecated. Existing Batch pipelines that use the Batch-2.0.0 run-time environment will continue to operate normally until August 19, 2020. During this period, Batch-2.0.0 run-time environment will receive security patches only. For this period, to continue developing pipelines with the Batch-2.0.0 environment, please use OLP SDK 2.11 or older. After August 19, 2020 we will remove the Batch-2.0.0 run-time environment and the pipelines still using it will be canceled. We recommend that you migrate your Batch Pipelines to the Batch-2.1.0 run-time environment to utilize the latest functionality and improvements.


Schema validation to be added


March 2020

November 30, 2020


Deprecation Summary:

For security reasons, the platform will start validating schema reference changes in layer configurations as of November 30, 2020. Schema validation will check if the user or application trying to make a layer configuration change indeed has at least read access to the existing schema associated with that layer (i.e. a user or application cannot reference or use a schema they do not have access to). If the user or application does not have access to a schema associated with any layer after this date, any attempt to update any configurations of that layer will fail until the schema association or permissions are corrected. Please ensure all layers refer only to real, existing schemas, or contain no schema reference at all before November 30, 2020. It is possible to use the Config API to remove or altogether change schemas associated with layers to resolve these invalid schema/layer associations. Also, any CI/CD jobs referencing non-existing or non-accessible schemas will need to be updated by this date or they will fail.


Customizable Volatile layer storage capacity and redundancy configurations


April 2020

October 30, 2020


Deprecation Summary:

The Volatile layer configuration option to set storage capacity as a "Package Type" will be deprecated within six months of this feature release, by October 30, 2020. All customers should replace their existing volatile layers with new volatile layers that use the new configurations by that date.

Stream-2.0.0 run-time environment for Pipelines

2.17

July 2020

February 1, 2021


Deprecation Summary:

Stream-2.0.0 (with Apache Flink 1.7.1) run-time environment is now deprecated. Existing Stream pipelines that use the Stream-2.0.0 run-time environment will continue to operate normally until February 1, 2021. During this period, the Stream-2.0.0 run-time environment will receive security patches only. For this period, to continue developing pipelines with the Stream-2.0.0 environment, please use Platform SDK 2.16 or older. After February 1, 2021, the Stream-2.0.0 run-time environment will be removed and the pipelines still using it will be canceled. We recommend that you migrate your Stream pipelines to the new Stream-3.0.0 run-time environment to utilize the latest functionality and improvements. For more details about migrating an existing Stream pipeline to the new Stream-3.0.0 run-time environment, see Migrate Pipeline to new Run-time Environment. For more details about our general support for Apache Flink, please see Stream Pipelines - Apache Flink Support FAQ.


pipeline_jobs_canceled metric in Pipeline Status Dashboard

2.17

July 2020

February 1, 2021


Deprecation Summary:

The pipeline_jobs_canceled metric used within the Pipeline Status Dashboard is now deprecated because it was tied to the Pause functionality and caused confusion. The metric and its explanation will remain available until February 1, 2021. After that date, the metric will be removed.


Jeanus Ko
