
HERE Workspace & Marketplace 2.5 release

Highlights

Rate Changes for HERE Credits

HERE credit rate changes went into effect on June 1. The new rates will be applied to your account starting on your organization's anniversary billing date.

To make it easier to track which rates applied to which usage, the HERE credit usage report (https://platform.here.com/profile/credit-usage) has been enhanced: an alert is now displayed when a rate change occurred during the queried period.

We've also added a table to show historical HERE credit rates: https://platform.here.com/profile/credit-usage/history.

For more about these rate changes, see the announcement.

New Layers in HERE Map Content

The HERE Map Content catalog has new layers:

  • Administrative Index: a standard layer that provides HERE Tile IDs for the various locations published in the Administrative Places layer
  • Sign Text: a HERE Premium layer that publishes textual and graphic information posted on signs along roads
  • Environmental Zones: a HERE Premium layer that publishes area restrictions and regulations based on environmental criteria

Note

HERE Premium layers may incur additional usage fees. For more information, see https://openlocation.here.com/plans.

New Data Source for HERE Weather

The HERE Live Weather North America and HERE Archive Weather North America catalogs now include a new high-quality Multi-Radar/Multi-Sensor (MRMS) source from NOAA, resulting in expanded and more granular coverage for North America.

Inspect Batch Pipelines with Spark UI

You can now use the Spark UI to inspect your batch pipelines in the deployed OLP cloud environment. With it, you can view the execution and performance details of a currently running Spark job, helping you fine-tune and troubleshoot your pipeline configuration and logic.

To access the Spark UI, go to the Jobs tab for a running pipeline version and select the link to open the Spark UI for the actively running job. When the job terminates, whether it succeeds or fails, the Spark UI is no longer available.

Contact Email for Pipelines

You can now provide one email address for each pipeline so that we can offer better support and communication for situations that might affect your pipeline operations. Such situations include:

  • Planned outages
  • Security or patch updates that require us to restart your pipeline

We request that you provide an email address for each of your pipelines; otherwise, we cannot notify you of potential interruptions. We recommend specifying an email distribution list rather than an individual address so that all of your operations staff can be notified. You can change this email address at any time, as often as needed.

We will only use this email address for communication about your pipeline, and not for marketing or other purposes.

Note: To be notified of unexpected or unplanned pipeline failures, you will still need to configure Alerts in the Pipeline Status Dashboard in Grafana. We will continue to improve the experience of pipeline notifications in future releases.

Configure Pipeline Runtime Credentials

You can now configure pipelines to run either under a specified service account (“app”) credential, or under your user credential.

By running pipeline instances under separate apps, you can restrict data access between them. For example, you can restrict data access between Dev, Test and Prod instances.

By default, the pipeline runs under your user credential, simplifying pipeline creation and configuration while developing and testing.

You can change the app as many times as needed for each pipeline version.

To configure a pipeline version to use an app credential, follow these steps:

  1. Create a new app.
  2. Add this new app to the group(s) to which your input and output catalogs belong.
  3. Deactivate the pipeline version.
  4. Activate the pipeline version, and, during activation, choose the new app for running the pipeline version.

If you prefer to run your pipeline version with your user credential, follow these steps:

  1. Deactivate the pipeline version.
  2. Activate the pipeline version. The system will automatically pick your user account for running the pipeline version.

Previously, pipelines ran using a system-generated app credential that was automatically added to the group selected when the pipeline was created. This created many app credentials (one per pipeline version) and caused confusion when determining how to grant pipelines access to data catalogs. With this release, the previously used system-generated apps are deprecated. For more details, see the deprecation statement below.

Schema Permissions Associated with Catalog Permissions

Schema permissions are now associated with catalog permissions. You no longer need to share schemas separately from catalogs. Instead, the read permissions for a schema are granted at the same time you grant read permissions for the parent catalog. Data consumers will retain read access to a schema unless they lose read access to all catalogs associated with that schema.

Support for SENSORIS Data

OLP now supports the sensor data format SENSORIS. The following artifacts are available to help you work with SENSORIS data:

  • The SENSORIS schema
  • Catalogs with example data in both stream and versioned layers
  • Visualization support in volatile and versioned layers
  • An example data archiving pipeline for storing SENSORIS data in an index layer. (No example index layer is provided since index layers do not yet support data visualization.)
  • An example Zeppelin Notebook

Delete a Single Layer

You can now delete a single layer using the Data API, the Portal, the Data Client Library and the Data CLI. This feature enables you to delete stream, volatile and index layer types directly without deleting the entire catalog containing them. You no longer have to lose the catalog configuration or the data in a catalog when you want to delete individual layers.

This update does not enable you to delete versioned layers, because data dependencies can exist across versioned layers, making the deletion of one layer potentially problematic for any workflow that requires the others.
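
If you work with the Data API directly, deleting a layer is a single REST call. The sketch below is a minimal illustration using Java's built-in HTTP client; the base URL placeholder, request path, catalog HRN, layer ID, and token are all assumptions for illustration. Resolve the real base URL through API Lookup and check the Data API reference for the exact route.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeleteLayerSketch {
    public static void main(String[] args) throws Exception {
        // Resolve the real base URL through the API Lookup service at runtime;
        // this value and the path shape below are illustrative assumptions.
        String configBaseUrl = "https://<config-api-base-url-from-api-lookup>";
        String catalogHrn = "hrn:here:data:::example-catalog"; // hypothetical catalog
        String layerId = "example-stream-layer";               // hypothetical layer

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(configBaseUrl + "/catalogs/" + catalogHrn
                        + "/layers/" + layerId))
                .header("Authorization", "Bearer <token>")     // OAuth token placeholder
                .DELETE()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Delete returned HTTP " + response.statusCode());
    }
}
```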

Account & Permissions

Known Issues

Issue

A finite number of access tokens (approximately 250) is available for each app or user. Depending on the number of resources included, this number may be smaller.

Workaround

Create a new app or user if you reach the limitation.

Issue

Only a finite number of permissions are allowed for each app or user in the system, across all services. The effective limit is reduced depending on the resources and permission types included.

Workaround

Delete pipelines or pipeline templates to recover space.

Issue

All users and apps in a group are granted permissions to perform all actions on any pipeline associated with that group. There is no support for users or apps with limited permissions. For example, you cannot have a reduced role that can only view pipeline status but not start or stop a pipeline.

Workaround

Limit the users in a pipeline's group to only those users who should have full control over the pipeline.

Issue

When updating permissions, it can take up to an hour for changes to take effect.

Data

Added

  • The Data Archiving Library now works behind a proxy, allowing you to write data to an index layer from behind a corporate firewall/proxy.

  • You can now disregard Protobuf validators when creating a schema with the schema Maven archetype tool in the OLP SDK. Schema consumers can see which validations were applied on the schema details page in the OLP Portal.

  • The Data API Developer Guide has been updated to note that the ingest API's /layers/<layerID>/sdiimessagelist endpoint does not support compressed messages. This endpoint breaks SDII MessageList messages down into individual SDII messages, which it cannot do when the payload is compressed, and it contains no decompression/compression logic. To send compressed SDII MessageList messages to OLP, decompress them and break them down into individual SDII messages yourself, then use the ingest API's generic ingestion endpoint (see the sketch after this list). For more information, see the ingest API reference in the Data API Developer Guide.

  • The Data, Catalog and Layer Metrics Grafana dashboard includes a new table for stream layer metrics: Messages Written Per Layer. These metrics show you the number of messages written to a given layer by any application in your realm, providing you with more information about usage of your stream layers to help you during integration and debugging.
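
The following sketch illustrates the compressed-SDII workflow described above: decompress the payload yourself, split the MessageList into individual SDII messages, and post each one to the generic ingestion endpoint. It is a hypothetical outline, not the library's own code; the endpoint URI, layer ID, token, and the splitMessageList placeholder (which you would implement with the classes generated from the SDII Protobuf schema) are all assumptions.

```java
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.GZIPInputStream;

public class SdiiIngestSketch {
    public static void main(String[] args) throws Exception {
        // Decompress the payload yourself -- the ingest endpoints will not.
        byte[] messageListBytes;
        try (InputStream in = new GZIPInputStream(
                Files.newInputStream(Path.of("message-list.sdii.gz")))) {
            messageListBytes = in.readAllBytes();
        }

        // Post each individual SDII message to the generic ingestion endpoint.
        HttpClient client = HttpClient.newHttpClient();
        for (byte[] sdiiMessage : splitMessageList(messageListBytes)) {
            HttpRequest request = HttpRequest.newBuilder()
                    // Resolve the real base URL via API Lookup; this URI is a placeholder.
                    .uri(URI.create("https://<ingest-api-base-url>/layers/my-stream-layer"))
                    .header("Authorization", "Bearer <token>")       // token placeholder
                    .header("Content-Type", "application/x-protobuf")
                    .POST(HttpRequest.BodyPublishers.ofByteArray(sdiiMessage))
                    .build();
            client.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }

    // Placeholder: implement with the classes generated from the SDII Protobuf
    // schema, e.g. parse the MessageList and serialize each contained message.
    private static List<byte[]> splitMessageList(byte[] messageListBytes) {
        throw new UnsupportedOperationException("parse with SDII Protobuf classes");
    }
}
```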

Deprecated

  • The target end-of-life date for publish API v1 is July 31, 2019 (the six-month deprecation period ended in April). If you use OLP SDK 2.3 or later, or Data Client Library 0.1.833 or later, you already have proper support for publish API v2. If you are using older versions of the OLP SDK or Data Client Library, you must update to the latest version before July 31, 2019 to avoid workflow interruptions.

If you use the REST API directly, you do not need to take any action because API Lookup has already been updated to use publish API v2.

Note: You must use API Lookup to get the proper base URL for any REST API request.

Known Issues

Issue

Catalogs not associated with a realm are not visible in OLP.

Issue

Visualization of Index Layer data is not yet supported.

Issue

When you use the Data API or Data Library to create a Data Catalog or Layer, the app credentials used do not automatically enable the user who created those credentials to discover, read, write, manage, and share those catalogs and layers.

Workaround

After the catalog is created, use the app credentials to enable sharing with the user who created the app credentials. You can also share the catalog with other users, apps, and groups.

Marketplace

Known Issues

Issue

Users do not receive stream data usage metrics when reading or writing data via Kafka Direct.

Workaround

When writing data into a stream layer, you must use the ingest API to receive usage metrics. When reading data, you must use the Data Client Library, configured to use the HTTP connector type, to receive usage metrics and read data from a stream layer.

Issue

When the Splunk server is busy, the server can lose usage metrics.

Workaround

If you suspect you are losing usage metrics, contact HERE technical support. We may be able to help rerun queries and validate data.

Notebooks

Added

  • A new sample notebook demonstrates the analysis of SENSORIS data. Refer to the updated Notebooks documentation (Upgrade and Install Libraries) for more information on the required artifacts and dependencies.

Known Issues

Issue

Notebooks cannot be shared with OLP user groups.

Workaround

Notebooks can be shared with one or more individual users by entering each account separately.

Issue

The Notebook Spark connector does not support analysis of stream layers and index layers.

Workaround

Use the Data Client Library to analyze stream and index layers.

Pipelines

Fixed

  • Fixed an issue where the Pipeline Status Dashboard in Grafana displayed false failures and sent false email notifications when the pipeline had not actually failed.

  • Fixed an issue where a batch pipeline wasn't terminating properly due to an OutOfMemory error.

Deprecated

  • System-generated apps are now deprecated and will be removed six months after this release. During these six months, existing pipeline versions that are running or scheduled using a system-generated app will continue to operate normally. At the end of the six-month period, all pipeline versions still using a system-generated app will fail.

Known Issues

Issue

The system can sometimes take several minutes to respond to a pipeline failure or exception.

Issue

The Pipeline Status Dashboard in Grafana can be edited by users, but any changes will be lost when updates are published in future releases. In addition, the dashboard will no longer be editable in a future release.

Workaround

Duplicate the dashboard or create a new dashboard.

Issue

If multiple pipelines consuming data from a single stream layer all belong to the same group (pipeline permissions are managed via a group), then each of those pipelines receives only a subset of the messages from the stream, because the pipelines share the same Application ID.

Workaround

If you use the Data Client Library to configure your pipelines to consume from a single stream and your pipelines/applications use the Direct Kafka connector type, you can specify a separate Kafka consumer group ID for each pipeline/application. With unique consumer group IDs, each pipeline/application consumes all messages from the stream (see the sketch below).

If your pipelines use the HTTP connector type, we recommend creating a new group for each pipeline/application, each with its own Application ID.
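
The sketch below illustrates the underlying Kafka mechanism with the plain Kafka consumer API: consumers that share a group.id divide a topic's partitions among themselves, while consumers with unique group IDs each receive the full stream. The broker address, topic, and group name are placeholders; consult the Data Client Library documentation for the exact way to set the consumer group ID through the Direct Kafka connector type.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class UniqueGroupIdSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker and topic are placeholders; with the Direct Kafka connector
        // these come from the stream layer's connection details.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<kafka-broker>:9092");
        // The key point: give EACH pipeline/application its own consumer group ID.
        // Consumers sharing a group.id split the partitions between them, so each
        // sees only a subset of messages; unique IDs give each one the full stream.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-a-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("<stream-layer-topic>"));
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<byte[], byte[]> record : records) {
                System.out.printf("offset=%d, %d bytes%n",
                        record.offset(), record.value().length);
            }
        }
    }
}
```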

Issue

A pipeline version can be activated even after an input catalog that it uses is deleted.

Workaround

The pipeline fails when it starts running and shows an error message about the missing catalog. Restore the missing catalog or use a different catalog.

Web & Portal

Known Issues

Issue

The custom run-time configuration for a Pipeline Version has a limit of 64 characters for the property name, and 255 characters for the value.

Workaround

For the property name, you can define a shorter name in the run-time configuration and map it to the actual, longer name within the pipeline code, as sketched below. For the property value, you must stay within the limit.
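
As a minimal illustration of this workaround, the hypothetical pipeline code below maps a short property name from the run-time configuration to the longer, descriptive name used internally; both names are made up for the example.

```java
import java.util.Map;

public class RuntimeConfigMapping {
    // Map the short property names used in the run-time configuration (to stay
    // under the 64-character limit) to the longer, descriptive names used in
    // the pipeline code. Both names here are hypothetical examples.
    private static final Map<String, String> SHORT_TO_LONG = Map.of(
            "out.part.cnt", "output.catalog.partition.count.per.processing.window");

    public static String resolve(String shortName) {
        return SHORT_TO_LONG.getOrDefault(shortName, shortName);
    }

    public static void main(String[] args) {
        // In a pipeline, the short name would come from the run-time configuration.
        System.out.println(resolve("out.part.cnt"));
    }
}
```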

Issue

Pipeline Templates can't be deleted from the Portal UI.

Workaround

Use the CLI or API to delete Pipeline Templates.

Issue

In the Portal, new jobs and operations are not automatically added to the list of jobs and operations for a pipeline version while the list is open for viewing.

Workaround

Refresh the Jobs and Operations pages to see the latest job or operation in the list.
