
Data Client Library - Developer Guide


Delete Volatile Data

The Data Client Library provides the class LayerUpdater to perform update operations on volatile layers.

The LayerUpdater has three methods:

  • updateLayer(catalogHrn, layerId) defines the catalog and layer to be updated.
  • option(key, value) specifies additional settings. Setting "olp.volatile.delete-data-only" to true deletes only the data while keeping the metadata; by default, both metadata and data are deleted (see the sketch after this list).
  • delete(queryString) performs the delete operation according to the query string, which is in RSQL format. The delete call is blocking/synchronous: it returns once the delete operation has finished.
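
The following minimal sketch shows option in use. It assumes that sparkSession, catalogHrn, layerId, and partitionId are defined as in the examples below; the call order (option between updateLayer and delete) and passing the value as a Boolean follow the usual builder pattern and are assumptions, so adjust as needed.

import com.here.platform.data.client.spark.LayerDataFrameReader.SparkSessionExt

    // Delete only the payloads of the matching partitions, keeping their metadata.
    // Without this option, both metadata and data are deleted.
    val df = sparkSession
      .updateLayer(catalogHrn, layerId)
      .option("olp.volatile.delete-data-only", true)
      .delete(s"mt_partition=in=($partitionId)")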

Project Dependencies

If you want to create an application that uses the HERE platform Spark Connector to delete data from a volatile layer, add the required dependencies to your project as described in the chapter Dependencies for Spark Connector.
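
For illustration only, an sbt dependency declaration might look like the following; the artifact name spark-support and the version placeholder are assumptions here, so take the authoritative coordinates from the Dependencies for Spark Connector chapter.

    // build.sbt -- hypothetical coordinates; verify against the
    // Dependencies for Spark Connector chapter.
    libraryDependencies += "com.here.platform.data.client" %% "spark-support" % "<data-client-version>"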

Examples

The following snippets demonstrate how to delete data from a volatile layer of a catalog, first in Scala and then in Java.


Scala

import com.here.platform.data.client.spark.LayerDataFrameReader.SparkSessionExt
import org.apache.spark.sql.SparkSession

    // sparkSession, catalogHrn, layerId, and the partition IDs are assumed
    // to be defined by the surrounding application.
    val df = sparkSession
      .updateLayer(catalogHrn, layerId)
      .delete(s"mt_partition=in=($partitionId, $anotherPartitionId)")

    // The returned DataFrame reports the outcome of the delete operation.
    val deleteResult = df.select("result").first.getString(0)
    val deletedCount = df.select("count").first.getInt(0)
    val deletionMessage = df.select("message").first.getString(0)
    


Java

import com.here.hrn.HRN;
import com.here.platform.data.client.spark.javadsl.JavaLayerUpdater;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

    // sparkSession, catalogHrn (an HRN), layerId, and the partition IDs are
    // assumed to be defined by the surrounding application.
    Dataset<Row> dataFrame =
        JavaLayerUpdater.create(sparkSession)
            .updateLayer(catalogHrn, layerId)
            .delete(String.format("mt_partition=in=(%s,%s)", partitionId, anotherPartitionId));

    // The returned Dataset reports the outcome of the delete operation.
    String deleteResult = dataFrame.select("result").first().getString(0);
    int deletedCount = dataFrame.select("count").first().getInt(0);
    String deletionMessage = dataFrame.select("message").first().getString(0);
    

Note

For information on RSQL, see RSQL.
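
As an illustration, a few query strings that follow the pattern used in this guide; mt_partition is the field shown in the examples above, the =in= set operator appears in this guide, and the == equality operator is assumed from standard RSQL.

    // Hypothetical RSQL query strings against the mt_partition field:
    val single   = s"mt_partition==$partitionId"                          // one partition (assumed operator)
    val multiple = s"mt_partition=in=($partitionId,$anotherPartitionId)"  // a set of partitions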
