Pipeline patterns
- Last Updated: May 24, 2024
HERE platform pipelines are designed to accommodate specific usage patterns. The available patterns are illustrated below, starting with the simplest pattern and progressing to more complex use cases. Additional information is provided throughout the Developer Guide.
General pattern
This is the general pattern for using a pipeline.
Note
Data sources and sinks
A specific catalog layer can serve as a source or a sink, but never both at once. The type of catalog layer that may be used depends on the type of pipeline it is used with. For example, a streamed layer cannot be used with a batch pipeline.
Multiple inputs
A pipeline can have multiple inputs, but only one output.
Stream processing pattern
You can use a pipeline to process continuous data streams using the Apache Flink framework and stream layers.
- The data catalog is defined in the `pipeline-config.conf` file.
- The layer used is defined in the code.
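For reference, a minimal `pipeline-config.conf` might look like the following sketch. The HRNs and the input-catalog identifier here are placeholder values for illustration, not real catalogs:

```
pipeline.config {

  // Catalog the pipeline writes to (placeholder HRN)
  output-catalog { hrn = "hrn:here:data::olp-here:example-output" }

  // Catalogs the pipeline reads from; the identifier "sensorData"
  // is an illustrative name referenced from the pipeline code
  input-catalogs {
    sensorData { hrn = "hrn:here:data::olp-here:example-input" }
  }
}
```

Note that the file names only catalogs; which layer of each catalog is read or written is decided in the pipeline code itself, as described above.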
Note
You may use the same data catalog for a stream pipeline's input and output as long as separate layers are being used for the data source and data sink.
Batch processing pattern
This is a typical batch processing pattern using the Apache Spark framework and versioned layers.
Volatile pattern
This is a typical pattern using volatile layers.
Index pattern
These are typical patterns using index layers.
Index layer limits of use:
| Pipeline | Source | Sink |
|---|---|---|
| Batch | Yes | Yes |
| Stream | No | Yes |
Advanced patterns
A more advanced pattern uses a catalog's volatile layer as reference data.
In this case, the output catalog uses a stream layer.
Info
The stream layer here typically uses a windowing function.
In this case, however, the output catalog only needs a "data snapshot," so a volatile layer is used instead.
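To illustrate the windowing idea mentioned above, here is a minimal sketch in plain Java (deliberately without Flink dependencies) of a tumbling window that aggregates timestamped readings into fixed intervals. The `Reading` record type and the window size are assumptions made for the example, not part of any HERE API:

```java
import java.util.*;

public class TumblingWindowSketch {
    // Assumed record type: a timestamped sensor reading (illustrative only).
    record Reading(long timestampMs, double value) {}

    // Group readings into fixed-size (tumbling) windows and sum each window.
    // The window key is the start of the interval the reading falls into.
    static Map<Long, Double> sumByWindow(List<Reading> readings, long windowMs) {
        Map<Long, Double> sums = new TreeMap<>();
        for (Reading r : readings) {
            long windowStart = (r.timestampMs() / windowMs) * windowMs;
            sums.merge(windowStart, r.value(), Double::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        List<Reading> readings = List.of(
                new Reading(1_000, 1.0),
                new Reading(4_000, 2.0),
                new Reading(6_000, 3.0));
        // 5-second tumbling windows: [0, 5000) and [5000, 10000)
        System.out.println(sumByWindow(readings, 5_000));
        // → {0=3.0, 5000=3.0}
    }
}
```

In a real stream pipeline this grouping would be done by the framework's own windowing operators rather than by hand; the sketch only shows the shape of the computation.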
Alternatively, you can use the output catalog's versioned layer, perhaps for aggregating data over a window of time. This approach can also be useful for archiving data, with or without additional processing, or for historical analysis in a Notebook.
Or, you can use the output catalog's index layer, perhaps for organizing historical data by event time.
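As a rough illustration of organizing records by event time, the snippet below derives hourly bucket keys from event timestamps, similar in spirit to how an index attribute might be computed before writing to an index layer. The bucketing scheme and types are assumptions for the example only:

```java
import java.util.*;

public class EventTimeIndexSketch {
    static final long HOUR_MS = 3_600_000L;

    // Derive an hourly index key from an event timestamp (illustrative scheme).
    static long hourBucket(long eventTimeMs) {
        return (eventTimeMs / HOUR_MS) * HOUR_MS;
    }

    // Group event timestamps under their hourly bucket key.
    static Map<Long, List<Long>> indexByHour(List<Long> eventTimesMs) {
        Map<Long, List<Long>> index = new TreeMap<>();
        for (long t : eventTimesMs) {
            index.computeIfAbsent(hourBucket(t), k -> new ArrayList<>()).add(t);
        }
        return index;
    }

    public static void main(String[] args) {
        // Three events: one in hour 0, two in hour 1
        System.out.println(indexByHour(List.of(100L, 3_600_100L, 3_700_000L)));
        // → {0=[100], 3600000=[3600100, 3700000]}
    }
}
```

Queries for a time range then only need to touch the buckets that overlap it, which is the main benefit of indexing by event time.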
Another pattern combines input data from a versioned data set with data from an index layer.
For examples of pipeline implementation, see HERE Workspace Examples for Java and Scala Developers.
For detailed step-by-step instructions on configuring and running pipelines, see Developer tutorials.