Overview
This example illustrates typical use of the Parquet Decoder agent in a batch workflow. In this example, complete records are processed using the embedded document schema. The following configurations will be created:
- An Ultra Format
- A Batch Workflow that makes use of a Parquet Decoder agent that parses Parquet documents.
Define an Ultra Format
A simple Ultra Format needs to be created for the incoming UDRs. For more information about the Ultra Format Editor and the UFDL syntax, refer to the Ultra Format Management User's Guide.
Create an Ultra Format as defined below:
Create a Batch Workflow
In this workflow, Parquet files are retrieved from disk, decoded into UDRs, and written out to a CSV file. The workflow is illustrated here:
Example workflow with Parquet Decoder
Walking through the example workflow from left to right, we have:
- A Disk agent named Disk_Source that reads in the source file (which contains a Parquet document) as a byte array.
- A Parquet Decoder agent that parses the bytes from the file as Parquet, passing ParquetDecoderUDRs to the Analysis agent.
- An Analysis agent named Analysis that transforms these incoming ParquetDecoderUDRs into BookRecord UDRs.
- An Encoder agent named CSV_Encoder that encodes the BookRecord UDRs as CSV bytes.
- The Disk_Destination forwarding agent receives the byte array data and writes out a CSV document.
This section walks through the steps of creating such a batch workflow.
Disk
Disk_Source is a Disk Collection agent that collects data from an input file and forwards it to the Decoder agent.
Double-click on the Disk_Source agent to display the configuration dialog for the agent:
Example of a Disk agent configuration
Parquet Decoder
The Parquet Decoder agent collects the bytes from the Disk Collector into a complete Parquet document (with an embedded schema). The Parquet Decoder creates ParquetDecoderUDRs - one for each row - and forwards them on to the next agent.
Double-click on the Parquet Decoder agent to display the configuration dialog.
The Parquet Decoder agent with no Parquet Profile specified.
In this dialog, note that no Parquet Profile is specified. In this case, the ParquetDecoderUDRs will include all columns in the file. You can specify a Parquet Profile with a schema to subset the columns, which increases performance.
Analysis
The Analysis agent transforms the data from each ParquetDecoderUDR into a BookRecord UDR as defined above in the Ultra. In particular, the ParquetDecoderUDR includes a payload map whose contents mirror the Parquet schema defined in the profile - that data is available when constructing well-typed UDRs (for example, BookRecord).
Double-click on the Analysis agent to display the configuration dialog.
The Analysis agent dialogue with the APL code defined.
In this dialog, the APL code for handling input data is written. In the example, each ParquetDecoderUDR is transformed into a BookRecord UDR. Adapt the code according to your requirements.
You can also see the UDR type used in the UDR Types field; in this example it is ParquetDecoderUDR.
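The transformation performed by the Analysis agent amounts to copying fields from the decoder's payload map into a well-typed record. A conceptual sketch in Python (BookRecord and its fields here are hypothetical stand-ins for the UDR type defined in the Ultra Format, not the product's API):

```python
from dataclasses import dataclass


# Hypothetical stand-in for the BookRecord UDR; field names are assumptions.
@dataclass
class BookRecord:
    title: str
    author: str
    year: int


def transform(payload: dict) -> BookRecord:
    # Mirrors the Analysis step: each decoded row's payload map is
    # copied field by field into a typed record.
    return BookRecord(
        title=payload["title"],
        author=payload["author"],
        year=payload["year"],
    )


record = transform({"title": "Dune", "author": "Frank Herbert", "year": 1965})
```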
The APL code below shows an example of processing ParquetDecoderUDR:
The data in the payload map in the ParquetDecoderUDR conforms to the embedded schema.
Encoder
The Encoder agent receives the BookRecord UDRs from the Analysis agent and generates byte arrays in CSV format - one byte array for each UDR. Double-click on the Encoder agent to display the configuration dialog.
Example of an Encoder agent configuration
In this dialog, choose the Encoder that you defined in your Ultra Format.
Disk Forwarder
Disk_Destination is a Disk Forwarding agent that writes bytes to an output file on disk.
...
Example of a Disk agent configuration
Running the Workflow
When you run the workflow, it processes Parquet files from the input directory and writes out corresponding CSV files in the configured output directory.