
This is a step-by-step guide to create and run a stream which collects data from and forwards output to Amazon S3 buckets.

The company Acme EV Charging provides a frictionless electric vehicle charging service.

When their customers use the service to charge their cars, the volume, measured in kWh, is logged. The customers are then billed for the total volume on a monthly basis.

The logged charging sessions are stored as CSV files in the relevant Amazon S3 bucket.

Fields in the CSV format:

  • type: A string that contains the value Start, Partial, or Complete. This field indicates whether the logged session spans multiple files.
  • date: A string that contains the date when the usage was logged.
  • kWhCharged: A string that contains the logged amount (kWh) for a partial or a complete session.
  • userTechnicalId: A unique string that identifies the customer that is bound to the session.
  • chargingPlace: A string that identifies the charging location.
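As an illustration, a session that spans multiple files might be logged like this (hypothetical sample data; the station and user identifiers are made up, only the column names come from the format above):

```
type,date,kWhCharged,userTechnicalId,chargingPlace
Start,2023-05-02,,user-001,Station-A
Partial,2023-05-02,4.2,user-001,Station-A
Complete,2023-05-02,7.5,user-001,Station-A
```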

The stream that you will create in this tutorial performs the following tasks:

  • Collect and decode the CSV files that are available in the S3 bucket
  • Route the records marked Complete to the Data Aggregator function. The records marked Start and Partial are simply written to a log file.
  • Aggregate records based on chargingPlace (charging location), kWhCharged (logged energy consumption), and date (month)
  • Forward the records to the billing system, emulated by storing the output in an Amazon S3 bucket
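The routing and aggregation steps above can be sketched in plain Python to show the intended logic. This is an illustrative sketch, not the actual stream implementation: the sample records and the aggregate helper are hypothetical, and only the field names come from the CSV format described earlier.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample input using the documented field names.
SAMPLE = """type,date,kWhCharged,userTechnicalId,chargingPlace
Start,2023-05-02,,user-001,Station-A
Partial,2023-05-02,4.2,user-001,Station-A
Complete,2023-05-02,7.5,user-001,Station-A
Complete,2023-05-14,12.0,user-002,Station-B
"""

def aggregate(csv_text):
    """Route Complete records to aggregation; collect Start/Partial for logging.

    Returns per-(chargingPlace, month) kWh totals, mirroring the
    aggregation step of the stream.
    """
    totals = defaultdict(float)
    logged = []
    for record in csv.DictReader(io.StringIO(csv_text)):
        if record["type"] == "Complete":
            month = record["date"][:7]             # e.g. "2023-05"
            key = (record["chargingPlace"], month)
            totals[key] += float(record["kWhCharged"])
        else:
            logged.append(record)                  # Start/Partial go to a log
    return dict(totals), logged

totals, logged = aggregate(SAMPLE)
print(totals)       # {('Station-A', '2023-05'): 7.5, ('Station-B', '2023-05'): 12.0}
print(len(logged))  # 2
```

In the real stream, the forwarding step would then write these aggregated totals back to an Amazon S3 bucket for the billing system instead of printing them.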

Step-by-Step Guide

Follow the instructions in the numbered tabs below.
