
With the Amazon S3 Collector and Amazon S3 Forwarder functions, you can collect data from and send data to Amazon S3 buckets.

Before using the functions, you must ensure that the Amazon S3 bucket(s) are set up using an IAM policy with the following minimum content:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetPutDelete",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject*",
                "s3:PutObject*",
                "s3:DeleteObject*",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        },
        {
            "Sid": "AllowListingBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>"
        }
    ]
}

"Resource": "arn:aws:s3:::<bucket-name>/*" means that actions in a statement are applicable to all objects in the S3 bucket arn:aws:s3:::<bucket-name>.

If you only want to share a certain key (folder) in the S3 bucket, you can specify the folder after the bucket name like this: arn:aws:s3:::<bucket-name>/<directory-to-share>/* in the Resource section, but this is only applicable for the AllowGetPutDelete statement.
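
For example, the first statement scoped to a single folder would look like this (the bucket and folder names are placeholders):

{
    "Sid": "AllowGetPutDelete",
    "Effect": "Allow",
    "Action": [
        "s3:GetObject*",
        "s3:PutObject*",
        "s3:DeleteObject*",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
    ],
    "Resource": "arn:aws:s3:::<bucket-name>/<directory-to-share>/*"
}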

Warning!

It is not recommended to use the AWS managed policy AmazonS3FullAccess since this will allow all actions on all buckets and objects!

See https://docs.aws.amazon.com/s3/ for more information on how to set up IAM policies.

To connect to your bucket, you need your Access Key and Secret Access Key; see https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html for information on how to find them. You also need to enter the folder from which you are collecting data, or to which you are sending data.
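
As a quick sanity check, you can verify that the credentials and policy work with the AWS CLI before configuring the functions (the bucket and folder names are placeholders):

aws s3 ls s3://<bucket-name>/<folder>/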

See Stream using Amazon S3 Functions - Acme EV for an example of how the Amazon S3 collector and forwarder can be used in a stream.

Common Configurations

For both functions, you need to configure AWS Credentials and File location.

AWS Credentials

The AWS Credentials can either be specified directly in the configuration dialog for the functions, or in an AWS Secret in the Secrets Wallet, and must contain:

Access Key

The identifier used to sign the requests sent to Amazon S3. Amazon S3 refers to this as the Access Key ID.

Secret Key

The Secret Key is used in conjunction with the Access Key to cryptographically sign Amazon AWS requests. When you create the Access Key in Amazon S3, you can view and download the Secret Key. Amazon S3 refers to this as the Secret Access Key.

File location

The File location settings include:

Bucket

The bucket is the public cloud storage resource available in Amazon S3.

Folder

Enter the path to the folder that you want to collect data from, or send data to, when using the Amazon S3 functions.

Note!

If the path to the folder is not specified, the root folder of the Amazon S3 bucket is selected by default.

Amazon S3 Collector Configuration

With the Amazon S3 Collector you can collect data from your AWS S3 bucket for processing in your stream.

To configure the Amazon S3 collector:

  1. Place the function in your stream and double-click to open the Configuration dialog.

    S3-Coll.png

  2. Configure the Common Configurations as described above.
    All files in the subfolder(s) of the main folder stated in the Folder field will be collected. You can also choose not to include files from any subfolders by selecting the Do not include files from subfolders checkbox.

  3. In the After Collection section, you can select Remove files from Amazon server after collection to delete the files from the bucket once they have been collected.

  4. In the File information section, you can specify the selection criteria for your files, including how to Select files and the File format:

    All files in folder

    This option collects all files in the specified folder.

    Based on filename

    This option allows for the collection of file(s) based on specific filename(s). One or multiple files can be specified.

    Based on regular expression

    This option allows for the collection of file(s) based on filename patterns provided by the user using a regular expression, for example invoice_\d+\.csv.

    Based on list of files

    This option allows for the collection of file(s) using a meta file containing the paths to other file(s). If more than one meta file is present, they are processed at the same time. See the example after this list.

    Mandatory meta file formatting

    There are three mandatory prerequisites for this option:

    • All meta files (if more than one) must be stored in a single location/folder in S3.

    • The meta file(s) must be in CSV format with a single header.

    • Manually remove the meta file if this type of collection is no longer used.
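
    For illustration, a meta file listing two files to collect might look like this (the header name and file paths are hypothetical; only the single-header CSV structure is prescribed above):

    path
    myfolder/input/file1.csv
    myfolder/input/file2.csv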

    File format

    CSV

    Collect files in CSV format. Select Include table header to include the table header in the collected file(s).
    Specify a Delimiter for the CSV file format. The default value is a comma. Other available options are Tab, Semicolon, Space, or Other. If the Other option is selected, you can enter a custom delimiter in an input field that appears underneath.

    Excel

    Collect files in Excel format. Select Include table header to include the table header in the collected file(s).

    JSON

    Collect files in JSON format.

    XML

    Collect files in XML format.


    Note!

    During collection, the following applies:

    • Compressed files are automatically decompressed.
    • The type of archive file format is automatically identified based on the contents of the file instead of the file extension. The supported archive file formats are ZIP, gzip and zlib.

    For all supported archive file types, the following applies:

    • The archive must contain only a single file that is compressed.
    • The archive must not contain any directories.
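
    For example, a compliant gzip archive contains exactly one compressed file and no directories. A minimal Node.js sketch that produces such an archive outside of Usage Engine (the filenames are hypothetical):

    const { createReadStream, createWriteStream } = require('fs');
    const { createGzip } = require('zlib');
    const { pipeline } = require('stream');

    // Compress a single file into data.csv.gz - one file, no directory
    // entries - which satisfies the archive requirements listed above.
    pipeline(
      createReadStream('data.csv'),
      createGzip(),
      createWriteStream('data.csv.gz'),
      (err) => { if (err) throw err; }
    );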

    There are exceptions to the filename patterns when it comes to collector functions.

    The Collector filename option enables you to keep the same filename as your input file(s). However, there are a few exceptions:

    • Count and other collector functions that do not read files: If you use Count or other collector functions that do not read from any input files, there will be no input filename. In this case, a new filename is generated based on the name of the collector function.
    • Script: If you send the data out in the Flush that is called at the end of each transaction, the original filename gets lost. In this case, a new filename is generated based on the name of the Script function.
    • Aggregate: The Aggregate function merges all the payloads, so a new filename is generated based on the name of the Aggregate function.


Amazon S3 Forwarder

The Amazon S3 Forwarder function allows you to send data to your Amazon S3 bucket from your stream.

To configure the Amazon S3 forwarder, take the following actions:

  1. In Amazon Credentials, specify the Access Key and the Secret Key. This information is available in your Amazon S3 account. Alternatively, you can use the Secrets Wallet to provide the S3 account credentials.

  2. In File Location, specify the Bucket and, in Folder, the path to the folder. The folder path cannot begin or end with '/'; for example, use myfolder/subfolder rather than /myfolder/subfolder/.

    Note!

    If the path to the folder is not specified, the root folder of the S3 bucket is selected by default.


  3. In Output file information, specify how you want to handle the output file(s) in Filename options. You can select from the following options:

    Collector filename

    Select Collector filename if you want to keep the same filename as your input file(s).

    If a collector does not have a filename, for example, Counter, the system generates a filename based on the function.

    Custom filename

    Select Custom filename to define a new filename for all the output files. If you require more flexibility in defining filenames, refer to Configuring Dynamic Naming in Fields.



  4. Select Append timestamp to append the timestamp to the name of the output file. For example, the output filename for a CSV file will look like <myfile>_<timestamp>.csv.

    Note!

    If Append timestamp is not selected, the existing file at the destination can be overwritten by the output file.


  5. In File format, select the format of the output file(s) from the following options:

    CSV

    Select to send the output file in CSV format. Select Include table header to include the table header in the output file(s).
    Specify a Delimiter for the CSV file format. The default value is ','.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting, see step 8.

    Excel

    Select to send the output file in Excel format. You can also specify the Sheet name. The default sheet name is Sheet 1.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting, see step 8.

    Buffer

    Select to send the output file in the Buffer format.

    If you are reading or processing files containing binary data (Buffer format), for example for performance or other reasons, you can write these files through the Amazon S3 Forwarder.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting, see step 8.

    JSON

    Select to send the output file in JSON format. Select the preferred output format under Action on records: one file with All in one array, one file with All in one array with key, or One file per record. These output shapes are illustrated after this table.

    Note!

    If One file per record is selected, you must also select Append timestamp, otherwise the files will be overwritten.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting, see step 8.

    JSON files can be formatted for easier reading by selecting the Output file in pretty print option.
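
    For illustration, given two records, the Action on records options produce output shaped as follows (the field names and the array key are hypothetical examples):

    All in one array:
    [{"id": 1}, {"id": 2}]

    All in one array with key:
    {"records": [{"id": 1}, {"id": 2}]}

    One file per record (two separate files):
    {"id": 1}
    {"id": 2}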


  6. To output JSON files in a more compact form, deselect the Output file in pretty print checkbox. By default, pretty print is on.

    Note!

    Selecting pretty print increases the size of the output file.


  7. To compress the output file, select the Compress file checkbox and specify the Compression format. The supported formats are:

    • Zip

    • GZip

  8. You have the option to provide the S3 bucket owner full access to objects that are written by other S3 account holders. Select the Bucket owner full control checkbox to give this permission. To do this, Usage Engine uses an Access Control List (ACL) option, applied as ACL = bucket-owner-full-control.
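
    For context, the same permission can be set when writing an object directly with the AWS SDK. A minimal sketch using the AWS SDK for JavaScript v3, outside of Usage Engine (the region, bucket name, and object key are hypothetical):

    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    async function main() {
      const s3 = new S3Client({ region: 'eu-west-1' }); // hypothetical region

      await s3.send(new PutObjectCommand({
        Bucket: 'example-bucket',         // hypothetical bucket name
        Key: 'outbox/myfile.csv',         // hypothetical object key
        Body: 'id,value\n1,foo\n',
        ACL: 'bucket-owner-full-control', // grants the bucket owner full control of the object
      }));
    }

    main().catch(console.error);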

Amazon S3 Metadata

You can view and access the following metadata properties of Amazon S3. To view the metadata, use the meta object in the Script function; see the Script function documentation for details.


fileName

Name of the file.

Syntax
log.info(meta.fileName);


filePath

Path from where the file is collected. The file can be either in the Excel or CSV format.

Path format: bucket/folder/file

Syntax
log.info(meta.filePath);


fileSize

Size (in bytes) of the file.

Syntax
log.info(meta.fileSize); 


sheetName

Name of the sheet of the Excel file.

Syntax
log.info(meta.sheetName);


collectionTime

Timestamp when the file is collected.

Syntax
log.info(meta.collectionTime); 
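
Taken together, a short Script function sketch that logs the metadata of a collected file (the example values in the comments are hypothetical, and the sheetName guard assumes that property is only set for Excel files):

log.info('Collected ' + meta.fileName + ' (' + meta.fileSize + ' bytes)');
log.info('From ' + meta.filePath + ' at ' + meta.collectionTime); // path format: bucket/folder/file
if (meta.sheetName) {
  log.info('Excel sheet: ' + meta.sheetName);
}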


