...

Setting

Description

Bucket

The Bucket is the public cloud storage resource available in Amazon S3.

Folder

In this field, enter the path to the folder that you want to collect data from and push data to when using the Amazon S3 functions.

Note!

If the path to the folder is not specified, the root folder of the Amazon S3 bucket is selected by default.
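
For reference, the sketch below illustrates how the Bucket and Folder settings map to Amazon S3 requests: the folder is simply a key prefix inside the bucket. It uses the AWS SDK for Python (boto3) as an assumed stand-in, with invented bucket and folder names; Usage Engine performs the equivalent calls for you, so this is only an illustration.

    import boto3

    # Hypothetical values; substitute your own bucket and folder path.
    BUCKET = "my-usage-data"      # corresponds to the Bucket setting
    FOLDER = "incoming/usage"     # corresponds to the Folder setting

    s3 = boto3.client("s3")

    # Collecting from the folder corresponds to listing objects under the prefix.
    response = s3.list_objects_v2(Bucket=BUCKET, Prefix=FOLDER + "/")
    for obj in response.get("Contents", []):
        print(obj["Key"])

    # Pushing data to the folder corresponds to writing an object under the same prefix.
    s3.put_object(Bucket=BUCKET, Key=FOLDER + "/example.csv", Body=b"col1,col2\n1,2\n")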

...

  1. Place the function in your stream and double-click to open the Configuration dialog.

    [Image: S3-Coll.png]

  2. Configure the Common Configurations as described above.
    All files in the subfolder(s) of the main folder specified in the Folder field will be collected. You can also choose not to include files from any subfolders by selecting the Do not include files from subfolders checkbox.

  3. In the After Collection section, you can select Remove files from Amazon server after collection.

  4. In the File information section, you can specify the selection criteria for your files, including how to Select files and the File format:

Setting

Description

File selection

All files in folder

Select this option to collect all files in the specified folder.

Based on filename

Select this option to collect files based on specific file names. You can specify one or several file names.

Based on regular expression

Select this option to collect files based on file name patterns that you specify using regular expressions (see the sketch after this table).

Based on list of files

Select this option to collect files using a meta file containing the path(s) to the file(s) to collect. If there is more than one meta file present, they are processed simultaneously.

You need to specify the following formatting for the meta file (see also the example after this table):

[Image: meta file formatting.png - Mandatory meta file formatting]

There are also three mandatory prerequisites for this option:

  • If there is more than one meta file, all of them must be stored in the same folder.
  • The meta files must be in CSV format with a single header.
  • If this type of collection is no longer used, the meta file must be removed manually.

File format

Depending on what option you select in this drop-down list, you will see different complementary settings beneath it.

CSV

Select this option to collect files in CSV format.

Select the Include table header check box to include the table header in the collected file(s).

Select the delimiter for the CSV file(s) in the Delimiter field. Comma is selected by default, but you can also select Tab, Semicolon, Space, or Other. If you select Other, you can specify a custom delimiter in the Custom delimiter field that will then be displayed.

Excel

Select this option to collect files in Excel format. Select the Include table header check box to include the table header in the collected file(s).

Select the All sheets in file option to collect everything in the file, or the Specific Sheet(s) option to specify which sheets you want to collect in the Select sheet(s) field that will be displayed.

JSON

Select this option to collect files in JSON format.

XML

Select this option to collect files in XML format.
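
To illustrate the Based on regular expression option, the sketch below filters a set of file names with Python's re module. The pattern and file names are invented examples, and the exact regular expression dialect supported by Usage Engine may differ, so treat this only as an illustration of the idea.

    import re

    # Hypothetical pattern: match daily CDR files such as "cdr_20231001.csv".
    pattern = re.compile(r"^cdr_\d{8}\.csv$")

    filenames = ["cdr_20231001.csv", "cdr_20231002.csv", "summary.xlsx"]
    selected = [name for name in filenames if pattern.match(name)]
    print(selected)   # ['cdr_20231001.csv', 'cdr_20231002.csv']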
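
As an illustration of the Based on list of files option, the snippet below writes a meta file with a single header row and one file path per line. The header name and paths are assumptions made for this example only; follow the mandatory formatting shown in the screenshot above rather than this guess.

    import csv

    # Hypothetical meta file: a single header row, then one file path per line.
    # The header name "filePath" is invented for illustration.
    rows = [
        ["filePath"],
        ["incoming/usage/2023-10-01/cdr_20231001.csv"],
        ["incoming/usage/2023-10-01/cdr_20231002.csv"],
    ]

    with open("collection_list.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)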

You have now configured the Amazon S3 Collection function.

Note!

During collection, the following applies:

  • Compressed files are automatically decompressed.
  • The archive file format is identified automatically based on the contents of the file rather than the file extension. The supported archive file formats are ZIP, gzip, and zlib.

For all supported archive file types, the following applies:

  • The archive must contain only a single file that is compressed.
  • The archive must not contain any directories.
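
As a sketch of an archive that satisfies these constraints, the snippet below gzip-compresses a single file using Python's standard library. It is only an illustration of the kind of input the collector can decompress; the file name is invented.

    import gzip
    import shutil

    # gzip wraps exactly one file and has no notion of directories,
    # so the result always meets the single-file, no-directories rule above.
    with open("cdr_20231001.csv", "rb") as src, gzip.open("cdr_20231001.csv.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)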

There are exceptions to the filename patterns when it comes to collector functions. 

See Frequently Asked Questions (FAQs).

Amazon S3 Forwarder

...

  1. In Amazon Credentials, specify the Access Key and the Secret Key. This information is available in your Amazon S3 account. You can also use the Secrets Wallet option to enter the S3 account credentials.

  2. In File Location, specify the Bucket and the path to the folder in Folder. The folder path cannot begin or end with a ' / '.

    Note!

    If the path to the folder is not specified, the root folder of the S3 bucket is selected by default.


  3. In Output file Information, specify how you want to handle the output file(s) in Filename options. You can select from the following options:

    Filename option

    Action

    Collector filename

    Select Collector filename if you want to keep the same filename as your input file(s).

    If a collector does not have a filename, for example, Counter, the system generates a filename based on the function.

    Custom filename

    Select Custom filename to define a new filename for all the output files. If you require more flexibility in defining file names, refer to Configuring Dynamic Naming in Fields.



  4. Select Append timestamp to append the timestamp to the name of the output file. For example, the output filename for a CSV file will look like <myfile>_<timestamp>.csv.

    Note!

    If Append timestamp is not selected, the existing file at the destination can be overwritten by the output file.


  5. In File format, select the format of the output file(s) from the following options:

    File format

    Description

    CSV

    Select to send the output file in CSV format. Select Include table header to include the table header in the output file(s).
    Specify a Delimiter for the CSV file format. The default value is ' , '.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting.

    Excel

    Select to send the output file in Excel format. You can also specify the Sheet name. The default sheet name is Sheet 1.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting.

    Buffer

    Select to send the output file in the Buffer format. 

    If you are reading or processing files containing binary data (Buffer format), for example for performance or other reasons, you can write these files through the AWS S3 Forwarder.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting.

    JSON

    Select to send the output file in JSON format. Select the preferred output format in Action on records: one file with All in one array, one file with All in one array with key, or One file per record (see the sketch after these steps).

    Note!

    If One file per record is selected, you must also select Append timestamp, otherwise, the files will be overwritten.

    Compression can be toggled by enabling the Compress file option. A dropdown menu is used to select the format; Zip and GZIP are supported.

    An additional option toggles the Bucket owner full control setting.

    JSON files can be formatted for easier reading by selecting the Output file in pretty print option.


  6. To output JSON files in a more compact form, deselect the Output file in pretty print checkbox. By default, pretty print is on.

    Note!

    Selecting pretty print increases the size of the output file.


  7. To compress the output file, select the Compress file checkbox and specify the Compression format. The supported formats are:

    • Zip

    • GZip

  8. You have the option to give the S3 bucket owner full access to objects that are written by other S3 account holders. Select the Bucket owner full control checkbox to give this permission. To do this, Usage Engine uses an option called Access Control List (ACL), which is enabled as ACL = bucket-owner-full-control.
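
To illustrate the Action on records choices for JSON output (step 5), the sketch below shows the general shape of each result. The record contents and the wrapper key name "records" are assumptions made for this example; the actual key used by the function may differ.

    import json

    records = [{"id": 1, "usage": 10}, {"id": 2, "usage": 25}]

    # All in one array: a single file containing a JSON array of all records.
    all_in_one_array = json.dumps(records, indent=2)

    # All in one array with key: the same array wrapped in an object under a key.
    # The key name "records" is an assumption for illustration only.
    all_in_one_array_with_key = json.dumps({"records": records}, indent=2)

    # One file per record: each record serialized on its own, one output file each,
    # which is why Append timestamp is required to avoid overwriting.
    one_file_per_record = [json.dumps(record, indent=2) for record in records]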
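
For reference, Bucket owner full control corresponds to the standard S3 canned ACL bucket-owner-full-control. The hypothetical boto3 call below shows what such an upload looks like at the S3 API level; Usage Engine performs this for you, and the bucket, key, and file names here are invented.

    import boto3

    s3 = boto3.client("s3")

    # Upload an object while granting the bucket owner full control over it.
    with open("myfile.csv", "rb") as body:
        s3.put_object(
            Bucket="partner-bucket",
            Key="outgoing/usage/myfile_20231001120000.csv",
            Body=body,
            ACL="bucket-owner-full-control",
        )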

...