File System (4.0)
The File System Profile is used for making file system-specific configurations. It is currently used by:
- Amazon S3 collection agent
- Amazon S3 forwarding agent
- GCP Storage collection agent
- GCP Storage forwarding agent
- HDFS collection agent
- HDFS forwarding agent
- System Importer
- System Exporter
The configuration options vary depending on the selected file system, and each file system is described separately below.
- Git
- Amazon S3
- GCP Storage
- HDFS
Menus
The External References menu item is specific to the File System profile configuration.
Setting | Description |
---|---|
External References | Select this menu item to enable External References in the File System profile configuration. This can be used to configure fields for the Amazon S3, GCP Storage, and HDFS file systems. |
Git
When selecting Git as a file system, you will see the General tab.
General Tab
The following settings are available in the General tab in the Git File System profile:
Setting | Description |
---|---|
Repository URL | The URL to the repository. |
Token | Token to access the repository. This field is optional. |
Use Secrets Profile | Select the checkbox to use a Secrets Profile to get the Token. |
Get Branches | Click this button to fetch the branches from the repository. If the connection is working the Branch combo box will be populated. If the connection fails, an error dialog will be shown. |
Branch | Select the branch to use. Note! It is not possible to create a new branch using Usage Engine. The branch must already exist in the repository specified in the Repository URL. |
Preview Repository | Click this button to browse the folders in the repository. This is only possible once the configuration has been saved. |
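For illustration, a typical combination of these settings, shown as key-value pairs with hypothetical key names and example values:

```
# Hypothetical keys and example values only; in the profile these are
# entered in the General tab fields described above.
repositoryUrl=https://github.com/example-org/config-repo.git
branch=main
```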
Note!
When you do a Save As operation, the remote repository is cloned to the Platform, which may take some time. By default, the repository is cloned to the $MZHOME/gitrepos directory. This can be changed by setting the property mz.git.basePath to another path accessible from the Platform.
It is not possible to change the Repository URL or branch once the configuration is saved.
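A minimal sketch of overriding the clone directory, assuming Platform properties are set as key-value pairs; the path shown is an example:

```
# Assumed example: relocate the local Git clone directory.
# The path must be accessible from the Platform.
mz.git.basePath=/opt/mz/shared/gitrepos
```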
Import of Git File System Profile
A newly imported Git File System Profile configuration will always be invalid since the repository has not been cloned. You clone the repository in the profile by clicking the Clone Repository button.
When the cloning is done, the text on the button will change to Preview Repository, and the configuration should now be valid, which you can verify by clicking the Validate button.
Amazon S3
When selecting Amazon S3 as a file system, you will see two tabs: General and Advanced.
General Tab
The following settings are available in the General tab in the Amazon S3 File System profile:
Setting | Description |
---|---|
File System Type | Select which file system type this profile should be applied for. You can choose either Amazon S3 or HDFS. |
Credentials from Environment | Select this check box to pick up the credentials from the environment instead of entering them in this profile. If this checkbox is selected, the Access Key and Secret Key fields will be disabled. |
Access Key | Enter the access key for the user who owns the Amazon S3 account in this field. If you want to set a parameter, select the Parameterized checkbox and enter the parameter name. |
Secret Key | Enter the secret key for the stated access key in this field. If you want to set a parameter, select the Parameterized checkbox and enter the parameter name. |
Region from Environment | Select this check box to pick up the region from the environment instead of entering the region in this profile. If this check box is selected, the Region field will be disabled. |
Region | Enter the name of the Amazon S3 region in this field. |
Bucket | Enter the name of the Amazon S3 bucket in this field. |
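When the from Environment check boxes are selected, the credentials and region are resolved outside the profile. A sketch assuming the standard AWS SDK environment variables are used (all values are placeholders):

```
# Assumed resolution via the standard AWS SDK environment variables
# when "Credentials from Environment" / "Region from Environment"
# are selected; values are placeholders.
AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
AWS_SECRET_ACCESS_KEY=exampleSecretKey
AWS_REGION=eu-west-1
```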
Advanced Tab
In the Advanced tab, you can configure properties for the Amazon S3 File System client.
For information on how to configure the properties for the Amazon S3 File System client, see https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl.
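As an illustration only, an advanced property could request a canned ACL for uploaded objects. The key below is hypothetical; the valid ACL values are listed on the AWS page linked above:

```
# Hypothetical property key; valid canned ACL values (for example
# private or bucket-owner-full-control) are listed in the AWS docs.
cannedAcl=bucket-owner-full-control
```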
GCP Storage
When selecting GCP Storage as a file system, you will see the General tab.
Location
Setting | Description |
---|---|
Bucket | Enter the name of the GCP Storage bucket in this field. |
Use GCP Profile | Select the checkbox and then choose an existing GCP Profile if the Authentication Details should be derived from a GCP Profile instead of adding them directly to this profile. |
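A minimal sketch of the two settings as key-value pairs; the key names and bucket are hypothetical:

```
# Hypothetical keys and example values; authentication details are
# taken from the referenced GCP Profile rather than entered here.
bucket=my-gcs-bucket
useGcpProfile=true
```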
HDFS
When selecting HDFS as a file system, you will see two tabs: General and Advanced.
General Tab
The following settings are available in the General tab in the HDFS File System profile:
Setting | Description |
---|---|
File System Type | Select which file system type this profile should be applied for. You can choose either Amazon S3 or HDFS. |
Hadoop Mode | Select the type of Hadoop from the drop-down box. |
Host | Enter the IP address or hostname of the NameNode in this field. See the Apache Hadoop Project documentation for further information about the NameNode. |
Port | Enter the port number of the NameNode in this field. |
Replication | Enter the replication factor for HDFS in this field. Replication is used for fault tolerance; more information regarding replication can be found at: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#Data_Replication |
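For reference, the replication factor corresponds to the standard Hadoop client setting dfs.replication; a minimal sketch with the common default value:

```
# Standard Hadoop replication factor; 3 is the usual default.
dfs.replication=3
```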
Advanced Tab
The Advanced tab contains Advanced Properties for the configuration of Kerberos authentication.
Kerberos is an authentication technology that uses a trusted third party to authenticate one service or user to another. Within Kerberos, this trusted third party is commonly referred to as the Key Distribution Center, or KDC. For HDFS, this means that the HDFS agent authenticates with the KDC using a user principal which must be pre-defined in the KDC. The HDFS cluster must be set up to use Kerberos, and the KDC must contain service principals for the HDFS NameNodes. For information on how to set up an HDFS cluster with Kerberos, see the Hadoop Users Guide at http://www.hadoop.apache.org.
To perform authentication towards the KDC without a password, the HDFS agent requires a keytab file.
You can set the advanced properties in the Advanced Properties dialog to activate and configure Kerberos authentication.
The following advanced properties are related to Kerberos authentication. Refer to the Advanced Properties dialog for examples.
Property | Description |
---|---|
hadoop.security.authentication | Set the value to kerberos to activate Kerberos authentication. Note! Due to limitations in the Apache Hadoop client libraries, if you change this property, you may be required to restart the ECs where workflows containing the HDFS agent are going to run. |
dfs.namenode.kerberos.principal | This sets the service principal to use for the HDFS NameNode. This must be predefined in the KDC. The service principal is expected to be in the form of name/host@REALM. |
java.security.krb5.kdc | This specifies the hostname of the Key Distribution Center. |
java.security.krb5.realm | This sets the name of the Kerberos realm. Uppercase only. |
| This sets the keytab file to use for authentication. A keytab must be predefined using Kerberos tools. The keytab must be generated for the configured user principal. |
| This sets the user principal that the HDFS agent authenticates as. This must be predefined in the KDC. User principals are expected to be in the form of name@REALM. |
| Set this value to |
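A minimal sketch of the Kerberos-related entries; the keys shown are assumptions based on standard Hadoop and JVM Kerberos settings, so verify them against the Advanced Properties dialog in your installation:

```
# Assumed keys, based on standard Hadoop/JVM Kerberos settings;
# principal, host, and realm are example values.
hadoop.security.authentication=kerberos
dfs.namenode.kerberos.principal=nn/namenode.example.com@EXAMPLE.COM
java.security.krb5.kdc=kdc.example.com
java.security.krb5.realm=EXAMPLE.COM
```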
The following properties are also included in the Advanced tab, but only apply if you have selected the HA version of Hadoop in the General tab:
Property | Description |
---|---|
fs.defaultFS | This sets the HDFS filesystem path prefix. |
dfs.nameservices | This sets the logical name for the name services. |
dfs.ha.namenodes.[nameservice ID] | This sets the unique identifiers for each NameNode in the name service. |
dfs.namenode.rpc-address.[nameservice ID].[name node ID] | This sets the fully-qualified RPC address for each NameNode to listen on. |
dfs.namenode.http-address.[nameservice ID].[name node ID] | This sets the fully-qualified HTTP address for each NameNode to listen on. |
dfs.client.failover.proxy.provider.[nameservice ID] | This sets the Java class that HDFS clients use to contact the Active NameNode. |
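As an illustration, the settings above map onto a configuration like the following, based on the Apache Hadoop HA documentation; the name service mycluster, the NameNode IDs nn1 and nn2, and the hosts and ports are example values:

```
# Example HDFS HA client configuration (all names and addresses are
# placeholders, following the Apache Hadoop HA documentation).
fs.defaultFS=hdfs://mycluster
dfs.nameservices=mycluster
dfs.ha.namenodes.mycluster=nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1=namenode1.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2=namenode2.example.com:8020
dfs.namenode.http-address.mycluster.nn1=namenode1.example.com:9870
dfs.namenode.http-address.mycluster.nn2=namenode2.example.com:9870
dfs.client.failover.proxy.provider.mycluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```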
The Advanced Properties can also be configured using External References by following these steps:
1. Create a properties file containing the advanced configurations.
2. Create an External Reference profile pointing out the properties file and containing a key pair, e.g. "ADV_PROP" and "ADV_PROP".
3. In the workflow containing the agent, open the Workflow Properties and select the Enable External Reference check box.
4. Click the Browse button and select your External Reference profile, and for the HDFS - Advanced Properties field, select either Default or Per Workflow.
5. In the workflow table, right-click and select the Enable External Reference option, and enter the key for the properties file, e.g. ADV_PROP, if that is what you used in step 2 above.
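A sketch of the properties file created in step 1, assuming ADV_PROP is the key used in steps 2 and 5 and that its value carries the advanced property entry to populate the field (the layout is an assumption):

```
# Assumed layout: the key referenced from the workflow maps to the
# advanced property entry that should populate the HDFS - Advanced
# Properties field.
ADV_PROP=hadoop.security.authentication=kerberos
```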