Multi Directory Collection Strategy
This section describes the Multi Directory Collection Strategy, which is available for the Disk, FTP, SFTP, and SCP collection agents. The Multi Directory Collection Strategy enables you to configure a collection agent to collect data from a series of directories that are listed in a control file. The collection agent reads the control file and collects data from the directories specified in it.
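To make the mechanism concrete, the sketch below reads a control file with one directory per line and lists the files it would collect from each directory. It is a minimal illustration only; the file name controlfile.txt, the class name, and the API calls are assumptions for this sketch, not the agent's actual implementation.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class MultiDirectorySketch {
    public static void main(String[] args) throws IOException {
        // Control file listing one collection directory per line (illustrative path).
        Path controlFile = Paths.get("controlfile.txt");
        List<String> directories = Files.readAllLines(controlFile);

        for (String dir : directories) {
            if (dir.trim().isEmpty()) {
                continue; // ignore empty lines in the control file
            }
            // List the files found in each directory named in the control file.
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(Paths.get(dir.trim()))) {
                for (Path entry : entries) {
                    System.out.println("Would collect: " + entry.toAbsolutePath());
                }
            }
        }
    }
}
```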
Configuration
You configure the Multi Directory Collection Strategy from the first tab in the agent configuration view.
The collection agent configuration dialog
Setting | Description |
---|---|
Collection Strategy | From the drop-down list, select Multi Directory. |
Control File | Enter the path and the name of the control TXT file. Note! If the control file is missing, empty, or not readable, the workflow aborts. Example - a control file, controlfile.txt:<br>directory1<br>directory1/subdir1<br>directory1/subdir2<br>directory2<br>/home/user/directory3<br>...<br>Example - a control file for VMS, controlfile_vms.txt:<br>DISK$USERS:[USERS.USER1.TESTDIR1]<br>DISK$USERS:[USERS.USER1.TESTDIR2]<br>DISK$USERS:[USERS.USER1.TESTDIR2.SUBDIR1]<br>DISK$USERS:[USERS.USER1.TESTDIR3]<br>DISK$USERS:[USERS.USER1.TESTDIR4]<br>... |
Filename | The regular expression that the names of the source files on the local file system must match. Regular expressions according to Java syntax apply; for further information, see http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html. For example, to match all filenames beginning with a given prefix, enter the prefix followed by .* (see the regex example after this table). Note! If you leave Filename empty, or if you specify a pattern that matches every name, all files in the listed directories are collected. |
Abort If Directory Is Missing | Select this check box to abort the workflow if a directory that is specified in the control file is missing on the server. If the check box is cleared, the workflow continues to execute (default). |
Inhibit Duplicate Collection | Select this check box to prevent the collection of the same file more than once. Files are considered to be duplicates if their absolute filenames are the same. Note! This check box is selected by default. The workflow holds an internal data structure with information about which files the collector has collected in previous executions (see the sketch after this table). The collection strategy purges this data structure based on the contents of the collection directories: if files collected in the past are no longer found in the collection directories, they are removed from the data structure. Note! The internal data structure is stored in the workflow state. Since the workflow state is only updated when files are collected, the purged data structure is stored the next time a successful file collection is performed. If needed, you can manually purge the internal duplicate data structure: disable the duplicate filter and run the workflow. The next time the duplicate filter is enabled, the internal data structure will be empty. |
Enable Debug | Select this check box to enable the generation of error and debug messages. Note! If you choose to enable messaging, make sure to enable debugging in the Workflow Monitor as well. For further information, see Workflow Monitor. Since debugging has a negative impact on performance, the debug option should never be enabled in a production environment. |
Route FileReferenceUDR | Select this check box to route a FileReferenceUDR instead of the raw data. |
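Because the Filename setting is a Java regular expression rather than a shell wildcard, a pattern such as CDR_.*\.dat (a made-up example, not a pattern from the product) is matched as shown below:

```java
import java.util.regex.Pattern;

public class FilenamePatternExample {
    public static void main(String[] args) {
        // Java regex, not a glob: "CDR_.*\.dat" matches names that start with
        // "CDR_" and end with ".dat". Pattern and filenames are examples only.
        Pattern pattern = Pattern.compile("CDR_.*\\.dat");

        System.out.println(pattern.matcher("CDR_20240101.dat").matches()); // true
        System.out.println(pattern.matcher("summary.dat").matches());      // false
    }
}
```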
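The duplicate filter described for Inhibit Duplicate Collection can be pictured as a set of already collected absolute filenames that is purged against the current contents of the collection directories. The sketch below only illustrates that bookkeeping under those assumptions; it is not the agent's actual data structure or code.

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateFilterSketch {
    // Absolute filenames collected in previous executions (kept in workflow state).
    private final Set<String> collected = new HashSet<>();

    // Returns true if the file has not been collected before; duplicates are skipped.
    boolean shouldCollect(String absoluteFilename) {
        return collected.add(absoluteFilename); // add() returns false for duplicates
    }

    // Purge bookkeeping for files that are no longer present in the
    // collection directories, as described in the table above.
    void purge(Set<String> filesCurrentlyInDirectories) {
        collected.retainAll(filesCurrentlyInDirectories);
    }
}
```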