Options

Pentaho Data Integration

Version
9.3.x
Audience
anonymous
Part Number
MK-95PDIA003-15


S3 CSV Input

CAUTION:
For technical reasons, parallel reading of S3 files is supported only for files whose fields do not contain line breaks or carriage returns.

The following options are available for the S3 CSV Input transformation step.

Option Description
Step name Specify the unique name of the S3 CSV Input step on the canvas. You can customize the name or leave it as the default.
S3 Bucket Specify the S3 bucket where the CSV object file is stored, or click Select bucket to browse to and select it.
Filename Specify one of the following names (or click Browse) for the input file:
  • The name (Filename) of the S3 source file.
  • The name of the field that will supply the name of the S3 source file (or files) at runtime, if the S3 CSV Input step receives data from another step.

The name of a file stored in S3 uses the following scheme:

s3n://s3_bucket_name/absolute_path_to_file
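As an illustration, a file name in this scheme splits into a bucket part and an object-key part. The helper below is a hypothetical sketch for clarity, not part of the S3 CSV Input step itself:

```python
from urllib.parse import urlparse

def parse_s3_filename(filename):
    """Split an s3n:// file name into bucket and object key.

    Hypothetical helper for illustration only; the S3 CSV Input
    step resolves file names internally.
    """
    parsed = urlparse(filename)
    if parsed.scheme != "s3n":
        raise ValueError("expected an s3n:// file name: " + filename)
    # netloc holds the bucket name; path is the absolute path to the file
    return parsed.netloc, parsed.path.lstrip("/")

# e.g. parse_s3_filename("s3n://my_bucket/input/orders.csv")
# returns ("my_bucket", "input/orders.csv")
```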

Delimiter Specify the file delimiter character used in the source file.

The default delimiter is a semicolon (;).

Click Insert Tab to use a tab as the delimiter.

Special characters can be set with the format: $[value]

For example: $[01] or $[6F,FF,00,1F]
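To make the notation concrete, the sketch below decodes the $[value] form into the literal characters it represents. The single-byte Latin-1 interpretation of the hex pairs is an assumption for illustration; the actual mapping depends on the file encoding, and this is not PDI's internal implementation:

```python
import re

def decode_special_chars(spec):
    """Decode the $[value] hex notation into literal characters,
    e.g. "$[09]" -> a tab character.

    Illustrative sketch only; assumes a single-byte (Latin-1)
    interpretation of each hex pair.
    """
    match = re.fullmatch(r"\$\[([0-9A-Fa-f]{2}(?:,[0-9A-Fa-f]{2})*)\]", spec)
    if not match:
        return spec  # a plain literal delimiter such as ";" or ","
    raw = bytes(int(h, 16) for h in match.group(1).split(","))
    return raw.decode("latin-1")

decode_special_chars("$[09]")           # a tab character
decode_special_chars("$[6F,FF,00,1F]")  # a four-byte sequence
```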

Enclosure Specify the enclosure character used in the source file. The default value is a double quote (").

Special characters can be set with the format: $[value]

For example: $[01] or $[6F,FF,00,1F]

Max line size Specify the maximum number of characters to read per line in the input operation. The default is 5000.
Lazy conversion? Select this option to delay the conversion of a row of data until it is absolutely necessary.
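The idea behind lazy conversion can be sketched as follows. This is a hypothetical illustration of deferred parsing, not PDI's actual implementation:

```python
class LazyField:
    """Keep the raw bytes read from the CSV and only convert them
    to a typed value when the field is actually used downstream.
    Hypothetical sketch of the lazy-conversion idea.
    """

    def __init__(self, raw):
        self.raw = raw        # raw bytes straight from the file
        self._value = None    # converted value, filled in on demand

    def value(self):
        # Convert on first access only; steps that never touch this
        # field skip the conversion cost entirely.
        if self._value is None:
            self._value = float(self.raw)
        return self._value

field = LazyField(b"3.14")    # no parsing has happened yet
```

This pays off when rows are passed through or written back out unchanged, since the bytes never need to be parsed.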
Header row present? Select this option if the source file contains a header row with column names.
The row number field name Specify the name of the field that will contain the row number in the output of this step.
Running in parallel Select to indicate that multiple instances (step copies) of this step will run, with each instance reading a separate part of the S3 file(s).

When reading multiple files, the total size of all files is used to split the workload. In that case, make sure that ALL step copies receive every file that needs to be read; otherwise, the parallel algorithm will not work correctly.
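One way such a workload split could be computed is sketched below. This even byte-range division over the combined file size is an assumption for illustration (a real reader must also align splits to row boundaries), not PDI's exact algorithm:

```python
def byte_range_for_copy(file_sizes, copy_nr, n_copies):
    """Divide the combined size of all input files into one
    contiguous byte range per step copy.

    Illustrative sketch only; assumes every copy sees the same
    file list in the same order, which is why all step copies
    must receive all files.
    """
    total = sum(file_sizes)
    chunk = total // n_copies
    start = copy_nr * chunk
    # the last copy picks up any remainder bytes
    end = total if copy_nr == n_copies - 1 else start + chunk
    return start, end

# Three copies over files of 1000 and 500 bytes:
byte_range_for_copy([1000, 500], 0, 3)  # -> (0, 500)
byte_range_for_copy([1000, 500], 2, 3)  # -> (1000, 1500)
```

If one copy saw a different file list, its offsets would point into the wrong data, which is why the step requires all copies to receive all files.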