Input tab

Use the options in this tab to define your input source for the Redshift COPY command:

Source
Specify the file, folder, prefix, or variable of the S3 bucket to use as the input for the Redshift COPY command. See Syntax of the COPY from Amazon S3 for more details on how to specify this option. An example COPY statement appears after this table.
What file type is your source
Select the file type of the input source. You can select one of the following types:
  • Delimited text: The input source is character-delimited UTF-8 text.
  • Avro: The input source is a file in the Avro data serialization format.
  • JSON: The input source is a JavaScript Object Notation (JSON) data file containing a set of either objects or arrays.
  • ORC: The input source is an Optimized Row Columnar (ORC) file containing Hive data. See the Administer Pentaho Data Integration and Analytics document for further configuration information when using Hive with Spark on AEL.
  • Parquet: The input source is a Parquet file of nested data structures in a flat columnar format.
  • Fixed width: The input source is a file where each column has a fixed width, rather than columns separated by a delimiter.
Compression
Select the type of compression applied to your input source:
  • None: No compression is applied.
  • BZIP2: The Burrows–Wheeler algorithm is used to compress the input source.
  • GZIP: The DEFLATE algorithm is used as a basis for compressing the input source.
  • LZOP: The Lempel–Ziv–Oberhumer lossless algorithm is used as a basis for compressing the input source.
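
For reference, these options are assembled into a Redshift COPY command. The following is only an illustrative sketch, not the exact statement the step generates; the table name, S3 path, and IAM role are hypothetical placeholders:

    -- Delimited text source in S3, compressed with GZIP (placeholder names)
    COPY public.sales_staging
    FROM 's3://example-bucket/input/sales_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
    FORMAT AS CSV
    GZIP;

    -- The same load from Parquet files; Parquet handles its own compression
    COPY public.sales_staging
    FROM 's3://example-bucket/input/parquet/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
    FORMAT AS PARQUET;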

Depending on what file type you selected for What file type is your source, the following file settings appear at the bottom of this tab:

Delimited text
Specify the following settings for a delimited text file (the sketch after this table shows how several of these settings map to COPY parameters):
Leading rows to skip
Specify the number of rows to use as an offset from the beginning of the file.
Delimiter
Specify the character used to separate a data field.
Quote character
Specify the character used to enclose a data field.
Remove quotes
Select one of the following values to indicate whether quotation characters should be removed from a data field during the bulk load:
  • Yes: removes the quotation characters.
  • No: retains the quotation characters.
Ignore blank lines
Select one of the following values to indicate whether blank lines should be removed from the data during the bulk load:
  • Yes: removes the blank lines.
  • No: retains the blank lines.
Empty as null
Select one of the following values to indicate whether empty data values should be set to null during the bulk load:
  • Yes: sets empty data values to null.
  • No: leaves data values as empty.

Avro
Select one of the following values for the Empty as null option to indicate whether empty data values should be set to null during the bulk load:
  • Yes: sets empty data values to null.
  • No: leaves data values as empty.

JSON
Select one of the following values for the Empty as null option to indicate whether empty data values should be set to null during the bulk load:
  • Yes: sets empty data values to null.
  • No: leaves data values as empty.

ORC
Additional file settings are handled by the ORC file itself.

Parquet
Additional file settings are handled by the Parquet file itself.

Fixed width
Specify the following settings for a fixed width file:
Columns:Width
Specify the columns and their fixed widths.
Leading rows to skip
Specify the number of rows to use as an offset from the beginning of the file.
Ignore blank lines
Select one of the following values to indicate whether blank lines should be removed from the data during the bulk load:
  • Yes: removes the blank lines.
  • No: retains the blank lines.
Empty as null
Select one of the following values to indicate whether empty data values should be set to null during the bulk load:
  • Yes: sets empty data values to null.
  • No: leaves data values as empty.
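
The delimited text and fixed width settings above correspond to parameters of the COPY command. The sketch below is illustrative only; the table name, S3 paths, IAM role, and column widths are hypothetical placeholders, and the exact statement the step generates may differ:

    -- Delimited text: skip one header row, pipe-delimited fields,
    -- strip enclosing quotes, and load empty fields as NULL
    COPY public.sales_staging
    FROM 's3://example-bucket/input/sales_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
    DELIMITER '|'
    REMOVEQUOTES
    IGNOREHEADER 1
    EMPTYASNULL;

    -- Fixed width: column widths supplied as a label:width specification
    COPY public.sales_staging
    FROM 's3://example-bucket/input/fixed/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
    FIXEDWIDTH 'id:8,name:32,amount:12'
    IGNOREHEADER 1
    EMPTYASNULL;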