The following steps and entries support the VFS browser:
- Amazon EMR Job Executor (introduced in v.9.0)
- Amazon Hive Job Executor (introduced in v.9.0)
- AMQP Consumer (introduced in v.9.0)
- Avro Input (introduced in v.8.3)
- Avro Output (introduced in v.8.3)
- ETL Metadata Injection
- File Exists (Job Entry)
- Hadoop Copy Files
- Hadoop File Input
- Hadoop File Output
- JMS Consumer (introduced in v.9.0)
- Job Executor (introduced in v.9.0)
- Kafka Consumer (introduced in v.9.0)
- Kinesis Consumer (introduced in v.9.0)
- Mapping (sub-transformation)
- MQTT Consumer (introduced in v.9.0)
- Oozie Job Executor (introduced in v.9.0)
- ORC Input (introduced in v.8.3)
- ORC Output (introduced in v.8.3)
- Parquet Input (introduced in v.8.3)
- Parquet Output (introduced in v.8.3)
- Simple Mapping (introduced in v.9.0)
- Single Threader (introduced in v.9.0)
- Sqoop Export (introduced in v.9.0)
- Sqoop Import (introduced in v.9.0)
- Transformation Executor (introduced in v.9.0)
- Weka Scoring (introduced in v.9.0)
Note: If you have a Pentaho VFS (PVFS) address for an established VFS connection, you can copy and paste that PVFS location directly into the file or folder option of any of the steps and entries listed above. You do not need to click Browse to open the VFS browser.
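As an illustration of the paste-in workflow, a PVFS location names the established VFS connection followed by the path within it. The connection name and path below are hypothetical examples, not values from your environment:

```
pvfs://my-s3-connection/sales/2024/orders.csv
```

Here `my-s3-connection` is the name of an established VFS connection, and the remainder is the path to the target file within that connection.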
You can configure the VFS dialog box options through specific transformation parameters. See Configure SFTP VFS for details on configuring these options for SFTP.
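As a hedged sketch of what such transformation parameters look like, the entries below use an illustrative `vfs.sftp.<option>` naming pattern; the option names and values are assumptions for demonstration, and Configure SFTP VFS is the authoritative list:

```
# Hypothetical transformation parameters for an SFTP VFS connection.
# Actual parameter names are documented in Configure SFTP VFS.
vfs.sftp.identity = /home/etl/.ssh/id_rsa
vfs.sftp.StrictHostKeyChecking = no
```

Parameters defined this way are picked up by the VFS dialog boxes when the transformation runs, so the same transformation can target different servers by changing parameter values rather than editing each step.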