You can select the Pentaho Engine to run transformations in the default Pentaho (Kettle) environment.
You can also use the Spark Submit job entry to run big data transformations on your Hadoop cluster, distributing the processing of large amounts of data across multiple nodes. See Spark Submit for details.
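To make the distinction concrete, below is a minimal sketch of the kind of Spark application the Spark Submit job entry could launch on a cluster. The file name, application name, input path, and the submit arguments in the comment are illustrative assumptions, not values taken from this documentation.

```python
# Minimal PySpark application of the kind the Spark Submit job entry can launch.
# The entry's dialog fields map to spark-submit arguments, roughly like
# (illustrative invocation, not the entry's literal output):
#   spark-submit --master yarn --deploy-mode cluster wordcount.py
from pyspark.sql import SparkSession

def main():
    # Spark distributes this work across the nodes of the Hadoop cluster.
    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Hypothetical HDFS input path; each line becomes a row with a "value" column.
    lines = spark.read.text("hdfs:///data/input.txt")

    # Classic word count: split lines into words, pair each with 1, sum per word.
    counts = (lines.rdd
              .flatMap(lambda row: row.value.split())
              .map(lambda word: (word, 1))
              .reduceByKey(lambda a, b: a + b))

    # Pull a small sample back to the driver for display.
    for word, count in counts.take(10):
        print(word, count)

    spark.stop()

if __name__ == "__main__":
    main()
```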