The Unique Rows (HashSet) step removes duplicate rows from the input stream and passes only the unique rows downstream as input data for the next step.
This step differs from the Unique Rows transformation step in that it keeps track of the duplicate rows in memory, so it does not require a sorted input to detect duplicates.
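The trade-off between the two steps can be illustrated with a short sketch (plain Java, not PDI source code; the row representation is a simplification). A hash set remembers every row already seen, so duplicates are caught regardless of input order, at the cost of holding all distinct rows in memory; the plain Unique Rows step instead compares each row only to the previous one, which works only on sorted input.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HashSetDedupSketch {
    // A row is modeled here as a simple list of field values (an assumption
    // for illustration; PDI uses its own row metadata and value types).
    static List<List<String>> uniqueRows(List<List<String>> input) {
        Set<List<String>> seen = new HashSet<>(); // grows with the number of distinct rows
        List<List<String>> output = new ArrayList<>();
        for (List<String> row : input) {
            if (seen.add(row)) {   // add() returns false when the row was already seen
                output.add(row);
            }
        }
        return output;
    }

    public static void main(String[] args) {
        List<List<String>> rows = Arrays.asList(
                Arrays.asList("alice", "NY"),
                Arrays.asList("bob", "LA"),
                Arrays.asList("alice", "NY")); // duplicate arrives out of order
        System.out.println(uniqueRows(rows));  // [[alice, NY], [bob, LA]]
    }
}
```

Because the `seen` set retains one entry per distinct row, memory use scales with the number of distinct rows rather than with row order, which is why the note below recommends running this step on machines with adequate memory.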
Note: Because of its memory requirements, this step is intended for non-client machines. The amount of memory and hardware required varies with the size of the data you are processing. Alternatively, you can run this step with the Spark processing engine to help overcome memory constraints. See the Administer Pentaho Data Integration and Analytics document for instructions on setting up the Adaptive Execution Layer (AEL) to use the Spark processing engine from PDI.