VSP One SDS Block is a storage software product that builds and sets up a virtual storage system from multiple general-purpose servers.
The VSP One SDS Block system provides a highly reliable, high-performance, high-capacity block storage service. Its operation and management features are as follows:
- The initial cost is low because general-purpose servers (x86 servers) are used.
- You can quickly introduce the system, add storage, or reduce storage.
- You can centrally manage multiple storage nodes as a single storage system.
- You can easily check the operation status of resources, the maximum volume capacity that can be created, and the current I/O performance.
- The REST APIs for VSP One SDS Block are compatible with the REST APIs for Hitachi Storage Advisor Embedded. Administrators with experience in managing Hitachi storage products can manage storage with VSP One SDS Block without spending extra time learning it.
User data protection methods
VSP One SDS Block supports Hitachi Polyphase Erasure Coding (HPEC) and Mirroring as user data protection functions. HPEC is a proprietary data protection method that Hitachi developed for SDS systems; it achieves outstanding capacity efficiency while requiring only narrow inter-node network bandwidth. When HPEC is used, user data is stored on a local drive. Mirroring is a data protection method that stores a copy of user data on another storage node.
(Virtual machine)(Bare metal) Select a method from HPEC 4D+1P, HPEC 4D+2P, or Mirroring Duplication and specify the method during setup.
(Cloud) Select either the HPEC 4D+2P or the Mirroring Duplication method and specify the method during setup.
The user data protection method restricts which combinations of storage controller redundancy and cluster master node redundancy settings can be used.
(Virtual machine)(Bare metal) For how to design the capacity for the HPEC 4D+1P, HPEC 4D+2P, or Mirroring Duplication method, see Capacity design (for HPEC 4D+1P) (Virtual machine)(Bare metal), Capacity design (for HPEC 4D+2P), or Capacity design (for Mirroring) in this manual.
(Cloud) For how to design the capacity for the HPEC 4D+2P or Mirroring Duplication method, see Capacity design (for HPEC 4D+2P) or Capacity design (for Mirroring) in this manual.
- (Virtual machine)(Bare metal) If the number of failures exceeds the allowable limit, you need to re-install VSP One SDS Block or restore the configuration from the configuration backup file. To prepare for the case where more failures occur than can be tolerated, back up user data to other media and obtain the configuration backup file of VSP One SDS Block beforehand. User data cannot be restored by re-installing VSP One SDS Block or restoring the configuration from the configuration backup file.
- (Cloud) If the number of failures exceeds the allowable limit, you need to re-install VSP One SDS Block. User data cannot be restored by re-installing VSP One SDS Block.
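As a rough illustration of how the usable-capacity percentages documented for each protection method (see the sections that follow) translate into concrete figures, the following sketch can help with initial sizing. This is not a Hitachi tool; the function name and the simple percentage model are assumptions for illustration, and the ranges ignore per-node rebuild capacity when the rebuild capacity policy is "Fixed".

```python
# Illustrative sketch (not a Hitachi tool): estimate the usable-capacity
# range from the raw physical capacity, using the percentage ranges that
# this manual documents for each user data protection method.

USABLE_RANGES = {
    "HPEC 4D+1P": (0.60, 0.75),            # 60 to 75% of physical capacity
    "HPEC 4D+2P": (0.50, 0.65),            # 50 to 65% of physical capacity
    "Mirroring Duplication": (0.40, 0.48), # 40 to 48% of physical capacity
}

def usable_capacity_range(physical_tib: float, method: str) -> tuple[float, float]:
    """Return the (min, max) usable capacity in TiB for the given method."""
    low, high = USABLE_RANGES[method]
    return (physical_tib * low, physical_tib * high)

# Example: 100 TiB of physical capacity under HPEC 4D+2P.
low, high = usable_capacity_range(100.0, "HPEC 4D+2P")
print(f"Usable: {low:.0f}-{high:.0f} TiB")  # prints Usable: 50-65 TiB
```

The exact figure within each range depends on configuration details such as the rebuild capacity on each storage node, so treat the output only as a first-order estimate.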
HPEC 4D+1P (4 data areas + 1 parity area) (Virtual machine)(Bare metal)
This method is suitable when capacity efficiency and performance are important.
- User data and its parities are stored on five or more different storage nodes for redundancy.
- At least five storage nodes are required.
- The maximum capacity available to users is 60 to 75% of the physical capacity.
  However, if the rebuild capacity policy (rebuildCapacityPolicy) is set to "Fixed" (default), users can use a maximum of 60 to 75% of the physical capacity excluding the rebuild capacity on each storage node. For details about the rebuild capacity, see Rebuild capacity of a storage pool.
- The read performance is the same as that of HPEC 4D+2P, but the write performance is better than that of HPEC 4D+2P.
- The number of failures that can be tolerated is one.
  The number is the sum of the number of defective storage nodes and the number of defective drives. However, the number is counted as one failure in the following cases:
  - One or more drive failures occurred on a faulty storage node.
  - Two or more drive failures occurred on a single storage node.
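HPEC's actual coding is proprietary, but the behavior of a single-parity layout like 4D+1P can be illustrated with plain XOR parity: one parity area is computed over four data areas, and any single lost area can be rebuilt from the remaining four. The following sketch is only a conceptual model, not the HPEC algorithm.

```python
# Illustrative sketch only: HPEC's actual coding is proprietary, but a
# single-parity (4D+1P) layout can be modeled with XOR parity, which
# tolerates the loss of any one of the five areas.
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # 4 data areas
parity = xor_blocks(data)                    # 1 parity area

# Lose one data area (say index 2) and rebuild it from the survivors:
survivors = [d for i, d in enumerate(data) if i != 2]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[2]                    # the lost area is recovered
```

Tolerating two simultaneous losses, as HPEC 4D+2P does, requires a second, independent parity (as in Reed-Solomon-style erasure codes), which is why 4D+2P needs six storage nodes instead of five.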
HPEC 4D+2P (4 data areas + 2 parity areas)
This method is suitable when the number of failures that can be tolerated is important.
- User data and its parities are stored on six or more different storage nodes for redundancy.
- At least six storage nodes are required.
- The maximum capacity available to users is 50 to 65% of the physical capacity.
  However, if the rebuild capacity policy (rebuildCapacityPolicy) is set to "Fixed" (default), users can use a maximum of 50 to 65% of the physical capacity excluding the rebuild capacity on each storage node. For details about the rebuild capacity, see Rebuild capacity of a storage pool.
- The read performance is the same as that of HPEC 4D+1P, but the write performance is about 60% of that of HPEC 4D+1P.
- The number of failures that can be tolerated is two.
  The number is the sum of the number of defective storage nodes and the number of defective drives. However, the number is counted as one failure in the following cases:
  - One or more drive failures occurred on a faulty storage node.
  - Two or more drive failures occurred on a single storage node.
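The failure-counting rule above (used by both HPEC methods and by Mirroring) can be summarized as: each affected storage node counts as one failure, whether the node itself failed, one or more of its drives failed, or both. A minimal sketch of that rule, with hypothetical names and data structures, might look like this:

```python
# Illustrative sketch of the failure-counting rule described above: a faulty
# storage node and any drive failures on it count together as one failure,
# and multiple drive failures on a single node also count as one failure.

def count_failures(failed_nodes: set[str], failed_drives: dict[str, int]) -> int:
    """failed_drives maps a storage node name to its number of failed drives."""
    nodes_with_drive_failures = {n for n, k in failed_drives.items() if k > 0}
    # The count is the number of distinct affected storage nodes.
    return len(failed_nodes | nodes_with_drive_failures)

# Two drives failed on node1, and node2 itself failed: 2 failures in total,
# which exceeds what HPEC 4D+1P tolerates (1) but not HPEC 4D+2P (2).
print(count_failures({"node2"}, {"node1": 2}))  # prints 2
```

Note that three failed drives spread across three different storage nodes would count as three failures, exceeding even HPEC 4D+2P's limit.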
[Figure: how HPEC stores and encodes data]
(A) Data is stored locally, which reduces network communication during reads.
(B) Primary coding: coding reduces the data volume required for two redundancies.
(C) Secondary coding: the data storage capacity is reduced to achieve capacity efficiency equivalent to that of EC (Erasure Coding).
Mirroring Duplication (1 data area + 1 data copy)
This method is suitable when performance is the priority.
- User data and its copies are stored redundantly on two different storage nodes.
- At least three storage nodes are required.
- The maximum capacity available to users is 40 to 48% of the physical capacity.
  However, if the rebuild capacity policy (rebuildCapacityPolicy) is set to "Fixed" (default), users can use a maximum of 40 to 48% of the physical capacity excluding the rebuild capacity on each storage node. For details about the rebuild capacity, see Rebuild capacity of a storage pool.
- (Virtual machine)(Bare metal) The read performance of this method is equivalent to that of the HPEC 4D+1P and HPEC 4D+2P methods, but the write performance is superior to that of the HPEC 4D+1P method. The fault tolerance against storage node or drive failures is also superior to that of the HPEC methods.
- (Cloud) The read performance is equivalent to that of the HPEC 4D+2P method. The fault tolerance against storage node or drive failures is also superior to that of the HPEC 4D+2P method.
- The number of failures that can be tolerated is one.
  The number is the sum of the number of defective storage nodes and the number of defective drives. However, the number is counted as one failure in the following cases:
  - One or more drive failures occurred on a faulty storage node.
  - Two or more drive failures occurred on a single storage node.
  However, two or more failures can be tolerated except in the following cases:
  - Condition 1: Storage node or drive failures occur on both storage nodes that belong to redundant storage controllers.
    For details about storage controllers, see Capacity management of storage nodes by the storage controllers in this manual.
    CAUTION: Failures might not be tolerated during the following periods, regardless of Condition 1:
    - After storage node addition, until drive data relocation is complete
    - While a storage node is being removed
  - Condition 2: Failures occur on two or more cluster master nodes.
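The two conditions above can be expressed as a simple check: with Mirroring Duplication, two or more simultaneous failures may still be tolerated unless both nodes of a redundant storage-controller pair are affected, or two or more cluster master nodes are affected. The following sketch models only those two stated conditions; the function name, node names, and data structures are hypothetical, and it does not cover the cautioned periods (node addition or removal).

```python
# Illustrative sketch of Condition 1 and Condition 2 above; names and
# structures are hypothetical, not part of the VSP One SDS Block API.

def multi_failure_tolerated(affected_nodes: set[str],
                            controller_pairs: list[tuple[str, str]],
                            cluster_master_nodes: set[str]) -> bool:
    # Condition 1: both storage nodes of a redundant storage-controller
    # pair have node or drive failures.
    for a, b in controller_pairs:
        if a in affected_nodes and b in affected_nodes:
            return False
    # Condition 2: failures occur on two or more cluster master nodes.
    if len(affected_nodes & cluster_master_nodes) >= 2:
        return False
    return True

pairs = [("node1", "node2"), ("node3", "node4")]
masters = {"node1", "node2", "node3"}
print(multi_failure_tolerated({"node1", "node3"}, pairs, masters))  # prints False (Condition 2)
print(multi_failure_tolerated({"node1", "node4"}, pairs, masters))  # prints True
```

Regardless of the result of such a check, only one failure is guaranteed to be tolerated; treat anything beyond that as best-effort.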