Remove a faulty drive as follows.
Required role: Storage
1. Verify the ID of the faulty drive to be removed and the ID of the storage node that contains the faulty drive.
Also record the WWID of the faulty drive to be removed. The WWID is used to identify the faulty drive when you remove it from the server.
Run either of the following commands with "Blockage" specified for the query parameter "status."
REST API: GET /v1/objects/drives
CLI: drive_list
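For reference, the REST API call for this step might look as follows. This is a sketch only: the cluster address and credentials are placeholders, and your environment might require session-based authentication instead of basic authentication.
curl -s -k -u <user>:<password> "https://<cluster-address>/v1/objects/drives?status=Blockage"
From the response, record the drive ID, the ID of the storage node that contains the drive, and the WWID.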
2. Verify the status of the storage node that contains the faulty drive.
Run either of the following commands with the ID of that storage node specified.
REST API: GET /v1/objects/storage-nodes/<id>
CLI: storage_node_show
If the status of the storage node is "Ready" or "RemovalFailed", go to the next step.
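For reference, a sketch of the REST API call (placeholders as in the sketch in step 1):
curl -s -k -u <user>:<password> "https://<cluster-address>/v1/objects/storage-nodes/<id>"
Check the status shown in the response.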
3. Turn on the locator LED of the drive to be removed.
Run either of the following commands with "TurnOn" specified for the operationType parameter (operation_type in the case of the CLI).
REST API: POST /v1/objects/drives/<id>/actions/control-locator-led/invoke
CLI: drive_control_locator_led
Verify the job ID that is displayed after the command is run.
CAUTION: If a storage node failure occurs during a drive removal operation, the locator LED on/off status shown by the REST API, CLI, or VSP One SDS Block Administrator might differ from the on/off status of the locator LED on the physical drive. The status shown by the REST API, CLI, or VSP One SDS Block Administrator is updated and corrected after the storage node is recovered from the failure.
Note: If the configuration differs from those described in VSP One SDS Block Hardware Compatibility Reference, locator LED operation might not be available. In this case, confirm the drive location by performing the procedure indicated in the Note in step 5.
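For reference, a sketch of the REST API call for this step (placeholders as in step 1; the request body follows the operationType parameter described above):
curl -s -k -u <user>:<password> -X POST -H "Content-Type: application/json" -d '{"operationType":"TurnOn"}' "https://<cluster-address>/v1/objects/drives/<id>/actions/control-locator-led/invoke"
The response contains the job ID to record.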
4. Verify the state of the job.
Run either of the following commands with the job ID specified.
REST API: GET /v1/objects/jobs/<jobId>
CLI: job_show
If the job state is "Succeeded", the job is completed.
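For reference, the job state can be polled as in the following sketch. It assumes the jq utility is available and that the job object reports its state in a "state" attribute; "Failed" is an assumed terminal state, so adjust the terminal states to your environment.
while :; do
  state=$(curl -s -k -u <user>:<password> "https://<cluster-address>/v1/objects/jobs/<jobId>" | jq -r '.state')
  echo "job state: $state"
  case "$state" in
    Succeeded) break ;;                  # job completed successfully
    Failed) echo "job failed"; break ;;  # assumed failure state
  esac
  sleep 10                               # wait before polling again
done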
5. On the server, find the drive whose locator LED is lit and confirm its mounting position. Then, remove the faulty drive from the server.
For details, see the documentation of your server vendor.
CAUTION: If a failure occurs during a drive removal operation, the locator LED might be turned off. In such a case, resume from step 3.
If you interrupt a drive removal operation and perform a maintenance operation that requires the storage node to be restarted, the locator LED might be turned off. In such a case, resume from step 3.
Note:
- If the locator LED is not lit, confirm the mounting position of the drive to be removed by using the following method: find the drive whose WWID, recorded in step 1, matches the WWN or EUI value recorded at the time of drive addition, and confirm the mounting position recorded in association with that WWN or EUI.
- If the value recorded at the time of drive addition was a WWN, the last 1 to 3 digits of the right-side 16-digit part of the WWID recorded in step 1 might differ from it. For example (hypothetical values), a WWN recorded as 50000396dc8812a8 could correspond to a WWID whose 16-digit part ends in 12ab.
6. Verify the state of the write back mode with cache protection.
REST API: GET /v1/objects/storage
CLI: storage_show
Take the following action according to the state of the write back mode with cache protection (writeBackModeWithCacheProtection).
- If the state is "Disabled" or "Enabling", go to step 8.
- If the state is "Enabled" or "Disabling", go to the next step.
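For reference, a sketch of checking this state with the REST API (placeholders as in step 1; assumes jq and that writeBackModeWithCacheProtection is a top-level attribute of the response):
curl -s -k -u <user>:<password> "https://<cluster-address>/v1/objects/storage" | jq -r '.writeBackModeWithCacheProtection'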
7. See Confirming metadata redundancy for cache protection to verify that metadata redundancy for cache protection is not degraded.
If the redundancy is not degraded, go to the next step.
If the redundancy is degraded, wait until it has recovered. If event log KARS06596-E is output, take action according to the event log, and then perform step 7 again.
Note: If the storage node is blocked, the metadata redundancy for cache protection is not recovered unless the storage node is recovered by a maintenance operation. First recover the blocked storage node by performing a maintenance operation.
8. See Verifying Rebuild status and determine whether the Rebuild is being performed or whether an error has occurred during the Rebuild.
If the Rebuild is not being performed and no error has occurred, go to the next step.
If the Rebuild is being performed or an error has occurred during the Rebuild, take appropriate action (see Verifying Rebuild status).
Note: Before proceeding to the next step, obtain a list of the drives and verify that the target faulty drive exists. If the target faulty drive does not exist, go to step 12. For how to verify the target faulty drive, see step 1.
9. Remove the faulty drive.
Run either of the following commands with the drive ID obtained in step 1 specified.
REST API: POST /v1/objects/drives/<id>/actions/remove/invoke
CLI: drive_remove
Verify the job ID that is displayed after the command is run.
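For reference, a sketch of the REST API call (placeholders as in step 1):
curl -s -k -u <user>:<password> -X POST "https://<cluster-address>/v1/objects/drives/<id>/actions/remove/invoke"
The response contains the job ID to verify in the next step.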
10. Verify the state of the job.
Run either of the following commands with the job ID specified.
REST API: GET /v1/objects/jobs/<jobId>
CLI: job_show
If the job state is "Succeeded", the job is completed.
11. Obtain a list of drives and verify that the target faulty drive has been removed.
After step 9 is performed, removal of the drive might take approximately one minute.
REST API: GET /v1/objects/drives
CLI: drive_list
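For reference, the check can be repeated until the drive disappears from the list, as in the following sketch. It assumes jq and that the drive list is returned in a "data" array whose elements have an "id" attribute; adjust the attribute names to the actual response.
for i in 1 2 3 4 5 6; do                   # retry for about one minute
  if curl -s -k -u <user>:<password> "https://<cluster-address>/v1/objects/drives" | jq -r '.data[].id' | grep -qx "<drive-id>"; then
    echo "drive still present"; sleep 10
  else
    echo "drive removed"; break
  fi
done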
12. Back up the configuration information.
Perform this step by referring to Backing up the configuration information (Bare metal).
If you continue operations with other procedures, you must back up the configuration information after you have completed all operations.