-
Required role: Service and Storage
-
If the storage cluster to be migrated uses a Multi-AZ configuration, identify the
migration-destination fault domain from the list of fault domain information.
Run the following command.
REST API: GET /v1/objects/fault-domains
CLI: fault_domain_list
Verify the ID of the fault domain whose physicalZone and logicalZone values match those of the migration-destination Availability Zone.
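The fault-domain check above can be sketched as follows. This is a minimal Python illustration, not product code: the response shape (a list of entries with id, physicalZone, and logicalZone fields) is inferred from the attribute names in this step, and the sample values are hypothetical.

```python
# Sketch: pick the migration-destination fault domain from the
# GET /v1/objects/fault-domains output (entry layout is an assumption).
def find_fault_domain_id(fault_domains, physical_zone, logical_zone):
    """Return the ID of the fault domain matching the destination AZ, or None."""
    for fd in fault_domains:
        if (fd.get("physicalZone") == physical_zone
                and fd.get("logicalZone") == logical_zone):
            return fd["id"]
    return None

# Hypothetical example data for illustration only.
domains = [
    {"id": "fd-0", "physicalZone": "zone-1", "logicalZone": "1"},
    {"id": "fd-1", "physicalZone": "zone-2", "logicalZone": "2"},
]
print(find_fault_domain_id(domains, "zone-2", "2"))  # fd-1
```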
-
Start data copy.
Run the following command with the ID of the storage node (initiator node) specified.
REST API: POST /v1/objects/storage-nodes/<id>/actions/migrate-external-volumes/invoke
-
If you want to use the data reduction function for the migration-destination volumes, specify the createCompressionVolumes parameter. For details about the createCompressionVolumes parameter, see REST API Reference. Beforehand, see Requirements for enabling the data reduction function for volumes in Overview of the data reduction function for volumes in the VSP One SDS Block and SDS Cloud System Administration to ensure that the requirements are met.
-
If the storage cluster to be migrated uses a Multi-AZ configuration, specify, for the destinationFaultDomainId parameter, the ID of the fault domain to which all external volumes are migrated. For details about the destinationFaultDomainId parameter, see REST API Reference.
Data copy from the Azure managed disks recognized by the initiator node in step 18 of Adding initiator nodes starts.
Record the job ID that is displayed after the command is run.
CAUTION:
The specified value for the data reduction function of the migration-destination volumes is applied to all the volumes to be data-copied. Enabling or disabling the function on a per-volume basis is not possible. Also, the data reduction setting cannot be changed after data copy is complete. For this reason, if you want to copy data with data reduction enabled for some volumes and disabled for others, you must perform data copy multiple times, separately.
-
The specified migration-destination fault domain is applied to all the volumes subject to data copy. Specifying migration-destination fault domains on a per-volume basis is not possible. Therefore, to migrate volumes to different fault domains, you must perform data copy once for each migration-destination fault domain.
Note:Data copy is not performed even if step 2 is performed during the period after data copy has started (step 3) and before completion of data copy is confirmed (step 7).
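The request for this step can be sketched as follows. The URL pattern and the createCompressionVolumes and destinationFaultDomainId parameters are taken from this procedure; the base URL, authentication, and the exact body schema are assumptions, so treat this only as an illustration.

```python
import json

BASE_URL = "https://example-cluster"  # assumption: actual cluster address and auth differ

def build_migrate_request(node_id, create_compression_volumes=None,
                          destination_fault_domain_id=None):
    """Build the URL and JSON body for
    POST /v1/objects/storage-nodes/<id>/actions/migrate-external-volumes/invoke.
    Omitted parameters are left out of the body."""
    url = (f"{BASE_URL}/v1/objects/storage-nodes/{node_id}"
           "/actions/migrate-external-volumes/invoke")
    body = {}
    if create_compression_volumes is not None:
        body["createCompressionVolumes"] = create_compression_volumes
    if destination_fault_domain_id is not None:
        body["destinationFaultDomainId"] = destination_fault_domain_id
    return url, json.dumps(body)

url, body = build_migrate_request("node-1", True, "fd-1")
```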
-
-
Verify the state of the job.
Run either of the following commands with the job ID specified.
REST API: GET /v1/objects/jobs/<jobId>
CLI: job_show
If the job state is "Failed" or "Unknown", see Troubleshooting in this document.
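The job check in this step can be sketched as a polling loop. The states "Failed" and "Unknown" come from this procedure; the field name state, the terminal value "Succeeded", and the caller-supplied fetch_job helper are assumptions.

```python
import time

def job_needs_troubleshooting(state):
    """True for the job states that this procedure says require Troubleshooting."""
    return state in ("Failed", "Unknown")

def wait_for_job(fetch_job, job_id, interval=10, timeout=3600):
    """Poll until the job reaches a terminal state. fetch_job(job_id) is a
    caller-supplied function that performs GET /v1/objects/jobs/<jobId>
    and returns the decoded JSON."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = fetch_job(job_id).get("state")
        if state == "Succeeded" or job_needs_troubleshooting(state):
            return state
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in {timeout}s")
```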
-
Verify that an event log (messageId = KARS06305-I) indicating the start of data
copy is output.
REST API: GET /v1/objects/event-logs
CLI: event_log_list
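Filtering the event log for the data-copy-start message can be sketched as follows; only the messageId value KARS06305-I comes from this procedure, and the entry layout is an assumption.

```python
def find_events(event_logs, message_id="KARS06305-I"):
    """Return the event-log entries with the given messageId
    (KARS06305-I indicates the start of data copy)."""
    return [e for e in event_logs if e.get("messageId") == message_id]

# Hypothetical entries for illustration; real entries carry more fields.
sample_logs = [
    {"messageId": "KARS06305-I"},
    {"messageId": "KARS06307-I"},
]
started = find_events(sample_logs)
```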
-
Verify that the dataMigration status of the external volumes linked to the
migration-target virtual volumes (external volumes with the same IDs as those of
the virtual volumes) has changed to "DataMigrationSetupCompleted" or "Copying"
(see Obtaining a list of external volume information).
After verification, go to step 6.
If the dataMigration status is neither "DataMigrationSetupCompleted" nor "Copying", see Troubleshooting.
Note:If the data copy is complete, the status of externalVolumePaths is "Offline." Even when the migration completes, go to step 6.
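The status check above reduces to a set-membership test; the two status values come from this procedure, and the helper name is illustrative.

```python
# dataMigration statuses that allow proceeding to step 6; any other
# status calls for Troubleshooting.
IN_PROGRESS_STATUSES = {"DataMigrationSetupCompleted", "Copying"}

def data_copy_in_progress(status):
    return status in IN_PROGRESS_STATUSES
```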
-
Let the migration destination server recognize the virtual volumes, and then
restart the operations that have been performed on the migration source server
on the migration destination server.
Note that, during data copy, I/O performance on the virtual volume is degraded for access from the migration-destination server.
-
Verify that the volumeIds of all virtual volumes to be migrated are displayed
as null (see Obtaining a list of external volume information).
If the volumeIds are displayed as null, the virtual volumes that existed on the initiator node have been migrated. Upon completion of data migration, the volumes are created as ordinary volumes in VSP One SDS Block.
Obtain a list of volume information (see Obtaining a list of volumes in the VSP One SDS Block and SDS Cloud System Administration) and verify that volumeType has changed to "Normal", indicating an ordinary volume. After verification, go to step 8.
After volumeType has changed to "Normal", you can perform volume operations, such as expanding volumes or editing the settings.
If the status of dataMigration becomes "Error", or the volumeId of the virtual volume is displayed but dataMigration itself is not displayed in the obtained list of external volume information, see Troubleshooting.
Note:
The path between the virtual volume and migration-destination server that you created in step 20 of Adding initiator nodes is inherited by the migration-destination volume. You do not need to set the path between the migration-destination volume and migration-destination server again.
After data copy for a virtual volume is complete, the degraded I/O performance on the virtual volume for access from the migration-destination server returns to normal.
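The completion checks in this step can be sketched as follows; the field names volumeId and volumeType and the value "Normal" come from this procedure, while the list shapes and the helper itself are assumptions.

```python
def migration_complete(external_volumes, volumes):
    """True when every migration-target external volume shows volumeId null
    (None) and every corresponding volume has volumeType "Normal"."""
    ids_migrated = all(ev.get("volumeId") is None for ev in external_volumes)
    all_normal = all(v.get("volumeType") == "Normal" for v in volumes)
    return ids_migrated and all_normal
```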
-
To identify the correspondence between the migrated volumes and the migrated Azure managed disks, check the nickname of each migrated volume. The nickname indicates the Azure managed disk name.
A nickname can contain a maximum of 64 characters. If an Azure managed disk name exceeds 64 characters, the ID and disk name of the migrated volume are output in the event log (KARS06307-I) as an external volume serial number. Identify the migrated Azure managed disk by referencing the output contents.
CAUTION:If a failure occurs in an Azure managed disk to be migrated, or elsewhere, during the period from the start to the end of data copy, data copy stops for any virtual volume whose dataMigration status is not yet "CopyCompleted", and its state reverts to how it was before step 2. To restart data copy, correct the failure by referring to Troubleshooting. Then, retry data copy of the virtual volume from step 1.
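The 64-character rule above can be expressed as a simple check; the limit comes from this procedure, and the helper name is illustrative.

```python
NICKNAME_MAX = 64  # maximum number of characters for a volume nickname

def nickname_holds_full_name(disk_name):
    """True when the Azure managed disk name fits in the nickname; otherwise
    the volume-to-disk mapping must be taken from event log KARS06307-I."""
    return len(disk_name) <= NICKNAME_MAX
```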
-
-
Detach the Azure managed disks for which data migration has completed from the
Azure Virtual Machines of the initiator node by using the Azure portal.
Note:
If the data-migration-source Azure managed disks are no longer needed, remove them.
-
Wait approximately 20 seconds, and then verify that external volumes and
virtual volumes are deleted.
Verify that all the external volumes and virtual volumes to be migrated are no longer displayed (see Obtaining a list of external volume information).
CAUTION:If the migration-target external volumes and virtual volumes are still displayed in step 9, collect a dump log and contact the support center. See Contact customer support for details.
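The wait-and-verify in this step can be sketched as follows; the 20-second wait comes from this procedure, while the id field and the caller-supplied listing function are assumptions.

```python
import time

def external_volumes_deleted(list_external_volumes, target_ids, wait=20):
    """Wait, then confirm that none of the migration-target external volumes
    still appear. list_external_volumes is a caller-supplied function that
    returns the current list of external volume information."""
    time.sleep(wait)
    remaining = {v["id"] for v in list_external_volumes()} & set(target_ids)
    return not remaining
```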
- If there are remaining Azure managed disks for which data migration has not been performed, repeat the procedures from step 18 of Adding initiator nodes to step 9 of Performing data copy.