-
Required role: Service and Storage
-
See Exporting configuration files (Cloud for Microsoft
Azure) in the VSP One SDS Block and SDS Cloud System Administration to obtain
configuration files for adding initiator nodes from VSP One SDS Block.
When exporting configuration files, use the REST API to specify "AddInitiatorNode" for the exportFileType parameter, and specify the machineImageId parameter. For a Multi-AZ configuration, also specify the zone parameter. This step is mandatory and ensures that you use the latest configuration files.
CAUTION:
Never edit the extracted configuration file unless instructed to do so in the document. Editing may cause the job to fail.
-
Use the URN of the same VM image as the version of the storage cluster for the machineImageId parameter. For the VM image URN to use, see VM image URN (Cloud for Microsoft Azure) in the VSP One SDS Block and SDS Cloud System Administration.
-
When the storage cluster applies a Multi-AZ configuration, specify the zone parameter for "POST /v1/objects/configuration-file/actions/create/invoke". Obtain the value to specify (the Availability Zone ID) from fault domain information. You can use the following commands to verify fault domain information.
REST API: GET /v1/objects/fault-domains
CLI: fault_domain_list
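The Availability Zone IDs can be pulled from the fault domain information programmatically. A minimal sketch, assuming the response is a list of fault domain objects each carrying a "logicalZone" key (the key name appears in this section; the surrounding layout is an assumption of this example):

```python
def logical_zones(fault_domains):
    """Collect Availability Zone IDs from fault domain information
    returned by GET /v1/objects/fault-domains (CLI: fault_domain_list).

    Assumes each fault domain object carries a "logicalZone" key; the
    list-of-dicts layout is illustrative, not the documented schema.
    """
    return [fd["logicalZone"] for fd in fault_domains]
```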
-
If you add initiator nodes in an Availability Zone different from the Availability Zone of an Azure managed disk, data migration performance is degraded.
-
When you specify "AddInitiatorNode" for the exportFileType parameter for configuration file export, use the REST API because CLI commands are unavailable.
-
The zone parameter to be specified for "POST /v1/objects/configuration-file/actions/create/invoke" is not described in the VSP One SDS Block and SDS Cloud REST API Reference or the VSP One SDS Block and SDS Cloud CLI Reference. See the following note.
Note: When the storage cluster applies a Multi-AZ configuration, specify the Availability Zone ID for the zone parameter.
Use "logicalZone" of fault domain information as the Availability Zone ID.
[REST API]
"zone": "<ID-of-Availability-Zone-in-which-initiator-node-is-to-be-added>"
-
Add an initiator node.
To do so, see Adding storage nodes (Cloud for Microsoft Azure) in the VSP One SDS Block and SDS Cloud System Administration.
Note:
You do not need to perform the following procedure (described in Adding storage nodes (Cloud for Microsoft Azure) of the VSP One SDS Block and SDS Cloud System Administration) because you have already performed configuration file export in step 1.
-
Exporting configuration files (before starting node addition)
-
You can skip the following procedures described in Adding storage nodes (Cloud for Microsoft Azure) of the VSP One SDS Block and SDS Cloud System Administration because storage pool expansion is not performed by initiator node addition.
-
(Before starting storage node addition) Verifying logical capacity of the storage pool and free capacity of the storage pool
-
(After storage node addition completes) Obtaining a list of drives
-
(After storage node addition completes) Obtaining storage pool information
-
- The following resource is created with the fixed resource name.
Resource: Initiator node
Resource name: InitiatorNode
-
Resource protection is not set for an initiator node.
-
Register information about the migration destination server.
Run either of the following commands with the nicknames of the intended compute nodes and the type of the OS running on the compute nodes specified.
Conventions to be followed when setting a nickname:
-
Number of characters: 1 to 229
-
Characters that can be used: for the first character, numbers (0 to 9), uppercase letters (A to Z), lowercase letters (a to z), and the symbols (\ . : @ _). For the second and subsequent characters, a hyphen (-) can also be used.
-
Each compute node must have a unique nickname.
REST API: POST /v1/objects/servers
CLI: server_create
Verify the job ID which is displayed after the command is run.
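The nickname conventions above can be checked before running server_create. The validator below is a sketch of the stated rules only; it is not a product API.

```python
import re

# Nickname rules from the conventions above: 1 to 229 characters;
# the first character may be 0-9, A-Z, a-z, or one of \ . : @ _ ;
# subsequent characters may additionally be a hyphen (-).
_FIRST = r"[0-9A-Za-z\\.:@_]"
_REST = r"[0-9A-Za-z\\.:@_\-]"
_NICKNAME_RE = re.compile(rf"^{_FIRST}{_REST}{{0,228}}$")

def is_valid_nickname(nickname: str) -> bool:
    """Return True if nickname satisfies the documented conventions."""
    return bool(_NICKNAME_RE.match(nickname))
```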
-
Verify the state of the job.
Run either of the following commands with the job ID specified.
REST API: GET /v1/objects/jobs/<jobId>
CLI: job_show
After running the command, if you receive a response indicating "Succeeded" as the state, the job is completed.
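The verify-job-state loop can be automated. In this sketch, get_job_state stands in for whatever code issues GET /v1/objects/jobs/<jobId> (or runs job_show) and extracts the state string; only the "Succeeded" terminal state is taken from this section.

```python
import time

def wait_for_job(get_job_state, job_id, interval=5.0, timeout=600.0):
    """Poll the job until its state becomes "Succeeded".

    get_job_state is a caller-supplied callable that fetches the job
    state string; it is an assumption of this sketch, not a product API.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_job_state(job_id)
        if state == "Succeeded":
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still in state {state!r}")
        time.sleep(interval)
```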
-
Obtain a list of compute nodes and verify that the information about the
intended compute nodes is registered.
REST API: GET /v1/objects/servers
CLI: server_list
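Comparing the intended nicknames against the server_list output can be expressed as a small set difference; the helper below is illustrative.

```python
def missing_nicknames(intended, registered):
    """Return the intended compute node nicknames that are absent from
    the nicknames returned by GET /v1/objects/servers (CLI: server_list).
    An empty result means every intended compute node is registered."""
    return sorted(set(intended) - set(registered))
```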
-
Verify the initiator name (iSCSI name or host NQN) of the applicable
compute node.
For details, see the documentation for the OS used on the compute node.
-
Verify that the initiator name (iSCSI name or host NQN) verified in step 6 is
not the same as the initiator name (iSCSI name or host NQN) of
another compute node.
If they are the same, change the initiator name (iSCSI name or host NQN).
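The uniqueness check in this step amounts to finding duplicates among the collected initiator names (iSCSI names or host NQNs). Gathering the names from each compute node is left to the caller; the helper is illustrative.

```python
def find_duplicate_initiators(initiator_names):
    """Return, sorted, the initiator names that appear more than once.
    Any name returned here must be changed before proceeding."""
    seen, duplicates = set(), set()
    for name in initiator_names:
        if name in seen:
            duplicates.add(name)
        seen.add(name)
    return sorted(duplicates)
```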
-
Verify the ID of the applicable compute node.
If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.
REST API: GET /v1/objects/servers
CLI: server_list
-
Register information about the intended initiator.
Run either of the following commands with the ID of the compute node, the connection protocol for the initiator, and the iSCSI name or host NQN of the initiator specified.
If you use the CLI, you can specify a nickname instead of the compute node ID.
REST API: POST /v1/objects/servers/<id>/hbas
CLI: hba_create
Verify the job ID which is displayed after the command is run.
-
Verify the state of the job.
Run either of the following commands with the job ID specified.
REST API: GET /v1/objects/jobs/<jobId>
CLI: job_show
After running the command, if you receive a response indicating "Succeeded" as the state, the job is completed.
-
Obtain a list of information about initiators and verify that the information
about the intended initiator is registered.
Run either of the following commands with the compute node ID specified.
If you use the CLI, you can specify a nickname instead of the compute node ID.
REST API: GET /v1/objects/servers/<id>/hbas
CLI: hba_list
-
Register path information about the migration destination server.
Verify the ID of the intended compute node. If you use the CLI to specify a compute node by nickname, check the nickname of the compute node.
REST API: GET /v1/objects/servers
CLI: server_list
-
When you specify the hbaId parameter, verify the ID of the initiator for the
intended compute node.
Run either of the following commands with the compute node ID specified.
If you use the CLI, you can specify a nickname instead of the compute node ID.
REST API: GET /v1/objects/servers/<id>/hbas
CLI: hba_list
-
When you specify the portId parameter, verify the ID of the compute port to be
allocated to the intended compute node.
REST API: GET /v1/objects/ports
CLI: port_list
-
Register compute node path information.
Run either of the following commands with the compute node ID specified.
If you use the CLI, you can specify a nickname instead of the compute node ID. Also, you can specify the allocation-destination compute port by using the WWN or iSCSI name instead of the ID.
REST API: POST /v1/objects/servers/<id>/paths
CLI: path_create
Verify the job ID which is displayed after the command is run.
-
Verify the state of the job.
Run either of the following commands with the job ID specified.
REST API: GET /v1/objects/jobs/<jobId>
CLI: job_show
After running the command, if you receive a response indicating "Succeeded" as the state, the job is completed.
-
Obtain a list of path information and verify that the intended path information
is added.
Run either of the following commands with the compute node ID specified.
If you use the CLI, you can specify a nickname instead of the compute node ID.
REST API: GET /v1/objects/servers/<id>/paths
CLI: path_list
-
Attach the migration-target Azure managed disks (31 at maximum) to the Azure
Virtual Machines of the initiator node by using the Azure portal.
The Azure managed disks are mapped within VSP One SDS Block, and external volumes and virtual volumes are created.
CAUTION:
If the subscription to which the migration-target Azure managed disks belong is different from the subscription for which a storage cluster was configured, the migration-target Azure managed disks cannot be directly attached to the Azure Virtual Machines of the initiator node. In this case, copy the Azure managed disks to the subscription for which a storage cluster was configured, and then attach the Azure managed disks.
-
For the maximum number of Azure managed disks that can be attached simultaneously, see Detailed specifications.
If you intend to migrate more Azure managed disks than the maximum number, repeat the migration operation for no more than 31 Azure managed disks per migration (from step 18 of Adding initiator nodes to step 9 of Performing data copy).
-
You can set whether to enable or disable the data reduction function of the migration-destination volumes at the time of data copy, but the same value is applied to all the Azure managed disks attached in this procedure. Therefore, to migrate data with different data reduction settings, repeat the migration operation for each setting (from step 18 of Adding initiator nodes to step 9 of Performing data copy).
-
You can specify the migration-destination fault domain at the time of data copy, but all the Azure managed disks attached in this procedure are migrated to the same fault domain. Therefore, to migrate data to multiple fault domains, repeat the migration operation for each fault domain (from step 18 of Adding initiator nodes to step 9 of Performing data copy).
-
Attach only migration-target Azure managed disks. Before attaching the Azure managed disks, verify that they are correct migration targets. If you have attached Azure managed disks other than the migration-target Azure managed disks, detach the Azure managed disks. Then, according to step 10 in Stopping data migration, delete the external volumes and virtual volumes that correspond to the wrongly attached Azure managed disks.
-
To collect dump files for initiator nodes by using the REST API/CLI, add the option that skips server certificate verification.
REST API: --insecure
CLI: --ignore_certificate_errors
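The repeat-per-batch rule in the cautions above (at most 31 Azure managed disks per pass, with step 18 of Adding initiator nodes through step 9 of Performing data copy repeated per batch) amounts to simple chunking. A sketch:

```python
def migration_batches(disk_names, batch_size=31):
    """Split migration-target Azure managed disks into batches of at most
    31, the maximum that can be attached per migration pass. Each batch
    corresponds to one repetition of the migration operation."""
    return [disk_names[i:i + batch_size]
            for i in range(0, len(disk_names), batch_size)]
```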
-
Wait approximately 20 seconds, and then verify that external volumes and
virtual volumes corresponding to the Azure managed disks that were added in step
18 have been created.
Verify the volume IDs of the newly created external volumes according to Obtaining a list of external volume information. Also, obtain a list of volume information (see Obtaining a list of volumes in the VSP One SDS Block and SDS Cloud System Administration) to verify the nicknames of the virtual volumes that have the same IDs as the volume IDs of the external volumes. Verify that the obtained information matches the disk names of the migration-target Azure managed disks added in step 18.
Note:
The name of the virtual volume is "Volume+<volumeNumber>." This number might start from the 10000s.
-
The maximum number of characters for the nickname of a virtual volume created in step 18 is 64. If the name of the Azure managed disk to be compared has more than 64 characters, verify that the Azure managed disk name matches the serialNumber.
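The comparison described in this note can be sketched as follows: nicknames are capped at 64 characters, so disk names longer than that are checked against serialNumber instead. The helper is illustrative.

```python
def matches_disk(disk_name, volume_nickname, serial_number=None):
    """Check whether a virtual volume corresponds to a migration-target
    Azure managed disk. Nicknames are limited to 64 characters, so for
    longer disk names the comparison falls back to serialNumber, per the
    note above."""
    if len(disk_name) <= 64:
        return volume_nickname == disk_name
    return serial_number == disk_name
```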
-
Set up a path between the virtual volume and the migration destination
server.
Set up a path to the virtual volume created in step 18.
See Allocating volumes to compute nodes (Cloud) in the VSP One SDS Block and SDS Cloud System Administration.
CAUTION:
When setting paths for both a normal volume and a virtual volume on the same destination server, assign the lower-numbered LUN to the normal volume and the higher-numbered LUN to the virtual volume.
If a LUN number higher than that of the virtual volume is assigned to a normal volume, the server might fail to recognize the normal volume as well as the virtual volume, as in the following cases. In such a case, review the LUN assignment.
-
Depending on the type of OS installed on the server, the following might occur.
-
When recognizing LUNs in ascending order of LUN numbers, if the server encounters an unrecognizable LUN, it stops recognizing the subsequent LUNs.
-
When the server cannot recognize the LUN at LUN=0, it does not recognize the subsequent LUNs.
-
The server cannot recognize virtual volumes from the time you perform step 20 until step 4 of Performing data copy is completed.
-
This step is required to perform I/O from the migration destination server to the virtual volumes during data copy. This step cannot be performed after step 2 of Performing data copy has been performed.
-
The settings of a virtual volume cannot be edited. To edit the QoS settings, complete step 7 of Performing data copy before doing so.
Note: Step 20 cannot be performed from the time data copy starts until completion of data copy is confirmed (steps 2 through 7 of Performing data copy).
CAUTION: If an I/O operation is performed on virtual volumes during data copy, the data written by that I/O operation is not copied to the migration destination. Therefore, make sure that I/O operations to the disks to be migrated have stopped before performing data copy.
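The LUN-ordering caution above (every normal volume on a lower-numbered LUN than any virtual volume) can be sketched as a simple sequential assignment; the helper is illustrative, and the volume names are placeholders.

```python
def assign_luns(normal_volumes, virtual_volumes):
    """Assign LUNs from 0 upward, normal volumes first, so that every
    normal volume receives a lower LUN than any virtual volume, per the
    caution above. Returns a {volume_name: lun} mapping."""
    luns = {}
    for lun, name in enumerate(list(normal_volumes) + list(virtual_volumes)):
        luns[name] = lun
    return luns
```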
-