Configuring a storage cluster

Virtual Storage Platform One SDS Cloud for Google Cloud Platform Setup and Configuration

Version
1.18.x
Audience
anonymous
Part Number
MK-24VSP1SDS011-02

Configure a storage cluster of the VSP One SDS Block cloud model for Google Cloud from Google Cloud Marketplace.

CAUTION:

When configuring a storage cluster, a Terraform state file (default.tfstate) is created on Cloud Storage buckets.

Inadvertently changing or deleting this Terraform state file might cause storage cluster configuration to fail. Do not edit or delete the file.

Note:
  • When you configure a storage cluster, a trial license is registered in this storage cluster as a base license of VSP One SDS Block. The trial license is valid for 60 days. You need to register another type of base license for continued use of VSP One SDS Block after the trial license expires. For details about how to install a license, see the VSP One SDS Block and SDS Cloud System Administration.

  • For details about the license, see License management overview (Cloud) in the VSP One SDS Block and SDS Cloud System Administration.

  • When you configure a storage cluster, the label you specified for billingCode is assigned. For details, see About labels.

  1. Access the Hitachi Virtual Storage Platform One SDS Block product page of Google Cloud Marketplace.

    Follow the instructions on the pages to proceed.

  2. Click the Download button to download the configuration file (.zip).
  3. Log in with a Google account that meets the conditions described in Google account permissions.
  4. Log in to the VM (for example, Cloud Shell) that allows for running gcloud CLI and Terraform.
  5. Place the configuration file (.zip) in an empty directory on the VM and unzip it.

    To copy a file from a bucket to the VM, run the following command:

    gsutil cp gs://[bucket-name]/[file-path] .
    Note:

    In the succeeding steps, run the terraform commands in the directory that contains the configuration files. This directory is called the Terraform working directory.

    The Terraform working directory contains Terraform configuration files, packages, and storage cluster control information.

    If you create multiple storage clusters, create an empty working directory for each storage cluster.

  6. Among the extracted configuration files, edit the following three files: backend.auto.tfvars, terraform.tfvars, and main.tf.

    Edit the following parameters for each file.

    backend.auto.tfvars

    bucket

    Name of the Cloud Storage bucket in which the Terraform state file is to be stored.

    Setting requirements are as follows:

    • Do not add a slash ("/") at the end.

    prefix

    Name of the Cloud Storage folder in which the Terraform state file is to be stored. Specify a folder created beforehand.

    When creating multiple storage clusters, specify an empty folder for each storage cluster.

    Setting requirements are as follows:

    • Do not add a slash ("/") at the end.

    • No file exists in the specified folder.
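    As an illustrative sketch of the two settings above (the bucket and folder names here are hypothetical), backend.auto.tfvars might look like:

    ```hcl
    bucket = "my-tfstate-bucket"   # hypothetical Cloud Storage bucket; no trailing slash
    prefix = "clusters/cluster01"  # hypothetical empty folder created beforehand; no trailing slash
    ```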

    terraform.tfvars


    clusterName

    Name of a VSP One SDS Block storage cluster.

    Setting requirements are as follows:

    • The number of characters must be from 1 to 30.

    • Only lowercase alphabetic characters, numbers, and hyphens ("-") are allowed.

    • The first character must be a lowercase alphabetic character.
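    The clusterName rules above can be expressed as a short validation sketch (a hypothetical helper for pre-checking your value, not part of the product):

    ```python
    import re

    # clusterName rules: 1 to 30 characters, lowercase letters, digits, and "-",
    # and the first character must be a lowercase letter.
    CLUSTER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,29}$")

    def is_valid_cluster_name(name: str) -> bool:
        """Return True if name satisfies the clusterName setting requirements."""
        return bool(CLUSTER_NAME_RE.fullmatch(name))
    ```

    For example, "sds-cluster01" passes, while "1cluster" (first character is a digit) does not.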

    gsUri

    URI (starting with "gs://") of a Cloud Storage folder. During node addition, node replacement, or VSP One SDS Block setup, the system outputs the installation status. If any errors occur, error message files (including related dump log files) are also generated. Specify the folder created beforehand.

    Do not delete this folder because it is continuously used after VSP One SDS Block setup.

    When creating multiple storage clusters, specify an empty folder for each storage cluster.

    Setting requirements are as follows:

    • Add a slash ("/") at the end.

    timeZone

    Time zone to be set for VSP One SDS Block. Specify a time zone that can be configured in the OS (for example, "Asia/Tokyo" or "America/Los_Angeles").

    The time zone is used when contacting the support center during an incident.

    For details on configurable time zones, see the VSP One SDS Block and SDS Cloud REST API Reference or the VSP One SDS Block and SDS Cloud CLI Reference.

    project_id

    ID of the project in which the storage cluster is to be configured.

    Specify the project ID, not the project name.

    region

    Name of the region in which the storage cluster is to be configured.

    isMultiAz

    Boolean value that determines whether the storage cluster uses a Single-Zone or Multi-Zone configuration.

    Setting requirements are as follows:

    • For Single-Zone configuration: false

    • For Multi-Zone configuration: true

    zones

    Name of the zone in which the storage cluster is to be configured.

    Setting requirements are as follows:

    • For Single-Zone configuration: Specify one.

    • For Multi-Zone configuration: Specify three.

    clusterStructure

    User data protection method.

    Setting requirements are as follows:

    • Specify either HPEC 4D+2P or Mirroring Duplication.

    controlVpcName

    Name of the VPC for the control network.

    internodeVpcName

    Name of the VPC for the internode network.

    computeVpcName

    Name of the VPC for the compute network.

    controlSubnetName

    Name of the subnet for the control network.

    internodeSubnetName

    Name of the subnet for the internode network.

    computeSubnetName

    Name of the subnet for the compute network.

    computeIPv6Enable

    Whether to set an IPv6 address for the compute network.

    Setting requirements are as follows:

    • When an IPv6 address is used: true

    • When an IPv6 address is not used: false

    computePortProtocol

    Network protocol to be used by the compute port.

    Setting requirements are as follows:

    • Specify either "iSCSI" or "NVMe/TCP."

    storageNodeMachineType

    Machine type of a storage node VM.

    Setting requirements are as follows:

    • Specify either n4-highmem-32 or n4-standard-32.

    numberOfNodes

    Number of storage nodes for a storage cluster.

    Setting requirements are as follows:

    • For Single-Zone configuration with HPEC 4D+2P: 6, 12, or 18

    • For Single-Zone configuration with Mirroring Duplication: 3, 6, 9, 12, 15, or 18

    • For Multi-Zone configuration with Mirroring Duplication: 2, 4, 6, 8, 10, 12, 14, 16, or 18
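    The allowed numberOfNodes values can be summarized in a small sketch (a hypothetical pre-check helper; the value sets are taken from the requirements above):

    ```python
    # Allowed storage node counts per (zone configuration, clusterStructure),
    # as listed in the numberOfNodes setting requirements.
    ALLOWED_NODES = {
        ("single", "HPEC 4D+2P"): {6, 12, 18},
        ("single", "Mirroring Duplication"): {3, 6, 9, 12, 15, 18},
        ("multi", "Mirroring Duplication"): {2, 4, 6, 8, 10, 12, 14, 16, 18},
    }

    def is_valid_node_count(zone_config: str, cluster_structure: str, n: int) -> bool:
        """Return True if n is a supported node count for the given combination."""
        return n in ALLOWED_NODES.get((zone_config, cluster_structure), set())
    ```

    Note that there is no valid combination of Multi-Zone configuration with HPEC 4D+2P, so the helper rejects it.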

    numberOfDrives

    Number of user data drives per node.

    Setting requirements are as follows:

    • This number must be from 6 to 24.

    driveSize

    User data drive size [GiB].

    Setting requirements are as follows:

    • For HPEC 4D+2P: 1480, 2661, 3843, 5025, 6650

    • For Mirroring Duplication: 1579, 3155, 4730, 6405, 8473

    diskEncryptionKey

    Name of an encryption key for disk encryption.

    Setting requirements are as follows:

    • When Google default encryption is used: No entry is required ("").

    • When an encryption key created with Cloud KMS (Cloud Key Management Service) is used: Specify the resource path of the created key.

    • To create and use a CMEK, additional roles are required.

      For details, see Data encryption.

    billingCode

    Label to be assigned to each resource.

    Setting requirements are as follows:

    • The number of characters must be 0 to 63.

    • You can specify lowercase alphabetic characters, numbers, "-", and "_".

    For conditions other than those indicated in this table, see Terraform and Google Cloud documentation.

    Note:
    • For details about Google Cloud Hyperdisk and Hyperdisk Balanced, see the following website:

      https://cloud.google.com/compute/docs/disks/hyperdisks

    • The character string you set for clusterName is to be used as a name prefix for the following resources.

      The resources and their names are as follows:

      • Storage node VM: <namePrefix>-snXX (XX is the node number)

      • Tiebreaker node VM: <namePrefix>-tb

      • System disk: <namePrefix>-snXX (XX is the node number)

      • Tiebreaker node system disk: <namePrefix>-tb

      • User data disk: <namePrefix>-snXX-userdatadiskYY (XX is the node number; YY is the drive number)

      • Load balancer: <namePrefix>-lb

      • Service account: <namePrefix>-serviceaccount

      • IAM role for a service account: <namePrefix> Custom Role

      • Resource policy: <namePrefix>-resource-policyXX (XX is 1 to 6)

      • Initiator node VM: <namePrefix>-initiatornode (created only when adding initiator nodes)

      • Initiator node system disk: <namePrefix>-initiatornode (created only when adding initiator nodes)

      • Address reserved for a compute IPv6 address: <namePrefix>-snXX-compute-ipv6 (XX is the node number; created only when using an IPv6 address)
    • Some of the IPv4 addresses available in each subnet that you specified are automatically allocated to the storage nodes or the load balancer.

      Note that if you set computeIPv6Enable to true (IPv6 is also used for the compute network), both IPv4 and IPv6 addresses are automatically allocated.
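    Pulling the parameters above together, a terraform.tfvars for a Single-Zone cluster with Mirroring Duplication might look like the following sketch. All names, the project ID, the region, and the zone are hypothetical, and the exact strings accepted for values such as clusterStructure follow the setting requirements in the table:

    ```hcl
    clusterName            = "sds-cluster01"          # hypothetical
    gsUri                  = "gs://my-bucket/sds01/"  # hypothetical; trailing slash required
    timeZone               = "Asia/Tokyo"
    project_id             = "my-project-id"          # hypothetical project ID (not the project name)
    region                 = "asia-northeast1"        # hypothetical
    isMultiAz              = false                    # Single-Zone configuration
    zones                  = ["asia-northeast1-a"]    # one zone for Single-Zone
    clusterStructure       = "Mirroring Duplication"
    controlVpcName         = "control-vpc"            # hypothetical
    internodeVpcName       = "internode-vpc"          # hypothetical
    computeVpcName         = "compute-vpc"            # hypothetical
    controlSubnetName      = "control-subnet"         # hypothetical
    internodeSubnetName    = "internode-subnet"       # hypothetical
    computeSubnetName      = "compute-subnet"         # hypothetical
    computeIPv6Enable      = false
    computePortProtocol    = "iSCSI"
    storageNodeMachineType = "n4-highmem-32"
    numberOfNodes          = 3                        # allowed for Single-Zone + Mirroring Duplication
    numberOfDrives         = 6
    driveSize              = 1579                     # allowed for Mirroring Duplication
    diskEncryptionKey      = ""                       # Google default encryption
    billingCode            = "dev"                    # hypothetical label
    ```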

    main.tf

    Uncomment the following four lines so that a Terraform state file (default.tfstate) is created on Cloud Storage buckets.

    #terraform {
    #  backend "gcs" {
    #  }
    #}
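    After uncommenting, the block in main.tf reads:

    ```hcl
    terraform {
      backend "gcs" {
      }
    }
    ```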
  7. Run the following command in the Terraform working directory.
    terraform init -backend-config="backend.auto.tfvars"

    Verify that "Terraform has been successfully initialized!" is displayed.

    Also, verify that a file named default.tfstate has been created in the Cloud Storage folder that you specified in backend.auto.tfvars.

  8. Confirm the changes accompanying creation of a storage cluster based on the information of the terraform.tfvars file.

    Run the following command to confirm the changes.

    terraform plan

    If "Changes to Outputs" appears after running the command, there is no problem.

    If entry is prompted or an error is output in the middle of processing, review the configuration file.

    Note:

    Because the command can produce more output lines than fit on one screen, the beginning of the output might not be visible. To view the full output, redirect it to a text file, for example.

  9. If there is no problem as a result of confirmation with the terraform plan command, run the following command to create a storage cluster.
    terraform apply

    When "Enter a value:" is displayed after running the command, enter yes.

    The Outputs section is then displayed. After the terraform apply command completes, storage cluster creation starts.

    If a file named Completed is created in the following path under the folder that you specified for gsUri, storage cluster creation is complete.

    <gsUri>/Installation_<YYYYMMDD>_<hhmmss>_<clusterName>/Completed

    <YYYYMMDD>_<hhmmss> indicates the UTC time at which installation started on the storage node.

    <clusterName> contains the storage cluster name set for clusterName in the terraform.tfvars file.

    CAUTION:

    <YYYYMMDD>_<hhmmss> differs from the time at which you run the terraform apply command.

    As a guide, it is approximately three minutes after the terraform apply command is run.

    Note:
    • If the terraform apply command results in failure, creation of the storage cluster might have resulted in unnecessary resources. Delete such unnecessary resources according to Deleting storage clusters in the VSP One SDS Block and SDS Cloud System Administration, verify that there is no problem with the configuration files and other items, and then run the terraform apply command again.

    • Either of the following files is created under the directory specified for gsUri according to the installation status.

      File path and corresponding installation status:

      <gsUri>/Installation_<YYYYMMDD>_<hhmmss>_<clusterName>/InProgress: Installation is in progress.
      <gsUri>/Installation_<YYYYMMDD>_<hhmmss>_<clusterName>/Completed: Installation is successful.
      <gsUri>/Installation_<YYYYMMDD>_<hhmmss>_<clusterName>/Failed: Installation is unsuccessful.
    • The result of the installation (successful or unsuccessful) is recorded in the InstallationResult.txt file created in the directory of "Installation_<YYYYMMDD>_<hhmmss>_<clusterName>."

    • If installation results in failure, see Identifying the failure in the VSP One SDS Block and SDS Cloud Troubleshooting Reference.

    Tip:

    The time required for storage cluster creation depends on the storage node configuration. It takes approximately 30 minutes for a configuration of three or six storage nodes.

  10. Confirm the VM instance that forms the storage cluster.

    Verify that the VM instance name and zone you specified for the parameters are correct.

    The VM instance name has the form "<clusterName>-snXX", where <clusterName> is the value of clusterName in the terraform.tfvars file and XX is the node number (up to the numberOfNodes value).

    For Multi-Zone configuration, a tiebreaker node is created with the name "<clusterName>-tb."

  11. Confirm the load balancer in Network Services > Load balancing.

    Select the created load balancer, and on the Backend tab of Load balancing, check the backend instance groups.

    If the status corresponds to the following, there is no problem.

    For Single-Zone:

    • The number of instance groups is 1.

    • In "Healthy", "1/3" must be indicated for Mirroring Duplication and "1/5" must be indicated for HPEC 4D+2P.

    For Multi-Zone:

    • The number of instance groups is 3.

    • In "Healthy", "1/1" is indicated for one of the three instance groups and "0/1" is indicated for the other two instance groups.

    The load balancer name has the form "<clusterName>-lb", where <clusterName> is the value of clusterName in the terraform.tfvars file.

    CAUTION:
    • When you configure a storage cluster, the following privileges are automatically granted to the created VM. Do not change or delete the privileges.

      • compute.addresses.get

      • compute.addresses.list

      • compute.addresses.use

      • compute.addresses.useInternal

      • compute.addresses.createInternal

      • compute.backendBuckets.get

      • compute.backendBuckets.list

      • compute.disks.create

      • compute.disks.delete

      • compute.disks.get

      • compute.disks.list

      • compute.disks.use

      • compute.firewalls.create

      • compute.firewalls.delete

      • compute.firewalls.get

      • compute.firewalls.list

      • compute.forwardingRules.get

      • compute.globalOperations.get

      • compute.healthChecks.get

      • compute.images.get

      • compute.images.useReadOnly

      • compute.instanceGroupManagers.create

      • compute.instanceGroups.get

      • compute.instanceTemplates.create

      • compute.instances.addAccessConfig

      • compute.instances.addResourcePolicies

      • compute.instances.create

      • compute.instances.delete

      • compute.instances.deleteAccessConfig

      • compute.instances.get

      • compute.instances.getGuestAttributes

      • compute.instances.list

      • compute.instances.setDeletionProtection

      • compute.instances.setMachineType

      • compute.instances.setMetadata

      • compute.instances.setServiceAccount

      • compute.instances.setTags

      • compute.instances.start

      • compute.instances.stop

      • compute.instances.update

      • compute.instances.updateNetworkInterface

      • compute.instanceGroups.list

      • compute.instanceGroupManagers.list

      • compute.networks.get

      • compute.networks.list

      • compute.networks.updatePolicy

      • compute.projects.get

      • compute.projects.setCommonInstanceMetadata

      • compute.regionBackendServices.get

      • compute.resourcePolicies.create

      • compute.resourcePolicies.delete

      • compute.resourcePolicies.get

      • compute.resourcePolicies.list

      • compute.subnetworks.get

      • compute.subnetworks.list

      • compute.subnetworks.use

      • compute.subnetworks.useExternalIp

      • compute.zoneOperations.get

      • compute.zones.get

      • compute.zones.list

      • iam.roles.get

      • iam.roles.list

      • iam.serviceAccounts.actAs

      • iam.serviceAccounts.create

      • iam.serviceAccounts.get

      • iam.serviceAccounts.list

      • iap.tunnelInstances.accessViaIAP

      • resourcemanager.projects.getIamPolicy

      • storage.buckets.create

      • storage.buckets.enableObjectRetention

      • storage.buckets.list

      • storage.objects.create

      • storage.objects.delete

      • storage.objects.get

      • storage.objects.list

    • To uninstall a storage cluster, see Removing the storage software in the VSP One SDS Block and SDS Cloud System Administration.

  12. Confirm the load balancer's IP address from the InstallationResult.txt file or the load balancer's configuration.

    The load balancer's IP address is used as the destination for accessing the storage cluster via VSP One SDS Block Administrator, the CLI, or the REST API.

    This completes configuration of the storage cluster.