How to create restore points and DR backups with snapshots of a remote synchronous clone

Ops Center Protector Oracle Application Guide

Version: 7.8.x
Part Number: MK-99PRT003-10
It is assumed that the following tasks have been performed:
  • The Oracle Database application has been installed and any Protector prerequisites are met.
  • The Protector Master software has been installed and licensed on a dedicated node.
  • The Protector Client software has been installed on the source node where the Oracle Database application resides.
  • The Protector Client software has been installed on the nodes that will act as proxies for the primary and secondary Hitachi Block storage devices. Note that for a TrueCopy replication, the source and destination LDEVs are located on different storage systems.
  • The storage devices have been set up as per the Protector requirements and prerequisites.
  • Permissions have been granted enabling access to the Protector UI, the required activities, and the participating nodes. In this example all nodes are left in the default resource group, so there is no need to allocate nodes to user-defined resource groups.

Taking snapshots of a remote clone enables rapid recovery to a point in time using Thin Image snapshots, while providing an additional level of protection through a full remote clone of the database created with Hitachi TrueCopy technology. In synchronous replication, the storage system signals each write completion to the host only after the write has been performed on both the primary and secondary volumes.

This setup provides partial protection against a disaster at the local site and full protection at the remote site, because the primary and secondary volumes are geographically separated. If necessary, production can be moved quickly to the remote site while the local site is being recovered. Taking snapshots of the remote clone adds the benefit of being able to roll the backup copy back to a given restore point from the remote site.
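
To make the synchronous write guarantee concrete, the following minimal Python sketch models the acknowledgment ordering. It is illustrative only: the class and function names are invented, and the real logic is implemented inside the storage systems and orchestrated by Protector.

    class Volume:
        """Toy stand-in for an LDEV; real P-VOL/S-VOL pairs are managed by the array."""
        def __init__(self, name):
            self.name = name
            self.blocks = {}

        def write(self, lba, data):
            self.blocks[lba] = data

    def synchronous_write(pvol, svol, lba, data):
        # TrueCopy synchronous ordering: the host sees completion only
        # after BOTH copies hold the data, so the S-VOL never lags.
        pvol.write(lba, data)    # 1. data lands on the primary volume
        svol.write(lba, data)    # 2. data is replicated to the secondary
        return "ack"             # 3. only now is the host acknowledged

    primary, secondary = Volume("P-VOL"), Volume("S-VOL")
    synchronous_write(primary, secondary, lba=0, data=b"redo record")
    assert primary.blocks == secondary.blocks   # volumes stay in lockstep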

The data flow and policy are as follows:

Figure. TrueCopy Replication with Local and Remote Thin Image Snapshots Data Flow
Table. Oracle Replication/Snapshot Policy
Classification Type: Oracle Database
  • Database Selection: TestDb (all the selected databases must be located on the same Block device)
  • Backup Mode: Online

Operation: Replicate (Assigned Node: Secondary Hitachi Block Device)
  • Run Options: N/A (TrueCopy is a continuous replication, so the Run option is ignored)
  • Source Options: Quiesce...

Operation: Snapshot, on local device (Assigned Node: Oracle Database)
  • Mode: Hardware
  • Hardware Type: Hitachi Block
  • RPO: 8 Hours
  • Retention: 1 Week
  • Run Options: Run on Schedule (see the synchronization group schedule below)
  • Source Options: Quiesce...

Operation: Snapshot, on remote device (Assigned Node: Secondary Hitachi Block Device)
  • Mode: Hardware
  • Hardware Type: Hitachi Block
  • RPO: 8 Hours (must match the local snapshot)
  • Retention: 1 Week (can differ from the local snapshot)
  • Run Options: Run on Schedule (see the synchronization group schedule below)

Table. Synchronization Group Schedule

Schedule Item: Trigger
  • Parameters: N/A (this schedule exists only to define a synchronization group name shared by the local and remote snapshots; all trigger parameters are ignored)
  • Applied to Operations: Snapshot (local), Snapshot (remote)
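
As a sanity check on Thin Image pool sizing, the policy values above imply a predictable steady-state snapshot count per device. A minimal Python sketch of the arithmetic, assuming the 8-hour RPO and 1-week retention from the policy table:

    RPO_HOURS = 8            # snapshot interval from the policy
    RETENTION_DAYS = 7       # 1-week retention from the policy

    # At steady state each device holds (retention / RPO) snapshots:
    snapshots_per_device = RETENTION_DAYS * 24 // RPO_HOURS
    print(snapshots_per_device)    # 21 snapshots on each device
    print(24 // RPO_HOURS)         # 3 snapshot jobs per day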

  1. Locate the source OS Host node in the Nodes Inventory and check that it is authorized and online.
    This node represents the Protector Client installed on the Oracle server.
  2. Create a new Oracle Database node using the Oracle Application Node Wizard and check that the node is authorized and online.
    The Oracle Database node type is grouped under Application in the Node Type Wizard. This node will be used in the dataflow to represent the Oracle Database configuration to be protected.
    1. Select the OS Host node identified above as the Node running Oracle....
    2. Optional - Specify the credentials for both the Operating System and Database users (a quick way to pre-verify these credentials is sketched after this procedure).
  3. Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.
    These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and are identified as the Proxy Node when creating the primary and secondary Hitachi Block Device nodes in the next step. These nodes are known as ISM (Intelligent Storage Manager) nodes. The ISM nodes do not appear in the data flow.
  4. Create new primary and secondary Hitachi Block Device nodes (unless they already exist) using the Block Storage Node Wizard and check that they are authorized and online.
    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. The secondary Hitachi Block Device node appears in the replication data flow as the destination node. The primary Hitachi Block Device node is represented in the data flow by the Oracle Database node where the primary LDEV is mounted.
  5. Define a policy as shown in the table above using the Policy Wizard. This policy contains operations for the replication, local and remote snapshots.
    1. Define an Oracle Database classification using the Oracle Database Classification Wizard.
      The Oracle Database classification is grouped under Application in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.
      TrueCopy replication runs as a continuous operation and thus no schedule needs to be defined.
    3. Define a local Snapshot operation using the Snapshot Operation Wizard.
      Thin Image snapshots run based on the RPO. However, we also want to synchronize the local and remote snapshots. This is done by defining a trigger schedule that is applied to both the local and remote snapshot operations.
    4. Define a Trigger schedule using the Schedule Wizard, accessed by clicking Manage Schedules in the Snapshot Operation Wizard for the local snapshot.
      Only the trigger schedule name is required; the parameters are not relevant here since the RPO of the local snapshot dictates when the local and remote snapshot operations are triggered.
    5. Define a remote Snapshot operation using the Snapshot Operation Wizard.
      To synchronize the local and remote snapshots, apply the same trigger schedule to this snapshot operation that was applied to the local snapshot operation.
      Note: The local and remote snapshots must have the same RPO, otherwise a rules compiler error will be generated.
  6. Draw a data flow as shown in the figure above, that shows the Oracle Database source node connected to the secondary Hitachi Block Device via a Continuous mover.
    TrueCopy is a remote replication technology, so the Hitachi Block Device node shown on the data flow is where the destination (SVOL) volume is located.
  7. Assign the Oracle-Replicate-Snapshot-Snapshot policy to the Oracle Database source node.
  8. Assign the local Snapshot operation to the Oracle Database source node.
    The Block Snapshot Operation Properties Dialog is displayed.
  9. Select the Pool by selecting the local Hitachi Block Device node created in the steps above, followed by one of the available Thin Image Pools.
  10. Leave the remaining snapshot parameters at their default settings, then click OK.
    The snapshot icon is now shown superimposed over the source node.
  11. Assign the remote Snapshot operation to the remote Hitachi Block Device node.
    The Block Snapshot Operation Properties Dialog is displayed.
  12. Select the Pool by selecting the remote Hitachi Block Device node created in the steps above, followed by one of the available Thin Image Pools.
  13. Leave the remaining snapshot parameters at their default settings, then click OK.
    The snapshot icon is now shown superimposed over the destination node.
  14. Assign the Replicate operation to the remote Hitachi Block Device node.
    The Block Replication Operation Properties Dialog is displayed.
  15. Set the replication type to Synchronous Remote Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.
  16. Compile and activate the data flow, checking carefully that there are no errors or warnings.
  17. Locate the active data flow in the Monitor Inventory and open its Monitor Details page.
    The policy will be invoked automatically to create and then maintain the replication according to the policy. Snapshot operations will be triggered synchronously on the source and destination nodes according to the RPO.
  18. Monitor the active data flow to ensure the policy is operating as expected.
    For a healthy data flow you will periodically see:
    • Replication and snapshot jobs appearing for the source node in the Jobs area triggered according to the RPO.
    • Snapshot jobs appearing for the destination node in the Jobs area synchronized to the local snapshot.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
  19. Review the status of the local Hitachi Block Device to ensure snapshots are being created. Review the status of the remote Hitachi Block Device to ensure the replication is being performed and that snapshots are being created (see the CCI sketch after this procedure for a way to spot-check this directly on the storage).
    New local and remote snapshots will appear periodically as dictated by the RPO of the policy. Old snapshots will be removed periodically as dictated by the Retention Period of the policy.
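
As an optional pre-flight for step 2, the hypothetical sketch below verifies the database credentials outside Protector. It assumes the cx_Oracle driver is installed; the user, password, and connect string are placeholders for your site.

    import cx_Oracle

    def check_oracle_credentials(user, password, dsn):
        # Connect with the same credentials you plan to enter in the
        # Oracle Application Node Wizard and run a harmless query.
        with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT name, log_mode FROM v$database")
                name, log_mode = cur.fetchone()
                print("Connected to", name, "- log mode:", log_mode)
                if log_mode != "ARCHIVELOG":
                    print("Warning: online (hot) backup requires ARCHIVELOG mode")

    check_oracle_credentials("backup_admin", "secret", "oradb01/TestDb")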
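For the monitoring in steps 18 and 19, you can also spot-check the storage directly with Hitachi CCI (RAID Manager) from the ISM node, assuming CCI is installed and a HORCM instance is configured there. The copy group name, HORCM instance number, and LDEV ID below are placeholders; take the real values from the storage handler log attachments.

    import subprocess

    def run(cmd):
        # Thin wrapper: raise on a non-zero exit and return the command's stdout.
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # TrueCopy pair status: a state of PAIR means the remote clone is in sync.
    print(run(["pairdisplay", "-g", "ORA_TC_GROUP", "-fcx", "-IH0"]))

    # Thin Image snapshots for a given LDEV on the local or remote array.
    print(run(["raidcom", "get", "snapshot", "-ldev_id", "1234", "-IH0"]))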