How to create restore points and full backups with snapshots of a local clone

Ops Center Protector Oracle Application Guide

Version: 7.8.x
Audience: anonymous
Part Number: MK-99PRT003-10
It is assumed that the following tasks have been performed:
  • The Oracle Database application has been installed and any Protector prerequisites are met.
  • The Protector Master software has been installed and licensed on a dedicated node.
  • The Protector Client software has been installed on the source node where the Oracle Database application resides.
  • The Protector Client software has been installed on the node that will act as a proxy for the Hitachi Block storage device. Note that for a ShadowImage replication, the source and destination LDEVs are located on the same device.
  • The storage device has been set up as per the Protector requirements and prerequisites.
  • Permissions have been granted to enable the Protector UI, required activities and participating nodes to be accessed. In this example all nodes will be left in the default resource group, so there is no need to allocate nodes to user defined resource groups.

Snapshot of a local clone combines rapid recovery to a point in time, using Thin Image snapshots, with an additional level of protection: a full clone of the database created using ShadowImage technology. Taking snapshots of the clone adds the further benefit of being able to roll the backup copy back to a given restore point.
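As an illustrative toy model only (this is not Protector's implementation; every class and name here is invented), the relationship between the primary volume, its ShadowImage clone, and the Thin Image snapshots taken of that clone can be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class Volume:
    """Toy stand-in for an LDEV; holds its contents as a plain string."""
    data: str

@dataclass
class CloneWithSnapshots:
    """ShadowImage-style clone of a primary volume, plus Thin Image-style
    point-in-time snapshots taken of the clone (not of the primary)."""
    primary: Volume
    clone: Volume = None
    snapshots: list = field(default_factory=list)

    def refresh_clone(self):
        # Batch ShadowImage-style refresh: re-sync the clone from the primary.
        self.clone = Volume(self.primary.data)

    def snapshot_clone(self):
        # Thin Image-style snapshot of the clone: a restore point for the backup copy.
        self.snapshots.append(self.clone.data)

    def roll_back_clone(self, index):
        # Roll the backup copy back to a chosen restore point.
        self.clone = Volume(self.snapshots[index])

backup = CloneWithSnapshots(primary=Volume("state-1"))
backup.refresh_clone()
backup.snapshot_clone()          # restore point 0 preserves "state-1"
backup.primary.data = "state-2"  # primary keeps changing
backup.refresh_clone()           # clone re-synced to "state-2"
backup.roll_back_clone(0)        # backup copy rolled back to "state-1"
```

Note that rolling back affects only the backup copy; the primary volume is untouched, which is exactly why the clone provides an extra level of protection.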

Because ShadowImage is an in-system replication technology, it does not provide protection against a disaster at the local site, since both the primary and secondary volumes are co-located.

The data flow and policy are as follows:

Figure. ShadowImage Replication with Local Thin Image Snapshots Data Flow
Table. Oracle Replication/Snapshot Policy

Classification: Oracle Database
  • Database Selection: TestDb (all the selected databases must be located on the same Block device)
  • Backup Mode: Online

Operation: Replicate (assigned node: Hitachi Block Device)
  • Run Options: Run on Schedule (see synchronization group schedule below)
  • Source Options: Quiesce...

Operation: Snapshot (assigned node: Oracle Database)
  • Mode: Hardware
  • Hardware Type: Hitachi Block
  • RPO: 8 Hours
  • Retention: 1 Week
  • Run Options: Run on Schedule (see synchronization group schedule below)
  • Source Options: Quiesce...
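For the values in the policy above (8-hour RPO, 1-week retention), a quick back-of-the-envelope check of how many snapshots coexist at steady state:

```python
from datetime import timedelta

rpo = timedelta(hours=8)        # a new snapshot is triggered every 8 hours
retention = timedelta(weeks=1)  # each snapshot is kept for 1 week

# At steady state, every snapshot younger than the retention period still
# exists, so the count is simply retention divided by RPO.
snapshots_retained = retention // rpo
print(snapshots_retained)  # -> 21
```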
Table. Synchronization Group Schedule

Trigger: N/A (this schedule defines a synchronization group name for local replications and snapshots; all parameters are ignored)
Operations: Snapshot, Replication
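A hypothetical sketch of why both operations name the same trigger schedule (the structure and names below are invented for illustration, not Protector internals): grouping operations by a shared schedule name is what lets the snapshot's RPO fire the replication in the same synchronized batch.

```python
from collections import defaultdict

# Each operation declares which trigger schedule (synchronization group)
# it belongs to; the schedule's own parameters are ignored.
operations = [
    {"name": "Snapshot",  "trigger": "Oracle-Sync-Group"},
    {"name": "Replicate", "trigger": "Oracle-Sync-Group"},
]

groups = defaultdict(list)
for op in operations:
    groups[op["trigger"]].append(op["name"])

def fire(trigger_name):
    """When the snapshot's RPO expires, every operation sharing the
    trigger schedule name runs together."""
    return groups[trigger_name]

print(fire("Oracle-Sync-Group"))  # -> ['Snapshot', 'Replicate']
```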

  1. Locate the source OS Host node in the Nodes Inventory and check that it is authorized and online.
    This node represents the Protector Client installed on the Oracle server.
  2. Create a new Oracle Database node using the Oracle Application Node Wizard and check that the node is authorized and online.
    The Oracle Database node type is grouped under Application in the Node Type Wizard. This node will be used in the dataflow to represent the Oracle Database configuration to be protected.
    1. Select the OS Host node identified above as the Node running Oracle....
    2. Optional - Specify the credentials for both the Operating System and Database users.
  3. Locate the node in the Nodes Inventory that will control the Hitachi Block Devices via a CMD (Command Device) interface and check that it is authorized and online.
    This node is used by Protector to orchestrate replication of the LDEV and is identified as the Proxy Node when creating the Hitachi Block Device node in the next step. This node is known as an ISM (Intelligent Storage Manager) node. The ISM node does not appear in the data flow.
  4. Create a new Hitachi Block Device node (unless one already exists) using the Block Storage Node Wizard and check that it is authorized and online.
    The Hitachi Block Device node type is grouped under Storage in the Node Type Wizard. The Hitachi Block Device node appears in the replication data flow as the destination node.
  5. Define a policy as shown in the table above using the Policy Wizard. This policy contains operations for the local replication and snapshot.
    1. Define an Oracle Database classification using the Oracle Database Classification Wizard.
      The Oracle Database classification is grouped under Application in the Policy Wizard.
    2. Define a Replicate operation using the Replicate Operation Wizard.
      ShadowImage replication runs as a batch operation triggered by the RPO of the snapshot.
    3. Define a local Snapshot operation using the Snapshot Operation Wizard.
      Thin Image snapshots run based on the RPO. However, we also want to synchronize each snapshot with the replication. This is done by defining a trigger schedule that is applied to both the snapshot and replication operations.
    4. Define a Trigger schedule using the Schedule Wizard; accessed by clicking on Manage Schedules in the Snapshot Operation Wizard for the local snapshot.
      Only the trigger schedule name is required; the parameters are not relevant here since the RPO of the snapshot dictates when the replication operation is triggered.
  6. Draw a data flow as shown in the figure above, with the Oracle Database source node connected to the Hitachi Block Device via a Batch mover.
  7. Assign the Oracle-Replicate-Snapshot policy to the Oracle Database source node.
  8. Assign the Oracle-Replicate-Snapshot policy's Snapshot operation to the Oracle Database source node.
    The Block Snapshot Operation Properties Dialog is displayed.
  9. Select the Pool by selecting the local Hitachi Block Device node created in the steps above, followed by one of the available Thin Image Pools.
  10. Leave the remaining snapshot parameters at their default settings, then click OK.
    The snapshot icon is now shown superimposed over the source node.
  11. Assign the Replicate operation to the Hitachi Block Device node.
    The Block Replication Operation Properties Dialog is displayed.
  12. Set the replication type to In System Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.
  13. Compile and activate the data flow, checking carefully that there are no errors or warnings.
  14. Locate the active data flow in the Monitor Inventory and open its Monitor Details page.
    The policy will be invoked automatically to create and then maintain the replication according to the policy. Snapshot and replication operations will be triggered synchronously on the source node according to the RPO.
  15. Monitor the active data flow to ensure the policy is operating as expected.
    For a healthy data flow you will periodically see:
    • Replication and snapshot jobs appearing for the source node in the Jobs area triggered according to the RPO.
    • Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
    • Attachments to storage handler log events confirming which volumes are being replicated.
  16. Review the status of the Hitachi Block Device to ensure snapshots and replications are being created.
    New snapshots and a refreshed replication will appear periodically as dictated by the RPO of the policy. Old snapshots will be removed periodically as dictated by the Retention Period of the policy.
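The periodic cleanup described in the last step can be sketched as a simple expiry check (a toy model under the policy's 1-week retention, not Protector's actual housekeeping logic):

```python
from datetime import datetime, timedelta

retention = timedelta(weeks=1)  # Retention Period from the policy

def expired(snapshot_time, now, retention=retention):
    """A snapshot becomes removable once it is strictly older than the
    retention period; younger snapshots are kept as restore points."""
    return now - snapshot_time > retention

now = datetime(2024, 1, 8)
recent = datetime(2024, 1, 1)   # exactly 1 week old -> still kept
stale = datetime(2023, 12, 30)  # more than 1 week old -> removed
print(expired(recent, now), expired(stale, now))  # -> False True
```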