Snapshotting a remote clone enables rapid recovery to a point in time by using Thin Image snapshots, while providing an additional level of protection by creating a full remote clone of the database using Hitachi TrueCopy technology. In synchronous replication, the storage system signals write completion to the host only once the write has been performed on both the primary and secondary volumes.
This setup provides partial protection against a disaster at the local site and full protection at the remote site as the primary and secondary volumes are geographically separated. If necessary, production can be moved quickly to the remote site while the local site is being recovered. Taking snapshots of the remote clone adds the additional benefit of being able to roll back the backup copy to a given restore point from the remote site.
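The essential property here is that an acknowledged write is guaranteed to exist on both volumes; the secondary can never lag behind what the application believes is committed. The following minimal Python sketch illustrates that ordering in the abstract; the class and function names are invented for illustration and are not part of any Hitachi or Protector API.

```python
# Illustrative only: invented names, not a Hitachi or Protector API.
class Volume:
    """A toy block volume that stores writes in memory."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data


def synchronous_write(primary, secondary, lba, data):
    """TrueCopy-style: completion is signalled only after BOTH copies hold the write."""
    primary.write(lba, data)
    secondary.write(lba, data)   # must finish before we acknowledge
    return "acknowledged"        # returning == signalling completion to the host


def asynchronous_write(primary, secondary, lba, data):
    """For contrast: acknowledge after the primary only; the secondary would be
    updated later (not modelled here), so the remote copy may lag the acknowledged state."""
    primary.write(lba, data)
    return "acknowledged"
```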
The data flow and policy are as follows:
Figure. TrueCopy Replication with Local and Remote Thin Image Snapshots Data Flow
Table. Oracle Replication/Snapshot Policy

| Classification Type | Parameters | Value |
| --- | --- | --- |
| Oracle Database | Database Selection | TestDb (all the selected databases must be located on the same Block device) |
| | Backup Mode | Online |

| Operation Type | Parameter | Value | Assigned Nodes |
| --- | --- | --- | --- |
| Replicate | Run Options | N/A (TrueCopy is a continuous replication, so the Run option is ignored) | Secondary Hitachi Block Device |
| | Source Options | Quiesce... | |
| Snapshot (on local device) | Mode | Hardware | Oracle Database |
| | Hardware Type | Hitachi Block | |
| | RPO | 8 Hours | |
| | Retention | 1 Week | |
| | Run Options | Run on Schedule (see synchronization group schedule below) | |
| | Source Options | Quiesce... | |
| Snapshot (on remote device) | Mode | Hardware | Secondary Hitachi Block Device |
| | Hardware Type | Hitachi Block | |
| | RPO | 8 Hours (this must match the local snapshot) | |
| | Retention | 1 Week (this can differ from the local snapshot) | |
| | Run Options | Run on Schedule (see synchronization group schedule below) | |
Table. Synchronization Group Schedule

| Item Type | Parameter | Value | Assigned Operations |
| --- | --- | --- | --- |
| Trigger | N/A (this schedule only defines a synchronization group name for the local and remote snapshots; all parameters are ignored) | N/A | Snapshot (local), Snapshot (remote) |
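As a quick sanity check on the values above: an 8-hour RPO with a 1-week retention means each snapshot chain holds roughly 7 × 24 / 8 = 21 restore points at steady state. The sketch below restates the policy as a plain Python structure and derives that figure; the field names are purely illustrative and this is not the format Protector stores policies in.

```python
# Illustrative restatement of the policy tables above.
# Field names are invented; this is NOT a Protector configuration format.
from datetime import timedelta

policy = {
    "classification": {
        "type": "Oracle Database",
        "databases": ["TestDb"],          # all on the same Block device
        "backup_mode": "Online",
    },
    "operations": [
        {"type": "Replicate", "mode": "Continuous (TrueCopy)",
         "assigned_nodes": "Secondary Hitachi Block Device"},
        {"type": "Snapshot (local)", "rpo": timedelta(hours=8),
         "retention": timedelta(weeks=1), "assigned_nodes": "Oracle Database"},
        {"type": "Snapshot (remote)", "rpo": timedelta(hours=8),
         "retention": timedelta(weeks=1), "assigned_nodes": "Secondary Hitachi Block Device"},
    ],
}

for op in policy["operations"]:
    if "rpo" in op:
        # Snapshots kept at steady state = retention / RPO.
        kept = op["retention"] / op["rpo"]
        print(f'{op["type"]}: about {kept:.0f} restore points retained')
# -> about 21 restore points for an 8-hour RPO and 1-week retention
```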
- Locate the source OS Host node in the Nodes Inventory and check that it is authorized and online.
This node represents the Protector Client installed on the Oracle server.
- Create a new Oracle Database node using the Oracle Application Node Wizard and check that the node is authorized and online.
The Oracle Database node type is grouped under Application in the Node Type Wizard. This node will be used in the dataflow to represent the Oracle Database configuration to be protected.
- Select the OS Host node identified above as the Node running Oracle....
- Optional - Specify the credentials for both the Operating System and Database users.
- Locate the nodes in the Nodes Inventory that will control the primary and secondary Hitachi Block Devices via CMD (Command Device) interfaces and check that they are authorized and online.
These nodes are used by Protector to orchestrate replication of the primary LDEV to the secondary and are
identified as the Proxy Node when creating the primary and
secondary Hitachi Block Device
nodes in the next step. These nodes are known as ISM (Intelligent Storage Manager) nodes. The ISM nodes do not appear in the data flow.
- Create new primary and secondary Hitachi Block Device nodes (unless suitable nodes already exist) using the Block Storage Node Wizard and check that they are authorized and online.
The Hitachi Block Device node type is grouped under
Storage in the Node Type
Wizard. The secondary Hitachi Block Device node appears in the replication data flow as the
destination node. The primary Hitachi Block Device node is represented in the data flow by the Oracle Database node where the primary LDEV is mounted.
- Define a policy as shown in the table above using the Policy Wizard. This policy contains operations for the replication and for the local and remote snapshots.
- Define an Oracle Database classification using the Oracle Database Classification Wizard.
The Oracle Database classification is grouped under Application in the Policy Wizard.
- Define a Replicate operation using the Replicate Operation Wizard.
TrueCopy replication runs as a continuous operation and thus no schedule needs to be defined.
- Define a local Snapshot operation using the Snapshot Operation Wizard.
Thin Image snapshots
run based on the RPO. However, we also want to synchronize the local and remote
snapshots. This is done by defining a trigger schedule that is applied to both the
local and remote snapshot operations.
- Define a Trigger schedule using the Schedule Wizard, accessed by clicking Manage Schedules in the Snapshot Operation Wizard for the local snapshot.
Only the trigger schedule name is required; the parameters are not relevant here since the RPO of the local snapshot dictates when the local and remote snapshot operations are triggered.
- Define a remote Snapshot operation using the Snapshot Operation Wizard.
To synchronize the local and remote snapshots, apply the same trigger schedule to this snapshot operation that was applied to the local snapshot operation.
Note: The local and remote snapshots must have the same RPO, otherwise a rules compiler error will be generated (a minimal pre-flight check of this rule is sketched after this procedure).
- Draw a data flow as shown in the figure above, with the Oracle Database source node connected to the secondary Hitachi Block Device via a Continuous mover.
TrueCopy is a remote replication technology, so the Hitachi Block Device node shown on the data flow is where the destination (SVOL) volume is located.
- Assign the Oracle-Replicate-Snapshot-Snapshot policy to the Oracle Database source node.
- Assign the local Snapshot operation to the Oracle Database source node.
The Block Snapshot Operation Properties
Dialog is displayed.
- Select the Pool by selecting the local Hitachi Block Device node created in the steps above, followed by one of the available Thin Image Pools.
- Leave the remaining snapshot parameters at their default settings, then click OK.
The snapshot icon is now shown superimposed over the source node.
- Assign the remote Snapshot operation to the remote Hitachi Block Device node.
The Block Snapshot Operation Properties
Dialog is displayed.
- Select the Pool by selecting the remote Hitachi Block Device node created in the steps above, followed by one of the available Thin Image Pools.
- Leave the remaining snapshot parameters at their default settings, then click OK.
The snapshot icon is now shown superimposed over the destination node.
- Assign the Replicate operation to the remote Hitachi Block Device node.
The Block Replication Operation Properties
Dialog is displayed.
- Set the replication type to Synchronous Remote Clone, then choose a Pool from one of the available Dynamic Pools. Leave the remaining parameters at their default settings and click OK.
- Compile and activate the data flow, checking carefully that there are no errors or warnings.
- Locate the active data flow in the Monitor Inventory and open its Monitor Details page.
The policy will be invoked automatically to create and then maintain the
replication according to the policy. Snapshot operations will be triggered synchronously
on the source and destination nodes according to the RPO.
- Monitor the active data flow to ensure the policy is operating as expected.
For a healthy data flow you will periodically see:
- Replication and snapshot jobs appearing for the source node in the Jobs area triggered according to the RPO.
- Snapshot jobs appearing for the destination node in the Jobs area synchronized to the local snapshot.
- Information messages appearing in the Logs area below the data flow indicating rules activation, storage handler and sequencer events.
- Attachments to storage handler log events confirming which volumes are being replicated.
- Review the status of the local Hitachi Block Device to ensure snapshots are being created. Review the status of the remote Hitachi Block Device to ensure the replication is being performed and that snapshots are being created.
New local and remote snapshots will appear periodically as dictated by the RPO of the policy. Old snapshots will be removed periodically as dictated by the Retention Period of the policy (a rough review sketch follows this procedure).
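
The note about matching RPOs, together with the shared trigger schedule, can be checked mechanically before compiling the data flow. The sketch below is a hypothetical pre-flight check: it simply restates the two constraints from this procedure against an illustrative pair of snapshot definitions (the field names and schedule name are invented) and is not a substitute for the Protector rules compiler.

```python
# Hypothetical pre-flight check restating two rules from this procedure:
#   1. the local and remote snapshot operations must have the same RPO;
#   2. both must use the same trigger (synchronization group) schedule.
# The field names and the schedule name are invented for illustration.
from datetime import timedelta

local_snapshot = {"name": "Snapshot (local)", "rpo": timedelta(hours=8),
                  "trigger_schedule": "oracle-sync-group"}
remote_snapshot = {"name": "Snapshot (remote)", "rpo": timedelta(hours=8),
                   "trigger_schedule": "oracle-sync-group"}

def check_snapshot_pair(local, remote):
    problems = []
    if local["rpo"] != remote["rpo"]:
        problems.append("RPO mismatch: the rules compiler would report an error")
    if local["trigger_schedule"] != remote["trigger_schedule"]:
        problems.append("Different trigger schedules: snapshots would not be synchronized")
    return problems

issues = check_snapshot_pair(local_snapshot, remote_snapshot)
print("OK" if not issues else "\n".join(issues))
```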
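For the routine review in the final step, the expectations reduce to two simple rules: the newest snapshot should not be older than the RPO (plus some allowance for the job to run), and snapshots older than the retention period should have been removed. The sketch below applies those rules to a list of snapshot timestamps; how the timestamps are collected (UI, reports, or site tooling) is outside the scope of this procedure, and the example data is fabricated to show a healthy chain.

```python
# Rough health rules for the snapshot chains described in the final step:
#   newest snapshot age <= RPO (+ slack); oldest snapshot age <= retention (+ one RPO).
# The timestamps below are fabricated example data.
from datetime import datetime, timedelta

RPO = timedelta(hours=8)
RETENTION = timedelta(weeks=1)
SLACK = timedelta(minutes=30)            # allowance for the snapshot job itself

def review_snapshots(snapshot_times, now):
    newest, oldest = max(snapshot_times), min(snapshot_times)
    if now - newest > RPO + SLACK:
        print("WARNING: newest snapshot is older than the RPO - check the data flow")
    if now - oldest > RETENTION + RPO:
        print("WARNING: snapshots older than the retention period were not removed")
    print(f"{len(snapshot_times)} snapshots; newest is {now - newest} old, oldest is {now - oldest} old")

# A healthy 8-hour cadence over one week yields 21 snapshots:
now = datetime(2024, 1, 8, 9, 0)
review_snapshots([now - RPO * i for i in range(21)], now)
```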