Data Protection Power Guide - ONTAP 9.0
ONTAP® 9
Eighth edition (September 2021)
© Copyright Lenovo 2018, 2021.
LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services
Administration (GSA) contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No.
GS-35F-05925
If you want more information about the SnapMirror Business Continuity feature added in ONTAP 9.8, which
provides Zero Recovery Time Objective (Zero RTO) or Transparent Application Failover (TAF) to enable
automatic failover of business-critical applications in SAN environments, see the following documentation:
Lenovo Documentation: SnapMirror Business Continuity
If you want to quickly perform volume backup and recovery using best practices, you should choose among
the following documentation:
• Volume disaster recovery preparation
Volume disaster recovery express preparation
• Volume disaster recovery
Volume disaster express recovery
• Volume backup using SnapVault
Volume express backup using SnapVault
• Volume restore using SnapVault
Volume restore express management using SnapVault
If you require additional configuration or conceptual information, you should choose among the following
documentation:
• ONTAP conceptual background
ONTAP concepts
• Synchronous disaster recovery in a MetroCluster configuration
MetroCluster management and disaster recovery
• Data protection for WORM files in SnapLock volumes
Archive and compliance using SnapLock technology
• Data protection using tape technology
– NDMP express configuration
– Data protection using tape backup
• Command reference
You can use a Snapshot copy to restore the entire contents of a volume, or to recover individual files or
LUNs. Snapshot copies are stored in the directory .snapshot on the volume.
In ONTAP 9.5 and later, a volume can contain up to 1023 Snapshot copies.
The default policy for a volume automatically creates Snapshot copies on the following schedule, with the
oldest Snapshot copies deleted to make room for newer copies:
• A maximum of six hourly Snapshot copies taken five minutes past the hour.
• A maximum of two daily Snapshot copies taken Monday through Saturday at 10 minutes after midnight.
• A maximum of two weekly Snapshot copies taken every Sunday at 15 minutes after midnight.
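You can display the schedules and retention counts that a Snapshot policy defines. The following sketch shows the command for the default policy:
cluster1::> volume snapshot policy show -policy default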
Unless you specify a Snapshot policy when you create a volume, the volume inherits the Snapshot policy
associated with its containing storage virtual machine (SVM).
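For example, the following sketch assigns a specific policy at volume creation instead of inheriting the SVM default (the volume, aggregate, and policy names are illustrative):
cluster1::> volume create -vserver vs0 -volume vol1 -aggregate aggr1 -size 10GB -snapshot-policy snap_policy_daily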
You might back up a heavily used file system like a database every hour, while you back up rarely used files
once a day. Even for a database, you will typically run a full backup once or twice a day, while backing up
transaction logs every hour.
Other factors are the importance of the files to your organization, your Service Level Agreement (SLA), your
Recovery Point Objective (RPO), and your Recovery Time Objective (RTO). Generally speaking, you should
retain only as many Snapshot copies as necessary.
By default, ONTAP forms the names of Snapshot copies by appending a timestamp to the job schedule
name.
Example
The following example creates a job schedule named myweekly that runs on Saturdays at 3:00 a.m.:
cluster1::> job schedule cron create -name myweekly -dayofweek "Saturday" -hour 3 -minute 0
ONTAP appends the timestamp to the job schedule name, as the following listing shows:
daily.2020-05-14_0013/ hourly.2020-05-15_1106/
daily.2020-05-15_0012/ hourly.2020-05-15_1206/
hourly.2020-05-15_1006/ hourly.2020-05-15_1306/
You can substitute a prefix for the job schedule name if you prefer.
The snapmirror-label option is for SnapMirror replication. For more information, see “Defining a rule for a
policy” on page 30.
Example
The following example creates a Snapshot policy named snap_policy_daily that runs on a daily
schedule. The policy has a maximum of five Snapshot copies, each with the name daily.timestamp
and the SnapMirror label daily:
cluster1::> volume snapshot policy create -vserver vs0 -policy snap_policy_daily -schedule1 daily -count1 5
-snapmirror-label1 daily
This means that the rate of change of the file system is the key factor in determining the amount of disk
space used by Snapshot copies. No matter how many Snapshot copies you create, they will not consume
disk space if the active file system has not changed.
A FlexVol volume containing database transaction logs, for example, might have a Snapshot copy reserve as
large as 20% to account for its greater rate of change. Not only will you want to create more Snapshot copies
to capture the more frequent updates to the database, you will also want to have a larger Snapshot copy
reserve to handle the additional disk space the Snapshot copies consume.
A Snapshot copy consists of pointers to blocks rather than copies of blocks. You can think of a pointer as a
“claim” on a block: ONTAP “holds” the block until the Snapshot copy is deleted.
How deleting protected files can lead to less free space than expected
A Snapshot copy points to a block even after you delete the file that used the block. This explains why an
exhausted Snapshot copy reserve might lead to the counter-intuitive result in which deleting an entire file
system results in less space being available than the file system occupied.
Consider the following example. Before deleting any files, the df command output is as follows:
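Filesystem kbytes used avail capacity
/vol/vol0/ 3000000 3000000 0 100%
/vol/vol0/.snapshot 1000000 500000 500000 50%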
After deleting the entire file system and making a Snapshot copy of the volume, the df command generates
the following output:
Filesystem kbytes used avail capacity
/vol/vol0/ 3000000 2500000 500000 83%
/vol/vol0/.snapshot 1000000 3500000 0 350%
As the output shows, the entire 3 GB formerly used by the active file system is now being used by Snapshot
copies, in addition to the 0.5 GB used before the deletion.
Because the disk space used by the Snapshot copies now exceeds the Snapshot copy reserve, the overflow
of 2.5 GB “spills” into the space reserved for active files, leaving you with 0.5 GB free space for files where
you might reasonably have expected 3 GB.
Example
The following example sets the Snapshot copy reserve for vol1 to 10 percent:
cluster1::> volume modify -vserver vs0 -volume vol1 -percent-snapshot-space 10
LUN and file clones are deleted when there are no more Snapshot copies to be deleted.
Example
The following example autodeletes Snapshot copies for vol1 when the Snapshot copy reserve is
exhausted:
cluster1::> volume snapshot autodelete modify -vserver vs0 -volume vol1 -enabled true -trigger snap_reserve
Every directory in the file system contains a subdirectory named .snapshot accessible to NFS and CIFS
users. The .snapshot subdirectory contains subdirectories corresponding to the Snapshot copies of the
volume:
$ ls .snapshot
daily.2020-05-14_0013/ hourly.2020-05-15_1106/
daily.2020-05-15_0012/ hourly.2020-05-15_1206/
hourly.2020-05-15_1006/ hourly.2020-05-15_1306/
Each subdirectory contains the files referenced by the Snapshot copy. If users accidentally delete or
overwrite a file, they can restore the file to the parent read-write directory by copying the file from the
Snapshot subdirectory to the read-write directory:
$ ls my.txt
ls: my.txt: No such file or directory
$ ls .snapshot
daily.2020-05-14_0013/ hourly.2020-05-15_1106/
daily.2020-05-15_0012/ hourly.2020-05-15_1206/
hourly.2020-05-15_1006/ hourly.2020-05-15_1306/
$ ls .snapshot/hourly.2020-05-15_1306/my.txt
my.txt
$ cp .snapshot/hourly.2020-05-15_1306/my.txt .
$ ls my.txt
my.txt
If you are restoring an existing LUN, a LUN clone is created and backed up in the form of a Snapshot copy.
During the restore operation, you can read from and write to the LUN.
Example
The following example restores the file myfile.txt from the Snapshot copy daily.2020-01-25_0010:
cluster1::> volume snapshot restore-file -vserver vs0 -volume vol1 -snapshot daily.2020-01-25_0010 -path /myfile.txt
The starting byte offset and byte count must be multiples of 4,096.
Example
The following example restores the first 4,096 bytes of the file myfile.txt :
cluster1::> volume snapshot partial-restore-file -vserver vs0 -volume vol1 -snapshot daily.2020-01-25_0010 -path
/myfile.txt -start-byte 0 -byte-count 4096
If the volume has SnapMirror relationships, manually replicate all mirror copies of the volume immediately
after you restore from a Snapshot copy. Not doing so can result in unusable mirror copies that must be
deleted and recreated.
Example
The following example restores the contents of the volume vol1 from the Snapshot copy daily.2020-01-25_0010:
cluster1::> volume snapshot restore -vserver vs0 -volume vol1 -snapshot daily.2020-01-25_0010
If the primary site is still available to serve data, you can simply transfer any needed data back to it, and not
serve clients from the mirror at all. As the failover use case implies, the controllers on the secondary system
should be equivalent or nearly equivalent to the controllers on the primary system to serve data efficiently
from mirrored storage.
You can also use SnapMirror for two special data protection applications:
A baseline transfer under the default SnapMirror policy MirrorAllSnapshots involves the following steps:
• Make a Snapshot copy of the source volume.
• Transfer the Snapshot copy and all the data blocks it references to the destination volume.
• Transfer the remaining, less recent Snapshot copies on the source volume to the destination volume for
use in case the “active” mirror is corrupted.
At each update under the MirrorAllSnapshots policy, SnapMirror creates a Snapshot copy of the source
volume and transfers that Snapshot copy and any Snapshot copies that have been made since the last
update. In the following output from the snapmirror policy show command for the MirrorAllSnapshots
policy, note the following:
• Create Snapshot is “true”, indicating that MirrorAllSnapshots creates a Snapshot copy when SnapMirror
updates the relationship.
• MirrorAllSnapshots has rules “sm_created” and “all_source_snapshots”, indicating that both the Snapshot
copy created by SnapMirror and any Snapshot copies that have been made since the last update are
transferred when SnapMirror updates the relationship.
cluster_dst::> snapmirror policy show -policy MirrorAllSnapshots -instance
Vserver: vs0
SnapMirror Policy Name: MirrorAllSnapshots
SnapMirror Policy Type: async-mirror
Policy Owner: cluster-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Network Compression Enabled: false
Create Snapshot: true
Comment: Asynchronous SnapMirror policy for mirroring all snapshots
and the latest active file system.
Total Number of Rules: 2
Total Keep: 2
Rules: SnapMirror Label Keep Preserve Warn Schedule Prefix
---------------- ---- -------- ---- -------- ------
sm_created 1 false 0 - -
all_source_snapshots 1 false 0 - -
MirrorLatest policy
The preconfigured MirrorLatest policy works exactly the same way as MirrorAllSnapshots, except that only the
Snapshot copy created by SnapMirror is transferred at initialization and update.
This functionality addresses the regulatory and national mandates for synchronous replication in financial,
healthcare, and other regulated industries where zero data loss is required.
The limit on the number of SnapMirror Synchronous replication operations per node depends on the
controller model.
Platform Number of SnapMirror Synchronous operations that are allowed per node
AFA 40
Hybrid 20
Supported features
In ONTAP 9.5 and later, SnapMirror Synchronous technology supports the NFSv3, FC, and iSCSI protocols
over all networks for which the latency does not exceed 10ms.
The following features are supported for SnapMirror Synchronous technology in ONTAP 9.7:
• Replication of application-created Snapshot copies
Only Snapshot copies with a SnapMirror label that matches the rule associated with the Sync or
StrictSync policy are replicated. Scheduled Snapshot copies created using a Snapshot policy are not replicated.
• FC-NVMe
• LUN clones and NVMe namespace clones
LUN clones backed by application-created Snapshot copies are also supported.
The following features are supported for SnapMirror Synchronous technology in ONTAP 9.6, provided all
nodes in the source and destination cluster are running ONTAP 9.6:
• NFSv4.0 and NFSv4.1
• SMB 2.0 or later
• Mixed protocol access (NFSv3 and SMB/CIFS)
• Antivirus on the primary volume of the SnapMirror Synchronous relationship
• Hard or soft quotas on the primary volume of the SnapMirror Synchronous relationship
The quota rules are not replicated to the destination; therefore, the quota database is not replicated to the
destination.
• FPolicy on the primary volume of the SnapMirror Synchronous relationship
• SnapMirror Synchronous mirror-mirror cascade
The relationship from the destination volume of the SnapMirror Synchronous relationship must be an
asynchronous SnapMirror relationship.
• Timestamp parity between source and destination volumes for NAS
Unsupported features
The following features are not supported with Synchronous SnapMirror relationships:
• SVM DR
• Mixed SAN and NAS access
The primary volume of a SnapMirror Synchronous relationship can either serve NAS data or SAN data.
Both SAN and NAS access from the primary volume of a SnapMirror Synchronous relationship is not
supported.
• Mixed SAN and NVMe access
LUNs and NVMe namespaces are not supported on the same volume or SVM.
• SnapLock volumes
• FlexGroup volumes
• FlexCache volumes
• SnapRestore
• DP_Optimized (DPO) systems
• Tape backup or restore using dump and SMTape on the destination volume
• Tape based restore to the source volume
• Throughput floor (QoS Min) for source volumes
• In a fan-out configuration, only one relationship can be a SnapMirror Synchronous relationship; all the
other relationships from the source volume must be asynchronous SnapMirror relationships.
• Global throttling
Modes of operation
SnapMirror Synchronous has two modes of operation based on the type of the SnapMirror policy used:
Sync mode
In Sync mode, an I/O to primary storage is first replicated to secondary storage. Then the I/O is written
to primary storage, and acknowledgment is sent to the application that issued the I/O. If the write to the
secondary storage is not completed for any reason, the application is allowed to continue writing to the
primary storage. When the error condition is corrected, SnapMirror Synchronous technology
automatically resynchronizes with the secondary storage and resumes replicating from primary storage
to secondary storage in Synchronous mode.
In Sync mode, RPO=0 and RTO is very low until a secondary replication failure occurs. At that point,
RPO and RTO become indeterminate, but equal the time to repair the issue that caused secondary
replication to fail and for the resync to complete.
StrictSync mode
SnapMirror Synchronous can optionally operate in StrictSync mode. If the write to the secondary
storage is not completed for any reason, the application I/O fails, thereby ensuring that the primary and
secondary storage are identical. Application I/O to the primary resumes only after the relationship
returns to the InSync status.
Relationship status
The status of a SnapMirror Synchronous relationship is always in the InSync status during normal operation.
If the SnapMirror transfer fails for any reason, the destination is not in sync with the source and can go to the
OutofSync status.
For SnapMirror Synchronous relationships, the system automatically checks the relationship status (InSync
or OutofSync) at a fixed interval. If the relationship status is OutofSync, ONTAP automatically triggers the
auto resync process to bring the relationship back to the InSync status. Auto resync is triggered only if the
transfer fails due to an operation such as an unplanned storage failover at the source or destination or a
network outage. User-initiated operations such as snapmirror quiesce and snapmirror break do not trigger
auto resync.
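To check the current relationship status, you can use the snapmirror show command, as in the following sketch (the destination path is illustrative):
cluster_dst::> snapmirror show -destination-path svm_sync:volA_dst -fields state,status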
If the relationship status becomes OutofSync for a SnapMirror Synchronous relationship in StrictSync
mode, all I/O operations to the primary volume are stopped. The OutofSync state for a SnapMirror
Synchronous relationship in Sync mode is not disruptive to the primary, and I/O operations are allowed
on the primary volume.
In ONTAP 9.5, there are a few important aspects to consider when selecting NFSv3 or NFSv4 workloads for
a Sync policy. The amount of data read or written by a workload is not a consideration, because the Sync
policy can handle high read or write I/O workloads. In ONTAP 9.5, workloads that have excessive file creation,
directory creation, file permission changes, or directory permission changes may not be suitable (these are
referred to as high-metadata workloads). A typical example of a high-metadata workload is a DevOps
workload in which you create multiple test files, run automation, and delete the files. Another example is a
parallel build workload that generates multiple temporary files during compilation. A high rate of write
metadata activity can cause synchronization between mirrors to break temporarily, which stalls read and
write I/Os from the client.
Starting with ONTAP 9.6, these limitations are removed and SnapMirror Synchronous can be used for
enterprise file services workloads that include multiuser environments, such as home directories and
software build workloads.
You might want to keep monthly Snapshot copies of your data over a 20-year span, for example, to comply
with government accounting regulations for your business. Since there is no requirement to serve data from
vault storage, you can use slower, less expensive disks on the destination system.
A baseline transfer under the default SnapVault policy XDPDefault makes a Snapshot copy of the source
volume, then transfers that copy and the data blocks it references to the destination volume. Unlike
SnapMirror, SnapVault does not include older Snapshot copies in the baseline.
At each update under the XDPDefault policy, SnapMirror transfers Snapshot copies that have been made
since the last update, provided they have labels matching the labels defined in the policy rules. In the
following output from the snapmirror policy show command for the XDPDefault policy, note the following:
• Create Snapshot is “false”, indicating that XDPDefault does not create a Snapshot copy when SnapMirror
updates the relationship.
• XDPDefault has rules “daily” and “weekly”, indicating that all Snapshot copies with matching labels on the
source are transferred when SnapMirror updates the relationship.
cluster_dst::> snapmirror policy show -policy XDPDefault -instance
Vserver: vs0
SnapMirror Policy Name: XDPDefault
SnapMirror Policy Type: vault
Policy Owner: cluster-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Network Compression Enabled: false
Create Snapshot: false
Comment: Default policy for XDP relationships with daily and weekly
rules.
Total Number of Rules: 2
Total Keep: 59
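Rules: SnapMirror Label Keep Preserve Warn Schedule Prefix
---------------- ---- -------- ---- -------- ------
daily 7 false 0 - -
weekly 52 false 0 - -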
A baseline transfer under the default unified data protection policy MirrorAndVault makes a Snapshot copy of
the source volume, then transfers that copy and the data blocks it references to the destination volume. Like
SnapVault, unified data protection does not include older Snapshot copies in the baseline.
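In the following output from the snapmirror policy show command for the MirrorAndVault policy, note that
Create Snapshot is “true” and that the policy defines three rules:
cluster_dst::> snapmirror policy show -policy MirrorAndVault -instance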
Vserver: vs0
SnapMirror Policy Name: MirrorAndVault
SnapMirror Policy Type: mirror-vault
Policy Owner: cluster-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Network Compression Enabled: false
Create Snapshot: true
Comment: A unified Synchronous SnapMirror and SnapVault policy for
mirroring the latest file system and daily and weekly snapshots.
Total Number of Rules: 3
Total Keep: 59
Rules: SnapMirror Label Keep Preserve Warn Schedule Prefix
---------------- ---- -------- ---- -------- ------
sm_created 1 false 0 - -
daily 7 false 0 - -
weekly 52 false 0 - -
Unified7year policy
The preconfigured Unified7year policy works exactly the same way as MirrorAndVault, except that a fourth
rule transfers monthly Snapshot copies and retains them for seven years.
You can protect against the possibility that an updated Snapshot copy is corrupted by creating a copy of the
last transferred Snapshot copy on the destination. This “local copy” is retained regardless of the retention
rules on the source, so that even if the Snapshot originally transferred by SnapMirror is no longer available on
the source, a copy of it will be available on the destination.
The key factor in determining the appropriateness of unified replication is the rate of change of the active file
system. A traditional mirror might be better suited to a volume holding hourly Snapshot copies of database
transaction logs, for example.
With improvements in performance, the significant benefits of version-flexible SnapMirror outweigh the slight
advantage in replication throughput obtained with version-dependent mode. For this reason, starting with
ONTAP 9.5, XDP mode has been made the new default, and any invocations of DP mode on the command
line or in new or existing scripts are automatically converted to XDP mode.
Existing relationships are not affected. If a relationship is already of type DP, it will continue to be of type DP.
Starting with ONTAP 9.5, MirrorAndVault is the new default policy when no data protection mode is specified
or when XDP mode is specified as the relationship type. The table below shows the behavior you can expect.
If you specify... The type is... The default policy (if you do not specify a policy) is...
DP XDP MirrorAllSnapshots (SnapMirror DR)
Nothing XDP MirrorAndVault (Unified replication)
XDP XDP MirrorAndVault (Unified replication)
As the table shows, the default policies assigned to XDP in different circumstances ensure that the
conversion maintains the functional equivalence of the old types. Of course, you can use different policies as
needed, including policies for unified replication:
If you specify... And the policy is... The result is...
DP MirrorAllSnapshots SnapMirror DR
DP XDPDefault SnapVault
DP MirrorAndVault Unified replication
This behavior occurs irrespective of any automatic growth setting on the destination. You cannot limit the
volume's growth or prevent ONTAP from growing it.
By default, data protection volumes are set to the grow_shrink autosize mode, which enables the volume to
grow or shrink in response to the amount of used space, and max-autosize is set to 16 TB for data protection
volumes.
• DM7000H: default DP volume max-autosize = 100 TB
Both fan-out and cascade deployments support any combination of SnapMirror DR, SnapVault, or unified
replication; however, SnapMirror Synchronous relationships (supported starting with ONTAP 9.5) support
only fan-out deployments with one or more asynchronous SnapMirror relationships and do not support
cascade deployments. Only one relationship in the fan-out configuration can be a SnapMirror Synchronous
relationship, all the other relationships from the source volume must be asynchronous SnapMirror
relationships.
Note: You can use a fan-in deployment to create data protection relationships between multiple primary
systems and a single secondary system. Each relationship must use a different volume on the secondary
system.
A multiple-mirrors fan-out deployment consists of a source volume that has a mirror relationship to multiple
secondary volumes.
Starting with ONTAP 9.5, you can have fan-out deployments with SnapMirror Synchronous relationships;
however, only one relationship in the fan-out configuration can be a SnapMirror Synchronous relationship;
all the other relationships from the source volume must be asynchronous SnapMirror relationships.
A mirror-mirror cascade deployment consists of a chain of relationships in which a source volume is mirrored
to a secondary volume, and the secondary volume is mirrored to a tertiary volume. If the secondary volume
becomes unavailable, you can synchronize the relationship between the primary and tertiary volumes without
performing a new baseline transfer.
Starting with ONTAP 9.6, SnapMirror Synchronous relationships are supported in a mirror-mirror cascade
deployment. Only the primary and secondary volumes can be in a SnapMirror Synchronous relationship. The
relationship between the secondary volumes and tertiary volumes must be asynchronous.
A mirror-vault cascade deployment consists of a chain of relationships in which a source volume is mirrored
to a secondary volume, and the secondary volume is vaulted to a tertiary volume.
SnapMirror licensing
With the introduction of ONTAP 9.5, licensing has been simplified for replicating between ONTAP instances.
In ONTAP 9.x releases, the SnapMirror license supports both vault and mirror relationships. Users can now
purchase a SnapMirror license to support ONTAP replication for both backup and disaster recovery use
cases.
The DPO license was specifically designed for ONTAP clusters that were to be dedicated as secondary
targets for SnapMirror replication. In addition to increasing the maximum volumes per node on the DPO
controller, the DPO license also modified controller QoS settings to support greater replication traffic at the
expense of application I/O. For this reason, the DPO license should never be installed on a cluster that
supports application I/O, as application performance would be impacted. Later, Data Protection Bundles
based on the DM7000H were offered as a solution and included programmatic free licenses based on the
customer environment. When you purchase the solution bundles, free SnapMirror licenses are provided
for select older clusters that replicate to the DPO secondary. While the DPO license is needed on the Data
Protection solution cluster, primary clusters from the following platform list are provided free SnapMirror
licenses. Primary clusters not included in this list require the purchase of SnapMirror licenses.
The DPO license is supported on both Hybrid and AFA platforms and can only be purchased pre-configured
with new clusters.
Additional ONTAP features were delivered with the DPO across multiple ONTAP releases.
* Details about priority for the SnapMirror backoff (workload bias) feature:
• Client: cluster I/O priority is set to client workloads (production apps), not SnapMirror traffic.
• Equality: SnapMirror replication requests have equal priority to I/O for production apps.
• SnapMirror: all SnapMirror I/O requests have higher priority than I/O for production apps.
Maximum FlexVol volumes per node by platform and release:
Platform 9.4–9.5 without DPO 9.4–9.5 with DPO 9.6 without DPO 9.6 with DPO 9.7 without DPO 9.7 with DPO
DM3000H 1000 1500 1000 1500 1000 1500
DM5000H 1000 1500 1000 1500 1000 1500
DM5000F 1000 1500 1000 1500 1000 1500
DM7000H/DM7100H 1000 1500 1000 2500 1000 2500
DM7000F 1000 1500 1000 2500 2500 2500
DM7100F 1000 1500 1000 2500 2500 2500
Starting in ONTAP 9.6, the maximum supported number of FlexVol volumes on secondary or data protection
systems has increased, enabling you to scale up to 2,500 FlexVol volumes per node, or up to 5,000 in failover
mode. The increase in FlexVol volumes is enabled with the DP_Optimized (DPO) license. A SnapMirror
license is still required on both the source and destination nodes.
Starting with ONTAP 9.5, the following feature enhancements are made to DPO systems:
• SnapMirror backoff: In DPO systems, replication traffic is given the same priority as client workloads.
SnapMirror backoff is disabled by default on DPO systems.
• Volume background deduplication and cross-volume background deduplication: Volume background
deduplication and cross-volume background deduplication are enabled in DPO systems.
You can run the storage aggregate efficiency cross-volume-dedupe start -aggregate aggregate_name
-scan-old-data true command to deduplicate the existing data. The best practice is to run the command
during off-peak hours to reduce the impact on performance.
• Increased savings by using Snapshot blocks as donors: The data blocks that are not available in the
active file system but are trapped in Snapshot copies are used as donors for volume deduplication.
The new data can be deduplicated with the data that was trapped in Snapshot copies, effectively sharing
the Snapshot blocks as well. The increased donor space provides more savings, especially when the
volume has a large number of Snapshot copies.
• Compaction: Data compaction is enabled by default on DPO volumes.
Beginning with general availability in ONTAP 9.9.1, SnapMirror Business Continuity (SM-BC) provides Zero
Recovery Time Objective (Zero RTO) or Transparent Application Failover (TAF) to enable automatic failover of
business-critical applications in SAN environments. SM-BC is supported in a configuration of two All
Flash Array (AFA) clusters.
Starting in ONTAP 9.5, you can use the snapmirror protect command to configure a data protection
relationship in a single step. Even if you use snapmirror protect, you need to understand each step in the
workflow.
The name of the destination volume is of the form source_volume_name_dst. In case of a conflict with an
existing name, the command appends a number to the volume name. You can specify a prefix and/or suffix
in the command options. The suffix replaces the system-supplied dst suffix.
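For example, the following sketch (the suffix value is illustrative) uses the -destination-volume-suffix option:
cluster_dst::> snapmirror protect -path-list svm1:volA -destination-vserver svm_backup -destination-volume-suffix _mirror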
In ONTAP 9.5 and later, a destination volume can contain up to 1019 Snapshot copies.
Note: Initialization can be time-consuming. snapmirror protect does not wait for initialization to complete
before the job finishes. For this reason, you should use the snapmirror show command rather than the
job show command to determine when initialization is complete.
Starting with ONTAP 9.5, SnapMirror Synchronous relationships can be created by using the snapmirror
protect command.
Note: You must run this command from the destination SVM or the destination cluster. The
-auto-initialize option defaults to “true”.
Example
The following example creates and initializes a SnapMirror DR relationship using the default
MirrorAllSnapshots policy:
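cluster_dst::> snapmirror protect -path-list svm1:volA, svm1:volB -destination-vserver svm_backup -policy MirrorAllSnapshots -schedule my_daily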
Note: You can use a custom policy if you prefer. For more information, see “Creating a custom
replication policy” on page 29.
Example
The following example creates and initializes a SnapVault relationship using the default XDPDefault
policy:
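cluster_dst::> snapmirror protect -path-list svm1:volA, svm1:volB -destination-vserver svm_backup -policy XDPDefault -schedule my_daily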
Example
The following example creates and initializes a unified replication relationship using the default
MirrorAndVault policy:
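cluster_dst::> snapmirror protect -path-list svm1:volA, svm1:volB -destination-vserver svm_backup -policy MirrorAndVault -schedule my_daily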
Example
The following example creates and initializes a SnapMirror Synchronous relationship using the
default Sync policy:
cluster_dst::> snapmirror protect -path-list svm1:volA, svm1:volB -destination-vserver svm_sync -policy Sync
Use the snapmirror show command to verify that the SnapMirror relationship was created. For complete
command syntax, see the man page.
Example
cluster_dst::> volume create -vserver SVM_backup -volume volA_dst -aggregate node01_aggr -type DP -size 2GB
You assign a job schedule when you create a data protection relationship. If you do not assign a job
schedule, you must update the relationship manually.
For -month, -dayofweek, and -hour, you can specify all to run the job every month, day of the week,
and hour, respectively.
Example
The following example creates a job schedule named my_weekly that runs on Saturdays at 3:00 a.m.:
cluster_dst::> job schedule cron create -name my_weekly -dayofweek "Saturday" -hour 3 -minute 0
The policy type of the replication policy determines the type of relationship it supports. The table below
shows the available policy types.
Policy type Relationship type
async-mirror SnapMirror DR
vault SnapVault
mirror-vault Unified replication
strict-sync-mirror SnapMirror Synchronous in StrictSync mode
sync-mirror SnapMirror Synchronous in Sync mode
When you create a custom replication policy, it is a good idea to model the policy after a default policy.
Starting with ONTAP 9.5, you can specify the schedule for creating a common Snapshot copy for
SnapMirror Synchronous relationships by using the -common-snapshot-schedule parameter. By
default, the common Snapshot copy schedule for SnapMirror Synchronous relationships is one hour.
You can specify a value from 30 minutes to two hours for the common Snapshot copy schedule for
SnapMirror Synchronous relationships.
Example
The following example creates a custom replication policy for SnapMirror DR that enables network
compression for data transfers:
cluster_dst::> snapmirror policy create -vserver svm1 -policy DR_compressed -type async-mirror -comment
“DR with network compression enabled” -is-network-compression-enabled true
Example
The following example creates a custom replication policy for unified replication:
cluster_dst::> snapmirror policy create -vserver svm1 -policy my_unified -type mirror-vault
Example
The following example creates a custom replication policy for SnapMirror Synchronous relationship
in the StrictSync mode:
cluster_dst::> snapmirror policy create -vserver svm1 -policy my_strictsync -type strict-sync-mirror
-common-snapshot-schedule my_sync_schedule
For “vault” and “mirror-vault” policy types, you must define rules that determine which Snapshot copies are
transferred during initialization and update.
Use the snapmirror policy show command to verify that the SnapMirror policy was created. For complete
command syntax, see the man page.
Every policy with the “vault” or “mirror-vault” policy type must have a rule that specifies which Snapshot
copies to replicate. The rule “bi-monthly”, for example, indicates that only Snapshot copies assigned the
SnapMirror label “bi-monthly” should be replicated. You specify the SnapMirror label when you configure the
Snapshot policy on the source.
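For example, the following sketch (the policy and schedule names are illustrative) assigns the SnapMirror label bi-monthly on the source, following the same pattern as the snap_policy_daily example earlier:
cluster_src::> volume snapshot policy create -vserver vs0 -policy snap_policy_bimonthly -schedule1 bimonthly -count1 6 -snapmirror-label1 bi-monthly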
Each policy type is associated with one or more system-defined rules. These rules are automatically
assigned to a policy when you specify its policy type. The table below shows the system-defined rules.
System-defined rule Used in policy types Result
sm_created async-mirror, mirror-vault The Snapshot copy created by SnapMirror is transferred on initialization and update.
all_source_snapshots async-mirror New Snapshot copies on the source are transferred on initialization and update.
daily vault, mirror-vault New Snapshot copies on the source with the SnapMirror label “daily” are transferred on initialization and update.
weekly vault, mirror-vault New Snapshot copies on the source with the SnapMirror label “weekly” are transferred on initialization and update.
app_consistent Sync, StrictSync Snapshot copies with the SnapMirror label “app_consistent” on the source are synchronously replicated to the destination. Supported starting with ONTAP 9.7.
Except for the “async-mirror” policy type, you can specify additional rules as needed, for default or custom
policies. For example:
• For the default MirrorAndVault policy, you might create a rule called “bi-monthly” to match Snapshot
copies on the source with the “bi-monthly” SnapMirror label.
• For a custom policy with the “mirror-vault” policy type, you might create a rule called “bi-weekly” to match
Snapshot copies on the source with the “bi-weekly” SnapMirror label.
Example
The following example adds a rule with the SnapMirror label bi-monthly to the default
MirrorAndVault policy:
cluster_dst::> snapmirror policy add-rule -vserver svm1 -policy MirrorAndVault -snapmirror-label bi-monthly -keep 6
Example
The following example adds a rule with the SnapMirror label bi-weekly to the custom my_snapvault
policy:
cluster_dst::> snapmirror policy add-rule -vserver svm1 -policy my_snapvault -snapmirror-label bi-weekly -keep 26
Example
The following example adds a rule with the SnapMirror label app_consistent to the default Sync
policy:
cluster_dst::> snapmirror policy add-rule -vserver svm1 -policy Sync -snapmirror-label app_consistent -keep 1
You can then replicate Snapshot copies from the source cluster that match this SnapMirror label:
cluster_src::> snapshot create -vserver vs1 -volume vol1 -snapshot snapshot1 -snapmirror-label app_consistent
For complete command syntax, see the man page. For an example of how to create a job
schedule, see “Creating a replication job schedule” on page 28.
Example
The following example adds a schedule for creating a local copy to the default MirrorAndVault
policy:
cluster_dst::> snapmirror policy add-rule -vserver svm1 -policy MirrorAndVault -snapmirror-label my_monthly
-schedule my_monthly
Example
The following example adds a schedule for creating a local copy to the custom my_unified policy:
cluster_dst::> snapmirror policy add-rule -vserver svm1 -policy my_unified -snapmirror-label my_monthly -schedule
my_monthly
With improvements in performance, the significant benefits of version-flexible SnapMirror outweigh the slight
advantage in replication throughput obtained with version-dependent mode. For this reason, starting with
ONTAP 9.5, XDP mode has been made the new default, and any invocations of DP mode on the command
line or in new or existing scripts are automatically converted to XDP mode.
Existing relationships are not affected. If a relationship is already of type DP, it will continue to be of type DP.
The table below shows the behavior you can expect.
If you specify... The type is... The default policy (if you do not specify a policy) is...
DP XDP MirrorAllSnapshots (SnapMirror DR)
Nothing XDP MirrorAndVault (Unified replication)
XDP XDP MirrorAndVault (Unified replication)
In ONTAP 9.5 and later, a destination volume can contain up to 1019 Snapshot copies.
Note: The schedule parameter is not applicable when creating SnapMirror Synchronous
relationships.
Example
The following example creates a SnapMirror DR relationship using the default MirrorLatest policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -schedule
my_daily -policy MirrorLatest
Example
The following example creates a SnapVault relationship using the default XDPDefault policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -schedule
my_daily -policy XDPDefault
Example
The following example creates a unified replication relationship using the default MirrorAndVault
policy:
cluster_dst:> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -schedule
my_daily -policy MirrorAndVault
Example
The following example creates a unified replication relationship using the custom my_unified policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -schedule
my_daily -policy my_unified
Example
The following example creates a SnapMirror Synchronous relationship using the default Sync
policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -policy Sync
The following example creates a SnapMirror Synchronous relationship using the default StrictSync
policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -policy
StrictSync
Example
The following example creates a SnapMirror DR relationship. With the DP type automatically
converted to XDP and with no policy specified, the policy defaults to the MirrorAllSnapshots policy:
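cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type DP -schedule my_daily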
Example
The following example creates a SnapMirror DR relationship. With no type or policy specified, the
policy defaults to the MirrorAllSnapshots policy:
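cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -schedule my_daily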
Example
The following example creates a SnapVault relationship. With no policy specified, the policy
defaults to the XDPDefault policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -schedule
my_daily
Example
The following example creates a SnapMirror Synchronous relationship with the predefined policy
SnapCenterSync :
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -policy
SnapCenterSync
Note: The predefined policy SnapCenterSync is of type Sync . This policy replicates any Snapshot
copy that is created with the snapmirror-label of "app_consistent".
Use the snapmirror show command to verify that the SnapMirror relationship was created. For complete
command syntax, see the man page.
Initialization can be time-consuming. You might want to run the baseline transfer in off-peak hours.
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example initializes the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
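cluster_dst::> snapmirror initialize -source-path svm1:volA -destination-path svm_backup:volA_dst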
The example shows how to create replication relationships based on two custom policies:
• The “snapvault_secondary” policy retains 7 daily, 52 weekly, and 180 monthly Snapshot copies on the
secondary destination cluster.
• The “snapvault_tertiary” policy retains 250 weekly Snapshot copies on the tertiary destination cluster.
Step 6. On the secondary destination cluster, create the relationship with the source cluster:
cluster_secondary::>snapmirror create -source-path svm_primary:volA -destination-path svm_
secondary:volA -type XDP -schedule my_snapvault -policy snapvault_secondary
Step 7. On the secondary destination cluster, initialize the relationship with the source cluster:
cluster_secondary::>snapmirror initialize -source-path svm_primary:volA -destination-path svm_
secondary:volA
Step 8. On the tertiary destination cluster, create the “snapvault_tertiary” policy:
cluster_tertiary::>snapmirror policy create -policy snapvault_tertiary -type vault -comment
“Policy on tertiary for vault to vault cascade” -vserver svm_tertiary
Step 9. On the tertiary destination cluster, define the “my-weekly” rule for the policy:
cluster_tertiary::>snapmirror policy add-rule -policy snapvault_tertiary -snapmirror-label my-
weekly -keep 250 -vserver svm_tertiary
Step 10. On the tertiary destination cluster, verify the policy:
cluster_tertiary::>snapmirror policy show snapvault_tertiary -instance
Vserver: svm_tertiary
SnapMirror Policy Name: snapvault_tertiary
SnapMirror Policy Type: vault
Policy Owner: cluster-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Network Compression Enabled: false
Create Snapshot: false
Comment: Policy on tertiary for vault to vault cascade
Total Number of Rules: 1
Total Keep: 250
Rules: SnapMirror Label Keep Preserve Warn Schedule Prefix
---------------- ---- -------- ---- -------- ------
my-weekly 250 false 0 - -
Step 11. On the tertiary destination cluster, create the relationship with the secondary cluster:
cluster_tertiary::>snapmirror create -source-path svm_secondary:volA -destination-path svm_
tertiary:volA -type XDP -schedule my_snapvault -policy snapvault_tertiary
SnapMirror does not automatically convert existing DP-type relationships to XDP. To convert the
relationship, you need to break and delete the existing relationship, create a new XDP relationship, and
resync the relationship. For background information, see “XDP replaces DP as the SnapMirror default” on
page 18.
Note: After you convert a SnapMirror relationship type from DP to XDP, space-related settings, such as
autosize and space guarantee, are no longer replicated to the destination.
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example quiesces the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
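cluster_dst::> snapmirror quiesce -source-path svm1:volA -destination-path svm_backup:volA_dst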
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example breaks the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror break -source-path svm1:volA -destination-path svm_backup:volA_dst
Example
The following example disables Snapshot copy autodelete on the destination volume volA_dst :
cluster_dst::> volume snapshot autodelete modify -vserver svm_backup -volume volA_dst -enabled false
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example deletes the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror delete -source-path svm1:volA -destination-path svm_backup:volA_dst
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example creates a SnapMirror DR relationship between the source volume volA on
svm1 and the destination volume volA_dst on svm_backup using the default MirrorAllSnapshots
policy:
cluster_dst::> snapmirror create -source-path svm1:volA -destination-path svm_backup:volA_dst -type XDP -schedule
my_daily -policy MirrorAllSnapshots
Note: You must run this command from the destination SVM or the destination cluster. Although
resync does not require a baseline transfer, it can be time-consuming. You might want to run the
resync in off-peak hours.
Example
The following example resyncs the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror resync -source-path svm1:volA -destination-path svm_backup:volA_dst
Use the snapmirror show command to verify that the SnapMirror relationship was created. For complete
command syntax, see the man page.
b. From the source cluster, release the SnapMirror relationship without deleting the common Snapshot
copies:
snapmirror release -relationship-info-only true -destination-path dest_SVM:dest_volume
c. From the source cluster, release the SnapMirror relationship without deleting the common Snapshot
copies:
snapmirror release -relationship-info-only true -destination-path dest_SVM:dest_volume
You cannot modify the policy of a SnapMirror Synchronous relationship to convert its mode.
Step 1. From the destination cluster, quiesce the existing SnapMirror Synchronous relationship:
snapmirror quiesce -destination-path dest_SVM:dest_volume
Step 2. From the destination cluster, delete the existing SnapMirror Synchronous relationship:
snapmirror delete -destination-path dest_SVM:dest_volume
Step 3. From the source cluster, release the SnapMirror relationship without deleting the common
Snapshot copies:
snapmirror release -relationship-info-only true -destination-path dest_SVM:dest_volume
Step 4. From the destination cluster, create a SnapMirror Synchronous relationship by specifying the mode
to which you want to convert the SnapMirror Synchronous relationship:
snapmirror create -source-path vs1:vol1 -destination-path dest_SVM:dest_volume -policy Sync|
StrictSync
You must perform this task from the destination SVM or the destination cluster.
Example
The following example stops scheduled transfers between the source volume volA on svm1 and
the destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror quiesce -source-path svm1:volA -destination-path svm_backup:volA_dst
Note: This step is not required for SnapMirror Synchronous relationships (supported starting with
ONTAP 9.5).
Example
The following example stops ongoing transfers between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror abort -source-path svm1:volA -destination-path svm_backup:volA_dst
Example
The following example breaks the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror break -source-path svm1:volA -destination-path svm_backup:volA_dst
NAS environment:
1. Mount the NAS volume to the namespace using the same junction path that the source volume was
mounted to in the source SVM.
2. Apply the appropriate ACLs to the CIFS shares at the destination volume.
3. Assign the NFS export policies to the destination volume.
4. Apply the quota rules to the destination volume.
5. Redirect clients to the destination volume.
6. Remount the NFS and CIFS shares on the clients.
SAN environment:
1. Map the LUNs in the volume to the appropriate initiator group.
2. For iSCSI, create iSCSI sessions from the SAN host initiators to the SAN LIFs.
3. On the SAN client, perform a storage re-scan to detect the connected LUNs.
The procedure below assumes that the baseline in the original source volume is intact. If the baseline is not
intact, you must create and initialize the relationship between the volume you are serving data from and the
original source volume before performing the procedure.
You must run this command from the destination SVM or the destination cluster.
Example
The following example deletes the relationship between the original source volume, volA on svm1 ,
and the volume you are serving data from, volA_dst on svm_backup :
cluster_dst::> snapmirror delete -source-path svm1:volA -destination-path svm_backup:volA_dst
Example
The following example stops the source SVM for the reversed relationship:
cluster_dst::> vserver stop svm_backup
Note: You must run this command from the destination SVM or the destination cluster. The
command fails if a common Snapshot copy does not exist on the source and destination. Use
snapmirror initialize to re-initialize the relationship.
Example
The following example updates the relationship between the volume you are serving data from,
volA_dst on svm_backup , and the original source volume, volA on svm1 :
cluster_src::> snapmirror update -source-path svm_backup:volA_dst -destination-path svm1:volA
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example stops scheduled transfers between the volume you are serving data from,
volA_dst on svm_backup , and the original source volume, volA on svm1 :
cluster_src::> snapmirror quiesce -source-path svm_backup:volA_dst -destination-path svm1:volA
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example stops ongoing transfers between the volume you are serving data from,
volA_dst on svm_backup , and the original source volume, volA on svm1 :
cluster_src::> snapmirror abort -source-path svm_backup:volA_dst -destination-path svm1:volA
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example breaks the relationship between the volume you are serving data from, volA_
dst on svm_backup , and the original source volume, volA on svm1 :
cluster_src::> snapmirror break -source-path svm_backup:volA_dst -destination-path svm1:volA
You must run this command from the source SVM or the source cluster for the reversed
relationship.
Example
The following example deletes the reversed relationship between the original source volume, volA
on svm1 , and the volume you are serving data from, volA_dst on svm_backup :
cluster_src::> snapmirror delete -source-path svm_backup:volA_dst -destination-path svm1:volA
Example
The following example reestablishes the relationship between the original source volume, volA on
svm1 , and the original destination volume, volA_dst on svm_backup :
cluster_dst::> snapmirror resync -source-path svm1:volA -destination-path svm_backup:volA_dst
Use the snapmirror show command to verify that the SnapMirror relationship was created. For complete
command syntax, see the man page.
To restore a file or LUN from a SnapMirror Synchronous destination (supported starting with ONTAP 9.5),
you must first delete and release the relationship.
The volume to which you are restoring files or LUNs (the destination volume) must be a read-write volume:
• SnapMirror performs an incremental restore if the source and destination volumes have a common
Snapshot copy (as is typically the case when you are restoring to the original source volume).
• Otherwise, SnapMirror performs a baseline restore, in which the specified Snapshot copy and all the data
blocks it references are transferred to the destination volume.
Example
The following example shows the Snapshot copies on the vserverB:secondary1 destination:
cluster_dst::> volume snapshot show -vserver vserverB -volume secondary1
Step 2. Restore a single file or LUN or a set of files or LUNs from a Snapshot copy in a SnapMirror
destination volume:
snapmirror restore -source-path SVM:volume|cluster://SVM/volume, ... -destination-path SVM:
volume|cluster://SVM/volume, ... -source-snapshot snapshot -file-list source_file_path,
@destination_file_path
For complete command syntax, see the man page.
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following command restores the files file1 and file2 from the Snapshot copy daily.2020-01-
25_0010 in the original destination volume secondary1, to the same location in the active file system
of the original source volume primary1:
cluster_dst::> snapmirror restore -source-path vserverB:secondary1 -destination-path vserverA:primary1
-source-snapshot daily.2020-01-25_0010 -file-list /dir1/file1,/dir2/file2
[Job 3479] Job is queued: snapmirror restore for the relationship with destination vserverA:primary1
Example
The destination file path begins with the @ symbol followed by the path of the file from the root of
the original source volume. In this example, file1 is restored to /dir1/file1.new and file2 is restored
to /dir2.new/file2 on primary1:
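cluster_dst::> snapmirror restore -source-path vserverB:secondary1 -destination-path vserverA:primary1
-source-snapshot daily.2020-01-25_0010 -file-list /dir1/file1,@/dir1/file1.new,/dir2/file2,@/dir2.new/file2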
[Job 3479] Job is queued: snapmirror restore for the relationship with destination vserverA:primary1
Example
The following command restores the files file1 and file3 from the Snapshot copy daily.2020-01-
25_0010 in the original destination volume secondary1, to different locations in the active file system
of the original source volume primary1, and restores file2 from snap1 to the same location in the
active file system of primary1.
In this example, the file file1 is restored to /dir1/file1.new and file3 is restored to /dir3.new/file3:
[Job 3479] Job is queued: snapmirror restore for the relationship with destination vserverA:primary1
To restore a volume from a SnapMirror Synchronous destination (supported starting with ONTAP 9.5), you
must first delete and release the relationship.
The destination volume for the restore operation must be one of the following:
• A read-write volume, in which case SnapMirror performs an incremental restore, provided that the source
and destination volumes have a common Snapshot copy (as is typically the case when you are restoring
to the original source volume).
Note: The command fails if there is not a common Snapshot copy. You cannot restore the contents of a
volume to an empty read-write volume.
• An empty data protection volume, in which case SnapMirror performs a baseline restore, in which the
specified Snapshot copy and all the data blocks it references are transferred to the destination volume.
Restoring the contents of a volume is a disruptive operation. CIFS traffic must not be running on the
SnapVault primary volume when a restore operation is running.
If the destination volume for the restore operation has compression enabled, and the source volume does
not have compression enabled, disable compression on the destination volume. You need to re-enable
compression after the restore operation is complete.
Example
The following example shows the Snapshot copies on the vserverB:secondary1 destination:
cluster_dst::> volume snapshot show -vserver vserverB -volume secondary1
Step 2. Restore the contents of a volume from a Snapshot copy in a SnapMirror destination volume:
snapmirror restore -source-path SVM:volume|cluster://SVM/volume, ... -destination-path SVM:
volume|cluster://SVM/volume, ... -source-snapshot snapshot
For complete command syntax, see the man page.
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following command restores the contents of the original source volume primary1 from the
Snapshot copy daily.2020-01-25_0010 in the original destination volume secondary1:
cluster_dst::> snapmirror restore -source-path vserverB:secondary1 -destination-path vserverA:primary1
-source-snapshot daily.2020-01-25_0010
Warning: All data newer than Snapshot copy daily.2020-01-25_0010 on volume vserverA:primary1 will be deleted.
[Job 34] Job is queued: snapmirror restore from source vserverB:secondary1 for the snapshot daily.2020-01-25_0010.
Step 3. Remount the restored volume and restart all applications that use the volume.
SnapMirror aborts any transfers from a moved source volume until you update the replication relationship
manually.
Starting with ONTAP 9.5, SnapMirror Synchronous relationships are supported. Although the source and
destination volumes are in sync at all times in these relationships, the view from the secondary cluster is
synchronized with the primary only on an hourly basis. If you want to view the point-in-time data at the
destination, you should perform a manual update by running the snapmirror update command.
Note: You must run this command from the destination SVM or the destination cluster. The
command fails if a common Snapshot copy does not exist on the source and destination. Use
snapmirror initialize to re-initialize the relationship.
Example
The following example updates the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror update -source-path svm1:volA -destination-path svm_backup:volA_dst
Although resync does not require a baseline transfer, it can be time-consuming. You might want to run the
resync in off-peak hours.
Note: You must run this command from the destination SVM or the destination cluster.
Example
The following example resyncs the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror resync -source-path svm1:volA -destination-path svm_backup:volA_dst
The snapmirror release command deletes any SnapMirror-created Snapshot copies from the source. You
can use the -relationship-info-only option to preserve the Snapshot copies.
Step 1. If you have SnapMirror Synchronous relationships (supported starting with ONTAP 9.5), quiesce
the replication relationship:
snapmirror quiesce -destination-path SVM:volume|cluster://SVM/volume
Example
The following example quiesces the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_dst::> snapmirror quiesce -destination-path svm_backup:volA_dst
Step 2. Delete the replication relationship:
snapmirror delete -source-path SVM:volume|cluster://SVM/volume -destination-path SVM:volume|cluster://SVM/volume
Note: You must run this command from the destination cluster or destination SVM.
Example
The following example deletes the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup :
cluster_src::> snapmirror delete -source-path svm1:volA -destination-path svm_backup:volA_dst
Step 3. Release the replication relationship information from the source SVM:
snapmirror release -source-path SVM:volume|cluster://SVM/volume -destination-path SVM:volume|cluster://SVM/volume
Note: You must run this command from the source cluster or source SVM.
Example
The following example releases information for the specified replication relationship from the
source SVM svm1 :
cluster_src::> snapmirror release -source-path svm1:volA -destination-path svm_backup:volA_dst
AFA systems manage storage efficiency settings differently from Hybrid systems after a destination volume
is made writeable.
You can use the volume efficiency show command to determine whether efficiency is enabled on a volume.
For more information, see the man pages.
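For example, the following check (SVM and volume names hypothetical) displays whether efficiency is enabled on a destination volume:
cluster_dst::> volume efficiency show -vserver svm_backup -volume volA_dst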
You can check if SnapMirror is maintaining storage efficiency by viewing the SnapMirror audit logs and
locating the transfer description. If the transfer description displays transfer_desc=Logical Transfer,
SnapMirror is not maintaining storage efficiency. If the transfer description displays transfer_desc=Logical
Transfer with Storage Efficiency, SnapMirror is maintaining storage efficiency. For example:
Fri May 22 02:13:02 CDT 2020 ScheduledUpdate[May 22 02:12:00]:cc0fbc29-b665-11e5-a626-00a09860c273 Operation-Uuid=39fbcf48
-550a-4282-a906-df35632c73a1 Group=none Operation-Cookie=0 action=End source=<sourcepath> destination=<destpath>
status=Success bytes_transferred=117080571 network_compression_ratio=1.0:1 transfer_desc=Logical Transfer -
Optimized Directory Mode
Note: This behavior applies only to FlexVol volumes; it does not apply to FlexGroup volumes.
• On resync, the caching policy is automatically set to “none”, and deduplication and inline compression are
automatically disabled, regardless of your original settings. You must modify the settings manually as
needed, as in the sketch below.
Note: Manual updates with storage efficiency enabled can be time-consuming. You might want to run the
operation in off-peak hours.
Note: You must run this command from the destination SVM or the destination cluster. The
command fails if a common Snapshot copy does not exist on the source and destination. Use
snapmirror initialize to re-initialize the relationship.
Example
The following example updates the relationship between the source volume volA on svm1 and the
destination volume volA_dst on svm_backup , and re-enables storage efficiency:
cluster_dst::> snapmirror update -source-path svm1:volA -destination-path svm_backup:volA_dst
-enable-storage-efficiency true
SnapMirror global throttling restricts the bandwidth used by incoming and/or outgoing SnapMirror and
SnapVault transfers. The restriction is enforced cluster wide on all nodes in the cluster.
For example, if the outgoing throttle is set to 100 Mbps, each node in the cluster will have the outgoing
bandwidth set to 100 Mbps. If global throttling is disabled, it is disabled on all nodes.
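For example, global throttling can be enabled or disabled cluster wide with the replication.throttle.enable option; a sketch:
cluster_dst::> options -option-name replication.throttle.enable on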
Note: The throttle has no effect on volume move transfers or load-sharing mirror transfers. Although data
transfer rates are often expressed in bits per second (bps), the throttle values must be entered in kilobytes
per second (KBps).
Global throttling works with the per-relationship throttle feature for SnapMirror and SnapVault transfers. The
per-relationship throttle is enforced until the combined bandwidth of per-relationship transfers exceeds the
value of the global throttle, after which the global throttle is enforced. A throttle value of 0 means that global
throttling is disabled.
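For example, a per-relationship throttle of 2,048 KBps might be set on a hypothetical relationship as follows; transfers for that relationship are then capped at 2,048 KBps unless the combined per-relationship traffic exceeds the global throttle:
cluster_dst::> snapmirror modify -source-path svm1:volA -destination-path svm_backup:volA_dst -throttle 2048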
Important: Global throttling should not be enabled on clusters that have SnapMirror Synchronous
relationships.
Example
The following example shows how to set the maximum total bandwidth used by incoming transfers
to 100 Mbps:
cluster_dst::> options -option-name replication.throttle.incoming.max_kbs 12500
Example
The following example shows how to set the maximum total bandwidth used by outgoing transfers
to 100 Mbps:
cluster_dst::> options -option-name replication.throttle.outgoing.max_kbs 12500
Details about these relationship types can be found here: Chapter 3 “Understanding SnapMirror volume
replication” on page 11.
The policy type of the replication policy determines the type of relationship it supports. The following table
shows the available policy types.
Policy type          Relationship type
async-mirror         SnapMirror DR
vault                SnapVault
mirror-vault         Unified replication
strict-sync-mirror   SnapMirror Synchronous (StrictSync mode)
sync-mirror          SnapMirror Synchronous (Sync mode)
Existing relationships are not affected by the new default. If a relationship is already of type DP, it will
continue to be of type DP. The following table shows the behavior you can expect.
If you specify...   The type is...   The default policy (if you do not specify a policy) is...
DP                  XDP              MirrorAllSnapshots (SnapMirror DR)
Nothing             XDP              MirrorAllSnapshots (SnapMirror DR)
XDP                 XDP              XDPDefault (SnapVault)
Details about the changes in the default can be found here: “XDP replaces DP as the SnapMirror default” on
page 18.
Otherwise, SVM replication is almost identical to volume replication. You can use virtually the same workflow
for SVM replication as you use for volume replication.
Support details
The following table shows support details for SnapMirror SVM replication.
Replication scope Intercluster only. You cannot replicate SVMs in the same cluster.
Volume encryption • Encrypted volumes on the source are encrypted on the destination.
• Onboard Key Manager or KMIP servers must be configured on the destination.
• New encryption keys are generated at the destination.
• If the destination does not contain a node that supports volume encryption,
replication succeeds, but the destination volumes are not encrypted.
FabricPool Starting with ONTAP 9.6, SnapMirror SVM replication is supported with FabricPools.
MetroCluster Starting with ONTAP 9.5, SnapMirror SVM replication is supported on MetroCluster
configurations.
• A MetroCluster configuration cannot be the destination of an SVM DR relationship.
• Only an active SVM within a MetroCluster configuration can be the source of an SVM
DR relationship.
A source can be a sync-source SVM before switchover or a sync-destination SVM
after switchover.
• When a MetroCluster configuration is in a steady state, the MetroCluster sync-
destination SVM cannot be the source of an SVM DR relationship, since the volumes
are not online.
• When the sync-source SVM is the source of an SVM DR relationship, the source SVM
DR relationship information is replicated to the MetroCluster partner.
• During the switchover and switchback processes, replication to the SVM DR
destination might fail.
However, after the switchover or switchback process completes, the next SVM DR
scheduled updates will succeed.
                                          -identity-preserve true
                                          Policy without          Policy with
                                          -discard-configs        -discard-configs       -identity-preserve
Configuration replicated                  network set             network set            false
SAN LIFs                                  No                      No                     No
Firewall policies                         Yes                     Yes                    No
Routes                                    Yes                     No                     No
Broadcast domain                          No                      No                     No
Subnet                                    No                      No                     No
IPspace                                   No                      No                     No
User data                                 No                      No                     No
Qtrees                                    No                      No                     No
Quotas                                    No                      No                     No
File-level QoS                            No                      No                     No
Attributes: state of the root volume,
space guarantee, size, autosize, and
total number of files                     No                      No                     No
Storage QoS: QoS policy group             Yes                     Yes                    Yes
iSCSI                                     No                      No                     No
igroups                                   No                      No                     No
portsets                                  No                      No                     No
Serial numbers                            No                      No                     No
SNMP v3 users                             Yes                     Yes                    No
Note: This workflow assumes that you are already using a default policy or a custom replication policy.
Example
The following example creates a job schedule named my_weekly that runs on Saturdays at 3:00 a.m.:
cluster_dst::> job schedule cron create -name my_weekly -dayofweek "Saturday" -hour 3 -minute 0
Step 3. From the destination SVM or the destination cluster, create a replication relationship:
snapmirror create -source-path SVM_name: -destination-path SVM_name: -type DP|XDP
-schedule schedule -policy policy -identity-preserve true
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options.
Example
The following example creates a SnapMirror DR relationship using the default MirrorAllSnapshots
policy:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule my_daily
-policy MirrorAllSnapshots -identity-preserve true
Example
The following example creates a unified replication relationship using the default MirrorAndVault
policy:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule my_daily
-policy MirrorAndVault -identity-preserve true
Example
Assuming you have created a custom policy with the policy type mirror-vault , the following
example creates a unified replication relationship:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule my_daily
-policy my_unified -identity-preserve true
Step 5. From the destination SVM or the destination cluster, initialize the SVM replication relationship:
snapmirror initialize -source-path SVM_name: -destination-path SVM_name:
Example
The following example initializes the relationship between the source SVM, svm1 , and the
destination SVM, svm_backup :
cluster_dst::> snapmirror initialize -source-path svm1: -destination-path svm_backup:
The -identity-preserve option of the snapmirror create command must be set to true when you create the
SVM replication relationship.
Example
The following example creates a job schedule named my_weekly that runs on Saturdays at 3:00 a.m.:
cluster_dst::> job schedule cron create -name my_weekly -dayofweek "Saturday" -hour 3 -minute 0
Example
The following example creates a custom replication policy for SnapMirror DR that excludes LIFs:
cluster_dst::> snapmirror policy create -vserver svm1 -policy DR_exclude_LIFs -type async-mirror -discard-configs network
Example
The following example creates a custom replication policy for unified replication that excludes LIFs:
cluster_dst::> snapmirror policy create -vserver svm1 -policy unified_exclude_LIFs -type mirror-vault -discard
-configs network
Step 4. From the destination SVM or the destination cluster, run the following command to create a
replication relationship:
snapmirror create -source-path SVM: -destination-path SVM: -type DP|XDP -schedule schedule
-policy policy -identity-preserve true|false
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the examples below.
Example
The following example creates a SnapMirror unified replication relationship that excludes LIFs:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule my_daily
-policy unified_exclude_LIFs -identity-preserve true
Step 6. From the destination SVM or the destination cluster, initialize a replication relationship:
snapmirror initialize -source-path SVM: -destination-path SVM:
For complete command syntax, see the man page.
The following example initializes the relationship between the source, svm1 , and the
svm_backup :
cluster_dst::> snapmirror initialize -source-path svm1: -destination-path svm_backup:
You must configure the network and protocols on the destination SVM for data access in the event a disaster
occurs.
For a list of preserved protocol and name service settings, see “Configurations replicated in SVM DR
relationships” on page 55.
Example
The following example creates a job schedule named my_weekly that runs on Saturdays at 3:00 a.m.:
cluster_dst::> job schedule cron create -name my_weekly -dayofweek "Saturday" -hour 3 -minute 0
Step 3. Create a replication relationship that excludes network, name service, and other configuration
settings:
snapmirror create -source-path SVM: -destination-path SVM: -type DP|XDP -schedule schedule
-policy policy -identity-preserve false
Example
The following example creates a SnapMirror DR relationship using the default MirrorAllSnapshots
policy. The relationship excludes network, name service, and other configuration settings from
SVM replication:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule
my_daily -policy MirrorAllSnapshots -identity-preserve false
Example
The following example creates a unified replication relationship using the default MirrorAndVault
policy. The relationship excludes network, name service, and other configuration settings:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule my_daily
-policy MirrorAndVault -identity-preserve false
Example
Assuming you have created a custom policy with the policy type async-mirror , the following
example creates a SnapMirror DR relationship. The relationship excludes network, name service,
and other configuration settings from SVM replication:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule
my_daily -policy my_mirrored -identity-preserve false
Example
Assuming you have created a custom policy with the policy type mirror-vault , the following
example creates a unified replication relationship. The relationship excludes network, name
service, and other configuration settings from SVM replication:
cluster_dst::> snapmirror create -source-path svm1: -destination-path svm_backup: -type XDP -schedule
my_daily -policy my_unified -identity-preserve false
Step 5. If you are using SMB, you must also configure a CIFS server.
See “CIFS only: Creating a CIFS server” on page 65.
Step 6. From the destination SVM or the destination cluster, initialize the SVM replication relationship:
snapmirror initialize -source-path SVM_name: -destination-path SVM_name:
You must configure the network and protocols on the destination SVM for data access in the event a disaster
occurs.
Example
Step 2. Verify that the destination SVM is in the running state and that its subtype is dp-destination by using
the vserver show command.
Step 3. Create a LIF on the destination SVM by using the network interface create command.
Example
destination_cluster::>network interface create -vserver dvs1 -lif NAS1 -role data -data-protocol cifs -home-node
destination_cluster-01 -home-port a0a-101 -address 192.0.2.128 -netmask 255.255.255.128
Step 1. Exclude the volume from SVM replication:
volume modify -vserver SVM -volume volume -vserver-dr-protection unprotected
Example
The following example excludes the volume volA_src from SVM replication:
cluster_src::> volume modify -vserver SVM1 -volume volA_src -vserver-dr-protection unprotected
If you later want to include a volume in the SVM replication that you originally excluded, run the
following command:
volume modify -vserver SVM -volume volume -vserver-dr-protection protected
Example
The following example includes the volume volA_src in the SVM replication:
cluster_src::> volume modify -vserver SVM1 -volume volA_src -vserver-dr-protection protected
Step 2. Create and initialize the SVM replication relationship as described in “Replicating an entire SVM
configuration” on page 60.
Note: In a disaster recovery scenario, you cannot perform a SnapMirror update from the source SVM to the
disaster recovery destination SVM because your source SVM and its data will be inaccessible, and because
updates since the last resync might be bad or corrupt.
Step 1. From the destination SVM or the destination cluster, stop scheduled transfers to the destination:
snapmirror quiesce -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example stops scheduled transfers between the source SVM svm1 and the
destination SVM svm_backup :
cluster_dst::> snapmirror quiesce -source-path svm1: -destination-path svm_backup:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example stops ongoing transfers between the source SVM svm1 and the destination
SVM svm_backup :
cluster_dst::> snapmirror abort -source-path svm1: -destination-path svm_backup:
Step 3. From the destination SVM or the destination cluster, break the replication relationship:
snapmirror break -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example breaks the relationship between the source SVM svm1 and the destination
SVM svm_backup :
cluster_dst::> snapmirror break -source-path svm1: -destination-path svm_backup:
Step 4. If you set -identity-preserve true when you created the SVM replication relationship, stop the
source SVM:
vserver stop -vserver SVM
Example
The following example stops the source SVM svm1 :
cluster_src::> vserver stop -vserver svm1
Configure SVM destination volumes for data access, as described in “Configuring the destination volume for
data access” on page 42.
If you increased the size of the destination volume while serving data from it, you should manually increase
max-autosize on the original source volume before you reactivate it, to ensure that the volume can grow
sufficiently.
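For example, assuming a hypothetical original source volume volA_src on SVM svm1 that needs room to grow to 40 TB:
cluster_src::> volume modify -vserver svm1 -volume volA_src -max-autosize 40TB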
This procedure assumes that the baseline in the original source volume is intact. If the baseline is not intact,
you must create and initialize the relationship between the volume you are serving data from and the original
source volume before performing the procedure.
Step 1. From the original source SVM or the original source cluster, create a reverse SVM DR relationship
using the same configuration, policy, and identity-preserve setting as the original SVM DR
relationship:
snapmirror create -source-path SVM: -destination-path SVM:
Example
The following example creates a relationship between the SVM from which you are serving data,
svm_backup, and the original source SVM, svm1:
cluster_src::> snapmirror create -source-path svm_backup: -destination-path svm1: -identity-preserve true
Step 2. From the original source SVM or the original source cluster, run the following command to reverse
the data protection relationship:
snapmirror resync -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Although resync does not require a baseline transfer, it can be time-consuming. You might want to
run the resync in off-peak hours.
Example
The following example reverses the relationship between the original source SVM, svm1 , and the
SVM you are serving data from, svm_backup :
cluster_src::> snapmirror resync -source-path svm_backup: -destination-path svm1:
Step 3. When you are ready to reestablish data access to the original source SVM, stop the original
destination SVM to disconnect any clients currently connected to the original destination SVM.
vserver stop -vserver SVM
Example
The following example stops the original destination SVM which is currently serving data:
cluster_dst::> vserver stop -vserver svm_backup
Step 4. Verify that the original destination SVM is in the stopped state by using the vserver show
command.
Step 5. From the original source SVM or the original source cluster, run the following command to perform
the final update of the reversed relationship to transfer all changes from the original destination
SVM to the original source SVM:
snapmirror update -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example updates the relationship between the original destination SVM from which
you are serving data, svm_backup, and the original source SVM, svm1:
cluster_src::> snapmirror update -source-path svm_backup: -destination-path svm1:
Step 6. From the original source SVM or the original source cluster, run the following command to stop
scheduled transfers for the reversed relationship:
snapmirror quiesce -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example stops scheduled transfers between the SVM you are serving data from,
svm_backup, and the original SVM, svm1:
cluster_src::> snapmirror quiesce -source-path svm_backup: -destination-path svm1:
Step 7. When the final update is complete and the relationship indicates "Quiesced" for the relationship
status, run the following command from the original source SVM or the original source cluster to
break the reversed relationship:
snapmirror break -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example breaks the relationship between the original destination SVM from which
you were serving data, svm_backup, and the original source SVM, svm1:
cluster_src::> snapmirror break -source-path svm_backup: -destination-path svm1:
Step 8. From the original destination SVM or the original destination cluster, reestablish the original data
protection relationship:
snapmirror resync -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
The following example reestablishes the relationship between the original source SVM, svm1, and
the original destination SVM, svm_backup:
cluster_dst::> snapmirror resync -source-path svm1: -destination-path svm_backup:
Step 9. If the original source SVM was previously stopped, start it from the original source cluster:
vserver start -vserver SVM
Example
The following example starts the original source SVM, svm1:
cluster_src::> vserver start -vserver svm1
Step 10. From the original source SVM or the original source cluster, run the following command to delete
the reversed data protection relationship:
snapmirror delete -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example deletes the reversed relationship between the original destination SVM,
svm_backup, and the original source SVM, svm1:
cluster_src::> snapmirror delete -source-path svm_backup: -destination-path svm1:
Step 11. From the original destination SVM or the original destination cluster, release the reversed data
protection relationship:
snapmirror release -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example releases the reversed relationship between the original destination SVM,
svm_backup, and the original source SVM, svm1:
cluster_dst::> snapmirror release -source-path svm_backup: -destination-path svm1:
Use the snapmirror show command to verify that the SnapMirror relationship was created. For complete
command syntax, see the man page.
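For example, the following check (paths hypothetical) displays the state of the relationship to the destination volume volA on svm_backup:
cluster_dst::> snapmirror show -destination-path svm_backup:volA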
Step 1. From the destination SVM or the destination cluster, run the following command to resync the
source and destination volumes:
snapmirror resync -source-path SVM:volume -destination-path SVM:volume -type DP|XDP
-schedule schedule -policy policy
For complete command syntax, see the man page.
Note: Although resync does not require a baseline transfer, it can be time-consuming. You might
want to run the resync in off-peak hours.
Example
The following example resyncs the relationship between the source volume volA on svm1 and the
destination volume volA on svm_backup :
cluster_dst::> snapmirror resync -source-path svm1:volA -destination-path svm_backup:volA
Step 2. Create an SVM replication relationship between the source and destination SVMs, as described in
“Replicating SVM configurations” on page 59. You must use the -identity-preserve true option of
the snapmirror create command when you create your replication relationship.
Step 3. Stop the destination SVM:
vserver stop -vserver SVM
Example
The following example stops the destination SVM svm_backup :
cluster_dst::> vserver stop -vserver svm_backup
Step 4. From the destination SVM or the destination cluster, resync the source and destination SVMs:
snapmirror resync -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below. Although resync does not require a baseline transfer, it can be
time-consuming. You might want to run the resync in off-peak hours.
Example
The following example resyncs the relationship between the source SVM svm1 and the destination
SVM svm_backup :
cluster_dst::> snapmirror resync -source-path svm1: -destination-path svm_backup:
The snapmirror release command deletes any SnapMirror-created Snapshot copies from the source. You
can use the -relationship-info-only option to preserve the Snapshot copies.
Step 1. Run the following command from the destination SVM or the destination cluster to break the
replication relationship:
snapmirror break -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example breaks the relationship between the source SVM svm1 and the destination
SVM svm_backup :
cluster_dst::> snapmirror break -source-path svm1: -destination-path svm_backup:
Step 2. Run the following command from the destination SVM or the destination cluster to delete the
replication relationship:
snapmirror delete -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example deletes the relationship between the source SVM svm1 and the destination
SVM svm_backup :
cluster_dst::> snapmirror delete -source-path svm1: -destination-path svm_backup:
Step 3. Run the following command from the source cluster or source SVM to release the replication
relationship information from the source SVM:
snapmirror release -source-path SVM: -destination-path SVM:
Note: You must enter a colon (:) after the SVM name in the -source-path and -destination-path
options. See the example below.
Example
The following example releases information for the specified replication relationship from the
source SVM svm1 :
cluster_src::> snapmirror release -source-path svm1: -destination-path svm_backup:
The main purpose of load-sharing mirrors for SVM root volumes is no longer for load sharing; instead, their
purpose is for disaster recovery.
• If the root volume is temporarily unavailable, the load-sharing mirror automatically provides read-only
access to root volume data.
• If the root volume is permanently unavailable, you can promote one of the load-sharing volumes to
provide write access to root volume data.
If you create the LSM on the same node as the root volume, and you then lose a disk in that HA pair, you
have a single point of failure, and you do not have a second copy from which to recover your data. When you
create the LSM on a node other than the one containing the root volume, or on a different HA pair, your data
remains accessible in the event of an outage.
It is a best practice to name the root and destination volume with suffixes, such as _root and _m1.
Example
The following example creates a load-sharing mirror volume for the root volume svm1_root in
cluster_src :
cluster_src::> volume create -vserver svm1 -volume svm1_m1 -aggregate aggr_1 -size 1gb -state online -type DP
Step 2. Create a replication job schedule, as described in “Creating a replication job schedule” on page 28.
Step 3. Create a load-sharing mirror relationship between the SVM root volume and the destination volume
for the LSM:
snapmirror create -source-path SVM:volume|cluster://SVM/volume -destination-path SVM:
volume|cluster://SVM/volume -type LS -schedule schedule
For complete command syntax, see the man page.
The following example creates a load-sharing mirror relationship between the root volume svm1_
root and the load-sharing mirror volume svm1_m1 :
cluster_src::> snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_m1 -type LS -schedule hourly
Step 4. Initialize the load-sharing mirror:
snapmirror initialize-ls-set -source-path SVM:volume|cluster://SVM/volume
Example
The following example initializes the load-sharing mirror for the root volume svm1_root :
cluster_src::> snapmirror initialize-ls-set -source-path svm1:svm1_root
Example
The following example updates the load-sharing mirror relationship for the root volume svm1_root :
cluster_src::> snapmirror update-ls-set -source-path svm1:svm1_root
You must use advanced privilege level commands for this task.
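For example, enter the advanced privilege level before running the promote command:
cluster_src::> set -privilege advanced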
Example
The following example promotes the volume svm1_m2 as the new SVM root volume:
cluster_src::*> snapmirror promote -destination-path svm1:svm1_m2
Result
Enter y. ONTAP makes the LSM volume a read/write volume, and deletes the original root volume if
it is accessible.
Attention: The promoted root volume might not have all of the data that was in the original root
volume if the last update did not occur recently.
Step 3. Return to admin privilege level:
set -privilege admin
Step 4. Rename the promoted volume following the naming convention you used for the root volume:
volume rename -vserver SVM -volume volume -newname new_name
Example
The following example renames the promoted volume svm1_m2 with the name svm1_root :
cluster_src::> volume rename -vserver svm1 -volume svm1_m2 -newname svm1_root
Step 5. Protect the renamed root volume, as described in Step 3 on page 75 through Step 4 on page 76 in
“Creating and initializing load-sharing mirror relationships” on page 75.
snapmirror commands use fully qualified path names in the following format: vserver:volume. You can
abbreviate the path name by not entering the SVM name. If you do this, the snapmirror command assumes
the local SVM context of the user.
Assuming that the SVM is called “vserver1” and the volume is called “vol1”, the fully qualified path name is
vserver1:vol1.
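For example, if you are logged in to the SVM vserver1, a command such as the following (hypothetical) resolves the abbreviated path vol1 to vserver1:vol1:
vserver1::> snapmirror show -destination-path vol1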
You can use the asterisk (*) in paths as a wildcard to select matching, fully qualified path names, as in the
following example.
Example
The following command initializes SnapMirror relationships that are in an Uninitialized state:
vs1::> snapmirror initialize {-state Uninitialized} *
If you use a combination mirror-vault fan-out or cascade deployment, you should keep in mind that updates
will fail if a common Snapshot copy does not exist on the source and destination volumes.
This is never an issue for the mirror relationship in a mirror-vault fan-out or cascade deployment, since
SnapMirror always creates a Snapshot copy of the source volume before it performs the update.
It might be an issue for the vault relationship, however, since SnapMirror does not create a Snapshot copy of
the source volume when it updates a vault relationship. You need to use the snapmirror snapshot-owner
create command to ensure that there is at least one common Snapshot copy on both the source and
destination of the vault relationship.
Step 1. On the source volume, assign an owner to the labeled Snapshot copy you want to preserve:
snapmirror snapshot-owner create -vserver SVM -volume volume -snapshot snapshot -owner owner
Example
The following example assigns ApplicationA as the owner of the snap1 Snapshot copy:
clust1::> snapmirror snapshot-owner create -vserver vs1 -volume vol1
-snapshot snap1 -owner ApplicationA
Step 2. Update the mirror relationship, as described in “Updating a replication relationship manually” on
page 47.
Alternatively, you can wait for the scheduled update of the mirror relationship.
Step 3. Transfer the labeled Snapshot copy to the vault destination:
snapmirror update -source-path SVM:volume|cluster://SVM/volume, ... -destination-path SVM:
volume|cluster://SVM/volume, ... -source-snapshot snapshot
For complete command syntax, see the man page.
Result
The labeled Snapshot copy will be preserved when the vault relationship is updated.
Step 4. On the source volume, remove the owner from the labeled Snapshot copy:
snapmirror snapshot-owner delete -vserver SVM -volume volume -snapshot snapshot -owner
owner
Example
The following example removes ApplicationA as the owner of the snap1 Snapshot copy:
clust1::> snapmirror snapshot-owner delete -vserver vs1 -volume vol1
-snapshot snap1 -owner ApplicationA
SnapMirror DR relationships
For SnapMirror relationships of type “DP” and policy type “async-mirror”:
Note: This table includes SnapMirror DP release interoperability up to ONTAP 9.9, at which time DP
relationships will be automatically converted to XDP.
A source volume in ONTAP version...   Can have a destination volume in ONTAP version...
                                      9.4    9.5    9.6    9.7    9.8
9.4                                   Yes    Yes    Yes    No     No
9.5                                   No     Yes    Yes    Yes    No
9.6                                   No     No     Yes    Yes    Yes
9.7                                   No     No     No     Yes    Yes
9.8                                   No     No     No     No     Yes
Note: Locate the higher, more recent ONTAP version in the left column, and in the right column locate the
lower ONTAP version to determine interoperability. Interoperability is bidirectional.
SnapMirror limitations
You should be aware of basic SnapMirror limitations before creating a data protection relationship.
Note: A source volume can have multiple destination volumes. The destination volume can be the source
volume for any type of SnapMirror replication relationship.
• You can fan out a maximum of eight destination volumes from a single source volume.
• You cannot restore files to the destination of a SnapMirror DR relationship.
• Source or destination SnapVault volumes cannot be 32-bit.
• The source volume for a SnapVault relationship should not be a FlexClone volume.
Note: The relationship will work, but the efficiency offered by FlexClone volumes will not be preserved.
• ONTAP concepts
Describes the concepts that inform ONTAP data management software, including data protection and
transfer.
• Cluster and SVM peering
Describes how to create peer relationships between source and destination clusters and between source
and destination SVMs.
• Archive and compliance using SnapLock technology
Describes how to replicate WORM files in a SnapLock volume.
• NDMP express configuration
Describes how to use NDMP to back up data directly to tape using a third-party backup application.
• Data protection using tape backup
Describes how to back up and recover data using tape backup and recovery features.
• SAN Administration Guide
Describes how to configure and manage the NVMe, iSCSI, and FC protocols, including configuration of
LUNs, igroups, and targets.
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a service
provider authorized by Lenovo to provide warranty service, go to https://datacentersupport.lenovo.com/
serviceprovider and use the filters to search by country. For Lenovo support telephone numbers, see
https://datacentersupport.lenovo.com/supportphonelist for support details in your region.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only that
Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service
that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any other product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document is not an offer and does not provide a license under any patents
or patent applications. You can send inquiries in writing to the following:
Lenovo (United States), Inc.
8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow
disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to
you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.