NetApp SnapMirror 7-Mode to Cluster-Mode

Here are the steps required to manually initiate a SnapMirror relationship between a NetApp 7-Mode filer and a Cluster-Mode (clustered Data ONTAP) system.

7-Mode to Cluster Mode Version Requirements
Source Filer: Data ONTAP 8.x 7-Mode and higher
Dest. Filer: Clustered Data ONTAP 8.2.x and higher

7-Mode Source Filer Steps
– Allow the destination filer access, either in the /etc/snapmirror.allow file or via options snapmirror.access host=
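On the 7-Mode source, either approach can be sketched as follows; cdot-ic1 and cdot-ic2 are hypothetical intercluster LIF addresses of the destination cluster, not values from this environment:

```
# Option 1: grant access via the snapmirror.access option
7src> options snapmirror.access host=cdot-ic1,cdot-ic2

# Option 2: append the destination entries to /etc/snapmirror.allow instead
7src> wrfile -a /etc/snapmirror.allow cdot-ic1
```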

Cluster-Mode Destination Filer Steps
– Create a new transition LIF assigned to a node, with both the Role and the Firewall Policy set to intercluster.
– Create a vserver peer transition relationship between the source and destination filer and associate it with the transition LIF.
– Create a new destination volume that is the same size as or larger than the source volume. Make sure to specify the volume type as DP (data protection).
– Create the new snapmirror relationship with the type set to TDP.
– Initialize the mirror using snapmirror initialize followed by the destination path.
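The steps above can be sketched as a clustered ONTAP command sequence. The cluster, node, SVM, aggregate, volume, and address names below are hypothetical placeholders:

```
# Transition LIF with intercluster role and firewall policy
cl01::> network interface create -vserver cl01 -lif transition_lif1 -role intercluster \
        -firewall-policy intercluster -home-node cl01-01 -home-port e0c \
        -address 192.0.2.20 -netmask 255.255.255.0

# Vserver peer transition relationship tied to the transition LIF
cl01::> vserver peer transition create -local-vserver svm1 -src-filer-name nap7src \
        -local-lifs transition_lif1

# Destination volume of type DP, same size or larger than the source
cl01::> volume create -vserver svm1 -volume dest_vol -aggregate aggr1 -size 100g -type DP

# TDP relationship, then initialize by destination path
cl01::> snapmirror create -source-path nap7src:src_vol -destination-path svm1:dest_vol -type TDP
cl01::> snapmirror initialize -destination-path svm1:dest_vol
```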

Optionally, you can create a SnapMirror policy that is applied to any new transition SnapMirror relationships to ensure consistency when cutting over volumes in bulk.
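A minimal sketch of such a policy, applied when the relationship is created (all names here are hypothetical placeholders):

```
cl01::> snapmirror policy create -vserver svm1 -policy transition_policy
cl01::> snapmirror create -source-path nap7src:src_vol -destination-path svm1:dest_vol \
        -type TDP -policy transition_policy
```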


NetApp – Rename Filer in SnapMirror Relationship

As organizations change, so do the standards which guide them.  As part of these changes, pre-existing filer names may need to be updated.  Before attempting to update a filer's name you must first perform a number of confirmation steps.  This will ensure that you do not lose connectivity to your filer and that existing SnapMirror relationships can be re-established after the change.


  • Create a new DNS entry for the new filer name.
  • Prep the /etc/hosts file with the new name and IP (commented out) on both the source and destination filers.
  • Add the VLAN to the filer's VIF, or to a new VIF, depending on how you'd like routing configured.
  • Prep the /etc/rc file with the new filer name and default gateway details.
  • Prep the NTP server details (only if a new server will be used on the new network).
  • Confirm all management stations and NetApp monitoring utilities have access to the filer's new IP address, i.e. add any necessary firewall rules.
  • Confirm network connectivity between the old and new network. This is critical if the filer's name and IP address will only be changed on one site.
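For example, the prepped /etc/hosts on a filer being renamed from nap004 to nap006 (the pair used in the walkthrough below) might carry the new entry commented out until cutover; the addresses are hypothetical placeholders:

```
# /etc/hosts before cutover (addresses are placeholders)
192.0.2.4    nap004 nap004-e0a    # current name, still active
#192.0.2.14  nap006 nap006-e0a   # new name, uncomment at cutover
```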

Execution (Note: during these steps all existing snapmirror relationships will disappear)

  1. Take note of the existing snapmirror relationships & their associated volume names
  2. Break the relationship (From the destination)
  3. Update the filer's IP address, hostname, and default gateway
  4. Update the /etc/hosts file on the source and destination filer. It should reflect the new hostname and associated IP.
  5. Use ping to confirm both source and destination filer can see each other.
  6. BEFORE re-establishing the mirror relationship double check that the hosts file has been correctly updated on both the source and destination filer.
  7. Update your snapmirror options to allow the new filer names to establish relationships with each other.
  8. Re-establish the mirror from the destination by using snapmirror resync.


nap003> snapmirror status
Snapmirror is on.
Source             Destination        State         Lag        Status
nap003:source_new  nap004:dest_new    Source        00:05:25   Idle

nap004> snapmirror status
Snapmirror is on.
Source             Destination        State         Lag        Status
nap003:source_new  nap004:dest_new    Snapmirrored  00:06:25   Idle

nap004> snapmirror break dest_new

Update both filers' associated configuration files
nap003>rdfile /etc/rc
nap003>rdfile /etc/hosts
nap004>rdfile /etc/rc
nap004>rdfile /etc/hosts

nap004 /etc/hosts updates to make it nap006: rename the nap004 entries to nap006 and nap006-e0a, and update the partner entry to nap005.

nap004 /etc/rc updates to make it nap006
hostname nap006

nap004> hostname nap006

Perform the same steps on the second filer. Once completed you can now re-establish the relationship from the destination filer. Do not be alarmed that the snapmirror status shows no relationships exist.

nap005> options snapmirror.access host=nap006
nap006> options snapmirror.access host=nap005

nap006> snapmirror resync -S nap005:source_new nap006:dest_new
The resync base snapshot will be: nap004(4055372815)_dest_new.4
These older snapshots have already been deleted from the source
and will be deleted from the destination:
Are you sure you want to resync the volume? yes
Volume dest_new will be briefly unavailable before coming back online.
Wed Feb 13 22:43:13 EST []: SnapMirror resync of dest_new to nap005:source_new is using nap004(4055372815)_dest_new.4 as the base snapshot.
Wed Feb 13 22:43:16 EST [nap006:wafl.snaprestore.revert:notice]: Reverting volume dest_new to a previous snapshot.
Revert to resync base snapshot was successful.
Wed Feb 13 22:43:17 EST [nap006:replication.dst.resync.success:notice]: SnapMirror resync of dest_new to nap005:source_new was successful.
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.

nap006> snapmirror status
Snapmirror is on.
Source             Destination        State         Lag        Status
nap005:source_new  nap006:dest_new    Snapmirrored  00:08:06   Idle

Nasuni – Taking Cloud to the Next Level

I’ve been a cloud skeptic since day one. In fact, I’ve gone out of my way to introduce doubt into planning sessions where the hype had gone too far. Nasuni is the company which will turn me to the dark side; I’m jumping on the bandwagon. Nasuni bridges the gap between what you would build in your own data center and current public offerings. The key here is that their focus is only on file-based storage. Block storage is out due to its inherent lack of tolerance for latency in any form. Below I’ve outlined my cliff notes of the Nasuni cloud storage offering.

Huge Problem = Disk-to-disk or cloud-to-data-center transfers take a really long time to complete. This becomes an exponentially bigger problem when dealing with multi-terabyte, much less multi-petabyte, data sets. RTO objectives hinge on the time it takes to transfer data. With object based storage, transfer time is removed.

Current Cloud
– Servers & Data Hosted within data center external to company
– Servers & Data Hosted within your data center (no optimization, isolated)
– Object based storage (Put, Get, Update – HTTPS)
– Data Assurance handled within cloud
– Eventual consistency model (object updates propagate, “eventually”)

Nasuni (Coined as “NetApp for the Cloud”)
– Object Based Storage (Cloud)
– Object Commands (Put, Get, Query, Update)
– NFS/CIFS encapsulated in HTTPS packet
– Objects Copied which ensures redundancy
– Any server serves any object through the use of a distributed hash table
– Infinite scalability… just add servers
– Storage as a service

What’s Missing: Standard protocols, access control, latency (delivery of multi-TB environments)

Ideal Scenario: Mirror Legacy Data Center
– Immediate Consistency
– Secure Multi-Site Access
– Complete SLA

Protection Requirements
– Version Control
– Offsite copies
– Application Dependencies (understand data structure)
– Compliance issues (HIPAA etc.)

Storage Controller = deliver consistent performance (no matter what!)
Nasuni Controller = Cloud Storage Controller

Nasuni (Storage As A Service VM or 1U unit)
– POSIX File Systems (NFS, CIFS etc.) available in front of Object Storage
– Customer Encrypts Data at Customer Site
– SnapShots Created & Replicated based on Object Model
– Unlimited Storage (Add / Remove Array), Unlimited Bandwidth, Accessible from Anywhere…
– SLA Associated with Cloud Storage (They guarantee the data)
– Able to move data without the customer knowing
– Application servers still reside within the data center (extending the cloud)
– Backend storage = Google & Amazon (Object Cloud Services)
– Leverage the biggest data centers in the world that also have the most bandwidth
– Control at volume / export layer (allow one site read only vs. read/write)
– Buy 1 Terabyte per Year ($8k–$12k) = 3 heads

Edge Nasuni NAS Appliance (Cloud Edge Device)
– Caching (Pre-Fetch Metadata)
– Secure (OpenPGP Encryption & Active Directory Authentication)
– WAN Acceleration (Dedupe, Compression… all performed prior to transfer to cloud)
– Speed over wire critical, ensures quick data recovery
– Charge for usable storage ONLY
– Convert Protocols (CIFS/NFS -> HTTPS)
– 15 minute recovery (Download new VM, tell it which filer to recover, specify PGP key)
– Unlimited snapshots = No backup limit
– Retention option; by default they keep everything!


  • Works like a NAS
  • Never fills up
  • Needs no offsite protection
  • Needs no backup
  • Synchronized globally

TPC 4.1 – Component Differences & Limitations

TPC 4.1 comes in two key flavors: Basic Edition and Standard Edition.  TPC Basic Edition only includes disk and fabric management capabilities; it is also important to note that it does not include performance analysis capabilities.  TPC Standard Edition expands upon Basic Edition with multiple components designed to allow complete management of a storage environment.

TPC 4.1 Basic Edition

Disk & Fabric Management included

TPC 4.1 Standard Edition*

Disk Component: Discover and manage storage arrays and tape libraries.

Fabric Component: Discover and manage multiple fabrics from the standard TPC interface.

Data Component: Discover and manage hosts attached to storage arrays discovered by TPC.

Replication Component: Connect and monitor multiple TPC sites, enhancing disaster recovery failover capabilities.

*Licensing controls whether a component is visible within the TPC GUI interface.