The Future of NetApp

NetApp, like it or not, is in trouble.  Several sites have documented the company’s recent decline and the problems plaguing it.  I think it’s time to move past those problems and offer up a number of potential solutions that would give NetApp a bright future.  Since this has been lacking in posts on other sites, I thought I’d share how I think NetApp should change its business to adapt to the new storage marketplace.

New Hardware (controllers & shelves)

Somewhere within NetApp, some engineer has realized that their hardware is outdated and in need of a serious refresh.  Specifically, I’d like to see NetApp come out with much denser shelves which use 2.5″ drives.  The target needs to be at least 50+ disks per 4U with either a PCIe or IB interconnect.  IB will be the cheaper option, but to be truly innovative they should be looking at the next high-speed interconnect: PCIe.

Next, let’s address the elephant in the room… their storage controllers.  It’s time for internal drives.  Why they didn’t do this years ago is beyond me.  How awesome would it be to have a root aggregate stored on mirrored, PCIe-attached SSDs inside each controller?  Their short-term solution is root-data partitioning, which takes slices out of a select number of drives to host the root aggregate.  The catch is that it’s only supported on All Flash FAS and smaller arrays, where I’m guessing dedicating whole disks to an isolated root aggregate doesn’t make sense due to the overhead.  The real reason for making the root aggregate separate and internal is to simplify architecting clean aggregate layouts on NetApp arrays.  Right now the math when buying shelves works out just like buying hot dogs and buns.

WAFL

The WAFL file system is showing its age, which means it’s time for NetApp to build the next generation of WAFL.  They can take a couple of cues from their growing number of competitors.  First up is making it a fully implemented log-structured file system.  This means in-line data defragmentation with no performance overhead.  With flash plentiful and cheap, this should no longer be an issue.  If Nimble Storage can do it, why can’t NetApp?  NetApp should be innovating in this space and making it easier for its sales force and engineers to tell customers how they are adapting their existing efficiencies to the new storage market.

Flash

The future of storage is flash based.  Anyone who disputes this is either focused on a niche workload or can’t see the tsunami of cheap, high-density flash that’s about to hit the market in the next year.  Case in point: Samsung just announced a 15TB flash drive.  Granted, it’s not the fastest thing on the planet, but it does signal that density will not be an issue for all-flash arrays in the near future.  NetApp needs a play here, and fast.  That either means FlashRay needs to get its act together, or All Flash FAS needs to do more than be a special version of Data ONTAP.

Either way, here’s a list of must-haves for the future all-flash version of whatever NetApp comes out with.

  1. Active-Active Scale Up & Scale Out Array
  2. In-Line Dedupe & Compression
  3. Fully implemented log structured file system (defrag on ingest)
  4. Simple RAID Group Layouts (I don’t want to pick a RAID group size ever again)
  5. All Flash Tiering (super fast to not as fast)
  6. Cloud Tiering (Amazon S3, Swift, Azure etc.)

Saying that your existing Data ONTAP platforms support some of these features is not acceptable to existing and potential customers.  This is what the market is dictating.

NetApp… don’t let your competitors, both new and existing, eat your lunch.

NetApp SnapMirror 7-Mode to Cluster-Mode

Here are the steps required to manually initiate a SnapMirror relationship between a NetApp 7-Mode system and a Cluster-Mode (clustered Data ONTAP) system.

7-Mode to Cluster Mode Version Requirements
Source Filer: 8.x 7-Mode and Higher
Dest. Filer: 8.2.x and Higher

7-Mode Source Filer Steps
– Allow the destination filer access, either by adding it to the /etc/snapmirror.allow file or via options snapmirror.access host=
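For example, on a hypothetical source filer named 7mode01, granting access to a destination whose intercluster address resolves as cdot-ic01 (both names are placeholders) might look like this:

7mode01> options snapmirror.access host=cdot-ic01

or, alternatively, by appending the destination to the allow file:

7mode01> wrfile -a /etc/snapmirror.allow cdot-ic01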

Cluster-Mode Destination Filer Steps
– Create a new transition LIF assigned to a node, with both the role and the firewall policy set to intercluster.
– Create a vserver peer transition relationship between the source and destination filer and associate it with the transition LIF.
– Create a new destination volume that is the same size or bigger than the source volume. Make sure to specify the type of the volume as DP (data protection).
– Create the new SnapMirror relationship with the type set to TDP.
– Initialize the mirror using snapmirror initialize followed by the destination path (see the sketch below).
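As a rough sketch of those destination-side steps, assuming a hypothetical SVM svm1, node cluster1-01, port e0c, aggregate aggr1, source filer 7mode01 with volume vol_src, and a destination volume vol_dst (all names and addresses are placeholders, so check the syntax against your ONTAP release):

cluster1::> network interface create -vserver cluster1-01 -lif transition_lif1 -role intercluster -firewall-policy intercluster -home-node cluster1-01 -home-port e0c -address 192.168.0.50 -netmask 255.255.255.0
cluster1::> vserver peer transition create -local-vserver svm1 -src-filer-name 7mode01
cluster1::> volume create -vserver svm1 -volume vol_dst -aggregate aggr1 -size 100g -type DP
cluster1::> snapmirror create -source-path 7mode01:vol_src -destination-path svm1:vol_dst -type TDP
cluster1::> snapmirror initialize -destination-path svm1:vol_dst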

Optionally, you can create a SnapMirror policy that is applied to any new transition SnapMirror relationships to ensure consistency when cutting over volumes in bulk.
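A minimal sketch of that, again with placeholder names (svm1, vol_dst, transition_policy):

cluster1::> snapmirror policy create -vserver svm1 -policy transition_policy
cluster1::> snapmirror modify -destination-path svm1:vol_dst -policy transition_policy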

NetApp – Calculate Maximum Number of inodes per Volume

NetApp allows the number of inodes on a volume to be dynamically increased after the volume is provisioned on an array.  This raises the question: what is the maximum inode count supported by a volume, and how is that maximum calculated?

inodes = files

“The maximum number of inodes is limited to one inode per one block in the filesystem. (which is 1 inode per every 4KB).  It is generally recommended to NOT go that low.”

Volume size conversion: 1.2 TB = 1,228.8 GB = 1,258,291 MB = 1,288,490,189 KB

1,288,490,189 KB / 4 KB blocks = 322,122,547 supported files (inodes) per 1.2 TB volume.
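On a 7-Mode system you can check the current inode usage and ceiling, and raise the limit if needed.  A quick sketch using a hypothetical filer01 and a volume named vol1:

filer01> df -i vol1
(shows inodes used and free on the volume)
filer01> maxfiles vol1
(displays the current maximum number of files/inodes for the volume)
filer01> maxfiles vol1 322122547
(raises the maximum toward the one-inode-per-4KB-block ceiling calculated above; raising it consumes metadata space and generally can’t be walked back, so grow it in small steps)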

Credit where credit is due… https://communities.netapp.com/thread/2176

NetApp vFiler – Change IPSpace

NetApp virtual filers (vFilers) are handy for implementing multiple isolated environments, each with its own domain authentication and network isolation.  In order to completely isolate a vFiler from the physical filer’s interfaces, separate dedicated interfaces must be assigned.  In the NetApp world, interfaces are grouped based on IPSpaces, and each IPSpace can have only one default gateway.  By creating multiple IPSpaces you can isolate a vFiler’s storage traffic and also allow for multiple default gateways.  The use of multiple default gateways removes the need for adding static routes to the physical filer and also prevents asymmetric routing issues.

Goal: Change the IPSpace of an existing vFiler without losing any of the existing configuration settings.

Note: You will need to recreate any local accounts previously created using the useradmin command.

Prep:

Make a copy of the following files prior to attempting to change your vFiler’s IPSpace (a couple of ways to grab them are sketched after the file lists below).

  • /etc/rc
  • /etc/passwd
  • /etc/quotas
  • /etc/registry
  • /etc/hosts
  • /etc/exports (only if the filer serves NFS exports)
  • /etc/cifsconfig_share.cfg (only if the filer serves CIFS shares)
  • /etc/cifs_homedir.cfg (only if you use the home directory mapping capability)

Active Directory Filer Association

  • /etc/cifssec.cfg
  • /etc/krb5.keytab
  • /etc/krb5auto.conf
  • /etc/lclgroups.cfg
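One low-tech way to grab copies of these, assuming CIFS is licensed and the default hidden admin share is exposed (the share name here is an assumption, adjust for your setup), is to copy them from an admin host; otherwise rdfile works from the console:

C:\> copy \\filer01\ETC$\rc C:\backup\filer01\rc
(repeat for each file in the lists above)

filer01> rdfile /etc/rc
(capture the console output to a safe location for each entry above)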

Create the new VIF & IPSpace. Note that VIFs which use LACP will initially come up but show as broken until an IP address is bound to them.
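A rough sketch of that step, assuming a hypothetical IPSpace ips_vfiler1, a two-port LACP VIF named vif_vfiler1 on e0a/e0b, and an address of 192.168.10.10 (all placeholders; on 8.x 7-Mode the vif command is replaced by ifgrp, so adjust accordingly):

filer01> ipspace create ips_vfiler1
filer01> vif create lacp vif_vfiler1 -b ip e0a e0b
filer01> ipspace assign ips_vfiler1 vif_vfiler1
filer01> ifconfig vif_vfiler1 192.168.10.10 netmask 255.255.255.0 up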

brain dump in progress…

NetApp – Rename Filer in SnapMirror Relationship

As organizations change, so do the standards which guide them.  As part of these changes, pre-existing filer names may need to be updated.  Before attempting to update a filer’s name you must first perform a number of confirmation steps.  This will ensure that you do not lose connectivity to your filer and that existing SnapMirror relationships can be re-established after the change.

Prep

  • Create a new DNS entry for the new filer name
  • Prep the /etc/hosts file on both the source and destination filers with the new name and IP (leave the entry commented out until cutover).
  • Add the VLAN to the filer’s existing VIF or a new VIF, depending on how you’d like routing configured.
  • Prep the /etc/rc file with the new filer name and default gateway details.
  • Prep the NTP server details (only if a new server will be used on the new network…)
  • Confirm all management stations and NetApp monitoring utilities have access to the filer’s new IP address, i.e. add any necessary firewall rules.
  • Confirm network connectivity between the old and new network. This is critical if the filer’s name and IP address will only be changed at one site.

Execution (Note: during these steps all existing SnapMirror relationships will disappear)

  1. Take note of the existing SnapMirror relationships and their associated volume names
  2. Break the relationship (From the destination)
  3. Update the filer’s IP address, hostname, and default gateway
  4. Update the /etc/hosts file on the source and destination filer. It should reflect the new hostname and associated IP.
  5. Use ping to confirm both source and destination filer can see each other.
  6. BEFORE re-establishing the mirror relationship double check that the hosts file has been correctly updated on both the source and destination filer.
  7. Update the snapmirror.access options to allow the new filer names to establish relationships with each other.
  8. Re-Establish the mirror from the destination by using snapmirror resync.

Example:

nap003> snapmirror status
Snapmirror is on.
Source               Destination          State          Lag        Status
nap003:source_new    nap004:dest_new      Source         00:05:25   Idle

nap004> snapmirror status
Snapmirror is on.
Source               Destination          State          Lag        Status
nap003:source_new    nap004:dest_new      Snapmirrored   00:06:25   Idle

nap004> snapmirror break dest_new

Update both filers’ associated configuration files
nap003>rdfile /etc/rc
nap003>rdfile /etc/hosts
nap004>rdfile /etc/rc
nap004>rdfile /etc/hosts

nap004 /etc/hosts updates to make it nap006
192.168.0.201 nap006 nap006-e0a
192.168.0.200 nap005

nap004 /etc/rc updates to make it nap006
hostname nap006

nap004> hostname nap006
nap006>

Perform the same steps on the second filer. Once completed, you can re-establish the relationship from the destination filer. Do not be alarmed that snapmirror status initially shows no relationships exist.

nap005> options snapmirror.access host=nap006
nap006> options snapmirror.access host=nap005

nap006> snapmirror resync -S nap005:source_new nap006:dest_new
The resync base snapshot will be: nap004(4055372815)_dest_new.4
These older snapshots have already been deleted from the source
and will be deleted from the destination:
nap004(4055372815)_dest_new.3
Are you sure you want to resync the volume? yes
Volume dest_new will be briefly unavailable before coming back online.
Wed Feb 13 22:43:13 EST [nap006:snapmirror.dst.resync.info:notice]: SnapMirror resync of dest_new to nap005:source_new is using nap004(4055372815)_dest_new.4 as the base snapshot.
Wed Feb 13 22:43:16 EST [nap006:wafl.snaprestore.revert:notice]: Reverting volume dest_new to a previous snapshot.
Revert to resync base snapshot was successful.
Wed Feb 13 22:43:17 EST [nap006:replication.dst.resync.success:notice]: SnapMirror resync of dest_new to nap005:source_new was successful.
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.

nap006> snapmirror status
Snapmirror is on.
Source               Destination          State          Lag        Status
nap005:source_new    nap006:dest_new      Snapmirrored   00:08:06   Idle