The Future of NetApp

NetApp, like it or not, is in trouble.  Several sites have documented the company's recent decline and the problems plaguing it.  I think it's time to move past the problems facing NetApp and offer up a number of potential solutions that would ensure NetApp has a bright future.  Since that's been lacking in posts on other sites, I thought I'd share how I think NetApp should change its business to adapt to the new storage marketplace.

New Hardware (controllers & shelves)

Somewhere within NetApp, some engineer has realized that their hardware is outdated and in need of a serious refresh.  Specifically, I'd like to see NetApp come out with much denser shelves that use 2.5″ drives.  Their target needs to be at least 50 disks per 4U with either a PCIe or IB interconnect.  The cheaper option would be IB, but to be truly innovative they should be looking at the next high-speed interconnect: PCIe.

Next, let's address the elephant in the room: their storage controllers.  It's time for internal drives.  Why they didn't do this years ago is beyond me.  How awesome would it be to have a root aggregate stored on mirrored, PCIe-attached SSDs in each controller?  Their short-term solution is root-data partitioning, which carves slices out of a select number of drives to host the root aggregate.  The catch is that it's only supported on All Flash FAS and smaller arrays, where I'm guessing dedicated root aggregates don't make sense due to the disk overhead.  The real reason to make the root aggregate separate and internal is to simplify architecting clean aggregate layouts on NetApp arrays.  Right now the math when buying shelves works out just like buying hot dogs and buns: the pack sizes never quite line up, and you end up with stranded drives.
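The hot-dogs-and-buns problem is easy to make concrete with a little arithmetic.  The shelf size, RAID group size, and spare count below are illustrative assumptions for the sketch, not actual NetApp configurations:

```python
# Illustrative sketch of the shelf-vs-aggregate math (all sizes here are
# assumed example values, not real NetApp configurations).

SHELF_SIZE = 24        # drives per shelf (hypothetical)
RAID_GROUP = 20        # data + parity drives per RAID group (hypothetical)
SPARES = 2             # hot spares held back per controller (hypothetical)

def layout(shelves: int) -> dict:
    """Return how many full RAID groups fit and how many drives are stranded."""
    total = shelves * SHELF_SIZE
    usable = total - SPARES
    groups, leftover = divmod(usable, RAID_GROUP)
    return {"total_drives": total, "raid_groups": groups, "stranded": leftover}

# Like hot dogs in packs of 10 and buns in packs of 8, the counts rarely
# line up: with these numbers, buying 3 shelves strands 10 drives.
print(layout(3))   # {'total_drives': 72, 'raid_groups': 3, 'stranded': 10}
```

An internal root aggregate removes one more variable from this equation, which is exactly why it would make clean layouts easier to architect.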

Next Generation WAFL

The WAFL file system is showing its age, which means it's time for NetApp to build the next generation of WAFL.  They can take a couple of cues from their growing number of competitors.  First up is making it a fully implemented log-structured file system.  This means in-line data defragmentation with no performance overhead.  With flash plentiful and cheap, this should no longer be an issue.  If Nimble Storage can do it, why can't NetApp?  NetApp should be innovating in this space and making it easier for its sales force and engineers to tell customers how they are adapting their existing efficiencies to the new storage market.
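To make the log-structured idea concrete, here is a toy sketch: writes always append to the tail of the log, superseded blocks become dead space, and a compaction pass rewrites only the live copies.  That compaction step is where "defrag on ingest" falls out naturally.  This is a simplified model of the general technique, not WAFL's or anyone's actual on-disk format:

```python
# Toy log-structured block store: overwrites never happen in place.
# Every write appends; compaction reclaims dead (superseded) blocks.

class LogStore:
    def __init__(self):
        self.log = []        # append-only list of (block_id, data)
        self.index = {}      # block_id -> position of the live copy in the log

    def write(self, block_id, data):
        self.index[block_id] = len(self.log)
        self.log.append((block_id, data))   # always an append

    def read(self, block_id):
        return self.log[self.index[block_id]][1]

    def compact(self):
        """Rewrite only live blocks; dead versions are dropped."""
        live = [(bid, self.log[pos][1]) for bid, pos in self.index.items()]
        self.log = []
        self.index = {}
        for bid, data in live:
            self.write(bid, data)

store = LogStore()
store.write("a", b"v1")
store.write("a", b"v2")   # supersedes v1; v1 is now dead space in the log
store.compact()
print(len(store.log), store.read("a"))   # 1 b'v2'
```

On spinning disk, that compaction pass competed with user I/O; with cheap, plentiful flash, the random reads it needs are essentially free, which is the argument above.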

All Flash

The future of storage is flash-based.  Anyone who disputes this is either focused on a niche workload or can't see the tsunami of cheap, high-density flash that's about to hit the market in the next year.  Case in point: Samsung just announced a 15TB flash drive.  Granted, it's not the fastest thing on the planet, but it does signal that density will not be an issue for all-flash arrays in the near future.  NetApp needs a play here, and fast.  That means either FlashRay needs to get its act together, or All Flash FAS needs to do more than be a special version of Data ONTAP.

Either way, here's a list of must-haves for the future all-flash version of whatever NetApp comes out with.

  1. Active-Active Scale Up & Scale Out Array
  2. In-Line Dedupe & Compression
  3. Fully implemented log structured file system (defrag on ingest)
  4. Simple RAID Group Layouts (I don’t want to pick a RAID group size ever again)
  5. All Flash Tiering (super fast to not as fast)
  6. Cloud Tiering (Amazon S3, Swift, Azure etc.)

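The cloud tiering item boils down to a cold-data policy: blocks untouched past some threshold get pushed to an object store, with the array keeping a local stub.  A minimal sketch of the age test, assuming a hypothetical 30-day threshold (a real policy engine would also weigh capacity pressure and per-volume rules):

```python
import time

COLD_AFTER_SECS = 30 * 24 * 3600   # assumed policy: 30 days untouched = cold

def pick_blocks_to_tier(blocks, now=None):
    """Return IDs of blocks whose last access is older than the threshold.

    `blocks` maps block_id -> last-access time (epoch seconds).  Only the
    age test is modeled here; the actual push to S3/Swift/Azure would
    happen downstream for each returned ID."""
    now = time.time() if now is None else now
    return [bid for bid, last in blocks.items()
            if now - last > COLD_AFTER_SECS]

now = 1_000_000_000
blocks = {"hot": now - 60, "cold": now - 90 * 24 * 3600}
print(pick_blocks_to_tier(blocks, now=now))   # ['cold']
```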
Saying that your existing Data ONTAP platforms already support some of these features won't satisfy existing and potential customers.  This is what the market is dictating.

NetApp… don't let your competitors, both new and existing, eat your lunch.