The Future of NetApp

NetApp, like it or not, is in trouble.  Several sites have documented the company's recent decline and the problems plaguing it.  I think it’s time to move past the problems facing NetApp and offer up a number of potential solutions that would ensure NetApp has a bright future.  Since this has been lacking in posts on other sites, I thought I’d share how I think NetApp should change its business to adapt to the new storage marketplace.

New Hardware (controllers & shelves)

Somewhere within NetApp, some engineer has realized that their hardware is outdated and in need of a serious refresh.  Specifically, I’d like to see NetApp come out with much denser shelves which use 2.5″ drives.  Their target needs to be at least 50 disks per 4U, with either a PCIe or IB interconnect. The cheaper option is IB, but to be truly innovative they should be looking at the next high-speed interconnect: PCIe.

Next, let’s address the elephant in the room… their storage controllers.  It’s time for internal drives.  Why they didn’t do this years ago is beyond me.  How awesome would it be to have a root aggregate stored on mirrored, PCIe-attached SSDs in each controller? Their short-term solution is root-data partitioning, which takes slices out of a select number of drives to host the root aggregate.  The catch is that it’s only supported on All Flash FAS and smaller arrays, which I’m guessing don’t make sense to have isolated root aggregates due to the disk overhead.  The real reason for making the root aggregate separate and internal is to simplify architecting clean aggregate layouts on NetApp arrays.  Right now the math when buying shelves works out just like buying hot dogs and buns: the counts never line up evenly.


Next Generation WAFL

The WAFL file system is showing its age, which means it’s time for NetApp to build the next generation of WAFL.  They can take a couple of cues from their growing number of competitors.  First up is making it a fully implemented log-structured file system.  This means in-line data defragmentation with no performance overhead.  With flash plentiful and cheap, this should no longer be an issue.  If Nimble Storage can do it, why can’t NetApp? NetApp should be innovating in this space, making it easier for its sales force and engineers to tell customers how they are adapting their existing efficiencies to the new storage market.
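The log-structured idea can be sketched in a few lines of shell.  This is a toy illustration only, not how WAFL (or any shipping filesystem) actually works: writes only ever append, and a later compaction pass keeps just the newest copy of each block.

```shell
# Toy log-structured store: a flat text log where column 1 is the block name.
LOG="$(mktemp)"
echo "blockA v1" >> "$LOG"   # first write of blockA appends
echo "blockB v1" >> "$LOG"
echo "blockA v2" >> "$LOG"   # an "overwrite" of blockA is also just an append

# Compaction ("defrag on ingest" in spirit): keep only the latest
# record seen for each block, dropping stale copies.
COMPACTED="$(awk '{latest[$1]=$0} END {for (b in latest) print latest[b]}' "$LOG")"
echo "$COMPACTED"
```

The point of the sketch is that overwrites never touch old data in place; stale copies accumulate until compaction reclaims them, which is why cheap flash makes the bookkeeping affordable.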


The All Flash Future

The future of storage is flash based.  Anyone who disputes this is either focused on a niche workload or can’t see the tsunami of cheap, high-density flash that’s about to hit the market in the next year. Case in point: Samsung just announced a 15TB flash drive. Granted, it’s not the fastest thing on the planet, but it does signal that density will not be an issue for all-flash arrays in the near future.  NetApp needs a play here, and fast.  This either means that FlashRay needs to get its act together, or All Flash FAS needs to do more than be a special version of Data ONTAP.

Either way, here’s a list of must-haves for the future all-flash version of whatever NetApp comes out with.

  1. Active-Active Scale Up & Scale Out Array
  2. In-Line Dedupe & Compression
  3. Fully implemented log structured file system (defrag on ingest)
  4. Simple RAID Group Layouts (I don’t want to pick a RAID group size ever again)
  5. All Flash Tiering (super fast to not as fast)
  6. Cloud Tiering (Amazon S3, Swift, Azure etc.)

Saying that your existing Data ONTAP platforms support some of these features is not acceptable to existing and potential customers.  This is what the market is dictating.

NetApp… don’t let your competitors, both new and existing, eat your lunch.

Fix OS X Mavericks Continuously Prompting for Keychain Password

Symptom: Upon login, OS X Mavericks continuously prompts you for a Keychain password which matches neither your iCloud/iTunes credentials nor your local login credentials.


* Go to Finder.
* On the Finder menu, click “Go”, then “Go to Folder”. A box should come up.
* In the box, type “~/Library/Keychains/” and click “Go”. This takes you to the Keychains folder, where you will find three items: (1) a folder with a name mixed with letters and numbers, (2) login.keychain, and (3) metadata.keychain.
* Delete the folder with the name mixed with letters and numbers.
* Restart your computer and check whether the problem has been solved.
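For those who prefer Terminal, the steps above can be sketched in shell.  The block below is a non-destructive demo against a scratch directory, and the UUID-style folder name is a made-up example; to apply it for real you’d point KC at ~/Library/Keychains.  Moving the folder aside rather than deleting it also leaves you a backup.

```shell
# Scratch directory standing in for ~/Library/Keychains (demo only).
KC="$(mktemp -d)"
mkdir "$KC/3A1B2C3D-XXXX"                       # the UUID-named folder (example name)
touch "$KC/login.keychain" "$KC/metadata.keychain"

# Move any directories aside instead of deleting them, so they can be restored.
BACKUP="$(mktemp -d)"
for d in "$KC"/*/; do                            # the */ glob matches directories only
  mv "$d" "$BACKUP/"
done

ls "$KC"                                         # the two .keychain files remain
```

After the move, restart as in the last step above; if anything breaks, the folder can simply be moved back.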


Fix: YouTube buffering issue

Over the past two weeks I’ve noticed a continuous issue with loading YouTube videos: they buffer endlessly. Tonight I did some digging and found a quick fix!
The solution is pretty simple, and involves blocking a specific IP range associated with the Verizon FIOS servers causing the YouTube buffering.  Since the IP may differ depending on your location, I’ll go through the simple steps to identify the IP to block and the OS X command to run to block it.

1. Open a terminal window and run “traceroute” against a YouTube hostname.
2. Note the first IP address which shows up outside of your network.  It should be the first hop which doesn’t start with 192.168.x.x.
On my network the offending hop is:
l100.<your area>-vfttp-<some number> (  19.260 ms  20.116 ms  18.862 ms
Also note any entries which end in “” as these are Verizon FIOS servers.
3. Load a high-definition YouTube video.  Make sure to switch its resolution up to 1080p, and watch it buffer.
4. From the terminal window, block the offending IP by running the following command:
sudo ipfw add reject src-ip in
5. Confirm the IP is now blocked by running “sudo ipfw list”. Example output:
00100 reject ip from to any in
6. Reload the high-definition YouTube video in your browser (Cmd+R).
Note: if this doesn’t work, the block rule can be removed using the following command:
sudo ipfw delete 00100
If this doesn’t work you can also try blocking the IPs found within this post.
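Step 2 above, picking out the first hop beyond your home network, can be sketched as a small shell pipeline.  The two traceroute lines below are made-up stand-ins ( is a documentation-reserved address, and the hostname is invented); pipe in your own traceroute output instead.

```shell
# Extract every parenthesized IP from traceroute-style output, drop the
# private 192.168.x.x hops, and keep the first remaining address.
FIRST_HOP="$(printf '%s\n' \
  ' 1  router.home (  1.234 ms' \
  ' 2  l100.myarea-vfttp-99.example (  19.260 ms' |
  grep -oE '\(([0-9]{1,3}\.){3}[0-9]{1,3}\)' |   # grab "(a.b.c.d)" tokens
  tr -d '()' |                                    # strip the parentheses
  grep -v '^192\.168\.' |                         # skip hops inside the home LAN
  head -n 1)"
echo "$FIRST_HOP"   # the candidate IP to feed to the ipfw block rule
```

With real traceroute output this prints the first external hop, which is the address to test and, if it’s the culprit, to block.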