IBM Removes Older N series Systems for Newer Technology

IBM released information today that the N6200 series will undergo its normal evolution, withdrawing some of the older N6200 series systems in favor of newer ones.  The N6220 and N6250 were released a few weeks ago and should fit nicely into the slots being vacated by the N6210 and N6240.  I am still researching the N6270 replacement and will update this article when I find out more.

Here is a snippet of the release info:

Effective August 16, 2013, IBM® will withdraw from marketing the following products:

  • IBM N series N6210 (Machine type 2858 Model C10 and C20)
  • IBM N series N6210 Function Authorization (Machine type 2870 Model 58D and 58E)
  • IBM N series N6240 (Machine type 2858 Model C21, E11, and E21)
  • IBM N series N6240 Function Authorization (Machine type 2870 Model 58A, 58B, and 58C)
  • IBM N series N6270 (Machine type 2858 Model C22, E12, and E22)
  • IBM N series N6270 Function Authorization (Machine type 2870 Model 58F, 58G, and 58H)

On or after the effective date of withdrawal, you can no longer order these products directly from IBM.

For new orders, the customer requested arrival date (CRAD) can be no later than September 13, 2013. You can obtain these products on an as-available basis through IBM Business Partners.

For the full presser, click here.

IBM Releases New N series Systems

A few weeks ago, IBM released a couple of refreshed controllers, the N6220 and N6250.  These two systems refresh the N6210 and N6240, respectively.  Not only do the new systems have larger total capacity (and more spindles), they also come with a great deal more memory.  Finally, the maximum aggregate and volume sizes were increased.

Model   Capacity (TB)   Drives   Memory (GB)   Max Aggr (TB)   Max Vol (TB)
N6210   720             240      8             50              50
N6220   1900            480      24            90              60
N6240   1800            600      16            50              50
N6250   2800            720      40            105             70

While this is a typical refresh of the mid-range systems, one thing is clear: these systems are more powerful and give our customers higher capacity to keep up with growing storage demand.

For more information on the new systems, click the link to see the IBM N series page.

IBM N series Disk Encryption

After searching for information on disk encryption for N series, I came to the conclusion there wasn’t much out there.  I had worked with the product team to push through the testing and approval for the drives, but there wasn’t much around how to implement them.  Here are some things that will help if you want to encrypt your N series storage.

IBM now has a hardware solution for full disk encryption (FDE). This encryption protects against unauthorized access to data after the drive has written it, or what’s considered data at rest. There are other methods of encrypting information in the data stream (like the Brocade Encryption Switch), but this article will focus only on data at rest.

In the storage array, each drive has a unique data encryption key that is used to encrypt and decrypt the data that lives on that drive. On top of that, each drive key is wrapped with another layer called an authentication key, which is generated by Data ONTAP.

[Figure: FDE workflow]

When your N series system boots up, it requests the authentication key from the key manager; if successful, the key is passed to each drive, where it unwraps the drive’s data encryption key and unlocks access to the disk.
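
To make the two-layer idea concrete, here is a minimal Python sketch of the wrap/unwrap cycle. This is purely my own illustration of the concept, using the third-party cryptography package’s AES key wrap; it is not ONTAP’s actual implementation, and the key sizes are assumptions.

```python
# Conceptual sketch of the FDE key hierarchy -- illustration only.
# Requires the third-party package: pip install cryptography
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

# Each drive holds a unique data encryption key (DEK) that encrypts its data.
dek = os.urandom(32)

# Data ONTAP generates an authentication key (AK), held by the key manager;
# the DEK is stored on the drive only in wrapped (encrypted) form.
ak = os.urandom(32)
wrapped_dek = aes_key_wrap(ak, dek)

# At boot: the controller fetches the AK from the key manager, each drive
# unwraps its DEK, and access to the data on that disk is unlocked.
assert aes_key_unwrap(ak, wrapped_dek) == dek
print("drive unlocked")
```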

If you are using replication or vaulting, both the source and the destination have to be encrypted.  This is very important, as the data that is passed will need to be decrypted in flight and re-encrypted with new keys once it is written. As for deduplication, compression, and snapshots, there is no impact, and the performance cost of encryption is negligible.

There are a handful of key managers supported with this solution. Netapp has partnered with SafeNet and is pushing their technology.  I am not a fan, as most people don’t want to manage yet another box in their datacenter. The other reasonable solution is to use IBM Tivoli Key Lifecycle Manager (TKLM). This software can run on AIX, RHEL, SUSE Linux, Solaris, and Windows Server 2003 and 2008. More information on TKLM can be found here on the IBM site.

Keep the following in mind when encrypting data:

  • Since encrypting data converts it into a random form, it becomes less compressible than non-encrypted data. It is therefore recommended that you do not enable hardware compression on encrypted data, as doing so may actually make the data grow in size (see the sketch after this list).
  • The backup throughput for encrypted data will be lower when compared to non-encrypted data. Enabling Client Compression may provide a higher throughput for encrypted data which is not already compressed. Note that alternatively, the Auxiliary Copy Encryption feature, which encrypts the data during auxiliary copy operations, allows backups to run at full speed.
  • Exchange data that has been archived with pass-phrase encryption cannot be recovered from Outlook or OWA, but can be recovered by performing a Browse and Recovery operation from the CommCell Console.
  • If an archive operation is performed without encryption for File Archiver Agents, and then encryption is enabled, in order to recover the data, enter the current pass-phrase in the CommCell Console (for browse recoveries), or export the pass-phrase to the client computer (for stub recoveries).
  • While configuring the Windows File System backup sets, if you are using Data Classification as the scan method, you may face the following data encryption issue: When the data is restored, a non-encrypted file, in an encrypted folder, becomes encrypted.  This issue does not occur if you use the Change Journal or Classic Scan as the scan method during the backup.
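
To see the first bullet in action, here is a quick demonstration using only the Python standard library: random bytes (a stand-in for encrypted data) do not compress, and can even grow slightly, while ordinary text shrinks dramatically.

```python
import os
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 100
random_like = os.urandom(len(text))  # stands in for encrypted data

print(len(text), "->", len(zlib.compress(text)))                # shrinks dramatically
print(len(random_like), "->", len(zlib.compress(random_like)))  # same size or slightly larger
```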

For more information on the installation of encryption, take a look at the IBM System Storage N series Data ONTAP 8.1 Storage Management Guide for 7-Mode.

 

IBM Edge2012 Announced Today

Do you expect more out of your storage? IBM thinks you should and is putting its money where its mouth is. In the past the event has gone under different names, like STG University and Storage Symposium, but now IBM has revamped its premier storage conference. The big announcement came today with much fanfare that included a new website, some videos, and a bunch of hype on Twitter. A three-part conference for executives, gear heads, and business partners, there is something for everyone. But what will be different from years past? I think IBM looked at how other vendors use conferences to pump up their customer base (VMworld, EMCwhatever) and decided to put some hype into the conference.

Think of this as a great place to go and network, learn, and have a good time. The conference will be in Orlando, and there will be tons of time to sit in classrooms and learn about the latest technologies, but there will also be sessions where IBM pulls in our top execs and analysts to tell you where IBM is going in the storage world.

The Executive Edge will feature speakers ranging from Jeff Jonas and Aviad Offer to IT finance expert Calvin Braunstein. This track will take executives through new announcements, deep dives on technical platforms, one-on-one sessions with IBM execs, and some great entertainment. This is a new feature of the conference, as in the past it was geared more toward the technical teams.

Of course, the Executive Edge will be limited, so talk to your local storage sales person to get a chance to be a part of this special event. There will be time to bring in your team and have special sessions and round tables with the IBM engineers, who can help you find your way down this path of crazy storage growth. And there is a golf course on site which I have heard is very nice. Bring your clubs or rent them; I am sure there will be plenty of us out there, so find a partner and have a good time.

More importantly, IBM is making the effort to step up the event and put it on par with other IBM conferences like Pulse. The technical portion will have over 250 sessions on storage-related topics. You will also get roadmap information from the product teams, as well as a chance to become a certified technician. One area that has been expanding is our hands-on labs, and this year we will have the biggest one yet. You will be able to come into the labs, actually see our storage systems, and have a chance to ‘test drive’ them.

Early bird registration is open now, and you can sign up today. The conference will be in sunny Orlando, Florida, at the Waldorf Astoria and Hilton Orlando at Bonnet Creek. The event starts on June 4th and runs to the 8th. You can follow the conference on Twitter at @IBMEdge and use the hashtag #ibmedge. For the conference website, go here.

I look forward to seeing you in June.

Misalignment can be Twice as Costly

My father is a retired teacher but loves to work with his hands.  I can remember, very early in my upbringing, him teaching me that it is good to measure twice and cut once.  Whether it was building a deck or just a birdhouse, the point was that it took more time to cut something wrong and then have to re-cut the board shorter, or even waste the old board and cut a whole new one.

When I was preparing this article, I remembered having to learn that lesson the hard way, and how much effort really goes into that second cut.  The problem in the storage industry is misaligned partitions resulting from the move from 512-byte sectors to new 4096-byte sectors.  This has to be one of the bigger performance issues with virtualized systems and new storage.

Disk drives in the past limited sectors to 512 bytes.  This was fine when you had a 315 MB drive, because the number of 512-byte blocks was not nearly as large as in a 3 TB drive in today’s systems.  Newer versions of Windows and Linux will transfer 4096-byte data blocks that match the native hard disk drive sector size.  But during migrations, even new systems can have an issue.

There is also something called 512-byte sector emulation, where a 4K sector on the hard disk is remapped to eight 512-byte sectors.  Each read and write is then done in groups of eight 512-byte sectors.
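
As a rough sketch of how that emulation maps addresses (my own illustration, in Python), every group of eight logical 512-byte sectors lands inside one physical 4K sector:

```python
# 512e emulation sketch: map a logical 512-byte LBA to the physical 4K sector
# that actually stores it, plus the byte offset inside that sector.
def emulate_512e(lba):
    byte_offset = lba * 512
    return byte_offset // 4096, byte_offset % 4096

# Logical sectors 0-7 all land in physical sector 0:
print([emulate_512e(lba) for lba in range(8)])
# Any write smaller than 4K forces the drive to read, modify,
# and rewrite the entire physical sector.
```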

When an older OS creates or migrates a partition, it may or may not align the first block of the eight-block group with the beginning of a 4K sector.  This causes a one-block misalignment.  As reads and writes are laid down on the disk, the offset of the logical sectors from the physical sectors means the eight 512-byte blocks now occupy two 4K sectors.

This forces the disk to perform an additional read and/or write across two physical 4K sectors.  It has been documented that sector misalignment can cause a reduction in write performance of at least 30% for a 7200 RPM hard drive.
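
To make the arithmetic concrete, here is a small Python sketch (again, just an illustration) that counts how many physical 4K sectors a single 4K logical write touches, given the partition’s starting LBA.  The classic Windows XP-era start at sector 63 straddles two physical sectors, while the modern 1 MiB offset at sector 2048 is cleanly aligned:

```python
PHYS = 4096     # physical sector size in bytes
LOGICAL = 512   # logical sector size in bytes

def physical_sectors_touched(start_lba, io_bytes=4096):
    start = start_lba * LOGICAL          # byte offset where the write begins
    end = start + io_bytes - 1           # last byte of the write
    return end // PHYS - start // PHYS + 1

print(physical_sectors_touched(63))    # 2 -> misaligned, every I/O hits two sectors
print(physical_sectors_touched(2048))  # 1 -> aligned on a 1 MiB boundary
```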

This issue is only magnified when other file systems are layered on top of the misalignment.  When using a hypervisor like VMware or Hyper-V, the virtual disk image can itself be misaligned and cause even further performance degradation.

There are hundreds of articles and blogs written on how to check your disk alignment.  A simple Google search for “disk sector alignment” will show you this has been a very popular topic.  Different applications will have different ways of checking and possibly realigning the sectors.
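
For a quick look on a Linux host, one simple approach (a sketch, assuming the usual sysfs layout where partition start sectors are exposed in 512-byte units) is to check whether each partition starts on a 4K boundary:

```python
# Flag partitions whose starting offset is not a multiple of 4 KiB (Linux only).
import glob

for path in glob.glob("/sys/block/*/*/start"):
    with open(path) as f:
        start = int(f.read())            # start of partition, in 512-byte sectors
    status = "aligned" if start % 8 == 0 else "MISALIGNED"
    print(f"{path}: starts at sector {start} -> {status}")
```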

One application that can help you identify and fix these issues is the Paragon Alignment Tool.  This tool is easy to use and will automatically determine whether a drive’s partitions are misaligned.  If there is misalignment, the utility properly realigns the existing partitions, including boot partitions, to 4K sector boundaries.

I came across this tool when looking for something to help N series customers who have misalignment issues in virtual systems.  One of the biggest advantages I saw is that this tool can align partitions while the OS is running and does not require snapshots to be removed.  It can also align multiple VMDKs within a single virtual machine.

For more information on this tool and alignment check out the Paragon Software Group website.

In the end, your alignment will affect how much disk space you have, how much you can dedupe, and the overall performance of your storage system.  It pays to check this before you start having issues, and if you are already seeing problems, I hope this can help.

When to Gateway and when to Virtualize

For the last six years, IBM has been selling the N series gateway, and it has been a great tool for adding file-based protocols to traditional block storage.  A gateway takes LUNs from SAN storage and overlays its own operating system.  One of the ‘gotchas’ with the gateway is that the storage has to be net new, meaning it cannot take an existing LUN that has data and present it to another device.


Traditionally the gateway was put in front of older storage to refit the old technology with new features.  In the case of N series, a gateway can add features like snapshots, deduplication, and replication. In the past few years, we have added the option to use both external and internal disk with a gateway system.  The only caveat to this solution is that you have to order the gateway license when the system is initially ordered.  A filer cannot be changed into a gateway system.
Another solution we see in the field is when a customer is looking to purchase a new system where most of the requirement is SAN-based and only a small portion is NAS.  Putting a gateway in front of an XIV became a very popular solution for many years and still is today.  IBM also released the SONAS platform, which can be used as a NAS gateway in front of the V7000, SVC, and XIV.


I have seen some architects who wanted to use a gateway in an all-NAS solution with new disks.  This only complicates the solution by adding switches and multiple operating systems.

If we look at virtualization of storage, the gold standard has been the SAN Volume Controller (SVC).  This system can take new or existing LUNs from other storage systems and present them as a LUN to another host.  The data can be moved from one storage system to another without taking the LUN offline.  The IBM V7000 also has this virtualization feature, as the code base for both systems is the same.  The cool feature IBM has added to the V7000 is the ability to do both NAS and SAN protocols.  It now competes in the same space as the EMC VNX and Netapp FAS systems.

The virtualization in the SVC code is somewhat similar to the gateway code in the N series.  Both can virtualize a LUN from another storage platform.  If you need to keep the data on the older system intact, then an SVC device is needed.  I would also mention that the movement of data between storage systems is much easier with the SVC, while the N series gateway has more functionality, like deduplication and easy replication, than the SVC.

Finally, the SVC code was built by IBM to sit on top of complicated SAN environments.  Its robust nature is complemented by an easier-to-use GUI borrowed from the XIV platform.  The N series gateway is somewhat easier to set up but is not meant for large, complicated SAN environments.

Both systems are good at what they do, and people try to compare them in the same manner.  I tell them: yes, they both virtualize storage, but they are used in different ways.

Top 10 Reasons clients choose to go with IBM N series

Can you see the difference?

Some years ago I put together a list of reasons why people choose to buy from IBM rather than purchase directly from Netapp. IBM has an OEM agreement with Netapp and rebrands the FAS and V-Series lines as its N series products. They are both made at the same plant, and the only physical difference between them is the front bezel. You can even take a Netapp bezel off and stick it on an N series box, and it fits exactly.

The software is exactly the same. All we change is the logos and readme files. The entire functionality of the product is identical; IBM does not add or take away any of the features built into the systems. The only difference is that it takes IBM about 90 days after Netapp releases a product to get it posted online and change the necessary documents.

Support for N series is done both at IBM and Netapp. Much like our other OEM partners, Netapp stands behind IBM as the developer while IBM handles the issues. Customers still call the same 1.800.IBM.SERV for support and speak to trained engineers who have been working on N series equipment for 6+ years now. IBM actually has lower turnover than Netapp in its support division and has won awards for providing top-notch support. The call-home features that most people are used to still go to Netapp via IBM servers.

10. The IBM customer engineer (CE) who works with you today will be the same person who helps you with your IBM N series system.
9. The IBM GBS team can provide consultation, installation, and even administration of your environment.
8. IBM is able to provide financing for clients.
7. When you purchase your N series system from IBM, you can bundle it with servers, switches, other storage, and software. This gives you one bill, one place to go if you need anything, and one support number to call.
6. IBM has two other support offerings to help our clients. Our Supportline offering allows customers to call in and ask installation or configuration questions, and our Enhanced Technical Support (ETS) team will assign you a personal engineer who knows your environment inside and out. They will help you with health checks to be sure the system is running optimally, keep you updated on the latest technology, and act as a single point of contact in case you need to speak to someone immediately.
5. IBM N series warranty support is done by IBM technicians and engineers at Level 1 and Level 2. If your issue cannot be resolved by our Level 2 team, they have a hotline into the Netapp Top Enterprise Account team. This is a team only a few very large Netapp accounts can afford, and we provide this support to ALL IBM N series accounts, no matter how large or small.
4. Our support teams from different platforms (X series, Power, TSM, DS, XIV, etc.) all interact with one another, and when tough issues come up we are able to scale to the size of the issue. We can bring in experts who know the SAN, storage, servers, and software, all under one umbrella. For those tough cases we assign a coordinator so the client does not have to chase all of these resources themselves; this person reaches out to all the teams, assigns duties, and coordinates calls with you, the customer.
3. All IBM N series hardware and software goes through an Open Source Committee, which validates that there are no license violations, copyright infringements, or patent infringements.
2. All IBM N series hardware and software is tested for interoperability in our Tucson testing facility. We have a team of distinguished engineers who support not only N series but other hardware and software platforms within the IBM portfolio.
1. All IBM N series equipment comes with a standard 3-year warranty for both hardware and software. This warranty can be extended beyond the three years, as IBM supports equipment well beyond the normal 3-5 year life of a system.

When it gets down to it, customers buy because they are happy. Since the systems are exactly the same, it comes down to what makes them happy. For some, the Netapp offering makes them happy because they like their sales engineer; others like IBM because they have been doing business with us for over 30 years.

For more information about IBM N series, check out our landing page at http://www-03.ibm.com/systems/storage/network/