NetApp

Welcome to Technology Short Take #29! This is another installment in my irregularly published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.

Networking

  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!
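
Since I mentioned the OVS/LXC/GRE combination, here is a rough sketch of the idea. This is my own illustration, not code from the linked post, and the bridge name and peer IP address are placeholders: each host gets an OVS bridge for its containers plus a GRE port pointing at the other host.

```python
import subprocess

def sh(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)

def build_overlay(bridge="br-lxc", remote_ip="192.168.1.12"):
    """Create an OVS bridge for containers and a GRE tunnel to the peer host."""
    sh(f"ovs-vsctl --may-exist add-br {bridge}")
    # One GRE port per remote host; OVS brings up the tunnel as soon as the
    # remote endpoint is reachable.
    sh(f"ovs-vsctl --may-exist add-port {bridge} gre0 "
       f"-- set interface gre0 type=gre options:remote_ip={remote_ip}")

if __name__ == "__main__":
    build_overlay()
```

Run the same thing on the other host (with remote_ip pointing back at this one), then attach each container's host-side veth interface to the bridge, and the containers on both hosts end up sharing a single layer 2 segment.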

Servers/Hardware

  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as service profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…

Security

  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it.
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an Infoblox appliance into OpenStack, but it might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.

Storage

  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for its money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.
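
If you do stand up the Swift All in One, here is a minimal sketch of talking to it from Python with python-swiftclient. The endpoint and the test:tester/testing credentials are the stock SAIO tempauth defaults as I recall them, not anything from the linked posts, so adjust for your environment.

```python
from swiftclient import client

# Connect to a local Swift All in One using the default tempauth account.
conn = client.Connection(
    authurl="http://127.0.0.1:8080/auth/v1.0",
    user="test:tester",
    key="testing",
)

# Create a container, upload an object, and read it back.
conn.put_container("demo")
conn.put_object("demo", "hello.txt", contents=b"hello from SAIO")
headers, body = conn.get_object("demo", "hello.txt")
print(body.decode())
```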

Virtualization

  • There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.

Exclusion or Not?

A couple days ago I read Stephen Foskett’s article “Alas, VMware, Whither HDS?”, and I felt like I really needed to respond to this growing belief—stated in Stephen’s article and in the sources to his article—that VMware is, for whatever reason, somehow excluding certain storage vendors from future virtualization-storage integration development. From my perspective, this is just bogus.

As far as I can tell, Stephen’s post—which is just one of several I’ve seen on this subject—is based on two sources: my session blog of VSP3205 and an article by The Register. I wrote the session blog, I sat in the session, and I listened to the presenters. Never once did one of the presenters indicate that the five technology partners that participated in this particular demonstration were the only technology partners with whom they would work moving forward, and my session blog certainly doesn’t state—or even imply—that VMware will only work with a limited subset of storage vendors. In fact, the thought that other storage vendors would be excluded never even crossed my mind until the appearance of The Register’s post. That invalidates my VSP3205 session blog as a credible source for the assertion that VMware would be working with only certain storage companies for this initiative.

The article at The Register cites my session blog and a post by Wikibon analyst David Floyer as its sources. I’ve already shown how my blog doesn’t support the claim that some vendors will be excluded, but what about the other source? The Wikibon article states this:

Wikibon understands that VMware plans to work with the normal storage partners (Dell, EMC, Hewlett Packard, IBM, and NetApp) to provide APIs to help these traditional storage vendors add value, for example by optimizing the placement of storage on the disks.

This statement, however, is not an indication that VMware will work only with the listed storage vendors. (Floyer does not, by the way, cite any sources for that statement.)

Considering all this information, the only source implying that VMware will limit the storage vendors with whom it will work is Chris Mellor at The Register. However, even Chris’ article quotes a VMware spokesperson who says:

“Note that we’re still in early days on this and none of the partners above have yet committed to support the APIs – and while it is our intent to make the APIs open, currently that is not the case given that what was demo’d during this VMworld session is still preview technology.”

In other words, just because HDS or any other vendor didn’t participate (which might indicate that the vendor chose not to participate) does not mean that they are somehow excluded from future inclusion in the development of this proposed new storage architecture. In fact, participation—or lack thereof—at this stage really means nothing, in my opinion. If this proposed storage architecture gets its feet under it and starts to run, then I’m confident VMware will allow any willing storage vendor to participate. In fact, it would be detrimental to VMware to not allow any willing storage partner to participate.

However, it gets more attention if you proclaim that a particular storage vendor was excluded; hence, the title (and subtitle) that The Register used. I have a feeling the reality is probably quite different than the picture painted in some of these articles.

Welcome to Technology Short Take #10, my latest collection of data center-oriented links, articles, thoughts, and tidbits from around the Internet. I hope you find something useful or informative!

Networking

  • Link aggregation with VMware vSwitches is something I’ve touched upon in a great many posts here on my site, but one thing that I don’t know I’ve ever specifically called out is that VMware vSwitches don’t support LACP. But that’s OK—Ivan Pepelnjak takes care of that for me with his recent post on LACP and the VMware vSwitch. He’s absolutely right: there’s no LACP support in VMware vSphere 4.x or any previous version.
  • Stephen Foskett does a great job of providing a plain English guide to CNA compatibility. Thanks, Stephen!
  • And while we are on the topic of Mr. Foskett, he also authored this piece on NFS in converged network environments. The article seemed a bit short; it felt like the subject could have used a deeper, more thorough treatment. It’s still worth a read, though.
  • Need to trace a MAC address in your data center? CiscoZine provides all the necessary details in their post on how to trace a MAC address.
  • Jeremy Stretch of PacketLife.net provides a good overview of using WANem. If you need to do some WAN emulation/testing, this is worth reading.
  • Jeremy also does a walkthrough of configuring OSPF between Cisco and Force10 networking equipment.
  • I don’t entirely understand all the networking wisdom found here, but this post by Brad Hedlund on Nexus 7000 routing and vPC peer links is something I’m going to bookmark for when my networking prowess is sufficient for me to fully grasp the concepts. That might take a while…
  • On the other hand, this post by Brad on FCoE, VN-Tag, FEX, and vPC is something I can (and did) assimilate much more readily.
  • Erik Smith documents the steps for enabling FCoE QoS on the Nexus 5548, something that Brad Hedlund alerted me to via Twitter. It turns out, as Erik describes in his post about FCoE login failure with the Nexus 5548, that without FCoE QoS enabled, fabric logins will fail. If you’re thinking of deploying Nexus 5548 switches, definitely keep this in mind.

Servers

  • In the event you haven’t already read up on it, the UCS 1.4(1) release for Cisco UCS was a pretty major release. See the write-up here by M. Sean McGee. By the way, Sean is an outstanding resource for UCS information; if you aren’t subscribed to his blog, you should be.
  • Dave Alexander also has a good discussion about some of the reasoning behind why certain things are or are not in Cisco UCS.

Storage

  • Nigel Poulton tackles a comparison between the HDS VSP and the EMC VMAX. I think he does a pretty good job of comparing and contrasting the two products, and I’m looking forward to his software-focused review of these two products in the future.
  • Brandon Riley provides his view of the recently-announced EMC VNX. The discussion in the comments about the choice of form factor (EFD) for flash-based cache is worth reading, too.
  • Andre Leibovici discusses the need for proper storage architecture in this treatment of IOPs, read/write ratios, and storage tiering with VDI. While his discussion is VDI-focused, the things he discussed are important to consider with any storage project, not just VDI. I would contend that too many organizations don’t do this sort of important homework when virtualizing applications (especially “heavier” workloads with more significant resource requirements), which is why the applications don’t perform as well after being virtualized. But that’s another topic for another day… (A quick back-of-the-envelope IOPS calculation is sketched just after this list.)
  • Environments running VMware Site Recovery Manager with the EMC CLARiiON SRA should have a look at this article.
  • Jason Boche recently published his results from a series of tests on jumbo frames with NFS and iSCSI in a VMware vSphere environment. There’s lots of great information in this post—I highly recommend reading it.
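
Following up on Andre’s IOPS discussion a couple of bullets up, this is the sort of back-of-the-envelope math I mean. The figures below (10 IOPS per desktop, a 20/80 read/write ratio, a RAID 5 write penalty of 4) are illustrative assumptions on my part, not numbers from his post.

```python
def backend_iops(frontend_iops, read_pct, raid_write_penalty):
    """Reads hit the back end once; each write costs raid_write_penalty disk I/Os."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * raid_write_penalty

# 500 desktops at 10 IOPS each, 20% read / 80% write, on RAID 5 (write penalty of 4):
total = backend_iops(500 * 10, read_pct=0.20, raid_write_penalty=4)
print(total)  # 1000 reads + (4000 writes x 4) = 17000 back-end IOPS
```

It’s a crude model, but it shows how quickly front-end IOPS inflate on the back end, and why skipping this homework leads to disappointing post-virtualization performance.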

Virtualization

What, you didn’t think I’d overlook virtualization, did you?

Before I wrap up, I’ll just leave you with a few other links from my collection:

IOBlazer
Backing up, and restoring, VMware vCloud Director provisioned virtual machines
RSA SecurBook on Cloud Security and Compliance
Hyper-V Live Migration using SRDF/CE – Geographically Dispersed Clustering
The VCE Model: Yes, it is different
How to make a PowerShell server side VMware vCenter plugin
VMware vSphere 4 Performance with Extreme I/O Workloads
VMware KB: ESX Hosts Might Experience Read Performance Issues with Certain Storage Arrays
vSphere “Gold” Image Creation on UCS, MDS, and NetApp with PowerShell
Upgrading to ESX 4.1 with the Nexus 1000V
My System Engineer’s toolkit for Mac

That’s going to do it for this time around. As always, courteous comments are welcome and encouraged!

The Future of NetApp

As I was sitting in London’s Heathrow Airport this morning catching up on RSS feeds before boarding my plane back to the United States, an article headline caught my eye: Why NetApp Must Seek Acquisition.

I can’t tell you how glad I am to see this article published, because it gives me the opportunity to share something I’ve been thinking about for a while, even before I joined EMC. I’m sure that everything I have to say about NetApp will be colored by the fact that I now work for EMC, and—whether I like it or not—all comments about any other storage vendor or technology are immediately suspect. Recent comments to my VPLEX article proved that point; it will take time to re-establish objectivity and prove to my readers that I’m not an EMC shill.

But I digress; back to the article. In the article, the author (“secretcto”) states why he believes that NetApp must seek acquisition in order to survive. The crux of his article is this:

Now lets take a look at the market cap of each of these players. A company’s market cap is a good place start in order to identify which of these companies will have money to invest in tomorrow. I am not saying that ‘cloud’ is the IT of tomorrow, but if it is the direction of tomorrow, then one thing is certain, the folks in that list that have more of the necessary ‘cloud’ pieces (or the money to invest in building out a portfolio of integrated cloud components) will be the most successful competitors.

The author states that NetApp has a few key problems:

  • NetApp only owns one component (storage) of the multiple components (the others being servers, networking, software, and security) necessary to continue to be a key competitor moving forward.
  • NetApp doesn’t own any software that drives customers to its products.
  • NetApp lacks the bankroll to acquire the technologies necessary to build out their portfolio in order to compete with more “full-featured” competitors.
  • NetApp has a history of difficulty integrating their acquisitions. Even if the bankroll were present, there is no indication that additions to their portfolio could be successfully integrated into the company.

I would add an additional weakness. Being on the outside—and not only on the outside, but working for a competitor that NetApp fiercely detests—I lack any inside knowledge of what might be going on at NetApp. I know they have a ton of very smart, talented folks over there, and those smart folks have engineered the heck out of their WAFL and snapshot technologies. But the reality is that NetApp is a one-trick pony. Look at their products: every single one is in some way based on the same underlying technologies. Kudos to them for getting as much mileage out of these technologies as they have; that’s a huge testament to the skill of their engineering staff.

However, it appears to me (again, lacking any inside knowledge I could be completely wrong) that they have reached the end of the road. I get the feeling that NetApp has done everything they possibly could do with WAFL and snapshots, and now that they have no more mileage with this pony and no more ponies in the stable, where does that leave them?

Again, I’m sure that everyone will take these comments as me bashing NetApp. My intent here is most definitely not to bash NetApp, but to simply state my observations. I’d love to hear others’ thoughts on the matter; my only request is that you fully disclose your affiliations. Speak up in the comments and let me know what you think! All courteous comments are welcome.

I wanted to go ahead and get another issue of Virtualization Short Takes out the door before VMworld, as I suspect that I’ll be covered up both during and for some time after VMworld. So, here’s my latest collection of links and articles about virtualization, storage, and anything else I find interesting.

  • Chad Sakac brings up an important issue for EMC CLARiiON users also using vSphere and iSCSI; be sure to read the full post for all the details. Basically, this bug in the FLARE code puts us back to using multiple IP subnets to scale iSCSI traffic. Bummer. I imagine they’ll get it fixed up pretty quick, but until then it’s back to the old way of scaling IP-based storage traffic. Chad’s posts on VMware-storage integration (Part 1 and Part 2) are good reads as well.
  • Nick Triantos weighs in with a good post on how to configure ALUA support and Round Robin I/O in vSphere. This looks useful; too bad the old NetApp gear I have in the lab won’t run the latest Data ONTAP version so I can test this myself. Oh, and you should also check out Nick’s post on the NetApp Collector and Analyzer for Virtual Environments, which looks like it might be a handy tool for sizing NetApp storage environments.
  • Duncan Epping points out a couple of issues related to VMFS block size in this post on snapshots and block size. Good find!
  • Ben Armstrong puts up a great post about competitive arguments. I have to say that I have a new respect for Ben after reading this post. He’d always presented himself very professionally, but his open approach to comparing virtualization products is very refreshing, and one that I wish more people would adopt. I’m particularly impressed that Ben quoted Proverbs 27:17 in his post.
  • Aaron Sweemer posted a newsletter from a co-worker on his site that has some great information. You should definitely have a look; I think you’ll find something useful there.
  • Rick Scherer posted the steps necessary to remove a rogue vCenter Chargeback plug-in. Useful, but I wish all plug-ins provided a mechanism like this.
  • Jason Nash brings to light a bug in Cisco Nexus 1000V when used in conjunction with CNAs. Be sure to have a look if this has any similarity to your environment. Like Jason, I have some Gen 1 Emulex CNAs so I may run into the same issue myself as I build out the Nexus hardware in the lab.
  • The Systems Engineer (no name provided) gives a handy one-line command to map ESX datastores to EMC CLARiiON LUNs. I’ll have to give this one a try once I get my CLARiiON up and running.
  • Somewhere along the way I picked up the URL to this VMware KB article about problems with iSCSI or NFS over an EtherChannel link. Hmmm, that looks interesting, but when you read the article it points out that the issue exists when you are using EtherChannel but the vSwitch is configured as “Route based on originating virtual port ID.” That’s a configuration mismatch—of course you’re going to have problems! Simply change the vSwitch to “Route based on ip hash” (the strongly recommended setting when using EtherChannel) and the problems go away. (A quick scripted check for this setting is sketched just after this list.)
  • Stevie Chambers (formerly of VMware, now with Cisco) posts about 10 technology advances since 2005. The article is mostly about the Intel Xeon 5500 CPUs and a couple other features specific to Cisco’s Unified Computing System (UCS); namely, the Palo adapter and the Catalina ASIC. While he wanders a bit, I think Stevie’s point is about how virtualization architects and operations staff need to understand the impact of these technologies and how they affect the virtualization solution—a useful point, indeed.
  • Paul Fazzone has a couple of great posts on the Cisco Nexus 1000V: first an article with an overview of VM network security with the Nexus 1000V, then a second article describing how the Nexus 1000V compares to multiple vSwitches. Both are good reads for people seeking a bit more information on deployment scenarios for the Nexus 1000V.
  • Computerworld posted this article about the 7 half-truths of virtualization. The underlying point behind all of these “half-truths” is that in order for an organization to really reap the benefits of virtualization, that organization needs to change, to adapt, and to grow with the virtualization initiative. If you just virtualize and don’t change anything else, your ROI will be limited at best. I particularly agree with #5: if you’re investigating VDI for short-term cost savings, you’re barking up the wrong tree.
  • This is kind of cool. I might put this on my home network.
  • I haven’t had a chance to talk with Arista yet, but I’m surprised that there hasn’t been more buzz around their announcement of vEOS. In fact, I had to hear about it (other than a very brief e-mail from Doug Gourlay) from a Cisco contact! How crazy is that? I suppose, as I mentioned on Twitter, that Arista is going to make a big push next week during VMworld 2009 in San Francisco.
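
On the EtherChannel/IP hash item a few bullets up, here is a quick scripted sanity check. It’s only a sketch using pyVmomi (my choice of tooling, not anything from the KB article); the vCenter hostname and credentials are placeholders, and it only reports each standard vSwitch’s load-balancing policy so you can spot the mismatch rather than changing anything.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_vswitch_policies(vc_host, user, pwd):
    """Print the NIC teaming (load-balancing) policy of every standard vSwitch."""
    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host=vc_host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            net_info = host.configManager.networkSystem.networkInfo
            for vsw in net_info.vswitch:
                policy = vsw.spec.policy.nicTeaming.policy
                # "loadbalance_ip" is what you want when the upstream switch
                # ports are bundled into a static EtherChannel.
                note = "" if policy == "loadbalance_ip" else "  <-- mismatch if using EtherChannel"
                print(f"{host.name} / {vsw.name}: {policy}{note}")
    finally:
        Disconnect(si)

if __name__ == "__main__":
    report_vswitch_policies("vcenter.example.com", "administrator", "password")
```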

That wraps up this edition of Virtualization Short Takes. Next week will be a busy week; look for lots of coverage from the conference in San Francisco as well as summaries of my vendor meetings (and there are lots of them!). Until then, take care!

I spent some time last week at the NetApp RTP office getting a special sneak preview of a couple of software products getting announced at VMworld 2009 in San Francisco. One of these is the Rapid Cloning Utility (RCU); the other is the Virtual Storage Console (VSC). Both of these software products are intended to plug into vCenter Server to provide more centralized access to both storage and virtualization management.

The Rapid Cloning Utility (RCU) has actually been around for a while, but it wasn’t an officially supported tool. This new version of RCU changes that; it remains a free tool but is now officially supported for NetApp customers with active support agreements. The primary purpose of the RCU is to automate the use of NetApp’s FlexClone functionality for rapidly provisioning virtual machines. The scope and scale of virtual desktop deployments lend themselves well to the use of RCU, but RCU could be applicable in server virtualization environments as well.

Some of the functionality brought to the table by the RCU includes:

  • Full support for cloning block-based datastores accessed via Fibre Channel or iSCSI
  • Full support for file-level FlexCloning on NFS datastores
  • Automated import of virtual machines into View Manager, where applicable
  • Bulk import into XenDesktop, where applicable
  • Ability to store virtual machine swap file in separate VMDK in a separate datastore
  • Support for MetroCluster

In addition, the RCU incorporates support for deduplication (options to enable deduplication and report on it from within vCenter Server), NFS datastore resizing, creating/deleting/cloning block-based datastores, and support for basic role-based access control (RBAC) in that user permissions are checked before tasks are launched. Use of the RCU also mitigates many of the drawbacks I’ve discussed in the past with regard to using array-based cloning in virtualized environments. All in all, it’s a useful addition to vCenter Server for environments using NetApp storage.

The Virtual Storage Console (VSC) replaces the old NetApp Host Utilities Kit (HUK), which NetApp used to fine-tune and configure certain host and HBA parameters. Now those same settings can be managed from within vCenter Server. The VSC also provides access to NetApp’s mbrscan and mbralign tools, which are designed to identify and correct problems with VMDK alignment.

Both of these utilities, if I recall correctly, require that you are running the very latest version of Data ONTAP.

Author’s Note: This content was first published over at Storage Monkeys, but it appears that it has since disappeared and is no longer available. For that reason, I’m republishing it here (with minor edits). Where applicable, I’ll also be republishing other old content from that site in the coming weeks. Thanks!

I’ve discussed this topic before, but I felt like it was a topic that needed to be revisited again. Storage admins need to know how their choices in storage technologies may or may not impact virtualization efforts, and this particular choice—leveraging pointer-based snapshots or deduplication—is particularly important.

FlexClones Versus Deduplication with VMware Infrastructure

A number of times over the last few months, I’ve run into situations where NetApp’s FlexClone technology was being heavily pitched to customers interested in deploying, or expanding their deployment of, VMware Infrastructure.

In case you aren’t familiar with the use of NetApp FlexClones in conjunction with VMware Infrastructure, have a look at these earlier articles of mine:

How to Provision VMs Using NetApp FlexClones
NetApp FlexClones with VMware, Part 1
NetApp FlexClones with VMware, Part 2
LUN Clones vs. FlexClones

Now, after you’ve read all those articles (you did read them, didn’t you?), it should be fairly clear that using FlexClones can be very advantageous. However, those advantages come with some tradeoffs as well, most notably in the complete and total lack of integration with VMware Infrastructure itself.

This lack of integration means that users can’t use VirtualCenter templates, because the cloning is taking place at the storage array instead of within VMware Infrastructure. This also means that customers can’t apply customization specifications during the cloning process, so users will need to create their own Sysprep answer files and manually Sysprep the VMs before invoking the FlexClone process. Users are required to create scripts and tools to do simple things like using the VM name for the guest OS name during cloning. (Author’s note: many of these issues have been addressed by NetApp’s Rapid Cloning Utility (RCU), which provides some integration into VirtualCenter.)

Deduplication, on the other hand, works seamlessly with VMware Infrastructure. This is primarily because the details of the deduplication are completely hidden; it all occurs “inside the box.” Nothing needs to be configured within VirtualCenter; no VMs need to be modified. The NetApp storage system handles the details of the deduplication process itself, and VMware Infrastructure just consumes the storage.

Looking at these two technologies in that light, one might ask: why use FlexClones at all? If deduplication works seamlessly with VMware Infrastructure and FlexClones don’t, then why bother? To be honest, there are some instances where FlexClones make sense—even with the lack of integration. Consider some of the examples listed below.

  • In instances where a user needs to deploy lots of VMs in a very rapid fashion, FlexClones are much better. If time-to-deployment is the #1 driving factor, then FlexClones are the way to go. This could be particularly applicable and useful in VDI situations, as long as the broker doesn’t mandate handling provisioning itself (like VDM does).
  • In environments where provisioning and re-provisioning occurs on a frequent, regular basis, then FlexClones make sense. Even though large numbers of VMs aren’t being provisioned, the time saved on frequent re-provisioning via FlexClones will not be insignificant.
  • In situations where there isn’t sufficient storage for the VMs before they are deduplicated, FlexClones may be a better option. Deduplication is post-process, meaning that storage will be needed for the full datasets until deduplication runs. In situations where that isn’t an option, FlexClones can provide the same end benefit.

Personally, I’m of the opinion that unless an organization meets one of these criteria, then that organization should look to deduplication instead of FlexClones. Of course, that’s just my personal opinion, and I’m open to hear what others have to say about the matter. NetApp gurus, feel free to weigh in.

UPDATE: VMware has clarified their position; they will allow competitors to exhibit at VMworld. The text in the exhibitors agreement was legalese—supposedly consistent with other major vendor-sponsored conferences—meant to give them an out in the event an exhibitor behaves inappropriately.

I sincerely hope that Brian Madden is wrong about the recent change to vendor policies for VMworld.

This is exactly the wrong thing to do in this sort of competitive landscape. You know, earlier this week on the Virtual Thoughts podcast, I was defending VMware’s move into the territory of their former ISVs with products like vCenter Data Recovery, vCenter Chargeback, and vCenter ConfigControl. After all, VMware is a publicly owned company, and they have to show value to their shareholders. But this? This doesn’t have anything to do with showing value to the shareholders. This is like a spoiled little kid saying, “This is my sandbox, and you can’t play in it.”

What are you going to do, VMware? Let’s see, you’re expanding into the territory formerly handled by many of your ISVs, and now you’re blocking access to competing products at VMworld. So who will be at VMworld? Let’s see…

  • Vizioncore can’t come, because vRanger Pro overlaps functionality VMware will provide in vCenter Data Recovery. And vFoglight overlaps with CapacityIQ.
  • VKernel can’t come; again, they overlap with CapacityIQ.
  • As Brian Madden mentioned, Quest won’t be there due to a conflict with VMware View.
  • Microsoft won’t be there, because they won’t be able to talk about Hyper-V. True, they could come and not talk about Hyper-V, but I suspect they’ll also act like a spoiled child by saying, “If we can’t play by our rules, we won’t play at all.” Hmm…considering 90-95% of all the workloads running on VMware are Microsoft Windows, that’s an interesting situation to create. Oh, and VMware: are you prepared to be excluded from Tech-Ed too?
  • Ditto for Citrix. And probably ditto for being allowed to exhibit at Synergy. So much for VMware vSphere being the best platform on which to run XenApp—you won’t get the chance to make that claim!
  • Leostream? Nope—conflicts/overlaps with VMware View.
  • What about Hyper9? Not sure, vCenter Server 4.0 does provide a Search feature now, so that could potentially preclude Hyper9 from coming, too.
  • Surely Veeam could come, but they can’t talk about Veeam Backup (conflicts with vCenter Data Recovery).
  • esXpress? Nope.
  • Hardware vendors—IBM, HP, Dell—will be there.
  • Storage vendors—EMC, NetApp, HP, Compellent, Dell—will be there.
  • Networking vendors like Cisco and HP will be there. Unless VMware thinks that HP’s networking functionality isn’t complementary enough to its own virtual networking functionality…

I’m sure that I’ve overlooked some companies, but it sounds to me like the vast majority of the third-party ISVs now find themselves precluded from exhibiting at VMworld, in addition to finding themselves competing head-to-head with VMware in their own markets. Looks like the exhibit hall is going to be a lot less crowded this year!

Is VMware the new Microsoft? I’ll let you answer that one on your own.

Disclaimer: Before anyone jumps the gun and says otherwise, note that these opinions are mine, and are not endorsed by my employer or any vendor or other organization.

This session describes NetApp’s MultiStore functionality. MultiStore is the name given to NetApp’s functionality for secure logical partitioning of network and storage resources. The presenters for the session are Roger Weeks, TME with NetApp, and Scott Gelb with Insight Investments.

When using MultiStore, the basic building block is the vFiler. A vFiler is a logical construct within Data ONTAP that contains a lightweight instance of the Data ONTAP multi-protocol server. vFilers provide the ability to securely partition both storage resources and network resources. Storage resources are partitioned at either the FlexVol or Qtree level; it’s recommended to use FlexVols instead of Qtrees. (The presenters did not provide any further information beyond that recommendation. Do any readers have more information?) On the network side, the resources that can be logically partitioned are IP addresses, VLANs, VIFs, and IPspaces (logical routing tables).

Some reasons to use vFilers would include storage consolidation, seamless data migration, simple disaster recovery, or better workload management. MultiStore integrates with SnapMirror to provide some of the functionality needed for some of these use cases.

MultiStore uses vFiler0 to denote the physical hardware, and vFiler0 “owns” all the physical storage resources. You can create up to 64 vFiler instances, and active/active clustered configurations can support up to 130 vFiler instances (128 vFilers plus 2 vFiler0 instances) during a takeover scenario.

Each vFiler stores its configuration in a separate FlexVol (its own root vol, if you will). All the major protocols are supported within a vFiler context: NFS, CIFS, iSCSI, HTTP, and NDMP. Fibre Channel is not supported; you can only use Fibre Channel with vFiler0. This is due to the lack of NPIV support within Data ONTAP 7. (It’s theoretically possible, then, that if/when NetApp adds NPIV support to Data ONTAP, Fibre Channel would be supported within vFiler instances.)

Although it is possible to move resources between vFiler0 and a separate vFiler instance, doing so may impact client connections.

Managing vFilers appears to be the current weak spot; you can manage vFiler instances using the Data ONTAP CLI, but vFiler instances don’t have an interactive shell. Therefore, you have to direct commands to vFiler instances via SSH or RSH or using the vFiler context in vFiler0. You access the vFiler context by prepending the “vfiler” keyword to the commands at the CLI in vFiler0. Operations Manager 3.7 and Provisioning Manager can manage vFiler instances; FilerView can start, stop, or delete individual vFiler instances but cannot direct commands to an individual vFiler. If you need to manage CIFS on a vFiler instance, you can use the Computer Management MMC console to connect remotely to that vFiler instance to manage shares and share permissions, just as you can with vFiler0 (assuming CIFS is running within the vFiler, of course).
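
Since scripting against vFiler instances means going through vFiler0, here is a minimal sketch of what that remote management looks like. It assumes SSH access to the physical controller and uses the 7-Mode convention of prepending the vfiler keyword; the exact "vfiler run <name> <command>" form shown is my assumption based on the description above, and the hostname and credentials are placeholders.

```python
import paramiko

def vfiler_run(filer, user, password, vfiler_name, command):
    """Run a Data ONTAP command in the context of a specific vFiler via vFiler0."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(filer, username=user, password=password)
    try:
        # Prepend the vfiler keyword so the command executes in that vFiler's context.
        stdin, stdout, stderr = ssh.exec_command(f"vfiler run {vfiler_name} {command}")
        return stdout.read().decode(), stderr.read().decode()
    finally:
        ssh.close()

# Example: list the CIFS shares defined in a vFiler named "dmz".
out, err = vfiler_run("filer01.example.com", "root", "secret", "dmz", "cifs shares")
print(out)
```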

IPspaces are a logical routing construct that allows each vFiler to have its own routing table. For example, you may have a DMZ vFiler and an internal vFiler, each with its own separate routing table. Up to 101 IPspaces are supported per controller. You can’t delete the default IPspace, as it’s the routing table for vFiler0. It is recommended to use VLANs and/or VIFs with IPspaces as a best practice.

One of the real advantages of using MultiStore and vFilers is the data migration and disaster recovery functionality that it enables when used in conjunction with SnapMirror. There are two sides to this:

  • “vfiler migrate” allows you to move an entire vFiler instance, including all data and configuration, from one physical storage system to another physical storage system. You can keep the same IP address or change the IP address. All other network identification remains the same: NetBIOS name, host name, etc., so the vFiler should look exactly the same across the network after the migration as it did before the migration.
  • “vfiler dr” is similar to “vfiler migrate” but uses SnapMirror to keep the source and target vFiler instances in sync with each other.

It makes sense, but you can’t use “vfiler dr” or “vfiler migrate” on vFiler0 (the physical storage system). My own personal thought regarding “vfiler dr”: what would this look like in a VMware environment using NFS? There could be some interesting possibilities there.

With regard to security, a Matasano security audit was performed and the results showed that there were no vulnerabilities that would allow “data leakage” between vFiler instances. This means that it’s OK to run a DMZ vFiler and an internal vFiler on the same physical system; the separation is strong enough.

Other points of interest:

  • Each vFiler adds about 400K of system memory, so keep that in mind when creating additional vFiler instances.
  • You can’t put more load on a MultiStore-enabled system than a non-MultiStore-enabled system. The ability to create logical vFilers doesn’t mean the physical storage system can suddenly handle more IOPS or more capacity.
  • You can use FlexShare on a MultiStore-enabled system to adjust priorities for the FlexVols assigned to various vFiler instances.
  • As of Data ONTAP 7.2, SnapMirror relationships created in a vFiler context are preserved during a “vfiler migrate” or “vfiler dr” operation.
  • More enhancements are planned for Data ONTAP 7.3, including deduplication support, SnapDrive 5.0 or higher support for iSCSI with vFiler instances, SnapVault additions, and SnapLock support.

Some of the potential use cases for MultiStore include file services consolidation (allows you to preserve file server identification onto separate vFiler instances), data migration, and disaster recovery. You might also use MultiStore if you needed support for multiple Active Directory domains with CIFS.

UPDATE: Apparently, my recollection of the presenters’ information was incorrect, and FTP is not a protocol supported with vFilers. I’ve updated the article accordingly.

NetApp has recently released TR-3747, Best Practices for File System Alignment in Virtual Environments. This document addresses the situations in which file system alignment is necessary in environments running VMware ESX/ESXi, Microsoft Hyper-V, and Citrix XenServer. The authors are Abhinav Joshi (he delivered the Hyper-V deep dive at Insight last year), Eric Forgette (wrote the Rapid Cloning Utility, I believe), and Peter Learmonth (a well-recognized name from the Toasters mailing list), so you know there’s quite a bit of knowledge and experience baked into this document.

There are a couple of nice tidbits of information in here. For example, I liked the information on using fdisk to set the alignment of a guest VMDK from the ESX Service Console; that’s a pretty handy trick! I also thought the tables which described the different levels at which misalignment could occur were quite useful. (To be honest, though, it took me a couple of times reading through that section to understand what information the authors were trying to deliver.)
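
As a small illustration of what misalignment at the guest level actually means, here is a rough sketch of my own (not from TR-3747) that reads the MBR of a raw/flat disk image and checks whether the first partition starts on a 4 KB boundary. The classic symptom is an older Windows guest whose first partition starts at LBA 63. It assumes a flat image with 512-byte sectors; it won’t parse sparse VMDK descriptors or GPT disks.

```python
import struct
import sys

def first_partition_start_lba(image_path):
    """Return the starting LBA of the first MBR partition in a raw disk image."""
    with open(image_path, "rb") as f:
        mbr = f.read(512)
    # The partition table starts at byte 446; each entry is 16 bytes, and the
    # starting LBA is a little-endian 32-bit value at offset 8 within the entry.
    return struct.unpack_from("<I", mbr, 446 + 8)[0]

if __name__ == "__main__":
    lba = first_partition_start_lba(sys.argv[1])
    if lba == 0:
        print("no partition defined in the first table entry")
    else:
        aligned = lba % 8 == 0  # 8 x 512-byte sectors = 4 KB
        print(f"first partition starts at LBA {lba}: {'aligned' if aligned else 'misaligned'}")
```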

Anyway, if you’re looking for more information on storage alignment, the different levels at which it may occur, and the methods used to fix it at each of these levels, this is an excellent resource that I strongly recommend reading. Does anyone have any pointers to similar documents from other storage vendors?
