
Welcome to Technology Short Take #37, the latest in my irregularly-published series in which I share interesting articles from around the Internet, miscellaneous thoughts, and whatever else I feel like throwing in. Here’s hoping you find something useful!


Networking

  • Ivan does a great job of describing the difference between the management, control, and data planes, as well as providing examples. Of course, the distinction between control plane protocols and data plane protocols isn’t always perfectly clear.
  • You’ve heard me talk about snowflake servers before. In this post on why networking needs a Chaos Monkey, Mike Bushong applies the term to networks—a snowflake network is an intricately crafted network that is carefully tailored to utilize a custom subset of networking features unique to your environment. What is the fix—if one exists—for snowflake networks? Designing your network for resiliency and unleashing a Chaos Monkey on it is one way, as Mike points out. A fan of network virtualization might also say that decomposing today’s complex physical networks into multiple simple logical networks on top of a simpler physical transport network—similar to Mike’s suggestion of converging on a smaller set of reference architectures—might also help. (Of course, I am a fan of network virtualization, since I work with/on VMware NSX.)
  • Martijn Smit has launched a series of articles on VMware NSX. Check out part 1 (general introduction) and part 2 (distributed services) for more information.
  • The elephants and mice post at Network Heresy has sparked some discussion across the “blogosphere” about how to address this issue. (Note that my name is on the byline for that Network Heresy post, but I didn’t really contribute all that much.) Jason Edelman took up the idea of using OpenFlow to provide a dedicated core/spine for elephant flows, while Marten Terpstra at Plexxi talks about how Plexxi’s Affinities could be used to help address the problem of elephant flows. Peter Phaal speaks up in the comments to Marten’s article about how sFlow can be used to rapidly detect elephant flows, and points to a demo taking place during SC13 that shows sFlow tracking elephant flows on SCinet (the SC13 network). (A toy sketch of rate-based elephant flow detection appears after this list.)
  • Want some additional information on layer 2 and layer 3 services in VMware NSX? Here’s a good source.
  • This looks interesting, but I’m not entirely sure how I might go about using it. Any thoughts?
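
Since a couple of the links above lean on detecting elephant flows quickly, here’s a toy sketch of the underlying idea: watch per-flow rates and flag flows that consume a meaningful share of a link. The flow records and threshold are invented for illustration; real sFlow-based detection works from packet samples rather than a tidy dictionary.

    # Toy elephant-flow detector: flag flows whose observed rate exceeds
    # a fixed share of link capacity. Flow records and threshold are
    # invented for illustration; sFlow works from packet samples.
    LINK_BPS = 10e9                  # 10 Gbps link
    THRESHOLD = 0.10 * LINK_BPS      # call 10% of the link an elephant

    flows = {                        # (src, dst, dst_port) -> observed bits/sec
        ("10.0.0.1", "10.0.0.2", 443): 4.2e9,   # large, long-lived transfer
        ("10.0.0.3", "10.0.0.4", 53): 2.0e5,    # mouse flow
    }

    elephants = {flow: bps for flow, bps in flows.items() if bps >= THRESHOLD}
    print(elephants)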


Servers/Hardware

Nothing this time around, but I’ll keep my eyes peeled for something to include next time!


Security

I don’t have anything to share this time—feel free to suggest something to include next time.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this post on getting the most out of HAProxy—in which Twilio walks through some of the configuration options they’re using and why—to be quite helpful. If you’re relatively new to HAProxy, as I am, then I’d recommend giving this post a look.
  • This list is reasonably handy if you’re not a Terminal guru. While written for OS X, most of these tips apply to Linux or other Unix-like operating systems as well. I particularly liked tip #3, as I didn’t know about that particular shortcut.
  • Mike Preston has a great series going on tuning Debian Linux running under vSphere. In part 1, he covered installation, primarily centered around LVM and file system mount options. In part 2, Mike discusses things like using the appropriate virtual hardware, the right kernel modules for VMXNET3, getting rid of unnecessary hardware (like the virtual floppy), and similar tips. Finally, in part 3, he talks about a hodgepodge of tips—things like blacklisting other unnecessary kernel drivers, time synchronization, and modifying the Linux I/O scheduler. All good stuff, thanks Mike!


Storage

  • “Captain KVM,” aka Jon Benedict, takes on the discussion of enterprise storage vs. open source storage solutions in OpenStack environments. One good point that Jon makes is that solutions need to be evaluated on a variety of criteria. In other words, it’s not just about cost nor is it just about performance. You need to use the right solution for your particular needs. It’s nice to see Jon say that if your needs are properly met by an open source solution, then “by all means stick with Ceph, Gluster, or any of the other cool software storage solutions out there.” More vendors need to adopt this viewpoint, in my humble opinion. (By the way, if you’re thinking of using NetApp storage in an OpenStack environment, here’s a “how to” that Jon wrote.)
  • Duncan Epping has a quick post about a VMware KB article update regarding EMC VPLEX and Storage DRS/Storage IO Control. The update is actually applicable to all vMSC configurations, so have a look at Duncan’s article if you’re using or considering the use of vMSC in your environment.
  • Vladan Seget has a look at Microsoft ReFS.


I’d better wrap it up here so this doesn’t get too long for folks. As always, your courteous comments and feedback are welcome, so feel free to start (or join) the discussion below.


Recently a couple of open source software (OSS)-related announcements have passed through my Inbox, so I thought I’d make brief mention of them here on the site.

Mirantis OpenStack

Last week Mirantis announced the general availability of Mirantis OpenStack, its own commercially-supported OpenStack distribution. Mirantis joins a number of other vendors also offering OpenStack distributions, though Mirantis claims to be different on the basis that its OpenStack distribution is not tied to a particular Linux distribution. Mirantis is also differentiating through support for some additional projects:

  • Fuel (Mirantis’ own OpenStack deployment tool)
  • Savanna (for running Hadoop on OpenStack)
  • Murano (a service for assisting in the deployment of Windows-based services on OpenStack)

It’s fairly clear to me that at this stage in OpenStack’s lifecycle, professional services are a big play in helping organizations stand up OpenStack (few organizations have the deep expertise to really stand up sizable installations of OpenStack on their own). However, I’m not yet convinced that building and maintaining your own OpenStack distribution is going to be as useful and valuable for the smaller players, given the pending competition from the major open source players out there. Of course, I’m not an expert, so I could be wrong.

Inktank Ceph Enterprise

Ceph, the open source distributed storage system, is now available in a fully-supported version aimed at enterprise markets. Inktank has announced Inktank Ceph Enterprise, a bundle of software and support aimed at increasing the adoption of Ceph among enterprise customers. Inktank Ceph Enterprise will include:

  • Open source Ceph (version 0.67)
  • New “Calamari” graphical manager that provides management tools and performance data with the intent of simplifying management and operation of Ceph clusters
  • Support services provided by Inktank; this includes technical support, hot fixes, bug prioritization, and roadmap input

Given Ceph’s integration with OpenStack, CloudStack, and open source hypervisors and hypervisor management tools (such as libvirt), it will be interesting to see how Inktank Ceph Enterprise takes off. Will the adoption of Inktank Ceph Enterprise be gated by enterprise adoption of these related open source technologies, or will it help drive their adoption? I wonder if it would make sense for Inktank to pursue some integration with VMware, given VMware’s strong position in the enterprise market. One thing is for certain: it will be interesting to see how things play out.

As always, feel free to speak up in the comments to share your thoughts on these announcements (or any other related topic). All courteous comments are welcome.


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.


Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—this is a KVM guest with CentOS 6.3 running the Floodlight open source controller.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options”, Brent Salisbury—a rock star among the new breed of network engineers emerging in the world of SDN—discusses some practical deployment strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.


Servers/Hardware

  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)


Security

Nothing this time around, but I’ll keep my eyes peeled for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, so it’s hard to remember the specific command and/or syntax when I do need one of them.
  • I wouldn’t have initially considered comparing Docker and Chef, but considering that I’m not an expert in either technology it could just be my limited understanding. However, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusions that it is entirely possible that we’ll see Docker and Chef being used together. However, as I stated, I’m not an expert in either technology, so my view may be incorrect. (I reserve the right to revise my view in the future.)


Storage

  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some ideas for enhancing FC error detection and notification with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.


That’s it for this time around, but feel free to continue to conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!


This is a liveblog of Intel Developer Forum (IDF) 2013 session EDCS003, titled “Enhancing OpenStack with Intel Technologies for Public, Private, and Hybrid Cloud.” The presenters are Girish Gopal and Malini Bhandaru, both with Intel.

Gopal starts off by showing the agenda, which will provide an overview of Intel and OpenStack, and then dive into some specific integrations in the various OpenStack projects. The session will wrap up with a discussion of Intel IT Open Cloud, which is based on OpenStack. Intel is a Gold Member of the OpenStack Foundation, has made contributions to a variety of OpenStack projects (tools, features, fixes, and optimizations), has built its own OpenStack-based private cloud, and is providing additional information and support via the Intel Cloud Builders program.

Ms. Bhandaru takes over to provide an overview of the OpenStack architecture. (Not surprisingly, they use the diagram prepared by Ken Pepple.) She tells attendees that Intel has contributed bits and pieces to many of the various OpenStack projects. Next, she dives a bit deeper into some OpenStack Compute-specific contributions.

The first contribution she mentions is Trusted Compute Pools (TCP), which was enabled in the Folsom release. TCP relies upon the Trusted Platform Module (TPM), which in turn builds on Intel TXT and Trusted Boot. Together with the Open Attestation (OAT) SDK, Intel has contributed a “Trust Filter” for OpenStack Compute as well as a “Trust Filter UI” for OpenStack Dashboard. These components allow for hypervisor/compute node attestation to ensure that the underlying compute nodes have not been compromised. Users can then request that their instances are scheduled onto trusted nodes.
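
To make the scheduling piece more concrete, here is a minimal sketch of what a trust-based Nova scheduler filter could look like. The host_passes() hook mirrors the Nova filter interface of that era, but the class and the stubbed attestation lookup are purely illustrative—this is not Intel’s actual Trust Filter code.

    # Sketch of a Nova-style scheduler filter that passes only hosts
    # attested as trusted. The attestation lookup is stubbed out; the
    # real Trust Filter queries an Open Attestation (OAT) server.

    def attest_host(hostname):
        """Stub: ask the attestation service for this host's trust status."""
        trusted_hosts = {"compute-01", "compute-02"}  # placeholder data
        return "trusted" if hostname in trusted_hosts else "untrusted"

    class TrustFilter(object):
        """Pass only compute hosts that attest as trusted."""

        def host_passes(self, host_state, filter_properties):
            # Only enforce trust when the request explicitly asks for it.
            if not filter_properties.get("trusted_host"):
                return True
            return attest_host(host_state.host) == "trusted"

    class _Host(object):
        """Minimal stand-in for Nova's HostState object."""
        def __init__(self, host):
            self.host = host

    print(TrustFilter().host_passes(_Host("compute-01"), {"trusted_host": True}))  # True
    print(TrustFilter().host_passes(_Host("compute-99"), {"trusted_host": True}))  # False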

Intel has also done work on TCP plus Geo-Tagging. This builds on TCP to enforce policies about where instances are allowed to run. This includes a geo attestation service and Dashboard extensions to support that functionality. This work is not yet complete, but is captured in current OpenStack blueprints.

In addition to trust, Intel has done work on security with OpenStack. Intel’s work focuses primarily around key management. Through collaboration with Rackspace, Mirantis, and some others, Intel has proposed a new key management service for OpenStack. This new service would rely upon good random number generation (which Intel strengthened in the Xeon E5 v2 release announced earlier today), secure storage (to encrypt the keys), careful integration with OpenStack Identity (Keystone) for authentication and access policies, extensive logging and auditing, high availability, and a pluggable back-end (similar to Cinder/Neutron). This would allow encryption of Swift objects, Glance images, and Cinder volumes. The key manager project is called Barbican, and it provides integration with OpenStack Identity. In the future, they are looking at creation and certification of public-private key pairs, software support for periodic background tasks, KMIP support, and potential AES-XTS support for enhanced performance. This will also leverage Intel’s AES-NI support in newer CPUs/chipsets.
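
As a rough illustration of how a client might talk to such a service, here’s a sketch against a Barbican-style REST endpoint. The port, paths, and field names follow early Barbican documentation as I understand it, and the token is a placeholder—treat the specifics as assumptions rather than a reference.

    # Storing and retrieving a secret via a Barbican-style REST API.
    # Endpoint, port, and field names are based on early Barbican docs
    # and may differ in a given deployment; the token is a placeholder.
    import json
    import requests

    BARBICAN = "http://localhost:9311/v1"
    TOKEN = "<keystone-token>"

    # Store a secret; the service returns a reference URL for retrieval.
    resp = requests.post(
        BARBICAN + "/secrets",
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
        data=json.dumps({
            "name": "demo-volume-key",
            "payload": "my-key-material",
            "payload_content_type": "text/plain",
        }),
    )
    secret_ref = resp.json()["secret_ref"]

    # Fetch the payload back, asking for it as plain text.
    payload = requests.get(
        secret_ref,
        headers={"X-Auth-Token": TOKEN, "Accept": "text/plain"},
    ).text
    print(payload)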

Intel also helped update the OpenStack Security Guide.

Next, Intel talks about how they have worked to expose hardware features into OpenStack. This would allow for greater flexibility with the Nova scheduler. This involves work in libvirt as well as OpenStack, so that OpenStack can be aware of CPU functionality (which, in turn, might allow cloud providers to charge extra for “premium images” that offer encryption support in hardware). The same goes for exposing PCI Express (PCIe) Accelerator support into OpenStack as well.
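
The detection half of this is straightforward on the host side; here’s a quick sketch (assuming a Linux machine with /proc/cpuinfo) of checking for the AES-NI feature flag that such “premium image” scheduling might key off of:

    # Check the CPU feature flags Linux reports; "aes" indicates AES-NI.
    # Assumes a Linux host with /proc/cpuinfo available.
    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    print("AES-NI available:", "aes" in cpu_flags())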

Gopal now takes over and moves the discussion into storage in OpenStack. With regard to block storage via Cinder, Intel has incorporated support to filter volumes based on availability zone, capabilities, capacity, and other features so that volumes are allocated more intelligently based on workload and type of service required. By granting greater intelligence to how volumes are allocated, cloud service providers can offer differentiated (read: premium priced) services for block storage. This work is enabled in the Grizzly release.

In addition to block storage, many OpenStack environments also leverage Swift for object storage. Intel is focused on adding erasure coding to Swift, which would reduce storage requirements in Swift deployments. Initially, erasure coding will be used for “cold” objects (objects that aren’t accessed or updated frequently); this helps preserve the service level for “hot” objects. Erasure coding would replace triple replication to reduce storage requirements in the Swift capacity tier. (Note that this is something I also discussed with SwiftStack a couple weeks ago during VMworld.)
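
To put rough numbers on the capacity claim, here’s a back-of-the-envelope comparison; the k and m values below are illustrative, not parameters Intel or Swift has committed to.

    # Raw capacity needed for 100 TB of data: 3x replication vs. a k+m
    # erasure code. The k/m values are illustrative only.
    usable_tb = 100.0

    replication_raw = usable_tb * 3                # three full copies

    k, m = 10, 4                                   # 10 data + 4 parity fragments
    ec_raw = usable_tb * (k + m) / k

    print("3x replication: %.0f TB raw (3.0x overhead)" % replication_raw)
    print("%d+%d erasure coding: %.0f TB raw (%.1fx overhead)"
          % (k, m, ec_raw, (k + m) / k))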

Intel has also developed something called COSBench, which is an open source tool that can be used to measure cloud object storage performance; COSBench is publicly available.

At this point, Gopal transitions to networking in OpenStack. This discussion focuses primarily around Intel Open Network Platform (ONP). There’s another session that will go deeper on this topic; I expect to attend that session and liveblog it as well.

The networking discussion is very brief; perhaps because there is a dedicated session for that topic. Next up is Intel’s work with OpenStack Data Collection (Ceilometer), which includes work to facilitate the transformation and collection of data from multiple publishers. In addition, Intel is looking at enhanced usage statistics to affect compute scheduling decisions (essentially this is utilization-based scheduling).

Finally, Gopal turns to a discussion of Intel IT Open Cloud, which is a private cloud within Intel. Intel is now at 77% virtualized, with 80% of all new servers being deployed in the cloud, and instances can be deployed in less than an hour. Intel estimates a savings of approximately $21 million so far. Where is Intel IT Open Cloud headed? Intel IT is looking at using all open source software for Intel IT Open Cloud (implying that it is not built entirely with open source software today). There is another session on Intel IT Open Cloud tomorrow that I will try to attend.

At this point, Gopal summarizes all of the various Intel contributions to OpenStack (I took a picture of this and posted it via Twitter) and ends the session.


Vendor Meetings at VMworld 2013

This year at VMworld, I wasn’t in any of the breakout sessions because employees aren’t allowed to register for breakout sessions in advance; we have to wait in the standby line to see if we can get in at the last minute. So, I decided to meet with some vendors that seemed interesting and get some additional information on their products. Here’s the write-up on some of the vendor meetings I’ve attended while in San Francisco.

Jeda Networks

I’ve mentioned Jeda Networks before (see here), and I was pretty excited to have the opportunity to sit down with a couple of guys from Jeda to get more details on what they’re doing. Jeda Networks describes themselves as a “software-defined storage networking” company. Given my previous role at EMC (involved in storage) and my current role at VMware focused on network virtualization (which encompasses SDN), I was quite curious.

Basically, what Jeda Networks does is create a software-based FCoE overlay on an existing Ethernet network. Jeda accomplishes this by actually programming the physical Ethernet switches (they have a series of plug-ins for the various vendors and product lines; adding a new switch just means adding a new plug-in). In the future, when OpenFlow or its derivatives become more ubiquitous, I could see using those control plane technologies to accomplish the same task. It’s a fascinating idea, though I question how valuable a software-based FCoE overlay is in a world that seems to be rapidly moving everything to IP. Even so, I’m going to keep an eye on Jeda to see how things progress.

Diablo Technologies

Diablo was a new company to me; I hadn’t heard of them before their PR firm contacted me about a meeting while at VMworld. Diablo has created what they call Memory Channel Storage, which puts NAND flash on a DIMM. Basically, it makes high-capacity flash storage accessible via the CPU’s memory bus. To take advantage of high-capacity flash in the memory bus, Diablo supplies drivers for all the major operating systems (OSes), including ESXi, and what this driver does is modify the way that page swaps are handled. Instead of page swaps moving data from memory to disk—as would be the case in a traditional virtual memory system—the page swaps happen between DRAM on the memory bus and Diablo’s flash on the memory bus. This means that page swaps are extremely fast (on the level of microseconds, not the milliseconds typically seen with disks).
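
Some quick arithmetic shows why that latency difference matters; the figures below are generic order-of-magnitude numbers, not Diablo’s published specs.

    # Ballpark cost of a burst of page swaps: disk vs. memory-bus flash.
    # Latency figures are generic order-of-magnitude assumptions.
    swap_ops = 10_000                # page swaps during a burst
    disk_latency_s = 5e-3            # ~5 ms per disk-backed swap
    flash_latency_s = 5e-6           # ~5 us per flash-on-DIMM swap

    print("disk-backed swapping: %.1f s" % (swap_ops * disk_latency_s))    # 50.0 s
    print("flash-on-DIMM swapping: %.2f s" % (swap_ops * flash_latency_s)) # 0.05 s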

To use the extra capacity, then, administrators must essentially “overcommit” their hosts. Say your hosts had 64GB of (traditional) RAM installed, but 2TB of Diablo’s DIMM-based flash installed. You’d then allocate 2TB of memory to VMs, and the hypervisor would swap pages at extremely high speed between the DRAM and the DIMM-based flash. At that point, the system DRAM almost looks like another level of cache.

This “overcommitment” technique could have some negative effects on existing monitoring systems that are unaware of the underlying hardware configuration. Memory utilization would essentially run at 100% constantly, though the speed of the DIMM-based flash on the memory bus would mean you wouldn’t take a performance hit.

In the future, Diablo is looking for ways to make their DIMM-based flash appear to an OS as addressable memory, so that the OS would just see 3.2TB (or whatever) of RAM, and access it accordingly. There are a number of technical challenges there, not the least of which is ensuring proper latency and performance characteristics. If they can resolve these technical challenges, we could be looking at a very different landscape in the near future. Consider the effects of cost-effective servers with 3TB (or more) of RAM installed. What effect might that have on modern data centers?


HyTrust

HyTrust is a company with whom I’ve been in contact for several years now (since early 2009). Although HyTrust has been profitable for some time now, they recently announced a new round of funding intended to help accelerate their growth (though they’re already on track to quadruple sales this year). I chatted with Eric Chiu, President and founder of HyTrust, and we talked about a number of areas. I was interested to learn that HyTrust had officially productized a proof-of-concept from 2010 leveraging Intel’s TPM/TXT functionality to perform attestation of ESXi hypervisors (this basically means that HyTrust can verify the integrity of the hypervisor as a trusted platform). They also recently introduced “two man” support; that is, support for actions to be approved or denied by a second party. For example, an administrator might try to delete a VM, but that deletion would need to be approved by a second party before it is allowed to proceed. HyTrust also continues to explore other integration points with related technologies, such as OpenStack, NSX, physical networking gear, and converged infrastructure. Be sure to keep an eye on HyTrust—I think they’re going to be doing some pretty cool things in the near future.


Vormetric

Vormetric interested me because they offer a data encryption product, and I was interested to see how—if at all—they integrated with VMware vSphere. It turns out they don’t integrate with vSphere at all, as their product is really more tightly integrated at the OS level. For example, their product runs natively as an agent/daemon/service on various UNIX platforms, various Linux distributions, and all recent versions of Windows Server. This gives them very fine-grained control over data access. Given their focus is on “protecting the data,” this makes sense. Vormetric also offers a few related products, like a key management solution and a certificate management solution.


SimpliVity

SimpliVity is one of a number of vendors touting “hyperconvergence,” which—as far as I can tell—basically means putting storage and compute together on the same node. (If there is a better definition, please let me know.) In that regard, they could be considered similar to Nutanix. I chatted with one of the designers of the SimpliVity OmniCube. SimpliVity uses VM-based storage controllers that leverage VMDirectPath for accelerated access to the underlying hardware, and presents that underlying hardware back to the ESXi nodes as NFS storage. Their file system—developed during the 3 years they spent in stealth mode—abstracts away the hardware so that adding OmniCubes means adding both capacity and I/O (as well as compute). They use inline deduplication not only to reduce storage capacity, but especially to avoid having to write I/Os to the storage in the first place. (Capacity isn’t usually the issue; I/Os are typically the issue.) SimpliVity’s file system enables fast backups and fast clones; although they didn’t elaborate, I would assume they are using a pointer-based system (perhaps even an optimized content-addressed storage [CAS] model) that keeps them from having to copy large amounts of data around the system. This is what enables them to do global deduplication, backups from any system to any other system, and restores from any system to any other system (system here referring to an OmniCube).
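
To illustrate the write-elimination point—and this is my own concept sketch based on the description above, not SimpliVity’s actual file system logic—here’s how content addressing turns duplicate writes into pointer updates:

    # Toy content-addressed dedup: a write whose block content already
    # exists becomes a pointer update and never reaches the back end.
    # Concept sketch only; not SimpliVity's implementation.
    import hashlib

    store = {}       # content hash -> block data ("the back end")
    file_map = {}    # (file, offset) -> content hash (pointer metadata)

    def write_block(name, offset, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in store:
            store[digest] = data           # only unique content costs an I/O
        file_map[(name, offset)] = digest  # duplicates are pointer updates

    write_block("vm1.vmdk", 0, b"A" * 4096)
    write_block("vm2.vmdk", 0, b"A" * 4096)   # duplicate: no back-end write
    print("%d logical blocks, %d physical blocks" % (len(file_map), len(store)))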

In any case, SimpliVity looks very interesting due to its feature set. It will be interesting to see how they develop and mature.

SanDisk FlashSoft

This was probably one of the more fascinating meetings I had at the conference. SanDisk FlashSoft is a flash-based caching product that supports various OSes, including an in-kernel driver for ESXi. What made this product interesting was that SanDisk brought out one of the key architects behind the solution, who went through their design philosophy and the decisions they’d made in their architecture in great detail. It was a highly entertaining discussion.

More than just entertaining, though, it was really informative. FlashSoft aims to keep their caching layer as full of dirty data as possible, rather than seeking to flush dirty data right away. The advantage this offers is that if another change to that data comes, FlashSoft can discard the earlier change and only keep the latest change—thus eliminating I/Os to the back-end disks entirely. Further, by keeping as much data in their caching layer as possible, FlashSoft has a better ability to coalesce I/Os to the back-end, further reducing the I/O load. FlashSoft supports both write-through and write-back models, and leverages a cache coherency/consistency model that allows them to support write-back with VM migration without having to leverage the network (and without having to incur the CPU overhead that comes with copying data across the network). I very much enjoyed learning more about FlashSoft’s product and architecture. It’s just a shame that I don’t have any SSDs in my home lab that would benefit from FlashSoft.
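
The “stay full of dirty data” behavior is easy to demonstrate in miniature. This toy write-back cache—a concept sketch, not FlashSoft’s implementation—shows how repeated writes to a cached block collapse into a single back-end write:

    # Toy write-back cache: overwriting a dirty block replaces the older
    # version in cache, so only the latest version is ever flushed.
    # Concept sketch only; not FlashSoft's implementation.
    cache = {}             # block address -> latest dirty data
    backend_writes = 0

    def write(addr, data):
        cache[addr] = data        # a rewrite discards the earlier change

    def flush():
        global backend_writes
        for addr in sorted(cache):   # sorted to mimic coalescing sequential I/O
            backend_writes += 1      # one back-end write per dirty block
        cache.clear()

    for i in range(100):
        write(42, ("version %d" % i).encode())   # 100 writes to one block
    flush()
    print("back-end writes issued:", backend_writes)   # 1, not 100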


SwiftStack

My last meeting of the week was with a couple folks from SwiftStack. We sat down to chat about Swift, SwiftStack, and object storage, and discussed how they are seeing the adoption of Swift in lots of areas—not just with OpenStack, either. That seems to be a pretty common misconception (that OpenStack is required to use Swift). SwiftStack is working on some nice enhancements to Swift that hopefully will show up soon, including erasure coding support and greater policy support.

Summary and Wrap-Up

I really appreciate the time that each company took to meet with me and share the details of their particular solution. One key takeaway for me was that there is still lots of room for innovation. Very cool stuff is ahead of us—it’s an exciting time to be in technology!


Welcome to Technology Short Take #35, another in my irregular series of posts that collect various articles, links and thoughts regarding data center technologies. I hope that something in here is useful to you.


Networking

  • Art Fewell takes a deeper look at the increasingly important role of the virtual switch.
  • A discussion of “statefulness” brought me again to Ivan’s post on the spectrum of firewall statefulness. It’s so easy sometimes just to revert to “it’s stateful” or “it’s not stateful,” but the reality is that it’s not quite so black-and-white.
  • Speaking of state, I like this piece by Ivan as well.
  • I tend not to link to TechTarget posts any more than I have to, because invariably the articles end up going behind a login requirement just to read them. Even so, this Q&A session with Martin Casado on managing physical and virtual worlds in parallel might be worth going through the hassle.
  • This looks interesting.
  • VMware introduced VMware NSX recently at VMworld 2013. Cisco shared some thoughts on what they termed a “software-only” approach; naturally, they have a different vision for data center networking (and that’s OK). I was a bit surprised by some of the responses to Cisco’s piece (see here and here). In the end, though, I like Greg Ferro’s statement: “It is perfectly reasonable that both companies will ‘win’.” There’s room for a myriad of views on how to solve today’s networking challenges, and each approach has its advantages and disadvantages.


Servers/Hardware

Nothing this time around, but I’ll watch for items to include in future editions. Feel free to send me links you think would be useful to include in the future!


Security

  • I found this write-up on using OVS port mirroring with Security Onion for intrusion detection and network security monitoring.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • In past presentations I’ve referenced the terms “snowflake servers” and “phoenix servers,” which I borrowed from Martin Fowler. (I don’t know if Martin coined the terms or not, but you can get more information here and here.) Recently among some of Martin’s material I saw reference to yet another term: the immutable server. It’s an interesting construct: rather than managing the configuration of servers, you simply spin up new instances when you need a new configuration; existing configurations are never changed. More information on the use of the immutable server construct is also available here. I’d be interested to hear readers’ thoughts on this idea.
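
As a concrete (if simplified) sketch of the pattern: a configuration change means building a new image and replacing the instance, never mutating it in place. The CloudAPI class below is a hypothetical stand-in for whatever provisioning API is actually in use.

    # Immutable server pattern in miniature: replace, don't reconfigure.
    # CloudAPI is a hypothetical stand-in for a real provisioning API.
    class CloudAPI(object):
        def launch(self, image):
            print("launching instance from image %s" % image)
            return "instance-of-%s" % image

        def terminate(self, instance):
            print("terminating %s" % instance)

    def deploy_new_config(cloud, old_instance, new_image):
        # Bring up the replacement first, shift traffic (elided here),
        # then retire the old instance. Nothing is modified in place.
        new_instance = cloud.launch(new_image)
        cloud.terminate(old_instance)
        return new_instance

    cloud = CloudAPI()
    current = cloud.launch("web-v1")
    current = deploy_new_config(cloud, current, "web-v2")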


Storage

  • Chris Evans takes a look at ScaleIO, recently acquired by EMC, and speculates on where ScaleIO fits into the EMC family of products relative to the evolution of storage in the data center.
  • While I was at VMworld 2013, I had the opportunity to talk with SanDisk’s FlashSoft division about their flash caching product. It was quite an interesting discussion, so stay tuned for that update (it’s almost written; expect it in the next couple of days).


Virtualization

  • The rise of new converged (or, as some vendors like to call it, “hyperconverged”) architectures means that we have to consider the impact of these new architectures when designing vSphere environments that will leverage them. I found a few articles by fellow VCDX Josh Odgers that discuss the impact of Nutanix’s converged architecture on vSphere designs. If you’re considering the use of Nutanix, have a look at some of these articles (see here, here, and here).
  • Jonathan Medd shows how to clone a VM from a snapshot using PowerCLI. Also be sure to check out this post on the vSphere CloneVM API, which Jonathan references in his own article.
  • Andre Leibovici shares an unofficial way to disable the use of the SESparse disk format and revert to VMFS Sparse.
  • Forgot the root password to your ESXi 5.x host? Here’s a procedure for resetting the root password for ESXi 5.x that involves booting on a Linux CD. As is pointed out in the comments, it might actually be easier to rebuild the host.
  • vSphere 5.5 was all the rage at VMworld 2013, and there was a lot of coverage. One thing that I didn’t see much discussion around was what’s going on with the free version of ESXi. Vladan Seget gives a nice update on how free ESXi is changing with version 5.5.
  • I am loving the micro-infrastructure series by my VMware vSphere Design co-author, Forbes Guthrie. See it here, here, and here.

It’s time to wrap up now; I’ve already included more links than I normally include (although it doesn’t seem like it). In any case, I hope that something I’ve shared here is helpful, and feel free to share your own thoughts, ideas, and feedback in the comments below. Have a great day!


This is a liveblog of the day 2 keynote at VMworld 2013 in San Francisco. For a look at what happened in yesterday’s keynote, see here. Depending on network connectivity, I may or may not be able to update this post in real-time.

The keynote kicks off with Carl Eschenbach. Supposedly there are more than 22,000 people in attendance at VMworld 2013, making it—according to Carl—the largest IT infrastructure event. (I think some other vendors might take issue with that claim.) Carl recaps the events of yesterday’s keynote, revisiting the announcements around vSphere 5.5, VMware NSX, VMware VSAN, VMware Hybrid Cloud Service, and the expansion of the availability of Cloud Foundry. “This is the power of software”, according to Carl. Carl also revisits the three “imperatives” that Pat shared yesterday:

  1. Extending virtualization to all of IT.
  2. IT management giving way to automation.
  3. Making hybrid cloud ubiquitous.

Carl brings out Kit Colbert, a principal engineer at VMware (and someone who is relatively well-recognized within the virtualization community). They show a clip from a classic “I Love Lucy” episode that is intended to help illustrate the disconnect between the line of business and IT. After a bit of back and forth about the needs of the line of business versus the needs of IT, Kit moves into a demo of vCloud Automation Center (vCAC). The demo shows how to deploy applications to a variety of different infrastructures, including the ability to look at estimated costs across those infrastructures. The demo includes various database options as well as auto-scaling options.

So what does this functionality give application owners? Choice and visibility. What does it give IT? Governance (control), all made possible by automation.

The next view of the demo takes a step deeper, showing VMware Application Director deploying the sample application (called Project Vulcan in the demo). Application Director deploys complex application topologies in an automated fashion, and includes integration with tools like Puppet and Chef. Kit points out that what they’re showing isn’t just a vApp, but a “full blown” multi-tier application being deployed end-to-end.

The scripted “banter” between Carl and Kit leads to a review of some of the improvements that were included in the vSphere 5.5 release. Kit ties this back to the demo by calling out the improvements made in vSphere 5.5 with regard to latency-sensitive workloads.

Next they move into a discussion of the networking side of the house. (My personal favorite, but I could be biased.) Kit quickly reviews how NSX works and enables the creation of logical network services that are tied to the lifecycle of the application. Kit shows tasks in vCenter Server that reflect the automation being done by NSX with regard to automatically creating load balancers, firewall rules, logical switches, etc., and then reviews how we need to deploy logical network services in coordination with application lifecycle operations.

At Carl’s prompting, Kit goes yet another level deeper into how network virtualization works. He outlines how NSX eliminates the need to configure the physical network layer to provision new logical networks, discusses how NSX can provide logical routing, and reviews the benefits of distributed east-west routing (where routing occurs locally within the hypervisor). This, naturally, leads into a discussion of the distributed firewall functionality present in NSX, where firewall functionality occurs within the hypervisor, closest to the VMs. Following the list of features in NSX, Carl brings up load balancing, and Kit shows how load balancing works in NSX.

This leads into a customer testimonial video from WestJet, who discusses how they can leverage NSX’s distributed east-west firewalling to help better control and optimize traffic patterns in the data center. WestJet also emphasizes how they can leverage their existing networking investment while still deriving tremendous value from deploying NSX and network virtualization.

Next up in the demo is a migration from a “traditional” virtual network into an NSX logical network, and Kit shows how the migration is accomplished via a vMotion operation. This leads into a discussion of how VMware can not only do “V2V” migrations into NSX logical networks, but also “P2V” migrations using NSX’s logical-to-physical bridging functionality.

That concludes the networking section of the demo, and leads Carl and Kit into a storage-focused discussion centered around Carl’s mythical Project Vulcan. The discussion initially focuses on VMware VSAN, and how IT can leverage VSAN to help address application provisioning. The demo shows how VSAN can dynamically expand capacity by adding another ESXi host to the cluster; more hosts means more capacity for the VSAN datastore. Carl says that Kit has shown him simplicity and scalability, but not resiliency. This leads Kit to a slide that shows how VSAN ensures resiliency by maintaining multiple copies of data within a VSAN datastore. If some part of the local storage backing VSAN fails, VSAN automatically copies the data elsewhere so that the policy specifying how many copies of the data must exist is maintained and enforced.

Following the VSAN demo, Carl and Kit move into a few end-user computing demonstrations, showing application access via Horizon Workspace. Kit wraps up his time on stage with a brief video—taken from “When Harry Met Sally,” if I’m not mistaken—that describes how demanding the line of business can be. The wrap-up to the demo felt quite natural and demonstrated some good chemistry between Kit and Carl.

Next up on the stage is Joe Baguley, CTO of EMEA, to discuss operations and operational concerns. Joe reviews why script- and rules-based management isn’t going to work in the new world, and why the world needs to move toward policy-based automation and management. This leads into a demo, and Joe shows—via vCAC—how vCenter Operations has initiated a performance remediation operation via the auto scale-out feature that was enabled when the application was provisioned. The demo next leads into a more detailed review of application performance via vCenter Operations.

Joe reviews three key parts of automated operations:

  1. (missed this one, sorry)
  2. Intelligent analytics
  3. Visibility into application performance

Next, Joe shows how vCenter Operations is integrating information from a variety of partners to help make intelligent recommendations, one of which is that Carl should change the storage tier based on the disk I/O requirements of his Project Vulcan application. vCAC will show the estimated cost of that change, and when the administrator approves that change, vSphere will leverage Storage vMotion to migrate to a new storage tier.

The discussion between Carl and Joe leads up to a demo of VMware Log Insight, where Joe shows events being pulled from a wide variety of sources to help drill down to the root cause of the storage issue in the demonstration. VMworld attendees (or possibly anyone, I guess) are encouraged to try out Log Insight by simply following @VMLogInsight on Twitter (they will give 5 free licenses to new followers).

Next up in the demo is a discussion of vCloud Hybrid Service, showing how the vSphere Web Client can be used to manage templates in vCHS. Joe brings the demo full-circle by taking us back to vCAC to deploy Project Vulcan into vCHS via vCAC. Carl reviews some of the benefits of vCHS, and asks Joe to share a few use cases. Joe shares that test/dev, new applications (perhaps built on Cloud Foundry?), and rapid capacity expansion are good use cases for vCHS.

Carl wraps up the day 2 keynote by summarizing the technologies that were displayed during today’s general session, and how all these technologies come together to help organizations deliver IT-as-a-service (ITaaS). Carl also makes commitments that VMware’s SDDC efforts will protect and leverage customers’ existing investments and help leverage existing skill sets. He closes the session with the phrase, “Champions drive change, so go drive change, and defy convention!”

And that concludes the day 2 keynote.


Welcome to Technology Short Take #34, my latest collection of links, articles, thoughts, and ideas from around the web, centered on key data center technologies. Enjoy!


Networking

  • Henry Louwers has a nice write-up on some of the design considerations that go into selecting a Citrix NetScaler solution.
  • Scott Hogg explores jumbo frames and their benefits/drawbacks in a clear and concise manner. It’s worth reading if you aren’t familiar with jumbo frames and some of the considerations around their use.
  • The networking “old guard” likes to talk about how x86 servers and virtualization create network bottlenecks due to performance concerns, but as Ivan points out in this post, it’s rapidly becoming—or has already become—a non-issue. (By the way, if you’re not already reading all of Ivan’s content, you need to be. Seriously.)
  • Greg Ferro, aka EtherealMind, has a great series of articles on overlay networking (a component technology used in a number of network virtualization solutions). Greg starts out with a quick look at the value prop for overlay networking. In addition to highlighting one key value of overlay networking—that decoupling the logical network from the physical network enables more rapid change and innovation—Greg also establishes that overlay networking is not new. Greg continues with a more detailed look at how overlay networking works. Finally, Greg takes a look at whether overlay networking and the physical network should be integrated; he arrives at the conclusion that integrating the two is likely to be unsuccessful given the history of such attempts in the past. (For a sense of just how thin the encapsulation layer is, see the short VXLAN header sketch after this list.)
  • Terry Slattery ruminates on the power of creating (and using) the right abstraction in networking. The value of the “right abstraction” has come up a number of times; it was a featured discussion point of Martin Casado’s talk at the OpenStack Summit in Portland in April, and takes center stage in a recent post over at Network Heresy.
  • Here’s a decent two-part series about running Vyatta on VMware Workstation (part 1 and part 2).
  • Could we use OpenFlow to build better internet exchanges? Here’s one idea.
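
Since several of the links above deal with overlay networking, here’s a small sketch of just how thin the encapsulation layer is—this packs the 8-byte VXLAN header from RFC 7348 by hand:

    # Pack a VXLAN header (RFC 7348): 8 bytes carrying a 24-bit VNI.
    # Layout: flags(1) + reserved(3) + VNI(3) + reserved(1).
    import struct

    def vxlan_header(vni):
        flags = 0x08   # "I" bit set: the VNI field is valid
        return struct.pack("!B3s3sB", flags, b"\x00" * 3,
                           vni.to_bytes(3, "big"), 0)

    print(vxlan_header(5001).hex())   # 0800000000138900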



Servers/Hardware

I have nothing to share this time around, but I’ll keep watch for content to include in future Technology Short Takes.

Cloud Computing/Cloud Management

  • Tom Fojta takes a look at integrating vCloud Automation Center (vCAC) with vCloud Director in this post. (By the way, congrats to Tom on becoming the first VCDX-Cloud!)
  • In case you missed it, here’s the recording for the #vBrownBag session with Jon Harris on vCAC. (I had the opportunity to hear Jon speak about his employer’s vCAC deployment and some of the lessons learned at a recent New Mexico VMUG meeting.)

Operating Systems/Applications


Storage

  • Rawlinson Rivera starts to address a lack of available information about Virsto in the first of a series of posts on VMware Virsto. This initial post provides an introduction to Virsto; future posts will provide more in-depth technical details (which is what I’m really looking forward to getting).
  • Nigel Poulton talks a bit about target driven zoning, something I’ve mentioned before on this site. For more information on target driven zoning (also referred to as peer zoning), also be sure to check out Erik Smith’s blog.
  • Now that he’s had some time to come up to speed in his new role, Frank Denneman has started a great series on the basic elements of PernixData’s Flash Virtualization Platform (FVP). You can read part 1 here and part 2 here. I’m looking forward to future parts in this series.
  • I’d often wondered this myself, and now Cormac Hogan has the answer: why is uploading files to VMFS so slow? Good information.


It’s time to wrap up now, or this Technology Short Take is going to turn into a Technology Long Take. Anyway, I hope you found something useful in this little collection. If you have any feedback or suggestions for improvement for future posts, feel free to speak up in the comments below.


EMC announced ViPR today, the culmination of the not-so-secret Project Bourne and its lesser-known predecessor, Project Orion. Although I used to work at EMC before I joined VMware earlier this year, I never really had deep access to what was going on with this project, so my thoughts here are strictly based on what’s been publicly disclosed. Naturally, given that the product was only announced today, these are very early thoughts.

Naturally, Chad Sakac has a write-up on ViPR and what led up to its creation. It’s worth having a read, but allocate plenty of time (it is a bit on the long side).

Based on the limited material that is publicly available so far, here are a few thoughts about ViPR:

  • I like the control plane-data plane separation model that EMC is taking with ViPR. I’ve had a few conversations about network virtualization and software-defined networking (SDN) recently (see here and here) and the amorphous use of the term “software-defined.” In fact, my good friend Matthew Leib wrote a post about software-defined storage in response to an exchange of tweets about the overuse of “software-defined [insert whatever here]”. If we go back to the original definition of what SDN meant, it referred to the separation of the networking control plane from the networking data plane and the architectural changes resulting from that separation. SDN wasn’t (and isn’t) about virtualizing network switches, routers, or firewalls; that’s NFV (Network Functions Virtualization). Similarly, running storage controller software as virtual machines isn’t software-defined storage, it’s the storage equivalent of NFV (SFV?). Separating the storage control plane from the storage data plane is a much closer storage analogy to SDN, in my opinion. I’m sure that EMC hopes that it will spark a renaissance in storage the way SDN has sparked a renaissance in networking (more on that below).

  • I like that EMC is including a variety of object storage APIs, including Atmos, AWS S3, and OpenStack Swift, and that there is API support for OpenStack Cinder and OpenStack Glance as well. It would have been the wrong move not to support these APIs in ViPR—in my opinion, EMC won’t get another opportunity like this to broaden their API and platform support.

  • Obviously, a key difference between SDN and SDS a la ViPR is openness. While EMC proclaims the openness of the solution based on broad API support, 3rd party back-end storage support, a public northbound API, and source code and examples for third-party southbound “plugins” for other platforms, the reality is that this separation of control plane and data plane is being driven by a vendor rather than as a result of collaboration between academic research and industry. The reason this distinction is important is that it’s one thing for a networking vendor to build OpenFlow support into its switches when OpenFlow wasn’t and isn’t created/controlled by a competing vendor, but it’s another thing for a storage vendor to build support into their products for a solution that belongs to EMC. Whether this really matters or not remains to be seen—it may be a non-issue. (Yes, I recognize the irony in the fact that I work for VMware, some of whose solutions might be similarly criticized with regard to openness.)

  • Hey, where’s the network virtualization support? ;-)

Anyway, those are my initial thoughts. Since I haven’t had access to more detailed information on what it does/doesn’t support or how it works, I reserve the right to revise these thoughts and impressions after I get more exposure to ViPR. In the meantime, feel free to add your own thoughts in the comments below. Courteous comments are always welcome (but do please add vendor affiliations where applicable)!

UPDATE: It was brought to my attention I misspelled Matthew Leib’s last name; that has been corrected. My apologies Matt!


Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.


Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I’ll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree that OpenStack Networking (formerly Quantum) should be the basis here; he says it shouldn’t be (not well-known in the enterprise), I say it should be (already represents work created collaboratively by multiple vendors and allows for different back-end implementations).
  • Need a reasonable introduction to OpenFlow? This post gives a good introduction to OpenFlow, and the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.


Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP’s new ARM-based offerings. I think that Chris may have accidentally overlooked the fact that these servers are not x86-based; therefore, a hypervisor such as vSphere is not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market for this is massively parallel workloads that will benefit from having many different cores available. It will be interesting to see how the support of a “Tier 1” hardware vendor like HP affects the adoption of ARM in the enterprise.


Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.


Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This “double edged sword” argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel’s article specifically refers to the 3PAR ASIC and its relationship to “flash as cache” functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?


Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off on Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use 1 link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall. (The toy hash sketch after this list shows why a single flow stays pinned to one link.)
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.
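
On the LACP nit above: member link selection is a per-flow hash, which is why a single flow can never exceed one link’s bandwidth. A toy version of the idea (real switches hash on MAC/IP/port tuples, but the effect is the same):

    # Toy LACP-style link selection: every packet of a given flow hashes
    # to the same member link, capping one flow at one link's speed.
    def pick_link(src_ip, dst_ip, num_links=2):
        return hash((src_ip, dst_ip)) % num_links

    flow = ("10.0.0.5", "10.0.0.9")
    links = {pick_link(*flow) for _ in range(1000)}
    print("links used by this flow:", links)   # always exactly one link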

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!

