
Vendor Meetings at VMworld 2013

This year at VMworld, I wasn’t in any of the breakout sessions because employees aren’t allowed to register for breakout sessions in advance; we have to wait in the standby line to see if we can get in at the last minute. So, I decided to meet with some vendors that seemed interesting and get some additional information on their products. Here’s the write-up on some of the vendor meetings I attended while in San Francisco.

Jeda Networks

I’ve mentioned Jeda Networks before (see here), and I was pretty excited to have the opportunity to sit down with a couple of guys from Jeda to get more details on what they’re doing. Jeda Networks describes themselves as a “software-defined storage networking” company. Given my previous role at EMC (involved in storage) and my current role at VMware focused on network virtualization (which encompasses SDN), I was quite curious.

Basically, what Jeda Networks does is create a software-based FCoE overlay on an existing Ethernet network. Jeda accomplishes this by actually programming the physical Ethernet switches (they have a series of plug-ins for the various vendors and product lines; adding a new switch just means adding a new plug-in). In the future, when OpenFlow or its derivatives become more ubiquitous, I could see using those control plane technologies to accomplish the same task. It’s a fascinating idea, though I question how valuable a software-based FCoE overlay is in a world that seems to be rapidly moving everything to IP. Even so, I’m going to keep an eye on Jeda to see how things progress.
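To make the plugin idea concrete, here's a minimal sketch of that kind of per-vendor plugin model. All class and method names here are illustrative assumptions on my part, not Jeda's actual API; the point is simply that the overlay logic stays generic while vendor support lives in plugins.

```python
# Hypothetical sketch of a per-vendor switch plugin registry: the overlay
# provisioning logic stays generic, and supporting a new switch family just
# means registering another plugin. Names are illustrative, not Jeda's API.

class SwitchPlugin:
    """Translates a generic FCoE overlay intent into vendor-specific config."""
    def configure_fcoe_vlan(self, switch, vlan_id):
        raise NotImplementedError

class CiscoNexusPlugin(SwitchPlugin):
    def configure_fcoe_vlan(self, switch, vlan_id):
        # Vendor-specific CLI/API calls would go here.
        return f"{switch}: vlan {vlan_id} ; fcoe vlan {vlan_id}"

PLUGINS = {"cisco-nexus": CiscoNexusPlugin()}

def provision_overlay(switches, vlan_id):
    # The controller picks the right plugin per switch model; the overlay
    # logic itself never changes when a new vendor plugin is added.
    return [PLUGINS[model].configure_fcoe_vlan(name, vlan_id)
            for name, model in switches]

cmds = provision_overlay([("sw1", "cisco-nexus")], 1002)
```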

Diablo Technologies

Diablo was a new company to me; I hadn’t heard of them before their PR firm contacted me about a meeting while at VMworld. Diablo has created what they call Memory Channel Storage, which puts NAND flash on a DIMM. Basically, it makes high-capacity flash storage accessible via the CPU’s memory bus. To take advantage of this capacity, Diablo supplies drivers for all the major operating systems (OSes), including ESXi; these drivers modify the way page swaps are handled. Instead of page swaps moving data from memory to disk—as would be the case in a traditional virtual memory system—the page swaps happen between DRAM on the memory bus and Diablo’s flash on the memory bus. This means that page swaps are extremely fast (on the order of microseconds, not the milliseconds typically seen with disks).

To use the extra capacity, then, administrators must essentially “overcommit” their hosts. Say your hosts had 64GB of (traditional) RAM installed, but 2TB of Diablo’s DIMM-based flash installed. You’d then allocate 2TB of memory to VMs, and the hypervisor would swap pages at extremely high speed between the DRAM and the DIMM-based flash. At that point, the system DRAM almost looks like another level of cache.
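Some back-of-the-envelope math shows why this works. The latency figures below are illustrative assumptions of mine, not Diablo's published numbers, but they capture the orders of magnitude involved:

```python
# Illustrative sketch of why flash-on-the-memory-bus changes the overcommit
# math. The latency figures are rough assumptions, not vendor numbers.

DRAM_NS = 100           # ~100 ns for a DRAM access
FLASH_DIMM_NS = 5_000   # a few microseconds for flash on the memory bus
DISK_NS = 5_000_000     # ~5 ms for a spinning-disk page-in

def avg_access_ns(hit_ratio, swap_ns):
    """Average access time when `hit_ratio` of touches land in DRAM and
    the rest trigger a page swap to the given backing tier."""
    return hit_ratio * DRAM_NS + (1 - hit_ratio) * swap_ns

# 64 GB of DRAM fronting 2 TB of allocated memory: assume a working set
# where 90% of touches still hit DRAM, with the rest paging in.
disk_backed = avg_access_ns(0.90, DISK_NS)    # disk swap: brutal
flash_backed = avg_access_ns(0.90, FLASH_DIMM_NS)  # DIMM flash: tolerable
```

With disk as the swap target, the 10% miss rate dominates everything; with flash on the memory bus, average access time stays in the sub-microsecond range, which is why the DRAM "almost looks like another level of cache."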

This “overcommitment” technique could have some negative effects on existing monitoring systems that are unaware of the underlying hardware configuration. Memory utilization would essentially run at 100% constantly, though the speed of the DIMM-based flash on the memory bus would mean you wouldn’t take a performance hit.

In the future, Diablo is looking for ways to make their DIMM-based flash appear to an OS as addressable memory, so that the OS would just see 3.2TB (or whatever) of RAM, and access it accordingly. There are a number of technical challenges there, not the least of which is ensuring proper latency and performance characteristics. If they can resolve these technical challenges, we could be looking at a very different landscape in the near future. Consider the effects of cost-effective servers with 3TB (or more) of RAM installed. What effect might that have on modern data centers?

HyTrust

HyTrust is a company with which I’ve been in contact for several years now (since early 2009). Although HyTrust has been profitable for some time now, they recently announced a new round of funding intended to help accelerate their growth (though they’re already on track to quadruple sales this year). I chatted with Eric Chiu, President and founder of HyTrust, and we talked about a number of areas. I was interested to learn that HyTrust had officially productized a proof-of-concept from 2010 leveraging Intel’s TPM/TXT functionality to perform attestation of ESXi hypervisors (this basically means that HyTrust can verify the integrity of the hypervisor as a trusted platform). They also recently introduced “two-man” support; that is, support for actions to be approved or denied by a second party. For example, an administrator might try to delete a VM, but that deletion would need to be approved by a second party before it is allowed to proceed. HyTrust also continues to explore other integration points with related technologies, such as OpenStack, NSX, physical networking gear, and converged infrastructure. Be sure to keep an eye on HyTrust—I think they’re going to be doing some pretty cool things in the near future.

Vormetric

Vormetric interested me because they offer a data encryption product, and I was interested to see how—if at all—they integrated with VMware vSphere. It turns out they don’t integrate with vSphere at all, as their product is really more tightly integrated at the OS level. For example, their product runs natively as an agent/daemon/service on various UNIX platforms, various Linux distributions, and all recent versions of Windows Server. This gives them very fine-grained control over data access. Given their focus is on “protecting the data,” this makes sense. Vormetric also offers a few related products, like a key management solution and a certificate management solution.

SimpliVity

SimpliVity is one of a number of vendors touting “hyperconvergence,” which—as far as I can tell—basically means putting storage and compute together on the same node. (If there is a better definition, please let me know.) In that regard, they could be considered similar to Nutanix. I chatted with one of the designers of the SimpliVity OmniCube. SimpliVity uses VM-based storage controllers that leverage VMDirectPath for accelerated access to the underlying hardware, and present that underlying hardware back to the ESXi nodes as NFS storage. Their file system—developed during the 3 years they spent in stealth mode—abstracts away the hardware so that adding OmniCubes means adding both capacity and I/O (as well as compute). They use inline deduplication not only to reduce storage capacity, but especially to avoid having to write I/Os to the storage in the first place. (Capacity isn’t usually the issue; I/Os are typically the issue.) SimpliVity’s file system enables fast backups and fast clones; although they didn’t elaborate, I would assume they are using a pointer-based system (perhaps even an optimized content-addressed storage [CAS] model) that keeps them from having to copy large amounts of data around the system. This is what enables them to do global deduplication, backups from any system to any other system, and restores from any system to any other system (system here referring to an OmniCube).
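My pointer-based/CAS speculation can be sketched in miniature. To be clear, this is my guess at the mechanism, not SimpliVity's actual file system; it just shows why duplicate writes become pointer updates and why clones cost almost nothing:

```python
import hashlib

# Toy content-addressed store sketching a pointer-based dedup model (my
# speculation, not SimpliVity's implementation). Each unique block is
# stored once, keyed by its hash; duplicate writes and clones only touch
# pointers, so they generate no back-end I/O.

class ContentStore:
    def __init__(self):
        self.blocks = {}          # digest -> block data (unique blocks only)
        self.files = {}           # filename -> list of digests (pointers)
        self.backend_writes = 0   # counts actual data writes to the back end

    def write(self, name, data, block_size=4):
        digests = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            d = hashlib.sha256(block).hexdigest()
            if d not in self.blocks:      # only new content hits the back end
                self.blocks[d] = block
                self.backend_writes += 1
            digests.append(d)
        self.files[name] = digests

    def clone(self, src, dst):
        # A clone is just a copy of the pointer list; no data moves.
        self.files[dst] = list(self.files[src])

store = ContentStore()
store.write("vm1.vmdk", b"AAAABBBBAAAA")   # block "AAAA" appears twice
store.clone("vm1.vmdk", "vm2.vmdk")        # instant, zero data copied
```

Twelve bytes written, but only two unique blocks ever reach the back end, and the clone is free. Extend the digest namespace across nodes and you get the global dedup and any-to-any backup/restore behavior described above.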

In any case, SimpliVity looks very interesting due to its feature set. It will be interesting to see how they develop and mature.

SanDisk FlashSoft

This was probably one of the more fascinating meetings I had at the conference. SanDisk FlashSoft is a flash-based caching product that supports various OSes, including an in-kernel driver for ESXi. What made this product interesting was that SanDisk brought out one of the key architects behind the solution, who went through their design philosophy and the decisions they’d made in their architecture in great detail. It was a highly entertaining discussion.

More than just entertaining, though, it was really informative. FlashSoft aims to keep their caching layer as full of dirty data as possible, rather than seeking to flush dirty data right away. The advantage this offers is that if another change to that data comes, FlashSoft can discard the earlier change and only keep the latest change—thus eliminating I/Os to the back-end disks entirely. Further, by keeping as much data in their caching layer as possible, FlashSoft has a better ability to coalesce I/Os to the back-end, further reducing the I/O load. FlashSoft supports both write-through and write-back models, and leverages a cache coherency/consistency model that allows them to support write-back with VM migration without having to leverage the network (and without having to incur the CPU overhead that comes with copying data across the network). I very much enjoyed learning more about FlashSoft’s product and architecture. It’s just a shame that I don’t have any SSDs in my home lab that would benefit from FlashSoft.
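The dirty-data behavior described above is easy to see in a minimal sketch. This is purely illustrative (not FlashSoft's implementation): a rewrite of a dirty block replaces the earlier copy in cache, so the first version never reaches the disks, and a flush coalesces whatever remains.

```python
# Minimal write-back cache sketch: dirty data is held in cache rather than
# flushed eagerly, rewrites of the same block discard the earlier dirty
# copy, and a flush coalesces to one back-end write per block. Purely
# illustrative; not FlashSoft's actual design.

class WriteBackCache:
    def __init__(self):
        self.dirty = {}          # block address -> latest data
        self.backend_writes = 0  # I/Os actually issued to the back-end disks

    def write(self, addr, data):
        # A second write to a dirty block simply overwrites it in cache,
        # eliminating the back-end I/O for the earlier version entirely.
        self.dirty[addr] = data

    def flush(self):
        # Coalesce: one back-end write per dirty block, no matter how many
        # times each block was rewritten while it sat in cache.
        self.backend_writes += len(self.dirty)
        flushed = dict(self.dirty)
        self.dirty.clear()
        return flushed

cache = WriteBackCache()
cache.write(100, b"v1")
cache.write(100, b"v2")   # v1 is discarded; it never hits the disks
cache.write(101, b"v1")
cache.flush()             # two back-end writes for three front-end writes
```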

SwiftStack

My last meeting of the week was with a couple of folks from SwiftStack. We sat down to chat about Swift, SwiftStack, and object storage, and discussed how they are seeing the adoption of Swift in lots of areas—not just with OpenStack, either. That seems to be a pretty common misconception (that OpenStack is required to use Swift). SwiftStack is working on some nice enhancements to Swift that hopefully will show up soon, including erasure coding support and greater policy support.

Summary and Wrap-Up

I really appreciate the time that each company took to meet with me and share the details of their particular solution. One key takeaway for me was that there is still lots of room for innovation. Very cool stuff is ahead of us—it’s an exciting time to be in technology!


Welcome to Technology Short Take #35, another in my irregular series of posts that collect various articles, links and thoughts regarding data center technologies. I hope that something in here is useful to you.

Networking

  • Art Fewell takes a deeper look at the increasingly important role of the virtual switch.
  • A discussion of “statefulness” brought me again to Ivan’s post on the spectrum of firewall statefulness. It’s so easy sometimes just to revert to “it’s stateful” or “it’s not stateful,” but the reality is that it’s not quite so black-and-white.
  • Speaking of state, I like this piece by Ivan as well.
  • I tend not to link to TechTarget posts any more than I have to, because invariably the articles end up going behind a login requirement just to read them. Even so, this Q&A session with Martin Casado on managing physical and virtual worlds in parallel might be worth going through the hassle.
  • This looks interesting.
  • VMware introduced VMware NSX recently at VMworld 2013. Cisco shared some thoughts on what they termed a “software-only” approach; naturally, they have a different vision for data center networking (and that’s OK). I was a bit surprised by some of the responses to Cisco’s piece (see here and here). In the end, though, I like Greg Ferro’s statement: “It is perfectly reasonable that both companies will ‘win’.” There’s room for a myriad of views on how to solve today’s networking challenges, and each approach has its advantages and disadvantages.

Servers/Hardware

Nothing this time around, but I’ll watch for items to include in future editions. Feel free to send me links you think would be useful to include in the future!

Security

  • Here’s a write-up on using OVS port mirroring with Security Onion for intrusion detection and network security monitoring.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • In past presentations I’ve referenced the terms “snowflake servers” and “phoenix servers,” which I borrowed from Martin Fowler. (I don’t know if Martin coined the terms or not, but you can get more information here and here.) Recently among some of Martin’s material I saw reference to yet another term: the immutable server. It’s an interesting construct: rather than managing the configuration of servers, you simply spin up new instances when you need a new configuration; existing configurations are never changed. More information on the use of the immutable server construct is also available here. I’d be interested to hear readers’ thoughts on this idea.
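The immutable-server construct above can be sketched in a few lines. The names here are illustrative, not from any real tooling; the point is that a "configuration change" produces replacement instances rather than mutating running ones:

```python
from dataclasses import dataclass, replace

# Sketch of the immutable-server idea: a server's configuration is frozen
# at creation, and a "change" means spinning up new instances and retiring
# the old ones, never editing in place. Names are illustrative.

@dataclass(frozen=True)
class Server:
    image: str        # the baked machine image the instance booted from
    app_version: str

def reconfigure(fleet, **changes):
    # Build replacement instances with the new configuration; the existing
    # instances are left untouched (and would be terminated, not modified).
    return [replace(srv, **changes) for srv in fleet]

fleet = [Server(image="web-2013-08", app_version="1.4")]
new_fleet = reconfigure(fleet, app_version="1.5")
```

Because the dataclass is frozen, any attempt to mutate a running instance raises an error, which is exactly the discipline the pattern enforces: configuration drift becomes impossible by construction.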

Storage

  • Chris Evans takes a look at ScaleIO, recently acquired by EMC, and speculates on where ScaleIO fits into the EMC family of products relative to the evolution of storage in the data center.
  • While I was at VMworld 2013, I had the opportunity to talk with SanDisk’s FlashSoft division about their flash caching product. It was quite an interesting discussion, so stay tuned for that update (it’s almost written; expect it in the next couple of days).

Virtualization

  • The rise of new converged (or, as some vendors like to call it, “hyperconverged”) architectures means that we have to consider the impact of these new architectures when designing vSphere environments that will leverage them. I found a few articles by fellow VCDX Josh Odgers that discuss the impact of Nutanix’s converged architecture on vSphere designs. If you’re considering the use of Nutanix, have a look at some of these articles (see here, here, and here).
  • Jonathan Medd shows how to clone a VM from a snapshot using PowerCLI. Also be sure to check out this post on the vSphere CloneVM API, which Jonathan references in his own article.
  • Andre Leibovici shares an unofficial way to disable the use of the SESparse disk format and revert to VMFS Sparse.
  • Forgot the root password to your ESXi 5.x host? Here’s a procedure for resetting the root password for ESXi 5.x that involves booting on a Linux CD. As is pointed out in the comments, it might actually be easier to rebuild the host.
  • vSphere 5.5 was all the rage at VMworld 2013, and there was a lot of coverage. One thing that I didn’t see much discussion around was what’s going on with the free version of ESXi. Vladan Seget gives a nice update on how free ESXi is changing with version 5.5.
  • I am loving the micro-infrastructure series by my VMware vSphere Design co-author, Forbes Guthrie. See it here, here, and here.

It’s time to wrap up now; I’ve already included more links than I normally include (although it doesn’t seem like it). In any case, I hope that something I’ve shared here is helpful, and feel free to share your own thoughts, ideas, and feedback in the comments below. Have a great day!


This is a liveblog of the day 2 keynote at VMworld 2013 in San Francisco. For a look at what happened in yesterday’s keynote, see here. Depending on network connectivity, I may or may not be able to update this post in real-time.

The keynote kicks off with Carl Eschenbach. Supposedly there are more than 22,000 people in attendance at VMworld 2013, making it—according to Carl—the largest IT infrastructure event. (I think some other vendors might take issue with that claim.) Carl recaps the events of yesterday’s keynote, revisiting the announcements around vSphere 5.5, VMware NSX, VMware VSAN, VMware Hybrid Cloud Service, and the expansion of the availability of Cloud Foundry. “This is the power of software”, according to Carl. Carl also revisits the three “imperatives” that Pat shared yesterday:

  1. Extending virtualization to all of IT.
  2. IT management giving way to automation.
  3. Making hybrid cloud ubiquitous.

Carl brings out Kit Colbert, a principal engineer at VMware (and someone who is relatively well-recognized within the virtualization community). They show a clip from a classic “I Love Lucy” episode that is intended to help illustrate the disconnect between the line of business and IT. After a bit of back and forth about the needs of the line of business versus the needs of IT, Kit moves into a demo of vCloud Automation Center (vCAC). The demo shows how to deploy applications to a variety of different infrastructures, including the ability to look at estimated costs across those infrastructures. The demo includes various database options as well as auto-scaling options.

So what does this functionality give application owners? Choice and visibility. What does it give IT? Governance (control), all made possible by automation.

The next view of the demo takes a step deeper, showing vCloud Application Director deploying the sample application (called Project Vulcan in the demo). vCloud Application Director deploys complex application topologies in an automated fashion, and includes integration with tools like Puppet and Chef. Kit points out that what they’re showing isn’t just a vApp, but a “full-blown” multi-tier application being deployed end-to-end.

The scripted “banter” between Carl and Kit leads to a review of some of the improvements that were included in the vSphere 5.5 release. Kit ties this back to the demo by calling out the improvements made in vSphere 5.5 with regard to latency-sensitive workloads.

Next they move into a discussion of the networking side of the house. (My personal favorite, but I could be biased.) Kit quickly reviews how NSX works and enables the creation of logical network services that are tied to the lifecycle of the application. Kit shows tasks in vCenter Server that reflect the automation being done by NSX with regard to automatically creating load balancers, firewall rules, logical switches, etc., and then reviews how we need to deploy logical network services in coordination with application lifecycle operations.

At Carl’s prompting, Kit goes yet another level deeper into how network virtualization works. He outlines how NSX eliminates the need to configure the physical network layer to provision new logical networks, discusses how NSX can provide logical routing, and outlines the benefits of distributed east-west routing (where routing occurs locally within the hypervisor). This, naturally, leads into a discussion of the distributed firewall functionality present in NSX, where firewall functionality occurs within the hypervisor, closest to the VMs. Following the list of features in NSX, Carl brings up load balancing, and Kit shows how load balancing works in NSX.

This leads into a customer testimonial video from WestJet, who discusses how they can leverage NSX’s distributed east-west firewalling to help better control and optimize traffic patterns in the data center. WestJet also emphasizes how they can leverage their existing networking investment while still deriving tremendous value from deploying NSX and network virtualization.

Next up in the demo is a migration from a “traditional” virtual network into an NSX logical network, and Kit shows how the migration is accomplished via a vMotion operation. This leads into a discussion of how VMware can not only do “V2V” migrations into NSX logical networks, but also “P2V” migrations using NSX’s logical-to-physical bridging functionality.

That concludes the networking section of the demo, and leads Carl and Kit into a storage-focused discussion centered around Carl’s mythical Project Vulcan. The discussion initially focuses on VMware VSAN, and how IT can leverage VSAN to help address application provisioning. The demo shows how VSAN can dynamically expand capacity by adding another ESXi host to the cluster; more hosts means more capacity for the VSAN datastore. Carl says that Kit has shown him simplicity and scalability, but not resiliency. This leads Kit to a slide that shows how VSAN ensures resiliency by maintaining multiple copies of data within a VSAN datastore. If some part of the local storage backing VSAN fails, VSAN will automatically copy the data elsewhere so that the policy specifying how many copies of the data must exist is maintained and enforced.
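The "maintain N copies" behavior can be sketched simply. This is an illustration of the concept only, not VSAN's actual placement algorithm: when a host holding a replica fails, data is re-replicated onto a healthy host until the policy is satisfied again.

```python
# Conceptual sketch of copy-count policy enforcement (not VSAN's actual
# placement logic): after a host failure, objects that lost a replica are
# re-replicated onto healthy hosts until the policy is met again.

def place_replicas(obj, hosts, copies=2):
    """Initial placement: put `copies` replicas on the first hosts."""
    return {obj: hosts[:copies]}

def handle_host_failure(placement, failed, hosts, copies=2):
    healthy = [h for h in hosts if h != failed]
    for obj, replicas in placement.items():
        replicas = [h for h in replicas if h != failed]
        # Re-replicate onto healthy hosts until the policy is satisfied.
        for h in healthy:
            if len(replicas) >= copies:
                break
            if h not in replicas:
                replicas.append(h)
        placement[obj] = replicas
    return placement

hosts = ["esxi-1", "esxi-2", "esxi-3"]
placement = place_replicas("vmdk-A", hosts, copies=2)
placement = handle_host_failure(placement, "esxi-1", hosts, copies=2)
```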

Following the VSAN demo, Carl and Kit move into a few end-user computing demonstrations, showing application access via Horizon Workspace. Kit wraps up his time on stage with a brief video—taken from “When Harry Met Sally,” if I’m not mistaken—that describes how demanding the line of business can be. The wrap-up to the demo felt quite natural and demonstrated some good chemistry between Kit and Carl.

Next up on the stage is Joe Baguley, CTO of EMEA, to discuss operations and operational concerns. Joe reviews why script- and rules-based management isn’t going to work in the new world, and why the world needs to move toward policy-based automation and management. This leads into a demo, and Joe shows—via vCAC—how vCenter Operations has initiated a performance remediation operation via the auto scale-out feature that was enabled when the application was provisioned. The demo next leads into a more detailed review of application performance via vCenter Operations.

Joe reviews three key parts of automated operations:

  1. (missed this one, sorry)
  2. Intelligent analytics
  3. Visibility into application performance

Next, Joe shows how vCenter Operations is integrating information from a variety of partners to help make intelligent recommendations, one of which is that Carl should change the storage tier based on the disk I/O requirements of his Project Vulcan application. vCAC will show the estimated cost of that change, and when the administrator approves that change, vSphere will leverage Storage vMotion to migrate to a new storage tier.

The discussion between Carl and Joe leads up to a demo of VMware Log Insight, where Joe shows events being pulled from a wide variety of sources to help drill down to the root cause of the storage issue in the demonstration. VMworld attendees (or possibly anyone, I guess) are encouraged to try out Log Insight by simply following @VMLogInsight on Twitter (they will give 5 free licenses to new followers).

Next up in the demo is a discussion of vCloud Hybrid Service, showing how the vSphere Web Client can be used to manage templates in vCHS. Joe brings the demo full-circle by taking us back to vCAC to deploy Project Vulcan into vCHS via vCAC. Carl reviews some of the benefits of vCHS, and asks Joe to share a few use cases. Joe shares that test/dev, new applications (perhaps built on Cloud Foundry?), and rapid capacity expansion are good use cases for vCHS.

Carl wraps up the day 2 keynote by summarizing the technologies that were displayed during today’s general session, and how all these technologies come together to help organizations deliver IT-as-a-service (ITaaS). Carl also makes commitments that VMware’s SDDC efforts will protect and leverage customers’ existing investments and help leverage existing skill sets. He closes the session with the phrase, “Champions drive change, so go drive change, and defy convention!”

And that concludes the day 2 keynote.


Welcome to Technology Short Take #34, my latest collection of links, articles, thoughts, and ideas from around the web, centered on key data center technologies. Enjoy!

Networking

  • Henry Louwers has a nice write-up on some of the design considerations that go into selecting a Citrix NetScaler solution.
  • Scott Hogg explores jumbo frames and their benefits/drawbacks in a clear and concise manner. It’s worth reading if you aren’t familiar with jumbo frames and some of the considerations around their use.
  • The networking “old guard” likes to talk about how x86 servers and virtualization create network bottlenecks due to performance concerns, but as Ivan points out in this post, it’s rapidly becoming—or has already become—a non-issue. (By the way, if you’re not already reading all of Ivan’s content, you need to be. Seriously.)
  • Greg Ferro, aka EtherealMind, has a great series of articles on overlay networking (a component technology used in a number of network virtualization solutions). Greg starts out with a quick look at the value prop for overlay networking. In addition to highlighting one key value of overlay networking—that decoupling the logical network from the physical network enables more rapid change and innovation—Greg also establishes that overlay networking is not new. Greg continues with a more detailed look at how overlay networking works. Finally, Greg takes a look at whether overlay networking and the physical network should be integrated; he arrives at the conclusion that integrating the two is likely to be unsuccessful given the history of such attempts in the past.
  • Terry Slattery ruminates on the power of creating (and using) the right abstraction in networking. The value of the “right abstraction” has come up a number of times; it was a featured discussion point of Martin Casado’s talk at the OpenStack Summit in Portland in April, and takes center stage in a recent post over at Network Heresy.
  • Here’s a decent two-part series about running Vyatta on VMware Workstation (part 1 and part 2).
  • Could we use OpenFlow to build better internet exchanges? Here’s one idea.

Servers/Hardware

Security

I have nothing to share this time around, but I’ll keep watch for content to include in future Technology Short Takes.

Cloud Computing/Cloud Management

  • Tom Fojta takes a look at integrating vCloud Automation Center (vCAC) with vCloud Director in this post. (By the way, congrats to Tom on becoming the first VCDX-Cloud!)
  • In case you missed it, here’s the recording for the #vBrownBag session with Jon Harris on vCAC. (I had the opportunity to hear Jon speak about his employer’s vCAC deployment and some of the lessons learned at a recent New Mexico VMUG meeting.)

Operating Systems/Applications

Storage

  • Rawlinson Rivera starts to address a lack of available information about Virsto in the first of a series of posts on VMware Virsto. This initial post provides an introduction to Virsto; future posts will provide more in-depth technical details (which is what I’m really looking forward to getting).
  • Nigel Poulton talks a bit about target driven zoning, something I’ve mentioned before on this site. For more information on target driven zoning (also referred to as peer zoning), also be sure to check out Erik Smith’s blog.
  • Now that he’s had some time to come up to speed in his new role, Frank Denneman has started a great series on the basic elements of PernixData’s Flash Virtualization Platform (FVP). You can read part 1 here and part 2 here. I’m looking forward to future parts in this series.
  • I’d often wondered this myself, and now Cormac Hogan has the answer: why is uploading files to VMFS so slow? Good information.

Virtualization

It’s time to wrap up now, or this Technology Short Take is going to turn into a Technology Long Take. Anyway, I hope you found something useful in this little collection. If you have any feedback or suggestions for improvement for future posts, feel free to speak up in the comments below.


EMC announced ViPR today, the culmination of the not-so-secret Project Bourne and its lesser-known predecessor, Project Orion. Although I used to work at EMC before I joined VMware earlier this year, I never really had deep access to what was going on with this project, so my thoughts here are strictly based on what’s been publicly disclosed. Naturally, given that the product was only announced today, these are very early thoughts.

Naturally, Chad Sakac has a write-up on ViPR and what led up to its creation. It’s worth having a read, but allocate plenty of time (it is a bit on the long side).

Based on the limited material that is publicly available so far, here are a few thoughts about ViPR:

  • I like the control plane-data plane separation model that EMC is taking with ViPR. I’ve had a few conversations about network virtualization and software-defined networking (SDN) recently (see here and here) and the amorphous use of the term “software-defined.” In fact, my good friend Matthew Leib wrote a post about software-defined storage in response to an exchange of tweets about the overuse of “software-defined [insert whatever here]”. If we go back to the original definition of what SDN meant, it referred to the separation of the networking control plane from the networking data plane and the architectural changes resulting from that separation. SDN wasn’t (and isn’t) about virtualizing network switches, routers, or firewalls; that’s NFV (Network Functions Virtualization). Similarly, running storage controller software as virtual machines isn’t software-defined storage, it’s the storage equivalent of NFV (SFV?). Separating the storage control plane from the storage data plane is a much closer storage analogy to SDN, in my opinion. I’m sure that EMC hopes that it will spark a renaissance in storage the way SDN has sparked a renaissance in networking (more on that below).

  • I like that EMC is including a variety of object storage APIs, including Atmos, AWS S3, and OpenStack Swift, and that there is API support for OpenStack Cinder and OpenStack Glance as well. It would have been the wrong move not to support these APIs in ViPR—in my opinion, EMC won’t get another opportunity like this to broaden their API and platform support.

  • Obviously, a key difference between SDN and SDS a la ViPR is openness. While EMC proclaims the openness of the solution based on broad API support, third-party back-end storage support, a public northbound API, and source code and examples for third-party southbound “plugins” for other platforms, the reality is that this separation of control plane and data plane is being driven by a vendor rather than as a result of collaboration between academic research and industry. The reason this distinction is important is that it’s one thing for a networking vendor to build OpenFlow support into its switches when OpenFlow wasn’t and isn’t created/controlled by a competing vendor, but it’s another thing for a storage vendor to build support into their products for a solution that belongs to EMC. Whether this really matters or not remains to be seen—it may be a non-issue. (Yes, I recognize the irony in the fact that I work for VMware, some of whose solutions might be similarly criticized with regard to openness.)

  • Hey, where’s the network virtualization support? ;-)

Anyway, those are my initial thoughts. Since I haven’t had access to more detailed information on what it does/doesn’t support or how it works, I reserve the right to revise these thoughts and impressions after I get more exposure to ViPR. In the meantime, feel free to add your own thoughts in the comments below. Courteous comments are always welcome (but do please add vendor affiliations where applicable)!

UPDATE: It was brought to my attention I misspelled Matthew Leib’s last name; that has been corrected. My apologies Matt!


Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.

Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I’ll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree on whether OpenStack Networking (formerly Quantum) should be the basis here; he says it shouldn’t be (it’s not well-known in the enterprise), I say it should be (it already represents work created collaboratively by multiple vendors and allows for different back-end implementations).
  • Need a reasonable introduction to OpenFlow? This post fits the bill, and the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.

Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP’s new ARM-based offerings. I think that Chris may have accidentally overlooked the fact that these servers are not x86-based; a hypervisor such as vSphere is therefore not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market for this is massively parallel workloads that will benefit from having many different cores available. It will be interesting to see how the support of a “Tier 1” hardware vendor like HP affects the adoption of ARM in the enterprise.

Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.

Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This “double-edged sword” argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel’s article specifically refers to the 3PAR ASIC and its relationship to “flash as cache” functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?

Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off of Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use 1 link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall.
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.
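To illustrate the LACP nit from above: a bond selects a member link per flow by hashing header fields, so any one flow is pinned to a single link and can never exceed that link's bandwidth. Here's a simplified sketch of that selection logic. (Real bonds hash on MAC/IP/port fields with vendor-specific hash policies; the CRC32 and the addresses here are just for illustration.)

```python
import zlib

def select_link(flow, num_links):
    """Deterministically map a flow tuple to one member link (CRC32 hash)."""
    key = "|".join(str(f) for f in flow).encode()
    return zlib.crc32(key) % num_links

links = 2  # two 10GbE members: 20Gb/s aggregate, but 10Gb/s max per flow

flow_a = ("10.0.0.1", "10.0.0.2", 49152, 3260)
flow_b = ("10.0.0.1", "10.0.0.3", 49153, 3260)

# Every packet belonging to flow_a hashes to the same member link,
# so flow_a can never use more than one link's worth of bandwidth.
assert select_link(flow_a, links) == select_link(flow_a, links)

# Different flows may land on different links; that distribution is
# where the aggregate bandwidth gain comes from.
print(select_link(flow_a, links), select_link(flow_b, links))
```

In other words, "20GbE" is only true in aggregate, across many flows.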

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!


Welcome to Technology Short Take #31, my irregularly published series that takes a look at links, posts, articles, and thoughts from around the web related to core data center technologies. I hope that you find something useful!

Networking

  • Umair Hoodbhoy speculates in this post that the inclusion of Cisco’s ONE Controller in the recently-announced “Daylight” effort could mean the end for Big Switch’s Floodlight. (Umair’s play on words—”in Daylight there is no need for Floodlights”—is cute.)
  • Of course, Big Switch recently moved to “diversify,” if you will, away from just Floodlight with the introduction of Switch Light. As usual, Brent Salisbury has an excellent write-up on Switch Light, so I recommend reading his post. Switch Light seems like a good idea—more competition is always good, isn’t that what people say?—but I wonder how much cooperation Big Switch will get from the major networking vendors with regards to OpenFlow interoperability now that Big Switch is competing even more directly with them via Switch Light.
  • I think I might have mentioned this before (sorry if so), but here’s a good write-up on using the Edge Gateway CLI for monitoring and troubleshooting. Nice.
  • Greg Ferro examines a potential SDN use case (an OpenFlow use case) in the form of enterprise firewall migrations.
  • Just getting started in the networking field? Last year, Brent Salisbury put together a couple of great posts that help “refresh the basics” of networking. Part 1 covers Ethernet, IP, and TCP headers in Wireshark captures; part 2 pulls that together to show how the headers encapsulate in the OSI stack. If you’re not already familiar with this information, this is good reading.
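If you want the one-paragraph version of the encapsulation those refresher posts walk through, it looks like this. This is a deliberately simplified model (real headers carry many more fields; the addresses are documentation examples, not anything real):

```python
# Application data wrapped in TCP, then IP, then Ethernet -- each layer
# adds its own header around the layer above it.

payload = b"GET / HTTP/1.1\r\n\r\n"

tcp_segment = {"src_port": 49200, "dst_port": 80, "data": payload}
ip_packet = {"src": "192.0.2.10", "dst": "198.51.100.7",
             "proto": 6,          # protocol 6 = TCP
             "data": tcp_segment}
eth_frame = {"src_mac": "00:00:5e:00:53:01", "dst_mac": "00:00:5e:00:53:02",
             "ethertype": 0x0800,  # 0x0800 = IPv4
             "data": ip_packet}

# Receiving de-encapsulates: peel the layers back off in reverse order.
assert eth_frame["data"]["data"]["data"] == payload
```

That nesting is exactly what you see when you expand a packet in a Wireshark capture.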

Servers/Hardware

Nothing this time around, but I’ll stay alert for information I can include in the next Technology Short Take!

Security

  • Mounting guest disk images on the host? That’s a no-no from a security perspective—see here to learn why.
  • Mike Foley shared recently that the release candidate of the vSphere 5 Security Hardening Guide has been released. Check it out here.

Cloud Computing/Cloud Management

  • I haven’t had the chance to actually try it out myself, but Blueprint looks interesting. As the website describes it, it’s designed to “reverse engineer” servers so that you can migrate them into a configuration management system like Chef or Puppet.
  • Looking for a decent high-level overview of OpenStack and how it works? Check out this article titled “In a nutshell: How OpenStack works”. (As an aside, I think it’s awesome how Ken Pepple’s diagrams show up in all sorts of places. One day I hope my material proves as useful to folks.)
  • If you use Puppet for configuration management and want to deploy GlusterFS, be sure to check out this Puppet Forge module. I’ve tested it and it works as advertised.
  • This is an older article (published in May of last year), and it’s a bit on the lengthy side, but I like the tack the author uses. He describes cloud as the synthesis of many different forms of innovation within IT, pulling together things like open source, virtualization, distributed programming, NoSQL, DevOps/NoOps, distributed teams, dynamic languages, and Big Data (among others). He then goes on to provide examples of how organizations building or leveraging clouds are synthesizing these various independent technological innovations together. If you have a few minutes (as I said, it’s a bit on the lengthy side), I’d recommend reading it.

Operating Systems/Applications

  • This series is a bit older, but an interesting one nevertheless. Brian McClain, who was one of the presenters in a Cloud Foundry/BOSH session I liveblogged at VMworld 2012, has his own personal blog and posted a series of articles on using BOSH with vSphere. I hadn’t really considered how one might use BOSH for deploying (and managing) multi-VM applications on vSphere, but Brian provides some practical examples. Part 1 of the series is here, followed by part 2, part 3, part 4, and part 5.
  • Like using Markdown on OS X? You might find these handy.
  • Ah, the good old days of DOS…reborn as FreeDOS.
  • Go ahead, read up on YAML. You know you want to. Well, YAML is used in both Hiera (can be used with Puppet) and BOSH, after all.
  • Here’s another interesting tool that I haven’t had the opportunity to actually test myself. Oz looks like it could be quite useful—especially in virtualized/cloud computing environments—but I’m struggling to determine why I should use Oz instead of OS-specific mechanisms (like a kickstart file). If anyone has used Oz and can shed some light on this question, I’d appreciate it.
  • You may have heard that I recently switched from TextMate to BBEdit as my default OS X text editor (and therefore the tool whereby I do most of my content generation). As part of the switch, I found this to be helpful. (I might post a separate entry about the switch, if enough people seem interested in reading about it.)

Storage

Virtualization

That’s it for this time. I have plenty more links I wanted to share, but I figured I’d better not let this post get any longer. As always, courteous comments are welcome, so I invite you to participate in the conversation by adding your thoughts below.


Technology Short Take #30

Welcome to Technology Short Take #30. This Technology Short Take is a bit heavy on the networking side, but I suppose that’s understandable given my recent job change. Enjoy!

Networking

  • Ben Cherian, Chief Strategy Officer for Midokura, helps make a case for network virtualization. (Note: Midokura makes a network virtualization solution.) If you’re wondering about network virtualization and why there is a focus on it, this post might help shed some light. Given that it was written by a network virtualization vendor, it might seem a bit rah-rah, so keep that in mind.
  • Brent Salisbury has a fantastic series on OpenFlow. It’s so good I wish I’d written it. He starts out by discussing proactive vs. reactive flows, in which Brent explains that OpenFlow performance is less about OpenFlow and more about how flows are inserted into the hardware. Next, he tackles the concerns over the scale of flow-based forwarding in his post on coarse vs. fine flows. I love this quote from that article: “The second misnomer is, flow based forwarding does not scale. Bad designs are what do not scale.” Great statement! The third post in the series tackles what Brent calls hybrid SDN deployment strategies, and Brent provides some great design considerations for organizations looking to deploy an SDN solution. I’m looking forward to the fourth and final article in the series!
  • Also, if you’re looking for some additional context to the TCAM considerations that Brent discusses in his OpenFlow series, check out this Packet Pushers blog post on OpenFlow switching performance.
  • Another one from Brent, this time on Provider Bridging and Provider Backbone Bridging. Good explanation—it certainly helped me.
  • This article by Avi Chesla points out a potential security weakness in SDN, in the form of a DoS (Denial of Service) attack where many switching nodes request many flows from the central controller. It appears to me that this would only be an issue for networks using fine-grained, reactive flows. Am I wrong?
  • Scott Hogg has a nice list of 9 common Spanning Tree mistakes you shouldn’t make.
  • Schuberg Philis has a nice write-up of their CloudStack+NVP deployment here.
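To make the proactive-vs-reactive distinction (and the DoS concern from Avi's article) concrete, here's a toy model. This is not real OpenFlow messaging, just the control-plane arithmetic: with fine-grained reactive flows, every new connection is a table miss and a round-trip to the controller.

```python
# Toy model of reactive vs. proactive flow handling (not real OpenFlow).

class ToySwitch:
    def __init__(self):
        self.flow_table = {}       # match -> action
        self.controller_hits = 0   # packet-in events sent to the controller

    def install(self, match, action):
        self.flow_table[match] = action

    def forward(self, match):
        if match not in self.flow_table:
            # Table miss: punt to the controller, which installs a rule.
            self.controller_hits += 1
            self.install(match, "forward")
        return self.flow_table[match]

# Reactive, per-connection matches: 1000 new connections = 1000 packet-ins.
reactive = ToySwitch()
for port in range(1000):
    reactive.forward(("10.0.0.1", "10.0.0.2", port))
assert reactive.controller_hits == 1000

# Proactive, coarse match on the host pair: one pre-installed rule and
# zero packet-ins for the same traffic.
proactive = ToySwitch()
proactive.install(("10.0.0.1", "10.0.0.2"), "forward")
for port in range(1000):
    proactive.forward(("10.0.0.1", "10.0.0.2"))
assert proactive.controller_hits == 0
```

Multiply the reactive case by many switches and a burst of short-lived connections, and the controller becomes the obvious pressure point, which is Brent's design argument in a nutshell.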

Servers/Hardware

  • Alex Galbraith recently posted a two-part series on what he calls the “NanoLab,” a home lab built on the Intel NUC (“Next Unit of Computing”). It’s a good read for those of you looking for some very quiet and very small home lab equipment, and Alex does a good job of providing all the details. Check out part 1 here and part 2 here.
  • At first, I thought this article was written from a sarcastic point of view, but it turns out that Kevin Houston’s post on 5 reasons why you may not want blade servers is the real deal. It’s nice to see someone who focuses on blade servers opening up about why they aren’t necessarily the best fit for all situations.

Security

  • Nick Buraglio has a good post on the potential impact of Arista’s new DANZ functionality on tap aggregation solutions in the security market. It will be interesting to see how this shapes up. BTW, Nick’s writing some pretty good content, so if you’re not subscribed to his blog, you might want to reconsider.

Cloud Computing/Cloud Management

  • Although this post is a bit older (it’s from September of last year), it’s still an interesting comparison of OpenStack and CloudStack. Note that the author apparently works for Mirantis, a company that provides OpenStack consulting services. In spite of that fact, he manages to provide a reasonably balanced comparison of the two cloud management platforms. Both of them (I believe) have had releases since this time, so some of the points may no longer be valid.
  • Are you a CloudStack fan? If so, you should probably check out this collection of links from Aaron Delp. Aaron’s focused a lot more on CloudStack now that he’s at Citrix, so he might be a good resource if that is your cloud management platform of choice.

Operating Systems/Applications

  • If you’re just now getting into the whole configuration management scene where tools like Puppet, Chef, and others play, you might find this article helpful. It walks through the difference between configuring a system imperatively and configuring a system declaratively (hint: Puppet, Chef, and others are declarative). It does presume a small bit of programming knowledge in the examples, but even as a non-programmer I found it useful.
  • Here’s a three-part series on beginning Puppet that you might find helpful as well (Part 1, Part 2, and Part 3).
  • If you’re a developer-type person, I would first ask why you’re reading my site, then I’d point you to this post on the AMQP, MQTT, and STOMP messaging protocols.
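For the non-programmers, the imperative/declarative distinction boils down to "a list of steps" vs. "a description of desired state." Here's a toy sketch in Python (not real Puppet or Chef code, and the NTP server name is made up) showing why the declarative approach is safely repeatable:

```python
# Imperative: a recorded sequence of commands. Running it twice repeats
# the work and can repeat side effects (e.g., duplicate config lines).
config_lines = []

def imperative_run():
    config_lines.append("server ntp.example.com")  # "echo ... >> ntp.conf"

imperative_run()
imperative_run()
assert config_lines.count("server ntp.example.com") == 2  # duplicated!

# Declarative: state the desired end state; applying it is idempotent,
# so a second run converges to the same state and changes nothing.
desired = {"ntp.conf": {"server ntp.example.com"}}
actual = {"ntp.conf": set()}

def converge(desired, actual):
    for filename, lines in desired.items():
        actual.setdefault(filename, set()).update(lines)

converge(desired, actual)
converge(desired, actual)  # second run is a no-op
assert actual["ntp.conf"] == {"server ntp.example.com"}
```

That idempotency is why you can run a Puppet agent every 30 minutes without worrying about it re-doing work that's already done.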

Storage

Virtualization

  • Although these posts are storage-related, the real focus is on how the storage stack is implemented in a virtualization solution, which is why I’m putting them in this section. Cormac Hogan has a series going titled “Pluggable Storage Architecture (PSA) Deep Dive” (part 1 here, part 2 here, part 3 here). If you want more PSA information, you’d be hard-pressed to find a better source. Well worth reading for VMware admins and architects.
  • Chris Colotti shares information on a little-known vSwitch advanced setting that helps resolve an issue with multicast traffic and NICs in promiscuous mode in this post.
  • Frank Denneman reminds everyone in this post that the concurrent vMotion limit only goes to 8 concurrent vMotions when vSphere detects the NIC speed at 10Gbps. Anything less causes the concurrent limit to remain at 4. For those of you using solutions like HP VirtualConnect or similar that allow you to slice and dice a 10Gb link into smaller links, this is a design consideration you’ll want to be sure to incorporate. Good post Frank!
  • Interested in some OpenStack inception? See here. How about some oVirt inception? See here. What’s that? Not familiar with oVirt? No problem—see here.
  • Windows Backup has native Hyper-V support in Windows Server 2012. That’s cool, but are you surprised? I’m not.
  • Red Hat and IBM put out a press release today on improved I/O performance with RHEL 6.4 and KVM. The press release claims that a single KVM guest on RHEL 6.4 can support up to 1.5 million IOPS. (Cue timer until next virtualization vendor ups the ante…)

I guess I should wrap things up now, even though I still have more articles that I’d love to share with readers. Perhaps a “mini-TST”…

In any event, courteous comments are always welcome, so feel free to speak up below. Thanks for reading and I hope you’ve found something useful!


Welcome to Technology Short Take #29! This is another installation in my irregularly-published series of links, thoughts, rants, and raves across various data center-related fields of technology. As always, I hope you find something useful here.

Networking

  • Who out there has played around with Mininet yet? Looks like this is another tool I need to add to my toolbox as I continue to explore networking technologies like OpenFlow, Open vSwitch, and others.
  • William Lam has a recent post on some useful VXLAN commands found in ESXCLI with vSphere 5.1. I’m a CLI fan, so I like this sort of stuff.
  • I still have a lot to learn about OpenFlow and networking, but this article from June of last year (it appears to have been written by Ivan Pepelnjak) discusses some of the potential scalability concerns around early versions of the OpenFlow protocol. In particular, the use of OpenFlow to perform granular per-flow control when there are thousands (or maybe only hundreds) of flows presents a scalability challenge (for now, at least). In my mind, this isn’t an indictment of OpenFlow, but rather an indictment of the way that OpenFlow is being used. I think that’s the point Ivan tried to make as well—it’s the architecture and how OpenFlow is used that makes a difference. (Is that a reasonable summary, Ivan?)
  • Brad Hedlund (who will be my co-worker starting on 2/11) created a great explanation of network virtualization that clearly breaks down the components and explains their purpose and function. Great job, Brad.
  • One of the things I like about Open vSwitch (OVS) is that it is so incredibly versatile. Case in point: here’s a post on using OVS to connect LXC containers running on different hosts via GRE tunnels. Handy!

Servers/Hardware

  • Cisco UCS is pretty cool in that it makes automation of compute hardware easier through such abstractions as server profiles. Now, you can also automate UCS with Chef. I traded a few tweets with some Puppet folks, and they indicated they’re looking at this as well.
  • Speaking of Puppet and hardware, I also saw a mention on Twitter about a Puppet module that will manage the configuration of a NetApp filer. Does anyone have a URL with more information on that?
  • Continuing the thread on configuration management systems running on non-compute hardware (I suppose this shouldn’t be under the “Servers/Hardware” section any longer!), I also found references to running CFEngine on network appliances and running Chef on Arista switches. That’s kind of cool. What kind of coolness would result from even greater integration between an SDN controller and a declarative configuration management tool? Hmmm…

Security

  • Want full-disk encryption in Ubuntu, using AES-XTS-PLAIN64? Here’s a detailed write-up on how to do it.
  • In posts and talks I’ve given about personal productivity, I’ve spoken about the need to minimize “friction,” that unspoken drag that makes certain tasks or workflows more difficult and harder to adopt. Tal Klein has a great post on how friction comes into play with security as well.

Cloud Computing/Cloud Management

  • If you, like me, are constantly on the search for more quality information on OpenStack and its components, then you’ll probably find this post on getting Cinder up and running to be helpful. (I did, at least.)
  • Mirantis—recently the recipient of $10 million in funding from various sources—posted a write-up in late November 2012 on troubleshooting some DNS and DHCP service configuration issues in OpenStack Nova. The post is a bit specific to work Mirantis did in integrating an InfoBlox appliance into OpenStack, but it might be useful in other situations as well.
  • I found this article on Packstack, a tool used to transform Fedora 17/18, CentOS 6, or RHEL 6 servers into a working OpenStack deployment (Folsom). It seems to me that lots of people understand that getting an OpenStack cloud up and running is a bit more difficult than it should be, and are therefore focusing efforts on making it easier.
  • DevStack is another proof point of the effort going into making it easier to get OpenStack up and running, although the focus for DevStack is on single-host development environments (typically virtual themselves). Here’s one write-up on DevStack; here’s another one by Cody Bunch, and yet another one by the inimitable Brent Salisbury.

Operating Systems/Applications

  • If you’re interested in learning Puppet, there are a great many resources out there; in fact, I’ve already mentioned many of them in previous posts. I recently came across these Example42 Puppet Tutorials. I haven’t had the chance to review them myself yet, but it looks like they might be a useful resource as well.
  • Speaking of Puppet, the puppet-lint tool is very handy for ensuring that your Puppet manifest syntax is correct and follows the style guidelines. The tool has recently been updated to help fix issues as well. Read here for more information.

Storage

  • Greg Schulz (aka StorageIO) has a couple of VMware storage tips posts you might find useful reading. Part 1 is here, part 2 is here. Enjoy!
  • Amar Kapadia suggests that adding LTFS to Swift might create an offering that could give AWS Glacier a real run for the money.
  • Gluster interests me. Perhaps it shouldn’t, but it does. For example, the idea of hosting VMs on Gluster (similar to the setup described here) seems quite interesting, and the work being done to integrate KVM/QEMU with Gluster also looks promising. If I can ever get my home lab into the right shape, I’m going to do some testing with this. Anyone done anything with Gluster?
  • Erik Smith has a very informative write-up on why FIP snooping is important when using FCoE.
  • Via this post on ten useful OpenStack Swift features, I found this page on how to build the “Swift All in One,” a useful VM for learning all about Swift.

Virtualization

  • There’s no GUI for it, but it’s kind of cool that you can indeed create VM anti-affinity rules in Hyper-V using PowerShell. This is another example of how Hyper-V continues to get more competent. Ignore Microsoft and Hyper-V at your own risk…
  • Frank Denneman takes a quick look at using user-defined NetIOC network resource pools to isolate and protect IP-based storage traffic from within the guest (i.e., using NFS or iSCSI from within the guest OS, not through the VMkernel). Naturally, this technique could be used to “protect” or “enhance” other types of important traffic flows to/from your guest OS instances as well.
  • Andre Leibovici has a brief write-up on the PowerShell module for the Nicira Network Virtualization Platform (NVP). Interesting stuff…
  • This write-up by Falko Timme on using BoxGrinder to create virtual appliances for KVM was interesting. I might have to take a look at BoxGrinder and see what it’s all about.
  • In case you hadn’t heard, OVF 2.0 has been announced/released by the DMTF. Winston Bumpus of VMware’s Office of the CTO has more information in this post. I also found the OVF 2.0 frequently asked questions (FAQs) to be helpful. Of course, the real question is how long it will be before vendors add support for OVF 2.0, and how extensive that support actually is.

And that’s it for this time around! Feel free to share your thoughts, suggestions, clarifications, or corrections in the comments below. I encourage your feedback, and thanks for reading.


Welcome to Technology Short Take #28, the first Technology Short Take for 2013. As always, I hope that you find something useful or informative here. Enjoy!

Networking

  • Ivan Pepelnjak recently wrote a piece titled “Edge and Core OpenFlow (and why MPLS is not NAT)”. It’s an informative piece—Ivan’s stuff is always informative—but what really drew my attention was his mention of a paper by Martin Casado, Teemu Koponen, and others that calls for a combination of MPLS and OpenFlow (and an evolution of OpenFlow into “edge” and “core” versions) to build next-generation networks. I’ve downloaded the paper and intend to review it in more detail. I’d love to hear from any networking experts who’ve read the paper—what are your thoughts?
  • Speaking of Ivan…it also appears that he’s quite pleased with Microsoft’s implementation of NVGRE in Hyper-V. Sounds like some of the other vendors need to get on the ball.
  • Here’s a nice explanation of CloudStack’s physical networking architecture.
  • The first fruits of Brad Hedlund’s decision to join VMware/Nicira have shown up in this joint article by Brad, Bruce Davie, and Martin Casado describing the role of network virtualization in the software-defined data center. (It doesn’t matter how many times I say or write “software-defined data center,” it still feels like a marketing term.) This post is fairly high-level and abstract; I’m looking forward to seeing more detailed and in-depth posts in the future.
  • Art Fewell speculates that the networking industry has “lost our way” and become a “big bag of protocols” in this article. I do agree with one of the final conclusions that Fewell makes in his article: that SDN (a poorly-defined and often over-used term) is the methodology of cloud computing applied to networking. Therefore, SDN is cloud networking. That, in my humble opinion, is a more holistic and useful way of looking at SDN.
  • It appears that the vCloud Connector posts (here and here) that (apparently) incorrectly identify VXLAN as a component/prerequisite of vCloud Connector have yet to be corrected. (Hat tip to Kenneth Hui at VCE.)

Servers/Hardware

Nothing this time around, but I’ll watch for content to include in future posts.

Security

  • Here’s a link to a brief (too brief, in my opinion, but perhaps I’m just being overly critical) post on KVM virtualization security, authored by Dell TechCenter. It provides some good information on securing the libvirt communication channel.

Cloud Computing/Cloud Management

  • Long-time VMware users probably remember Mike DiPetrillo, whose website has now, unfortunately, gone offline. I mention this because I’ve had this article on RabbitMQ AMQP with vCloud Director sitting in my list of “articles to write about” for a while, but some of the images were missing and I couldn’t find a link for the article. I finally found a link to a reprinted version of the article on DZone Enterprise Integration. Perhaps the article will be of some use to someone.
  • Sam Johnston talks about reliability in the cloud with a discussion on the merits of “reliable software” (software designed for failure) vs. “unreliable software” (more traditional software not designed for failure). It’s a good article, but I found the discussion between Sam and Massimo (of VMware) to be equally useful.

Operating Systems/Applications

Storage

  • Want some good details on the space-efficient sparse disk format in vSphere 5.1? Andre Leibovici has you covered right here.
  • Read this article for good information from Andre on a potential timeout issue with recomposing desktops and using the View Storage Accelerator (aka context-based read cache, CRBC).
  • Apparently Cormac Hogan, aka @VMwareStorage on Twitter, hasn’t gotten the memo that “best practices” is now outlawed. He should have named this series on NFS with vSphere “NFS Recommended Practices”, but even misnamed as they are, the posts still have useful information. Check out part 1, part 2, and part 3.
  • If you’d like to get a feel for how VMware sees the future of flash storage in vSphere environments, read this.

Virtualization

  • This is a slightly older post, but informative and useful nevertheless. Cormac posted an article on VAAI offloads and KAVG latency when observed in esxtop. The summary of the article is that the commands esxtop is tracking are internal to the ESXi kernel only; therefore, abnormal KAVG values do not represent any sort of problem. (Note there’s also an associated VMware KB article.)
  • More good information from Cormac here on the use of the SunRPC.MaxConnPerIP advanced setting and its impact on NFS mounts and NFS connections.
  • Another slightly older article (from September 2012) is this one from Frank Denneman on how vSphere 5.1 handles parallel Storage vMotion operations.
  • A fellow IT pro contacted me on Twitter to see if I had any idea why some shares on his Windows Server VM weren’t working. As it turns out, the problem is related to hotplug functionality; the OS sees the second drive as “removable” due to hotplug functionality, and therefore shares don’t work. The problem is outlined in a bit more detail here.
  • William Lam outlines how to use new tagging functionality in esxcli in vSphere 5.1 for more comprehensive scripted configurations. The new tagging functionality—if I’m reading William’s write-up correctly—means that you can configure VMkernel interfaces for any of the supported traffic types via esxcli. Neat.
  • Chris Wahl has a nice write-up on the behavior of Network I/O Control with multi-NIC vMotion traffic. It was pointed out in the comments that the behavior Chris describes is documented, but the write-up is still handy, and an important factor to keep in mind in your designs.

I suppose I should end it here, before this “short take” turns into a “long take”! In any case, courteous comments are always welcome, so if you have additional information, clarifications, or corrections to share regarding any of the articles or links in this post, feel free to speak up below.

