
Welcome to Technology Short Take #45. As usual, I’ve gathered a collection of links to various articles pertaining to data center-related technologies for your enjoyment. Here’s hoping you find something useful!

Networking

  • Cormac Hogan has a list of a few useful NSX troubleshooting tips.
  • If you’re not really a networking pro and need a “gentle” introduction to VXLAN, this post might be a good place to start. (For a concrete look at what VXLAN encapsulation actually adds on the wire, there’s a short sketch after this list.)
  • Also along those lines—perhaps you’re a VMware administrator who wants to branch into networking with NSX, or you’re a networking guru who needs to learn more about how this NSX stuff works. vBrownBag has been running a VCP-NV series covering various objectives from the VCP-NV exam. Check them out—objective 1, objective 2, objective 3, and objective 4 have been posted so far.
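Since VXLAN comes up more than once in this issue, here’s a toy look at the encapsulation itself: a scapy sketch (assuming a scapy release that ships the VXLAN layer) that wraps an inner Ethernet frame in the outer IP/UDP/VXLAN headers a VTEP would add. All addresses and the VNI are made up.

```python
# Toy VXLAN encapsulation with scapy; addresses and VNI are illustrative.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN  # present in newer scapy releases

# Outer headers: what one VTEP sends to another.
outer = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") /
         IP(src="10.0.0.1", dst="10.0.0.2") /
         UDP(sport=49152, dport=4789))          # 4789 = IANA VXLAN port

# Inner frame: the original L2 traffic between two VMs.
inner = (Ether(src="de:ad:be:ef:00:01", dst="de:ad:be:ef:00:02") /
         IP(src="192.168.1.10", dst="192.168.1.20"))

frame = outer / VXLAN(vni=5001) / inner
frame.show()   # print the nested headers
```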

Servers/Hardware

  • I’m going to go out on a limb and make a prediction: in a few years’ time (let’s say 3–5 years), Intel SGX (Software Guard Extensions) will be regarded as being as important as, if not more important than, the virtualization extensions. What is Intel SGX, you ask? See here, here, and here for a breakdown of the SGX design objectives. Let’s be real—the ability for an application to protect itself (and its data) from rogue software (including a compromised or untrusted operating system) is huge.

Security

  • CloudFlare (disclaimer: I am a CloudFlare customer) recently announced Keyless SSL, a technique for allowing organizations to take advantage of SSL offloading without relinquishing control of their private keys. CloudFlare followed that announcement with a nitty-gritty technical details post that describes how it works. I’d recommend reading the technical post just to get a good education on how encryption and TLS work, even if you’re not a CloudFlare customer. (A toy sketch of the core idea follows.)
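To make the idea concrete, here’s a bare-bones Python sketch (using the cryptography library) of the one operation Keyless SSL moves off the edge: the handshake signature. This is my own illustration of the concept, not CloudFlare’s actual protocol; the secure transport between edge and key server is waved away.

```python
# The crux of Keyless SSL: the edge never holds the private key. It sends
# handshake material to a key server the site owner controls, which
# performs the single private-key operation and returns the signature.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def key_server_sign(handshake_params: bytes, key_pem: bytes) -> bytes:
    """Runs on the customer's key server; the key never leaves this side."""
    private_key = serialization.load_pem_private_key(key_pem, password=None)
    return private_key.sign(handshake_params,
                            padding.PKCS1v15(),   # classic TLS RSA signing
                            hashes.SHA256())

# On the edge (pseudocode): params = build_server_key_exchange(...)
# signature = send_to_key_server(params)   # hypothetical secure channel
```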

Cloud Computing/Cloud Management

  • William Lam spent some time working with some “new age” container cluster management tools (specifically, govmomi, govc CLI, and Kubernetes on vSphere) and documented his experience here and here. Excellent stuff!
  • YAKA (Yet Another Kubernetes Article), this time looking at Kubernetes on CoreOS on OpenStack. (How’s that for buzzword bingo?)
  • This analytical evaluation of Kubernetes might be helpful as well.
  • Stampede.io looks interesting; I got a chance to see it live at the recent DigitalOcean-CoreOS meetup in San Francisco. Here’s the Stampede.io announcement post.

Operating Systems/Applications

  • Trying to wrap your head around the concept of “microservices”? Here’s a write-up that attempts to provide an introduction to microservices. An earlier blog post on cloud native software is pretty good, too.
  • Here’s a very nice collection of links about Docker, ranging from how to use Docker to how to use the Docker API and how to containerize your application (just to name a few topics).
  • Here’s a great pair of articles (part 1 and part 2) on microservices and Platform-as-a-Service (PaaS). This is really good stuff, especially if you are trying to expand your boundaries by learning about cloud application design patterns.
  • This article by CenturyLink Labs—which has been doing some nice stuff around Docker and containers—talks about how to containerize your legacy applications.
  • Here’s a decent write-up comparing LXC and Docker. There are also some decent LXC-specific articles on the site (see the sidebar).
  • Service registration (and discovery) in a microservices architecture can be challenging. Jeff Lindsay is attempting to help address some of the challenges with Registrator; more information is available here. (A bare-bones look at what service registration involves is sketched after this list.)
  • Unlike a lot of Docker-related blog posts, this post by RightScale on combining VMs and containers for better cloud portability is a well-written piece. The pros and cons of using containers are discussed fairly, without hype.
  • Single-process containers or multi-process containers? This site presents a convincing argument for multi-process containers; have a look.
  • Tired of hearing about containers yet? Oh, come on, you know you love them! You love them so much you want to run them on your OS X laptop. Well…read this post for all the gory details.
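As promised above, here’s what Registrator automates, done by hand against Consul’s HTTP API (one of the registries Registrator supports). The endpoint paths are real Consul APIs; the service details are made up.

```python
# Manually registering a container's service in Consul, then looking it up.
import requests

service = {
    "ID": "web-1",
    "Name": "web",
    "Address": "172.17.0.2",   # the container's IP (illustrative)
    "Port": 8080,
}
resp = requests.put("http://localhost:8500/v1/agent/service/register",
                    json=service)
resp.raise_for_status()

# Discovery is then just a query against the registry.
print(requests.get("http://localhost:8500/v1/catalog/service/web").json())
```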

Storage

  • The storage aspect of Docker isn’t typically discussed in a lot of detail, other than perhaps focusing on the need for persistent storage via Docker volumes. However, this article from Red Hat does a great job (in my opinion) of exploring storage options for Docker containers and how these options affect performance and scalability. Looks like OverlayFS is the clear winner; it will be great when OverlayFS is in the upstream kernel and supported by Docker. (Oh, and if you’re interested in more details on the default device mapper backend, see here.) A quick way to check which backend your own daemon is using is sketched after this list.
  • This is a nice write-up on Riverbed SteelFusion, aka “Granite.”
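Here’s the storage-driver check mentioned above: a short docker-py sketch (the client class has been renamed in newer releases of the library, so adjust accordingly) that asks the daemon which backend it is running.

```python
# Ask the Docker daemon which storage backend it is using.
import docker

client = docker.Client(base_url="unix://var/run/docker.sock")
info = client.info()
print(info.get("Driver"))        # e.g. "devicemapper", "aufs", "overlay"
print(info.get("DriverStatus"))  # backend-specific detail, e.g. pool info
```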

Virtualization

  • Azure Site Recovery (ASR) is similar to vCloud Air’s Disaster Recovery service, though obviously tailored toward Hyper-V and Windows Server (which is perfectly fine for organizations that are using Hyper-V and Windows Server). To help with the setup of ASR, the Azure team has a write-up on the networking infrastructure setup for Microsoft Azure as a DR site.
  • PowerCLI in the vSphere Web Client, eh? Interesting. See Alan Renouf’s post for full details.
  • PernixData recently released version 2.0 of FVP; Frank Denneman has all the details here.

That’s it for this time, but be sure to visit again for future episodes. Until then, feel free to start (or join in) a discussion in the comments below. All courteous comments are welcome!


This is a liveblog of IDF 2014 session DATS009, titled “Ceph: Open Source Storage Software Optimizations on Intel Architecture for Cloud Workloads.” (That’s a mouthful.) The speaker is Anjaneya “Reddy” Chagam, a Principal Engineer in the Intel Data Center Group.

Chagam starts by reviewing the agenda, which—as the name of the session implies—is primarily focused on Ceph. He next transitions into a review of the problem with storage in data centers today; specifically, that storage needs “are growing at a rate unsustainable with today’s infrastructure and labor costs.” Another problem, according to Chagam, is that today’s workloads end up using the same sets of data in very different ways, and those different ways of using the data have very different performance profiles. Other problems with the “traditional” way of doing storage are that storage processing performance doesn’t scale out with capacity, and that storage environments are growing increasingly complex (which in turn makes management harder).

Chagam does admit that not all workloads are suited for distributed storage solutions. If you need high availability and high performance (like for databases), then the traditional scale-up model might work better. For “cloud workloads” (no additional context/information provided to qualify what a cloud workload is), distributed storage solutions may be a good fit. This brings Chagam to discussing Ceph, which he describes as the “only” (quotes his) open source virtual block storage option.

The session now transitions to discussing Ceph in more detail. RADOS (short for “Reliable Autonomic Distributed Object Store”) is the storage cluster that operates on the back end of Ceph. On top of RADOS there are a number of interfaces: Ceph native clients, Ceph block access, the Ceph object gateway (providing S3 and Swift APIs), and Ceph file system access. Intel’s focus is on improving block and object performance.
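For a sense of what “object store underneath everything” means, here’s a minimal sketch using the Python librados binding. It assumes a reachable cluster and a pool named “data”; the pool and object names are illustrative.

```python
# Write and read a raw RADOS object -- the primitive everything else
# (block, object gateway, file system) is built on.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")     # pool name is an assumption
    ioctx.write_full("hello-object", b"stored and replicated by RADOS")
    print(ioctx.read("hello-object"))
    ioctx.close()
finally:
    cluster.shutdown()
```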

Chagam turns to discussing Ceph block storage. Ceph block storage can be mounted directly as a block device, or it can be used as a boot device for a KVM domain. The storage is shared peer-to-peer via Ethernet; there is no centralized metadata. Ceph storage nodes are responsible for holding (and distributing) the data across the cluster, and the cluster is designed to operate without a single point of failure. Chagam does not provide any detailed information (yet) on how the data is sharded/replicated/distributed across the cluster, so it is unclear how many storage nodes can fail without an outage.
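Creating one of those block devices is a thin layer on top of RADOS. A sketch with the Python rbd binding follows (pool and image names are made up); attaching the image to a host or a KVM guest is a separate step, via the kernel RBD driver or QEMU/librbd.

```python
# Create a 10 GiB RBD image; placement and replication are handled by RADOS.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")                    # default RBD pool
    rbd.RBD().create(ioctx, "vm-disk-01", 10 * 1024**3)  # size in bytes
    ioctx.close()
finally:
    cluster.shutdown()
```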

There are both user-mode (for virtual machines) and kernel-mode RBD (RADOS block device) drivers for accessing the backend storage cluster. Ceph also uses the concept of an Object Store Daemon (OSD); one of these exists for every HDD (or SSD, presumably). SSDs would typically be used for journaling, but they can also be used for caching. Using SSDs for journaling would help with write performance.

Chagam does a brief walkthrough of the read path and write path for data being read from or written to a Ceph storage cluster; here is where he points out that Ceph (by default?) stores three copies of the data on different disks, different servers, potentially even different racks or different fault zones. If you are writing multiple copies, you can configure various levels of consistency within Ceph with regard to how the writing of multiple copies is handled.
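Those replica counts are per-pool settings. Here’s a sketch of inspecting them through the Python binding’s mon_command interface (equivalent to `ceph osd pool get <pool> size` on the CLI); the pool name is illustrative.

```python
# Check a pool's replica count via librados' mon_command.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = json.dumps({"prefix": "osd pool get", "pool": "rbd", "var": "size"})
ret, out, errs = cluster.mon_command(cmd, b"")
print(out)   # e.g. b'size: 3'
# min_size is the related knob: the fewest replicas a placement group can
# have and still accept I/O.
cluster.shutdown()
```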

So where is Intel focusing its efforts around Ceph? Chagam points out that Intel is primarily targeting low(er) performance and low(er) capacity block workloads as well as low(er) performance but high(er) capacity object storage workloads. At some point Intel may focus on the high performance workloads, but that is not a current area of focus.

Speaking of performance, Chagam spends a few minutes providing some high-level performance reviews based on tests that Intel has conducted. Most of the measured performance stats were close to the calculated theoretical maximums, except for random 4K writes (which was only 64% of the calculated theoretical maximum for the test cluster). Chagam recommends that you limit VM deployments to the maximum number of IOPS that your Ceph cluster will support (this is pretty standard storage planning).

With regard to Ceph deployment, Chagam reviews a number of deployment considerations:

  • Workloads (VMs, archive data)
  • Access type (block, file, object)
  • Performance requirements
  • Reliability requirements
  • Caching (server caching, client caching)
  • Orchestration (OpenStack, VMware)
  • Management
  • Networking and fabric

Next, Chagam talks about Intel’s reference architecture for block workloads on a Ceph storage cluster. Intel recommends 1 SSD for every 5 HDDs (sizes not specified). Management traffic can run over a 1 Gbps link, but storage traffic should run across a 10 Gbps link. Intel recommends 16GB or more of memory for Ceph, with memory requirements going as high as 64GB for larger Ceph clusters. (Chagam does not talk about why the memory requirements differ.)

Intel also has a reference architecture for object storage; this looks very similar to the block storage reference architecture but includes “object storage proxies” (I would imagine these are conceptually similar to Swift proxies). Chagam does say that Atom CPUs would be sufficient for very low-performance implementations; otherwise, the hardware requirements look very much like the block storage reference architecture.

This brings Chagam to a discussion of where Intel is specifically contributing back to open source storage solutions with Ceph. There isn’t much here; Intel will help optimize Ceph to run best on Intel Architecture platforms, including contributing open source code back to the Ceph project. Intel will also publish Ceph reference architectures, like the ones he shared earlier in the presentation (which have not yet been published). Specific product areas from Intel’s lineup that might be useful for Ceph include Intel SSDs (including NVMe), Intel network interface cards (NICs), software libraries (the Intel Storage Acceleration Library, or ISA-L), and software products (Intel Cache Acceleration Software, or CAS). Some additional open source projects/contributions are planned but haven’t yet happened. Naturally, Intel partners closely with Red Hat to help address needs around Ceph development.

ISA-L is interesting, and fortunately Chagam has a slide on ISA-L. ISA-L is a set of algorithms optimized for Intel Architecture platforms. These algorithms, available as machine code or as C code, will help improve performance for tasks like data integrity, security, and encryption. One example provided by Chagam is improving performance for SHA-1, SHA-256, and MD5 hash calculations. Another example of ISA-L is the Erasure Code plug-in that will be merged with the main Ceph release (it currently exists in the development release).
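For reference, the operation in question is plain bulk hashing. The pure-Python baseline below (hashlib, shown only for illustration) is exactly the kind of hot path that ISA-L’s hand-tuned routines are meant to replace.

```python
# Chunked hashing of a file; ISA-L accelerates this sort of loop.
import hashlib

def digest_file(path, algo="sha256", chunk=1024 * 1024):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Works for the algorithms Chagam named:
# digest_file("/tmp/obj", "sha1"), digest_file("/tmp/obj", "md5")
```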

Virtual Storage Manager (VSM) is an open source project that Intel is developing to help address some of the management concerns around Ceph. VSM will primarily focus on configuration management. VSM is anticipated to be available in Q4 of this year.

Intel Cache Acceleration Software (CAS) is another product (not open source) that might help in a Ceph environment. CAS uses SSDs and DRAM to speed up operations. Currently, CAS only benefits read I/O operations.

Finally, Chagam takes a few minutes to talk about some Ceph best practices (a short tuning sketch follows the list):

  • You should shoot for one HDD being managed by one OSD. That, in turn, translates to 1GHz of Xeon-class computing power per OSD and about 1GB of RAM per OSD. (Resource requirements are pretty significant.)
  • Jumbo frames are recommended. (No specific MTU size provided.)
  • Use 10x the default queue parameters.
  • Use the deadline scheduler for XFS.
  • Tune read_ahead_kb for sequential reads.
  • Use a queue depth of 64 for sequential workloads, and 8 for random workloads.
  • For small clusters, you can co-locate the Ceph monitor process with compute workloads, but it will take 4–6GB of RAM.
  • Use dedicated nodes for monitoring when you move beyond 100 OSDs.
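Here’s the tuning sketch promised above: two of the knobs from the list (I/O scheduler and read-ahead) applied via sysfs. The device name and values are illustrative, this must run as root, and it is not persistent across reboots (udev rules are the usual answer for that).

```python
# Apply the deadline scheduler and a larger read-ahead to an OSD data disk.
DEV = "sdb"   # illustrative device name

with open("/sys/block/%s/queue/scheduler" % DEV, "w") as f:
    f.write("deadline")
with open("/sys/block/%s/queue/read_ahead_kb" % DEV, "w") as f:
    f.write("2048")   # bump read-ahead for sequential read workloads
```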

Chagam now summarizes the key points and wraps up the session.


Welcome to Technology Short Take #44, the latest in my irregularly-published series of articles, links, ideas, and thoughts about various data center-related technologies. Enjoy!

Networking

  • One of the original problems with the VXLAN IETF specification was that it (deliberately) didn’t include any control plane information; as a result, the process of mapping MAC addresses to VTEPs (VXLAN Tunnel Endpoints) wasn’t defined, and the early implementations relied on multicast to handle this issue. To help resolve this issue, Cumulus Networks (and possibly Metacloud, I’m not sure of their involvement yet) has released an open source project called vxfld. As described in this Metacloud blog post, vxfld is designed to “handle VXLAN traffic from any operating system or hardware platform that adheres to the IETF Internet-Draft for VXLAN”.
  • Nir Yechiel recently posted part 1 of a discussion on the need for network overlays. This first post is more of a discussion of why VLANs and VLAN-based derivatives aren’t sufficient, and why we should be looking to routing (layer 3) constructs instead. I’m looking forward to part 2 of the series.
  • One ongoing discussion in the network industry these days—or so it seems—is the discussion about the interaction between network overlays and the underlying transport network. Some argue that tight integration is required; others point to streaming video services and VoIP running across the Internet and insist that no integration or interaction is needed. In this post, Scott Jensen argues in favor of the former—that SDN solutions shouldn’t just manage network overlays, but should also manage the configuration of the physical transport network as well. I’d love to hear from more networking pros (please disclose company affiliations) about their thoughts on this matter.
  • I like the distinction made here between network automation and SDN.
  • Need to get a better grasp on OpenFlow? Check out OpenFlow basics and OpenFlow deep-dive.
  • Here’s a write-up on connecting Docker containers using VXLAN. I think there’s a great deal of promise for OVS in containerized environments, but what’s needed is better/tighter integration between OVS and container solutions like Docker.

Servers/Hardware

  • Is Intel having second thoughts about software-defined infrastructure? That’s the core question in this blog post, which explores the future of Intel in a software-defined world and the increasing interest in non-x86 platforms like ARM.
  • On the flip side, proponents who claim that platforms like ARM and others are necessary in order to move forward with SDN and NFV initiatives should probably read this article on 80 Gbps performance from an off-the-shelf x86 server. Impressive.

Security

  • It’s nice to see that work on OpenStack Barbican is progressing nicely; see this article for a quick overview of the project and an update on the status.

Cloud Computing/Cloud Management

  • SDN Central has a nice write-up on the need for open efforts in the policy space, which includes the Congress project.
  • The use of public cloud offerings as disaster recovery targets is on the rise; note this article from Microsoft on how to migrate on-premises workloads to Azure using Azure Site Recovery. VMware has a similar offering via the VMware vCloud Hybrid Service recovery-as-a-service offering.
  • The folks at eNovance have a write-up on multi-tenant Docker with OpenStack Heat. It’s an interesting write-up, but not for the faint of heart—to make their example work, you’ll need the latest builds of Heat and the Docker plugin (it doesn’t work with the stable branch of Heat).
  • Preston Bannister took a look at cloud application backup in OpenStack. His observations are, I think, rational and fair, and I’m glad to see someone paying attention to this topic (which, thus far, I think has been somewhat ignored).
  • Interested in Docker and Kubernetes on Azure? See here and here for more details.
  • This article takes a look at Heat-Translator, an effort designed to provide some interoperability between TOSCA and OpenStack HOT documents for application deployment and orchestration. The portability of orchestration resources is one of several aspects you’ll want to examine as you progress down the route of fully embracing a cloud computing operational model.

Operating Systems/Applications

  • Looks like we have another convert to Markdown—Anthony Burke recently talked about how he uses Markdown. Regular readers of this site know that I do almost all of my content generation using MultiMarkdown (a variation of Markdown with some expanded syntax options). Here’s a post I recently published on some useful Markdown tools for OS X.
  • Good to see that Ivan Pepelnjak thinks infrastructure as code makes sense. I guess that means the time I’ve spent with Puppet (you can browse Puppet-related posts here) wasn’t a waste.
  • I don’t know if I’ve mentioned this before (sorry if that’s the case), but I’m liking this “NIX4NetEng” series going on over at Nick Buraglio’s site (part 1, part 2, and part 3).
  • Mike Foley has a blog post on how to go from zero to Windows domain controller in only 4 reboots. Handy.

Storage

Virtualization

  • Running Hyper-V with Linux VMs? Ben Armstrong details what versions of Linux support the various Hyper-V features in this post.
  • Here’s a quick write-up on running VMs with VirtualBox 4.3 on a headless Ubuntu 14.04 LTS server.
  • Nested OS X guest on top of nested ESXi on top of VMware Fusion? Must be something William Lam’s tried. Go have a look at his write-up.
  • Here’s a quick update on Nova-Docker, the effort in OpenStack to allow users to deploy Docker containers via Nova. I’m not yet convinced that treating Docker as a hypervisor in Nova is the right path, but we’ll see how things develop.
  • This post is a nice write-up on the different ways to connect a Docker container to a local network.
  • Weren’t able to attend VMworld US in San Francisco last week? No worries. If you have access to the recorded VMworld sessions, check out Jason Boche’s list of the top 10 sessions for a priority list of what recordings to check out. Or need a recap of the week? See here (one of many recap posts, I’m sure).

That’s it this time around; hopefully I was able to include something useful for you. As always, all courteous comments are welcome, so feel free to speak up in the comments. In particular, if there is a technology area that I’m not covering (or not covering well), please let me know—and suggestions for more content sources are certainly welcome!


Welcome to Technology Short Take #42, another installation in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create high availability instances using VRRP (via keepalived). Very cool stuff, in my opinion. (A minimal example of setting an allowed address pair appears after this list.)
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.
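Here’s the allowed-address-pairs sketch mentioned above: permitting a VRRP virtual IP on an instance’s port so Neutron’s anti-spoofing rules don’t drop it. The credentials, URL, port ID, and VIP are all placeholders (python-neutronclient with Keystone v2-era auth).

```python
# Allow a VRRP VIP (10.0.0.100) on an existing Neutron port.
from neutronclient.v2_0 import client

neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")
neutron.update_port("PORT-UUID", {
    "port": {"allowed_address_pairs": [{"ip_address": "10.0.0.100"}]},
})
```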

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas why Cisco was so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.) A rough Python equivalent is sketched below.
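And the rough Python equivalent promised above. This is destructive by design, so the device path is a placeholder; triple-check it before running anything like this.

```python
# Overwrite a block device with random data until it is full. DESTRUCTIVE.
import os

DEV = "/dev/sdX"          # placeholder -- pick the right device!
CHUNK = 4 * 1024 * 1024   # 4 MiB writes

with open(DEV, "wb") as disk:
    try:
        while True:
            disk.write(os.urandom(CHUNK))
    except OSError:       # raised when we run off the end of the device
        pass
```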

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to get OpenStack Icehouse (via DevStack) working together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too. (A requests-based Python equivalent is sketched after this list.)
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
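As mentioned above, here’s what a couple of typical cURL invocations look like translated to Python’s requests library (httpbin.org is used as a stand-in endpoint, and the token is a placeholder).

```python
# GET with query parameters, then an authenticated JSON POST.
import requests

r = requests.get("https://httpbin.org/get", params={"q": "test"})
print(r.status_code, r.json())

r = requests.post("https://httpbin.org/post",
                  json={"name": "demo"},                      # JSON body
                  headers={"Authorization": "Bearer TOKEN"})  # placeholder
r.raise_for_status()
```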

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but it supports multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time stops for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Welcome to Technology Short Take #41, the latest in my series of random thoughts, articles, and links from around the Internet. Here’s hoping you find something useful!

Networking

  • Network Functions Virtualization (NFV) is a networking topic that is starting to get more and more attention (some may equate “attention” with “hype”; I’ll allow you to draw your own conclusion there). In any case, I liked how this article really hit upon what I personally feel is something many people are overlooking in NFV. Many vendors are simply rushing to provide virtualized versions of their solution without addressing the orchestration and automation side of the house. I’m looking forward to part 2 on this topic, in which the author plans to share more technical details.
  • Rob Sherwood, CTO of Big Switch, recently published a reasonably in-depth look at “modern OpenFlow” implementations and how they can leverage multiple tables in hardware. Some good information in here, especially on OpenFlow basics (good for those of you who aren’t familiar with OpenFlow).
  • Connecting Docker containers to Open vSwitch is one thing, but what about using Docker containers to run Open vSwitch in userspace? Read this.
  • Ivan knocks centralized SDN control planes in this post. It sounds like Ivan favors scale-out architectures, not scale-up architectures (which are typically what is seen in centralized control plane deployments).
  • Looking for more VMware NSX content? Anthony Burke has started a new series focusing on VMware NSX in pure vSphere environments. As far as I can tell, Anthony is up to 4 posts in the series so far. Check them out here: part 1, part 2, part 3, and part 4. Enjoy!

Servers/Hardware

  • Good friend Simon Seagrave is back to the online world again with this heads-up on a potential NIC issue with an HP Proliant firmware update. The post also contains a link to a fix for the issue. Glad to see you back again, Simon!
  • Tom Howarth asks, “Is the x86 blade server dead?” (OK, so he didn’t use those words specifically. I’m paraphrasing for dramatic effect.) The basic premise of Tom’s position is that new technologies like server-side caching and VSAN/Ceph/Sanbolic (turning direct-attached storage into shared storage) will dramatically change the landscape of the data center. I would generally agree, although I’m not sure that I agree with Tom’s statement that “complexity is reduced” with these technologies. I think we’re just shifting the complexity to a different place, although it’s a place where I think we can better manage the complexity (and perhaps mask it). What do you think?

Security

Cloud Computing/Cloud Management

  • Juan Manuel Rey has launched a series of blog posts on deploying OpenStack with KVM and VMware NSX. He has three parts published so far; all good stuff. See part 1, part 2, and part 3.
  • Kyle Mestery brought to my attention (via Twitter) this list of the “best newly-available OpenStack guides and how-to’s”. It was good to see a couple of Cody Bunch’s articles on the list; Cody’s been producing some really useful OpenStack content recently.
  • I haven’t had the opportunity to use SaltStack yet, but I’m hearing good things about it. It’s always helpful (to me, at least) to be able to look at products in the context of solving a real-world problem, which is why seeing this post with details on using SaltStack to automate OpenStack deployment was helpful.
  • Here’s a heads-up on a potential issue with the vCAC 6.0.1.1 upgrade—the upgrade apparently changes some configuration files. The linked blog post provides more details on which files get changed. If you’re looking at doing this upgrade, read this to make sure you aren’t adversely affected.
  • Here’s a post with some additional information on OpenStack live migration that you might find useful.

Operating Systems/Applications

  • RHEL7, Docker, and Puppet together? Here’s a post on just such a use case (oh, I forgot to mention OpenStack’s involved, too).
  • Have you ever walked through a spider web because you didn’t see it ahead of time? (Not very fun.) Sometimes I feel that way with certain technologies or projects—like there are connections there with other technologies, projects, trends, etc., that aren’t quite “visible” just yet. That’s where I am right now with the recent hype around containers and how they are going to replace VMs. I’m not so sure I agree with that just yet…but I have more noodling to do on the topic.

Storage

  • “Server SAN” seems to be the name that is emerging to describe various technologies and architectures that create pools of storage from direct-attached storage (DAS). This would include products like VMware VSAN as well as projects like Ceph and others. Stu Miniman has a nice write-up on Server SAN over at Wikibon; if you’re not familiar with some of the architectures involved, that might be a good place to start. Also at Wikibon, David Floyer has a write-up on the rise of Server SAN that goes into a bit more detail on business and technology drivers, friction to adoption, and some recommendations.
  • Red Hat recently announced they were acquiring Inktank, the company behind the open source scale-out Ceph project. Jon Benedict, aka “Captain KVM,” weighs in with his thoughts on the matter. Of course, there’s no shortage of thoughts on the acquisition—a quick web search will prove that—but I find it interesting that none of the “big names” in storage social media had anything to say (not that I could find, anyway). Howard? Stephen? Chris? Martin? Bueller?

Virtualization

  • Doug Youd pulled together a nice summary of some of the issues and facts around routed vMotion (vMotion across layer 3 boundaries, such as across a Clos fabric/leaf-spine topology). It’s definitely worth a read (and not just because I get mentioned in the article, either—although that doesn’t hurt).
  • I’ve talked before—although it’s been a while—about Hyper-V’s choice to rely on host-level NIC teaming in order to provide network link redundancy to virtual machines. Ben Armstrong talks about another option, guest-level NIC teaming, in this post. I’m not so sure that using guest-level teaming is any better than relying on host-level NIC teaming; what’s really needed is a more full-featured virtual networking layer.
  • Want to run nested ESXi on vCHS? Well, it’s not supported…but William Lam shows you how anyway. Gotta love it!
  • Brian Graf shows you how to remove IP pools using PowerCLI.

Well, that’s it for this time around. As always, I welcome all courteous comments, so feel free to share your thoughts, ideas, rants, links, or feedback in the comments below.


Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will go away entirely.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.) Once your image is built, uploading it to Glance might look like the sketch after this list.
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources, you’re bound to find something useful there.
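And the image-upload sketch promised above: pushing a finished Windows image into Glance with python-glanceclient (Icehouse-era v1 API; the endpoint, token, and file name are placeholders).

```python
# Upload a QCOW2 Windows image to Glance.
from glanceclient import Client

glance = Client("1", endpoint="http://controller:9292", token="ADMIN-TOKEN")
with open("win2008r2.qcow2", "rb") as image_data:
    image = glance.images.create(name="win2008r2",
                                 disk_format="qcow2",
                                 container_format="bare",
                                 data=image_data)
print(image.id, image.status)
```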

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution in years past, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later) recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy. (A short python-neutronclient example of the basic building blocks follows this list.)
  • Here’s a couple (first one, second one) of quick walk-throughs on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.
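Here’s the python-neutronclient example promised above, exercising the basic building blocks James covers: a tenant network, a subnet on it, and a router interface. Credentials and URLs are placeholders.

```python
# Create a network, a subnet on it, and attach both to a new router.
from neutronclient.v2_0 import client

neutron = client.Client(username="demo", password="secret",
                        tenant_name="demo",
                        auth_url="http://controller:5000/v2.0")

net = neutron.create_network({"network": {"name": "net1"}})["network"]
subnet = neutron.create_subnet({"subnet": {"network_id": net["id"],
                                           "cidr": "192.168.10.0/24",
                                           "ip_version": 4}})["subnet"]
router = neutron.create_router({"router": {"name": "r1"}})["router"]
neutron.add_interface_router(router["id"], {"subnet_id": subnet["id"]})
```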

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I want to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


Welcome to Technology Short Take #38, another installment in my irregularly-published series that collects links and thoughts on data center-related technologies from around the web. But enough with the introduction, let’s get on to the content already!

Networking

  • Jason Edelman does some experimenting with the Python APIs on a Cisco Nexus 3000. In the process, he muses about the value of configuration management tool chains such as Chef and Puppet in a world of “open switch” platforms such as Cumulus Linux.
  • Speaking of Cumulus Linux…did you see the announcement that Dell has signed a reseller agreement with Cumulus Networks? I’m pretty excited about this announcement, and I hope that Cumulus sees great success as a result. There are a variety of write-ups about the announcement; some good, many not so good. The not-so-good variety typically refers to Cumulus’ product as an SDN product when technically it isn’t. This article on Barron’s by Tiernan Ray is a pretty good summary of the announcement and some of its implications.
  • Pete Welcher has launched a series of articles discussing “practical SDN,” focusing on the key leaders in the market: NSX, DFA, and the yet-to-be-launched ACI. In the initial installment of the series, he does a good job of providing some basics around each of the products, although (as would be expected of a product that hasn’t launched yet) he has to do some guessing when it comes to ACI. The series continues with a discussion of L2 forwarding and L3 forwarding across the various products. Definitely worth reading, in my opinion.
  • Nick Buraglio takes away all your reasons for not collecting flow-based data from your environment with his write-up on installing nfsen and nfdump for NetFlow and/or sFlow collection.
  • Terry Slattery has a nice write-up on new network designs that are ideally suited for SDN. If you are looking for a primer on “next-generation” network designs, this is worth reviewing.
  • Need some Debian packages for Open vSwitch 2.0? Here’s another article from Nick Buraglio—he has some information to help you out.

Servers/Hardware

Nothing this time, but check back next time.

Security

Nothing from my end. Maybe you have something you’d like to share in the comments?

Cloud Computing/Cloud Management

  • Christian Elsen (who works in Integration Engineering at VMware) has a nice series of articles going on using OpenStack with vSphere and NSX. The series starts here, but follow the links at the bottom of that article for the rest of the posts. This is really good stuff—he includes the use of the NSX vSwitch with vSphere 5.5, and talks about vSphere OpenStack Virtual Appliance (VOVA) as well. All in all, well worth a read in my opinion.
  • Maish Saidel-Keesing (one of my co-authors on the first edition of VMware vSphere Design and also a super-sharp guy) recently wrote an article on how adoption of OpenStack will slow the adoption of SDN. While I agree that widespread adoption of OpenStack could potentially retard the evolution of enterprise IT, I’m not necessarily convinced that it will slow the adoption of SDN and network virtualization solutions. Why? Because, in part, I believe that the full benefits of something like OpenStack need a good network virtualization solution in order to be realized. Yes, some vendors are writing plugins for Neutron that manipulate physical switches. But for developers to get true isolation, application portability, the ability to re-create production environments in development—all that is going to require network virtualization.
  • Here’s a useful OpenStack CLI cheat sheet for some commonly-used commands.

Operating Systems/Applications

  • I haven’t had a chance to use Ansible yet (though I’m watching it closely), but I came across this article on an upcoming change to the SSH transport that Ansible uses. This change, referred to as “ssh_alt,” promises a significant performance increase for Ansible. Good stuff.
  • I don’t think I’ve mentioned this before, but Forbes Guthrie (my co-author on the VMware vSphere Design books and an already great guy) has a series going on using Linux as a domain controller for a vSphere-based lab. The series is up to four parts now: part 1, part 2, part 3, and part 4.
  • Need (or want) to increase the SCSI timeout for a KVM guest? See these instructions. (A quick sysfs-based sketch appears after this list.)
  • I’ve been recommending that IT pros get more familiar with Linux, as I think its influence in the data center will continue to grow. However, the problem I sometimes face is that experienced folks tend to share these “super commands” that ordinary folks have a hard time decomposing. Fortunately, this site should make that easier. I’ve tried it—it’s actually pretty handy.
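The sysfs sketch mentioned above: bumping the SCSI command timeout for every sd* device in a guest. The value is illustrative, and the linked instructions cover making the change persistent (typically via a udev rule).

```python
# Raise the SCSI timeout on all sd* devices (run as root inside the guest).
import glob

for path in glob.glob("/sys/block/sd*/device/timeout"):
    with open(path, "w") as f:
        f.write("180")   # seconds; illustrative value
```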

Storage

  • Jim Ruddy (an EMCer, former co-worker of mine, and an overall great guy) has a pretty cool series of articles discussing the use of EMC ViPR in conjunction with OpenStack. Want to use OpenStack Glance with EMC ViPR using ViPR’s Swift API support? See here. Want a multi-node Cinder setup with ViPR? Read how here. Multi-node Glance with ViPR? He’s got it. If you’re new to ViPR (who outside of EMC isn’t?), you might also find his articles on deploying EMC ViPR, setting up back-end storage for ViPR, or deploying object services with ViPR to also be helpful.
  • Speaking of ViPR, EMC has apparently decided to release it for free for non-commercial use. See here.
  • Looking for more information on VSAN? Look no further than Cormac Hogan’s extensive VSAN series (up to Part 14 at last check!). The best way to find this stuff is to check articles tagged VSAN on Cormac’s site. The official VMware vSphere blog also has a series of articles running; check out part 1 and part 2.

Virtualization

  • Did you happen to see this news about Microsoft Hyper-V Recovery Manager (HRM)? This is an Azure-hosted service that can be roughly compared to VMware’s Site Recovery Manager (SRM). However, unlike SRM (which is hosted on-premise), HRM is hosted by Microsoft Azure. As the article points out, it’s important to understand that this doesn’t mean your VMs are replicated to Azure—it’s just the orchestration portion of HRM that is running in Azure.
  • Oh, and speaking of Hyper-V…in early January Microsoft released version 3.5 of their Linux Integration Services, which primarily appears to be focused on adding Linux distribution support (CentOS/RHEL 6.5 is now supported).
  • Gregory Gee has a write-up on installing the Cisco CSR 1000V in VirtualBox. (I’m a recent VirtualBox convert myself; I find the vboxmanage command just so very handy.) Note that I haven’t tried this myself, as I don’t have a Cisco login to get the CSR 1000V code. If any readers have tried it, I’d love to hear your feedback. Gregory also has a few other interesting posts I’m planning to review in the next few weeks as well.
  • Sunny Dua, who works with VMware PSO in India, has a series of blog posts on architecting vSphere environments. It’s currently up to five parts; I don’t know how many more (if any) are planned. Here are the links: part 1 (clusters), part 2 (vCenter SSO), part 3 (storage), part 4 (design process), and part 5 (networking).

It’s time to wrap up now before this gets any longer. If you have any thoughts or tidbits you’d like to share, I welcome any and all courteous comments. Join (or start) the conversation!


Welcome to Technology Short Take #37, the latest in my irregularly-published series in which I share interesting articles from around the Internet, miscellaneous thoughts, and whatever else I feel like throwing in. Here’s hoping you find something useful!

Networking

  • Ivan does a great job of describing the difference between the management, control, and data planes, as well as providing examples. Of course, the distinction between control plane protocols and data plane protocols isn’t always perfectly clear.
  • You’ve heard me talk about snowflake servers before. In this post on why networking needs a Chaos Monkey, Mike Bushong applies the term to networks—a snowflake network is an intricately crafted network that is carefully tailored to utilize a custom subset of networking features unique to your environment. What’s the fix—if one exists—for snowflake networks? Designing your network for resiliency and unleashing a Chaos Monkey on it is one way, as Mike points out. A fan of network virtualization might also say that decomposing today’s complex physical networks into multiple simple logical networks on top of a simpler physical transport network—similar to Mike’s suggestion of converging on a smaller set of reference architectures—might also help. (Of course, I am a fan of network virtualization, since I work with/on VMware NSX.)
  • Martijn Smit has launched a series of articles on VMware NSX. Check out part 1 (general introduction) and part 2 (distributed services) for more information.
  • The elephants and mice post at Network Heresy has sparked some discussion across the “blogosphere” about how to address this issue. (Note that my name is on the byline for that Network Heresy post, but I didn’t really contribute all that much.) Jason Edelman took up the idea of using OpenFlow to provide a dedicated core/spine for elephant flows, while Marten Terpstra at Plexxi talks about how Plexxi’s Affinities could be used to help address the problem of elephant flows. Peter Phaal speaks up in the comments to Marten’s article about how sFlow can be used to rapidly detect elephant flows, and points to a demo taking place during SC13 that shows sFlow tracking elephant flows on SCinet (the SC13 network). (For the curious, there’s a toy sketch of that threshold-based detection logic after this list.)
  • Want some additional information on layer 2 and layer 3 services in VMware NSX? Here’s a good source.
  • This looks interesting, but I’m not entirely sure how I might go about using it. Any thoughts?
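
One more note on the elephant flow discussion above: the detection idea itself is simple enough to sketch. Sample packets (as sFlow does), scale the samples up to estimate per-flow byte counts, and flag a flow once it crosses a threshold. To be clear, this is a toy illustration of the classification logic, not sFlow itself, and the sampling rate and byte threshold are arbitrary assumptions.

```python
# Toy sketch of elephant-flow detection from sampled packet records, in the
# spirit of the sFlow-based approach mentioned above. Not a real sFlow
# collector; the sampling rate and byte threshold are arbitrary assumptions.
from collections import defaultdict

SAMPLING_RATE = 1000           # 1-in-N packet sampling
ELEPHANT_BYTES = 10 * 1024**2  # flag flows above ~10 MB estimated volume

flow_bytes = defaultdict(int)  # (src, dst, sport, dport, proto) -> est. bytes
elephants = set()

def observe(flow_key, pkt_len):
    """Process one sampled packet belonging to flow_key."""
    # Scale the sampled packet length by the sampling rate to estimate
    # the flow's true byte count.
    flow_bytes[flow_key] += pkt_len * SAMPLING_RATE
    if flow_key not in elephants and flow_bytes[flow_key] >= ELEPHANT_BYTES:
        elephants.add(flow_key)
        print("elephant flow detected: %s" % (flow_key,))

# Synthetic example: repeated samples from one chatty flow
for _ in range(10):
    observe(("10.0.0.1", "10.0.0.2", 49152, 5001, "tcp"), 1460)
```

The appeal of the sampling approach is that the collector only needs to see a tiny fraction of the packets in order to spot the handful of flows carrying most of the bytes.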

Servers/Hardware

Nothing this time around, but I’ll keep my eyes peeled for something to include next time!

Security

I don’t have anything to share this time—feel free to suggest something to include next time.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this post on getting the most out of HAProxy—in which Twilio walks through some of the configuration options they’re using and why—to be quite helpful. If you’re relatively new to HAProxy, as I am, then I’d recommend giving this post a look.
  • This list is reasonably handy if you’re not a Terminal guru. While written for OS X, most of these tips apply to Linux or other Unix-like operating systems as well. I particularly liked tip #3, as I didn’t know about that particular shortcut.
  • Mike Preston has a great series going on tuning Debian Linux running under vSphere. In part 1, he covered installation, primarily centered around LVM and file system mount options. In part 2, Mike discusses things like using the appropriate virtual hardware, the right kernel modules for VMXNET3, getting rid of unnecessary hardware (like the virtual floppy), and similar tips. Finally, in part 3, he talks about a hodgepodge of tips—things like blacklisting other unnecessary kernel drivers, time synchronization, and modifying the Linux I/O scheduler. All good stuff, thanks Mike!
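
As a quick illustration of that last batch of tips, here’s a minimal sketch of inspecting and changing the Linux I/O scheduler through sysfs. The device name and the choice of noop are my own assumptions (noop is a common pick when the hypervisor or the underlying array is already reordering I/O); see Mike’s posts for his actual recommendations.

```python
#!/usr/bin/env python
# Minimal sketch: inspect and set the I/O scheduler for a block device via
# sysfs, as one might when tuning a Linux VM. Run as root inside the guest.
# The device name and the "noop" choice are illustrative assumptions.

DEVICE = "sda"  # hypothetical device name
SCHED_PATH = "/sys/block/%s/queue/scheduler" % DEVICE

with open(SCHED_PATH) as f:
    # Output looks like "noop deadline [cfq]"; the active scheduler
    # is the one in brackets.
    print("current: %s" % f.read().strip())

with open(SCHED_PATH, "w") as f:
    f.write("noop")  # takes effect immediately

with open(SCHED_PATH) as f:
    print("now: %s" % f.read().strip())
```

Note that this doesn’t persist across reboots; at the time, the usual way to make it permanent was the elevator= kernel boot parameter.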

Storage

  • “Captain KVM,” aka Jon Benedict, takes on the discussion of enterprise storage vs. open source storage solutions in OpenStack environments. One good point that Jon makes is that solutions need to be evaluated on a variety of criteria. In other words, it’s not just about cost, nor is it just about performance. You need to use the right solution for your particular needs. It’s nice to see Jon say that if your needs are properly met by an open source solution, then “by all means stick with Ceph, Gluster, or any of the other cool software storage solutions out there.” More vendors need to adopt this viewpoint, in my humble opinion. (By the way, if you’re thinking of using NetApp storage in an OpenStack environment, here’s a “how to” that Jon wrote.)
  • Duncan Epping has a quick post about a VMware KB article update regarding EMC VPLEX and Storage DRS/Storage IO Control. The update is actually applicable to all vMSC configurations, so have a look at Duncan’s article if you’re using or considering the use of vMSC in your environment.
  • Vladan Seget has a look at Microsoft ReFS.

Virtualization

I’d better wrap it up here so this doesn’t get too long for folks. As always, your courteous comments and feedback are welcome, so feel free to start (or join) the discussion below.


Recently a couple of open source software (OSS)-related announcements have passed through my Inbox, so I thought I’d make brief mention of them here on the site.

Mirantis OpenStack

Last week Mirantis announced the general availability of Mirantis OpenStack, its own commercially-supported OpenStack distribution. Mirantis joins a number of other vendors also offering OpenStack distributions, though Mirantis claims to be different in that its OpenStack distribution is not tied to a particular Linux distribution. Mirantis is also differentiating through support for some additional projects:

  • Fuel (Mirantis’ own OpenStack deployment tool)
  • Savanna (for running Hadoop on OpenStack)
  • Murano (a service for assisting in the deployment of Windows-based services on OpenStack)

It’s fairly clear to me that at this stage in OpenStack’s lifecycle, professional services are a big play in helping organizations stand up OpenStack (few organizations have the deep expertise to really stand up sizable installations of OpenStack on their own). However, I’m not yet convinced that building and maintaining your own OpenStack distribution is going to be as useful and valuable for the smaller players, given the pending competition from the major open source players out there. Of course, I’m not an expert, so I could be wrong.

Inktank Ceph Enterprise

Ceph, the open source distributed storage system, is now coming in a fully-supported version aimed at enterprise markets. Inktank has announced Inktank Ceph Enterprise, a bundle of software and support aimed at increasing the adoption of Ceph among enterprise customers. Inktank Ceph Enterprise will include:

  • Open source Ceph (version 0.67)
  • New “Calamari” graphical manager that provides management tools and performance data with the intent of simplifying management and operation of Ceph clusters
  • Support services provided by Inktank; this includes technical support, hot fixes, bug prioritization, and roadmap input

Given Ceph’s integration with OpenStack, CloudStack, and open source hypervisors and hypervisor management tools (such as libvirt), it will be interesting to see how Inktank Ceph Enterprise takes off. Will the adoption of Inktank Ceph Enterprise be gated by enterprise adoption of these related open source technologies, or will it help drive their adoption? I wonder if it would make sense for Inktank to pursue some integration with VMware, given VMware’s strong position in the enterprise market. One thing is for certain: it will be interesting to see how things play out.

As always, feel free to speak up in the comments to share your thoughts on these announcements (or any other related topic). All courteous comments are welcome.

