RedHat

Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.

Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I'll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree on whether OpenStack Networking (formerly Quantum) should be the basis here: he says it shouldn't be (it's not well known in the enterprise), while I say it should be (it already represents work created collaboratively by multiple vendors and allows for different back-end implementations).
  • Need a reasonable introduction to OpenFlow? This post provides one, and the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.

Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP's new ARM-based offerings. I think Chris may have accidentally overlooked the fact that these servers are not x86-based; therefore, a hypervisor such as vSphere is not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market here is massively parallel workloads that benefit from having many cores available. It will be interesting to see how the support of a "Tier 1" hardware vendor like HP affects the adoption of ARM in the enterprise.

Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.

Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This "double-edged sword" argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel's article specifically refers to the 3PAR ASIC and its relationship to "flash as cache" functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?

Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off of Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use 1 link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall.
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!

Technology Short Take #30

Welcome to Technology Short Take #30. This Technology Short Take is a bit heavy on the networking side, but I suppose that’s understandable given my recent job change. Enjoy!

Networking

  • Ben Cherian, Chief Strategy Officer for Midokura, helps make a case for network virtualization. (Note: Midokura makes a network virtualization solution.) If you’re wondering about network virtualization and why there is a focus on it, this post might help shed some light. Given that it was written by a network virtualization vendor, it might seem a bit rah-rah, so keep that in mind.
  • Brent Salisbury has a fantastic series on OpenFlow. It’s so good I wish I’d written it. He starts out by discussing proactive vs. reactive flows, in which Brent explains that OpenFlow performance is less about OpenFlow and more about how flows are inserted into the hardware. Next, he tackles the concerns over the scale of flow-based forwarding in his post on coarse vs. fine flows. I love this quote from that article: “The second misnomer is, flow based forwarding does not scale. Bad designs are what do not scale.” Great statement! The third post in the series tackles what Brent calls hybrid SDN deployment strategies, and Brent provides some great design considerations for organizations looking to deploy an SDN solution. I’m looking forward to the fourth and final article in the series!
  • Also, if you’re looking for some additional context to the TCAM considerations that Brent discusses in his OpenFlow series, check out this Packet Pushers blog post on OpenFlow switching performance.
  • Another one from Brent, this time on Provider Bridging and Provider Backbone Bridging. Good explanation—it certainly helped me.
  • This article by Avi Chesla points out a potential security weakness in SDN, in the form of a DoS (Denial of Service) attack where many switching nodes request many flows from the central controller. It appears to me that this would only be an issue for networks using fine-grained, reactive flows. Am I wrong?
  • Scott Hogg has a nice list of 9 common Spanning Tree mistakes you shouldn’t make.
  • Schuberg Philis has a nice write-up of their CloudStack+NVP deployment here.

Servers/Hardware

  • Alex Galbraith recently posted a two-part series on what he calls the “NanoLab,” a home lab built on the Intel NUC (“Next Unit of Computing”). It’s a good read for those of you looking for some very quiet and very small home lab equipment, and Alex does a good job of providing all the details. Check out part 1 here and part 2 here.
  • At first, I thought this article was written from a sarcastic point of view, but it turns out that Kevin Houston’s post on 5 reasons why you may not want blade servers is the real deal. It’s nice to see someone who focuses on blade servers opening up about why they aren’t necessarily the best fit for all situations.

Security

  • Nick Buraglio has a good post on the potential impact of Arista’s new DANZ functionality on tap aggregation solutions in the security market. It will be interesting to see how this shapes up. BTW, Nick’s writing some pretty good content, so if you’re not subscribed to his blog I’d reconsider.

Cloud Computing/Cloud Management

  • Although this post is a bit older (it’s from September of last year), it’s still an interesting comparison of both OpenStack and CloudStack. Note that the author apparently works for Mirantis, which is a company that provides OpenStack consulting services. In spite of that fact, he manages to provide a reasonably balanced approach to comparing the two cloud management platforms. Both of them (I believe) have had releases since this time, so some of the points may not be valid any longer.
  • Are you a CloudStack fan? If so, you should probably check out this collection of links from Aaron Delp. Aaron’s focused a lot more on CloudStack now that he’s at Citrix, so he might be a good resource if that is your cloud management platform of choice.

Operating Systems/Applications

  • If you're just now getting into the whole configuration management scene where tools like Puppet, Chef, and others play, you might find this article helpful. It walks through the difference between configuring a system imperatively and configuring a system declaratively (hint: Puppet, Chef, and others are declarative). It does presume a small bit of programming knowledge in the examples, but even as a non-programmer I found it useful. (A rough illustration of the imperative/declarative distinction follows this list.)
  • Here’s a three-part series on beginning Puppet that you might find helpful as well (Part 1, Part 2, and Part 3).
  • If you’re a developer-type person, I would first ask why you’re reading my site, then I’d point you to this post on the AMQP, MQTT, and STOMP messaging protocols.
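As a rough illustration of that imperative/declarative distinction (my own example, not taken from the linked article): installing and running an NTP service imperatively with shell commands versus declaring the same end state in Puppet.

    # Imperative: you spell out each step yourself, and re-running them safely is your problem
    yum install -y ntp
    chkconfig ntpd on
    service ntpd start

    # Declarative (Puppet): you describe the desired end state, and the tool
    # works out what, if anything, needs to change on each run
    package { 'ntp':
      ensure => installed,
    }
    service { 'ntpd':
      ensure  => running,
      enable  => true,
      require => Package['ntp'],
    }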

Storage

Virtualization

  • Although these posts are storage-related, the real focus is on how the storage stack is implemented in a virtualization solution, which is why I’m putting them in this section. Cormac Hogan has a series going titled “Pluggable Storage Architecture (PSA) Deep Dive” (part 1 here, part 2 here, part 3 here). If you want more PSA information, you’d be hard-pressed to find a better source. Well worth reading for VMware admins and architects.
  • Chris Colotti shares information on a little-known vSwitch advanced setting that helps resolve an issue with multicast traffic and NICs in promiscuous mode in this post.
  • Frank Denneman reminds everyone in this post that the concurrent vMotion limit only goes to 8 concurrent vMotions when vSphere detects the NIC speed at 10Gbps. Anything less causes the concurrent limit to remain at 4. For those of you using solutions like HP VirtualConnect or similar that allow you to slice and dice a 10Gb link into smaller links, this is a design consideration you’ll want to be sure to incorporate. Good post Frank!
  • Interested in some OpenStack inception? See here. How about some oVirt inception? See here. What’s that? Not familiar with oVirt? No problem—see here.
  • Windows Backup has native Hyper-V support in Windows Server 2012. That’s cool, but are you surprised? I’m not.
  • Red Hat and IBM put out a press release today on improved I/O performance with RHEL 6.4 and KVM. The press release claims that a single KVM guest on RHEL 6.4 can support up to 1.5 million IOPS. (Cue timer until next virtualization vendor ups the ante…)

I guess I should wrap things up now, even though I still have more articles that I’d love to share with readers. Perhaps a “mini-TST”…

In any event, courteous comments are always welcome, so feel free to speak up below. Thanks for reading and I hope you’ve found something useful!

In this post, I’m going to show you a way to automate the configuration of Open vSwitch (OVS) using Puppet. While this method will work, it is not without its drawbacks (which I’ll explain later in the post). I freely admit that there might be better ways of accomplishing this task; this is one simple way that I discovered.

I wish I could say that this method of automating OVS with Puppet was clever, but—to be honest—it's really more of a brutish hack than anything else. In an earlier post, I described some integrations between RHEL and OVS that allow you to use interface configuration files in /etc/sysconfig/network-scripts to configure portions of OVS. For example, you could use this interface configuration file to create an OVS internal interface for management traffic:
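The file below is a minimal sketch along those lines, assuming a bridge named ovsbr0 and an internal port named mgmt0 (adjust the names and addressing to suit your environment):

    # /etc/sysconfig/network-scripts/ifcfg-mgmt0 (illustrative sketch)
    DEVICE=mgmt0
    DEVICETYPE=ovs
    TYPE=OVSIntPort
    OVS_BRIDGE=ovsbr0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.100.10
    NETMASK=255.255.255.0
    HOTPLUG=no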

Further, based on a number of the Puppet-related posts I’ve written (this one, for example), you probably know that Puppet has the ability to enforce the presence and contents of file-based resources.

So, Puppet can create and manage files, and files can be used to configure OVS. You can probably see where this is going. That’s right—if we use Puppet to manage interface configuration scripts, we can automate the configuration of OVS (only on systems running RHEL or RHEL variants, naturally).

Here’s a snippet of Puppet code that could be used to automate the configuration of OVS to use an internal interface for management traffic:
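The snippet below sketches the general idea, reusing the ovsbr0/mgmt0 names from the example above; it simply enforces the presence and contents of the ifcfg file:

    # Manage the ifcfg file for an OVS internal interface on RHEL/CentOS
    # (illustrative sketch; interface name, bridge name, and addressing are assumptions)
    file { '/etc/sysconfig/network-scripts/ifcfg-mgmt0':
      ensure  => file,
      owner   => 'root',
      group   => 'root',
      mode    => '0644',
      content => "DEVICE=mgmt0\nDEVICETYPE=ovs\nTYPE=OVSIntPort\nOVS_BRIDGE=ovsbr0\nONBOOT=yes\nBOOTPROTO=static\nIPADDR=192.168.100.10\nNETMASK=255.255.255.0\nHOTPLUG=no\n",
    }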

As I said, this is a bit of a brutish hack—not an elegant solution, but one that works. Naturally, because this builds on the RHEL-OVS integrations, you’ll need to do an ifdown and/or ifup to make the change(s) effective. (One could likely use an exec statement in the Puppet manifest to run these commands for you.) Another drawback is that this only works on RHEL and RHEL variants, whereas both OVS and Puppet are far more widely supported. Still, it might come in handy in some situations.
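A rough sketch of that exec idea, again using the mgmt0 interface from the earlier snippets, might look like this:

    # Bounce the interface whenever Puppet changes its ifcfg file
    # (illustrative sketch, not the original post's code)
    exec { 'bounce-mgmt0':
      command     => '/bin/sh -c "/sbin/ifdown mgmt0; /sbin/ifup mgmt0"',
      refreshonly => true,
      subscribe   => File['/etc/sysconfig/network-scripts/ifcfg-mgmt0'],
    }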

If you have corrections, clarifications, or suggestions, please feel free to speak up in the comments below. Courteous comments are both encouraged and welcomed!

In early January 2012, I posted a list of my 2012 projects. Those projects included learning to script in Perl, learning German, becoming more familiar with Xen/Open vSwitch/OpenStack, and pursuing CCNP. In late June 2012, my mid-year project update let you know that I was dropping the Perl efforts, but sticking it out with German, Xen/OVS/OpenStack, and CCNP. (I later dropped Xen in favor of KVM.) Finally, in early January 2013 I graded myself on my progress on my 2012 goals. In this post, I’d like to take lessons learned from my 2012 projects and apply them toward my list of 2013 projects.

As a quick summary, here are some key “lessons learned” from my 2012 projects:

  • The synergy of the projects does make a difference. I should try to find projects that are related and that support one another in some fashion.
  • Projects need to be more tightly defined. Projects that are overly broad will have mixed results.
  • There are still some fundamental building blocks that I need to reinforce.

With the conclusions listed above in mind, here is my list of 2013 projects and—in some cases—some proposed goals for which I’ll be aiming.

  1. Continue to learn German. Obviously, this is a continuation of my 2012 project to learn to speak German. This year, I’ll need to re-evaluate ways to enhance my motivation, and find additional ways to reinforce the information I’m learning. I don’t have any specific goals in mind for this project, although that is something I’ll be evaluating over the course of the year (I do agree that clear goals—something I didn’t establish last year—can help with progress.)

  2. Reinforce base Linux knowledge. This is one of the “fundamental building blocks” that I needed to reinforce. I’ll be focusing on Red Hat Enterprise Linux (RHEL) and RHEL variants, since that seems to be what’s most commonly found in enterprise deployments today. My primary goal here is to be able to do most anything that I need to do without constantly having to look up a “how to” guide; a secondary goal that I’m considering (but haven’t decided upon) would be to look at a Red Hat-focused certification (like RHCE).

  3. Continue using Puppet for automation. This will be partly tied to #2. I’ve mentioned elsewhere that you can’t automate something you don’t fully understand, so as my base Linux knowledge is reinforced I will also be seeking ways to automate tasks with Puppet. As with my base Linux knowledge, my primary goal here is to be able to do most anything that I need to do with Puppet. I might consider Puppet certification, but I think that’s a long shot.

  4. Expand and deepen data center networking fundamentals. Making progress on CCNP was one of my 2012 goals, but it didn’t really fit in with the rest of my focus areas (which were almost all centered in the data center). To help improve the synergy of projects, then, I’ve decided that my networking focus should be DC-oriented. This also ties in nicely with my new job role at Nicira/VMware. My primary goal here is simply to be more knowledgeable, although over the course of the year I will consider pursuing CCNA-DC.

So, that’s my list of 2013 projects. I’m sure that I’ll have a few side projects here and there as well (for example, there is a book project and at least one video training series project on the books for 2013), but the projects listed above represent the majority of my technical focus over the coming year. I’d love to hear your feedback—am I missing something important? Are there ways I could further improve the synergy of the projects? As always, your honest and courteous comments are welcome.

Welcome to Technology Short Take #27! This is my usual collection of links, thoughts, rants, and ideas about data center-related technologies. Here’s hoping you find something useful!

Networking

  • If you’re interested in learning more about OpenFlow and software-defined networking but need to do this on a shoestring budget in your home lab, a number of guides have been written to help out. I haven’t personally used any of these guides yet, but I’m working my way in that direction. (I needed to fill in some other knowledge gaps first.) First up is Brent Salisbury’s how to build an SDN lab without needing OpenFlow hardware. Brent is creating some fantastic content that I’ve found extremely useful. His earlier post on getting started with OpenFlow and Open vSwitch tutorial lab is also quite good. Another good resource is Dan Hersey’s guide to building an SDN-based private cloud in an hour. I encourage you to have a look at these posts if you’re at all interested in any of these technologies.
  • Bruce Davie and Martin Casado (with Nicira, now part of VMware) have written a post comparing the VXLAN and STT tunneling protocols. Not surprisingly, one of the key advantages of STT that's highlighted is its improved performance due to TSO support in NIC hardware. VXLAN, on the other hand, is seeing broader adoption across multiple vendors. There's no mention of NVGRE (or just plain GRE).
  • Related to the bare metal provisioning work (see below under “Servers/Hardware”), Mirantis also detailed some bare-metal networking stuff they’ve done for OpenStack in relation to the use of bare metal nodes.

Servers/Hardware

  • Mirantis published an article discussing a framework they built for bare-metal provisioning with OpenStack that allows OpenStack to place workloads onto bare-metal nodes instead of onto a hypervisor. It’s interesting work, but unfortunately it looks like this work won’t be returned to the community (it was developed for one or more of their clients). There are also a few follow-up posts, such as this one on placement control and multi-tenancy isolation and this one on preparing images for bare metal nodes. Also see the “Networking” section above for a related post on the networking aspects involved.

Security

I don’t have anything for this area this time around, but I’ll stay alert for articles to add next time. Feel free to share something in the comments!

Cloud Computing/Cloud Management

  • I might have mentioned this before, but Ken Pepple’s OpenStack Folsom architecture post is just awesome. It’s well worth reading and reviewing in depth.
  • This OpenStack-on-Debian HOWTO is a bit older (and probably out of date), but it does give a decent overview of the components that are involved and—via the configuration—how they relate to each other. While the details for installing a current version of OpenStack are likely to be different now, you might still find this conceptually helpful.
  • These articles are a bit long in the tooth, but CSS Corp has a useful series of articles on bundling various Linux distributions for use with OpenStack: bundling CentOS, bundling CentOS with VNC, bundling Debian, and bundling OpenSUSE. It would be interesting to me to see how much of this, if any, could be automated with something like Puppet. If any enterprise Puppet experts want to give it a go, I’d be happy to publish a guest blog post for you with full details on how it’s done.
  • Much like there are some great “how to’s” on how to run an SDN lab (see the Networking section earlier), there are also some great write-ups on doing the same for OpenStack. For example, Cody Bunch published this article on running OpenStack Private Cloud on ESXi, and Brent Salisbury (there he is again!) posted an older guide to OpenStack Essex on Ubuntu on VirtualBox as well as a newer guide to OpenStack DevStack on Fusion.

Operating Systems/Applications

Storage

  • I don’t fully understand all the details involved, but this post on changes in block protocol scalability in Xen outlines what sounds like good progress in improving efficiency.
  • This article is a bit older, published at the start of October, but it talks about an interesting project (product?) by Qlogic called “Mt. Rainier.” (Stu Miniman of Wikibon has more information here as well.) Apparently, “Mt. Rainier” will allow customers to combine PCIe-based SSD storage inside servers into a “virtual SAN” (now there’s an original and not over-used term). The really interesting aspect, in my opinion, is the use of “Mt. Rainier” to create shared caches across servers. Is this the beginning of the data center fractal edge?

Virtualization

  • Big news in the QEMU world: In the QEMU 1.3 release, the QEMU-KVM and QEMU projects have been merged. Why is this important? It’s first necessary to understand the relationship between QEMU and KVM. KVM is the set of kernel modules that leverage hardware virtualization functionality inside Intel and AMD CPUs, and it makes possible the virtualization of closed-source operating systems like Windows. QEMU, on the other hand, is needed to emulate everything else that a VM needs: networking, storage, USB, keyboard, mouse, etc. Both KVM and QEMU are needed for a full virtualization solution. Until the 1.3 release, QEMU (without hardware acceleration via KVM) was one branch, and QEMU-KVM (with KVM hardware acceleration) was a separate branch. The QEMU 1.3 release completes an effort to merge both efforts into a single development tree.
  • The merge of QEMU and QEMU-KVM isn’t the only cool thing happening with QEMU; also included in the 1.3 release is GlusterFS integration. This integration dramatically improves GlusterFS performance by allowing QEMU’s block layer to communicate directly with the Gluster backend without going through the userspace FUSE components.
  • Erik Scholten of VMGuru.nl has posted a good hypervisor feature comparison document. It includes RHEV 3.1 in the comparison, even though RHEV 3.1 wasn’t released (was still in beta) at the time the comparison was written.
  • Speaking of RHEV: apparently RHEV 3.1 was released yesterday (Tuesday, December 4, 2012), although I haven't been able to find any sort of official press release or announcement.
  • Debunking an argument I’ve heard quite a bit is this article by Frank Denneman on using SIOC with multiple datastores backed by a single pool of disks.
  • Need to compact a virtual hard disk in Windows 8/Windows Server 2012? Ben Armstrong shows how here.
  • I enjoyed this article by Josh Townsend on using SUSE Studio and HAProxy to create a (free) open source load balancing solution for VMware View.

That’s it for this time around; no need to overwhelm you with too much information! Besides, I have to keep a few items around for Technology Short Take #28…

As always, comments, thoughts, rants, or corrections are welcome below.

Welcome to Technology Short Take #26! As you might already know, the Technology Short Takes are my irregularly-published collections of links, articles, thoughts, and (sometimes) rants. I hope you find something useful here!

Networking

  • Chris Colotti, as part of a changed focus in his role at VMware, has been working extensively with Nicira NVP. He’s had a couple of good posts; this one is a primer on how NVP works, and this one discusses the use of the Open vSwitch (OVS) vApp. As I mentioned before in other posts, OVS is popping up in more and more places—it might be a good idea to make sure you’re familiar with it.
  • This article by Ivan Pepelnjak on VXLAN termination on physical devices is over a year old, but still very applicable—especially considering Arista Networks recently announced their 7150S switch, which sports hardware VTEP (VXLAN Tunnel End Point) support (meaning that it can terminate VXLAN segments).
  • Brad Hedlund dives into Midokura Midonet in this post on L2-L4 network virtualization. It’s a good overview (thanks Brad!) and worth reading if you want to get up to speed on what Midokura is doing. (Oh, just as an aside: note that Midokura leverages OVS in their solution. Just saying…)
  • This blog post provides more useful information from Kamau Wanguhu on VXLAN and proxy ARP. Kamau also has an interesting post on network virtualization, although—to be honest—the post is long on messaging/positioning and short on technical information. I prefer the latter to the former.

Servers/Hardware

  • This mention of the Dell PowerEdge M I/O Aggregator looks interesting, although I’m still not real clear on exactly what it is or how it works. I guess this first article was a tease?

Security

Nothing this time around, but I’ll stay alert for items to include in future posts!

Cloud Computing/Cloud Management

  • Want to know a bit more about how to configure VXLAN inside VCD? Rawlinson Rivera has a nice write-up that is worth reviewing.
  • Clint Kitson, an EMC vSpecialist, talks about some VCD integrity scripts he created. Looks like some pretty cool stuff—great work, Clint!
  • For the past couple of weeks I’ve been (slowly) reading Kevin Jackson’s OpenStack Cloud Computing Cookbook; it’s very useful. It’s worth a read if you want to get up to speed on OpenStack; naturally, you can get it from Amazon.

Operating Systems/Applications

  • At the intersection of cloud-based storage and configuration management, I happened to find this very interesting Puppet module designed to fetch and update files from an S3 bucket. Through this module, you could store files in S3 instead of using Puppet's built-in file server. (By the way, this module works with OpenStack Swift as well.)
  • One of the things I’ve complained about regarding newer versions of OS X is the “hiding” of the Unix underpinnings. Perhaps I should read this book and see if my thinking is unfounded?

Storage

  • Chris Evans takes a look at Hyper-V 3.0's Virtual Fibre Channel feature in this write-up. From what I've read, it sounds like Hyper-V's NPIV implementation is more robust than VMware's broken and busted NPIV implementation. (If you don't know why I say that about VMware's implementation, ask anyone who's tried to use it.) The real question is this: is NPIV support in a hypervisor of any value any longer?
  • Gina Minks (formerly of Dell, now with Inktank) recommended I have a look at Ceph and mentioned this post on migrating to Ceph (with a little libvirt thrown in).
  • Gluster might be another project that I need to spend some time examining; this post on using Gluster with oVirt 3.1 looks interesting. Anyone have any pointers for a Gluster beginner?
  • Mirantis has a post about some Nova Volume integration with Isilon. I’ve often said that I think scale-out platforms like Isilon (among others) are an important foundation for future storage solutions. It’s cool to see some third-party development happening to integrate Isilon and OpenStack.

Virtualization

That’s all for this time around. As always, courteous comments are welcome (encouraged, in fact!), so feel free to speak up in the comments below. I’d love to hear your feedback.

Over the last few days, I’ve been unsuccessfully trying to get Open vSwitch (OVS) running on Red Hat Enterprise Linux (RHEL) 6.3 (as well as CentOS 6.3). The rationale behind this is that RHEL/CentOS are the sort of battle-tested, well-supported Linux distributions that enterprises will (and do) deploy in their data centers. OVS, of course, is central to the strategy and functionality of several network virtualization players and projects:

  • Nicira
  • Midokura (more information here)
  • OpenStack Quantum

If you follow me on Twitter, you’ve probably noticed me talking about trying to rebuild source RPMs, running into all sorts of dependency issues, encountering errors when compiling, patching files, etc. I still haven’t achieved success (yet), but I’m going to keep trying—for a little while longer, at least.

And this is what brings me to the central thought in this Thinking Out Loud post. I’m no expert—especially in this area, where I’m still learning more and more every day—but shouldn’t projects like OVS just work with enterprise-class Linux distributions? I mean, if you’re going to build network virtualization products that attempt to bring cloud computing-style functionality to enterprise networks, don’t you kind of need a core part of your solution to work on the Linux distributions that enterprises will actually deploy? Yes, it’s great that OVS ships with Fedora 17, and that it works (so I’ve been told, I haven’t actually tried it personally). But what organization is going to deploy Fedora 17 to help support an important function like network virtualization? They won’t. Doesn’t it make sense for the developers behind projects like OVS to really pay attention to the distributions that are most likely to be used by their current (and potential) customers?

It really boils down to this: if OVS is going to see adoption in the enterprise, wouldn’t it make sense to improve its support on enterprise-focused Linux distributions?

I'm new to a lot of this stuff, so I freely admit that I could just be missing some key piece of information here. So help me out—what am I missing? What am I overlooking? Speak up below—all courteous comments are welcome.

A few weeks ago I wrote about trying to use NetApp’s ONTAP Simulator as a VM under ESX Server so that I could do some testing with the new NAS and iSCSI functionality in ESX Server 3.0.  I finally got that working, but later had to shut it down when I started working with ESX Server 3.  As it turns out, the ProLiant 6400R servers I was using for my VMware lab (running ESX 2.5.x) were not supported for the final release of ESX 3 because the cpqarray.o driver was dropped from the final release, and the cpqarray.o driver is what supported the Smart Array 3200 and Smart Array 4250 RAID controllers in these two boxes.  No ESX 3 on these boxes.

Thankfully, I was able to locate an unused ProLiant ML350 G4p on which I could run ESX Server 3.  Instead of trying to use the ONTAP Simulator as a VM, then, I rebuilt one of the 6400R servers with Red Hat Linux 9.0 and installed the NetApp ONTAP Simulator directly from there.  After a fair amount of work, I finally had everything setup, and the simulated Filer was running with about 20GB of available storage to serve up via iSCSI, CIFS, NFS, or HTTP.

iSCSI was what I was really interested in, so the available storage from the ONTAP Simulator got carved into a couple of LUNS to be presented via iSCSI, along with the requisite initiator groups.  Once everything looked to be in place on the Simulator, I moved to the configuration of ESX Server.  Had I been lucky, I might have been trying to do this after Mike Laverick released his Service Console Guide, but I was a few days too early, and used the VI Client GUI instead to configure the VM Kernel NIC and the software iSCSI client.  The configuration looked correct, but there was no connectivity.

Not sure if the problem was ESX Server or the ONTAP Simulator, I mapped another LUN and created another initiator group and tested iSCSI connectivity from a Windows Server 2003 VM using Microsoft’s iSCSI initiator.  That worked perfectly—not even the first problem.  Clearly, the problem was not with the ONTAP Simulator, but with ESX Server instead.

Reviewing the configuration, I did find a couple of problems:

  • The iSCSI node name I had specified in the igroup configuration on the Simulator was incorrect, so I fixed that.  (Or thought I had; more on that in a moment.)
  • The iSCSI security configuration was incorrect, so I fixed that as well.

ESX Server should have been able to see the storage at that point, but there was still no connectivity.  Finally, I came across a blurb somewhere about the new firewall in ESX Server 3.0, and how it controlled not only inbound traffic, but also outbound traffic.  A quick trip to the service console and this command:

esxcfg-firewall -e swISCSIClient

You would expect that using the VI Client to enable the software iSCSI client would also enable outbound iSCSI traffic support on the ESX firewall, but it didn’t.  As soon as I entered that command, the Simulator’s console (where I was logged in) started showing iSCSI connection requests.  This, in turn, revealed another problem—ESX Server insisted on using its own iSCSI node name instead of the node name I had assigned.  That was easily and quickly corrected, and I was finally able to mount the iSCSI LUN and create a VMFS datastore.

Key points to remember:

  • Apparently, the iSCSI node name you specify in configuring ESX Server will be ignored, so don’t bother.  Just use whatever ESX is already configured with.
  • Be sure to either configure iSCSI from the command line (where outbound iSCSI traffic is allowed through the firewall automatically) or go back and allow outbound iSCSI traffic through the ESX firewall.

Having iSCSI-based storage eliminates one potential block to testing and demonstrating VMotion to customers—shared storage—but now I need to work on getting Gigabit Ethernet into the test lab as well.  Too bad there’s not a software workaround for that, too…

UPDATE:  I corrected the command-line above to reflect that the first “I” should be uppercase, not lowercase as was previously noted.  Thanks to Chauncey for catching the error!

In a recent article discussing Novell's leadership change, one analyst was quoted as saying the change would be positive for Novell because the company could stop building very high quality products and instead build products that are just good enough.  I don't know about you, but this spirit of mediocrity is exactly the wrong kind of thinking for IT vendors.

Specifically, the quote stated this:

“Ron Hovsepian appears to be an astute business person, one who will be able to quickly take stock of the environment and Novell’s position within that environment. This, I hope, will help Novell move from its current position of very slowly building extremely high quality products to quickly building and marketing products that are good enough to satisfy the market,” concluded Kusnetzky.

So what is he (Dan Kusnetzky) proposing, then?  It sounds to me as if Dan thinks IT software vendors should make their products just good enough to pass muster, instead of making them the best that they can be.

In my opinion, this spirit of mediocrity—this willingness to accept products that are knowingly released with imperfections and flaws because they are “good enough”—is exactly what brought the industry to where it is today.  This mediocrity is what brought SQL Slammer, Blaster, and Melissa.  This is the view that accepts that rebooting your computer a few times a day is just a part of life, and that our operating systems and applications shouldn’t be expected to be stable and reliable.  Just good enough?  When was the last time you recommended a product, service, or vendor because they were “just good enough”?  No, just good enough isn’t good enough.

Every major IT vendor out there—from HP, IBM, and Sun, to Apple, Microsoft, and Red Hat—should be held accountable for the quality of the products they release.  Hey, I understand that companies may make mistakes, and miss errors.  That’s understandable.  But any company that knowingly releases a product that’s “just good enough” when it could have been better is not a company we should be praising.  We should be supporting those companies that emphasize quality over “just good enough”.

Perhaps I’m overreacting.  Perhaps the analyst’s comments were merely directed at the speed with which Novell releases their products, and was instead trying to state that Novell needed to release competing products more quickly.  Even so, any vendor that values speed over quality is bound to get bitten sooner or later.  Microsoft got bitten, and changed their priorities (somewhat).  Apple will get bitten, too, if they start letting the quality of Mac OS X releases slide in favor of shorter development cycles.  The same goes for all the other vendors.

What about you?  I’d love to hear your comments on the matter.

UPDATE:  An updated version of these instructions has been posted.

The integration of (what was formerly called) Services for UNIX into Windows Server 2003 R2 also brought some other changes.  To accommodate those changes, I’ve updated my Linux-AD integration instructions (the previous instructions are here for pre-R2 versions of Windows).  If you need to integrate Linux systems for authentication into Active Directory with Windows Server 2003 R2, these instructions should get you there.

Overall, the instructions are very similar to the instructions for pre-R2 versions of Windows.

Preparing Active Directory (One-Time)

Based on what I’ve seen so far, it appears as if a partial RFC 2307-compliant schema is included by default with Windows Server 2003 R2.  This means that it is no longer necessary to extend the schema to include attributes such as uid, gid, login shell, etc.  However, while the schema does appear to be present by default, you must install the “Server for NIS” component on at least one domain controller in order to be able to actually set those attributes (and it will be necessary to set the attributes before logins from Linux will work).

You’ll also need to create an account in Active Directory that will be used to bind to Active Directory for LDAP queries.  This account does not need any special privileges; in fact, making the account a member of Domain Guests and not a member of Domain Users is perfectly fine.

Preparing Active Directory (Each User)

Each Active Directory account that will authenticate via Linux must be configured with a UID and other UNIX attributes.  This is accomplished via the new “UNIX Attributes” tab on the properties dialog box of a user account.  Installing the “Server for NIS” component enables this, as mentioned previously.

After all the user accounts have been configured, then we are ready to perform the additional tasks within Active Directory and on the Linux server that will enable the authentication.

Preparing Active Directory (Each Server)

For each Linux-based server that will be authenticating against Active Directory, follow the steps below.

  1. Create a computer account in Active Directory.  When creating the computer account, be sure to specify that this account may be used by a pre-Windows 2000–based computer.
  2. Use the following command at a command prompt to configure the new computer account:
    ktpass -princ host/fqdn@REALM -mapuser DOMAIN\name$
    -crypto DES-CBC-MD5 -pass password -ptype KRB5_NT_PRINCIPAL
    -out filename

    Of course, you’ll need to substitute the appropriate values for “fqdn” (the fully-qualified domain name of the computer), “REALM” (the DNS name of your Active Directory domain in UPPERCASE), “DOMAIN” (the NetBIOS name of your Active Directory domain), “password” (the password that will be set for the new computer account), and “filename” (the keytab that will be generated and must be copied over to the Linux computer).

If you need to rebuild the Linux server for whatever reason, you’ll need to delete the computer account you created and repeat this process.

Preparing Each Linux Server

Follow the steps below to configure the Linux server for authentication against Active Directory.

  1. Make sure that the appropriate Kerberos libraries, OpenLDAP, pam_krb5, and nss_ldap are installed.  If they are not installed, install them.
  2. Be sure that time is being properly synchronized between Active Directory and the Linux server in question.  Kerberos requires time synchronization.
  3. Edit the krb5.conf file to look something like this, substituting your actual host names and domain names where appropriate:
    [logging]
     default = FILE:/var/log/krb5libs.log
     kdc = FILE:/var/log/krb5kdc.log
     admin_server = FILE:/var/log/kadmind.log
    
    [libdefaults]
     default_realm = EXAMPLE.COM
     dns_lookup_realm = true
     dns_lookup_kdc = true
    
    [realms]
     EXAMPLE.COM = {
      kdc = host.example.com:88
      admin_server = host.example.com:749
      default_domain = example.com
     }
    
    [domain_realm]
     .example.com = EXAMPLE.COM
     example.com = EXAMPLE.COM
    
    [kdc]
     profile = /var/kerberos/krb5kdc/kdc.conf
    
    [appdefaults]
     pam = {
       debug = false
       ticket_lifetime = 36000
       renew_lifetime = 36000
       forwardable = true
       krb4_convert = false
     }
  4. Edit the /etc/ldap.conf file to look something like this, substituting the appropriate host names, domain names, account names, and distinguished names (DNs):
    host 10.10.10.10
    base dc=example,dc=com
    uri ldap://server.example.com/
    binddn [email protected]
    bindpw adldapbindpw
    scope sub
    ssl no
    nss_base_passwd dc=example,dc=com?sub
    nss_base_shadow dc=example,dc=com?sub
    nss_base_group dc=example,dc=com?sub
    nss_map_objectclass posixAccount user
    nss_map_objectclass shadowAccount user
    nss_map_objectclass posixGroup group
    nss_map_attribute gecos name
    nss_map_attribute homeDirectory unixHomeDirectory
    nss_map_attribute uniqueMember member
  5. Securely copy the file created using the ktpass.exe utility above to the Linux server in question, placing it in the /etc directory as krb5.keytab.  (SFTP or SCP are excellent candidates for this.)
  6. Configure PAM (this varies according to Linux distributions) to use pam_krb5 for authentication.  Many modern distributions use a stacking mechanism whereby one file can be modified and those changes will be applied to all the various PAM-aware services.  For example, in Red Hat-based distributions, the system-auth file is referenced by most other PAM-aware services.
  7. Edit the /etc/nsswitch.conf file to include "ldap" as a lookup source for passwd, shadow, and group.  (A rough sketch of these last two steps follows this list.)
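The excerpts below sketch one common arrangement of those last two steps on a Red Hat-style system; exact module options vary by distribution and policy, so treat this as illustrative rather than definitive.

    # /etc/pam.d/system-auth (excerpt) -- pam_krb5 added to the existing stack
    auth        sufficient    pam_krb5.so use_first_pass
    account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
    password    sufficient    pam_krb5.so use_authtok
    session     optional      pam_krb5.so

    # /etc/nsswitch.conf (excerpt) -- ldap added as a lookup source
    passwd:     files ldap
    shadow:     files ldap
    group:      files ldap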

That should be it.  Once you do that, you should be able to use kinit from a Linux shell prompt (for example, “kinit aduser”) and generate a valid Kerberos ticket for the specified Active Directory account.

At this point, any PAM-aware service that is configured to use the stacked system file (such as the system-auth configuration on Red Hat-based distributions) will use Active Directory for authentication.  Note, however, that unless you also add the pam_mkhomedir.so module in the PAM configuration, home directories will have to be created manually for any Active Directory account that may log on to that server.  (I generally recommend the use of pam_mkhomedir.so in this situation.)
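If you do add pam_mkhomedir.so, a single line in the session stack along these lines (again, a sketch—adjust to your own policy) is typically all that's needed:

    session     optional      pam_mkhomedir.so skel=/etc/skel/ umask=0077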

This configuration was tested on Red Hat Linux 9.0 as well as CentOS 4.3.
