
The question of VMware’s future in the face of increasing competition is not a new one; it’s been batted around by quite a few folks. So Steven J. Vaughan-Nichols’ article “Does VMware Have a Real Future?” doesn’t really open any new doors or expose any new secrets that haven’t already been discussed elsewhere. What it does do, in my opinion, is show that the broader market hasn’t yet fully digested VMware’s long-term strategy.

Before I continue, though, I must provide disclosure: what I’m writing is my interpretation of VMware’s strategy. Paul M. hasn’t come down and given me the low-down on their plans, so I can only speculate based on my understanding of their products and their sales strategy.

Mr. Vaughan-Nichols’ article focuses on what has been, to date, VMware’s most successful technology and product: the hypervisor. Based on what I know and what I’ve seen in my own experience, VMware created the virtualization market with their products and cemented their leadership in that market with VMware Infrastructure 3 and, later, vSphere 4 and vSphere 5. Their hypervisor is powerful, robust, scalable, and feature-rich. Yet, the hypervisor is only one part of VMware’s overall strategy.

If you go back and look at the presentations that VMware has given at VMworld over the last few years, you’ll see VMware focusing on what many of us refer to as the “three layer cake”:

  1. Infrastructure
  2. Applications (or platforms)
  3. End-user computing

If you like to think in terms of *aaS, you might think of the first one as Infrastructure as a Service (IaaS) and the second one as Platform as a Service (PaaS) or Software as a Service (SaaS). Sorry, I don’t have a *aaS acronym for the third one.

I believe that VMware knows that relying on the hypervisor as its sole differentiating factor will come to an end. We can debate how quickly that will happen or which competitors will be most effective in making that happen, but those issues are beside the point. This is not to say that VMware is ceding the infrastructure/IaaS market, but instead recognizing that it cannot be all that VMware is. VMware must be more.

What is that “more”? I’m glad you asked.

Let’s look back at the forces that drove VMware’s hypervisor into power. We had servers with more capacity than the operating system (OS)/application stack could effectively leverage, leaving us with lots of servers that were lightly utilized. We had software stacks that drove us to a “one OS/one application” model, again leading to lots of servers that were lightly utilized. Along comes VMware with ESX (and later ESXi) and the ability to fix that problem, and—this is a key point—fix it without sacrificing compatibility. That is, you could continue to deploy your OS instances and your application stacks in much the same way. No application rewrite needed. That was incredibly powerful, and the market responded accordingly.

This compatibility-centered approach is both powerful and limiting. Yes, you can maintain the status quo, but the problem is that you're maintaining the status quo. Things aren't really changing. You're still bound by the same limitations as before. You can't really take advantage of the new functionality the hypervisor has introduced.

Hence, applications need to be rewritten. If you want to really take advantage of virtualization, you need a—gasp!—platform designed to exploit virtualization and the hypervisor. This explains VMware’s drive into the application development space with vFabric (Spring, GemFire, SQLFire, RabbitMQ). These tools give them the platform upon which a new generation of applications can be built. (And I haven’t even yet touched on CloudFoundry.) This new generation of applications will assume the presence of a hypervisor, and be able to exploit the functionality provided by it. However, if this new generation of applications is still bound by the old ways of accessing applications, its effectiveness will be constrained.

Hence, end users need new ways to access these applications, and organizations need new ways to deliver applications to end users. This explains VMware’s third layer in the “three layer cake”: end-user computing. Reshaping applications to embrace new form factors (tablets, smartphones) means re-architecting your applications. If you’re going to re-architect your applications, you might as well build them using a new platform and set of tools that lets you exploit the ever-ubiquitous presence of a hypervisor. Starting to see the picture now?

If you look at VMware only from the perspective of the hypervisor, then VMware’s future viability is indeed open to question. I’ll grant that. Take a broader look, though—look at VMware’s total vision—and I think you’ll see a different picture. That’s why—assuming VMware can execute on this vision—I think that the answer to Mr. Vaughan-Nichols’ question, “Does VMware have a real future?”, is yes. VMware might not continue to reign as an undisputed market leader, but I do think their long-term viability isn’t in question (again, assuming they can execute on their vision).

Feel free to share your thoughts in the comments. Do you think VMware has a future? What should they do (or not do) to ensure future success? Or is their fall a foregone conclusion? I’d love to hear your thoughts. I only ask for disclosure of vendor affiliations, where applicable. (For example, although I work for EMC and EMC has a financial relationship with VMware, I speak only for myself.)


I recently had the opportunity to work on a proof of concept (PoC) in which we wanted to help a customer streamline the processes needed to deploy new hosts and reduce the amount of time it took overall. One of the tools we used in the PoC for this purpose was PXE booting VMware ESX for an automated installation. Here are the details on how we made this work.

Before I get into the details, I’ll provide this disclaimer: there are probably easier ways of making this work. I specifically didn’t use UDA or similar because I wanted to gain the experience of how to do this the “old fashioned” way. I also wanted to be able to walk the customer through the “old fashioned” way and explain all the various components.

With that in mind, here are the components you’ll need to make this work:

  1. You’ll need a DHCP server to pass down the PXE boot information. In this particular instance, I used an existing Windows-based DHCP server. Any DHCP server should work; feel free to use the Linux ISC DHCP server if you prefer.
  2. You’ll need an FTP server to host the kickstart script and VMware ESX 4.0 Update 1 installation files. In this case, I used a third-party FTP server running on the same Windows-based server as DHCP. Again, feel free to use a Linux-based FTP server if you prefer.
  3. You will need a TFTP server to provide the boot files. The third-party FTP server used in the previous step also provided TFTP functionality. Use whatever TFTP server you prefer.

Make sure that each of these components is working as expected before proceeding. Otherwise, you’ll spend time troubleshooting problems that aren’t immediately apparent.
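A quick way to sanity-check the FTP and TFTP pieces from a Linux or Mac client might look like the following sketch (the server address and paths are placeholders for whatever you’ve set up in your environment):

```
# List the installation files via anonymous FTP (address/path are illustrative)
curl ftp://192.168.1.10/esx40u1/

# Fetch a file from the TFTP server to confirm it's serving files
# (syntax shown is for the common tftp-hpa client)
tftp 192.168.1.10 -c get pxelinux.0
```

If either command fails, fix that component before moving on—PXE boot failures are much harder to diagnose once all the pieces are chained together.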

Preparing for the Automated ESX Installation

First, copy the contents for the VMware ESX 4.0 Update 1 DVD—not the actual ISO, but the contents of the ISO—to a directory on the FTP server. Test it to make sure that the files can be accessed via an anonymous FTP user.

Also go ahead and create a simple kickstart script that automates the installation of VMware ESX. I won’t bother to go into detail on this step here; it’s been quite adequately documented elsewhere. You’ll need to put this kickstart script on the FTP server as well.
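For reference, a minimal kickstart sketch might look something like this. Treat it as hypothetical—the root password, IP details, and FTP path are all placeholders, and you should consult the ESX documentation for the exact directives supported by your build:

```
# Hypothetical minimal kickstart for ESX 4.0 (values are placeholders)
accepteula
rootpw mypassword
install url ftp://A.B.C.D/esx40u1
network --bootproto=static --ip=192.168.1.51 --netmask=255.255.255.0 --gateway=192.168.1.1
reboot
```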

At this point, you’re ready to proceed with gathering the PXE boot files.

Gathering the PXE Boot Files

The first task you’ll need to complete is gathering the necessary files for a PXE boot environment.

First, copy the vmlinuz and initrd.img files from the VMware ESX 4.0 Update 1 ISO image. Since I use a Mac, for me this was a simple case of mounting the ISO image and copying out the files I needed; for Linux or Windows users, it might be a bit more complicated. These files, by the way, are in the ISOLINUX folder on the DVD image.

Next, you’ll need the PXE boot files. Specifically, you’ll need the menu.c32 and pxelinux.0 files. These files are not on the DVD ISO image; you’ll have to download Syslinux from this web site. Once you download Syslinux, extract the files into a temporary directory. You’ll find menu.c32 in the com32/menu folder; you’ll find pxelinux.0 in the core folder. Copy both of these files, along with vmlinuz and initrd.img, into the root directory of the TFTP server. (If you don’t know the root directory of the TFTP server, double-check its configuration.)
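For Linux users, a sketch of staging all four files might look like this (the mount point, ISO filename, Syslinux extraction directory, and /tftpboot root are all illustrative—substitute your own paths):

```
# Mount the ESX 4.0 Update 1 ISO (filename and mount point are illustrative)
mount -o loop esx-4.0-u1.iso /mnt/iso

# Copy the kernel and initial ramdisk from the ISOLINUX folder
cp /mnt/iso/isolinux/vmlinuz /tftpboot/
cp /mnt/iso/isolinux/initrd.img /tftpboot/

# Copy the Syslinux boot files from the extracted archive
cp syslinux/core/pxelinux.0 /tftpboot/
cp syslinux/com32/menu/menu.c32 /tftpboot/
```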

You’re now ready to configure the PXE boot process.

Configuring the PXE Boot Environment

Once the necessary files have been placed into the root directory of the TFTP server, you’re ready to configure the PXE boot environment. To do this, you’ll need to create a PXE configuration file on the TFTP server.

The file should be placed into a folder named pxelinux.cfg under the root of the TFTP server. The PXE configuration file should be named something like this:

01-<MAC address of network interface on host>

If the MAC address of the host was 01:02:03:04:05:06, the name of the text file in the pxelinux.cfg folder on the TFTP server would be:

01-01-02-03-04-05-06

The PoC in which I was engaged involved Cisco UCS, so we knew in advance what the MAC addresses were going to be (the MAC address is assigned in the UCS service profile).
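Given a known MAC address, the expected filename can be derived with a quick shell one-liner—pxelinux looks for the ARP hardware type (01 for Ethernet) followed by the MAC in lowercase with dashes:

```shell
# MAC address assigned by the UCS service profile (example value)
mac="01:02:03:04:05:06"

# pxelinux expects: <ARP type>-<MAC, lowercase, dash-separated>
# The leading "01-" is the ARP hardware type for Ethernet
filename="01-$(echo "$mac" | tr ':' '-' | tr '[:upper:]' '[:lower:]')"
echo "$filename"   # prints 01-01-02-03-04-05-06
```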

The contents of this file should look something like this (lines have been wrapped here for readability and are marked by backslashes; don’t insert any line breaks in the actual file):

default menu.c32
menu title Custom PXE Boot Menu Title
timeout 30
label scripted
menu label Scripted installation
kernel vmlinuz
append initrd=initrd.img mem=512M ksdevice=vmnic0 \
ks=ftp://A.B.C.D/ks.cfg

You’ll want to replace ftp://A.B.C.D/ks.cfg with the correct IP address and path for the kickstart script on the FTP server.

Only one step remains: configuring the DHCP server.

Configuring the DHCP Server for PXE Boot

As I mentioned earlier, I used the Windows DHCP server as a matter of ease and convenience; feel free to use whatever DHCP server best suits your needs. There are only two options that are necessary for PXE boot:

066 Boot Server Host Name (specify the IP address of the TFTP server)
067 Bootfile Name (specify pxelinux.0)

In this particular example, I created reservations for each MAC address. Because the values were the same for all reservations, I used server-wide DHCP options, but you could use reservation-specific DHCP options if you wanted different boot options on a per-MAC address (i.e., per-reservation) basis.
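For those preferring the Linux ISC DHCP server mentioned earlier, a hypothetical dhcpd.conf fragment covering the same two options plus a per-MAC reservation might look like this (all addresses are illustrative):

```
# dhcpd.conf excerpt (addresses are illustrative)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;    # equivalent of option 066 (TFTP server)
  filename "pxelinux.0";       # equivalent of option 067 (bootfile name)

  # Reservation matching the MAC assigned by the UCS service profile
  host esx01 {
    hardware ethernet 01:02:03:04:05:06;
    fixed-address 192.168.1.51;
  }
}
```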

The End Result

Recall that this PoC was using Cisco UCS blades. Thus, in this environment, to prepare for a new host coming online we only had to make sure that we had a PXE configuration file and create a matching DHCP reservation. The MAC address would get assigned via the service profile, and when the blade booted then it would automatically proceed with an unattended installation. Combined with Host Profiles in VMware vCenter, this took the process of bringing new ESX/ESXi hosts online down to mere minutes. A definite win for any customer!


Earlier today, I had to reset the root password on a lab server running VMware ESX 4.0 Update 1. For some reason, the password we assigned yesterday when we built the server from scratch wasn’t working this morning. OK, no big deal, right? Just reboot the server into single user mode and away you go. I won’t bother to repeat the steps for getting into single user mode; go to this article and it will give you what you need (the article is written for ESX 3.5 but it works fine for ESX 4.0).

Because this is a lab environment we just wanted to assign a simple password that anyone on the team could easily remember. (I’m sure the security purists out there are screaming right now.) Unfortunately, once I had the ESX host booted into single user mode, the passwd command insisted on making me use a complex password. There didn’t seem to be any simple way around the restriction.

However, having spent a fair amount of time with PAM (Pluggable Authentication Modules) during my Linux-AD integration experiments, I figured there was a way around it by modifying the PAM configuration. Sure enough, the /etc/pam.d/system-auth-generic file contained a reference to pam_passwdqc.so, the library that is responsible for enforcing complex passwords. The fix, therefore, was to somehow remove pam_passwdqc.so from the PAM configuration so that I could assign a simple password.

The first thing I tried was simply commenting out the line for the module, but the passwd command then failed to work, reporting an error that the authentication token could not be obtained. Strike 1! Next, I left the module commented out and tried changing the next line to required instead of sufficient. Same error: strike 2!

Finally, I simply replaced the line with a reference to pam_permit.so (after making a backup copy of the original /etc/pam.d/system-auth-generic file, of course—it never hurts to be prepared). Success! I was able to assign a simple password to the lab server.
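For illustration, a hypothetical sketch of the password stack in /etc/pam.d/system-auth-generic before and after the change—the module arguments shown here are made up, and the actual file on your ESX build will differ:

```
# Before (excerpt; module arguments are illustrative)
password   requisite    pam_passwdqc.so min=8,8,8,7,6
password   sufficient   pam_unix.so use_authtok nullok shadow

# After -- the complexity-checking module replaced with pam_permit.so,
# which always succeeds, so passwd accepts a simple password
password   requisite    pam_permit.so
password   sufficient   pam_unix.so use_authtok nullok shadow
```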

After putting the original /etc/pam.d/system-auth-generic back in place and rebooting the host, we were back in action! So, what was the lesson learned? You can’t stop someone who’s determined to get around security requirements! No, I’m just kidding…there is no lesson learned. I just thought someone might find this information useful or interesting. Enjoy!


Welcome to Virtualization Short Take #30, my irregularly posted collection of links and thoughts on virtualization. I hope you find something useful here!

  • I believe Jason Boche already mentioned this on his own blog (I couldn’t find a link) and also started this VMware Communities thread discussing the fact that the 8/6 patch breaks FT compatibility between ESX and ESXi hosts in the same cluster. This VMware KB article is now available with more information on the problem. What I’m hearing from VMware is that there is no short-term solution; the workaround is to use only ESX or only ESXi within a single cluster. (I don’t recommend leaving the hosts unpatched until the problem is fixed.)
  • And while we’re talking VMware FT, here’s a good document on VMware FT architecture and performance. (Eric Siebert’s Virtualization Pro blog post about VMware FT is really good, too.)
  • I’m also hearing reports that there are problems mixing ESX and ESXi in the same cluster when using host profiles. Theoretically, you should be able to use an ESX reference host and apply that to ESXi hosts, but in reality it’s not working so well.
  • If you’re using AppSpeed, you’ll need to manually turn off the AppSpeed sensor VMs in order to put ESX/ESXi hosts into Maintenance Mode. The sensor VM won’t VMotion off the host, so this prevents the host from entering Maintenance Mode.
  • Here’s another topic that I think has been mentioned elsewhere (looks like Duncan mentions it here), but SRM 1.0 Update 1 Patch 4 was released a couple of weeks ago and it includes a fix for customizing the IP addresses of Windows Server 2008 guest operating system instances.
  • Toward the end of August, VMware Infrastructure 3 support was added for NetApp MetroCluster (see this VMware KB article). Now, how about some VMware vSphere 4 support?
  • Most of you are aware by now (and if you aren’t aware, go buy a copy of my book so you will be aware) that you can use Storage VMotion to change virtual disks from thin provisioned to thick provisioned. The problem is this: the type of thick provisioned disk created when you do this via Storage VMotion is eagerzeroedthick, not zeroedthick. This means that it is not friendly to storage array thin provisioning!
  • I’m still looking for a valid use case for this little trick, but it’s mentioned by both Duncan and Eric: the ability to present multiple cores per socket to a virtual machine. Duncan’s post is here; Eric’s post is here. As Eric points out, licensing is one potential use. Anyone have any other valid use cases?
  • Eric Sloof has a great post on dvSwitch caveats and best practices that is definitely worth reading.
  • Want to make linked clones work on vSphere? Tom Howarth points out in this post some information made available by William Lam. Both articles are worth a look.
  • Tom also posted some useful information on enabling firewall logging on VMware ESX hosts.
  • This post over on Aaron Sweemer’s blog was actually written by guest author John Blessing (aka @vTrooper on Twitter) and just goes to illustrate how difficult it can be to create a chargeback model.
  • Of course, the “Super iSCSI Friends” recently produced a multi-vendor post on using iSCSI with VMware vSphere, a great follow-up to the original multi-vendor VI3 post. Here’s Chad’s version of the multi-vendor vSphere and iSCSI post.

That wraps it up for this time around. Thanks for reading, and feel free to submit any other useful or interesting links in the comments below.


Last week’s partner boot camp for the Cisco Unified Computing System (UCS) was very helpful. It has really helped me gain a better understanding of the solution, how it works, and its advantages and disadvantages. I’d like to share some random bits of information I gathered during the class here in the hopes that it will serve as a useful add-on to the formal training. I’m sorry the thoughts aren’t better organized.

  • Although the UCS 6100 fabric interconnects are based on Nexus 5000 technologies, they are not the same. It would be best for you not to compare the two, or you’ll find yourself getting confused (I did, at least) because there are some things the Nexus 5000 will do that the fabric interconnects won’t do. Granted, some of these differences are the result of design decisions around the UCS, but they are differences nonetheless.
  • You’ll see the terms “northbound” and “southbound” used extensively throughout UCS documentation. Northbound traffic is traffic headed out of the UCS (out of the UCS 6100 fabric interconnects) to external Ethernet and Fibre Channel networks. Southbound traffic is traffic headed into the UCS (from the UCS 6100 fabric interconnects down to the I/O modules in the chassis). You may also see references to “east-to-west” traffic; this is traffic moving laterally from chassis to chassis within a UCS.
  • For a couple of different reasons (reasons I will expand upon in future posts), there is no northbound FCoE or FC connectivity out of the UCS 6100 fabric interconnects. This means that you cannot hook your storage directly into the UCS 6100 fabric interconnects. This, in turn, means that purchasing a UCS alone is not a complete solution—customers need supporting infrastructure in order to install a UCS. That supporting infrastructure would include a Fibre Channel fabric and 10Gbps Ethernet ports.
  • Continuing the previous thought, this means that—with regard to UCS, at least—my previous assertion that there is no such thing as an end-to-end FCoE solution is true. (Read my correction article and you’ll see that I qualified the presence of end-to-end FCoE solutions as solutions that did not include UCS.)
  • The I/O Modules (IOMs) in the back of each chassis are fabric extenders, not switches. This is analogous to the Nexus 5000-Nexus 2000 relationship. (Again, be careful about the comparisons, though.) You’ll see the IOMs occasionally referred to as fabric extenders, or FEXs. As a result, there is no switching functionality in each chassis—all switching takes place within the UCS 6100 fabric interconnects. Some of the implications of this architecture include:
    1. All east-to-west traffic must travel through the fabric interconnects, even for east-to-west traffic between two blades in the same chassis.
    2. When you use the Cisco “Palo” adapter and start creating multiple virtual NICs and/or virtual HBAs, the requirement for all east-to-west traffic applies to each individual vNIC. This means that east-to-west traffic between individual vNIC instances on the same blade must also travel through the fabric interconnects.
    3. This means that in ESX/ESXi environments using hypervisor bypass (VMDirectPath) with Cisco’s “Palo” adapter, inter-VM traffic between VMs on the same host must travel through the fabric interconnects. (This is not true if you are using a software switch, including the Nexus 1000V, but rather only when using hypervisor bypass.)
  • Each IOM can connect to a single fabric interconnect only. You cannot uplink a single IOM to both fabric interconnects. For full redundancy, then, you must have both fabric interconnects and both IOMs in each and every chassis.
  • Each 10Gbps port on a blade connects to a single IOM. To use both ports on a mezzanine adapter, you must have both IOMs in the chassis; to have both IOMs in the chassis, you must have both fabric interconnects. This makes the initial cost much higher (because you have to buy everything), but incremental cost much lower.
  • If you want to use FCoE, you must purchase the Cisco “Menlo” adapter. This will provide both a virtual NIC (vNIC) and a virtual HBA (vHBA) for each IOM populated in the chassis (i.e., populate the chassis with a single IOM and you get one vNIC and one vHBA, use two IOMs and get two vNICs and two vHBAs).
  • If you use the Cisco “Oplin” adapter, you’ll get 10Gbps Ethernet only. There is no FCoE support; you would have to use a software-based FCoE stack.
  • The Cisco “Palo” adapter offers the ability to use SR-IOV to present multiple, discrete instances of vNICs and vHBAs. The number of instances is based on the number of uplinks from the IOMs to the fabric interconnects. The formula for calculating this number is 15 * (IOM uplinks) – 2. So, for two uplinks, you could create a total of 28 vNICs or vHBAs (any combination of the two, not 28 each).
  • Blades within a UCS are designed to be completely stateless; the full identity of the system can be assigned dynamically using a service profile. However, to take full advantage of this statelessness, organizations will also have to use boot-from-SAN. This further echoes the need for organizations to dramatically re-architect in order to really exploit the value of UCS.
  • There are Linux kernels embedded everywhere: in the blades’ firmware, in the firmware of the IOMs, in the chassis, and in the fabric interconnects. On the blades, this embedded Linux version is referred to as pnuOS. (At the moment, I can’t recall what it stands for. Sorry.)
  • In order to reconfigure a blade, the UCS Manager boots into pnuOS, reconfigures the blade, and then boots “normally.” While this is kind of cool, it also makes the reconfiguration of a blade take a lot longer than I expected. Frankly, I was a bit disappointed at the time it took to associate or de-associate a service profile to a blade.
  • To monitor the status of a service profile association or de-association, you’ll use the FSM (Finite State Machine) tab within UCS Manager.
  • You’ll need a separate block of IP addresses, presumably on a separate VLAN, for each blade. These addresses are the management addresses for the blades. Cisco folks won’t like this analogy, but consider these the equivalent of Enclosure Bay IP Addressing (EBIPA) in the HP c7000 environment.
  • The UCS Manager software is written in Java. Need I say anything further?
  • UCS Manager uses the idea of a “service profile” to control the entire identity of the server. However, admins must be careful when creating and associating service profiles. A service profile that has two vNICs assigned would require a blade in a chassis with two IOMs connected to two fabric interconnects, and that service profile would fail to associate to a blade in a chassis with only a single IOM. Similarly, a service profile that defines both vNICs and vHBAs (assuming the presence of the “Menlo” or “Palo” adapters) would fail to associate to a blade with an “Oplin” adapter because the “Oplin” adapter doesn’t provide vHBA functionality. The onus is upon the administrator to ensure that the service profile is properly configured for the hardware. Once again, I was disappointed that the system was not more resilient in this regard.
  • Each service profile can be associated to exactly one blade, and each blade may be associated to exactly one service profile. To apply the same type of configuration to multiple blades, you would have to use a service profile template to create multiple, identical service profiles. However, a change to one of those service profiles will not affect any of the other service profiles cloned from the same template.
  • UCS Manager does offer role-based access control (RBAC), which means that different groups within the organization can be assigned different roles: the network group can manage networking, the storage group can manage the SAN aspects, and the server admins can manage the servers. This effectively addresses the concerns of some opponents that UCS places the network team in control.
  • While UCS supports some operating systems on the bare metal, it really was designed with virtualization in mind. ESX 4.0.0 (supposedly) installs out of the box, although I have yet to actually try that myself. The “Palo” adapter is built for VMDirectPath; in fact, Cisco makes a big deal about hypervisor bypass (that’s a topic I’ll address in a future post). With that in mind, some of the drawbacks—such as how long it takes to associate or de-associate a blade—become a little less pertinent.
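The “Palo” vNIC/vHBA formula mentioned above—15 * (IOM uplinks) - 2—works out as follows (the one- and four-uplink counts are simply the formula applied to other uplink configurations):

```shell
# Total vNIC/vHBA instances available per the formula 15 * (IOM uplinks) - 2
for uplinks in 1 2 4; do
  echo "$uplinks uplink(s): $((15 * uplinks - 2)) total vNICs/vHBAs"
done
# prints:
# 1 uplink(s): 13 total vNICs/vHBAs
# 2 uplink(s): 28 total vNICs/vHBAs
# 4 uplink(s): 58 total vNICs/vHBAs
```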

I guess that about does it for now. I’ll update this post with more information as I recall/remember it over the next few days. I also encourage other readers who have attended similar UCS events to share any additional points in the comments below.


VMware has completed Microsoft Server Virtualization Validation Program (SVVP) certifications for both VMware vSphere 4.0 and ESX/ESXi 3.5 Update 4. The list of SVVP-certified products now includes:

  • VMware vSphere 4.0 (ESX 4.0 and ESXi 4.0)
  • VMware ESX 3.5 Update 4
  • VMware ESXi 3.5 Update 4
  • VMware ESX 3.5 Update 3
  • VMware ESXi 3.5 Update 3
  • VMware ESX 3.5 Update 2

According to my contacts within VMware—and many of you have probably heard the same—the company is seeking to achieve SVVP certification for every ESX and ESXi release from 3.5 Update 2 onward for the maximum supported configuration of both CPUs and RAM on both Intel and AMD platforms. That includes both 32-bit and 64-bit versions of Windows Server.

You can view the full list of SVVP-certified platforms here.


A lot of the content on this site is oriented toward VMware ESX/ESXi users who have a pretty fair amount of experience. As I was working with some customers today, though, I realized that there really isn’t much content on this site for new users. That’s about to change. As the first in a series of posts, here’s some new user information on creating vSwitches and port groups in VMware ESX using the command-line interface (CLI).

For new users who are seeking a thorough explanation of how VMware ESX networking functions, I’ll recommend a series of articles by Ken Cline titled The Great vSwitch Debate. Ken goes into a great level of detail. Go read that, then you can come back here.

Before I get started it’s important to understand that, for the most part, the information in this article applies only to VMware ESX. VMware ESXi doesn’t have a Linux-based Service Console like VMware ESX, and therefore doesn’t have a readily-accessible CLI from which to run these sorts of commands. There is a remote CLI available, which I’ll discuss in a future post, but for now I’ll focus only on VMware ESX.

The majority of the networking configuration you will need to perform on VMware ESX boils down to just a couple of commands:

  • esxcfg-vswitch: You will use this command to manipulate virtual switches (vSwitches) and port groups.
  • esxcfg-nics: You will use this command to view (and potentially manipulate) the physical network interface cards (NICs) in the VMware ESX host.

Configuring VMware ESX networking boils down to a couple of basic tasks:

  1. Creating, configuring, and deleting vSwitches
  2. Creating, configuring, and deleting port groups

I’ll start with creating, configuring, and deleting vSwitches.

Creating, Configuring, and Deleting vSwitches

You’ll use the esxcfg-vswitch command for the majority of these tasks. Unless I specifically indicate otherwise, all the commands, parameters, and arguments are case-sensitive.

To create a vSwitch, use this command:

esxcfg-vswitch -a <vSwitch Name>

To link a physical NIC to a vSwitch—which is necessary in order for the vSwitch to pass traffic onto the physical network or to receive traffic from the physical network—use this command:

esxcfg-vswitch -L <Physical NIC> <vSwitch Name>

In the event you don’t have information on the physical NICs, you can use this command to list the physical NICs:

esxcfg-nics -l (lowercase L)

Conversely, if you need to unlink (remove) a physical NIC from a vSwitch, use this command:

esxcfg-vswitch -U <Physical NIC> <vSwitch Name>

To change the Maximum Transmission Unit (MTU) size on a vSwitch, use this command:

esxcfg-vswitch -m <MTU size> <vSwitch Name>

To delete a vSwitch, use this command:

esxcfg-vswitch -d <vSwitch Name>

Creating, Configuring, and Deleting Port Groups

As with virtual switches, esxcfg-vswitch is the command you will use to work with port groups. Once again, unless I specifically indicate otherwise, all the commands, parameters, and arguments are case-sensitive.

To create a port group, use this command:

esxcfg-vswitch -A <Port Group Name> <vSwitch Name>

To set the VLAN ID for a port group, use this command:

esxcfg-vswitch -v <VLAN ID> -p <Port Group Name> <vSwitch Name>

To delete a port group, use this command:

esxcfg-vswitch -D <Port Group Name> <vSwitch Name>

To view the current list of vSwitches, port groups, and uplinks, use this command:

esxcfg-vswitch -l (lowercase L)
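Putting these commands together, here’s a sketch of a typical end-to-end sequence—creating a vSwitch, linking an uplink, and adding a VLAN-tagged port group. The vSwitch name, NIC, port group name, and VLAN ID are all examples; substitute values appropriate to your environment. These commands run in the Service Console on the ESX host:

```
# Create a new vSwitch named vSwitch1
esxcfg-vswitch -a vSwitch1

# Link physical NIC vmnic1 to the new vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1

# Create a port group named VMTraffic on the vSwitch
esxcfg-vswitch -A VMTraffic vSwitch1

# Tag the VMTraffic port group with VLAN ID 100
esxcfg-vswitch -v 100 -p VMTraffic vSwitch1

# Verify the resulting configuration
esxcfg-vswitch -l
```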

There are more networking-related tasks that you can perform from the CLI, but for a new user these commands should handle the lion’s share of all the networking configuration. Good luck!


It would appear (I have not yet been able to reliably verify this information) that VMware has removed support for the Adaptec aacraid driver from VMware vSphere. The driver was apparently listed on the Hardware Compatibility List (HCL) as supported at release, but shortly after release support was pulled. Apparently the aacraid driver is in such bad shape with vSphere that you can’t even install vSphere on systems using the aacraid driver. Even more mysterious is the fact that there is no documentation about this issue one way or the other, so it’s quite difficult to really verify where support stands. As of the last time I checked (a couple of days ago), the aacraid driver was still not listed on the HCL.

This primarily impacts white-box owners; users of systems from major vendors like HP, IBM, and Dell will most likely not be affected.

If anyone has any additional information on this matter, please speak up in the comments.


Over the next couple of weeks, I’ll start posting a multi-part review of VMware vSphere 4.

Because this is such a large product with so many different features, I’ve decided to break the review into several parts. As the different parts go live on the site, the list below will be modified to link to the appropriate part of the review:

  1. Overview and installation
  2. Networking and storage
  3. High availability and business continuity
  4. Management and operations
  5. Summary and wrap-up

If there is anything in particular you’d like to see covered in the review, please note your interests by posting a comment to this article. I’ll do my best to accommodate all requests, but I can’t make any guarantees.


In April 2008, I wrote an article on how to use jumbo frames with VMware ESX and IP-based storage (NFS or iSCSI). It’s been a pretty popular post, ranking right up there with the ever-popular article on VMware ESX, NIC teaming, and VLAN trunks.

Since I started working with VMware vSphere (now officially available as of 5/21/2009), I have been evaluating how to replicate the same sort of setup using ESX/ESXi 4.0. For the most part, the configuration of VMkernel ports to use jumbo frames on ESX/ESXi 4.0 is much the same as with previous versions of ESX and ESXi, with one significant exception: the vNetwork Distributed Switch (vDS, what I’ll call a dvSwitch). After a fair amount of testing, I’m pleased to present some instructions on how to configure VMkernel ports for jumbo frames on a dvSwitch.

How I Tested

The lab configuration for this testing was pretty straightforward:

  • For the physical server hardware, I used a group of HP ProLiant DL385 G2 servers with dual-core AMD Opteron processors and a quad-port PCIe Intel Gigabit Ethernet NIC.
  • All the HP ProLiant DL385 G2 servers were running the GA builds of ESX 4.0, managed by a separate physical server running the GA build of vCenter Server.
  • The ESX servers participated in a DRS/HA cluster and a single dvSwitch. The dvSwitch was configured for 4 uplinks. All other settings on the dvSwitch were left at the defaults.
  • For the physical switch infrastructure, I used a Cisco Catalyst 3560G running Cisco IOS version 12.2(25)SEB4.
  • For the storage system, I used an older NetApp FAS940. The FAS940 was running Data ONTAP 7.2.4.

Keep in mind that these procedures or commands may be different in your environment, so plan accordingly.

Physical Network Configuration

Refer back to my first article on jumbo frames to review the Cisco IOS commands for configuring the physical switch to support jumbo frames. Once the physical switch is ready to support jumbo frames, you can proceed with configuring the virtual environment.
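
For quick reference, on a Catalyst 3560-class switch this comes down to a single global configuration command followed by a reload; the syntax varies by IOS platform, so check the documentation for your particular switch:

```
! Enable jumbo frames globally on a Catalyst 3560 (takes effect only after a reload)
switch(config)# system mtu jumbo 9000
switch(config)# end
switch# reload
```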

Virtual Network Configuration

The virtual network configuration consists of several steps. First, you must configure the dvSwitch to support jumbo frames by increasing the MTU. Second, you must create a distributed virtual port group (dvPort group) on the dvSwitch. Finally, you must create the VMkernel ports with the correct MTU. Each of these steps is explained in more detail below.

Setting the MTU on the dvSwitch

Setting the MTU on the dvSwitch is pretty straightforward:

  1. In the vSphere Client, navigate to the Networking inventory view (select View > Inventory > Networking from the menu).
  2. Right-click on the dvSwitch and select Edit Settings.
  3. From the Properties tab, select Advanced.
  4. Set the MTU to 9000.
  5. Click OK.

That’s it! Now, if only the rest of the process was this easy…

By the way, this same area is also where you can enable Cisco Discovery Protocol support for the dvSwitch, as I pointed out in this recent article.

Creating the dvPort Group

Like setting the MTU on the dvSwitch, this process is pretty straightforward and easily accomplished using the vSphere Client:

  1. In the vSphere Client, navigate to the Networking inventory view (select View > Inventory > Networking from the menu).
  2. Right-click on the dvSwitch and select New Port Group.
  3. Set the name of the new dvPort group.
  4. Set the number of ports for the new dvPort group.
  5. In the vast majority of instances, you’ll want to set VLAN Type to VLAN and then set the VLAN ID accordingly. (This is the same as setting the VLAN ID for a port group on a vSwitch.)
  6. Click Next.
  7. Click Finish.

See? I told you it was pretty straightforward. Now on to the final step which, unfortunately, won’t be quite so straightforward or easy.

Creating a VMkernel Port With Jumbo Frames

Now things get a bit more interesting. As of the GA code, the vSphere Client UI still does not expose an MTU setting for VMkernel ports, so we are still relegated to using the esxcfg-vswitch command (or the vicfg-vswitch command in the vSphere Management Assistant—or vMA—if you are using ESXi). The wrinkle comes in the fact that we want to create a VMkernel port attached to a dvPort ID, which is a bit more complicated than simply creating a VMkernel port attached to a local vSwitch.

Disclaimer: There may be an easier way than the process I describe here. If there is, please feel free to post it in the comments or shoot me an e-mail.

First, you’ll need to prepare yourself. Open the vSphere Client and navigate to the Hosts and Clusters inventory view. At the same time, open an SSH session to one of the hosts you’ll be configuring, and use “su -” to assume root privileges. (You’re not logging in remotely as root, are you?) If you are using ESXi, then obviously you’d want to open a session to your vMA and be prepared to run the commands there. I’ll assume you’re working with ESX.

This is a two-step process. You’ll need to repeat this process for each VMkernel port that you want to create with jumbo frame support.

Here are the steps to create a jumbo frames-enabled VMkernel port:

  1. Select the host and go to the Configuration tab.
  2. Select Networking and change the view to Distributed Virtual Switch.
  3. Click the Manage Virtual Adapters link.
  4. In the Manage Virtual Adapters dialog box, click the Add link.
  5. Select New Virtual Adapter, then click Next.
  6. Select VMkernel, then click Next.
  7. Select the appropriate port group, then click Next.
  8. Provide the appropriate IP addressing information and click Next when you are finished.
  9. Click Finish. This returns you to the Manage Virtual Adapters dialog box.

From this point on you’ll go the rest of the way from the command line. However, leave the Manage Virtual Adapters dialog box open and the vSphere Client running.

To finish the process from the command line:

  1. Type the following command (that’s a lowercase L) to show the current virtual switching configuration:
    esxcfg-vswitch -l
    At the bottom of the listing you will see the dvPort IDs listed. Make a note of the dvPort ID for the VMkernel port you just created using the vSphere Client. It will be a larger number, like 266 or 139.
  2. Delete the VMkernel port you just created:
    esxcfg-vmknic -d -s <dvSwitch Name> -v <dvPort ID>
  3. Recreate the VMkernel port and attach it to the very same dvPort ID:
    esxcfg-vmknic -a -i <IP addr> -n <Mask> -m 9000 -s <dvSwitch Name> -v <dvPort ID>
  4. Use the esxcfg-vswitch command again to verify that a new VMkernel port has been created and attached to the same dvPort ID as the original VMkernel port.

At this point, you can go back into the vSphere Client and enable the VMkernel port for VMotion or FT logging. I’ve tested jumbo frames using VMotion and everything is fine; I haven’t tested FT logging with jumbo frames as I don’t have FT-compatible CPUs. (Anyone care to donate some?)

As I mentioned in yesterday’s Twitter post, I haven’t conducted any objective performance tests yet, so don’t ask. I can say that NFS feels faster with jumbo frames than without, but that’s purely subjective.

Let me know if you have any questions or if anyone finds a faster or easier way to accomplish this task.

UPDATE: I’ve updated the commands above to delete and recreate the VMkernel port, per the comments below.

