
Welcome to Technology Short Take #37, the latest in my irregularly-published series in which I share interesting articles from around the Internet, miscellaneous thoughts, and whatever else I feel like throwing in. Here’s hoping you find something useful!


  • Ivan does a great job of describing the difference between the management, control, and data planes, as well as providing examples. Of course, the distinction between control plane protocols and data plane protocols isn’t always perfectly clear.
  • You’ve heard me talk about snowflake servers before. In this post on why networking needs a Chaos Monkey, Mike Bushong applies the term to networks—a snowflake network is an intricately crafted network that is carefully tailored to utilize a custom subset of networking features unique to your environment. What is the fix—if one exists—to snowflake networks? Designing your network for resiliency and unleashing a Chaos Monkey on it is one way, as Mike points out. A fan of network virtualization might also say that decomposing today’s complex physical networks into multiple simple logical networks on top of a simpler physical transport network—similar to Mike’s suggestion of converging on a smaller set of reference architectures—might also help. (Of course, I am a fan of network virtualization, since I work with/on VMware NSX.)
  • Martijn Smit has launched a series of articles on VMware NSX. Check out part 1 (general introduction) and part 2 (distributed services) for more information.
  • The elephants and mice post at Network Heresy has sparked some discussion across the “blogosphere” about how to address this issue. (Note that my name is on the byline for that Network Heresy post, but I didn’t really contribute all that much.) Jason Edelman took up the idea of using OpenFlow to provide a dedicated core/spine for elephant flows, while Marten Terpstra at Plexxi talks about how Plexxi’s Affinities could be used to help address the problem of elephant flows. Peter Phaal speaks up in the comments to Marten’s article about how sFlow can be used to rapidly detect elephant flows, and points to a demo taking place during SC13 that shows sFlow tracking elephant flows on SCinet (the SC13 network).
  • Want some additional information on layer 2 and layer 3 services in VMware NSX? Here’s a good source.
  • This looks interesting, but I’m not entirely sure how I might go about using it. Any thoughts?


Nothing this time around, but I’ll keep my eyes peeled for something to include next time!


I don’t have anything to share this time—feel free to suggest something to include next time.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this post on getting the most out of HAProxy—in which Twilio walks through some of the configuration options they’re using and why—to be quite helpful. If you’re relatively new to HAProxy, as I am, then I’d recommend giving this post a look.
  • This list is reasonably handy if you’re not a Terminal guru. While written for OS X, most of these tips apply to Linux or other Unix-like operating systems as well. I particularly liked tip #3, as I didn’t know about that particular shortcut.
  • Mike Preston has a great series going on tuning Debian Linux running under vSphere. In part 1, he covered installation, primarily centered around LVM and file system mount options. In part 2, Mike discusses things like using the appropriate virtual hardware, the right kernel modules for VMXNET3, getting rid of unnecessary hardware (like the virtual floppy), and similar tips. Finally, in part 3, he talks about a hodgepodge of tips—things like blacklisting other unnecessary kernel drivers, time synchronization, and modifying the Linux I/O scheduler. All good stuff, thanks Mike!
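Two of the tweaks Mike covers in part 3 can be sketched in a couple of lines. This is a hypothetical illustration, not taken from his posts verbatim; the paths shown are standard for Debian, but verify them against your own distribution before applying:

```shell
# Blacklist the floppy driver so the (removed) virtual floppy isn't probed:
echo "blacklist floppy" | sudo tee /etc/modprobe.d/blacklist-floppy.conf

# Switch sda's I/O scheduler to noop, deferring I/O ordering to the hypervisor:
echo noop | sudo tee /sys/block/sda/queue/scheduler
```

Note that the scheduler change above is not persistent across reboots; Mike's series covers making it permanent via the kernel command line.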


  • “Captain KVM,” aka Jon Benedict, takes on the discussion of enterprise storage vs. open source storage solutions in OpenStack environments. One good point that Jon makes is that solutions need to be evaluated on a variety of criteria. In other words, it’s not just about cost nor is it just about performance. You need to use the right solution for your particular needs. It’s nice to see Jon say that if your needs are properly met by an open source solution, then “by all means stick with Ceph, Gluster, or any of the other cool software storage solutions out there.” More vendors need to adopt this viewpoint, in my humble opinion. (By the way, if you’re thinking of using NetApp storage in an OpenStack environment, here’s a “how to” that Jon wrote.)
  • Duncan Epping has a quick post about a VMware KB article update regarding EMC VPLEX and Storage DRS/Storage IO Control. The update is actually applicable to all vMSC configurations, so have a look at Duncan’s article if you’re using or considering the use of vMSC in your environment.
  • Vladan Seget has a look at Microsoft ReFS.


I’d better wrap it up here so this doesn’t get too long for folks. As always, your courteous comments and feedback are welcome, so feel free to start (or join) the discussion below.


This is a session blog for INF-VSP1423, esxtop for Advanced Users, at VMworld 2012 in San Francisco, CA. The presenter is Krishna Raj Raja from VMware.

The presenter starts out the session with an explanation of how the session is going to proceed. This year—this session has been presented multiple years at multiple VMworld conferences—he’s choosing to focus on exposing the inner workings of the ESXi hypervisor through esxtop. High-level topics include NUMA, CPU usage, host cache, SR-IOV, and managing data.

The session starts with managing data. The presenter suggests using vCenter Operations for managing data with large numbers of VMs, multiple vCenter instances, etc. However, in situations where you are focused on one host or one VM, then esxtop can be useful and pertinent. In many cases, though, vCenter Operations may be a better tool than esxtop.

He then provides a quick overview of managing the esxtop screen, such as proper TERM settings for Mac OS X (use “export TERM=xterm”), focusing on particular values, and showing/hiding counters. It’s also possible to exclude data by exporting the esxtop entities (using “--export-entity file.lst”), editing that text file, then importing them back in using “--import-entity file.lst”.
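Based on my session notes, the export/import workflow looks roughly like this (the filename is arbitrary):

```shell
# Dump the list of entities esxtop knows about to a text file:
esxtop --export-entity file.lst

# ...edit file.lst, removing the lines for entities you want hidden...

# Re-run esxtop displaying only the entities remaining in the file:
esxtop --import-entity file.lst
```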

Having now provided some details on managing the data, he switches his discussion to the topic of granted memory. He provides the example of assigning 8 GB of memory to a Windows 7 32-bit system, which is only capable of accessing 3 GB of RAM. Hence, Granted Memory will be limited to 3 GB. Switching to 64-bit shows that Granted Memory will change that to nearly 8 GB, as Windows touches all pages during boot. Later, that same value decreases, but the MCTLSZ counter increases to show that the memory balloon driver has reclaimed some of that memory.

Interestingly enough, setting a limit doesn’t cause Granted Memory to be reduced to the limit. In this case, though, the Zero counter increases to show that multiple virtual memory pages are being pointed to a single zeroed memory page. Reducing the limit even further shows Granted Memory dropping and the MCTLSZ counter increasing (meaning ballooning is active).

Next, the presenter talks about Active Memory. In many cases, when Active Memory is higher, CPU usage will also be higher. Similarly, when Active Memory is very low, CPU utilization is also low. This is not always the case, though. Be aware that Active Memory values can be reported as low if ballooning is active, even though CPU utilization might be high in that instance.

Next up is Overhead Memory. Found in the OVHDUW counter, this is influenced by a number of things; perhaps the biggest contributor is video memory. Setting large video memory settings can increase the overhead memory. In vSphere 5.0 and vSphere 5.1, VMware has been able to dramatically reduce overhead memory usage (in the example shown, it went from 260 MB of overhead in vSphere 4.x to 130 MB of overhead in vSphere 5.x). Less overhead memory means hosting more VMs. He also discussed OVHDMAX, but I couldn’t catch the details other than to say that this value is also greatly reduced in vSphere 5.x.

With regard to NUMA, there is a counter called NHN (NUMA Home Node). When NHN has multiple values, that means RAM is being split (or interleaved) across multiple NUMA nodes. This occurs when the VM has been allocated more resources than can be satisfied using a single NUMA node. Even though the host is using multiple NUMA nodes, the guest OS still only sees it as a single NUMA node. vNUMA, introduced in hardware version 9 and applied to VMs with more than 8 vCPUs, means that NUMA information is exposed to the guest OS. This can be helpful in improving the performance of NUMA-aware applications like databases, Java application servers, etc. (Virtual hardware version 9 is introduced in vSphere 5.1.) (CORRECTION: Although the presenter indicated that virtual hardware version 9 is required, documentation for vSphere 5.0 indicates that vNUMA only requires virtual hardware version 8.)

Now the presenter moves on to user worlds. Any process that runs in the VMkernel is called a user world. In esxtop, the GID column indicates the group ID of a process (the ID for this world group), and the NWLD column indicates how many worlds are in that process. When %RUN is high for idle, that’s perfectly normal and expected. There are world groups for VMs (and each VM has its own world group under the User world group). Pressing uppercase V causes esxtop to display only the VM world groups.

Let’s look at an application of world groups. By expanding a world group, you can see the specific worlds within a world group, and that will give you better visibility into what might be causing higher CPU usage. In the example he shared, turning on 3-D support meant that a separate SVGA thread (or world) could cause very high CPU usage in esxtop, even though the guest was not showing significant CPU utilization. The MKS threads (mouse, keyboard, screen) can also drive up CPU utilization. An example would be high CPU usage for the MKS worlds, but not necessarily high CPU usage for the SVGA 3-D thread.

Expanding the LUN display (by pressing e) can expose which world is contributing I/O to a particular LUN. From there, you can drill down into various screens within esxtop to determine exactly which thread/process/world is generating CPU usage as a result of the I/O being generated.

Now the presenter moves on to CPU usage, starting with physical CPUs. esxtop recognizes when hyperthreading is enabled, but vCenter Server doesn’t distinguish logical CPUs and reports values twice per core. CPU core speeds are denoted by different P-states; P0 is the maximum rated clock speed, whereas P-n is the current clock speed. Different counters in esxtop might report different values because some are based on P0, whereas others are based on P-n. Choosing “OS Control Mode” (or the equivalent) in the server BIOS then causes esxtop to report all the different P-states for the CPUs, with utilizations reported for each P-state. The PSTATE MHZ value shows the different clock speeds supported for the CPUs.

By default, the VMkernel tries to place (schedule) VMs on different cores, instead of just different logical CPUs (which might be hyperthreaded cores on the same physical core). This behavior can be modified using the “Hyperthreaded Core Sharing” setting in vCenter Server; the default value for this setting is Any (which allows core sharing without any restrictions). For workloads that are sensitive to core sharing or caching behaviors, you can change this setting to optimize performance.

The presenter next went through a discussion of caching, but I wasn’t able to catch all the details. He discussed both cache to SSD as well as context-based read caching (CBRC). CBRC explains why esxtop might report some level of I/O at the VM level, but very little I/O at the LUN level. (CBRC is also referred to as the View Storage Accelerator.)

SR-IOV—Single Root I/O Virtualization—isn’t visible inside esxtop. SR-IOV allows a single physical device (or physical function, referred to as a PF) to present multiple virtual instances (or virtual functions, known as VFs) to the ESXi hypervisor. VFs can be presented directly to a guest OS using VMDirectPath. Running something like lspci in the guest OS shows the device being presented directly to the guest. esxtop will only show the physical NICs. There are other ways of viewing this information, though. You can look at CPU usage, or you can look at IRQs (interrupt requests), where esxtop will show a “pcip” device that denotes a PCI passthrough device (like an SR-IOV virtual function).

At this point the presenter wrapped up the session.


The question of VMware’s future in the face of increasing competition is not a new one; it’s been batted around by quite a few folks. So Steven J. Vaughan-Nichols’ article “Does VMware Have a Real Future?” doesn’t really open any new doors or expose any new secrets that haven’t already been discussed elsewhere. What it does do, in my opinion, is show that the broader market hasn’t yet fully digested VMware’s long-term strategy.

Before I continue, though, I must provide disclosure: what I’m writing is my interpretation of VMware’s strategy. Paul M. hasn’t come down and given me the low-down on their plans, so I can only speculate based on my understanding of their products and their sales strategy.

Mr. Vaughan-Nichols’ article focuses on what has been, to date, VMware’s most successful technology and product: the hypervisor. Based on what I know and what I’ve seen in my own experience, VMware created the virtualization market with their products and cemented their leadership in that market with VMware Infrastructure 3 and, later, vSphere 4 and vSphere 5. Their hypervisor is powerful, robust, scalable, and feature-rich. Yet, the hypervisor is only one part of VMware’s overall strategy.

If you go back and look at the presentations that VMware has given at VMworld over the last few years, you’ll see VMware focusing on what many of us refer to as the “three layer cake”:

  1. Infrastructure
  2. Applications (or platforms)
  3. End-user computing

If you like to think in terms of *aaS, you might think of the first one as Infrastructure as a Service (IaaS) and the second one as Platform as a Service (PaaS) or Software as a Service (SaaS). Sorry, I don’t have a *aaS acronym for the third one.

I believe that VMware knows that relying on the hypervisor as its sole differentiating factor will come to an end. We can debate how quickly that will happen or which competitors will be most effective in making that happen, but those issues are beside the point. This is not to say that VMware is ceding the infrastructure/IaaS market, but instead recognizing that it cannot be all that VMware is. VMware must be more.

What is that “more”? I’m glad you asked.

Let’s look back at the forces that drove VMware’s hypervisor into power. We had servers with more capacity than the operating system (OS)/application stack could effectively leverage, leaving us with lots of servers that were lightly utilized. We had software stacks that drove us to a “one OS/one application” model, again leading to lots of servers that were lightly utilized. Along comes VMware with ESX (and later ESXi) and the ability to fix that problem, and—this is a key point—fix it without sacrificing compatibility. That is, you could continue to deploy your OS instances and your application stacks in much the same way. No application rewrite needed. That was incredibly powerful, and the market responded accordingly.

This compatibility-centered approach is powerful yet limiting. Yes, you can maintain status quo, but the problem is that you’re maintaining status quo. Things aren’t really changing. You’re still bound by the same limitations as before. You can’t really take advantage of the new functionality the hypervisor has introduced.

Hence, applications need to be rewritten. If you want to really take advantage of virtualization, you need a—gasp!—platform designed to exploit virtualization and the hypervisor. This explains VMware’s drive into the application development space with vFabric (Spring, GemFire, SQLFire, RabbitMQ). These tools give them the platform upon which a new generation of applications can be built. (And I haven’t even yet touched on CloudFoundry.) This new generation of applications will assume the presence of a hypervisor, and be able to exploit the functionality provided by it. However, a new generation of applications that is still bound by the old ways of accessing those applications will be constrained in its effectiveness.

Hence, end users need new ways to access these applications, and organizations need new ways to deliver applications to end users. This explains VMware’s third layer in the “three layer cake”: end-user computing. Reshaping applications to embrace new form factors (tablets, smartphones) means re-architecting your applications. If you’re going to re-architect your applications, you might as well build them using a new platform and set of tools that lets you exploit the ever-ubiquitous presence of a hypervisor. Starting to see the picture now?

If you look at VMware only from the perspective of the hypervisor, then yes, VMware’s future viability is suspect. I’ll grant that. Take a broader look, though—look at VMware’s total vision—and I think you’ll see a different picture. That’s why—assuming VMware can execute on this vision—I think that the answer to Mr. Vaughan-Nichols’ question, “Does VMware have a real future?”, is yes. VMware might not continue to reign as an undisputed market leader, but I do think their long-term viability isn’t in question (again, assuming they can execute on their vision).

Feel free to share your thoughts in the comments. Do you think VMware has a future? What should they do (or not do) to ensure future success? Or is their fall a foregone conclusion? I’d love to hear your thoughts. I only ask for disclosure of vendor affiliations, where applicable. (For example, although I work for EMC and EMC has a financial relationship with VMware, I speak only for myself.)


A little over a month ago, I was installing VMware ESXi on a Cisco UCS blade and noticed something odd during the installation. I posted a tweet about the incident. Here’s the text of the tweet in case the link above stops working:

Interesting…this #UCS blade has local disks but all disks are showing as remote during #ESXi install. Odd…

Several people responded, indicating they’d run into similar situations. No one—at least, not that I recall—was able to tell me why this was occurring, only that they’d seen it happen before. And it wasn’t just limited to Cisco UCS blades; a few people posted that they’d seen the behavior with other hardware, too.

This morning, I think I found the answer. While reading this post about scratch partition best practices on VMware ESXi Chronicles, I clicked through to a VMware KB article referenced in the post. The KB article discussed all the various ways to set the persistent scratch location for ESXi. (Good article, by the way. Here’s a link.)

What really caught my attention, though, was a little blurb at the bottom of the KB article in reference to examples where scratch space may not be automatically defined on persistent storage. Check this out (emphasis mine):

2.  ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote, and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.

There’s the answer: although these drives are physically inside the server and are local to the server, they are considered remote during the VMware ESXi installation because they are SAS drives. Mystery solved!


How’s that for acronyms?

In all seriousness, though, as I was installing VMware ESXi this evening onto some remote Cisco UCS blades, I ran into some interesting keymapping issues and I thought it might be handy to document what worked for me in the event others run into this issue as well.

So here’s the scenario: I’m running Mac OS X 10.6.7 on my MacBook Pro, and using VMware View 4.6 to connect to a remote Windows XP Professional desktop. Within that Windows XP Professional session, I’m running Cisco UCS Manager 1.4(1i) and loading up the KVM console to access the UCS blades. From there, I’m installing VMware ESXi onto the blades from a mapped ISO file.

What I found is that the following keystrokes worked correctly to pass through these various layers to the ESXi install process:

  • For the F2 key (necessary to log in to the ESXi DCUI), use Ctrl+F2 (in some places) or Cmd+F2 (in other places).
  • For the F5 key (to refresh various displays), the F5 key alone works.
  • For the F11 key (to confirm installation at various points during the ESXi install process), use Cmd+F11.
  • For the F12 key (used at the DCUI to shutdown/reboot), use Cmd+F12.

There are a couple of factors that might affect this behavior:

  • In the Keyboard section of System Preferences, I have “Use F1, F2, etc., keys as standard function keys” selected; this means that I have to use the Fn key to access any “special” features of the function keys (like increasing volume or adjusting screen brightness). I haven’t tested what impact this has on this key mapping behavior.
  • The Mac keyboard shortcuts in the preferences of the Microsoft Remote Desktop Connection do not appear to conflict with any of the keystrokes listed above, so it doesn’t appear that this is part of the issue.

If I find more information, or if I figure out why the keystrokes are mapping the way they are, I’ll post an update to this article. In the meantime, if you happen to need to install VMware ESXi onto a Cisco UCS blade via the UCSM KVM through VMware View from a Mac OS X endpoint, now you know how to make the keyboard shortcuts work.

Courteous comments are always welcome—speak up and contribute to the discussion!


This is MA6580, titled “Bridge the ESX/ESXi Management Gap with the vSphere Management Assistant (vMA), Tips and Tricks Included”. The presenters are Chris Monfet and Tim Murnane. This is my first session of Day 3 of VMworld 2010 in San Francisco. Following this is a whirlwind of vendor meetings, video interviews, a book signing, and more sessions this afternoon.

The focus on the vMA stems from VMware’s shift in focus from VMware ESX (vSphere 4.1 will be the last version with VMware ESX) to VMware ESXi. vMA is based on CentOS (will they switch to SuSE like they are for all other virtual appliances?) and supports VMware ESX/ESXi 3.5 Update 2 or later. The vMA uses 512MB of RAM and has a 5GB VMDK. It uses hardware version 4 in order to provide support for VI3 environments. You can deploy the vMA directly from the vSphere Client or by downloading the OVF and then deploying it.

The /opt/vmware/vima/bin/ script allows you to reconfigure vMA network settings if necessary.

The vma-update command (with the parameters info, update, or scan) allows you to patch or update the vMA. If you have a proxy server, you’ll want to update the /etc/vmware/esxupdate/vmaupdate.conf file accordingly.

By default, vMA does not run the NTP daemon, although it is preconfigured to use the servers. You can use chkconfig to enable the NTP daemon. You’ll also want to update the time zone configuration.
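A minimal sketch of enabling NTP on the vMA, assuming the stock CentOS service names and the vi-admin account (the presenters didn’t show exact commands, so treat this as illustrative):

```shell
# Enable the NTP daemon at boot and start it immediately:
sudo chkconfig ntpd on
sudo service ntpd start

# Set the time zone to UTC (matching what ESXi uses):
sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime
```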

The preferred target for vMA is vCenter Server, and you can also use it as a remote log host for VMware ESX/ESXi. You can also run vMA outside of the actual vSphere environment; for example, you can run it under VMware Workstation.

With regard to authentication, vMA uses interactive logon (prompted for username and password for every command), FastPass (stores credentials locally in a file), or Active Directory (using Likewise Open integration).

When using FastPass, you’ll use the vifp addserver, vifp removeserver, vifp listservers commands. There’s also a vifp rotatepassword option to automatically rotate passwords between the vMA and the VMware ESX/ESXi hosts.
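In practice, the FastPass workflow looks something like this; the host names here are hypothetical placeholders:

```shell
# Add targets to FastPass (each prompts for credentials to store locally):
vifp addserver vcenter.example.com
vifp addserver esx01.example.com

# Confirm which targets are registered:
vifp listservers

# Remove a target you no longer manage from this vMA:
vifp removeserver esx01.example.com
```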

With Active Directory integration, you only need to use the domainjoin-cli command to join the Active Directory domain. From there, authentication will happen automatically.

As I mentioned earlier, you can also use the vMA as a remote loghost. The vi-logger command is what you use to set this up. This is particularly important for VMware ESXi. Note that vpxa logs are not sent to syslog (see VMware KB 1017658). All log files go to /var/log/vmware/<hostname>.

The presenters now move into some use case/operational discussions. There are lots of examples provided; a bit more detail is provided for using the vMA to configure storage with the esxcli command. Examples are also provided for setting the MTU size on a vSwitch (using vicfg-vswitch), setting up log collection with vi-logger, and customizing management services. New to vMA 4.1 is the vicfg-hostops command, which you can use to put hosts into (and out of?) maintenance mode.

Now the session moves into a few best practices for vMA:

  • One vMA per 100 VMware ESX/ESXi hosts when using vi-logger.
  • Place vMA on your management LAN/VLAN.
  • Use a static IP address, a fully qualified domain name, and correct DNS settings. This is especially important for AD integration.
  • Configure the vMA as a remote log host.
  • Enable NTP and configure it for UTC (VMware ESXi uses UTC).
  • The recommended target for vMA/vCLI is vCenter Server (much in the same way vCenter Server is the recommended target for the vSphere Client).
  • You might need to leave a VMware ESX host for tools like mbralign; this functionality still hasn’t been migrated over to VMware ESXi or the vMA.
  • Clean up local accounts on your VMware ESX/ESXi hosts when deploying a new vMA or destroying an old one.
  • Try to limit the use of resxtop, and use it for real-time troubleshooting, not monitoring.

The session wraps up with a few pre-recorded demos of bulk adding servers, bulk adding users, and running resxtop.


This is one article in a series of articles focused toward new users. Some other New User’s Guide articles include:

This particular article is a follow-up of sorts to the first article listed above. While that article focused on virtual networking with VMware ESX, this article focuses on virtual networking with VMware ESXi. Given that VMware’s stated focus is on VMware ESXi moving forward, I thought this article would be helpful and timely.

For new users who are seeking a thorough explanation of how VMware ESX/ESXi networking functions, I’ll recommend a series of articles by Ken Cline titled The Great vSwitch Debate. Ken goes into a great level of detail. Go read that, then you can come back here.

All of the commands presented in this article were tested using VMware vSphere 4.1. The environment consisted of hosts running VMware ESXi 4.1 being managed by VMware vCenter Server 4.1. For CLI access, I used the vSphere Management Assistant (vMA) virtual appliance, deployed via OVF.

The majority of the networking configuration you will need to perform on VMware ESXi boils down to just a few commands:

  • vicfg-vswitch: You will use this command to manipulate virtual switches (vSwitches) and port groups.
  • vicfg-vmknic: You will use this command to create, modify, or delete VMkernel NICs on the VMware ESXi hosts.
  • vicfg-nics: You will use this command to view (and potentially manipulate) the physical network interface cards (NICs) in a VMware ESXi host.

The tasks that you’ll actually perform using these commands are pretty straightforward:

  1. Creating, configuring, and deleting vSwitches
  2. Creating, configuring, and deleting port groups
  3. Creating, configuring, and deleting VMkernel NICs

I’ll start with a few prerequisites that are necessary due to the fact that you are using a remote CLI to access the VMware ESXi hosts.

As you can see from the list above, all the commands you’re going to use are the vicfg-* commands. All of these commands have some standard parameters they require in addition to the task-specific parameters. To make things a bit simpler for you, I’ll recommend that you set persistent values (persistent for the current vMA session, at least) to simplify the commands later. Here are the values I recommend you establish:

  • First, set the value of the VI_SERVER variable to be the fully qualified domain name of the vCenter Server computer. Use the bash export command to set this variable, like this:
    Setting this variable now means that none of the vicfg-* commands will need to have this parameter specified. Since it’s likely that you’ll consistently work with one specific instance of vCenter Server, then this is a pretty safe variable to set.
  • In the absence of using Active Directory integration (which is a far cleaner choice, but one which we’ll reserve for a future article), set the VI_USERNAME variable to the name of the user account you’ll use to authenticate against vCenter Server. Again, use the export command as outlined in the previous bullet.
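The export commands referenced in the bullets above look like this; the host name and account are hypothetical placeholders for your own environment:

```shell
# Set once per vMA session; subsequent vicfg-* commands read these
# automatically, so you won't need --server or --username parameters.
export VI_SERVER=vcenter.example.com    # FQDN of your vCenter Server
export VI_USERNAME=administrator        # account used to authenticate
```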

Now that you have some basics established, I’ll move on to creating, configuring, and deleting vSwitches.

Creating, Configuring, and Deleting vSwitches

You’ll use the vicfg-vswitch command for the majority of these tasks. Unless I specifically indicate otherwise, all the commands, parameters, and arguments are case-sensitive. For all these vicfg-* commands, you will get prompted for the password to the user account you defined when you set the value of the VI_USERNAME variable.

To create a vSwitch, use this command:

vicfg-vswitch -h <ESXi hostname> -a <vSwitch Name>

To link a physical NIC to a vSwitch—which is necessary in order for the vSwitch to pass traffic onto the physical network or to receive traffic from the physical network—use this command:

vicfg-vswitch -h <ESXi hostname> -L <Physical NIC> <vSwitch Name>

In the event you don’t have information on the physical NICs, you can use this command to list the physical NICs:

vicfg-nics -h <ESXi hostname> -l (lowercase L)

Conversely, if you need to unlink (remove) a physical NIC from a vSwitch, use this command:

vicfg-vswitch -h <ESXi hostname> -U <Physical NIC> <vSwitch Name>

To change the Maximum Transmission Unit (MTU) size on a vSwitch, use this command:

vicfg-vswitch -h <ESXi hostname> -m <MTU size> <vSwitch Name>

To delete a vSwitch, use this command:

vicfg-vswitch -h <ESXi hostname> -d <vSwitch Name>
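Putting those commands together, creating a jumbo-frame-capable vSwitch might look like this. The host and NIC names are hypothetical, and VI_SERVER/VI_USERNAME are assumed to be set as described earlier:

```shell
HOST=esx01.example.com   # hypothetical ESXi host name

vicfg-vswitch -h $HOST -a vSwitch1           # create the vSwitch
vicfg-vswitch -h $HOST -L vmnic1 vSwitch1    # uplink physical NIC vmnic1
vicfg-vswitch -h $HOST -m 9000 vSwitch1      # set MTU for jumbo frames
vicfg-vswitch -h $HOST -l                    # verify the configuration
```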

Creating, Configuring, and Deleting Port Groups

As with virtual switches, the vicfg-vswitch is the command you will use to work with port groups. Once again, unless I specifically indicate otherwise, all the commands, parameters, and arguments are case-sensitive.

To create a port group, use this command:

vicfg-vswitch -h <ESXi hostname> -A <Port Group Name> <vSwitch Name>

To set the VLAN ID for a port group, use this command:

vicfg-vswitch -h <ESXi hostname> -v <VLAN ID> -p <Port Group Name> <vSwitch Name>

To delete a port group, use this command:

vicfg-vswitch -h <ESXi hostname> -D <Port Group Name> <vSwitch Name>

To view the current list of vSwitches, port groups, and uplinks, use this command:

vicfg-vswitch -h <ESXi hostname> -l (lowercase L)
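As a worked example, creating a port group for VLAN 100 on an existing vSwitch might look like this (the host, port group, and vSwitch names are hypothetical):

```shell
HOST=esx01.example.com   # hypothetical ESXi host name

vicfg-vswitch -h $HOST -A Production100 vSwitch1          # create port group
vicfg-vswitch -h $HOST -v 100 -p Production100 vSwitch1   # tag with VLAN 100
vicfg-vswitch -h $HOST -l                                 # verify the result
```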

Creating, Configuring, and Deleting VMkernel NICs

To work with ESXi’s VMkernel NICs, you’ll primarily use the vicfg-vmknic command. As in the previous sections, all commands are case-sensitive unless I specifically indicate otherwise, and all commands assume you’ve defined the VI_SERVER and VI_USERNAME variables.

To create a new VMkernel NIC, use this command:

vicfg-vmknic -h <ESXi hostname> -a -i <VMkernel NIC IP address> -n <Subnet mask> <Port group>

To delete a VMkernel NIC, use this command:

vicfg-vmknic -h <ESXi hostname> -d <Port group>

To enable vMotion on an already-created VMkernel NIC:

vicfg-vmknic -h <ESXi hostname> -E <Port group>
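Putting the VMkernel NIC commands together, here is a sketch that creates a VMkernel NIC and then enables vMotion on it. The hostname, IP address, and port group name are hypothetical placeholders, and the commands are only echoed as a dry run:

```shell
#!/bin/sh
# Hypothetical values; adjust for your environment.
VI_SERVER="esxi01.example.com"
PORTGROUP="VMkernel-vMotion"

# Create the VMkernel NIC with a static IP address and subnet mask,
# then enable vMotion on the newly created interface.
ADD_CMD="vicfg-vmknic -h $VI_SERVER -a -i 192.168.10.11 -n 255.255.255.0 $PORTGROUP"
VMOTION_CMD="vicfg-vmknic -h $VI_SERVER -E $PORTGROUP"

# Dry run: echo the commands instead of executing them.
echo "$ADD_CMD"
echo "$VMOTION_CMD"
```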

There are more networking-related tasks you can perform from the CLI, but for a new user these commands should cover the lion’s share of the networking configuration. Good luck with your ESXi environment!


Welcome to the 40th post in the Virtualization Short Take series, where I share with you various virtualization-related links, thoughts, and news tidbits. (Occasionally, I throw in some stuff that’s not virtualization related just to see if you are paying attention.) Enjoy!

  • There have been a couple of posts now discussing Storage IO Control, a new feature that is possibly slated for inclusion in a future release of VMware vSphere. Storage IO Control extends the disk shares model cluster-wide, allowing administrators to properly shape access to back-end storage resources. The inimitable Scott Drummonds discussed it on Pivot Point (his blog), and Craig Stewart also recently published an article about Storage IO Control over at Gestalt IT. There’s a fair amount of duplication between the two articles (Craig based his article partly on Scott’s), but both are worth a read if you need to come up to speed on this new feature. (Quick disclaimer: I’m discussing Storage IO Control here only because it’s been mentioned elsewhere by others. As to whether or not this feature will or will not appear in a future release, or when that future release might be, I know nothing. OK?)
  • I don’t know why, but I saw this virtual appliance on VMware’s web site earlier today and it has triggered a nagging feeling in the back of my head. Is the future of computing found in simple “scale out” building blocks like this?
  • This article on “VM stall” by Andi Mann just re-confirms something I’ve been saying for a while: there are still too many companies out there that aren’t taking full advantage of virtualization. If you’re one of the almost 54% of companies that are still less than 30% virtualized, what’s holding you back?
  • Among all the other announcements from EMC World last week, this little tidbit might have gotten overlooked. Chad blogged about a fix contained in FLARE 30—the updated version announced at the conference—that addresses a problem with iSCSI initiators and the CLARiiON arrays. Good work, EMC engineering!
  • Over on VMware’s ESXi Chronicles blog, Charu Chaubal recently published a two-part series on hardware monitoring via CIM (part 1 and part 2).
  • A reader dropped me an e-mail about an issue uncovered in their environment while trying to automate the VMware Tools installation. Apparently, the VMware Tools installation relies on 8.3 file naming conventions. Normally, this wouldn’t be a problem, but in environments where 8.3 file name creation is disabled…well, you can see where there might be a problem. No workaround has yet been found. Any wizards out there who have suggestions are welcome to add them to the comments of this post.
  • Two posts popped up in the last couple of weeks regarding the default number of ports on a Nexus 1000V port profile: this post by Kevin Goodman and this post by Jason Nash. Fortunately, it’s a quick process to increase the default maximum of 32 ports.
  • Running your VMware vSphere environment on NFS? Have a look at this document from VMware.
  • Didier Pironet of DeinosCloud, the same gentleman who showed us how to increase the number of VMware HA primary nodes, posted a guide on adjusting the memory usage of Tomcat (the engine behind VMware vCenter’s web services). This is most likely an unsupported configuration change, but it might be handy in test/development environments.
  • Kevin Goodman also had a good post on configuring EMC PowerPath on Linux on Cisco UCS. I know, I know: this isn’t strictly related to virtualization, but it’s close enough in my book.
  • Alastair finally declares that the emperor has no clothes when he states why he believes users don’t want a client hypervisor. Personally, I tend to agree with him; I think a hosted hypervisor is far more valuable in the client space (especially in the BYOPC scenario). Just because you can run a bare-metal hypervisor on your laptop doesn’t necessarily mean that you should.
  • There seems to be a fair amount of confusion around the vStorage APIs; perhaps this is due to the different subsets of the vStorage APIs. There are the vStorage APIs for Array Integration (VAAI); these were discussed in some detail last week at EMC World. There are also the vStorage APIs for Multipathing (VAMP), which serve to support multipathing plugins like PowerPath/VE. Finally, there are the vStorage APIs for Data Protection (VADP), which are the APIs that serve to replace VMware Consolidated Backup. If you’d like to know more about VADP in particular, this VMware KB article has a list of frequently asked questions about VADP.
  • Tom Howarth has brought to light a potentially serious problem with Changed Block Tracking (CBT), a key part of the vStorage APIs for Data Protection (VADP) that enables lots of backup and recovery applications.
  • While reviewing one of the weekly VMware KB digests, I came across this VMware KB article in which virtual NICs are sometimes detected as removable hardware; this can, in turn, cause the virtual NIC to disappear from the virtual machine. It appears that the only workaround for this behavior is to disable HotPlug.
  • Xsigo recently posted a comparison of their I/O virtualization solution vs. other I/O virtualization solutions. They include FCoE as an I/O virtualization solution, but as I’ve said in the past, I don’t consider FCoE an I/O virtualization solution. Including FCoE in this sort of comparison is a bit like saying an apple is a poor orange because it lacks a thick outer skin. FCoE wasn’t designed to do I/O virtualization; it was designed to carry Fibre Channel traffic over Ethernet. Despite the liberty with which comparative technologies were selected, the article is nevertheless worth reading.
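On the Nexus 1000V item above: raising the default maximum of 32 ports is a per-port-profile setting. As a rough sketch (the profile name here is hypothetical, and you should verify the exact syntax against the posts linked above and your Nexus 1000V release), the relevant NX-OS configuration looks something like this:

```
n1000v(config)# port-profile type vethernet VM-Data
n1000v(config-port-prof)# vmware max-ports 128
```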

And to round out this issue of Virtualization Short Takes, here are a few “bonus links” I found:

UCS with disjointed L2 domains
The “Mini-Rack” Approach to Blade Server Design
Hot adding or removing a Cisco 3750 from a stack
EMC World Cubed – Here’s all the Video
ESXTOP, My understanding

That’s it for now. I hope you found something useful. Feel free to share more useful links in the comments, and thanks for reading!


Welcome to Virtualization Short Take #33, the first installment of the Virtualization Short Take series for 2010! This installment will be a bit lean, but I hope that you find something useful among these nuggets of information.

  • This article by Kenneth Van Ditmarsch, backed up by this post by Chad Sakac, underscores the need for proper operational documentation for your virtualization environment. Organizations that have taken the time to prepare operational procedures and train their staff on using the documented procedures will, in my opinion, be far less likely to fall victim to this vSphere storage bug. I’m not saying you’ve got to go crazy on documentation, but take the time to document and validate the core procedures your team is using. I think you’ll find the results beneficial.
  • Speaking of vSphere bugs, Chad also describes a bug affecting vSphere 4 (including Update 1) involving NMP and Round Robin. If you change the I/O Operation Limit for Round Robin (using the esxcli command), you might find that the value gets changed to some random value upon reboot. The workaround is to not modify the I/O Operation Limit (the default value is 1000).
  • Scott Drummonds of VMware has been publishing a great series of articles on host swapping and memory overcommit. The series starts with a discussion on host swapping and the fact that VMware ESX does not track working sets within every VM (it would be too much overhead). He continues with this post on using SSDs to help alleviate potential host swapping performance concerns (also see this article that Scott references in his post). In the last two posts, Scott debunks some misconceptions about memory management and then goes to show why memory overcommit is important in optimizing memory utilization. Definitely some good stuff.
  • The VMware Communities blog post about using SSDs to improve performance when memory is overcommitted (found here) got me thinking. In the tests documented in that post, local SSDs were used. What if EFDs were used instead? I’d be curious to know the results. This would support a boot-from-SAN approach that is more amenable to the Cisco UCS model of stateless computing (although I’ve said before that I’m not entirely sold on stateless computing in a virtualized environment, since the hypervisor negates some of the benefits).
  • There are some areas where the Cisco UCS stateless computing model really shines; Steve Chambers describes one such use case in this post on multi-tenant DR with Cisco UCS.
  • Here’s a useful document on installing VMware ESXi on Cisco UCS using the UCS Manager KVM. Last time I tried the UCS Manager KVM on my Mac, it was barely usable and you couldn’t attach media; hopefully it’s improved since then.
  • Arnim van Lieshout has a great post on geographically dispersed VMware clusters. One thought that occurred to me as I was reading this post was that while Arnim’s post was written from the perspective of a production site and a DR site, the same challenges affect the use of external cloud providers and vCloud. As VMware and VMware’s partners start to address these challenges, not only does the idea of geographically dispersed clusters start to look more realistic and more flexible, but so too does the idea of leveraging additional capacity from a cloud provider via vCloud.
  • Interested in triggering an ESXi kernel panic on demand? Eric Sloof shows you how.
  • Finally, for users with the Nexus 1000V who want to update their ESX/ESXi hosts to Update 1 using the vihostupdate utility, Duncan’s post (and this associated VMware KB article) provides all the information necessary to make it work properly.
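For reference on the Round Robin bug mentioned above, the I/O Operation Limit on ESX/ESXi 4 is inspected and modified with commands along these lines (the naa device identifier is a hypothetical placeholder, and the exact syntax may vary by build, so verify against your environment before use):

```
# Inspect the current Round Robin configuration for a device
esxcli nmp roundrobin getconfig --device naa.6006016012345678

# Change the I/O Operation Limit; per the bug above, this value may be
# reset to a random number on reboot, so leave it at the default of 1000
esxcli nmp roundrobin setconfig --device naa.6006016012345678 --type iops --iops 1000
```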

I did find a couple other useful posts that I haven’t had the time to properly read but which look interesting:

VMware Desktop Reference Architecture Workload Simulator (RAWC)
White Paper: VMware vSphere 4 Performance with Extreme I/O Workloads

That’s it for this time around. I welcome all courteous comments or thoughts on any of the links or posts I’ve mentioned here. Thanks for reading!


Welcome back to yet another Virtualization Short Take! Here is a collection of virtualization-related items—some recent, some not, but hopefully all interesting and/or useful.

  • Matt Hensley posted a link to this VIOPS document on how to set up VMware SRM 4.0 with an EMC Celerra storage array. I haven’t had the chance to read through it yet.
  • Jason Boche informs us that both Lab Manager 3 and Lab Manager 4 have problems with the VMXNET3 virtual NIC. In this blog post, Jason describes how his attempt to install Lab Manager Server into a VM with the VMXNET3 NIC was failing. Fortunately, Jason provides a workaround as well, but you’ll have to read his article to get that information.
  • Bruce Hoard over at Virtualization Review (disclaimer: I write a regular column for the print edition of Virtualization Review) stirred up a bit of controversy with his post about Hyper-V’s three problems. The first problem is indeed a problem, but it’s a market problem, not an architectural or technological one; VMware is indeed the market leader and has a quite solid user base. The other two “problems” stem from Microsoft’s architectural decision to embed the hypervisor into Windows Server. Like any other technology decision, this one has its advantages and disadvantages. Based on historical data, it would seem that the need to patch Windows Server will impact the uptime of the Windows virtualization solution; that’s not to say that VMware ESX/ESXi don’t have their own patches and associated downtime. I guess the key takeaway here is that VMware seems to be doing a much better job of lessening (or even removing) the impact of that downtime through things like VMotion, DRS, HA, maintenance mode, and the like.
  • Apparently there is a problem with the GA release of the Host Update utility that is installed along with the vSphere Client, as outlined here by Barry Coombs. Downloading the latest version and reinstalling seems to fix the issue.
  • And while we are on the subject of ESX upgrades, here’s another one: if the /boot partition is too small, the upgrade to ESX 4.0.0 will fail. This isn’t really anything too new and, as Joep points out, is documented in the vSphere Upgrade Guide. I prefer clean installations of VMware ESX/ESXi anyway.
  • Dave Mishchenko details his adventures (part 1, part 2, and part 3) in managing ESXi without the VI Client or the vCLI. While the series is interesting and contains some useful information, I’m not so sure the exercise is useful in any way other than academically. First, Dave enables SSH access to ESXi, which is unsupported. Second, while he shows that it’s possible to manage ESXi without the VI Client or the vCLI, it doesn’t seem to be very efficient. Still, there is some useful information to be gleaned for those who want to know more about ESXi and its inner workings.
  • I think Simon Seagrave and Jason Boche were collaborating in secret, since they both wrote posts about using vSphere’s power savings/frequency scaling functionality. Simon’s post is dated October 27; Jason’s post is dated November 11. Coincidence? I don’t think so. C’mon, guys, go ahead and admit it.
  • Thinking of using the Shared Recovery Site feature in VMware SRM 4.0? This VMware KB article might come in handy.
  • I’m of the opinion that every blogger has a few “masterpiece” posts. These are posts that are just so good, so relevant, so useful, that they almost transcend the other content on the blogger’s site. Based on traffic patterns, one of my “masterpiece” posts is the one on ESX Server, NIC teaming, and VLAN trunking. It’s not the most well-written post I’ve ever published, but it seems to have a lasting impact. Why do I mention this? Because I believe that Chad Sakac’s post on VMware I/O queues, microbursting, and multipathing is one of his “masterpiece” posts. Like Scott Drummonds, I’ve read that post multiple times, and every time I read it I get something else out of it, and I’m reminded of just how much I have yet to learn. Time to get back out of that comfort zone!
  • Oh, and speaking of Chad’s blog…this post is handy, too.

That’s all for now, folks. Stay tuned for the next installment, where I’ll once again share a collection of links about virtualization. Until then, feel free to share your own links in the comments.

