
Regular readers may recall that I met with Hyper9 during VMworld 2008 in Las Vegas. Check here for a summary of my discussion with Hyper9. Since that meeting, during which I had a chance to see the beta product, I’ve been in communication with David Marshall (of vmblog.com, who also works at Hyper9) and the rest of the Hyper9 crew about getting some private beta invitations for my readers.

Today I am happy to announce that I do have some beta invitations available to readers! If you’re interested in a beta invitation to try out Hyper9, please post a comment on this article. Be sure to provide a valid e-mail address when posting the comment, as this is how we will contact you about your beta invitation request (don’t worry, your e-mail address is never published or made available to others). Do not e-mail me to ask for a beta invitation; I will only work with comments left on the article. Comments will be closed once all the beta invitations have been awarded.

Please keep in mind that Hyper9 does have some prerequisites, so not everyone who requests an invitation may get one. Also keep in mind that all comments on this site are moderated, so you may not see your comment appear right away. Just be patient.

Rich over at VM /ETC has some screenshots and good coverage of the product, and I believe that he will have a few beta invitations as well. Good luck!


This edition of Virtualization Short Takes is mainly a collection of items from the VMworld 2008 conference in Las Vegas. Some of these are session transcripts from various bloggers; some are just VMworld-related blog posts.

  • Blogger Rich Brambley has some great coverage of various VMworld sessions: BC3819, BC2370, PO2575, and VD3261, among others. Excellent work, Rich.
  • The whole vStorage thing has prompted quite a flurry of attention. I’ll probably tackle this myself soon, but for now I’ll just comment on what others are saying. Stephen Foskett comments that “VDC-OS has legs!”, meaning that this vision has some substance behind it; I agree. He also has a nice collection of related vStorage links. There’s also Chad’s discussion of vStorage, which is quite helpful coming as it does from the perspective of a storage vendor seeking to take advantage of these new capabilities. Mark Twomey also had a little bit to say about vStorage as well.
  • Michael Keen’s VMworld 2008 analysis provides a good review of VMware’s announcements and strategy in relation to the competition and the partner ecosystem.
  • UK magazine Computing was disappointed by Paul Maritz’s vision and the VDC-OS announcement, expecting “more granular details on how the initiative would actually take off.” Personally, I felt that Steve Herrod’s keynote on Day 2 did a fairly reasonable job of showing the mechanics behind Paul’s vision.
  • Redmond Developer News provided another view of VMware’s VDC-OS announcement, although to be honest the article looked pretty much like every other discussion of VDC-OS, with the same interviews of the same people.
  • Kevin Fogarty’s take on the VDC-OS pitch, published on CIO.com originally, then republished by Computerworld and touched upon briefly by Tarry Singh, is that it’s “too much of a leap of faith for me and, I suspect, most of the VMware faithful as well.” I get that VMware’s new strategy is radically different from anything VMware has done before; up until now, their releases have been product-focused. Now Maritz is driving the company with a long-term vision that will be fulfilled through a series of product releases. It’s a shift in thinking, and one that will require an adjustment. Personally, I don’t like the new VDC-OS branding and I told VMware straight up that it would be confusing to customers. Nevertheless, the vision behind the brand is, in my opinion, solid and so I must disagree with Kevin’s analysis.
  • Massimo thinks Paul plagiarized the VDC-OS concept. If he were serious (which he’s not), he would actually have a pretty good case.
  • Eric Sloof has a bit of additional Cisco Nexus 1000V coverage.

I think that about wraps up my collection of VMworld 2008-related links. If I’ve missed anything significant—and I’m sure I have—please feel free to add it in the comments.


This is PO2061, VMware VirtualCenter 2.5 Database Best Practices. The presenter is Bruce McCready with VMware. My battery is starting to run low, so I may lose part of this session partway through. I have another battery that I can swap in if necessary.

The session starts out with the standard disclaimer. Most of this stuff is already out there, but the disclaimer is required anyway.

The VirtualCenter database is clearly an important part of the overall virtualization implementation. The goals of this presentation are to help users get a better understanding of the VC DB inner workings, to help them take advantage of changes in VC 2.5 and VC 2.5 Update 1, to talk about performance statistics and how they affect DB performance, and to answer some FAQs regarding VC DB size and performance.

As an overview of the VC DB, it stores host and VM details. The DB is slightly larger in VC 2.5 as it keeps about 40% more data about hosts. The VC DB also stores information about alarms and events. Old data about alarms and events is not purged over time. Finally, the VC DB also stores performance statistics. Up to 90% of the DB is taken up by performance statistics. Performance statistics are the primary factor in scalability and performance of the VC DB, which is what led to a major refactoring of the database in VC 2.5 as opposed to VC 2.0.

Bruce briefly describes the components that add load to the VirtualCenter database: performance statistics inserts, performance statistics rollup stored procedures, and ad hoc queries.

There are four levels of performance statistics. Level 1 has only about 10 statistics, Level 2 goes to 25 items, Level 3 ups that to about 65, and Level 4 increases to around 100 items. You can multiply the number of statistics being captured by the number of objects in inventory to get a rough idea of how much data is being inserted into the VC DB.

Stored procedures are run as scheduled jobs in SQL Server every 30 minutes, 2 hours, and 24 hours. Every half hour the 5 minute statistics are rolled up into the half hour roll up. Every 2 hours the half hour statistics are rolled up, and every day the two hour statistics are rolled up. After a statistic is a year old, it is automatically purged. Some statistics (like the 5 minute statistics) are kept for shorter periods of time.
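To make the rollup cadence concrete, here is a toy sketch of the idea (my illustration only, not VMware's actual stored procedures): each pass averages groups of finer-grained samples into the next coarser interval.

```python
from statistics import mean

def roll_up(samples, factor):
    """Average consecutive groups of `factor` samples into one coarser sample.
    For example, six 5-minute samples become one 30-minute sample."""
    return [mean(samples[i:i + factor]) for i in range(0, len(samples), factor)]

# Twelve hypothetical 5-minute CPU-usage samples covering one hour
five_min = [10, 12, 11, 13, 12, 14, 20, 22, 21, 23, 22, 24]
half_hour = roll_up(five_min, 6)   # the half-hour rollup job
two_hour = roll_up(half_hour, 4)   # the 2-hour job would consume 4 half-hour samples
print(half_hour, two_hour)
```

The real jobs do this inside SQL Server, of course, and also handle the age-based purging described above; the sketch just shows the aggregation step.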

Since performance statistics are purged periodically, it’s important to understand that most of the unchecked table growth comes from alarms and events.

The performance improvements in VC 2.5 allow the VC DB and stored procedures to scale much higher and greatly reduce the amount of time required for the operations to complete. In addition, the refactoring also allowed the VC 2.5 database to reduce the amount of space required to store the same amount of performance data.

Bruce next shows some performance results about response times for various operations (rollup, purge, and insert).

When it comes to sizing for optimal performance, memory is the most important thing. Disk I/O and CPU are important, but memory will most likely have the greatest impact. SQL Server’s TEMPDB isn’t as important in VC 2.5 as it was in VC 2.0. Keep in mind that SQL Server will take advantage of multiple cores, and the VC DB is optimized to help take advantage of SQL Server’s parallelism.

Bruce again shows how more memory will greatly reduce the time required to perform two-hour rollups on a very large inventory with Level 4 performance statistics. This just reinforces the need to give VC plenty of memory.

To help calculate the sample size of performance statistics, use this formula:

Sample size = (number of entities) * (statistics per level)

Based on the sample size, VMware has come up with some recommendations for VC DB hardware:

Up to 40K: 1 core, 1GB of RAM, less than 120 IOPS
40K to 80K: 2 cores, 2GB of RAM, 120-200 IOPS
80K to 120K: 4 cores, 4GB of RAM, 200-300 IOPS
120K to 160K: 4 cores, 6GB of RAM, 400-500 IOPS
160K to 200K: 6 cores, 8GB of RAM, 500-600 IOPS
More than 200K: 8 cores, 12GB of RAM, 650 IOPS
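Putting the formula and the table above together, a quick back-of-the-napkin sizing helper might look like this. The tier boundaries and per-level statistic counts come straight from my session notes; the helper itself is my sketch, not a VMware tool.

```python
# Approximate statistics captured per object at each collection level (per the session)
STATS_PER_LEVEL = {1: 10, 2: 25, 3: 65, 4: 100}

# (sample-size ceiling, cores, RAM in GB, rough IOPS) from the session's sizing table
SIZING_TIERS = [
    (40_000, 1, 1, "<120 IOPS"),
    (80_000, 2, 2, "120-200 IOPS"),
    (120_000, 4, 4, "200-300 IOPS"),
    (160_000, 4, 6, "400-500 IOPS"),
    (200_000, 6, 8, "500-600 IOPS"),
]

def sample_size(num_entities, level):
    """Sample size = (number of entities) * (statistics per level)."""
    return num_entities * STATS_PER_LEVEL[level]

def recommend(sample):
    """Map a sample size onto the session's recommended VC DB hardware tier."""
    for ceiling, cores, ram_gb, iops in SIZING_TIERS:
        if sample <= ceiling:
            return cores, ram_gb, iops
    return 8, 12, "650 IOPS"  # the "more than 200K" tier

# Hypothetical example: 500 inventory objects collecting Level 3 statistics
size = sample_size(500, 3)
print(size, recommend(size))
```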

Next he moves on to some performance tips for VC 2.5. One trick is to configure different collection levels for different time periods. For example, longer-term data can be kept at Level 1, while short-term data can be gathered at Level 3.

From the SQL Server perspective, it may be beneficial to separate the data and log files onto different physical drives. In addition, monitoring page fullness in vpx_hist_stat1 and vpx_hist_stat2 may also help improve performance. Use the NOLOCK option for ad hoc queries to prevent blocking of other operations that need to occur, and correctly size the memory that is allocated to SQL Server. In addition, configuring parallel processing in SQL Server may be helpful; this is typically on by default, but if it is off then you may want to turn it on.

It is important to decide on a suitable backup strategy as well.

Enabling vardecimal format for vpx_hist_stat can reduce database size by about 60%. However, vardecimal format is only supported on SQL Server 2005 SP2, and only in certain editions of that version (such as Enterprise Edition).

Purging old data can also help with DB performance. Remember that performance statistics are purged automatically, but alarm and event data is not. See VMware KB article 1000125.

The presentation is still going and looks like it has about 20 minutes left at the most, but the battery is running low now so I’m going to go ahead and publish this entry. Readers who attended this session are invited to flesh out my notes in the comments for additional information that I wasn’t able to cover.


This is PO1644, VMware Update Manager Performance and Best Practices. The presenter is John Liang.

Covering some terminology before moving forward, the presenter defined a patch store (a location where patches are stored); a baseline; compliance (the state of the host or VM when compared to the baseline: compliant, not compliant, unknown, or not applicable); a scan (either a VM scan or a host scan; VM scans can be online or offline); and remediation (applying patches to a host or VM).

VUM has two deployment models. In the Internet-connected model, the VUM server knows and has connectivity to the VMware patch repository, and VUM will work closely with VirtualCenter. VUM can also be connected to multiple VC servers on multiple subnets.

In an Internet-disconnected model, VUM has no direct connectivity to the Internet and is not able to download patches for deployment. In this model, a separate Update Manager Deployment Server (UMDS) instance can download the patches. The patches can be exported to physical media and transferred to the VUM server for use in scanning and remediation.

Next, the presenter moved into a discussion of VUM sizing. VUM uses a separate database. A small deployment (20 hosts, 200 VMs, 4 scans/month for VMs, 2 scans/month for hosts) will generate 17MB/month in database storage. A medium deployment ups that to 109MB/month, and a large deployment would generate 552MB/month in database storage.
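Those monthly growth figures make capacity projections easy; here's a throwaway helper (my sketch, using the session's numbers, with the deployment labels as simple lookup keys):

```python
# Monthly VUM database growth figures quoted in the session (MB/month)
GROWTH_MB_PER_MONTH = {"small": 17, "medium": 109, "large": 552}

def projected_db_mb(deployment, months):
    """Rough projection of VUM database growth over a number of months."""
    return GROWTH_MB_PER_MONTH[deployment] * months

# A large deployment accumulates roughly 6.5GB of VUM data in a year
print(projected_db_mb("large", 12))
```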

The presenter provided some guidelines for patch store disk space, but I couldn’t capture that information before he proceeded to the next slide.

There are a number of VUM deployment models. VUM can be deployed on the same server as VC and use the same database server as VC. For medium deployments (roughly 500 VMs or 50 hosts), however, consider separating the VUM database onto a different database server. For even larger deployments (more than 1,000 VMs or 100 hosts), both VC and VUM should use separate servers and separate databases. In addition, the VUM database should be placed on different disks than the VC database, the VUM server should have at least 2GB of RAM for caching patch files (more is better), and VC and VUM should run on separate servers for maximal performance.

Next, the presenter discussed some performance results for VUM. The host running VC, VUM, and the database was a dual-socket, dual-core host with 16GB of RAM, managing VMware ESX hosts with 32GB of RAM. The results that were presented:

  • 8 seconds to download the VM guest agent
  • 27 seconds to scan a powered-off Windows VM
  • 36 seconds to scan a powered-on Windows VM
  • 8 seconds to scan a Linux VM

Next, the presenter showed some results of resource consumption during these various tasks. Compared to other operations, scanning a powered-off Windows VM consumed the most CPU on the VUM server itself. For the same task, the VC server CPU was not tremendously impacted, VMware ESX CPU was not impacted, and disk and database performance was essentially equivalent across all operations. Again, these results are all for single-operation scenarios. Out of all the tasks, the most expensive operation (relatively speaking) was an offline Windows VM scan.

VUM is limited to 5 VM remediation tasks per VMware ESX host (48 per VUM server), 6 powered-on Windows scans per VMware ESX host (42 per VUM server), 10 powered-off Windows scans per VMware ESX host (10 per VUM server), 6 powered-on Linux scans per VMware ESX host (72 per VUM server), 1 VMware ESX scan per VMware ESX host (72 per VUM server), and 1 VMware ESX remediation task per VMware ESX host (48 per VUM server). Some of these limits make sense; you can’t run more than one VMware ESX scan per VMware ESX host, for example.

The presenter gave a quick example: what if you wanted to scan 5,000 Windows VMs across 100 hosts, each host with 50 VMs and each scan taking 60 seconds? The answer: just shy of 105 minutes. I won’t go into the math details.
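Working backward from the quoted answer, it's a simple batching calculation. One caveat on my part: an effective server-wide concurrency of 48 scans reproduces the "just shy of 105 minutes" figure (5000/48 × 60s ≈ 104.2 minutes), even though the per-server limit listed above for powered-on Windows scans is 42, so treat the exact concurrency as my inference rather than a number stated in the session.

```python
def total_scan_minutes(num_vms, concurrency, seconds_per_scan=60):
    """Time to scan num_vms when at most `concurrency` scans run at once,
    assuming a fixed per-scan duration and perfectly packed batches."""
    return (num_vms / concurrency) * seconds_per_scan / 60

# Assumed concurrency of 48 reproduces the session's "just shy of 105 minutes"
print(round(total_scan_minutes(5000, 48), 1))
```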

Entering maintenance mode can be blocked for the following reasons:

  • VMware HA is configured with only two hosts
  • VMware DRS fails to VMotion a VM to another host
  • VMware DRS is configured for manual mode instead of automatic mode
  • Without VMware DRS, there is a VM powered on

To correct these problems, use VMware DRS and configure it to use automatic mode, and use more than 2 hosts in a cluster.

It’s important to remember that the guest agent is single-threaded, and the Shavlik scan and remediation are also single-threaded. Using multiple vCPUs won’t necessarily help with guest OS performance with regards to patching.

What is the impact of patching on guest memory? VUM will create virtual CD-ROM images, attach them to the guest, and then issue the remediation command to the VM. This will trigger a fairly significant amount of network traffic between the VUM server and the VMs being remediated. This can have a significant impact on network performance. The remediation process itself is also memory intensive, which can be further exacerbated by larger patches (Windows XP SP3 is 331MB, for example). To help with performance, VMware recommends at least 1GB of RAM for Windows VMs.

Next, the presenter tackles the subject of the impact of high-latency networks on VUM operation. The time taken by various operations (online scans, offline scans, remediation, etc.) is directly related to network latency; the higher the latency, the longer the operation takes. Online VM scans are the only exception; they remain constant and very low.

To help address this potential problem, VMware recommends deploying the VUM server as close to the VMware ESX hosts as possible. This will help reduce network latency and packet drops. In addition, use online scans on high-latency networks to minimize the impact of network latency.

An offline scan works by having the VUM server mount the VMDKs for the offline Windows VMs and then scan them directly from the VUM server. This explains why the CPU utilization on the VUM server is so directly impacted as a result of performing offline Windows VM scans; it has to mount the VMDK and scan the Registry and disks locally on the VUM server.

To help optimize offline scans, exclude “\Device\vstor*” in any on-access anti-virus software on the VUM server itself. This will prevent the VUM server from performing more I/O operations than necessary. Making this optimization helps improve performance by reducing latency by almost 50% on a high-latency network. The impact is almost negligible on a low-latency network. The presenter walks through excluding the appropriate device/location in anti-virus, something with which most users in here are probably already familiar.

The session’s closing list of best practices:

  • Use VMware DRS in automatic mode for host patching.
  • Separate physical disks for patch store and VUM database.
  • Use at least 2GB RAM for VUM server host to cache patch files in memory.
  • Separate the VUM server database from the VC database if the inventory is large enough.
  • Using multiple vCPUs in guests won’t necessarily help performance.
  • Deploy VUM close to ESX hosts where possible.
  • Prefer online scan on high-latency networks.
  • Configure on-access anti-virus appropriately on the VUM server.

At this point, the presenter closes the session with a summary of VUM and its role in a VMware Infrastructure deployment.


No Liveblog for TA2441

I originally had TA2441, VI3 Networking Concepts and Best Practices, on my schedule, but upon a close review of the agenda it looks like this is all stuff I’ve seen before. In fact, it looks like stuff I’ve written about quite extensively, so I’m skipping out on this session. Sorry to disappoint anyone!


This is the liveblog for TA2644, Networking I/O Virtualization, presented by Pankaj Thakkar, Howie Xu, and Sean Varley.

The session starts out with yet another overview of VDC-OS. This session will focus on technologies that fall into the vNetwork infrastructure vService. The agenda for the session includes networking I/O virtualization, virtualized I/O, and VMDirectPath.

Starting out, the presenter first defines exactly what networking I/O virtualization is: the muxing/demuxing of packets between VMs and the physical network. VMs need to be decoupled from physical adapters, and the networking I/O virtualization layer must be fast and efficient.

Now that the audience has an idea of what networking I/O virtualization is, the presenter changes focus to talk about the architecture that provides I/O virtualization. First, there is a virtual device driver that can either model a real device (e1000, vlance) or that can model a virtualization-friendly device (vmxnet, a paravirtualized device). I’m glad to see the vendor refer to this as a paravirtualized device, since that’s really what it is.

Below the virtual device and virtual device driver, there is the virtual networking I/O stack. This is where features like software offloads, packet switching, NIC teaming, failover, load balancing, traffic shaping, etc., are found.

Finally, at the lowest layer, are the physical devices and their drivers.

The next discussion is tracing the life of a received packet through the virtualized I/O stack. After tracing the life of a packet, the presenter discusses some techniques to help reduce the overhead of network I/O. These techniques include zero copy TX, jumbo frames, and TCP segmentation offload (large send offload and large receive offload).

The problem with using jumbo frames, though, is that the entire network must be configured to support jumbo frames. Instead, the use of TSO (or LSO, as it is sometimes known) can help because it pushes the segmentation of data into standard size MTU segments down to the NIC hardware. This is fast, but even a software-only implementation of TSO can provide benefits.

(As a side note, it’s difficult to really understand the presenter; he has a very thick accent.)

On the receive side, the technology called NetQueue is intended to help improve performance and reduce overhead. When the NIC receives the packet, it classifies the packet into the appropriate per-VM queue and notifies the hypervisor. The presence of multiple queues allows this solution to scale with the number of cores present in the hardware. It also looks like NetQueue can be used in load balancing/traffic shaping, although I’m unclear exactly how as I didn’t understand what the presenter said.
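Conceptually (and only conceptually; this is a toy sketch, not how the vmkernel or NIC firmware actually implements it), NetQueue's receive-side classification amounts to sorting each incoming frame into a per-VM queue by its destination MAC address:

```python
from collections import defaultdict

class NetQueueSketch:
    """Toy model of NetQueue-style receive classification: the NIC sorts
    incoming frames into per-VM queues keyed by destination MAC."""
    def __init__(self):
        self.queues = defaultdict(list)  # MAC address -> packet queue

    def receive(self, dst_mac, frame):
        self.queues[dst_mac].append(frame)
        # Real hardware would now notify the hypervisor that this queue has work

    def drain(self, dst_mac):
        """The core servicing this VM drains only its own queue, which is
        what lets the design scale with the number of cores."""
        frames, self.queues[dst_mac] = self.queues[dst_mac], []
        return frames

nic = NetQueueSketch()
nic.receive("00:50:56:aa:bb:01", "frame-for-vm1")
nic.receive("00:50:56:aa:bb:02", "frame-for-vm2")
nic.receive("00:50:56:aa:bb:01", "another-frame-for-vm1")
print(nic.drain("00:50:56:aa:bb:01"))
```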

Zero copy TX was discussed earlier (copy the packet from the VM directly to the NIC), but there was no discussion of zero copy RX. With NetQueue and VM MAC addresses being associated with the various queues, it’s also possible to do zero copy RX. The caveat: the guest can access the data before it is actually delivered to it.

The focus of the presentation now shifts to a discussion of VMDirectPath, or I/O passthrough. This technology initiative from VMware requires a hardware I/O MMU to perform DMA address translation. In this scenario, the guest controls the physical hardware and the guest will have a driver for that specific piece of hardware. VMDirectPath also needs a way to provide a generic device reset; FLR (Function-Level Reset) is a PCI standard that provides this.

SR-IOV (Single Root I/O Virtualization) is a PCI standard that allows multiple VMs to share a single physical device. If I understand correctly, this means that VMDirectPath will allow multiple VMs to share a single physical device via SR-IOV. Part of SR-IOV is creating virtual functions (VF) that the guest sees and mapping those to physical functions (PF) that the physical hardware controls and sees.
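A crude way to picture the VF/PF split, as I understand it (a sketch of the concept, not actual driver code): the physical function owns the real device, and virtual functions are carved out of it and handed to individual guests.

```python
class PhysicalFunction:
    """Toy model of an SR-IOV physical function (PF): owned by the
    hypervisor, it controls the real device and hands out virtual
    functions (VFs) so multiple VMs can share one physical NIC."""
    def __init__(self, name, max_vfs):
        self.name = name
        self.max_vfs = max_vfs
        self.vfs = {}  # VF id -> guest that sees it

    def create_vf(self, guest):
        if len(self.vfs) >= self.max_vfs:
            raise RuntimeError("no free virtual functions")
        vf_id = len(self.vfs)
        self.vfs[vf_id] = guest  # the guest sees only its VF, never the PF
        return vf_id

pf = PhysicalFunction("vmnic0", max_vfs=2)
vf_for_vm1 = pf.create_vf("vm1")
vf_for_vm2 = pf.create_vf("vm2")
print(pf.vfs)
```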

Challenges with VMDirectPath include:

  • Transparent VMotion: Because the guest controls the device, there’s no way to control device state, so VMotion won’t be possible. This is logical and fully expected, but certainly has an impact on the usefulness of this technology.
  • VM management: Users are now placed back into the issue of managing device drivers into VMs based on the hardware to which they are connected.
  • Isolation and security: A lot of the features provided by the hypervisor (VMsafe, MAC spoofing, promiscuous mode, VMware FT, etc.) are lost when using VMDirectPath.
  • No memory overcommitment: Physical devices will DMA directly into guest memory, which requires that memory overcommitment be disabled.

Although these limitations around VMDirectPath are significant, there still can be valid use cases. Consider appliance VMs or special purpose VMs, such as a VM to share local storage or a firewall VM, where technologies like VMotion or VMware FT aren’t necessary or aren’t desired.

Generation 2 of VMDirectPath will attempt to address the challenges described above. One way of accomplishing that is called “uniform passthrough,” in which there is a uniform hardware/software interface for the passthrough part. This allows a transparent switch between hardware and software from the hypervisor while the guest is not affected or even aware of the mode. This puts the control path under the control of the hypervisor, but bypasses the hypervisor for the data path.

This Gen2 implementation allows for migration because the mode is switched from direct to emulation transparently and without any special support within the guest OS.

Another way of implementing this is described by Sean Varley of Intel. This method is called Network Plug-in Architecture. Most of the functionality in this solution is embedded inside the guest device driver, which typically would be VMware’s paravirtualized vmxnet driver. Sean underscores the need for SR-IOV support in order to really take advantage of VMDirectPath, because it doesn’t really scale otherwise.

This particular solution consists of a guest OS-specific shell and an IHV-specific hardware plug-in. The interface of the guest OS-specific shell will be well-known and is the subject of a near-future joint disclosure between Intel and VMware. The plug-in will allow various other IHVs to write software that will allow their hardware to be used in this approach with VMDirectPath.

This plug-in also allows for emulated I/O, similar to what ESX offers today, in the event that SR-IOV support is not available or if the user does not want to use VMDirectPath. Upon initialization, the guest OS shell will load the appropriate plug-in and (where applicable) create a VF that maps onto a VMDirectPath-enabled physical NIC.

Migration in this scenario is enabled because the hypervisor remains in control of the state of the shell and the plug-in at all times. The hypervisor can reset the VF, migrate the VM, and then load a new plug-in on the destination via the initialization process described earlier.
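As I understood the demo, the migration trick boils down to the hypervisor swapping plug-ins underneath a stable shell interface. Here's a toy sketch of that flow (the shell, plug-in names, and mapping are all my hypothetical illustration):

```python
class VmxnetShell:
    """Toy model of the plug-in architecture: a guest-OS shell driver that
    loads whichever hardware plug-in matches the underlying NIC."""
    def __init__(self):
        self.plugin = None

    def load_plugin(self, nic_type):
        # Hypothetical mapping; real plug-ins would be IHV-supplied
        self.plugin = "sriov-vf-plugin" if nic_type == "sriov" else "emulated-plugin"
        return self.plugin

def migrate(shell, dest_nic_type):
    """Hypervisor-side sketch: reset/detach on the source host, move the VM,
    then re-run plug-in initialization against the destination's hardware."""
    shell.plugin = None
    return shell.load_plugin(dest_nic_type)

shell = VmxnetShell()
shell.load_plugin("sriov")        # VM starts on an SR-IOV-capable NIC
print(migrate(shell, "emulated")) # lands on a host without SR-IOV, as in the demo
```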

The key advantages of this particular approach are IHV independence, driver containment, and hypervisor control. This enables IHV differentiation and removes the VM management headache described earlier (VMs won’t need hardware-specific drivers). Hypervisor control is maintained because the SR-IOV split model of VF/PF is maintained, and the hypervisor controls plug-in initialization and configuration.

The session ends with a sneak preview demonstration of a migration using the plug-in architecture and VMDirectPath, migrating a VM between an SR-IOV-enabled NIC and a non-enabled NIC on a separate host. The presenter showed how the vmxnet driver loaded the appropriate plug-in based on the underlying hardware.


There is no general session this morning at VMworld 2008; instead, a “keynote” will be delivered about automating disaster recovery (DR) using VMware Site Recovery Manager (SRM). This is similar to the way in which other vendors have delivered various “keynotes” throughout the conference instead of all the announcements being crammed into the morning general sessions.

The speaker this morning is Jay Judkowitz, the product manager for VMware SRM. I’ve met Jay before; he’s a good guy. There’s a small technical glitch as the session begins because the slide deck doesn’t come up, but that gets resolved within only a few minutes and Jay begins his presentation.

The presentation begins with yet another overview of the VDC-OS vision; SRM is considered one of the vCenter management vServices. Jay then goes on to address all the various ways in which VMware provides application availability for applications hosted on VMware Infrastructure. This would be technologies like VMotion, VMware HA, VMware DRS, VMware FT, NIC teaming, storage multipathing, and of course Site Recovery Manager.

The traditional challenges of DR (including complex recovery processes and procedures, hardware dependence, and the inability to test extensively or repeatedly) are all addressed by VMware SRM. More accurately, they are addressed by the products that form a foundation underneath VMware SRM: features like hardware independence, encapsulation, partitioning and consolidation, and resource pooling all have a direct play in a DR environment. It’s funny to see Jay taking this particular approach; it’s almost like he’s using the same slide deck that I’ve used in DR presentations given over the last couple of months.

That finally brings the discussion around to Site Recovery Manager specifically. Jay goes over some of the features of SRM, and discusses some “do’s and don’ts” for SRM. For example, SRM isn’t really intended to provide failover for a single VM, although you can architect it to do that (put that VM on a single LUN by itself and create a Protection Group for that LUN and VM, then craft your Recovery Plan).

It’s important to note that SRM is not a replication product, but instead relies upon replication products from supported partners. This is done via the Storage Replication Adapter (SRA), a piece of software written by the storage vendor.

When setting up SRM, there are a number of steps to go through. First, you have to integrate with the storage replication already in place (and yes, the storage replication needs to be in place already). Next, you need to map recovery resources; this creates the link between resources used at the Protected Site and resources that will be used at the Recovery Site. Third, you need to create Recovery Plans, the automated equivalent of the DR runbook. That is, the Recovery Plan defines which VMs will fail over, and in which order, at the Recovery Site. That’s a bit of a simplistic overview, but it does get the point across.
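To make the runbook analogy concrete, here's a minimal sketch of what a Recovery Plan boils down to (my illustration; real SRM plans carry far more state, such as priority tiers, scripts, and pause points): an ordered list of VMs to power on at the Recovery Site.

```python
def run_recovery_plan(plan):
    """Execute a toy Recovery Plan: power on protected VMs at the
    recovery site in the order the plan specifies."""
    powered_on = []
    for vm in sorted(plan, key=lambda v: v["priority"]):
        powered_on.append(vm["name"])  # a real plan would drive the VC API here
    return powered_on

# Hypothetical plan: database first, then app server, then web tier
plan = [
    {"name": "web01", "priority": 3},
    {"name": "db01", "priority": 1},
    {"name": "app01", "priority": 2},
]
print(run_recovery_plan(plan))
```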

At this point, I’ve decided that I’m going to try to get into a different session. I’m quite familiar with SRM, a lot of readers are probably familiar with it as well, and it doesn’t look like there is anything new that will be revealed here. For those readers that aren’t familiar with SRM, let me know in the comments. If there’s enough interest, I’ll write something separate after my return from VMworld 2008.


As I fully expected, another day at VMworld 2008 has passed and I seriously, seriously was not able to keep up with everything. I even blew off a session this afternoon that I really wanted to attend as well. Somebody, please—can you give me good session notes on TA2275, VMware Infrastructure Virtual Networking Future Directions? I’d be very grateful!

In any case, here’s a round-up of more coverage of VMworld 2008 from various places around the Internet:

Brian Madden

New VMware CEO makes the desktop a core focus for the company, with SIX desktop announcements at VMworld

Alessandro Perilli

Live from VMworld 2008: Day 2 – VMware Keynote

Rich Brambley

VMworld 2008 General Session Day 2
Linux Strategy and Roadmap #TA3201

Matthias Müller-Prove

Sun Ray Connector for VMware VDM certified

Colin McNamara

Altor Virtual Network Security Analyzer (VNSA) integrated with Cisco’s Nexus 1000v for VMware

Rick Blythe

VMware Fault Tolerance

Bob Plankers

VMworld 2008 Day 2 General Session


VMworld 2008 – Wednesday general session
VMworld 2008 – Tech preview: vCenter Orchestrator


VMworld 2008 – VMware CTO Dr. Stephen Herrod Keynote liveblog

Bill Petro

VMworld 2008: Day 2 Review – Virtually Anything is Possible

I guess that should about do it for today. Go have a look at some of these other articles; they captured information that I missed, and many of them have photos and shots of the keynote or other information. Enjoy!

Signing off for today…


Yesterday, September 16, 2008, the Distributed Management Task Force (DMTF) announced the release of the Open Virtualization Format (OVF) standard, along with a new management initiative called the Virtualization Management Initiative (VMAN). The full details are found here in the official press release.

<aside>Now, personally, I think they should have called it the vMan initiative so that we could continue the lowercase “v” thing that’s going on here at VMworld this year.</aside>

Anyway, a number of vendors have announced support for the VMAN initiative, including AMD, Broadcom, Citrix, Dell, HP, IBM, Intel, Sun Microsystems, Symantec, and VMware.

The DMTF site doesn’t really provide much detail about VMAN, other than to throw up some decidedly marketing-like terms:

The Virtualization Management Initiative (VMAN) from DMTF unleashes the power of virtualization by delivering broadly supported interoperability and portability standards to virtual computing environments. VMAN provides IT managers the freedom to deploy pre-installed, pre-configured solutions across heterogeneous computing networks and to manage those applications through their entire lifecycle. Management software vendors will offer a broad selection of tools that support the industry standard specifications that are a part of VMAN, thus lowering support and training costs for IT managers.

OK, but can you provide any real technical details? Exactly how is VMAN going to do this? Perhaps the PDF-based VMAN and OVF technical notes will contain more details, but until I see those this continues to look more like a marketing exercise than anything else. Sorry, I have to call it like I see it.

Tags: , , , ,

This is TA2659, Managing ESX in a COS-less World. The focus here is on tools that allow you to manage ESX without using the Service Console.

When I arrived at the session, the presenter was discussing VMware's goal of driving parity between the "actual" CLI present in the Service Console and the Remote CLI: any command that can be run on ESX should also run on ESXi, and vice versa.

Another tool for managing ESX without the Service Console is the VI Toolkit for PowerShell. I'm sure that most readers are already familiar with the VI Toolkit, so I won't go into any great detail there. There are about 120 cmdlets in the Toolkit today, with another 50 or so slated for release in the next version. In addition, VI Toolkit manageability is a core design facet: everything needs to be manageable via the VI Toolkit.

The VI Perl Toolkit is another scripting toolkit that can be used to manage ESX without relying upon the Service Console. It works on both Linux and Windows, whereas the VI Toolkit only works on Windows. The vicfg-* tools are built on Perl.

Future directions include enhancements to the VI Perl Toolkit to expose functionality as Perl functions. A VI Java Toolkit will be available soon (within a month?). Other toolkits may become available depending upon market demand and direction.

From the perspective of server health monitoring, CIM SMASH is the direction in which VMware is moving. CIM (Common Information Model) is both a protocol and a data model representing management functionality. SMASH (Systems Management Architecture for Server Hardware) is a set of CIM profiles for hardware health monitoring and management. An example of a tool that works with CIM SMASH is WinRM, which ships with Windows Vista and presumably Windows Server 2008. CIM SMASH profiles simply define the sensors, and the values to be retrieved, for various hardware elements.
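To make the "profiles define the sensors" idea concrete, here is a minimal conceptual sketch in Python. Everything here (`FAN_PROFILE`, `read_profile`, the sensor IDs) is hypothetical and invented for illustration; it models only the shape of the idea, not any actual DMTF or VMware API.

```python
# A "profile" declares which sensors to retrieve and the units expected,
# mirroring how a CIM SMASH profile declares sensors for a hardware element.
FAN_PROFILE = {
    "name": "Fan",
    "sensors": [
        {"id": "fan.1.speed", "unit": "RPM"},
        {"id": "fan.2.speed", "unit": "RPM"},
    ],
}

def read_profile(profile, host_sensors):
    """Return a reading for every sensor the profile declares.

    host_sensors stands in for a per-host CIM provider: a simple
    mapping of sensor id -> current value.
    """
    readings = []
    for sensor in profile["sensors"]:
        value = host_sensors.get(sensor["id"])
        readings.append({
            "id": sensor["id"],
            "value": value,
            "unit": sensor["unit"],
            "present": value is not None,
        })
    return readings

# A host whose providers expose both fan sensors:
host = {"fan.1.speed": 4200, "fan.2.speed": 4150}
print(read_profile(FAN_PROFILE, host))
```

The point of the sketch is that the profile is purely declarative; the same profile can be read against any host whose providers expose those sensors, which is exactly why weak providers (not the profiles) are the limiting factor.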

A fair amount of CIM SMASH functionality was exposed in ESX 3.5 Update 2. This is done per-host; in the future, VirtualCenter will aggregate that for multiple hosts.

An attendee asked about the varying degrees of hardware monitoring exposed in the initial release of ESXi, and about CIM support within both ESXi and the underlying hardware. The response: this functionality is driven by CIM providers, which are written by either VMware or the hardware vendor; in most cases, it's the providers VMware ships in ESXi that need to be beefed up.

In the future, VMware's vision is to abstract the host configuration into a configuration file. This configuration template could then be applied to multiple hosts, and the configuration could be assessed regularly to verify compliance. The information required to do this is already handled by the web services API in VirtualCenter, and the tools necessary to perform the configuration are already present in the API (the esxcfg-* and vicfg-* commands leverage the API to do these very tasks). Combining these two things would allow us to create a configuration template that can be applied to a host while still allowing for per-host customization (much like VM customization during cloning).
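The template-plus-customization idea described above can be sketched in a few lines of Python. This is purely a conceptual model under assumed names (`TEMPLATE`, `apply_template`, `is_compliant`, and the setting keys are all invented for illustration); it is not how VMware implements host profiles.

```python
# Shared template: settings every host should carry.
TEMPLATE = {
    "ntp_server": "pool.ntp.org",
    "syslog_host": "loghost.example.com",
    "vswitch_ports": 64,
}

def apply_template(template, overrides=None):
    """Produce a host configuration: template defaults plus any
    host-specific overrides (analogous to VM customization during cloning)."""
    config = dict(template)
    config.update(overrides or {})
    return config

def is_compliant(host_config, template):
    """A host is compliant when every setting the template defines is
    present on the host (overridden values still count as configured)."""
    return all(key in host_config for key in template)

# One host overrides its syslog destination but stays compliant:
esx01 = apply_template(TEMPLATE, {"syslog_host": "logs-dc2.example.com"})
print(is_compliant(esx01, TEMPLATE))   # True

# A host that drifted and lost settings fails the compliance check:
drifted = {"ntp_server": "pool.ntp.org"}
print(is_compliant(drifted, TEMPLATE))  # False
```

The design point is the separation of concerns: the template captures intent once, application produces concrete per-host configurations, and the same template doubles as the yardstick for periodic compliance assessment.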

The subject of deployment is a key issue when we think about losing the Service Console. One approach is to deploy physical machines to handle these tasks; another is to deploy virtual machines. Partners could wrap up the agents that would typically run in the Service Console as virtual appliances, but users could then end up with numerous virtual appliances. What if VMware were to provide a single virtual infrastructure management appliance? That's exactly what VIMA (Virtual Infrastructure Management Assistant) is.

VIMA is a virtual appliance packaged as OVF and distributed, maintained, and supported by VMware. Customers download and install it according to their own management procedures, and it provides a well-known deployment environment that partners can rely upon being present. The appliance is a 64-bit Linux distribution with VMware Tools, the VI Perl Toolkit, the Remote CLI (now known as the VI CLI), and a JRE already installed. VIMA can be patched for updates, and it allows you to manage one or more VMware ESX hosts, either directly or through VirtualCenter. VIMA can enable agents to authenticate themselves, and it will rotate its passwords on the hosts. Additionally, sample code and documentation will be available for writing applications that work in VIMA.
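The password-rotation behavior is worth illustrating, since it's the piece that lets agents authenticate without ever holding host credentials themselves. The Python sketch below is a hypothetical model of that concept only; the class and method names are invented, and this is not VIMA's actual implementation.

```python
import secrets

class ManagementAppliance:
    """Toy model of an appliance that owns a service-account password
    per managed host and rotates those passwords on a schedule."""

    def __init__(self):
        self._credentials = {}  # host name -> current password

    def add_host(self, host):
        # Provision a fresh random credential for a newly managed host.
        self._credentials[host] = secrets.token_urlsafe(16)

    def rotate(self):
        """Replace every stored password. Agents authenticate through the
        appliance, so they never see or store host passwords themselves."""
        for host in self._credentials:
            self._credentials[host] = secrets.token_urlsafe(16)

    def password_for(self, host):
        return self._credentials[host]

vima = ManagementAppliance()
vima.add_host("esx01.example.com")
before = vima.password_for("esx01.example.com")
vima.rotate()
after = vima.password_for("esx01.example.com")
print(before != after)  # rotation replaced the credential
```

The benefit of this pattern is that a compromised agent exposes no long-lived host password, and rotation happens in one place instead of in every agent's configuration.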

In “classic” ESX, management agents and hardware agents ran in the Service Console; with VIMA, updated management agents will talk through the VI API and hardware agents will talk through CIM SMASH. An example of this is the APC PowerChute Network Shutdown (PCNS), which is being rewritten to use the VI API and will run in VIMA.

Anyone interested in VIMA can e-mail [email protected] and request access to pre-GA versions of VIMA. VIMA is expected for general release in the fourth quarter of this year. All VIMA releases will work with both ESX and ESXi (again, pointing to the desire to keep parity between these two products).

Future versions of VIMA may add Active Directory support; authentication through vCenter Servers; improved automation, configuration, and updates; UI integration in the VI Client; and additional VMware components pre-installed. Finally, VMware Studio will be used to build future versions of VIMA.

At this point, the presentation ended and the floor was opened up for a question and answer session.

Tags: , , , , , ,
