NetApp

I’ve written a couple of times about NetApp virtual interfaces (VIFs), Data ONTAP’s term for link aggregates built with either static EtherChannel or dynamic LACP. The earlier articles are:

Cisco Link Aggregation and NetApp VIFs
LACP with Cisco Switches and NetApp VIFs

I came across an issue today that I wasn’t aware of. I’ve been working with a fellow engineer on a new NetApp deployment that called for a number of different VIFs to be created: one for CIFS traffic, one for NFS traffic, and one for SnapMirror traffic. (Yes, I know the SnapMirror VIF won’t really use more than one link because it’s all point-to-point traffic; it’s primarily for redundancy.) There were some really strange network issues going on, like losing connectivity to the default gateway one moment and having connectivity restored the next. We were having a hard time troubleshooting the problem until one of the network engineers casually commented that it looked like the LACP bundles (the aggregated links represented by the VIFs on the NetApp storage array) weren’t really coming up.

That comment led to a deeper inspection of the NetApp VIFs and eventually a case with NetApp. In the end, we learned that multimode VIFs can’t span built-in NICs and add-in NICs. Since the FAS3000 series has a limited number of built-in NICs, we’d installed two additional quad-port NICs and then, as was customary, created VIFs spanning the built-in NICs and the add-in NICs for maximum redundancy. Well, that doesn’t work!

Once we reconfigured the Cisco switches (these were Cisco Catalyst 3750 switches uplinked via 10 Gigabit Ethernet to Catalyst 6509 switches) so that the link aggregates only contained add-in NICs or built-in NICs but not both, the connections came up fully and the network connectivity issues disappeared.

So, when creating multimode VIFs, be sure to only include NICs from add-in cards or the built-in NICs, but not both.
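
In case it’s helpful, here’s a rough sketch of the sort of configuration that ended up working for us for a dynamic multimode (LACP) VIF. The interface names, IP address, VLAN, and port-channel number are made up for illustration, so adjust them for your environment and verify the syntax against your versions of Data ONTAP and Cisco IOS.

On the NetApp controller, using ports from a single add-in quad-port card only (and adding the same commands to /etc/rc so they persist across reboots):

vif create lacp mmvif0 -b ip e4a e4b e4c e4d
ifconfig mmvif0 192.168.10.10 netmask 255.255.255.0

On the Cisco Catalyst 3750, the matching ports go into a single LACP port channel:

interface range GigabitEthernet1/0/1 - 4
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode access
 switchport access vlan 100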


Welcome to Virtualization Short Take #25, the first edition of this series for 2009! Here I’ve collected a variety of articles and posts that I found interesting or useful. Enjoy!

  • We’ll start off today’s list with some Hyper-V links. First up is this article on how to manually add a VM configuration to Hyper-V. I’d be interested to know some of the technical details—i.e., the design decisions that led Microsoft to architect things this way—that might explain why this process is, in my opinion, so complicated. Was it scalability? Manageability? If anyone knows, please share your information in the comments.
  • It looks like this post by John Howard on how to resolve event ID 4096 with Hyper-V is also closely related.
  • This blog post brings to light a clause in Microsoft’s licensing policy that forces organizations to use Windows Server 2008 CALs when accessing a Windows Server 2003-based VM hosted on Hyper-V. In the spirit of disclosure, it’s important to note that this was written by VMware, but an independent organization apparently verified the licensing requirements. So, while you may get Hyper-V at no additional cost (not free) with Windows Server 2008, you’ll have to pay to upgrade your CALs to Windows Server 2008 in order to access any Windows Server 2003-based VMs on those Hyper-V hosts. Ouch.
  • Wrapping up this edition’s Microsoft virtualization coverage is this post by Ben Armstrong warning Hyper-V users about the use of physical disks with VMs. Apparently, it’s possible to connect a physical disk to both the Hyper-V parent partition as well as a guest VM, and…well, bad things can happen when you do that. The unfortunate part is that Hyper-V doesn’t block users from doing this very thing.
  • Daniel Feller asks the question, “Am I the only one who has trouble understanding Cloud Computing?” No, Daniel, you’re not the only one—I’ve written before about how amorphous and undefined cloud computing is. In this post over at the Citrix Community site, Daniel goes on to indicate that cloud computing’s undefined nature is actually its greatest strength:

    As I see it, Cloud Computing is a big white board waiting for organizations to make their requirements known. Do you want a Test/QA environment to do whatever? This is cloud computing. Do you want someone to deliver office productivity applications for you? That is cloud computing. Do you want to have all of your MP3s stored on an Internet storage repository so you can get to it from any device? That is also cloud computing.

    Daniel may be right there, but I still insist that there need to be well-defined and well-understood standards around cloud computing in order for cloud computing to really see broad adoption. Perhaps cloud computing is storing my MP3s on the Internet, but what happens when I want to move to a different MP3 storage provider? Without standards, that becomes quite difficult, perhaps even impossible. I’m not the only one who thinks this way, either; check this post by Geva Perry. Until some substance appears in all these clouds, people are going to hold off.

  • Rodney Haywood shared a useful command to use with VMware HA in this post about blades and VMware HA. He points out that it’s a good idea to spread VMware HA primary nodes across multiple blade chassis so that the failure of a single chassis does not take down all the primary nodes. One note about using the “ftcli” command: you’ll need to set the FT_DIR environment variable first using “export FT_DIR=/opt/vmware/aam” (assuming you’re using bash as the shell on VMware ESX); there’s a quick example after this list. Otherwise, his advice to spread clusters across multiple chassis and to ensure that primary agents are spread across them is well worth following.
  • Joshua Townsend has a good post at VMtoday.com about using PowerShell and SQL queries to determine the amount of free space within guest VMs. As he states in his post, this can often impact the storage design significantly. It seems to me that there used to be a plug-in for vCenter that added this information, but I must be mistaken as I can no longer find it. Oh, and one of Eric Siebert’s top 10 lists also points out a free utility that will provide this information as well.
  • I don’t have a record of where this information turned up, but this article from NetApp (NOW login required) on troubleshooting NFS performance was quite helpful. In particular, it linked to this VMware KB article that provides in-depth information on how to identify IRQ sharing that’s occurring between the Service Console and the VMkernel. Good stuff.
  • Want more information on scaling a VMware View installation? Greg Lato posts a notice about the VMware View Reference Architecture Kit, available from VMware, that provides more information on some basic “building blocks” for creating a large-scale View implementation. I’ve only had the opportunity to skim through the documents, but I like what I’ve seen thus far. Chad also mentions the Reference Architecture Kit on his site.
  • Duncan at Yellow Bricks posts yet another useful “in the trenches” post about VMFS-3 heap size. If your VMware ESX server is handling more than 4TB of open VMDK files, then it’s worth having a look at this VMware KB article.
  • “Virtual routing” is an interesting idea, but I share the thoughts of one of the commenters in that technologies like VMotion/XenMotion/live migration may not be able to respond quickly enough to changing network patterns to be effective. Perhaps it’s just my server-centric view showing itself, but it seems more “costly” (in terms of effort) to move servers around to match traffic flow than to just route the traffic accordingly.
  • Crossbow looks quite cool, but I’m having a hard time understanding the real business value. I am quite confident that my lack of understanding is simply a reflection of the fact that I don’t know enough about Solaris Containers or how Xen handles networking, but can someone help me better understand? What is the huge deal with Crossbow?
  • Jason Boche shares some information with us about how to increase the number of simultaneous VMotion operations per host. That information could be quite handy in some cases.
  • I had high hopes for this document on VMFS best practices, but it fell short of my hopes. I was looking for hard guidelines on when to use isolation vs. consolidation, strong recommendations on VMFS volume sizes and the number of VMs to host in a VMFS volume, etc. Instead, I got an overview of what VMFS is and how it works—not what I needed.
  • Users interested in getting started with PowerShell with VMware Infrastructure should have a look at this article by Scott Herold. It’s an excellent place to start.
  • Here’s a list of some of the basic things you should do on a “golden master” template for Windows Server VMs. I actually disagree with #15, preferring instead to let Windows manage the time at the guest OS level. The only other thing I’d add: be sure your VMDK is aligned to the underlying storage. Otherwise, this is a great checklist to follow.
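
As promised above, here’s roughly what checking the VMware HA (AAM) node roles looks like from the ESX Service Console. I’m writing this from memory, so treat the exact ftcli arguments as an approximation and double-check them against Rodney’s and Duncan’s posts before relying on them:

export FT_DIR=/opt/vmware/aam
/opt/vmware/aam/bin/ftcli -domain vmware -connect localhost -port 8042 -timeout 60 -cmd "listnodes"

The output lists each host in the cluster along with its agent state and whether it is a primary or secondary node, which makes it easy to see how the primaries are distributed across chassis.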

I think that should just about do it for this post. Comments are welcome!


I was visiting Unclutterer and saw them sharing older content from the site in a similar fashion. So, I thought I might try it here. Enjoy some of these “blasts from the past”!

One Year Ago on blog.scottlowe.org

LACP with Cisco Switches and NetApp VIFs
Hyper-V Architectural Issue
Latest VDI Article Published

Two Years Ago on blog.scottlowe.org

Bookmark Spam?
Personal Computing as a Collection of VMs?
Application Agnosticism

Three Years Ago on blog.scottlowe.org

Mac OS X and .local Domains
WMF Flaw Exploit Grows Worse
Complete Linux-AD Authentication Details


This session described VMware Site Recovery Manager (SRM) on NetApp storage. The session started out with a review of VMware SRM, its features and functionality, and some of the requirements. I was not aware, for example, that SRM cannot use SQL Server Express like VirtualCenter can; you must use a full-blown instance of SQL Server. Given VMware’s development history, I should not have been surprised to find that Perl 5.8 is required (it’s included in the distribution and installed automatically).

On the NetApp side, it’s important to note that users must first configure SecureAdmin in order for VMware SRM to use HTTPS when communicating with the NetApp storage arrays. If this isn’t done first, then the NetApp Site Recovery Adapter (SRA) will drop back to plain HTTP. The storage controllers must also have licenses for SnapMirror, iSCSI (included with the storage controllers), FCP (where applicable), and FlexClone. Without FlexClone, it’s impossible to do failover testing. NetApp again reiterated that they anticipate seeing NFS support in VMware SRM somewhere in the March 2009 timeframe.
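
For anyone setting this up, getting SecureAdmin going on the controllers is pretty painless. A minimal sketch from the Data ONTAP console looks something like the following (verify the exact steps and option names against your version of Data ONTAP):

secureadmin setup ssl
secureadmin enable ssl
options httpd.admin.ssl.enable on

The setup step generates the self-signed certificate, and the options line makes sure the administrative HTTPS interface that the SRA talks to is actually turned on.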

Note that there is no support for SnapVault or MetroCluster in SRM, although there are some interesting synergies between MetroCluster and VMware HA that are being explored. It will be interesting to see where, if anywhere, that may lead. NetApp admins may use either Volume SnapMirror (VSM) or Qtree SnapMirror (QSM), although VSM is preferred since it preserves deduplication with replication. QSM does not.

The presenters referred attendees to TR-3671, “VMware Site Recovery Manager in a NetApp Environment,” for more detailed information.

At the Recovery Site, users must configure an additional, non-replicated datastore. This additional datastore does not have to be very large, but it’s required for storing the “shadow VMs” (or “placeholder VMs”) that are created and maintained by VMware SRM.

At present, there is no integration between SnapManager for Virtual Infrastructure (SMVI) and VMware SRM. There are numerous technical questions, and I’m not entirely sure that I fully understand the implications just yet. This will be an area that I will be exploring further so that I can better understand the considerations of using these technologies together. NetApp is working with VMware to try to resolve some of the technical concerns around SMVI-SRM integration, but that will take some time. In other words, don’t hold your breath.

Finally, if you’ve downloaded the NetApp SRA prior to the last week or so (this was back in the middle of November), download it again. There were some issues fixed that have been addressed in a more recent release of the SRA. Unfortunately, VMware would not let NetApp increment the version number on the SRA, so it’s a bit difficult to tell what version you are running. If anyone has more information on that—I don’t recall or have any notes from the session on how to do this—it would be greatly appreciated.

Other miscellaneous notes from the session:

  • There are issues backing up a VMware SRM recovery plan; it’s not currently possible to export a plan to CSV/XML and then import it back in again
  • VMware SRM and the NetApp SRA support dissimilar protocols between the Protected and Recovery Sites (e.g., FCP at Protected and iSCSI at Recovery) and dissimilar storage (e.g., FC disks at Protected and SATA disks at Recovery)
  • The appropriate iGroups must exist at the Recovery Site and the VMware ESX servers must be in the correct iGroups, but VMware SRM will handle mapping the LUNs to the iGroups

I think that’s all I have for this session. If any other session attendees have more information, please add it in the comments below.


This session provided information on enhancements to NetApp’s cloning functionality. These enhancements are due to be released along with Data ONTAP 7.3.1, which is expected out in December. Of course, that date may shift, but that’s the expected release timeframe.

The key focus of the session was new functionality that allows for Data ONTAP to clone individual files without a backing Snapshot. This new functionality is an extension of NetApp’s deduplication functionality, and is enabled by changes within Data ONTAP that enable block sharing, i.e., the ability for a single block to appear in multiple files or in multiple places in the same file. The number of times these blocks appear is tracked using a reference count. The actual reference count is always 1 less than the number of times the block appears. A block which is not shared has no reference count; a block that is shared in two locations has a reference count of 1. The maximum reference count is 255, so that means a single block is allowed to be shared up to 256 times within a single FlexVol. Unfortunately, there’s no way to view the reference count currently, as it’s stored in the WAFL metadata.

As with other cloning technologies, the only space that is required is for incremental changes from the base. (There is a small overhead for metadata as well.) This functionality is going to be incorporated into the FlexClone license and will likely be referred to as “file-level FlexClone”. I suppose that cloning volumes will be referred to as “volume FlexClone” or something similar.

This functionality will be command-line driven, but only from advanced mode (you must run “priv set adv” to access the commands). The commands are described below.

To clone a file or a LUN (the command is the same in both cases):

clone start <src_path> <dst_path> -n -l

To check the status of a cloning process or stop a cloning process, respectively:

clone status
clone stop

Existing commands for Snapshot-backed clones (“lun clone” or “vol clone”) will remain unchanged.
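
To put the new commands in context, here’s a hypothetical example of cloning a VMDK inside an NFS datastore volume. The paths are made up, and I’ve left off the -n and -l switches shown above; as always, verify against the final 7.3.1 documentation:

priv set advanced
clone start /vol/vm_datastore/gold_vm.vmdk /vol/vm_datastore/clone_vm01.vmdk
clone status
priv set admin

Because no Snapshot is involved, the clone shares its blocks with the source file immediately and only consumes new space as either copy changes.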

File-level cloning will integrate with Volume SnapMirror (VSM) without any problems; the destination will be an exact copy of the source, including clones. Not so for Qtree SnapMirror (QSM) and SnapVault, which will re-inflate the clones to full size. Users will need to run deduplication on the destination to try to regain the space. Dump/restores will work like QSM or SnapVault.

Now for the limitations, caveats and the gotchas:

  • Users can’t run single-file SnapRestore and a clone process at the same time.
  • Users can’t clone a file or a LUN that exists only in a Snapshot. The file or LUN must exist in the active file system.
  • ACLs and streams are not cloned.
  • The “clone” command does not work in a vFiler context.
  • Users can’t use synchronous SnapMirror with a volume that contains cloned files or LUNs.
  • Volume SnapRestore cannot run while cloning is in progress.
  • SnapDrive does not currently support this method of cloning. It’s anticipated that SnapManager for Virtual Infrastructure (SMVI) will be the first to leverage this functionality.
  • File-level FlexClone will be available for NFS only at first. Although it’s possible to clone data regions within a LUN, support is needed at the host level that isn’t present today.
  • Because blocks can only be shared 256 times (within a file or across files), it’s possible that some blocks in a clone will be full copies. This is especially true if there are lots of clones. Unfortunately, there is no easy way to monitor or check this. “df -s” can show space savings due to cloning, but that isn’t very granular.
  • There can be a maximum of 16 outstanding clone operations per FlexVol.
  • There is a maximum of 16TB of shared data among all clones. Trying to clone more than that results in full copies.
  • The maximum volume size for being able to use cloning is the same as for deduplication.

Obviously, VMware environments—VDI in particular—are a key use case for this sort of technology. (By the way, in case no one has connected the dots yet, this is the technology I discussed here.) To leverage this functionality, NetApp will update a tool known as the Rapid Cloning Utility (RCU; described in more detail in TR-3705) to take full advantage of file-level FlexClone after Data ONTAP 7.3.1 is released. Note that the RCU is available today, but it only uses volume-level FlexClone.


Storage Short Take #4

Last week I provided a list of virtualization-related items that had made their way into my Inbox in some form or another; today I’ll share storage-related items with you in Storage Short Take #4! This post will also be cross-published to the Storage Monkeys Blogs.

  • Stephen Foskett has a nice round-up of some of the storage-related changes available to users in VMware ESX 3.5 Update 3. Of particular note to many users is the VMDK Recovery Tool. Oh, and be sure to have a look at Stephen’s list of top 10 innovative enterprise storage hardware products. He invited me to participate in creating the list, but I just didn’t feel like I would have been able to contribute anything genuinely useful. Storage is an area I enjoy, but I don’t think I’ve risen to the ranks of “storage guru” just yet.
  • And in the area of top 10 storage lists, Marc Farley shares his list of top 10 network storage innovations as well. I’ll have to be honest—I recognize more of these products than I did ones on Stephen’s list.
  • Robin Harris of StorageMojo provides some great insight into the details behind EMC’s Atmos cloud storage product. I won’t even begin to try to summarize some of that information here as it’s way past my level, but it’s fascinating reading. What’s also interesting to me is that EMC chose to require users to use an API to really interact with the Atmos (more detailed reasons why provided here by Chad Sakac), while child company VMware is seeking to prevent users from having to modify their applications to take advantage of “the cloud.” I don’t necessarily see a conflict between these two approaches as they are seeking to address two different issues. Actually, I see similarities between EMC’s Atmos approach and Microsoft’s Azure approach, both of which require retooling applications to take advantage of the new technology.
  • Speaking of Chad, here’s a recent post on how to add storage to the Celerra Virtual Appliance.
  • Andy Leonard took up a concern about NetApp deduplication and volume size limits a while back. The basic gist of the concern is that in its current incarnation, NetApp deduplication limits the size of the volume that can be deduplicated. If the size of the volume ever exceeds that limit, it can’t be deduplicated—even if the volume is subsequently resized back within the limit. With that in mind, users must actively track deduplication space savings so that, in the event they need to undo the deduplication, they don’t inadvertently lose the ability to deduplicate because they exceeded the size limit. Although Larry Freeman aka “Dr Dedupe” responded in the comments to Andy’s post, I don’t think that he actually addressed the problem Andy was trying to state. Although the logical data size can grow to 16TB within a deduplicated volume, you’ll still need to watch deduplication space savings if you think you might need to undo the deduplication for whatever reason. Otherwise, you could exceed the volume size limitations and lose the ability to deduplicate that volume.
  • And while we are on the subject of NetApp, a blog post by Beth Pariseau from earlier in the year recently caught my attention; it was in regard to NetApp Snapshots in LUN environments. I’ve discussed a little bit of this before in my post about managing space requirements with LUNs. The basic question: how much additional space is recommended—or required—when using Snapshots and LUNs? Before the advent of Snapshot auto-delete and volume autogrow, the mantra from NetApp was “2x + delta”—two times the size of the LUN plus changes. With the addition of these features, deduplication, and additional thin provisioning functionality, NetApp has now moved its focus to “1x + Delta”—the size of the LUN plus space needed for changes. It’s not surprising to me that there is confusion in this area, as NetApp itself has worked so hard to preach “2x + Delta” and now has to go back and change that message. Bottom line: You’re going to need additional space for storing Snapshots of your LUNs, and the real amount is determined by your change rate, how many Snapshots you will keep, and for how long you will keep them. 20% might be enough, or you might need 120%. It all depends upon your applications and your business needs. (There’s a quick worked example after this list.)
  • If you’re into Solaris ZFS, be sure to have a look at this NFS performance white paper by Sun. It provides some good details on recent changes to how NFS exports are implemented in conjunction with ZFS.
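
To make the Snapshot sizing discussion above a bit more concrete, here’s a quick worked example with purely illustrative numbers: take a 500GB LUN with a 2% daily change rate (about 10GB/day) and a 14-day Snapshot retention policy. The delta works out to roughly 10GB x 14 = 140GB, so under “1x + Delta” the volume needs to be around 500GB + 140GB = 640GB, or about 1.3x the LUN size. The old “2x + Delta” guidance would have called for roughly 1,140GB for the same workload. Change the rate of change or the retention period and the answer changes accordingly, which is exactly why there’s no single right percentage.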

That’s it for this time around, but feel free to share any interesting links and your thoughts on them in the comments!


This session provided information on running Hyper-V with NetApp storage. The first part of the session focused primarily on Hyper-V basics, such as VHD types (dynamically-expanding, fixed-size, passthrough, differencing), partition alignment (which can only be guaranteed with fixed-size VHDs, by the way), SCVMM 2008, Windows Failover Clustering support, and such. If you’re interested in details on those topics, I suggest you have a look at my coverage of Microsoft Tech-Ed 2008 back in the summer.
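
As a quick aside on the alignment point: the usual guest-side step is to create the data partition with an explicit offset before formatting it. A minimal sketch using diskpart on Windows Server 2008, assuming disk 1 is the data disk and using a 32KB offset (check NetApp’s current guidance for the recommended value for your configuration):

diskpart
select disk 1
create partition primary align=32
assign
exit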

The second part of the session delved into some NetApp-specific information:

  • NetApp has a PVR-only tool called HyperVIBE that helps to coordinate storage array Snapshots with the hypervisor, providing VSS integration to quiesce the VMs before taking a Snapshot on the NetApp array. This is only supported on Server Core and requires a special release of SnapDrive 6.0. (It’s only available via PVR, so don’t go searching the NetApp web site for a free download.)
  • The various members of the SnapManager family—SnapManager for SQL, SnapManager for Exchange, and SnapManager for SharePoint—are all fully supported on Hyper-V, but only for iSCSI LUNs.
  • NetApp SnapDrive 6.x is supported both on Hyper-V hosts as well as guest VMs. On the parent partition, it can manage both Fibre Channel LUNs and iSCSI LUNs; on a child partition, it can only manage iSCSI LUNs.
  • Version 5.x of the Host Utilities Kit is strongly recommended for use with Hyper-V, and supports Fibre Channel, iSCSI, and mixed connections. It runs on either the parent or child partition, although it seems to me that it would only make sense to run it on the parent partition.
  • Data ONTAP DSM 3.2R1 is the supported and recommended DSM for MPIO support with Hyper-V. On the parent partition, it supports and manages Fibre Channel, iSCSI, and mixed paths, but in a child partition it only supports iSCSI paths. It’s also only supported in child partitions running a server OS (so no Windows XP or Windows Vista support in child partitions).

For more information, readers can refer to TR-3701 and TR-3702. Note that updated versions of TR-3702 are expected to be released in the coming months to address additional product integrations.


On November 10 through November 13, NetApp held their annual technical conference—formerly known as Fall Classic, this year renamed to Insight—for SEs and partner SEs in Los Angeles. I had the opportunity to attend the conference by virtue of the fact that I was also presenting (look for session 3173; that’s me!). Normally the information shared at this conference is covered by non-disclosure agreement (NDA), but I’ve been given special dispensation to discuss the sessions I attended and the information shared in those sessions.

So, over the next few days, look for blog posts about some of the sessions that I attended during NetApp Insight. They’ll all be tagged Insight2008, in case you would like to browse them that way.

Note that a fair number of these sessions discuss timelines or targeted feature sets for future products. None of the information I post here should be taken as any sort of commitment from NetApp as to when a product will be delivered or what features it will contain. As with any other company, things still in development may change before they are released. (No, NetApp did not ask me to say that—in fact, they are not reviewing this content at all. I’m just trying to help good sense prevail.)


NetApp Insight Posts

It looks like I may be able to blog about some of the content that was covered this week at NetApp Insight in LA after all. I’m still working out the details, but I hope to have things straightened out very soon. Stay tuned to this space for more details!


Despite the fact that I’m out of town this week at NetApp Insight, I wanted to go ahead and get out the latest installment of Virtualization Short Takes—my sometimes-weekly collection of interesting or useful links and tidbits.

  • Much ado has been made about VMware’s acquisition of Trango and the announcement of VMware MVP (Mobile Virtualization Platform). Rich Brambley has a great write-up, and I completely agree with Rich and Alex Barrett about what this really means: don’t expect to see Windows XP on your smartphone any time soon. Alex said it best: this is virtualization, not emulation, and Windows XP doesn’t run on ARM.
  • I’m curious—how many people agree with my comments in Alex’s article about the Citrix ICA client for the iPhone? Is there any real, actual value in being able to access a Windows session from your iPhone? I tend to think not, but it would be an interesting discussion. Speak up in the comments.
  • Duncan points out that the issue with adding NICs to a team and keeping them all active—the workaround for which required editing esx.conf—has now been fixed in ESX 3.5 Update 3. It’s now possible to add NICs using esxcfg-vswitch with no need to edit esx.conf (there’s a quick example after this list). Excellent!
  • If you haven’t yet checked out Leo’s Ramblings, go give it a look. He’s got some good content. It’s worth subscribing to the RSS feed (I did).
  • Rick provides a helpful tool for resolving common system management issues with VMware Infrastructure. Thanks, Rick!
  • Regular readers may recall that Chad Sakac of EMC and I had a round of VMware HA-related posts a few months ago (check out the VMwareHA tag for a full list of VMware HA-related posts). As part of that discussion there was lots of information provided about Service Console redundancy, failover NICs, secondary Service Console connections, additional isolation addresses…all sorts of good stuff. Duncan joined in the conversation as well with a number of great posts, and has been keeping it up since then. His latest contribution to the conversation is a comparison of using failover NICs vs. using a secondary Service Console to prevent VMware HA isolation response. According to the documentation, using a secondary Service Console can help reduce the wait time for VMware HA to step in should isolation actually occur. Good stuff, and definitely worth some additional exploration in the lab.
  • As a sort of follow-up to the discussion about using NFS for VMware, this VMware Communities thread has some great information on why the NFS timeouts should be increased in NetApp NFS environments. If you’re like me, you like to know the reasons behind the recommendations, and this thread was very helpful to me. Let me also add that we’ve recently started recommending that customers increase their Service Console memory to 800MB when using NFS, so that might be something to consider as well.
  • Need to change the path of where Update Manager stores its patches? Gabe shows you how here.
  • Eric Gray of VCritical explores the question: what would things be like without VMFS? Well, as he states, you can just ask a Hyper-V user, since Hyper-V doesn’t (yet) have a shared cluster filesystem. Yes, I know that will change in 2010 with Cluster Shared Volumes in Windows Server 2008 R2 and Hyper-V 2.0. Or you can just add Melio FS from Sanbolic today and get the same thing. This is not anything new to me; I discussed this quite extensively here and here. Now, what would really be interesting is for VMware to work with Sanbolic to co-develop a more advanced version of VMFS that eliminates the SCSI reservation problems…
  • Need a nice summary of the various network communications that occur between different components of a VI3 implementation? Look no further than right here. Jason’s site is another one that’s worth adding to your RSS reader.
  • If you really like living on the edge, here’s a collection of some RPMs for VMware ESX 3.5. Keep in mind that installing third-party RPMs like this is not recommended or supported…
  • Andy Leonard picked up the DPM video by VMware and is looking forward to DPM no longer being experimental. What he’d really like, though, is some feature to move his VMs via Storage VMotion and spin down idle disks. Andy, I wouldn’t hold my breath.
  • If you are a Fusion user (if you own a Mac and need to run Windows, you should be a Fusion user!), this hint might come in handy.
  • Eric Siebert has a good post on traffic flow between VMs in various configuration scenarios—different vSwitches, same vSwitches, different port groups, same port groups, etc. Have a look if you are at all unclear how VMware ESX handles traffic flow.
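
Regarding the NIC teaming fix mentioned above, adding an uplink to a vSwitch from the ESX 3.5 Update 3 Service Console is now as simple as something like the following (the vmnic and vSwitch names here are just examples):

esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -l

The first command links the physical NIC to the vSwitch, and the second lists the vSwitch configuration so you can confirm the new uplink shows up as expected.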

That does it for this round. Speak up in the comments if you have any interesting or useful links to share with other readers. I’d also be interested in readers’ thoughts on the whole Citrix on the iPhone discussion—will it really bring any usefulness?

