
You might wonder what fate and free will have to do with virtualization and storage. The title of this post is a reference to the debate of Fate vs. Free Will, which in turn is a reference to Stephen Foskett’s recent post VMware as Oedipus: How Server Virtualization will Change Storage Forever. I won’t provide all the details here (go read the post), but the basic idea behind the post is that VMware’s drive to add storage features to the virtualization stack puts it on a collision course with EMC, a leading storage vendor. The twist here is the fact that EMC has a majority ownership in VMware, thereby earning EMC the term “parent company” and creating the Oedipal conflict to which Stephen alludes in his post.

First, let me sum up Stephen’s points:

  1. VMware is causing users not to purchase storage arrays.
  2. VMware integration is “leveling” the playing field.

Let’s take a look at each of these points.

A Decrease in Shared Storage?

Stephen makes this statement in his article (emphasis his):

VMware is rapidly innovating in this area. Integrating and developing snapshot, replication, thin provisioning, and other features in VMFS enables everyone to have advanced storage functionality, regardless of which storage device they use. In this way, VMware is already causing many users to forego an enterprise storage array purchase.

Perhaps it’s the specific term “enterprise storage array,” but I have a hard time believing that the adoption of VMware is causing users to forgo array purchases. Think about it: to even use many of the advanced features of vSphere like vSphere HA, vSphere DRS, or vSphere FT, shared storage is a prerequisite. Users literally cannot use these features without shared storage, and—today, at least—shared storage in almost all cases means an array.

If, however, the statement is intended to say that VMware users are buying less feature-rich arrays because of the features being added into vSphere—features like snapshots, replication, and thin provisioning—then I suppose I can see that. This is why array vendors are (or should be) driving innovation in other areas, such as dynamic auto-tiering, more robust snapshotting functionality, higher availability, and higher levels of performance.

Additionally, this is an opportunity for both virtualization experts and storage experts to help customers understand the differences between the features provided by the hypervisor and features provided at the storage layer. While these features share names, they can be very different! Here are a couple specific examples:

  • VMware’s snapshots are fundamentally and dramatically different from the snapshot features offered by many storage vendors. Not only are they different in how they work, they are also different in their uses and usage patterns. Storage administrators use array-based snapshots for different purposes and in different ways than vSphere administrators use snapshots.
  • vSphere’s replication functionality is a nice “check box” item, but it lacks many of the features that array-based replication offers. For example, there’s no compression, no deduplication or WAN optimization, and no notion of consistency groups.

As you can see, while it’s true that VMware is offering features that are similar in name and purpose, these features often are not true competitors to the features that storage array vendors offer, EMC included. Looking ahead, I anticipate that will continue to be the case, and storage vendors will continue to have ample opportunities to offer functionality above and beyond what the hypervisor can or will offer.

Homogenization of Storage?

The second point of the article is the assertion that “ever tighter integration serves to anonymize and homogenize enterprise storage devices.” I don’t agree here, for a couple of reasons:

  1. This statement assumes that all integration is the same, which is not the case. One vendor’s level and type of integration with VMware can be very different from another vendor’s level and type of integration. A vCenter plug-in is not the end of the story. What about integration with your replication solution? What about integration with your snapshot functionality? What about the quality of your VAAI implementation? One vendor’s implementation of VAAI might behave quite differently from another vendor’s implementation. What about your support of VMware’s multipathing APIs? I could go on and on, but you get the idea.
  2. This statement excludes the value of innovation in other areas, implying that VMware integration is the sole factor that levels the playing field. As I’ve stated on many occasions, every storage solution has its advantages and disadvantages. The way that EMC does things gives it an advantage over NetApp in some areas; at the same time, the way that NetApp does things gives it an advantage over EMC in other areas. If all arrays were the same and were measured only on their VMware integration, then I could see this statement. That’s not the case. And even if it were, then as I’ve just shown you, VMware integration can take many forms and many levels. Despite ever-increasing levels of integration, vendors still have plenty of opportunities to differentiate themselves from other vendors through price, performance, data protection, scalability, reliability, and availability.


I don’t disagree that VMware will change the nature of enterprise storage; in fact, I would argue that it already has changed enterprise storage. But to say that VMware will completely anonymize and homogenize enterprise storage is, in my humble opinion, a bit of a reach. There are still plenty of areas in which storage vendors can innovate and differentiate, both in addition to as well as in spite of VMware’s own storage-related ambitions.

Disclaimer: It’s probably well-known anyway, but it’s important to state that I do work for a storage vendor (EMC), although—as my site-wide disclaimer indicates—content here is not sponsored by, reviewed by, or even approved by my employer.


Exclusion or Not?

A couple days ago I read Stephen Foskett’s article “Alas, VMware, Whither HDS?”, and I felt like I really needed to respond to this growing belief—stated in Stephen’s article and in the sources to his article—that VMware is, for whatever reason, somehow excluding certain storage vendors from future virtualization-storage integration development. From my perspective, this is just bogus.

As far as I can tell, Stephen’s post—which is just one of several I’ve seen on this subject—is based on two sources: my session blog of VSP3205 and an article by The Register. I wrote the session blog, I sat in the session, and I listened to the presenters. Never once did one of the presenters indicate that the five technology partners that participated in this particular demonstration were the only technology partners with whom they would work moving forward, and my session blog certainly doesn’t state—or even imply—that VMware will only work with a limited subset of storage vendors. In fact, the thought that other storage vendors would be excluded never even crossed my mind until the appearance of The Register’s post. That invalidates my VSP3205 session blog as a credible source for the assertion that VMware would be working with only certain storage companies for this initiative.

The article at The Register cites my session blog and a post by Wikibon analyst David Floyer as a source. I’ve already shown how my blog doesn’t support the claim that some vendors will be excluded, but what about the other source? The Wikibon article states this:

Wikibon understands that VMware plans to work with the normal storage partners (Dell, EMC, Hewlett Packard, IBM, and NetApp) to provide APIs to help these traditional storage vendors add value, for example by optimizing the placement of storage on the disks.

This statement, however, is not an indication that VMware will work only with the listed storage vendors. (Floyer does not, by the way, cite any sources for that statement.)

Considering all this information, the only place that is implying VMware will limit the storage vendors with whom they will work is Chris Mellor at The Register. However, even Chris’ article quotes a VMware spokesperson who says:

“Note that we’re still in early days on this and none of the partners above have yet committed to support the APIs – and while it is our intent to make the APIs open, currently that is not the case given that what was demo’d during this VMworld session is still preview technology.”

In other words, just because HDS or any other vendor didn’t participate (which might indicate that the vendor chose not to participate) does not mean that they are somehow excluded from future inclusion in the development of this proposed new storage architecture. In fact, participation—or lack thereof—at this stage really means nothing, in my opinion. If this proposed storage architecture gets its feet under it and starts to run, then I’m confident VMware will allow any willing storage vendor to participate. In fact, it would be detrimental to VMware to not allow any willing storage partner to participate.

However, it gets more attention if you proclaim that a particular storage vendor was excluded; hence, the title (and subtitle) that The Register used. I have a feeling the reality is probably quite different than the picture painted in some of these articles.


Beth Pariseau recently published an article discussing the practical value of long-distance vMotion, partially in response to EMC’s announcement of VPLEX Geo at EMC World 2011. In that article, Beth quotes some text from a tweet I posted as well as some text from Chad Sakac’s recent post on VPLEX Geo. However, there are a couple of inaccuracies in Beth’s article that I really feel need to be cleared up:

  1. Long-distance vMotion and stretched clusters are not the same thing.
  2. L2 adjacency for virtual machines is not the same as L2 adjacency for the vMotion interfaces.

Regarding point #1, in her article, Beth implies that Chad’s statement “Stretched vSphere clusters over [long] distances are, as of right now, still not supported” is a statement that long-distance vMotion is not supported. Long-distance vMotion, over distances with latencies of less than 5 ms round trip time (RTT), is fully supported. What’s not supported is a stretched cluster, which is not a prerequisite for long-distance vMotion (as I pointed out in the stretched clusters presentation Beth also referenced). If you want to do long-distance vMotion, you don’t need to set up a stretched cluster, so statements of support for stretched clusters cannot be applied as statements of support for long-distance vMotion. Let’s not confuse the two, as they are separate and distinct.
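
To make the 5 ms constraint concrete, here’s a trivial sketch (my own illustration; the function and constant names are made up) of checking a measured inter-site round-trip time against the support threshold mentioned above:

```python
# Toy check, not a VMware tool: classify a measured inter-site
# round-trip time against the 5 ms long-distance vMotion threshold.

VMOTION_MAX_RTT_MS = 5.0

def long_distance_vmotion_supported(rtt_ms: float) -> bool:
    """True if the measured RTT falls within the supported envelope."""
    return rtt_ms < VMOTION_MAX_RTT_MS

print(long_distance_vmotion_supported(3.2))   # metro-type distance: supported
print(long_distance_vmotion_supported(12.0))  # latency too high: not supported
```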

Regarding point #2, L2 adjacency for the virtual machines (VMs) is absolutely necessary for long-distance vMotion. As I explained here, it is possible to use a Layer 3 protocol to handle the actual VMkernel (vMotion) traffic, but the VMs themselves still require Layer 2 adjacency. If you don’t maintain a single Layer 2 domain for the VMs, then VMs would have to change their IP addresses on a live migration. That’s REALLY BAD and it completely breaks live migration. Once again, these are separate and distinct requirements: it’s the VMs’ need for L2 adjacency, not the vMotion traffic’s, that drives the push for large L2 domains.
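
As a back-of-the-envelope illustration of why the VMs need L2 adjacency (all names and addresses here are made up): a live-migrated VM keeps its IP address, so that address must remain valid on the destination site’s VM network. If the far site were a different subnet (i.e., no stretched L2 domain), the address would no longer fit:

```python
import ipaddress

def vm_ip_still_valid(vm_ip: str, destination_subnet: str) -> bool:
    """A migrated VM keeps its IP; it only remains reachable if the
    destination site's VM network is the same stretched L2 segment/subnet."""
    return ipaddress.ip_address(vm_ip) in ipaddress.ip_network(destination_subnet)

# Stretched L2: same subnet at both sites, the VM's address survives the move.
print(vm_ip_still_valid("10.1.1.25", "10.1.1.0/24"))   # True
# No stretched L2: different subnet at the far site, the address is invalid.
print(vm_ip_still_valid("10.1.1.25", "10.2.0.0/16"))   # False
```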

Am I off? Speak your mind in the comments below.


I’m so far behind in my technology reading that I have this massive list of blog posts and links that I would normally put into an issue of Technology Short Takes. However, people are already “complaining” that my Short Takes aren’t all that short. To keep from overwhelming people, I’m breaking Technology Short Take #12 into three editions: Virtualization, Storage, and Networking.

Here’s the “Storage Edition” of Technology Short Take #12!

  • When planning storage details for your vSphere implementation, be sure to keep block size in mind. Duncan Epping’s post on the performance impact of the different datamovers in a Storage vMotion operation should bring to light why this is an important storage detail to remember. (And read this post if you need more info on the different datamovers.)
  • Richard Anderson of EMC (aka @storagesavvy) posted a “what if” about using cloud storage as a buffer with thin provisioning and FAST VP. It’s an interesting idea, and one that will probably see greater attention moving forward.
  • Richard also shared some real-world results on the benefits of using FAST Cache and FAST VP on a NS-480 array.
  • Interested in using OpenFiler as an FC target? Have a look here.
  • Nigel Poulton posted an analysis of EMC’s recent entry in the SPC benchmarketing wars in which he compares storage benchmarking to Formula 1 racing. I can see and understand his analogy, and to a certain extent he has a valid point. On the other hand, it doesn’t make sense to submit a more “mainstream” configuration if it’s a performance benchmark; to use Nigel’s analogy, that would be like driving your mini-van in a Formula 1 race. Yes, the mini-van is probably more applicable and useful to a wider audience, but a Formula 1 race is a “performance benchmark,” is it not? Anyway, I don’t know why certain configurations were or were not submitted; that’s for far more important people than me to determine.
  • Vijay (aka @veverything on Twitter) has a good deep dive on EMC storage pools as implemented on the CLARiiON and VNX arrays.
  • Erik Smith has a couple of great FCoE-focused blog posts, first on directly connecting to an FCoE target and then on VE_Ports and multihop FCoE. Both of these posts are in-depth technical articles that are, in my opinion, well worth reading.
  • Brian Norris posted about some limitations with certain FLARE 30 features when used in conjunction with Celerra (DART 6.0). I know that at least one of these limitations—the lack of support for FAST VP on LUNs used by Celerra—is addressed in the VNX arrays.
  • Brian also recently posted some good information on a potential login issue with Unisphere; this is caused by SSL certificates that are generated with future dates.
  • J Metz of Cisco also has a couple of great FCoE-focused posts. In To Tell the Truth: Multihop FCoE, J covers in great detail the various topology options and the differences in each topology. Then, in his post on director-class multihop FCoE, J discusses the products that implement multihop FCoE for Cisco.
  • If you’ve never used EMC’s VSI (Virtual Storage Integrator) plug-in for vCenter Server, have a look at Mike Laverick’s write-up.
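
On the first bullet’s point about block size: the gist of Duncan’s datamover posts is that Storage vMotion can only use the faster fs3dm datamover (or its VAAI hardware-offload variant) when the source and destination VMFS block sizes match; otherwise it falls back to the slow legacy fsdm datamover. A rough sketch of that selection logic (my own simplification, not VMware code):

```python
def select_datamover(src_block_size_mb: int, dst_block_size_mb: int,
                     vaai_capable: bool) -> str:
    """Simplified model of Storage vMotion datamover selection as described
    in Duncan Epping's posts: mismatched VMFS block sizes force the legacy
    (slow) fsdm datamover, even on VAAI-capable arrays."""
    if src_block_size_mb != dst_block_size_mb:
        return "fsdm"            # legacy datamover, slowest path
    if vaai_capable:
        return "fs3dm-hardware"  # copy offloaded to the array (VAAI Full Copy)
    return "fs3dm-software"      # faster in-kernel datamover

# 1 MB vs 8 MB block sizes: the array offload cannot be used.
print(select_datamover(1, 8, vaai_capable=True))  # fsdm
```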

OK, that does it for the Storage Edition of Technology Short Take #12. Check back in a couple of days for the Networking Edition of Technology Short Take #12.


Have You Registered Yet?

This is a very short blog post. In fact, it’s probably less of a blog post and more of just a question:

Have you registered for Spousetivities at EMC World 2011 yet?

If you haven’t yet, I encourage you to surf over to the registration page and sign up now!

For more information on some of the planned activities, have a look at Crystal’s Spousetivities post here.


In late 2009, I posted a how-to on making Snow Leopard work with an Iomega ix4-200d for Time Machine backups. I’ll recommend you refer back to that article for full details, but the basic steps are as follows:

  1. Use the hdiutil command to create the sparse disk image with the correct name (a concatenation of the computer’s name and the MAC address for the Ethernet interface).
  2. Create a special file inside the sparse disk image (the com.apple.TimeMachine.MachineID.plist file).
  3. Put the sparse disk image on the TimeMachine share on the ix4-200d (if you didn’t create it there).
  4. Set up Time Machine as normal.
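
The steps above can be sketched as follows. This is a hedged outline, not a copy of the original how-to: the image-name format (computer name plus the MAC address with the colons stripped, with a .sparsebundle extension) follows the earlier article, but double-check the exact convention and paths for your setup. The name construction is plain Python; the disk-image commands appear only as comments because hdiutil exists only on Mac OS X:

```python
# Sketch of the sparse-bundle naming step from the earlier how-to.
# Assumption: the image is named <ComputerName>_<MAC-without-colons>.sparsebundle;
# verify against the original article for your OS X version.

def time_machine_image_name(computer_name: str, en0_mac: str) -> str:
    return "{}_{}.sparsebundle".format(
        computer_name, en0_mac.replace(":", "").lower())

name = time_machine_image_name("Crystals-MacBook-Pro", "00:1E:C2:AB:CD:EF")
print(name)  # Crystals-MacBook-Pro_001ec2abcdef.sparsebundle

# The remaining steps run on the Mac itself (size and paths are examples):
#   hdiutil create -size 320g -type SPARSEBUNDLE -fs HFS+J \
#       -volname "Time Machine" <name>
#   ...then place com.apple.TimeMachine.MachineID.plist inside the bundle
#   and copy the bundle to the TimeMachine share on the ix4-200d.
```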

In the comments to the original article, a few people suggested that newer firmware revisions to the Iomega ix4-200d eliminated the need for this process. However, in setting up my wife’s new 13″ MacBook Pro, I found that this process is still necessary. Even though my Iomega ix4-200d is now running the latest available firmware (the 2.1.38.xxx revision), her MacBook Pro—running Mac OS X 10.6.7 with all latest updates—would not work with the Iomega until I manually created the sparse disk image and populated it with the com.apple.TimeMachine.MachineID.plist file. Once I followed those steps, the laptop immediately started backing up.

So, it would seem that even with the latest available firmware on the ix4-200d, it’s still necessary to follow the steps I outlined in my previous article in order to make Time Machine work.


Heading to EMC World 2011 in Las Vegas in May? Cool, I’ll probably see you there. What’s even cooler, though, is the fact that my wife, Crystal (@crystal_lowe on Twitter) will be, for the first time, organizing spouse activities for EMC World!

If you’re a regular reader of my site you know that Crystal first launched spouse activities at VMworld 2008 in Las Vegas, and since that time her spouse activities—now known as “Spousetivities”—have become enormously popular. So, this year she’s taking them to EMC World in Las Vegas!

For more details, head over to Crystal’s Spousetivities site (and follow @Spousetivities on Twitter) and check out her announcement.


That’s right folks, it’s time for another installment of Technology Short Takes. This is Technology Short Take #11, and I hope that you find the collection of items I have for you this time around both useful and informative. But enough of the introduction—let’s get to the good stuff!

Networking
  • David Davis (of Train Signal) has a good write-up on the Petri IT Knowledgebase on using a network packet analyzer with VMware vSphere. The key, of course, is enabling promiscuous mode. Read the article for full details.
  • Jason Nash instructs you on how to enable jumbo frames on the Nexus 1000V, in the event you’re interested. Jason also has good coverage of the latest release of the Nexus 1000V; worth reading in my opinion. Personally, I’d like Cisco to get to version numbers that are a bit simpler than 4.2(1) SV1(4).
  • Now here’s a link that is truly useful: Greg Ferro has put together a list of Cisco IOS CLI shortcuts. That’s good stuff!
  • There are a number of reasons why I have come to generally recommend against link aggregation in VMware vSphere environments, and Ivan Pepelnjak exposes another one that rears its head in multi-switch environments in this article. With the ability for vSphere to utilize all the uplinks without link aggregation, the need for link aggregation isn’t nearly as pressing, and avoiding it helps you sidestep some other issues as well.
  • Ivan also tackles the layer 2 vs. layer 3 discussion, but that’s beyond my pay grade. If you’re a networking guru, then this discussion is probably more your style.
  • This VMware KB article, surprisingly enough, seems like a pretty good introduction to private VLANs and how they work. If you’re not familiar with PVLANs, you might want to give this a read.

Servers
  • Want to become more familiar with Cisco UCS, but don’t have an actual UCS to use? Don’t feel bad, I don’t either. But you can use the Cisco UCS Emulator, which is described in a bit more detail by Marcel here. Very handy!

Storage
  • Ever find yourself locked out of your CLARiiON because you don’t know or can’t remember the username and password? OK, maybe not (unless you inherited a system from your predecessor), but in those instances this post by Brian Norris will come in handy.
  • Fabio Rapposelli posted a good write-up on the internals of SSDs, in case you weren’t already aware of how they worked. As SSDs gain traction in many different areas of storage, knowing how SSDs work helps you understand where they are useful and where they aren’t.
  • Readers that are new to the storage space might find this post on SAN terminology helpful. It is a bit specific to Cisco’s Nexus platform, but the terms are useful to know nevertheless.
  • If you like EMC’s Celerra VSA, you’ll also like the new Uber VSA Guide. See this post over at Nick’s site for more details.
  • Fellow vSpecialist Tom Twyman posted a good write-up on installing PowerPath/VE. It’s worth reading if you’re considering PP/VE for your environment.
  • Joe Kelly of Varrow posted a quick write-up about VPLEX and RecoverPoint, in which he outlines one potential issue with interoperability between VPLEX and RecoverPoint: how will VPLEX data mobility affect RP? For now, you do need to be aware of this potential issue. For more information on VPLEX and RecoverPoint together, I’d also suggest having a look at my write-up on the subject.
  • I won’t get involved in the discussion around Open FCoE (the software FCoE stack announced a while back); plenty of others (J Metz speaks out here, Chad Sakac weighed in here, Ivan Pepelnjak offers his opinions here, and Wikibon here) have already thrown in. Instead, I’ll take the “Show me” approach. Intel has graciously offered me two X520 adapters, which I’ll run in my lab next to some Emulex CNAs. From there, we’ll see what the differences are under the same workloads. Look for more details from that testing in the next couple of months (sorry, I have a lot of other projects on my plate).
  • Jason Boche has been working with Unisphere, and apparently he likes the Unisphere-VMware integration (he’s not alone). Check out his write-up here.

Virtualization
  • For the most part, a lot of people don’t have to deal with SCSI reservation conflicts any longer. However, they can happen (especially in older VMware Infrastructure 3.x environments), and in this post Sander Daems provides some great information on detecting and resolving SCSI reservation conflicts. Good write-up, Sander!
  • If you like the information vscsiStats gives you but don’t like the format, check out Clint Kitson’s PowerShell scripts for vscsiStats.
  • And while we’re talking vscsiStats, I would be remiss if I didn’t mention Gabe’s post on converting vscsiStats data into Excel charts.
  • Rynardt Spies has decided he’s going Hyper-V instead of VMware vSphere. OK, only in his lab, and only to learn the product a bit better. While we all agree that VMware vSphere far outstrips Hyper-V today, Rynardt’s decision is both practical and prudent. Keep blogging about your experiences with Hyper-V, Rynardt—I suspect there will be more of us reading them than perhaps anyone will admit.
  • Brent Ozar (great guy, by the way) has an enlightening post about some of the patching considerations around Azure VMs. All I can say is ouch.
  • NIST has finally issued the final version of its security guidelines for full virtualization; see the VMBlog write-up for more information.
  • vCloud Connector was announced by VMware last week at Partner Exchange 2011 in Orlando. More information is available here and here.
  • Arnim van Lieshout posted an interesting article on how to configure EsxCli using PowerCLI.
  • Sander Daems gets another mention in this installment of Technology Short Takes, this time for good information on an issue with ESXi and certain BIOS revisions of the HP SmartArray 410i array controller. The fix is a firmware upgrade.
  • Sean Clark did some “what if” thinking in this post about the union of NUMA and vMotion to create VMs that span multiple physical servers. Pretty interesting thought, but I do have to wonder if it’s not that far off. I mean, how many people saw vMotion coming before it arrived?
  • The discussion of a separate “management cluster” has been getting some attention recently. First was Scott Drummonds, with this post and this follow up. Duncan responded here. My take? I’ll agree with Duncan’s final comment that “an architect/consultant will need to work with all the requirements and constraints”. In other words, do what is best for your situation. What’s right for one customer might not be right for the next.
  • And speaking of vShield, be sure to check out Roman Tarnavski’s post on extending vShield.
  • Interested in knowing more about how Hyper-V does CPU scheduling? Ben Armstrong is happy to help out, with Part 1 and Part 2 of CPU scheduling with Hyper-V.
  • Here’s a good write-up on how to configure Pass-Through Switching (PTS) on UCS. This is something I still haven’t had the opportunity to do myself. It kind of helps to actually have a UCS for stuff like this.

It’s time to wrap up now; I think I’ve managed to throw out a few links and articles that someone should find useful. As always, feel free to speak up in the comments below.


Welcome to Technology Short Take #10, my latest collection of data center-oriented links, articles, thoughts, and tidbits from around the Internet. I hope you find something useful or informative!

Networking
  • Link aggregation with VMware vSwitches is something I’ve touched upon in a great many posts here on my site, but one thing that I don’t know I’ve ever specifically called out is that VMware vSwitches don’t support LACP. But that’s OK—Ivan Pepelnjak takes care of that for me with his recent post on LACP and the VMware vSwitch. He’s absolutely right: there’s no LACP support in VMware vSphere 4.x or any previous version.
  • Stephen Foskett does a great job of providing a plain English guide to CNA compatibility. Thanks, Stephen!
  • And while we are on the topic of Mr. Foskett, he also authored this piece on NFS in converged network environments. The article seemed a bit short for some reason. It kind of felt like the subject could have used a deeper, more thorough treatment. It’s still worth a read, though.
  • Need to trace a MAC address in your data center? CiscoZine provides all the necessary details in their post on how to trace a MAC address.
  • Jeremy Stretch of PacketLife.net provides a good overview of using WANem. If you need to do some WAN emulation/testing, this is worth reading.
  • Jeremy also does a walkthrough of configuring OSPF between Cisco and Force10 networking equipment.
  • I don’t entirely understand all the networking wisdom found here, but this post by Brad Hedlund on Nexus 7000 routing and vPC peer links is something I’m going to bookmark for when my networking prowess is sufficient for me to fully grasp the concepts. That might take a while…
  • On the other hand, this post by Brad on FCoE, VN-Tag, FEX, and vPC is something I can (and did) assimilate much more readily.
  • Erik Smith documents the steps for enabling FCoE QoS on the Nexus 5548, something that Brad Hedlund alerted me to via Twitter. It turns out, as Erik describes in his post about FCoE login failure with Nexus 5548, that without the FCoE QoS enabled fabric logins will fail. If you’re thinking of deploying Nexus 5548 switches, definitely keep this in mind.

Servers
  • In the event you haven’t already read up on it, the UCS 1.4(1) release for Cisco UCS was a pretty major release. See the write-up here by M. Sean McGee. By the way, Sean is an outstanding resource for UCS information; if you aren’t subscribed to his blog, you should be.
  • Dave Alexander also has a good discussion about some of the reasoning behind why certain things are or are not in Cisco UCS.

Storage
  • Nigel Poulton tackles a comparison between the HDS VSP and the EMC VMAX. I think he does a pretty good job of comparing and contrasting the two products, and I’m looking forward to his software-focused review of these two products in the future.
  • Brandon Riley provides his view of the recently-announced EMC VNX. The discussion in the comments about the choice of form factor (EFD) for flash-based cache is worth reading, too.
  • Andre Leibovici discusses the need for proper storage architecture in this treatment of IOPS, read/write ratios, and storage tiering with VDI. While his discussion is VDI-focused, the things he discusses are important to consider with any storage project, not just VDI. I would contend that too many organizations don’t do this sort of important homework when virtualizing applications (especially “heavier” workloads with more significant resource requirements), which is why those applications don’t perform as well after being virtualized. But that’s another topic for another day…
  • Environments running VMware Site Recovery Manager with the EMC CLARiiON SRA should have a look at this article.
  • Jason Boche recently published his results from a series of tests on jumbo frames with NFS and iSCSI in a VMware vSphere environment. There’s lots of great information in this post—I highly recommend reading it.

Virtualization
What, you didn’t think I’d overlook virtualization, did you?

Before I wrap up, I’ll just leave with you a few other links from my collection:

Backing up, and restoring, VMware vCloud Director provisioned virtual machines
RSA SecurBook on Cloud Security and Compliance
Hyper-V Live Migration using SRDF/CE – Geographically Dispersed Clustering
The VCE Model: Yes, it is different
How to make a PowerShell server side VMware vCenter plugin
VMware vSphere 4 Performance with Extreme I/O Workloads
VMware KB: ESX Hosts Might Experience Read Performance Issues with Certain Storage Arrays
vSphere “Gold” Image Creation on UCS, MDS, and NetApp with PowerShell
Upgrading to ESX 4.1 with the Nexus 1000V
My System Engineer’s toolkit for Mac

That’s going to do it for this time around. As always, courteous comments are welcome and encouraged!


In late October and early November 2010 I published a couple of articles on interoperability between EMC RecoverPoint and VAAI (vStorage APIs for Array Integration, part of VMware vSphere 4.1). If you’d like to go back and read those articles for completeness, here are the links:

RecoverPoint and VAAI Interoperability
RecoverPoint and VAAI Update

In the comments to the second article, a reader indicated that he’d seen a problem when using RecoverPoint (which I’ll abbreviate as RP from here on) and VAAI. In this particular situation, his consistency groups (CGs) were failing to initialize. The only way he could get his CGs to properly initialize and replicate was to disable VAAI. I provided his information to RP product management, who after some additional testing found that there was indeed a potential issue when using hardware-assisted locking (also referred to as CAS, after the SCSI command Compare and Swap) in conjunction with the FLARE 30 array splitter and VAAI.

The fix for this potential issue is found in a type 2 patch for FLARE 30; this patch brings the FLARE 30 release to the .509 revision.

(For those of you who don’t know, type 2 patches are patch revisions that run through full qualification cycles and have formal releases. These are the sorts of patches that you should install when you can in order to stay current.)

If you are running a revision of FLARE 30 prior to 509, you could see issues when using the RP array splitter and VAAI, and you will need to disable VAAI in order to resolve the issues. With the latest revision of FLARE 30 (.509 or later), this RP-VAAI interoperability issue is resolved.
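
A small helper along these lines could decide whether a given FLARE 30 build predates the 509 fix (illustrative only; the dotted version-string format is an assumption based on how FLARE revisions are commonly written, so adapt the parsing to what your array actually reports):

```python
def needs_rp_vaai_patch(flare_version: str) -> bool:
    """Return True if this FLARE 30 build predates the .509 type 2 patch
    that addresses the RecoverPoint/VAAI (hardware-assisted locking) issue.
    Assumes a dotted version string ending in the patch revision,
    e.g. "04.30.000.5.509" (format is an assumption)."""
    patch_revision = int(flare_version.split(".")[-1])
    return patch_revision < 509

print(needs_rp_vaai_patch("04.30.000.5.004"))  # True: patch, or disable VAAI
print(needs_rp_vaai_patch("04.30.000.5.509"))  # False: issue resolved
```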

If you have any questions, please feel free to ask them in the comments below. Thanks!

