Solaris


My irregular “Virtualization Short Takes” series was put on hold some time ago after I started work on Mastering VMware vSphere 4. Now that work on the book is starting to wind down just a bit, I thought it would be a good time to try to resurrect the series. So, without further delay, welcome to the return of Virtualization Short Takes!

  • Triggered by a series of blog posts by Arnim van Lieshout on VMware ESX memory management (Part 1, Part 2, and Part 3), Scott Herold decided to join the fray with this blog post. Both Scott’s post and Arnim’s posts are good reading for anyone interested in getting a better idea of what’s happening “under the covers,” so to speak, when it comes to memory management.
  • Perhaps prompted by my post on upgrading virtual machines in vSphere, a lot of information has come to light regarding the PVSCSI driver. Some are advocating changes to best practices to incorporate the PVSCSI driver, but others seem to be questioning the need to move away from a single-drive model (a move that’s necessary if you want PVSCSI, since the driver isn’t supported for boot drives). Personally, I just want VMware to support the PVSCSI driver on boot drives. (A sketch of the relevant .vmx settings appears after this list.)
  • Eric Sloof confirms for us that name resolution is still the Achilles’ Heel of VMware High Availability in VMware vSphere.
  • I don’t remember where I picked up this VMware KB article, but it sure would be handy if VMware could provide more information about the issue, such as what CPUs might be affected. Otherwise, you’re kind of shooting in the dark, aren’t you?
  • Upgraded to VMware vSphere, and now having issues with VMotion? Thanks to VMwarewolf, this pair of VMware KB articles (here and here) might help resolve the issue.
  • Chad Sakac of EMC, my co-conspirator for the storage portion of Mastering VMware vSphere 4 (pre-order here), has been putting out some very good posts.
  • Leo Raikhman pointed me to this article about IRQ sharing between the Service Console and the VMkernel. I think I’ve mentioned this issue here before…but after more than 1,000 posts, it’s hard to keep track of everything. In any case, there’s also a VMware KB article on the matter. (A quick way to spot this sharing from the Service Console is sketched after this list.)
  • And speaking of Leo, he’s been putting up some great information too: notes on migrating Ubuntu servers (in turn derived from these notes by Cody at ProfessionalVMware), a rant on CDP support in ESX, and a note about the EMC Storage Viewer plugin. Good work, Leo!
  • If you are interested in a run-down of the storage-related changes in VMware vSphere, check out this post from Stephen Foskett.
  • Rick Vanover notes a few changes to the VMFS version numbers here. The key takeaway is that no action is required, but you may want to plan some additional tasks after your vSphere upgrade to optimize the environment.
  • In this article, Chris Mellor muses on how far VMware may go in assimilating features provided by their technology partners. This is a common question; many people see the addition of thin provisioning within vSphere as a direct affront to array vendors like NetApp, 3PAR, and others who also provide thin provisioning features in the array themselves. I’m not so convinced that this feature is as competitive as it is complementary. Perhaps I’ll write a post about that in the near future…oh wait, never mind, Chad already did!
  • File this one away in the “VMware-becoming-more-like-Microsoft” folder.
  • My occasional mentions of Crossbow prompted a full-on explanation of the Open Networking functionality of OpenSolaris by a Sun engineer. It kind of looks like SR-IOV and VMDirectPath to me…sort of. Don’t you think?
  • If you are thinking about how to incorporate HP Virtual Connect Flex-10 into your VMware environment, Frank Denneman has some thoughts to share. I’ve been told by HP that I have some equipment en route with which I can do some additional testing (the results of which will be published here, of course!), but I haven’t seen it yet.
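
Speaking of PVSCSI, here’s a minimal sketch of what the relevant .vmx entries might look like, assuming a VM that keeps its boot disk on an LSI Logic controller and adds a second, PVSCSI-backed controller for a data disk. The controller numbers and file names are illustrative, and the # annotations are for the reader rather than part of the file:

    # Boot disk stays on the supported LSI Logic controller
    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "boot-disk.vmdk"

    # Data disk rides on a paravirtual (PVSCSI) controller
    scsi1.present = "TRUE"
    scsi1.virtualDev = "pvscsi"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "data-disk.vmdk"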
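
And regarding the IRQ sharing issue Leo pointed out, here’s a quick way to check for it, assuming an ESX 3.x host where the VMkernel exposes its interrupt table under /proc/vmware; the VMware KB article has the full procedure:

    # List interrupt vectors as the VMkernel sees them; a vector whose
    # line lists both a VMkernel device (such as a vmnic) and a Service
    # Console device indicates IRQ sharing between the two.
    cat /proc/vmware/interrupts
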
OK, I guess that should just about do it. Thanks for reading, and please share your thoughts, interesting links, or (pertinent) rants in the comments.


    A couple of days ago I posed this question: is Sun preparing to take on Cisco? The question generated some interesting responses in the comments to the article.

    Reader Bill had this to say:

    How on earth would Cisco respond if Sun started introducing products with better performance, at a fraction of the price, built on high volume open source adoption?

    As I responded, that’s the real $64,000 question, isn’t it? That’s the premise upon which this entire thing is built—that by using commodity hardware and open source components, Sun can produce high-quality, high-performing network equipment that they can sell for far less than Cisco.

    Reader Ed, on the other hand, questioned the validity of this kind of move:

    I would think that partnering with a Juniper or Foundry-type company and OEMing equipment from those companies would be a more prudent move than venturing on their own to create new network devices.

    Normally, I would agree with Ed if we were talking about a company that was merely interested in entering a market in order to become a more complete supplier to their customers. That’s not Sun’s purpose. Sun’s purpose is, I think, to fundamentally change the nature of the networking hardware market. How successful they’ll be…well, that’s another question.

    My original article also prompted a response elsewhere on the Internet. Christofer Hoff thought my use of the word “distracted” in describing Cisco and Project “California” wasn’t appropriate, and in one sense he’s correct—“California” is absolutely a natural evolution of Cisco’s products and technologies and it does make sense for them. As I pointed out to Hoff, though, being successful with this new solution (I can’t call it a server!) will take focus, and while Cisco is focused on “California” Sun has their opportunity.

    And it looks like they are definitely going to take that opportunity:

    As I’ve said before, general purpose microprocessors and operating systems are now fast enough to eliminate the need for special purpose devices. That means you can build a router out of a server – notice you cannot build a server out of a router, try as hard as you like. The same applies to storage devices.
     
    To demonstrate this point, we now build our entire line of storage systems from general purpose server parts, including Solaris and ZFS, our open source file system. This allows us to innovate in software, where others have to build custom silicon or add cost. We are planning a similar line of networking platforms, based around the silicon and software you can already find in our portfolio.

    The emphasis on that last sentence is mine, just to underscore where Sun is headed. Clearly, it is their intention to leverage OpenSolaris, Crossbow, ZFS, Solaris Zones, etc., to compete directly against Cisco. And Cisco appears to be their primary target, judging from this sentence:

    That means you can build a router out of a server – notice you cannot build a server out of a router, try as hard as you like.

    To me, that looks like a direct jab at “California”.

    So, I guess the question of whether Sun is going to take on Cisco is settled. Hoff, get your popcorn!


    A while back in Virtualization Short Take #25 I briefly mentioned Sun’s Crossbow network virtualization software, which brings new possibilities to the Solaris networking world. Not being a Solaris expert, I found it hard at the time to really understand why Solaris fans were so excited about it; since then, though, I’ve come to understand that Crossbow brings to Solaris the same kind of full-blown virtual network interfaces that I use daily with VMware ESX. Now I’m beginning to understand why people are so thrilled!
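
    To make that comparison concrete, here’s a minimal sketch of the kind of thing Crossbow enables on an OpenSolaris box, assuming a physical link named e1000g0; the link, VNIC, and flow names are all illustrative:

        # Create a virtual NIC on top of a physical link, a rough
        # analogue of a vNIC on a VMware ESX vSwitch
        dladm create-vnic -l e1000g0 vnic0

        # Cap HTTP traffic on the physical link at 100 Mbps, much like
        # traffic shaping on a port group
        flowadm add-flow -l e1000g0 -a transport=tcp,local_port=80 \
            -p maxbw=100M httpflow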

    In any case, an astute reader picked up on my mention of Crossbow and pointed me to this article by Jonathan Schwartz of Sun, and in particular this phrase:

    You’re going to see an accelerating series of announcements over the coming year, from amplifying our open source storage offerings, to building out an equivalent portfolio of products in the networking space…

    That seemingly innocuous mention was then coupled with this blog post and the result was this question: is Sun preparing to take on Cisco? Is Sun getting ready to try to use commodity hardware and open source software to penetrate the networking market in the same way that they are using commodity hardware and open source software to try to further penetrate the storage market with their open storage products (in particular, the 7000 series)?

    It’s an interesting thought, to say the least. Going up against Cisco is a bold move, though, and I question Sun’s staying power in that sort of battle. Of course, with Cisco potentially distracted by the swirling rumors regarding the networking giant’s entry into the server market, now may be the best time to make this move.

    Thoughts?


    Welcome to Virtualization Short Take #25, the first edition of this series for 2009! Here I’ve collected a variety of articles and posts that I found interesting or useful. Enjoy!

    • We’ll start off today’s list with some Hyper-V links. First up is this article on how to manually add a VM configuration to Hyper-V. It would be interesting to me to know some of the technical details—i.e., the design decisions that led Microsoft to architect things in this way—that might explain why this process is, in my opinion, so complicated. Was it scalability? Manageability? If anyone knows, please share your information in the comments.
    • It looks like this post by John Howard on how to resolve event ID 4096 with Hyper-V is also closely related.
    • This blog post brings to light a clause in Microsoft’s licensing policy that forces organizations to use Windows Server 2008 CALs when accessing a Windows Server 2003-based VM hosted on Hyper-V. In the spirit of disclosure, it’s important to note that this was written by VMware, but an independent organization apparently verified the licensing requirements. So, while you may get Hyper-V at no additional cost (not free) with Windows Server 2008, you’ll have to pay to upgrade your CALs to Windows Server 2008 in order to access any Windows Server 2003-based VMs on those Hyper-V hosts. Ouch.
    • Wrapping up this edition’s Microsoft virtualization coverage is this post by Ben Armstrong warning Hyper-V users about the use of physical disks with VMs. Apparently, it’s possible to connect a physical disk to both the Hyper-V parent partition as well as a guest VM, and…well, bad things can happen when you do that. The unfortunate part is that Hyper-V doesn’t block users from doing this very thing.
    • Daniel Feller asks the question, “Am I the only one who has trouble understanding Cloud Computing?” No, Daniel, you’re not the only one—I’ve written before about how amorphous and undefined cloud computing is. In this post over at the Citrix Community site, Daniel goes on to indicate that cloud computing’s undefined nature is actually its greatest strength:

      As I see it, Cloud Computing is a big white board waiting for organizations to make their requirements known. Do you want a Test/QA environment to do whatever? This is cloud computing. Do you want someone to deliver office productivity applications for you? That is cloud computing. Do you want to have all of your MP3s stored on an Internet storage repository so you can get to it from any device? That is also cloud computing.

      Daniel may be right there, but I still insist that there need to be well-defined and well-understood standards around cloud computing in order for cloud computing to really see broad adoption. Perhaps cloud computing is storing my MP3s on the Internet, but what happens when I want to move to a different MP3 storage provider? Without standards, that becomes quite difficult, perhaps even impossible. I’m not the only one who thinks this way, either; check this post by Geva Perry. Until some substance appears in all these clouds, people are going to hold off.

    • Rodney Haywood shared a useful command to use with VMware HA in this post about blades and VMware HA. He points out that it’s a good idea to spread VMware HA primary nodes across multiple blade chassis so that the failure of a single chassis does not take down all the primary nodes. One note about using the “ftcli” command: you’ll need to set the FT_DIR environment variable first with “export FT_DIR=/opt/vmware/aam” (assuming you’re using bash as the shell on VMware ESX); see the sketch after this list. Beyond that, the advice to spread both clusters and primary nodes across multiple chassis is well worth following.
    • Joshua Townsend has a good post at VMtoday.com about using PowerShell and SQL queries to determine the amount of free space within guest VMs. As he states in his post, this can often impact the storage design significantly. It seems to me that there used to be a plug-in for vCenter that added this information, but I must be mistaken as I can no longer find it. Oh, and one of Eric Siebert’s top 10 lists also points out a free utility that will provide this information as well.
    • I don’t have a record of where this information turned up, but this article from NetApp (NOW login required) on troubleshooting NFS performance was quite helpful. In particular, it linked to this VMware KB article that provides in-depth information on how to identify IRQ sharing that’s occurring between the Service Console and the VMkernel. Good stuff.
    • Want more information on scaling a VMware View installation? Greg Lato posts a notice about the VMware View Reference Architecture Kit, available from VMware, that provides more information on some basic “building blocks” in creating a large-scale View implementation. I’ve only had the opportunity to skim through the documents thus far, but I like what I’ve seen. Chad also mentions the Reference Architecture Kit on his site as well.
    • Duncan at Yellow Bricks posts yet another useful “in the trenches” post about VMFS-3 heap size. If your VMware ESX server is handling more than 4TB of open VMDK files, then it’s worth having a look at this VMware KB article. (A second sketch after this list shows how to check and raise the heap size.)
    • The idea of “virtual routing” is an interesting idea, but I share the thoughts of one of the commenters in that technologies like VMotion/XenMotion/live migration may not be able to respond quickly enough to changing network patterns to be effective. Perhaps it’s just my server-centric view showing itself, but it seems more “costly” (in terms of effort) to move servers around to match traffic flow than to just route the traffic accordingly.
    • Crossbow looks quite cool, but I’m having a hard time understanding the real business value. I am quite confident that my lack of understanding about Crossbow is simply a reflection of the fact that I don’t know enough about Solaris Containers or how Xen handles networking, but can someone help me better understand? What’s the big deal with Crossbow?
    • Jason Boche shares some information with us about how to increase the number of simultaneous VMotion operations per host. That information could be quite handy in some cases.
    • I had high hopes for this document on VMFS best practices, but it fell short of my hopes. I was looking for hard guidelines on when to use isolation vs. consolidation, strong recommendations on VMFS volume sizes and the number of VMs to host in a VMFS volume, etc. Instead, I got an overview of what VMFS is and how it works—not what I needed.
    • Users interested in getting started with PowerShell with VMware Infrastructure should have a look at this article by Scott Herold. It’s an excellent place to start.
    • Here’s a list of some of the basic things you should do on a “golden master” template for Windows Server VMs. I actually disagree with #15, preferring instead to let Windows manage the time at the guest OS level (a sketch of that configuration follows this list). The only other thing I’d add: be sure your VMDK is aligned to the underlying storage. Otherwise, this is a great checklist to follow.
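
    As promised above, here’s a hedged sketch of using the AAM tooling behind VMware HA to see which hosts are primary nodes. This follows the ESX 3.x-era invocation; the exact syntax may differ in your build, and the hostname is illustrative:

        # Point the AAM tools at their install directory, then list the
        # HA nodes; primaries are identified in the output
        export FT_DIR=/opt/vmware/aam
        /opt/vmware/aam/bin/ftcli -domain vmware -connect localhost \
            -port 8042 -timeout 60 -cmd "listnodes"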
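
    And here’s the VMFS-3 heap size check mentioned earlier, again as a sketch from the ESX Service Console. The value shown is illustrative; the default and the ceiling vary by ESX version, so follow the KB article’s guidance (a reboot is required for the change to take effect):

        # Show the current VMFS-3 heap size
        esxcfg-advcfg -g /VMFS3/MaxHeapSizeMB

        # Raise it to 64 MB to accommodate more open VMDK capacity
        esxcfg-advcfg -s 64 /VMFS3/MaxHeapSizeMB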
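
    On the time-sync point, here’s a hedged sketch of the configuration I prefer: disable the VMware Tools clock sync in the VM’s .vmx file and let the guest keep its own time via W32Time. The NTP server shown is illustrative, and the # lines are annotations for the reader, not part of either file or command:

        # In the VM's .vmx file, turn off Tools time synchronization:
        tools.syncTime = "FALSE"

        # Inside the Windows guest, point W32Time at an NTP source:
        w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /update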

    I think that should just about do it for this post. Comments are welcome!


    Storage Short Take #4

    Last week I provided a list of virtualization-related items that had made their way into my Inbox in some form or another; today I’ll share storage-related items with you in Storage Short Take #4! This post will also be cross-published to the Storage Monkeys Blogs.

    • Stephen Foskett has a nice round-up of some of the storage-related changes available to users in VMware ESX 3.5 Update 3. Of particular note to many users is the VMDK Recovery Tool. Oh, and be sure to have a look at Stephen’s list of top 10 innovative enterprise storage hardware products. He invited me to participate in creating the list, but I just didn’t feel like I would have been able to contribute anything genuinely useful. Storage is an area I enjoy, but I don’t think I’ve risen to the ranks of “storage guru” just yet.
    • And in the area of top 10 storage lists, Marc Farley shares his list of top 10 network storage innovations as well. I’ll have to be honest—I recognize more of these products than I did on Stephen’s list.
    • Robin Harris of StorageMojo provides some great insight into the details behind EMC’s Atmos cloud storage product. I won’t even begin to try to summarize some of that information here as it’s way past my level, but it’s fascinating reading. What’s also interesting to me is that EMC chose to require users to use an API to really interact with the Atmos (more detailed reasons why provided here by Chad Sakac), while child company VMware is seeking to prevent users from having to modify their applications to take advantage of “the cloud.” I don’t necessarily see a conflict between these two approaches as they are seeking to address two different issues. Actually, I see similarities between EMC’s Atmos approach and Microsoft’s Azure approach, both of which require retooling applications to take advantage of the new technology.
    • Speaking of Chad, here’s a recent post on how to add storage to the Celerra Virtual Appliance.
    • Andy Leonard took up a concern about NetApp deduplication and volume size limits a while back. The basic gist of the concern is that in its current incarnation, NetApp deduplication limits the size of the volume that can be deduplicated. If the size of the volume ever exceeds that limit, it can’t be deduplicated—even if the volume is subsequently resized back within the limit. With that in mind, users must actively track deduplication space savings so that, in the event they need to undo the deduplication, they don’t inadvertently lose the ability to deduplicate because they exceeded the size limit. Although Larry Freeman aka “Dr Dedupe” responded in the comments to Andy’s post, I don’t think that he actually addressed the problem Andy was trying to state. Although the logical data size can grow to 16TB within a deduplicated volume, you’ll still need to watch deduplication space savings if you think you might need to undo the deduplication for whatever reason. Otherwise, you could exceed the volume size limitations and lose the ability to deduplicate that volume. (A short command-line sketch of keeping an eye on these numbers follows this list.)
    • And while we are on the subject of NetApp, a blog post by Beth Pariseau from earlier in the year recently caught my attention; it was in regard to NetApp Snapshots in LUN environments. I’ve discussed a little bit of this before in my post about managing space requirements with LUNs. The basic question: how much additional space is recommended—or required—when using Snapshots and LUNs? Before the advent of Snapshot auto-delete and volume autogrow, the mantra from NetApp was “2x + delta”—two times the size of the LUN plus changes. With the addition of these features, deduplication, and additional thin provisioning functionality, NetApp has now shifted their guidance to “1x + delta”—the size of the LUN plus space needed for changes. It’s not surprising to me that there is confusion in this area, as NetApp themselves worked so hard to preach “2x + delta” and now have to go back and change their message. Bottom line: you’re going to need additional space for storing Snapshots of your LUNs, and the real amount is determined by your change rate, how many Snapshots you will keep, and for how long you will keep them. 20% might be enough, or you might need 120%. It all depends upon your applications and your business needs.
    • If you’re into Solaris ZFS, be sure to have a look at this NFS performance white paper by Sun. It provides some good details on recent changes to how NFS exports are implemented in conjunction with ZFS.
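
    For those playing along at home, here’s a rough Data ONTAP 7-mode sketch of the dedup and snapshot-space housekeeping discussed above. The volume name and reserve percentage are illustrative, and the exact commands may vary by ONTAP release:

        # Enable deduplication on a volume and scan the existing data
        sis on /vol/vm_datastore
        sis start -s /vol/vm_datastore

        # Show per-volume used vs. saved space so you can tell whether
        # undoing dedup would push the volume past its size limit
        df -s

        # Size the snapshot reserve to your measured change rate rather
        # than a blanket 2x (20% here is purely an example)
        snap reserve vm_datastore 20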

    That’s it for this time around, but feel free to share any interesting links and your thoughts on them in the comments!


    I’m sorry, folks, but I’m not going to have the time or the resources to publish an update to my existing instructions for integrating Solaris 10 into Active Directory. Quite some time ago I had posted that I planned on creating an update to the original instructions so as to incorporate some lessons learned, but it keeps getting pushed aside for other tasks that are more important and more relevant to my day-to-day work. Rather than keep readers hanging on for something that will likely never appear, I’d rather just be upfront and frank about the situation. As much as I’d love to spend some time working on the Solaris-AD integration situation and documenting my findings, I just don’t have the time. Sorry.


    I’ve spoken to the folks at eG Innovations a couple of times. In case you didn’t know, eG Innovations makes a product that is designed around managing virtualization environments. eG claims to be unique in that it gathers information from both inside the guest as well as outside the guest (from the host) and correlates the data from the two views.

    Today, eG Innovations announced that eG VM Monitor now supports not only VMware ESX and Solaris Containers, but also Citrix XenServer and Solaris Logical Domains (LDoms). In addition, the new version of eG Enterprise Suite provides integration with VMware Virtual Desktop Manager (VDM) to provide greater visibility in a virtual desktop infrastructure (VDI) environment.

    What I didn’t see in today’s announcement was support for Hyper-V. Given that today was Microsoft’s big virtualization launch event, I kind of expected to see eG announcing Hyper-V support as well.


    Here’s the latest installment of Virtualization Short Takes, my occasionally-weekly view on various virtualization news, reviews, and other happenings. Hopefully I can share something interesting with you!

    • Via VMblog.com, I saw that Transitive Corporation is supporting the use of QuickTransit within Hyper-V virtual machines. This is interesting because it extends the ability of Hyper-V to help customers consolidate applications. QuickTransit, in case you aren’t aware, allows applications written for Solaris/SPARC environments to run in Linux/x86 environments. It was also the technology behind Apple’s Rosetta, which allowed Mac users to run PowerPC apps on Intel Macs. Does anyone know if QuickTransit is supported within VMware VMs, or is this specific to Hyper-V?
    • This one was quite interesting to me. Question #2 is particularly applicable: why is a reboot required, anyway? (Yes, yes, I know—there is a workaround that does not require a reboot. It’s the principle of the matter.)
    • Via various sources on the Internet, I learned about the release of ESX Manager. This looks like quite an interesting tool, although I have not yet had the opportunity to install or try it. Anyone out there tried this and have some feedback for us?
    • Every now and then, something comes up about Citrix XenServer and Xen and it makes me wonder about the relationship between Citrix and the open source Xen community. The latest thing is what appears to be an offhand comment by Simon Crosby of Citrix where he says, “Because we own the hypervisor, we can do much more integration and development around it” (read it in context here). What does that mean? What does “ownership” of the Xen hypervisor mean? And if the Xen hypervisor is licensed under an open source license (GNU GPL v2, according to this page), how can Citrix make proprietary extensions to the hypervisor without being forced to release those extensions back to the community? I guess I just don’t understand the relationship there and how it works. This is where the murky waters of a commercial entity “owning” an open source project come into play, in my mind.
    • I ran across this very useful tip for creating a vSwitch with a specific number of ports (sketched after this list). It looks like Dwight Hubbard, the maintainer of the site, also has some other interesting posts. Might be worth adding his feed to your RSS reader.
    • Nick Triantos discusses NetApp’s Site Recovery Adapter (SRA) and its role with VMware Site Recovery Manager (SRM). Anyone have any links to similar discussions of the SRAs for other storage vendors?
    • John Howard provides a great breakdown of how Hyper-V generates dynamic MAC addresses and how Hyper-V attempts to protect against MAC collisions in some circumstances.
    • The VI3 Security Hardening Guide has been updated, which is good because some people felt it just didn’t go far enough.
    • VMware reiterated their stance on being storage protocol agnostic, and in the article included a very useful table that summarizes the various products and technologies and which storage protocols they support. While the rest of the post is useful, that summary table is probably the most helpful part.
    • Interested in trying out Hyper-V, but don’t have shared storage? Take a look at this blog post. I think you’ll find it helpful.
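
    Since I mentioned the vSwitch tip above, here’s the gist as a quick sketch from the ESX Service Console; the switch name and port count are illustrative, and the host may hold back a few ports for its own use:

        # Create a vSwitch with a specific number of ports, then list
        # the vSwitches to confirm
        esxcfg-vswitch -a vSwitch2:128
        esxcfg-vswitch -l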

    I’m always on the lookout for other interesting or useful virtualization news, tips, and tricks, so feel free to share with me and other readers in the comments.


    I just wanted to provide a quick update on some articles I have in the works to be (hopefully) published soon.

    • I’m working on an article discussing when to use various NIC teaming configurations with VMware ESX. There are some significant repercussions here for a variety of network configurations, but especially so for configurations involving IP-based storage (iSCSI or NFS).
    • I’m finally wrapping up an article on the Xsigo I/O Director. I’ve been working with a Xsigo VP780 in the lab for quite some time, and this article will provide a brief overview along with some tips and tricks.
    • I received word from HP that I should be getting a ProCurve switch in my lab soon, so that means I can provide a ProCurve-oriented version of this NIC teaming and VLAN trunking article.
    • I have some notes on using NetApp Open Systems SnapVault (OSSV) in conjunction with VMware ESX that I plan to post here as well.

    New versions of the Linux and Solaris AD integration articles are on the way as well, starting with an update of the Solaris instructions to accommodate Solaris 10 Update 5 and Windows Server 2008.

    If there’s anything else you’re interested in seeing, let me know in the comments. Thanks for reading!

    UPDATE: The NIC utilization article is available here.


    I came across an interesting paper discussing how various virtualization environments protect well-behaved VMs from misbehaving VMs. The paper is available here.

    In the tests described in the paper, researchers used virtual machines on Xen 3.0 (the open source hypervisor, not the commercial XenServer product, as far as I can tell), VMware Workstation 5.5, and “Open Solaris 10” (quotes mine). As pointed out in the paper, these three environments represent paravirtualization, full virtualization, and OS virtualization (or containers). I’m not sure the researchers actually meant OpenSolaris; I suspect not, since that’s a very recent release. Instead, I believe they probably just meant Solaris 10. On Xen and VMware Workstation, both running under Linux, they used Linux-based VMs; on Solaris, they used additional instances of Solaris. Each VM or instance ran Apache 2 and was tested using physical clients to connect to the HTTP server in each VM.

    The results are interesting; VMware showed the best protection of well-behaved VMs from a misbehaving VM, followed by Xen, with Solaris Containers providing the least protection. The level of protection was tested using a memory consumption stress test, a CPU stress test, a disk I/O stress test, and a network I/O stress test. I’d encourage you to have a look at the full paper for all the details.
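
    To give a flavor of what “misbehaving” means here (this is my own illustrative sketch, not the paper’s actual test harness), imagine running loads like these inside one guest while measuring Apache throughput in its neighbors:

        # CPU hog: spin a shell loop in the background
        while :; do :; done &

        # Disk I/O hog: stream writes to a scratch file
        dd if=/dev/zero of=/tmp/fill bs=1M count=4096

        # Memory hog: grab roughly 512 MB and hold it for five minutes
        perl -e '$x = "A" x (512 * 1024 * 1024); sleep 300'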

    These results are very interesting, but I wonder how much they would change if we were to use VMware’s ESX server product line instead of one of the hosted products like VMware Workstation. Since ESX is also representative of “full virtualization” solutions, I’d be curious to know whether the results seen with VMware Workstation hold there as well.

    In any case, the results are a validation of what we, as consultants, have been talking about: full virtualization provides the best isolation of well-behaved workloads from ill-behaved workloads, preventing a workload in one VM from affecting other workloads due to mishandling of CPU, RAM, disk, or network resources. As the researchers conclude in the paper, “…it is clear that VMware completely protects the well-behaved VMs under all stress tests. Its performance is sometimes substantially lower for the misbehaving VM, but in a commercial hosting environment this would be exactly the right tradeoff to make.”

