HP’s ProLiant G6 Servers

I spent the day today at Hewlett-Packard’s Houston campus, talking about their Xeon 5500-based ProLiant servers. In HP parlance, these are the G6 (sixth generation) ProLiant servers, and HP has eleven different G6 server models. That’s more Xeon 5500-based servers than either IBM or Dell offers, and it represents a significant investment on HP’s part to incorporate this new technology into their product lineup. HP’s not alone in delivering servers based on the Xeon 5500 CPUs, but I think there may be a bit more here than meets the eye.

Memory Bandwidth

There is little doubt that you have seen some of the performance comparisons of the Xeon 5500 with previous generations; these comparisons typically show significant performance advantages for the Xeon 5500. Many people have attributed that to QuickPath Interconnect (QPI), the new high-speed bus architecture Intel uses with the Xeon 5500 CPUs. Among other things, QPI provides higher memory bandwidth—but how much higher depends on the amount of memory installed. That’s right: the more memory you install in the server, the slower your memory speed will be. This is a “dirty little secret” that many server vendors don’t want to disclose.

Populate a memory bus with only a single DIMM, and it runs at 1333MHz. Put two DIMMs on the bus, and it will run at 1066MHz. Put three DIMMs on a bus (the maximum supported for each bus), and the speed drops to 800MHz. Ouch! So, the very systems that can benefit most from larger amounts of RAM (like virtualization hosts) will also be the systems that see the least benefit from increased memory bandwidth.
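The stock behavior described above boils down to a tiny lookup table. Here is a sketch of that rule as code; note this is a simplification, since (as the updates at the end of the post mention) CPU wattage and DIMM type also factor into the final speed:

```python
# Stock Xeon 5500 rule from the post: bus speed depends on how many
# DIMMs populate each memory channel. Simplified -- CPU wattage and
# DIMM type (RDIMM/UDIMM, rank) can lower these numbers further.

def memory_bus_speed_mhz(dimms_per_channel: int) -> int:
    """Return the memory bus speed for a given per-channel DIMM count."""
    speeds = {1: 1333, 2: 1066, 3: 800}
    if dimms_per_channel not in speeds:
        raise ValueError("Each channel supports 1 to 3 DIMMs")
    return speeds[dimms_per_channel]

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"{n} DIMM(s) per channel -> {memory_bus_speed_mhz(n)}MHz")
```

The tension is visible right away: maximum capacity (3 DIMMs per channel) and maximum bandwidth (1 DIMM per channel) sit at opposite ends of the table.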

To differentiate themselves from some of the other server vendors, HP has spent extra engineering time on the signal integrity of the memory bus, so that in some instances they can actually preserve the bus speed. For example, when populating a memory bus with two DIMMs, HP ProLiant G6 servers can continue to run at 1333MHz instead of having to drop back to 1066MHz. In a server with 18 DIMM slots (like the DL380 G6, the BL490c G6, or the ML370 G6), this means the server can be loaded with up to 96GB of RAM and the memory will still run at 1333MHz. This helps to maximize both the capacity gains and the bandwidth gains of QPI and the Xeon 5500 CPU. To take advantage of this functionality, there is a BIOS setting that must be enabled.
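Some back-of-the-envelope math shows what this buys you. The channel/slot layout and the 8GB DIMM size below are my assumptions (8GB per DIMM is what makes two-per-channel population add up to 96GB); the speeds come from the post:

```python
# Hypothetical dual-socket G6 layout: 3 channels per CPU, 3 slots per
# channel, 18 slots total. The DIMM size is an assumption for
# illustration, not an HP spec.
CHANNELS = 6            # 3 per CPU x 2 CPUs
SLOTS_PER_CHANNEL = 3   # 18 slots total
DIMM_GB = 8             # assumed DIMM size

def capacity_gb(dimms_per_channel: int) -> int:
    """Total RAM when every channel holds this many DIMMs."""
    return CHANNELS * dimms_per_channel * DIMM_GB

# Stock Xeon 5500: 2 DIMMs per channel -> bus drops to 1066MHz.
# HP G6 (with the BIOS setting enabled): it stays at 1333MHz.
print(capacity_gb(2))  # 96 -> 96GB of RAM, still at 1333MHz on a G6
```

In other words, the HP exception moves the capacity/bandwidth trade-off point: you get double the RAM of a one-DIMM-per-channel configuration without giving up top memory speed.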

Power Efficiency

Power efficiency is another area where all the server vendors who are shipping servers with Xeon 5500 CPUs are claiming advances. I would imagine that most, if not all, of these claims of reduced power consumption are absolutely true. I would also guess that the server vendors are simply “piggy-backing” on the power consumption improvements that Intel made in the Xeon 5500 and not necessarily innovating on their own.

HP’s ProLiant G6 servers have what they call a “sea of sensors”: 32 different sensors on various areas of the system board that provide extensive information back to the system so that it can intelligently adjust fan speeds on the fly as the system operates. Fans are independently operated, so that a specific fan can be spun up to help reduce heat in specific areas of the system without having to spin up other fans. This can result in some pretty significant power reductions.

HP has also enabled systems with redundant power supplies to run primarily on one power supply instead of splitting the load equally between the power supplies. It’s a well-known fact that a power supply running at a higher load works more efficiently, so this further helps improve the ProLiant G6’s power savings.
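A toy model illustrates why that helps. The efficiency curve here is invented for illustration (real supplies publish their own curves), but its shape (poor efficiency at light load, better toward the middle of the range) is typical:

```python
# Why consolidating load onto one PSU can save power at the wall.
# The efficiency curve is a made-up placeholder, not HP data.

def psu_efficiency(load_fraction: float) -> float:
    """Hypothetical efficiency curve: weak at light load, capped at 92%."""
    if load_fraction <= 0:
        return 0.0
    return min(0.92, 0.60 + 0.5 * load_fraction)

def wall_power(dc_watts: float, psu_count: int, capacity: float = 750.0) -> float:
    """AC draw when dc_watts is split evenly across psu_count supplies."""
    per_psu = dc_watts / psu_count
    return psu_count * per_psu / psu_efficiency(per_psu / capacity)

balanced = wall_power(300, psu_count=2)  # two PSUs at ~20% load each
single = wall_power(300, psu_count=1)    # one PSU at ~40% load
```

With these made-up numbers, one supply at 40% load draws less at the wall than two supplies at 20% each, even though the DC load delivered to the server is identical; the second supply stays available for failover.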

<aside>By the way, I just have to point out that all the various G6 models now share a common power supply. Finally! Kudos to the HP team who pushed this through.</aside>

I’ve always thought highly of the ProLiant server line, although some products were better liked than others (I didn’t care for p-Class blades, for example). With this G6 release, I like HP’s ProLiant servers even more. All the various features—the universal power supplies, the well-marked memory installation guidelines inside the chassis, the signal integrity work to help preserve bus speed—to me show an attention to detail that is exciting to see.

If you’re interested in learning more about the G6 products, HP is hosting a “Web Jam” event tomorrow (Tuesday, April 7, 2009). Visit the site and register to join the virtual event. I’ll be online during the virtual event, chatting with visitors and answering questions, so feel free to register and stop by.

UPDATE: Partly in conjunction with this post, it has come to light that the maximum bus speed is also affected by the wattage of the CPU. Only the 95 watt CPUs are able to run at the maximum speed of 1333MHz. In addition, HP did clarify today that the ability to maintain the bus speed when using 2 DIMMs is not yet available, but will be available shortly in a ROM update.

UPDATE 2: I received word from HP today that the ROM update that enables 1333MHz operation with 2 DIMMs on each bus is actually already on all shipping “Nehalem” systems. Of course, this would require a 95W CPU, since only 95W CPUs can support the 1333MHz memory bus speed.



  1. Dejan Ilic

    Besides changing the bus, there is now a new type of memory (again!), leaving the old one in the dust. It now uses DDR3 Registered (RDIMM) or Unbuffered (UDIMM) memory, according to the specs. So the industry pushes for completely new servers again.

    I praise all the upgrades that are done to this generation, but I’m sick of doing a “fork-lift” upgrade every time.

  2. Dejan Ilic

    Don’t forget the other caveats:
    If only one processor is installed, only half the DIMM slots are available.
    If you use UDIMMs, only 12x2GB (24GB) is supported.
    If you have UDIMMs, you can’t mix them with RDIMMs in the same server.
    If you want performance, choose quad-ranked memory instead of dual-ranked.

    There are now at least four (!) different kinds of memory of the same size, depending on the “rank”: single- and dual-rank UDIMM, dual- and quad-rank RDIMM.

    Remember that, and keep to a standard in your datacenter, or you will have an administrative hell keeping track of memory if you wish to move it around when it’s needed somewhere else.

    Everything according to the recommendations on HP’s site, and to make our lives easier.

  3. slowe


    Yes, you are correct about some of the other caveats of all Xeon 5500-based servers, unfortunately. These aren’t specific to HP, but rather to the new CPU and new chipset. I agree with you; I wish they’d not forced people down this road with all the various different types of memory.

  4. adelp

    Dejan – I will actually be going over all of the memory caveats shortly. Scott and I are trying to get to the bottom of the standards and what some of the manufacturers are doing to address some of the memory limitations. Stay tuned!

  5. weetbix

    I work as a tech at an HP dealer, and I was interested to see the bit on memory speed, as it contradicted the info I had seen so far. Then when I returned today to comment, I saw your update on that feature. Anyway, I’ll still add the info I was going to, as the table HP has in their QuickSpecs is quite good at laying out the options available.

    If you check out the memory section, you will find a list of 19-odd “guidelines,” plus a table which lays out RDIMM/UDIMM, memory rank, and speed per number of installed DIMMs.

    Link to the DL360 G6 section on memory is

  6. vmzare

    Thanks for the article above. The link below provides very good details about memory configuration.

  7. Rick Vanover

    Thanks, Scott. Good info as always.

  8. ssl

    I’m in the throes of purchasing one of these beasts (an HP DL360 G6) for virtualization, and I sent your comment, i.e.,
    “I received word from HP today that the ROM update that enables 1333MHz operation with 2 DIMMs on each bus is actually already on all shipping “Nehalem” systems. Of course, this would require a 95W CPU, since only 95W CPUs can support the 1333MHz memory bus speed.”
    to my local sales rep. He responded with the QuickSpecs and reiterated that at 2 DIMMs per channel the bus speed was 1066MHz.


    I’m wondering who you “received word from” on this, and whether anyone with a working G6 can independently verify it, because otherwise I’m questioning the truthiness…

  9. slowe

    My information came from sources at HP’s Houston campus. I agree that the QuickSpecs still say that the bus speed drops to 1066MHz with two DIMMs; what I have been told is that the QuickSpecs were not updated with the newer information. I’ll continue to track this down and see what additional information I can uncover.

  10. jeff

    Regarding the “truthiness” (per ssl’s entry) of the new 2 DIMMs per channel @ 1333MHz feature from HP, I just want to reiterate Scott’s original news and the updates posted. The feature is now available via a ROM update through the ROM-Based Setup Utility (RBSU). Yes, some of the “not specific to HP” caveats still apply (95-watt CPUs, for example). QuickSpecs edits are in the works too.

  11. Casper42

    I like all the frustration you guys seem to be having over the memory.

    I especially like this one:
    “I wish they’d not forced people down this road with all the various different types of memory.”

    So let me get this straight. They FORCED you into having TOO MANY CHOICES? Get a clue.

    As for the move to DDR3, both AMD and Intel are going that direction, so I don’t see how you can blame Intel for forcing your hand.
    DDR2 FB-DIMMs are going away mainly because they were like the RAMBUS of the server world and were so power-hungry that you didn’t get to fully see the low-power benefits of the Core 2 architecture that everyone should have. It was a mistake to go that direction, but it’s water under the bridge now.

    The fact is that for any 24×7 Datacenter you should put the Unbuffered memory out of your head and just move on with life. Those are meant for smaller shops and workstation class machines. Keep in mind the Xeon 5500 family can be used in everything from workstations to small blades to storage servers to Virtualization optimized machines.

    As for single rank vs dual rank vs quad rank, this is all about the number of channels the machine can support. I have not heard of any major performance gains from quad rank memory as previously stated. The only concern I think people should have is to find out from HP what the max number of memory channels is across the entire machine. If DIMM Sockets x 4 > max memory channels the machine supports, then you will not be able to fully populate all the sockets on the machine.

    4GB DIMMs seem to be the sweet spot right now as far as pricing goes, both for DDR2 and DDR3, but with the NUMA design of the Xeon 5500s, you would need 6 DIMMs to get the most bandwidth to the memory in a system. That’s 12/24GB (2GB/4GB DIMMs) of memory per machine, which is obscene in some cases. So doing things like only running 4 DIMMs (2 per proc) at 1333MHz is probably a better solution for some. You don’t get the full bandwidth of the platform, but you do get the memory running at full speed.

    Anyway, I think it’s a GOOD problem to have that we have so many options to choose from. As Dejan said, just come up with a standard that works for your environment. And keep in mind that you might need more than one standard (VMware ESX hosts vs. small app/web servers).


  12. Tim

    All, there’s a handy tool which assists with all BTO- and CTO-configured servers using DDR3; it might be useful:


  13. Steve

    I notice that when you order one of the Nehalem machines, the memory options are all for the very high speed. Does this bandwidth issue not have anything to do with the intrinsic speed of the memory itself? I guess my real question is: is there any point to putting 1333MHz memory into a new Nehalem machine if you’re going to put in 3 DIMMs per channel? Or might you just as well use 800MHz memory, because that’s all you’re going to see anyway?

  14. Elsa

    Hello – could someone explain to me the difference between, and functions of, “HP blades” and “HP ProLiant”? I’m not technical, just trying to understand what I’m supposed to be looking for to fill a position that I’m recruiting for.


  15. Guridosul

    It is sometimes confusing: the HP BladeSystem family is part of the larger HP ProLiant family.
    HP ProLiant servers are x86-based (Intel or AMD), while HP Integrity servers are Itanium-based. Blades also exist in the HP Integrity family.
    The HP ProLiant family is made up of three lines of products: BL (blade), DL (density-optimized rackmount), and ML (modular tower) servers.

    In the BL (blade) line, there are four series: 200, 400, 600, and 800:
    200: two-socket, low-end, low-cost, half-height slot
    400: two-socket, high-performance, virtualization, 2 LOMs, 2 mezzanines
    600: four-socket, full-height slot, 4 LOMs, 3 mezzanines
    800: Itanium-based, 2 or 4 sockets, full-height slot

    Finally, if the number above ends in 0, it’s Intel; if it ends in 5, it’s AMD.
    Ex: BL460: Intel Xeon, 2-socket; BL685: AMD, 4-socket.

    Hope this helps
