What is SR-IOV?

I/O virtualization is a topic that has received a fair amount of attention recently, due in no small part to the attention given to Xsigo Systems after their participation in the Gestalt IT Tech Field Day. While Xsigo uses InfiniBand as their I/O virtualization mechanism, there are other I/O virtualization technologies out there as well. One of these technologies is Single Root I/O Virtualization (SR-IOV).

So what is SR-IOV? The short answer is that SR-IOV is a specification that allows a PCIe device to appear to be multiple separate physical PCIe devices. The SR-IOV specification was created and is maintained by the PCI SIG, with the idea that a standard specification will help promote interoperability.

SR-IOV works by introducing the idea of physical functions (PFs) and virtual functions (VFs). Physical functions (PFs) are full-featured PCIe functions; virtual functions (VFs) are “lightweight” functions that lack configuration resources. (I’ll explain why VFs lack these configuration resources shortly.)

SR-IOV requires support in the BIOS as well as in the operating system instance or hypervisor that is running on the hardware. Until very recently, I had been under the impression that SR-IOV was handled solely in hardware and did not require any software support; unfortunately, I was mistaken. Software support in the operating system instance or hypervisor is definitely required. To understand why, I must talk a bit more about PFs and VFs.

I mentioned earlier that PFs are full-featured PCIe functions; they are discovered, managed, and manipulated like any other PCIe device. PFs have full configuration resources, meaning that it’s possible to configure or control the PCIe device via the PF, and (of course) the PF has full ability to move data in and out of the device. VFs are similar to PFs but lack configuration resources; basically, they only have the ability to move data in and out. VFs can’t be configured, because that would change the underlying PF and thus all other VFs; configuration can only be done against the PF. Because VFs can’t be treated like full PCIe devices, the OS or hypervisor instance must be aware that they are not full PCIe devices. Hence, OS or hypervisor support is required for SR-IOV so that the OS instance or hypervisor can correctly detect and initialize PFs and VFs. At this time, SR-IOV support is found only in some of the open source Linux kernels; this means it will find its way into KVM and Xen first. I do not have a timeframe for SR-IOV support in VMware vSphere or Microsoft Hyper-V.
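To make the OS-support requirement a bit more concrete, here’s a rough sketch of how a Linux host exposes the PF/VF split through sysfs. Note that the `sriov_totalvfs` and `sriov_numvfs` attribute names come from later Linux kernels than the ones discussed in this post, so treat this as illustrative rather than a description of the kernels mentioned above:

```python
import os

def list_sriov_pfs(sysfs_root="/sys/bus/pci/devices"):
    """Scan sysfs for SR-IOV-capable physical functions.

    Returns a dict mapping PCI address -> (total_vfs, enabled_vfs).
    On hosts without any SR-IOV devices this simply returns {}.
    """
    pfs = {}
    if not os.path.isdir(sysfs_root):
        return pfs  # not Linux, or sysfs unavailable
    for addr in os.listdir(sysfs_root):
        total_path = os.path.join(sysfs_root, addr, "sriov_totalvfs")
        num_path = os.path.join(sysfs_root, addr, "sriov_numvfs")
        if not os.path.isfile(total_path):
            continue  # this device has no SR-IOV capability
        with open(total_path) as f:
            total = int(f.read())
        enabled = 0
        if os.path.isfile(num_path):
            with open(num_path) as f:
                enabled = int(f.read())
        pfs[addr] = (total, enabled)
    return pfs

if __name__ == "__main__":
    for addr, (total, enabled) in list_sriov_pfs().items():
        print(f"{addr}: {enabled}/{total} VFs enabled")
```

The key point the sketch illustrates: the kernel (not the device alone) decides how many VFs to bring up, which is exactly why software support is mandatory.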

So, putting this all together: what do you get when you have an SR-IOV-enabled PCIe device in a system with the appropriate BIOS and hardware support and you’re running an OS instance or hypervisor with SR-IOV support? Basically, you get the ability for that PCIe device to present multiple instances of itself up to the OS instance or hypervisor. The number of virtual instances that can be presented depends upon the device.

The PCI SIG SR-IOV specification indicates that each device can have up to 256 VFs. Depending on the SR-IOV device in question and how it is made, it might present itself in a variety of ways. Consider these examples:

  • A quad-port SR-IOV network interface card (NIC) presents itself as four devices, each with a single port. Each of these devices could have up to 256 VFs (single-port NICs) for a theoretical total of 1,024 VFs. In this case, each VF would essentially represent a single NIC port.
  • A dual-port SR-IOV host bus adapter (HBA) presents itself as one device with two ports. With 256 VFs, this would result in 512 HBA ports spread across 256 dual-port virtual HBAs.
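It’s worth a quick illustration of how each of these VFs gets its own identity on the bus. The SR-IOV specification derives each VF’s PCIe routing ID (bus/device/function) from the PF’s routing ID plus two values in the PF’s SR-IOV capability, First VF Offset and VF Stride. The following sketch shows the arithmetic; the offset and stride values in the example are hypothetical, not taken from any particular device:

```python
def vf_routing_id(pf_rid, first_vf_offset, vf_stride, vf_number):
    """Per the SR-IOV spec: RID(VFn) = RID(PF) + FirstVFOffset + (n-1) * VFStride.
    Routing IDs are 16-bit values; vf_number is 1-based."""
    if vf_number < 1:
        raise ValueError("VF numbers are 1-based")
    return (pf_rid + first_vf_offset + (vf_number - 1) * vf_stride) & 0xFFFF

def decode_rid(rid):
    """Split a 16-bit routing ID into (bus, device, function)."""
    return rid >> 8, (rid >> 3) & 0x1F, rid & 0x7

# A PF at 02:00.0 has routing ID 0x0200; with a hypothetical
# First VF Offset of 8 and VF Stride of 2, the first two VFs land at:
print(decode_rid(vf_routing_id(0x0200, 8, 2, 1)))  # (2, 1, 0) -> 02:01.0
print(decode_rid(vf_routing_id(0x0200, 8, 2, 2)))  # (2, 1, 2) -> 02:01.2
```

Because the offset and stride can push the routing ID past the device/function bits, VFs can even land on higher bus numbers than their PF, which is part of why the OS or hypervisor has to be SR-IOV-aware during enumeration.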

These are, of course, theoretical maximums. Because each VF requires actual hardware resources, practical limits are much lower. Currently, 64 VFs seems to be the upper limit for most devices.

In situations where VFs represent additional NIC ports or HBA ports, other technologies must also come into play. For example, suppose that you had an SR-IOV-enabled Fibre Channel HBA in a system; that HBA could present itself as multiple, separate HBAs. Of course, because these logical HBAs would still share a single physical HBA port, you’d need NPIV (more information here) to support running multiple WWNs and N_Port_IDs on a single physical HBA port.

Similarly, you might have a Gigabit Ethernet NIC with SR-IOV support. That NIC could theoretically (according to the PCI SIG SR-IOV specification) present itself as up to 256 virtual NICs. Each of these NICs would be discrete and separate to the OS instance or hypervisor, but the physical Ethernet switch wouldn’t be aware of the VFs. Switches wouldn’t, by default, reflect some types of traffic arriving inbound on a port (from one VF) back out on the same port (to another VF). This could create some unexpected results.

SR-IOV does have its limitations. The VFs have to be the same type of device as the PF; you couldn’t, for example, have VFs that presented themselves as one type of device when the PF presented itself as a different type of device. Also, recall from earlier that VFs generally can’t be used to configure the actual physical device, although the extent to which this is true depends upon the implementation. The SR-IOV specification allows some leeway in the actual implementation; this leeway means that some SR-IOV-enabled NICs may also have VF switching functionality present (where the NIC could switch traffic between VFs without the assistance of a physical switch) while other NICs may not have VF switching functionality present (in which case VFs would not be able to communicate with each other without the presence of a physical switch).

I do want to point out that SR-IOV is related to, but not the same as, hypervisor bypass (think VMDirectPath in VMware vSphere). SR-IOV enables hypervisor bypass by providing the ability for VMs to attach to a VF and share a single physical NIC. However, the use of SR-IOV does not automatically mean that hypervisor bypass will also be involved. Hypervisor bypass is a topic that I’m sure I will discuss in more detail in the near future.

Finally, it’s worth noting that the PCI SIG is also working on a separate IOV specification that allows multiple systems to share PCIe devices. This specification, known as Multi-Root IOV (MR-IOV), would enable multiple systems to share PCIe VFs. I hope to have more information on MR-IOV in the near future as well.

You now should have a basic understanding of SR-IOV, what it does, what is necessary to support it, and some of the benefits and drawbacks that SR-IOV creates. Feel free to post any questions you have about SR-IOV in the comments and I’ll do my best to get answers for you.

  1. Rob

    Good article Scott, I’m interested to know if the Cisco UCS M81KR uses SR-IOV for its interface virtualisation and if it then works with vSphere?

  2. Nik Simpson

    Excellent summary of SR-IOV, but it misses one point with respect to VM live migration. For migration to work, both the source and target physical server for the VM migration must have the SR-IOV adapter, because the drivers in the VM expect to find the same physical hardware.

    BTW, if you want to get a good update on where MR-IOV is (and it’s already shipping for some applications) talk to NextIO.

  3. slowe

    Your point regarding VM live migration is not about SR-IOV, it’s about hypervisor bypass. That was why I tried to separate the discussion about SR-IOV and hypervisor bypass. SR-IOV enables hypervisor bypass, but SR-IOV doesn’t necessarily mean that you will always use hypervisor bypass.

    Otherwise, you are correct—in the current incarnations of hypervisor bypass, you generally give up live migration or have more stringent live migration requirements than you might otherwise.

    Thanks for commenting!

  4. lior

    Hi Scott,
    Let me join Rob & Nik. Excellent summary.
    I’m now starting to get into SR-IOV and MR-IOV (although more from the hardware perspective) and your post is a big help in understanding the big picture.
    Could you add some information on current uses of this technology (besides the two you’ve mentioned in the post) as well as future uses for VHs?

  5. rukhsana

    Hi Scott,

    Can you provide a comparison of how SR-IOV stacks up against MF cards used in conjunction with VT-d?

  6. Brian Johnson

    Just to let you know that we just released the 10Gb SR-IOV driver for Xen. Now both 1Gb and 10Gb Intel Adapters have SR-IOV support.

    SourceForge also has ixgbe (base/PF) and ixgbevf (Linux VF) available for download:


  7. Anjali

    I am interested in knowing whether the Yukon-Marvell chip 88E8022 supports SR-IOV or MR-IOV.
    Also, how does the performance from SR-IOV compare with IOMMU?
    Also, how does the performance from SR-IOV compare with IOMMU?

  8. Robert

    The Cisco UCS M81KR does NOT use SR-IOV. This was to allow Cisco to provide device drivers for new & older OSes without having to wait for the OS vendors to package/include them. The M81KR is SR-IOV capable with a firmware update so when the time comes, it can function under the standard SR-IOV or under Cisco-specific operation.

  9. slowe

    Robert, thanks for your comment. I’m glad to see you clearly state the relationship between the M81KR and SR-IOV; all I’ve been able to get is “it’s SR-IOV compliant,” which doesn’t really answer the question of whether or not it actually uses SR-IOV (which we know now that it doesn’t).

    Anjali, SR-IOV and IOMMU are two distinct (but slightly related) technologies, so I’m not sure that a performance comparison is pertinent (or even necessary).

    Rukhsana, I honestly don’t know how SR-IOV would compare to a MF device with VT-d. Anyone else care to chime in here with more information?

  10. Patrick

    From what I’ve read in the MR-IOV spec, you would expect about the same performance with MR-IOV as it is still a direct-assignment technology. Just a lot more complicated than SR-IOV (which is pretty complicated itself).

    For those still interested in learning about SR-IOV, there is an update to the SR-IOV Primer doc I just finished available here:
    as well as a little 10 minute video I did with some PowerPoint voodoo:

  11. slowe

    Good information—thanks Patrick!

  12. Brian Johnson

    To show the potential of SR-IOV in a virtual environment, take a look at Patrick’s summary of an IBM report on SPECvirt_2010 results: IBM* Releases SR-IOV Performance on Red Hat* KVM with Intel® Gigabit ET Server Adapter.

    Another resource to show the value of SR-IOV can be found on the SPECvirt_2010 benchmark results page. IBM just posted in Dec 2010 results of 5466.58@336VMs using the Intel(R) Ethernet Controller X520 with SR-IOV vs. 778@48VMs as a maximum without SR-IOV on 1Gb. Yes, this compares 10Gb vs. 1Gb, but if you read the IBM white paper that Patrick references you will see that the 1Gb non-SR-IOV results were about as much performance as you can get regardless of the pipe size. They show that by using SR-IOV in SPECvirt_2010 they see a performance advantage over non-SR-IOV, even on 1Gb. I hope this provides some additional information regarding the benefits that can be seen when SR-IOV is used in a virtual environment. Brian Johnson – Intel Corp.

  13. slowe

    Brian, also great information—thank you!

  14. Sumesh.J.K

    Informative blog on SR-IOV

  15. Ravi

    Excellent summary of SR-IOV. Thanks Scott.

  16. Ryan

    Very helpful, but unfortunately I’m having difficulty determining if the BIOS supports SR-IOV. Our sales lead for the vendor keeps citing information about when it’ll be available in software (Linux and Windows 8) but nothing about whether it’s supported in the hardware. Is there any sort of list, or keyword that can be looked for when attempting to find info about support? Would it be known by any other name than SR-IOV? Good article, explains it well; I’m just having difficulty finding out if it’s supported with the hardware we’re looking at.

  17. Roland

    Apparently the following servers will support SR-IOV with BIOS updates from their respective vendors.

    Dell: PowerEdge 11G Gen II Servers: T410, R410, R510, R610, T610, R710, T710 and R910.
    Further information is available at http://en.community.dell.com/techcenter/os-applications/w/wiki/3458.microsoft-windows-server-8.aspx

    Fujitsu: PRIMERGY RX900 S2 using BIOS version 01.05. http://www.fujitsu.com/fts/products/computing/servers/primergy/rack/rx900/

    HP: Proliant DL580 G7 and Proliant DL585 G7. Further information is available at http://www.hp.com/go/win8server

    For more details check out the following page for SR-IOV information on Windows 8 beta.

    Release Notes: Important Issues in Windows Server "8" Beta: http://technet.microsoft.com/en-us/library/hh831668.aspx

  18. Santosh

    Nice article, this is good to start

  19. Farrukh

    Well written. Good article, very helpful

  20. Maikel

    Great article. Thanks for publishing…

  21. Aaron

    So, if we use SR-IOV with just the PF, is it in fact meaningless? Do we only see one device with one NIC port?

  22. slowe

    Aaron, I’m not sure I understand your question. The PF is, by definition, the actual device. It’s the VFs (the virtual functions) that you’ll want to use.

  23. Sajid

    Quick question: will SR-IOV work with the Brocade FC switch line? Also, what about direct attached?

  24. Raja

    Thanks for such a nice article; you’ve explained things very well. I’ve been trying to get a good understanding of SR-IOV but nothing out there was as good as this.

  25. ramakrishna

    Good article. So, as I understand it, for SR-IOV to work, support is required in PCIe as well as in the device (e.g., a NIC card). Please add a little more information on this if possible, anyone?


  26. slowe

    Ramakrishna, SR-IOV requires support in the actual PCI device and the operating system/hypervisor.

  27. ramakrishna

    Slowe, thanks for the information.
    Rephrasing it: for SR-IOV to work, only the PCI device needs this feature supported in its hardware; the PCIe chipset hardware itself does not need any support. And the next thing is the OS instance or hypervisor in terms of software. Correct?


  28. slowe

    Ramakrishna, the chipset must support it, the device must support it, and the OS/hypervisor must support it.

  29. ramakrishna

    Slowe, thanks. It is clear now.

  30. Ranga Sankar

    Hello Slowe,
    So how is SR-IOV for FC different from vFC (Virtual Fibre Channel)? vFC is supported in Windows Server 2012 and allows individual VMs to have virtual Fibre Channel ports. The HBAs must support vFC, and NPIV support is also needed; vFC does not require SR-IOV support in the BIOS. I am wondering if both SR-IOV for FC and vFC serve the same purpose in consolidation and if there are any advantages/drawbacks of one over the other.

  31. Babu Alluri

    Hello Slowe,
    Thanks for the precise explanation and answers. As I started reading this article, I started mapping from the device up to the top software consumer: the device, the I/O chipset, and the software must all offer support. With that understanding, I would now put it this way: a device type/class (PF) can present multiple instances of the SAME type (VFs), the info can be populated to the chipset (only the PF can be configured through PCIe), and then it is up to the software (OS, hypervisor, etc.) to present them as multiple ports to a single OS or to VMs, etc. Someone comment if this representation is misleading.

  32. Jayram Deshpande

    A great and concise presentation of SR-IOV fundamentals. Bravo!
