
Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly with regard to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later), recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out by discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here are a couple of quick walk-throughs (first one, second one) on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula site, so keep that in mind, as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I wanted to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


This is session SFTS012, titled “Designing a Trusted Cloud with OpenStack.” It is, unfortunately, my last session of the conference; I’m leaving from here to head to the airport to catch my flight home. The speaker for the session is Vin Sharma, who helps lead open source software strategies at Intel. The focus of this session, as you can tell from the title, is on security and trust (presumably talking about how to leverage Intel TXT with OpenStack).

Intel’s “Cloud 2015” vision recognizes there will be more users, more devices, and more data, and is shooting toward an ecosystem of open, interoperable cloud operating environments (OEs) built on open APIs and open standards. (That’s a lot of “open”.)

Based on feedback from ODCA (Open Data Center Alliance) members, security issues are top of mind for cloud implementations. These issues are second only to concerns about how to migrate applications to cloud OEs (or, in Sharma’s terms, “how to cloudify applications”). 20% of ODCA members included security-related issues among their top challenges for cloud implementations.

Sharma reviews again some of the ODCA cloud computing usage models. In the security space, there are a number of different usage models available. Sharma will focus on the “Security Provider Assurance” usage model during this presentation.

The challenges in this space include:

  • Proving compliance with an audit record
  • Providing visibility or control over the placement of workloads in the cloud

Sharma (and Intel) believes that “trusted compute pools” are the answer to these challenges. Using Intel technologies like TXT (Trusted Execution Technology) along with a policy engine and related components such as TPM, providers can build “trusted compute pools” to help solve security-related issues in a cloud OE.

The problem, according to Sharma, is that the natural evolution of open source projects–which are playing an increasingly influential role in the direction of data centers, cloud OEs, and provider implementations/services–is often at odds with what enterprises need. This means that there must be some sort of “driving force,” such as a vendor or organization, that helps shape and focus open source development in the right direction. Intel believes OpenStack is the right cloud OE, and believes that the support of OpenStack across the industry provides the right “shape and focus” to ensure that OpenStack starts to address enterprise data center needs. For this reason, Sharma states, Intel believes OpenStack is the right vehicle to address the ODCA usage models and the right place to implement trusted compute pools.

While Sharma believes OpenStack is the right vehicle, it still has a way to go. He shows a couple of slides that demonstrate that users expect a certain level of functionality, but OpenStack is (today) only prepared to deliver a subset of that functionality. This gap provides a number of opportunities, not only for Intel but also for other vendors. This is especially true in areas like auditing and security incident event management (SIEM). (RSA, are you listening?)

At the heart of enabling trusted compute pools is the scheduler, and that’s where Intel started. The scheduler needs to be able to make intelligent decisions about where a VM should be provisioned, so that it can place workloads on the basis of platform characteristics (in this particular instance, whether a host is trusted or untrusted). While the initial changes to the scheduler are focused on trusted compute pools, there are additional directions to take the scheduler (power consumption, workload characteristics, and performance, for example). In order to be able to determine the trust status of a host, OpenStack needs an attestation service. This allows the scheduler to determine whether a host is trusted or untrusted, and therefore make intelligent scheduling decisions based on trust and security policy.

So how does this open attestation service that Intel has created work? It uses something called the TrouSerS stack (I’m not familiar with this one) and a host agent to determine trusted/untrusted status. The attestation server uses HTTPS to communicate with the host agent’s API, and provides an API by which OpenStack can communicate with the attestation server (in order to check status). Sharma indicates that a white paper is under development that will provide more details on exactly how this is implemented. The OpenAttestation code is available on GitHub. The other components required to make this work will either be delivered in Folsom (where changes in the scheduler are available) or already in the Linux kernel (like the tboot functionality/support).
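To make the flow concrete, here is a rough Python sketch of the scheduler-side logic: filter candidate hosts by their attested trust status before placement. The function names, response values, and host data are all hypothetical, not the actual OpenAttestation API.

```python
# Hypothetical sketch of trusted-compute-pool scheduling. The scheduler
# filters candidate hosts by asking an attestation service for each
# host's trust status before placing a trusted workload.

# Stand-in for attestation server responses; the real service verifies
# TXT/TPM measurements via a host agent and is queried over HTTPS.
ATTESTATION_RESPONSES = {
    "host-a": "trusted",
    "host-b": "untrusted",
    "host-c": "trusted",
}

def attest(host):
    """Mocked attestation call; a real client would issue an HTTPS request."""
    return ATTESTATION_RESPONSES.get(host, "unknown")

def filter_trusted(hosts, require_trusted=True):
    """Scheduler filter: keep only hosts whose attestation says 'trusted'."""
    if not require_trusted:
        return list(hosts)
    return [h for h in hosts if attest(h) == "trusted"]

candidates = ["host-a", "host-b", "host-c"]
print(filter_trusted(candidates))  # ['host-a', 'host-c']
```

A real implementation would plug in as a scheduler filter, with the trust requirement driven by per-flavor or per-tenant security policy rather than a boolean flag.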

At this point Sharma wraps up the session.


Now that day 2 of Intel Developer Forum (IDF) 2012 is behind me, here’s a quick summary of the day. Most of today was taken up by meetings; some of these were meetings with various Intel folk to get my feedback on the event, others were meetings that I’m not at liberty to discuss. (Sorry folks!) I did, however, manage to capture a couple of sessions:

DATS002: Big Data Meets High-Performance Computing:

CLDS006: Exploring New Intel Xeon Processor E5 Based Platform Optimizations for 10 Gb Ethernet Network Infrastructures

That second session, in particular, was a really good session. If you ever get the chance to listen to Brian Johnson from Intel speak on 10 Gigabit Ethernet design in virtualized environments, you are in for a treat. The guy knows his stuff, no question about it. I would have loved for that session to run two hours so that Brian could have gone into even more detail about some of the intricacies that go into properly architecting a 10 Gb Ethernet environment for vSphere.

<aside>For example, did you know that some servers have PCIe slots that have x8 connectors, but are only wired for x4 lanes? If you don’t understand why that’s important, then you really need to sit in one of Brian’s sessions!</aside>

Brian shared a bunch of information with me during VMworld a couple of weeks ago, and I’m still processing that along with the information he shared today in his session. I hope to be able to break it all down into digestible chunks and post more information here in the near future.

So, aside from meeting Robert Scoble at Starbucks this morning, that’s it for my day 2 summary. Feel free to post any thoughts or questions in the comments below.


This is session CLDS006, “Exploring New Intel Xeon Processor E5 Based Platform Optimizations for 10 Gb Ethernet Network Infrastructures.” That’s a long title! The speakers are Brian Johnson from Intel and Alex Rodriguez with Expedient.

The session starts with Rodriguez giving a (thankfully) brief overview of Expedient and then getting into the evolution of networking with 10 Gigabit Ethernet (GbE). Rodriguez provides the usual “massive growth” numbers that necessitated Expedient’s relatively recent migration to 10 GbE in their data center. As a provider, Expedient has to balance five core resources: compute, storage (capacity), storage (performance), network I/O, and memory. Expedient found that migrating to 10 GbE actually “unlocked” additional performance headroom in the other resources, which wasn’t expected. Using 10 GbE also matched upgrades in the other resource areas (more CPU power, more RAM through more slots and higher DIMM densities, larger capacity drives, and SSDs).

Rodriguez turns the session over to Brian Johnson, who will focus on some of the specific technologies Intel provides for 10 GbE environments. After briefly discussing various form factors for 10 GbE connectivity, Johnson moves into a discussion of some of the I/O differences between Intel’s 5500/5600 processors and the E5 processors. The E5 processors integrate PCI Express root ports, providing upwards of 200 Gbps of throughput. This is compared to the use of the “Southbridge” with the 5500/5600 series CPUs, which were limited to about 50 Gbps.

Integrated I/O in the E5 CPUs has also allowed Intel to introduce something like Intel Data Direct I/O (DDIO). DDIO allows PCIe devices to DMA information directly to cache–instead of main memory–where it can then be fetched by a processor core. This results in reduced memory transactions and, as a result, greater performance. The end result is that the E5 CPUs can support more throughput on more ports than previous generation CPUs (up to 150 Gbps across 16 10 GbE ports with an E5-2600 CPU).

Johnson also points out that the use of AES-NI helps with the performance of encryption, and turns the session back over to Rodriguez. Rodriguez shares some details on Expedient’s experience with Intel AES-NI, 10 GbE, and DDIO. In some tests that Expedient performed, throughput increased from 5.3 Gbps at ~91% CPU utilization with a Xeon 5500 (no AES-NI) to 33.3 Gbps at ~79% CPU utilization on an E5-2600 with AES-NI support. (These tests were 256-bit SSL tests with OpenSSL.)
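Normalizing those quoted numbers by CPU utilization gives a rough sense of the efficiency gain. This is just simple arithmetic on the figures above; the "Gbps per CPU-percent" metric is my own illustration, not something from the session.

```python
# Normalize the quoted OpenSSL test results by CPU utilization to get a
# rough "Gbps per CPU-percent" efficiency figure (throughput and CPU
# numbers are the ones quoted in the session).
xeon_5500 = {"gbps": 5.3, "cpu_pct": 91}   # no AES-NI
e5_2600 = {"gbps": 33.3, "cpu_pct": 79}    # with AES-NI

eff_old = xeon_5500["gbps"] / xeon_5500["cpu_pct"]
eff_new = e5_2600["gbps"] / e5_2600["cpu_pct"]

print(f"Xeon 5500: {eff_old:.3f} Gbps per CPU-%")
print(f"E5-2600:   {eff_new:.3f} Gbps per CPU-%")
print(f"Efficiency improvement: {eff_new / eff_old:.1f}x")  # roughly 7.2x
```

In other words, the E5-2600 with AES-NI isn't just pushing ~6x the throughput; it's doing so with CPU cycles to spare.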

Rodriguez shares some of the reasons why Expedient felt 10 GbE was the right choice for their data center. Using 1 GbE would have required too many ports, too many cables, and too many switches; 10 GbE offered Expedient a 23% reduction in cables and ports, a 14% reduction in infrastructure costs, and offered a significant bandwidth improvement (compared to the previous 1 GbE architecture).

Next the presentation shifts focus a little bit to discuss FCoE. Rodriguez goes over the reasons that Expedient is evaluating FCoE for their data center. Expedient is looking to build the first Cat6a-based 10GBase-T FCoE environment leveraging FC-BB-6 and VN2VN standards.

Johnson takes over again to discuss some of the specific technical items behind Expedient’s FCoE initiative. Johnson shows a great diagram that reviews all the various types of VM-to-VM communications that can exist in modern data centers:

  • VM-to-VM (on the same host) via the software-based virtual switch (could be speeds of 30 to 40 Gbps in this use case)
  • VM-to-VM (on the same host) via a hardware-based virtual switch in an SR-IOV network interface card (NIC)
  • VM-to-VM (on a different host) over a traditional external switch

One scenario that Johnson didn’t cover was VM-to-VM traffic (on different hosts) over a fabric extender (interface virtualizer) environment, such as a Cisco Nexus 2000 connected up to a Nexus 5000 (there are some considerations there; I’ll try to discuss those in a separate post).

Intel VT-c actually provides a couple of different ways to work in virtualized environments. VMDq can provide a hardware assist when the hypervisor softswitch is involved, or you can use hypervisor bypass and SR-IOV to attach VMs directly to VFs (virtual functions). Johnson shows that the E5 processor provides higher throughput at lower CPU usage with VMDq compared to a Xeon 5500 CPU (tests were done using an Intel X520 with VMware ESXi 5.0). Using SR-IOV–support for which is included in vSphere 5.1 as well as Microsoft Windows Server 2012 and Hyper-V–allows VMware customers to use DirectPath I/O to assign VMs directly to a VF, bypassing the hypervisor. (Note that there are trade-offs as a result.) In this case, the switching is done in hardware in the SR-IOV NIC. The use of SR-IOV shows dramatic improvements in throughput with small packet sizes as well as significant reductions in CPU utilization. Because of the trade-offs associated with SR-IOV (no hypervisor intervention, no vMotion on vSphere, etc.), it’s not a great general-purpose solution. It is, however, very well-suited to workloads that need predictable performance and that work with lots of small packets (firewalls, load balancers, other network devices).

Going back to the earlier discussion about PCIe root ports being integrated into the E5 CPUs, this leads to a consideration for the placement of PCIe cards. Make sure your high-speed cards aren’t inserted in a slot that runs through the C600 chipset southbridge. Make sure that you are using a Gen2 x8 slot, and make sure that the slot is actually wired to support a x8 card (some slots on some systems have a x8 connector but are only wired for x4 throughput). Johnson recommends using either LoM, slot 2, slot 3, or slot 5 for 10 GbE PCIe NICs; this will ensure direct connections to one of the CPUs and not to the southbridge chipset.
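If you want to check for this condition on a Linux host, `lspci -vv` reports both the width a device supports (LnkCap) and the width it actually negotiated (LnkSta). Here is a small Python sketch that compares the two; the sample output it parses is fabricated for illustration.

```python
import re

# Fabricated `lspci -vv` excerpt for illustration: a NIC that supports
# x8 but negotiated only x4 (the "x8 connector wired for x4" trap).
LSPCI_SAMPLE = """\
05:00.0 Ethernet controller: sample 10 GbE NIC
        LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s
        LnkSta: Speed 5GT/s, Width x4, TrErr- Train-
"""

def link_widths(lspci_text):
    """Return (capable_width, negotiated_width) parsed from LnkCap/LnkSta."""
    cap = re.search(r"LnkCap:.*Width x(\d+)", lspci_text)
    sta = re.search(r"LnkSta:.*Width x(\d+)", lspci_text)
    return int(cap.group(1)), int(sta.group(1))

cap, sta = link_widths(LSPCI_SAMPLE)
if sta < cap:
    print(f"Warning: device capable of x{cap} but running at x{sta}")
```

On a real system you would feed this the output of `lspci -vv` for the NIC in question instead of the sample text.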

Johnson next transitions into a discussion of VF failover using NIC teaming software. There’s a ton of knowledge disclosed (too much for me to capture; I’ll try to do a separate blog post on it). The key takeaway: don’t use NIC teaming in the guest when using SR-IOV VFs, or else traffic patterns could vary dramatically and create unpredictable results without very careful planning. Johnson also mentions DPDK; see this post for more details.

At this point, Johnson wraps up the session with a summary of key Intel initiatives with regard to networking (optimized drivers and initiators, intelligent use of offloads, everything based on open standards) and then opens the floor to questions.


This is session DATS002, titled “Big Data Meets High Performance Computing.” The speakers are John Hengeveld from Intel and Michael Franklin from UC Berkeley. The title sounds like this might be a marketing session, but I’m hoping that this material will be substantial instead of just fluff.

Hengeveld starts the session by reminding attendees that Big Data is really about the insights that are gained from the data. Big Data isn’t about the data, it’s about the insight and understanding. In this context, discussions of Big Data that talk about millions of Facebook posts, or data posted to Google or Twitter, or clickstream data, often miss the opportunity to discuss the insight that can be gained from the data. Hengeveld describes Big Data as an oilfield–massive, deep, full of “rich” information. Big data technologies are the “mining” technologies that allow you to extract value from that oilfield.

Continuing the oilfield analogy, you need something that can work with the data to produce the insight–something that can refine insight from the raw data. This is HPC–high-performance computing. So what kinds of insights can HPC mine from Big Data? Better medical therapies, improved security through facial recognition, analyzing routes through traffic based on cell phone density, and urban planning and simulation are four examples that Hengeveld shares in the session.

At this point the presentation shifts to Professor Franklin from UC Berkeley, who works at the AMP Lab and is doing some Big Data research. Franklin describes “AMP” as standing for “Algorithms, Machines, & People”–meaning that all three have value in extracting value from Big Data. AMP was launched in February 2011 and is funded by a consortium of organizations, including Intel (hence the connection to IDF). All of the software that AMP is generating is released under the BSD license.

Franklin takes a moment to point out that much of the data that comprises Big Data is generated from online activities, but an even greater amount of data comes from tracking online activities. This could be from systems that track the user experience, logs from systems that support the online activities, health/utilization reports from the underlying infrastructure, etc.

What AMP is building is called BDAS, which stands for Berkeley Data Analysis System. In addition to compute resources, BDAS is also designed to leverage people (crowdsourcing) and data collectors such as public data sources. The goals of BDAS are threefold: 1) to effectively manage cluster resources; 2) efficiently extract value out of big data; and 3) continuously optimize cost, time, and answer quality. In order to support these initiatives, AMP has pushed data quality and answer quality attributes deep into the system. The BDAS components that have been released so far include Mesos (the cluster resource management layer); Spark (an alternative to Hadoop that is targeted at interactive and iterative workloads); and Shark (a port of Hive from Hadoop to Spark; completely compatible with existing Hive queries). Mesos is under the governance of the Apache Foundation; the other projects are available on GitHub (Shark is here; Spark is here). These are licensed with the BSD license, as mentioned earlier.
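As an aside, Spark's central idea (transformations that are recorded lazily and only evaluated when an action forces them) can be illustrated in miniature. This toy class is plain, single-machine Python, not the actual Spark API.

```python
# A toy illustration of Spark's lazy-transformation idea: map/filter
# build up a pipeline without touching the data; nothing runs until an
# action (collect) forces evaluation. This is NOT the Spark API, just a
# conceptual sketch.
class ToyRDD:
    def __init__(self, data, transforms=None):
        self._data = data
        self._transforms = transforms or []

    def map(self, fn):
        # Record the transformation; do no work yet.
        return ToyRDD(self._data, self._transforms + [("map", fn)])

    def filter(self, pred):
        return ToyRDD(self._data, self._transforms + [("filter", pred)])

    def collect(self):
        # Only here does any computation actually happen.
        out = list(self._data)
        for kind, fn in self._transforms:
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

rdd = ToyRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())  # [0, 4, 16, 36, 64]
```

Real Spark adds the parts that matter at scale (partitioned datasets, lineage-based fault recovery, and in-memory caching for the iterative workloads mentioned above), but the lazy pipeline is the core of the programming model.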

Franklin next goes over an application AMP developed called Carat. Carat uses crowdsourced data in conjunction with AWS and Spark to do analysis of applications and battery usage. This helps determine correlations between battery life and application usage. Free iOS and Android apps are available to help contribute to Carat (and gain information from Carat, such as which applications drain your battery most).

At this point, Hengeveld takes the platform again to wrap up the session. He reiterates the need for new big data software stacks (like Spark and Shark) to keep up with growing data volume, velocity, and variety (the “three V’s of Big Data”). The discussion then shifts to a review of the technologies and initiatives that Intel is developing to help with Big Data and HPC (and the combination of the two). One of the areas that Intel is working on is fixing “1960s-era” storage hierarchies that simply don’t support Big Data paradigms. The new approach is object-based storage; Hengeveld uses Lustre as an example. Another area of development is interconnect technology: Intel is working to improve effective bandwidth by lowering latency with Intel True Scale (formerly QLogic) technologies. Finally, Hengeveld believes that the Intel Many Integrated Core (MIC) architecture, now officially known as Xeon Phi, will really help with processing highly parallel data structures.

At this point, Hengeveld wraps up with a call to action for developers and opens the floor to questions.


IDF 2012 Day 1 Summary and Thoughts

I just completed day 1 of Intel Developer Forum (IDF) 2012 in San Francisco. I tried to blog about as much as I possibly could. Here are the links to what I was able to capture during the day:

IDF 2012 Day 1 Keynote:

Next-Generation Microarchitecture, code-named “Haswell”:

Data Plane Virtualization:

ODCA and Cloud Usage Models:

One thing that really stuck out to me was an announcement made during a data center-focused press briefing directly after lunch. While the announcements made during the keynote were nice, they were consumer-oriented; the announcements during the press briefing, on the other hand, were much more focused on the enterprise data center. The item that caught my attention was Intel’s announcement of their Seacliff Trail reference platform.

The Seacliff Trail reference platform is a 1U top-of-rack (ToR) switch sporting 48 10 Gigabit Ethernet (GbE) ports and four 40 GbE ports. The platform supports OpenFlow (has been optimized for OpenFlow, in fact), and has hardware support for overlay encapsulations like VXLAN and NVGRE. Advanced networking technologies like TRILL, Shortest Path Bridging (SPB), Edge Virtual Bridging (EVB), and FCoE are also supported. Switching latency for cut-through switching is about 400 ns. Essentially, this is a reference platform for a pretty full-featured L2/L3 10 GbE/40 GbE ToR switch that can compete reasonably well with the “Tier 1” networking vendors like Cisco, Juniper, Arista, and others—presumably at a far lower cost.
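For context, a little arithmetic on that port layout; the oversubscription figure is my own back-of-the-envelope calculation, assuming all server-facing traffic heads toward the uplinks.

```python
# Back-of-the-envelope math on the Seacliff Trail port layout:
# 48x 10 GbE server-facing ports plus 4x 40 GbE uplinks.
ports_10g, ports_40g = 48, 4
downlink_gbps = ports_10g * 10   # 480 Gbps
uplink_gbps = ports_40g * 40     # 160 Gbps

print(f"Downlink capacity: {downlink_gbps} Gbps")
print(f"Uplink capacity:   {uplink_gbps} Gbps")
# Oversubscription if every downlink drives traffic toward the uplinks:
print(f"Oversubscription:  {downlink_gbps / uplink_gbps}:1")  # 3.0:1
```

A 3:1 oversubscription ratio is fairly typical for a ToR access layer, which is part of why this port mix is a credible competitor to the established vendors' 1U switches.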

Why did this stick out to me? To me, the introduction of mass-produced merchant silicon and an Intel reference platform for this sort of ToR switch sounds the death knell for networking vendors who differentiate themselves through hardware. It’s the same thing that happened in the server hardware space. In the grander scheme of things, a Cisco UCS server is by and large the same as an HP ProLiant server and a Dell PowerEdge server. (Sorry, guys.) Sure, there are minor tweaks here and there from each of the major vendors, but these are mostly inconsequential. Now Intel is preparing to do the same to the 1U ToR network switch space, and it creates a lot of questions in my mind:

  • What does this mean for the hardware-differentiated network vendors of the world? How do they continue to compete in this space? Does all of the innovation shift to software? If so, who among the “top tier” vendors is best poised to take advantage of this shift in development priorities?
  • This switch has built-in support for OpenFlow. What does this mean for the adoption of the OpenFlow protocol? Who will emerge as the dominant supplier of OpenFlow controllers for all these OF-enabled ToR switches?
  • This switch has hardware support for next-generation overlay protocols like VXLAN and NVGRE. What impact will that have on the uptake of these protocols in modern data centers?

Obviously, the answers to many (if not all) of these questions will be determined by the success (or failure) of the Seacliff Trail reference platform and the OEMs/ODMs that take up the reference platform. If Seacliff Trail becomes hugely successful, it could end up having quite an impact.

There are also other discussions that result from the Seacliff Trail announcement around convergence, but I’m going to hold on those discussions for the time being until I’ve had a bit more time to research and reflect. In the meantime, feel free to speak up in the comments below with your thoughts about what Seacliff Trail and Intel’s move into the networking hardware space means to you. Please be sure to provide industry/employer affiliations where appropriate.

(My disclosure: I work for EMC, but I’m attending IDF at the request of Intel. Intel is covering my expenses and provided a pass for the show.)


This is session CLDS001, titled “The Open Data Center Alliance and Developing a Usage Model Roadmap for Cloud Computing.” The presenters are Mario Mueller, VP of IT Infrastructure at BMW Group, and John Pereira, a marketing director at Intel. Both Mueller and Pereira are also involved with the Open Data Center Alliance (ODCA).

The session starts out with some background on ODCA. The ODCA is driven by customer requirements and customer demands, and the requirements are not guided by any vendor bias (at least, that’s the goal). The ODCA acts in three ways: create and deliver open cloud services; collaborate with standards bodies to create open cloud standards; and something else I wasn’t able to catch. (Sorry.)

The ODCA has hundreds of members. There are organizations on the steering committee (like BMW), contributing members (like Nokia or Verizon), solution provider members (like CiRBA, Citrix, Cisco, Red Hat, CA, VMware, Teradata, Hortonworks, and more), and adopter members (too many to list). Intel serves as a technical alliance to the ODCA, but does not have a voting role in the ODCA.

ODCA started in late 2010 with only 5 organizations, quickly growing to 70 members aiming to create user-driven cloud requirements. In 2011, the ODCA released its first set of user-driven requirements (focused on security), and membership increased significantly. In 2012, the ODCA held Forecast (tied in with Cloud Expo) and the first solutions provider summit, and more usage models were released.

The lifecycle for usage models:

  1. First, the ODCA defines usage models (customer voice).
  2. Next, align the SP solutions to ODCA (industry solutions).
  3. Third, members start to adopt solutions (initial adoption).
  4. ODCA shares the results of customer adoption (scale out).
  5. Usage models are evolved based on the results of customer adoption (learn).

Pereira next takes the audience through a sample usage model, this one focused on security monitoring. Following that, Pereira matches up various Intel technologies (he does work for Intel, after all) to various ODCA initiatives.

At this point, Mueller (with BMW Group) takes over, and starts his section with a brief video. (Unfortunately, it’s mostly a BMW commercial.) As a result of BMW’s adoption of ODCA-related initiatives, Mueller reports the following results (among other things):

  • 99.95% IT availability in plants
  • Consolidation from 25 data centers to 9 data centers
  • Energy savings of about 4900 MWh per year
  • 53% reduction of the highest risk segment
  • Annual 20% reduction of critical incidents with business impact
  • 55% of the IT budget is spent on developing and enhancing IT solutions (only 45% for operations)

Even with these results, BMW is still seeking to improve uptime, self-service, automation, and flexibility/elasticity.

Naturally, BMW faces a number of challenges:

  • Lock-in with proprietary solutions can happen quickly; must be avoided
  • Operational reservations (fear of losing control)
  • Licensing and license management
  • Migration scenarios

BMW uses/used ODCA usage models in their datacenter/cloud design, and references usage models in the procurement process. Mueller states that BMW is “100% committed to the goals of the ODCA”.

Which usage models has BMW adopted? BMW is using material from both the operations usage model (service catalog, standard unit of measure) and the technology usage model (carbon footprint, VM interoperability, long-distance workload migration). Repeating an earlier announcement, Mueller talks about BMW's 100% renewable energy-based datacenter in Iceland. This datacenter has a PUE of <1.2 and produces zero carbon dioxide emissions.
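
For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is simply total facility energy divided by IT equipment energy, so a PUE under 1.2 means less than 20% overhead for cooling, power distribution, and so on. A quick sketch with illustrative numbers (BMW's actual loads are not public):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only -- not BMW's actual figures.
print(pue(1150, 1000))  # 1.15, i.e. under the <1.2 figure cited above
```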

Mueller now shifts his discussion to BMW ConnectedDrive, which now has more than 1 million cars delivering data. This will generate enormous amounts of data and lots of application requests. This service is something that must be built to exacting levels of performance and availability, given the visibility of the service to the end customers of BMW (the car owners).

Mueller ends the session with a call to join ODCA and help “shape the future,” and then opens the floor to questions from the attendees.


This is session COMS002, titled “Next Generation Cloud Infrastructure with Data Plane Virtualization.” The speaker is Edwin Verplanke, a System Architect with Intel.

Verplanke believes that DPDK (Data Plane Development Kit) and virtualization are key to virtualizing workloads that move around lots and lots of packets, such as firewalls, routers, and other similar functions. Late in the session Verplanke will discuss some Open vSwitch optimizations.

Verplanke first goes over some of the challenges that are driving the industry to look at data plane and control plane virtualization solutions in next-generation infrastructure. Next, he discusses the evolution of network devices so far. Devices first started as tightly-coupled hardware and software solutions. In recent years, we’ve seen more devices running off-the-shelf software (like Linux). The future, Verplanke believes, is fully virtualized network devices.

Verplanke describes three use cases for virtualized communications platforms: security appliance virtualization, router and edge device virtualization, and service delivery platform virtualization.

Each of these use cases has some challenges. For the security appliance, all traffic is actually intercepted by the hypervisor, which will generate context switches for every I/O packet received. This will drive down performance. This is quite different from virtualizing an endpoint device.

When virtualizing router and edge devices, not only do you encounter the same I/O performance issue as with security appliances, but you also need very efficient intra-VM communication to chain multiple networking services (like SSL/TLS or IPsec).

The third use case, virtualizing a service delivery platform, needs an efficient programmable interconnect for deploying new services. Is the Linux bridge efficient enough? Intel is looking at Open vSwitch as a way to address that; more on that later. This is similar to the issue described for virtualizing security appliances.

These challenges can be summarized as:

  1. I/O-intensive application virtualization
  2. Intra-VM communication
  3. Efficient VMM softswitch implementation

For challenge #1, we’ll look at data plane virtualization. For #2, we’ll examine the Intel DPDK. And for #3, we’ll examine some Open vSwitch optimizations.

Intel DPDK is a standard set of libraries to move packets around a system. The DPDK is BSD licensed and source code is available from Intel. The DPDK contains data plane libraries, optimized NIC poll mode drivers, a runtime environment, a new queuing environment for the Linux kernel, and an Environment Abstraction Layer (EAL) that automatically optimizes for various Intel architectures (Atom, Core, Xeon).
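
The poll mode drivers are the heart of this: rather than taking an interrupt per packet, the application spins on the NIC's receive ring and drains packets in bursts. A conceptual Python sketch of that pattern (this is not the DPDK C API; `PollModeNic` and `rx_burst` are stand-in names for illustration):

```python
from collections import deque

class PollModeNic:
    """Toy stand-in for a NIC receive ring: a poll mode driver reads it
    in bursts instead of waiting for a per-packet interrupt."""
    def __init__(self, packets):
        self.ring = deque(packets)

    def rx_burst(self, max_pkts):
        # Drain up to max_pkts in one call -- the burst amortizes
        # per-call overhead across many packets.
        burst = []
        while self.ring and len(burst) < max_pkts:
            burst.append(self.ring.popleft())
        return burst

nic = PollModeNic([f"pkt{i}" for i in range(10)])
received = []
while True:  # busy-poll loop: spin on the ring instead of sleeping on an interrupt
    burst = nic.rx_burst(4)
    if not burst:
        break
    received.extend(burst)
print(len(received))  # 10
```

The trade-off is that the polling core runs at 100% utilization even when idle, which is acceptable for dedicated data plane cores but not for general-purpose workloads.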

Verplanke next reviews some of the I/O optimizations and architecture of the Xeon E5 2600 series platform. These optimizations include things like Data Direct I/O (DDIO), VT-d, QPI, and QuickAssist (this helps with pattern matching, crypto, and compression).

The Intel hardware virtualization assists (things like Extended Page Tables, large-page VT-d support, and reduced context switch latency) also greatly help when virtualizing communications-related infrastructure. EPT and VT-d large page support are particularly important. (Note from my earlier Haswell session blog that Intel is further reducing virtualization-related context switch latency.)

To help with intra-VM communication, Intel DPDK offers several benefits. First, DPDK provides optimized pass-through support. Second, DPDK offers SR-IOV support and allows L2 switching in hardware on Intel’s network interface cards (estimated to be 5-6x more performant than the soft switch). Third, DPDK provides optimized vNIC drivers for Xen, KVM, and VMware.

Verplanke shows off how the SR-IOV virtual function (VF) support works with an Intel 82599 10 Gigabit Ethernet NIC.

Finally, Verplanke discusses some Open vSwitch optimizations. The current Open vSwitch architecture is fine for endpoint virtualization, but not for network data plane virtualization. Intel has been working on replacing the VirtIO drivers with DPDK-enabled VirtIO drivers, and on using DPDK to replace the Linux bridging utilities with a DPDK-enabled forwarding application. The DPDK-enabled forwarding application performs "zero copy" forwarding, reducing latency and processing load when forwarding packets. Intel is also creating shims between DPDK and Open vSwitch, so that an OVS controller can update Open vSwitch, which can then update the DPDK forwarding app to manipulate its forwarding tables.
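
The zero-copy idea can be illustrated in miniature: slicing a Python `memoryview` hands out a view of the same underlying buffer, whereas slicing `bytes` allocates a copy. (This is purely conceptual; DPDK achieves zero copy with shared, DMA-capable memory rings, not `memoryview`.)

```python
frame = bytearray(b"\x00" * 14 + b"payload")  # 14-byte Ethernet header + payload

# Copying path: slicing a bytes object allocates and copies a new buffer.
payload_copy = bytes(frame)[14:]

# Zero-copy path: a memoryview slice references the original buffer.
view = memoryview(frame)[14:]

frame[14:17] = b"PAY"  # mutate the underlying buffer in place
print(bytes(view))     # the view sees the change
print(payload_copy)    # the copy does not
```

Every copy avoided is memory bandwidth and CPU cycles returned to packet processing, which is why it matters so much at data plane packet rates.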

At this point, Verplanke summarizes what he's discussed in the session and describes the benefits of DPDK for data plane virtualization.


This is session SPCS001, titled "Technology Insight: Intel Next Generation Microarchitecture Code Name Haswell". The speakers are Tom Piazza, Hong Jiang, Per Hammarlund, and Ronak Singhal.

Haswell is a “tock” as opposed to a “tick” (referring to Intel’s “tick/tock” release cycle); this means it is a significant change at the platform and microarchitecture levels. It is a 22nm platform (Ivy Bridge was also 22nm). Haswell will retain key features from the Sandy Bridge/Ivy Bridge platform, like Hyper-Threading, Turbo Boost, and Ring Interconnect.

A key philosophy for the Haswell microarchitecture is the use of a "converged core": a single microarchitecture that scales from tablets all the way to servers in the datacenter. That might seem odd, but the power advantages present in Haswell are just as applicable to tablets and mobile devices as they are to servers running tens (hundreds?) of cores in the data center.

Major focus areas for Haswell include performance improvements (not only for existing “legacy” code but also for new code), modularity, and power innovations.

With regard to modularity, Haswell enables a variety of permutations of core count, cache size, and other variables. This enables more flexibility by Intel’s OEMs in delivering Haswell to a variety of platforms.

In the area of power innovations, Haswell still uses the S0 and S3/4 (active and sleep, respectively) power states. Intel is working to reduce overall power usage in S0, and working to reduce power usage and resume time for S3/4. Haswell introduces S0ix, which is “active/idle” state, which provides dramatically reduced power usage and dramatically reduced resume time (from multiple seconds to hundreds of milliseconds).

As the presenters went into performance improvements, the presentation got extremely technical. While it probably made sense to a developer (the target audience at IDF), much of it did not make sense to me. I've included the information below in bulleted format for completeness.

  • Increased buffer sizes to allow for greater parallelism to be discovered in code execution
  • Enhancements in branch prediction
  • The addition of two more operations per cycle (Nehalem/Sandy Bridge could do 6 operations per cycle; Haswell can perform 8 operations per cycle)
  • Doubling of floating point operations per cycle through the addition of two Fused Multiply-Add (FMA) operations
  • Reduced virtualization latencies (no additional details provided)
  • A new gather instruction that allows the system to read multiple locations in memory in one operation
  • Introduction of AVX2 instruction set to further improve integer performance and vectorization
  • Improved performance in bit manipulation operations; this should have an impact on cipher/encryption/decryption operations
  • Introduction of TSX (Transactional Synchronization Extensions) to help with creating software that has greater parallelism
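
The FMA claim is easy to sanity-check with back-of-the-envelope arithmetic. These are my own illustrative numbers, assuming 256-bit AVX vectors of single-precision floats and two FMA units, which is how the doubling is usually explained:

```python
LANES_SP = 256 // 32  # 8 single-precision lanes in a 256-bit AVX register

# Sandy Bridge: one vector multiply plus one vector add issued per cycle.
sandy_bridge_flops = LANES_SP * 2   # 16 SP FLOPs/cycle

# Haswell: two fused multiply-add units; each FMA counts as 2 FLOPs per lane.
haswell_flops = 2 * LANES_SP * 2    # 32 SP FLOPs/cycle

print(haswell_flops / sandy_bridge_flops)  # 2.0 -- the doubling cited above
```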

The session next transitioned into discussions of improvements in graphics and media performance. Improved modularity in the graphics core allows for more "scale-out" performance; this is responsible for the 2x reduction in power usage at matching graphics performance with Haswell. (This was part of Perlmutter's keynote.) With regard to media performance, Haswell introduces hardware-based SVC (Scalable Video Coding) and several hardware codecs, including an MPEG codec. Haswell also improves the Video Quality Engine (VQE), which supports an extensive suite of video processing functions. The end result is higher video quality at lower power usage.

The remainder of the session focused on specific power improvements in the media and graphics space, then closed with a summary of the improvements in the Haswell microarchitecture.


IDF 2012 Keynote, Day 1

This is a liveblog of the day 1 keynote at Intel Developer Forum (IDF) 2012 at Moscone West in San Francisco, CA. This is my first time attending IDF, and I appreciate the invitation to attend from Intel. (Disclosure: My travel expenses are being reimbursed and I was given a pass to attend, but I am not receiving any other form of compensation.)

Prior to the start of the keynote, they show a video asking "What misconception about engineers bothers you most?" It's a collection of snippets of interviews with various people at the show (probably from last year). It's an interesting and amusing video. According to the people in the video, the most common misconception is that engineers don't know how to have fun.

At 9:01, David (Dadi) Perlmutter takes the stage after a short video about Intel. Perlmutter recognizes the grave importance of today's date (9/11), something I have to give him credit for. I fear that many organizations would not have taken time out of their limited schedule to do so, and I commend Intel.

This year marks the 15th anniversary of IDF, which was first held in 1997. Today Perlmutter's discussion will focus on "reinventing computing" (a theme he admits will come up again, and not for the last time). Tomorrow's keynote will focus on security, and Thursday's keynote is about connecting to the future.

Perlmutter states that “reinventing computing” isn’t just about Intel; it’s about working in collaboration with Intel’s partners and Intel’s developers to “shape the future.” He shows off two hardware samples: the ultra-small Medfield system-on-a-chip (SoC) and the much larger Xeon Phi high-performance computing (HPC) platform. However, it’s not just about hardware–it’s also about the software.

According to Intel and Perlmutter, “data creates opportunities”. There are opportunities for creating digital data, storing digital data, and analyzing digital data. All this leads to cloud and big data.

Perlmutter now shifts from a broad look at the driving factors in the industry to a more specific look at the data center. And while Intel is involved in the data center–both directly and through a wide array of partners–Perlmutter feels the "real revolution" is in the transformation of personal computing. This, naturally, leads to a discussion of Intel's Ultrabooks, now equipped with Intel's 22nm 3rd generation Intel Core processors. He then shows off several different form factors from various Intel OEMs (all of them running Windows 8): a tablet/slate, a detachable, the traditional notebook/clamshell, and a convertible.

Going back to his earlier statement about the importance of software, Perlmutter now talks about how software features like sensors, facial recognition, instant on, responsive voice, and others will help enable new experiences in the personal computing arena. This also includes new, more “natural” and “intuitive” computing that employs voice and touch interfaces.

Next up is a demonstration of some new voice interface capabilities being jointly developed by Intel and Nuance. The technology and software demonstrated is said to be available in beta form in Q4 of this year.

Following that demonstration, Perlmutter demonstrates some new technologies using gestures. Working with Creative and SoftKinetic, he shows off a few examples of how 3-D cameras and gesture support can enable new ways of interacting with our computers.

At 9:28 AM, Gary Flood of MasterCard joins Perlmutter on stage to discuss how Intel can make the e-commerce experience better for both users and merchants. According to Flood, e-commerce needs to be secure, non-intrusive, seamless, and fluid for both consumers and merchants. That leads to a discussion of MasterCard's PayPass wallet services. Following that is a demonstration of NFC (Near-Field Communications) sensors on next-generation Ultrabooks. This demo incorporates Intel Identity Protection Technology (IPT) to provide even more security and to associate the user with the endpoint (an Ultrabook, in this case).

The next demonstration shows a couple of different applications running on both Atom and Core CPUs; Perlmutter uses a couple of Windows 8 applications as his example.

Perlmutter now introduces "Haswell," Intel's 22nm 4th generation Intel Core processor, designed with mobility in mind and intended for use in devices from tablets and Ultrabooks all the way up to full-size workstations. He demonstrates graphics performance between the current-generation Core CPU and the next-generation Core CPU. The graphics performance of the next-generation CPU is significantly better, as one would expect.

Perlmutter now reviews the five Intel-based smartphones that are currently available on the market, and he discusses applying the same innovations shown earlier in the mobility context to the all-in-one (AIO) form factor. Perlmutter also shows off a Coca-Cola intelligent vending machine that is powered by an Intel Core i7 CPU.

The Intel vision is: “This decade we will create and extend computing technology to connect and enrich the lives of every person on earth.” The keynote ends with a video that talks about how people use Intel technologies to help solve the problems that face humanity.

And with that, the keynote concludes.

