Hardware


Technology and Travel

Cody Bunch recently posted a quick round-up of what he carries when traveling, and just for fun I thought I’d do the same. Like Cody, I don’t know that I would consider myself a road warrior, but I have traveled a pretty fair amount. Here’s what I’m currently carrying when I hit the road:

  • Light laptop and tablet: After years of carrying around a 15″ MacBook Pro, then going down to a 13″ MacBook Pro, I have to say I’m pretty happy with the 13″ MacBook Air that I’m carrying now. Weight really does make a difference. I’m still toting the full-size iPad, but will probably switch to an iPad mini later in the year to save a bit more weight.
  • Bag: I settled on the Timbuk2 Commute messenger bag (see my write-up) and I’m quite pleased with it. A good bag makes a big difference when you’re mobile.
  • Backup battery: I’m carrying the NewTrent PowerPak 10.0 (NT100H). It may not be the best product out there, but it’s worked pretty well for me. It’s not too heavy and not too big, and will charge both phones and tablets.
  • Noise-canceling earphones: The Bose QC20 earphones (in-ear) are awesome. Naturally they let in a bit more noise than the bigger over-ear QC15 headphones, but the added noise is worth the tremendous decrease in size and weight.

On the software side, I’ll definitely echo Cody’s recommendation of Little Snitch; it’s an excellent product that I’ve used for years. You might also consider enabling the built-in firewall (see this write-up for enabling pf on OS X Mountain Lion; I haven’t tried it on Mavericks yet) for an added layer of network protection.

What about you, other road warriors out there? What are you carrying these days?


Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will eventually go away.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.) A rough sketch of the final step (uploading the finished image to Glance) appears after this list.
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources, you’re bound to find something useful there.
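As a follow-up to the Windows image item above, here’s a minimal sketch of what the final upload step might look like using python-glanceclient (the v1 API of that era). The endpoint, token, image name, and file name are all placeholders, not values from my lab.

```python
# Hypothetical example: uploading a finished Windows qcow2 image to Glance.
# The endpoint, token, and file name below are placeholders.
from glanceclient import Client

glance = Client('1', endpoint='http://controller:9292', token='<auth-token>')

with open('windows2008r2.qcow2', 'rb') as image_data:
    image = glance.images.create(
        name='windows-2008r2',
        disk_format='qcow2',        # the format produced by qemu-img convert
        container_format='bare',
        is_public=False,
        data=image_data,
    )

print('Uploaded image %s (status: %s)' % (image.id, image.status))
```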

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution back then, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Marget has a nice dissection of how bash completion works, particularly with regard to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about in 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later) recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here are a couple of quick walk-throughs on installing OpenStack (first one, second one). They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I want to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.

Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—this is a KVM guest running CentOS 6.3 and the Floodlight open source controller.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options“, Brent Salisbury—a rock star new breed network engineer emerging in the new world of SDN—discusses some practical deployment strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.

Servers/Hardware

  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)

Security

Nothing this time around, but I’ll keep my eyes open for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, and so it’s hard to remember the specific command and/or syntax when you do need one of these commands.
  • I wouldn’t have initially thought to compare Docker and Chef, but since I’m not an expert in either technology it could just be my limited understanding. However, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusion that it is entirely possible we’ll see Docker and Chef being used together. However, as I stated, I’m not an expert in either technology, so my view may be incorrect. (I reserve the right to revise my view in the future.)

Storage

  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some ideas for enhancing FC error detection and notification with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.

Virtualization

That’s it for this time around, but feel free to continue to conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!


IDF 2013 Summary and Thoughts

I’m back home in Denver after spending a few days in San Francisco at Intel Developer Forum (IDF) 2013, so I thought I’d take a few minutes to sit down and share a summary of the event and my thoughts.

First, here are links to all the liveblogging I did while at the conference:

IDF 2013 Keynote, Day 1:
http://blog.scottlowe.org/2013/09/10/idf-2013-keynote-day-1/

Enhancing OpenStack with Intel Technologies for Public, Private, and Hybrid Cloud:
http://blog.scottlowe.org/2013/09/10/idf-2013-enhancing-openstack-with-intel-technologies/

IDF 2013 Keynote, Day 2:
http://blog.scottlowe.org/2013/09/11/idf-2013-keynote-day-2/

Rack Scale Architecture for Cloud:
http://blog.scottlowe.org/2013/09/11/idf-2013-rack-scale-architecture-for-cloud/

Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI):
http://blog.scottlowe.org/2013/09/11/idf-2013-virtualizing-the-network-to-enable-sdi/

The Future of Software-Defined Networking with the Intel Open Network Platform Switch Reference Design:
http://blog.scottlowe.org/2013/09/12/idf-2013-future-of-sdn-with-the-intel-onp-switch-reference-design/

Enabling Network Function Virtualization and Software Defined Networking with the Intel Open Network Platform Server Reference Architecture Design:
http://blog.scottlowe.org/2013/09/12/idf-2013-enabling-nfvsdn-with-intel-onp-server-reference-design/

Overall, I enjoyed the event and found it quite useful. It appears to me that Intel has a three-pronged strategy:

  1. Expand the footprint of IA (Intel Architecture, what everyone else calls x86, x86_64, or x64) CPUs by moving into adjacent markets
  2. Extend Intel’s reach with non-IA hardware (FM6000 series, QuickAssist Server Acceleration Card [QASAC])
  3. Use software, especially open source software, to drive more development toward Intel-based solutions

I’m sure there’s probably more, but those are the ones that really stand out. You can see some evidence of these moves:

  • Intel’s Open Network Platform (ONP) Switch reference design (aka “Seacliff Trail”) helps drive #1 and #2; it contains an IA CPU for programmability and leverages the FM6000 series for high-speed networking functionality
  • Intel’s ONP Server reference design (“Sunrise Trail”) pushes Intel-based servers into markets they haven’t traditionally seen (telco/networking roles), especially in conjunction with strategy #3 above (as shown by the strong use of Intel DPDK to optimize Sunrise Trail for SDN/NFV applications)
  • Intel’s Avoton and newly-announced Quark families push Intel into new markets (micro-servers, tablets, phones, sensors) where they haven’t traditionally been a major player

All in all, it will be very interesting to see how things play out. As others have said, it’s definitely an interesting time to be in technology.

As with other relevant industry conferences (like VMworld, for example), one of the values of IDF is engaging in conversations with other professionals. I had a few of these conversations while at IDF:

  • I spent some time talking with an Intel employee about Intel’s Cache Acceleration Software (CAS), which came out of the acquisition of a company called Nevex. I wasn’t even aware that Intel was doing cache acceleration. Intel CAS operates at the operating system (OS) level, serving as a file-level cache on Windows and a block-level cache (with file awareness) on Linux. It also supports caching to a remote SSD (in a SAN, for example) so that you can still use vMotion in vSphere environments. In the near future, they’re looking at supporting cache clustering with a cache coherence algorithm that would allow you to use SSDs/flash from multiple servers as a single cache.
  • I had a brief conversation with an Intel engineer who specialized in SSDs (didn’t get to hit him up for some free Intel DC S3700s, though). We touched on a number of different areas, but one interesting statistic that came out of the conversation was the reality behind “running out of writes” on an SSD. (This refers to the number of times you can write data to an SSD, which is made out to be a pretty big deal by some folks.) He spoke of a test that wrote 45GB an hour to an SSD; even at that rate, it would have taken multiple decades of use before the SSD was unable to perform any more writes. (A quick back-of-the-envelope check of that claim follows this list.)
  • Finally, I spent some time chatting with my friend Brian Johnson, who works in the networking group at Intel. There’s lots of cool stuff going on there, but—unfortunately—I can’t really discuss most of what he shared with me. Sorry folks! We did have an interesting discussion around the user experience, personal data, mobility, and ubiquitous connectivity. Who knows—maybe the next great startup will emerge out of our discussion! :-)
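To put the “running out of writes” point above in rough perspective, here’s a quick back-of-envelope calculation. The 45GB/hour figure comes from the conversation; the endurance rating is an assumed round number (in the general ballpark of high-endurance data center SSDs of that era), not an Intel specification.

```python
# Back-of-envelope check on the "running out of writes" concern.
# WRITE_RATE is the figure from the conversation; the endurance rating
# is an assumed value for illustration, not an Intel spec.
WRITE_RATE_GB_PER_HOUR = 45
ASSUMED_ENDURANCE_PB = 14

gb_written_per_year = WRITE_RATE_GB_PER_HOUR * 24 * 365          # ~394,000 GB/year
years_to_wear_out = (ASSUMED_ENDURANCE_PB * 1000000.0) / gb_written_per_year

print('Writes per year: ~%.0f TB' % (gb_written_per_year / 1000.0))
print('Years to exhaust the assumed rating: ~%.0f' % years_to_wear_out)
```

The exact answer obviously depends on the drive’s actual rating, but the arithmetic shows why sustained write rates in this range aren’t much of an endurance concern for high-endurance drives.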

Anyway, that’s it for my IDF 2013 coverage. I hope that some of the information I shared proves useful to you in some way. As usual, courteous comments (with vendor disclosures, where needful) are always welcome.

(Disclosure: I work for VMware, but was invited to attend IDF at Intel’s expense.)


This is session COMS003, titled “Enabling Network Function Virtualization and Software Defined Networking with the Intel Open Network Platform Server Reference Architecture Design.” (How’s that for a mouthful!) The speakers are Frank Schapfel, Senior Product Line Manager with Intel, and Brian Skerry, Open Networks System Architect with Intel. This session is slightly related to IDF 2013 session COMS002, which focused more on the Intel ONP Switch reference design.

Frank kicks the session off with a quick review of the agenda, then dives right into the content. He starts by reviewing what SDN and NFV are; I won’t repeat all that here since it’s already been covered multiple times. (See the liveblog from the COMS002 session for more details, if you need them.)

Next, Frank moves into Intel’s role in enabling SDN/NFV. The key takeaway is that Intel’s CPUs are gradually “eating away” at typically-proprietary functions like packet processing and signal processing. With these functions now possible in x86_64 CPUs, they can be moved into a VM to help achieve NFV. (It could be argued that full machine virtualization might not be the most efficient way of handling NFV. Lightweight containers might be more efficient.) According to Frank, once NFV has been addressed, this enables SDN, which he describes as greater automation across the network via a separated control plane. Naturally, a series of Intel ingredients underpin this: Intel CPUs, Intel NICs, switch silicon (the FM6700), Intel DPDK, and Open Networking Software.

This leads Frank into a discussion of how Intel will address this market moving forward. This year, Intel has the Intel platform for Communications Infrastructure, leveraging Xeon and Avoton CPUs. Next year and in 2015, you can expect Intel to leverage the Haswell microarchitecture to refresh this platform. Beyond that, future microarchitectures will deliver more capabilities and more capacity that can be brought to bear on the SDN/NFV market.

At this point, Frank transitions into a more detailed and specific discussion of the ONP Server reference platform (code-named “Sunrise Trail”). The platform leverages Xeon E5-2600 v2 CPUs, plus a host of other Intel technologies (SR-IOV and packet acceleration via DPDK). Of particular note is the use of the Intel QuickAssist Services Acceleration Card (QASAC), which has its own PCI connection to the CPU cores and is designed to help accelerate tasks like encryption and compression. QASAC can offer up to 50Gbps of encryption/compression acceleration, with higher levels available via additional PCIe Gen 3 add-in cards.

Both Seacliff Trail (ONP Switch) and Sunrise Trail (ONP Server) will evolve over time as rack-scale architecture (RSA) matures and evolves as well. Eventually, Seacliff Trail and Sunrise Trail could merge as part of RSA (referred to as Intel ONP for Hybrid Data Plane). Note that the merging of ONP Server and ONP Switch is something that I postulated last year after IDF 2012.

Sunrise Trail will leverage one of a number of potential enterprise Linux distributions, integration with OpenStack, and the Intel DPDK for packet acceleration; various hypervisors will be supported (KVM and Hyper-V), along with support for OpenFlow (which will undoubtedly come via Open vSwitch [OVS]). For telco environments, ONP Server will likely leverage Wind River Systems’ real-time Linux distribution along with other components.

Frank now turns it over to Brian, who will discuss some of the software pieces involved in ONP Server. He first shows a high-level architecture (I tweeted a picture of this separately). Brian notes that this architecture does not map directly to the ETSI NFV architecture.

Some key challenges that this architecture faces:

  • Integration of legacy OSS/BSS systems
  • Element management needs to work in a virtualized environment
  • Infrastructure orchestration such as OpenStack has industry momentum, but challenges still remain
  • SDN controller architectures and the marketplace are still evolving
  • Service orchestration is being addressed through a number of organizations with lots of opportunity for commercial and open source solutions

Brian takes a moment to zoom in a bit on OpenStack as an infrastructure orchestrator. He calls out enhanced platform awareness (making OpenStack aware of underlying platform capabilities, such as TCP) and passthrough of PCI devices and VF assignment (when using SR-IOV). He really focuses on platform awareness, which makes sense since Intel needs to differentiate at the platform level.

The discussion now shifts to focus on the software that actually runs inside Sunrise Trail. Brian mentions the importance of an Intel DPDK vSwitch (which is typically a DPDK-accelerated version of OVS). The reason a DPDK-accelerated virtual switch is so important is that the virtual appliances leveraged by NFV will quickly become a bottleneck if the virtual switch isn’t getting accelerated by the underlying hardware. Brian mentions some performance figures: stock OVS gets about 300K small packets per second, but he doesn’t yet provide any DPDK-accelerated numbers. Source code for DPDK acceleration is available at http://01.org, although it is missing some features (it is not yet feature-comparable with stock OVS). Brian issues a call for contributors to their effort, but I wonder why they don’t just contribute their effort to stock OVS and leverage that community.

DPDK also enables other functions like deep packet inspection (DPI) and Quality of Service (QoS) fine-grained control.

Brian now turns it back over to Frank, who points attendees to where they can get more information on Intel’s SDN/NFV efforts. He highlights Intel’s Network Builders program, provides a summary of the key points from the session, and then opens for questions and answers.


This is IDF 2013 session CLDS001, titled “Rack Scale Architecture for Cloud.” The speaker is Mohan Kumar, a Sr Principal Engineer with Intel. Kumar works in the Server Platforms Group.

Kumar notes that Krzanich mentioned rack-scale architecture (RSA, not to be confused with the security company of the same name) as one of three pillars of accelerating the data center, and this session will dive a bit deeper into rack-scale architecture. He’ll start with the motivation for RSA, then provide an overview of what RSA is and how it works.

The motivation for developing RSA is really rooted in the vision of the “Internet of Things,” which Intel estimates will reach approximately 30 billion devices by 2020. This means there will be tremendous need for servers in data centers to support this vast number of connected devices. However, the current architectures aren’t sufficient. Resources are locked into individual servers, making it more difficult to shift resources as workloads change and adapt. (I’d contend that virtualization helps address most of this concern.) Thermal inefficiencies and a lack of service-based (software-defined?) configurability of resources are other motivations for RSA. (Again, I’d contend that the configurability concern is mitigated somewhat by the extensive use of virtualization.) Finally, individual resources within a server can’t be upgraded. To address these concerns, Intel believes that a rack-level architecture is needed.

So where does RSA stand today? Today, RSA can offer shared power (a single power bus instead of multiple power supplies in each server), shared cooling, and rack management (more intelligent in the rack itself). In the near future, Intel wants RSA to include a “rack fabric,” using optical interconnects that allows for a much greater level of disaggregation and much greater modularity. The ultimate goal of RSA is completely modularized servers with pooled CPUs, pooled RAM, pooled I/O, and pooled storage. This is going to be a key point of Kumar’s presentation.

So what are the key Intel technologies involved in RSA?

  • Reference architectures and orchestration software
  • Intel’s Open Network Platform (ONP)
  • Storage technologies, like PCIe SSD and caching
  • Photonics and switch fabrics
  • CPUs and silicon (Atom, Xeon, Quark?)

Intel wants RSA to align with Open Compute Project (OCP) efforts as well. Kumar also mentions something called Scorpio, which I hadn’t heard of before (it is similar to OCP in some way).

In looking at how these components come together in RSA, Intel estimates that cloud operators would see the following benefits:

  • 3x reduction in cable requirements using silicon photonics
  • 2.5x improvement in network uplinks
  • 25x improvement in network downlinks
  • Up to 1.5x improvement in server/rack density
  • Up to 6x reduction in power provisioning

Most of these improvements come from the use of silicon photonics, according to Kumar.

Looking ahead into the future of RSA, what are some of the key problems that remain to be solved? Kumar points to the following challenges:

  1. There is no service-based configurability of memory. (What about virtualization here? I could see this argument for the virtualization hosts, but that scale will be vastly smaller than the scale for the VMs/instances themselves.) Kumar believes that pooled memory is the answer to this challenge.
  2. Similarly, there is no service-based configurability for direct-attached storage. (My same comments regarding the pervasive use of virtualization applies here as well.) Kumar’s response to this is a high-density Ethernet JBOD he calls a PBOD (pooled bunch of disks).

With RSA, a rack becomes the unit of scaling in a cloud environment. The management domain will aggregate multiple racks together in a pod. Looking specifically at the OCP implementation, within a rack the sub-unit of scaling is a tray. A tray consists of multiple nodes. The tray contains compute, memory, storage, and networking modules; a node is a compute module.

Diving a bit deeper on this idea, server CPU(s) will be connected to resource modules (memory, storage, networking) and will be managed by a tray manager. All this occurs within a tray. Between trays, Intel would look at using silicon photonics (or possibly a ToR switch); from there, the uplink goes out to the end-of-row (EoR) switch.

The resource modules (memory, storage, networking) are referred to as RSA pooled functions. A pooled memory controller would manage pooled memory. Pooled networking would be SDN-ready network connectivity. Pooled storage is the PBOD (Ethernet-connected JBOD, not using iSCSI but over straight Ethernet—are we talking ATA over Ethernet? Something else entirely?). The tray manager, mentioned earlier, ensures that resources are properly allocated and enforced.

Next, Kumar shifts his attention to pooled memory in particular. The key motivations for pooled memory include memory sharing, memory disaggregation, and right-sizing memory to workloads running on the node. If you were to also enable memory sharing—assign memory to two nodes at the same time—then you could enable new forms of innovation (think very fast VM migration or tightly-coupled OS clustering). It seems to me that memory sharing would require changes to operating systems and hypervisors in order for it to be supported, though.

Looking closer at how pooled memory works, it requires something called a pooled memory controller, which manages all the centrally pooled RAM. The pooled memory controller is responsible for managing memory partitions and allocation. (Would it be possible for this pooled memory controller to do “memory deduplication” too?) This is the piece that will enable shared memory partitions, but Kumar doesn’t elaborate on changes required to today’s OSes and hypervisors in order to support this functionality.

Kumar next shows a recorded demo of some of the RSA technologies in action.

At this point, Kumar shifts gears to discuss pooled storage in a bit more detail. The motivation for doing pooled storage is similar to the reasons for all of RSA—more flexibility in allocating resources, eliminating bottlenecks, and on-demand allocation.

Intel’s RSA addresses pooled storage through a PBOD, which would be accessed over Ethernet using RDSP (Remote DAS Protocol). Individual compute nodes will use RDSP to communicate with the PBOD controller, which acts like the pooled memory controller in that it handles partitioning storage and allocating storage to the individual compute nodes. Kumar tries to show a recorded demo of pooled storage but runs into a few technical issues.

At this point, Kumar provides a summary of the work that has been done toward RSA, reminds attendees that RSA technologies can be seen in the Technology Showcase, and opens the session up for questions and answers.


IDF 2013: Keynote, Day 2

This is a liveblog of the day 2 keynote at Intel Developer Forum (IDF) 2013 in San Francisco. (Here is the link for the liveblog from the day 1 keynote.)

The keynote starts a few minutes after 9am, following a moment of silence to observe 9/11. Following that, Ulmonth Smith (VP of Sales and Marketing) takes the stage to kick off the keynote. Smith takes a few moments to recount yesterday’s keynote, particularly calling out the Quark announcement. Today’s keynote speakers are Kirk Skaugen, Doug Fisher, and Dr. Hermann Eul. The focus of the keynote is going to be mobility.

The first to take the stage is Doug Fisher, VP and General Manager of the Software and Services Group. Fisher sets the stage for people interacting with multiple devices, and devices that are highly mobile, supported by software and services delivered over a ubiquitous network connection. Mobility isn’t just the device, it isn’t just the software and services, it isn’t just the ecosystem—it’s all of these things. He then introduces Hermann Eul.

Eul takes the stage; he’s the VP and General Manager of the Mobile and Communications Group at Intel. Eul believes that mobility has improved our complex lives in immeasurable ways, though the technology masks much of the complexity that is involved in mobility. He walks through an example of taking a picture of “the most frequently found animal on the Internet—the cat.” Eul walks through the basic components of the mobile platform, which includes not only hardware but also mobile software. Naturally, a great CPU is key to success. This leads Eul into a discussion of the Intel Silvermont core: built with 22nm Tri-Gate transistors, multi-core architecture, 64-bit support, and a wide dynamic power operating range. This leads Eul into today’s announcement: the introduction of the Bay Trail reference platform.

Bay Trail is a mobile computing experience reference architecture. It leverages a range of Intel technologies: next-gen Intel multi-core SoC, Intel HD graphics, on-demand performance with Intel Burst Technology 2.0, and a next-gen programmable ISP (Image Service Processor). Eul then leads into a live demo of a Bay Trail product. It appears it’s running some flavor of Windows. Following that demo, Jerry Shen (CEO of Asus) takes the stage to show off the Asus T100, a Bay Trail-based product that boasts touchscreen IPS display, stereo audio, detachable keyboard dock, and an 11 hour battery life.

Following the Asus demo, Victoria Molina—a fashion industry executive—takes the stage to talk about how technology has/will shape online shopping. Molina takes us through a quasi-live demo about virtual shopping software that leverages 3-D avatars and your personal measurements. As the demo proceeds, they show you a “fit view” that shows how tight or loose the garments will fit. The software also does a “virtual cat walk” that shows how the garments will look as you walk and move around. Following the final Bay Trail demo, Eul wraps up the discussion with a review of some of the OEMs that will be introducing Bay Trail-based products. At this point, he introduces Neil Hand from Dell to introduce his Bay Trail-based product. Hand shows a Windows 8-based 8" tablet from Dell, the start of a new family of products that will be branded Venue.

What’s next after Bay Trail? Eul shares some roadmap plans. Next up is the Merrifield platform, which will increase performance, graphics, and battery life. In 2014 will come Advanced LTE (A-LTE). Farther out is 14nm technology, called Airmont.

The final piece from Eul is a demonstration of Bay Trail and some bracelets that were distributed to the attendees, in which he uses an Intel-based Samsung tablet to control the bracelets, making them change colors, blink, and make patterns.

Now Kirk Skaugen takes the stage. Skaugen is a Senior VP and General Manager of the PC Client Group. He starts his portion of the keynote discussing the introduction of the Ultrabook, and how that form factor has evolved over the last few years to include things like touch support and 2-in-1 form factors. Skaugen takes some time to describe more fully the specifications around 2-in-1 devices, coming from hardware partners like Dell, HP, Lenovo, Panasonic, Sony, and Toshiba. This leads into a demonstration of a variety of 2-in-1 devices: sliders, fold-over designs, detachables (where the keyboard detaches), and “ferris wheel” designs where the screen flips. Now taking the stage is Tami Reller from Microsoft, whose software powered all the 2-in-1 demonstrations that Intel just showed. The keynote sort of digresses into a “Microsoft Q&A” for a few minutes before getting back on track with some Intel announcements.

From a more business-focused perspective, Intel announces 4th generation vPro-enabled Intel Core processors. Location-based services are also being integrated into vPro (one example provided is automatically restricting access to confidential documentation when the user leaves the building). Intel also announced (this week) the Intel SSD Pro 1500. Additionally, Intel is announcing Intel Pro WiDi (Wireless Display) to better integrate wireless projectors. Finally, Intel is working with Cisco to eliminate passwords entirely. They are doing that via Intel Identity Password, which embeds keys into the hardware to enable passwordless VPNs.

Taking the stage now is Mario Müller, VP of IT Infrastructure at BMW. Müller talks about how Intel Atom CPUs are built into BMW cars, especially the new BMW i8 (BMW’s first all-electric car, if I heard correctly). He also refers to some new deployments within BMW that will leverage the new vPro-enabled Intel Core CPUs, many of which will be Ultrabooks. Müller indicates that the 2-in-1 is useful, not for all employees, but certainly for select individuals who need that functionality.

Skaugen now announces Bay Trail M and Bay Trail D reference platforms. While Bay Trail (also referred to as Bay Trail T) is intended for tablets, the M and D platforms will help drive innovation in mobile and desktop form factors. After a quick look at some hardware prototypes, Skaugen takes a moment to look ahead at what Intel will be doing over the next year or so. He shows 30% reductions in power usage coming from Broadwell, which will be the 14nm technology Intel will introduce next year. From there, Skaugen shifts into a discussion of perceptual computing (3D support). He shows off a 3-D camera that can be embedded into the bezel of an ultrabook, then shows a video of kids interacting with a prototype hardware and software combination leveraging Intel’s 3-D/perceptual computing support.

And now Doug Fisher returns to the stage. He starts his portion of the keynote by returning to the Intel-Microsoft partnership and focusing on innovations like fast start, longer battery life, touch- and sensor-awareness, on a highly responsive platform that also offers full compatibility around applications and devices. Part of Fisher’s presentation includes tools for developers to help make their applications aware of the 2-in-1 form factor, so that applications can automatically adjust their behavior and UI based on the form factor of the device on which they’re running.

Intel is also working closely with Google to enhance Android on Intel. This includes work on the Dalvik runtime, optimized drivers and firmware, key kernel contributions, and the NDK app bridging technology that will allow apps developed for other platforms (iOS?) to run on Android. Fisher next introduces Gonzague de Vallois of GameLoft, a game developer. Vallois talks about how they have been developing natively on Intel architecture and shows an example of a game they’ve written running on a Bay Trail T-based platform. The tools, techniques, and contributions that Intel have with Android are also being applied to Chrome OS. Fisher brings Sundar Pichai from Google on to the stage. Pichai is responsible for both Android and Chrome OS, and he talks about the momentum he’s seeing on both platforms.

Fisher says that Intel believes HTML5 to be an ideal mechanism for crossing platform boundaries, and so Intel is announcing a new version of their XDK for HTML5 development. This leads into a demonstration of using the Intel XDK (which stands for “cross-platform development kit”) to build a custom application that runs across multiple platforms. With that, he concludes the general session for day 2.


This is a liveblog of Intel Developer Forum (IDF) 2013 session EDCS003, titled “Enhancing OpenStack with Intel Technologies for Public, Private, and Hybrid Cloud.” The presenters are Girish Gopal and Malini Bhandaru, both with Intel.

Gopal starts off by showing the agenda, which will provide an overview of Intel and OpenStack, and then dive into some specific integrations in the various OpenStack projects. The session will wrap up with a discussion of Intel’s Open IT Cloud, which is based on OpenStack. Intel is a Gold Member of the OpenStack Foundation, has made contributions to a variety of OpenStack projects (tools, features, fixes and optimizations), has built its own OpenStack-based private cloud, and is providing additional information and support via the Intel Cloud Builders program.

Ms. Bhandaru takes over to provide an overview of the OpenStack architecture. (Not surprisingly, they use the diagram prepared by Ken Pepple.) She tells attendees that Intel has contributed bits and pieces to many of the various OpenStack projects. Next, she dives a bit deeper into some OpenStack Compute-specific contributions.

The first contribution she mentions is Trusted Compute Pools (TCP), which was enabled in the Folsom release. TCP relies upon the Trusted Platform Module (TPM), which in turn builds on Intel TXT and Trusted Boot. Together with the Open Attestation (OAT) SDK (available from https://github.com/OpenAttestation/OpenAttestation), Intel has contributed a “Trust Filter” for OpenStack Compute as well as a “Trust Filter UI” for OpenStack Dashboard. These components allow for hypervisor/compute node attestation to ensure that the underlying compute nodes have not been compromised. Users can then request that their instances are scheduled onto trusted nodes.
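To give a sense of the mechanism, here’s a minimal sketch of what a Nova scheduler filter of that vintage looks like. This is an illustration of the filter interface only, not Intel’s actual Trust Filter code; the extra-spec key name and the attestation helper are placeholders.

```python
# Illustrative sketch of a Folsom/Grizzly-era Nova scheduler filter; the
# attestation helper is a stand-in for a call to the Open Attestation (OAT)
# service, and the extra-spec key name is an assumption.
from nova.scheduler import filters


def _host_is_attested_trusted(hostname):
    # Placeholder: query the OAT attestation service for this host.
    return False


class ExampleTrustFilter(filters.BaseHostFilter):
    """Pass only attested hosts when the flavor asks for a trusted host."""

    def host_passes(self, host_state, filter_properties):
        instance_type = filter_properties.get('instance_type', {}) or {}
        extra_specs = instance_type.get('extra_specs', {}) or {}
        if extra_specs.get('trust:trusted_host') != 'trusted':
            return True  # no trust requirement, so any host will do
        return _host_is_attested_trusted(host_state.host)
```

The user-facing side is simply a flavor extra spec that expresses the trust requirement; the filter (together with the attestation service) then places those instances only on compute nodes that have passed attestation.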

Intel has also done work on TCP plus Geo-Tagging. This builds on TCP to enforce policies about where instances are allowed to run. This includes a geo attestation service and Dashboard extensions to support that functionality. This work is not yet complete, but is described in current OpenStack blueprints.

In addition to trust, Intel has done work on security with OpenStack. Intel’s work focuses primarily around key management. Through collaboration with Rackspace, Mirantis, and some others, Intel has proposed a new key management service for OpenStack. This new service would rely upon good random number generation (which Intel strengthened in the Xeon E5 v2 release announced earlier today), secure storage (to encrypt the keys), careful integration with OpenStack Identity (Keystone) for authentication and access policies, extensive logging and auditing, high availability, and a pluggable backend (similar to Cinder/Neutron). This would allow encryption of Swift objects, Glance images, and Cinder volumes. The key manager project is called Barbican (https://github.com/cloudkeep/barbican) and provides integration with OpenStack Identity. In the future, they are looking at creation and certification of public/private key pairs, software support for periodic background tasks, KMIP support, and potential AES-XTS support for enhanced performance. This will also leverage Intel’s AES-NI support in newer CPUs/chipsets.

Intel also helped update the OpenStack Security Guide (http://docs.openstack.org/sec/).

Next, Intel talks about how they have worked to expose hardware features into OpenStack. This would allow for greater flexibility with the Nova scheduler. This involves work in libvirt as well as OpenStack, so that OpenStack can be aware of CPU functionality (which, in turn, might allow cloud providers to charge extra for “premium images” that offer encryption support in hardware). The same goes for exposing PCI Express (PCIe) Accelerator support into OpenStack as well.

Gopal now takes over and moves the discussion into storage in OpenStack. With regard to block storage via Cinder, Intel has incorporated support to filter volumes based on availability zone, capabilities, capacity, and other features so that volumes are allocated more intelligently based on workload and type of service required. By granting greater intelligence to how volumes are allocated, cloud service providers can offer differentiated (read: premium priced) services for block storage. This work is enabled in the Grizzly release.
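As a concrete illustration of how an operator might expose that kind of differentiated block storage, here’s a rough sketch using python-cinderclient. The credentials, type name, and backend extra spec are placeholders; the point is simply that volume types carry the metadata the scheduler uses when placing volumes.

```python
# Hypothetical example: define a "premium" volume type and request a volume
# of that type. Credentials and the extra-spec value are placeholders.
from cinderclient.v1 import client

cinder = client.Client('admin', 'secret', 'demo',
                       'http://controller:5000/v2.0')

# Create a volume type and tie it to a specific (hypothetical) backend.
premium = cinder.volume_types.create('premium')
premium.set_keys({'volume_backend_name': 'fast-ssd-backend'})

# A tenant can then request a 10 GB volume of that type in a given AZ; the
# scheduler filters backends by capability, capacity, and availability zone.
vol = cinder.volumes.create(size=10, volume_type='premium',
                            availability_zone='nova',
                            display_name='premium-vol-01')
print('Created volume %s (status: %s)' % (vol.id, vol.status))
```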

In addition to block storage, many OpenStack environments also leverage Swift for object storage. Intel is focused on enabling erasure coding in Swift, which would reduce storage requirements in Swift deployments. Initially, erasure coding will be used for “cold” objects (objects that aren’t accessed or updated frequently); this helps preserve the service level for “hot” objects. Erasure coding would replace triple replication to reduce storage requirements in the Swift capacity tier. (Note that this is something I also discussed with SwiftStack a couple weeks ago during VMworld.)
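For a rough sense of the capacity savings, here’s a quick comparison of triple replication against one example erasure coding layout. The 10+4 split is purely illustrative; it isn’t necessarily the scheme Intel is proposing.

```python
# Capacity overhead: triple replication vs. an example 10+4 erasure code.
usable_tb = 100.0                       # logical data to be stored

replication_raw = usable_tb * 3         # three full copies -> 300 TB raw
data_frags, parity_frags = 10, 4
ec_raw = usable_tb * (data_frags + parity_frags) / data_frags   # 140 TB raw

print('Triple replication: %.0f TB raw (%.1fx overhead)'
      % (replication_raw, replication_raw / usable_tb))
print('10+4 erasure code:  %.0f TB raw (%.1fx overhead)'
      % (ec_raw, ec_raw / usable_tb))
```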

Intel has also developed something called COSBench, which is an open source tool that can be used to measure cloud object storage performance. COSBench is available at https://github.com/intel-cloud/cosbench.

At this point, Gopal transitions to networking in OpenStack. This discussion focuses primarily around Intel Open Network Platform (ONP). There’s another session that will go deeper on this topic; I expect to attend that session and liveblog it as well.

The networking discussion is very brief; perhaps because there is a dedicated session for that topic. Next up is Intel’s work with OpenStack Data Collection (Ceilometer), which includes work to facilitate the transformation and collection of data from multiple publishers. In addition, Intel is looking at enhanced usage statistics to affect compute scheduling decisions (essentially this is utilization-based scheduling).

Finally, Gopal turns to a discussion of Intel IT Open Cloud, which is a private cloud within Intel. Intel is now at 77% virtualized, with 80% of all new servers being deployed in the cloud. It takes less than an hour to deploy instances. Intel estimates a savings of approximately $21 million so far. Where is Intel IT Open Cloud headed? Intel IT is looking at using all open source software for Intel IT Open Cloud (this implies that it is not built entirely with open source software today). There is another session on Intel IT Open Cloud tomorrow that I will try to attend.

At this point, Gopal summarizes all of the various Intel contributions to OpenStack (I took a picture of this I posted via Twitter) and ends the session.


IDF 2013 Keynote, Day 1

This is a liveblog of the Intel Developer Forum (IDF) Day 1 keynote. I was lucky enough to be invited to attend, as I did last year, and I’ll be liveblogging as many sessions and events as possible.

The keynote starts promptly at 9AM with Ulmont Smith (I didn’t catch his title at Intel). He mentions that this is the 16th year of hosting IDF, and previews some of the things that will be available this year at IDF: 170 technical sessions, poster chats, longer hours in the Technology Showcase, and more engineers on-site than any previous IDF. Ulmont also previews the attendee appreciation party (with Counting Crows) and gives a word of thanks to the IDF sponsors. Ulmont then introduces the new Intel CEO, Brian Krzanich, and the new Intel President, Renée James, who will be the keynote speakers in the day 1 keynote.

Brian Krzanich, the CEO of Intel, now takes the stage. He starts out with discussing what IDF means to him (and what it means to Intel). Krzanich talks about how this is an exciting time in the industry, and he indicates that he’ll lay out Intel’s strategy for how they will succeed in this highly transformative time in the IT industry. The underlying themes driving this transformation are related to the “connectedness” to the user; as computing moves closer to the user, the volume increases. So, in the transition from servers to desktops to notebooks to tablets to phones and next to the “Internet of Things,” the volume of computing increases. Computing is getting more personal and more connected, according to Krzanich.

Krzanich talks about how this migration to the “Internet of Things” drives Intel away from CPU-centric architectures into integrated (i.e., system-centric or System-on-Chip [SoC]) architectures, and he believes this is why Intel will win and drive marketshare in this transition. He talks about Intel’s assets—46K engineers, $10B in R&D, etc.—and how that will help Intel be successful. Intel’s plan is to lead in every segment of computing. That includes servers, desktops, notebooks, tablets, phones, and emerging segments.

With that, Krzanich takes a deeper look at some of these segments. He starts with the data center, and briefly discusses Intel’s leadership in CPUs (from the low-end with Atom and Avoton to the high-end with Xeon E5), server rack-scale architecture, and software-defined networking. He next transitions into a brief discussion of the evolution of the PC. He uses an HP prototype to talk about how notebooks will be fanless, lightweight, with long battery lives and ample computing power.

Not surprisingly, Krzanich next talks about Intel silicon. He announces a notebook PC built on a 14nm SoC, which Intel intends to start shipping by the end of the year (the 14nm SoC, not the actual products). “14nm is coming to a PC near you!” he says. Next he focuses on what he calls the “2-in-1”; these are the notebook/tablet convertible devices that easily switch between form factors. Krzanich believes this is where the PC is headed.

But what about tablets? He picks up a Lenovo-branded Intel-based tablet to show that Intel-based tablets are available today. He points out that users can choose between Windows and Android, but of course there is no mention of iOS-based devices. Intel is targeting a sub-$100 system price point for the 2013 holiday season. This naturally leads into a discussion of what Intel’s doing in the phone market, and he shows off the first 22nm SoC-based phone and discusses extensive LTE support—both for data and voice over LTE. Krzanich also shows how LTE Advanced with carrier aggregation allows speeds up to 70Mbps (with 150Mbps quite possible).

Krzanich next moves to the mythical “Internet of Things.” Data volume, battery life, and security are all very important. This leads him to an announcement of the Intel Quark family of silicon chipsets. The Quark SoC is Intel’s smallest SoC, 1/5 the size of Atom and drawing only 1/10 the power. He next shows off “wearable” products that Intel is designing (based around Quark, I would assume). More reference designs around Quark are forthcoming.

Summarizing what he’s shown the attendees, Krzanich refers to Intel’s “landscape of opportunity,” with products ranging from low-end server CPUs to high-end server CPUs, SoCs for tablets, SoCs for phones, and the all-new Quark SoC for the “Internet of Things” and wearable computing. With that, he wraps up his portion of the keynote, and reminds attendees that he and Renée will be doing open Q&A at the end of the keynote. He turns the stage over to Renée James, Intel’s President.

James now takes the stage, and reminds attendees of Intel’s 45 years of leadership and innovation. She believes that Intel will help society transform to what she calls “integrated computing,” where we’ll move away from worrying about the form factor of computing toward using technology to transform lives and solve big problems. Her intent during this portion of the keynote is to show some projects that are already underway on how integrated computing will change computing. James gives us a quick review of some Intel history, reminding attendees that “Moore’s Law” remains alive and well. Krzanich announced 14nm today, but James points ahead to 10nm (in 2015) and 7nm (in 2017). James points to a number of technical hurdles that Intel has overcome:

  • 3-D transistors
  • Gate last approach
  • Hi-K
  • Phase shift masks

Overcoming these technical barriers has allowed Intel to continue its leadership in computing. James shows off a cell phone manufactured using 1500nm technology. She compares that to a modern state-of-the-art phone. This phone, a Lenovo K900 (I think), runs at 2GHz and has more computing power than a Pentium 4 processor.

James reviews three “phases” of computing: task-based computing, lifestyle computing, and integrated computing. James’ discussion of integrated computing is inclusive of terms like “Internet of Things” and wearable computing; it’s a vision of embedding silicon/intelligence/sensors into everyday devices. She uses an example of how Dublin, Ireland, is using integrated computing (sensors in the city drainage systems) to intelligently manage drain water, traffic, and congestion in real-time. This leads James into a discussion of how integrated computing can be used to manage “mega-cities”.

The next use case of integrated computing is in healthcare. She shows off a wristband-based device that gathers health metrics in real time. Next, James shows off a silicon-based patch that will replace the wristband device and gather the same health metrics in real time.

Of course, all of these devices are transmitting data, and this leads James to a discussion of big data. She uses human genome mapping as an example, and she talks about how the cost and time requirements for human genome mapping have dropped dramatically. Intel’s advancements in computing have driven the cost and time requirements for human genome mapping down to weeks (targeting days and hours) and down to about $5,000 (and targeting less than $1,000). Why is this important? Because at these cost and time requirements, it enables customized healthcare—for example, being able to create treatments that are targeted at an individual’s specific ailments (they used cancer as an example).

James closed out her portion of the keynote with a quote from one of Intel’s founders, Robert Noyce: “Don’t be encumbered by history, go off and do something wonderful.” At this point, Krzanich returns to the stage, and he and James open up for general questions and answers (which I do agree is unusual for a keynote).

