Welcome to Technology Short Take #42, another installment in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create high availability instances using VRRP (via keepalived). Very cool stuff, in my opinion. (A minimal sketch of the API call involved appears after this list.)
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.
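
If you want to experiment with the allowed-address-pairs extension yourself, here’s a minimal sketch of the API call involved, using python-neutronclient; the credentials, port ID, and VIP address are placeholders, not values from Aaron’s post.

    # Sketch: permit a VRRP virtual IP on a Neutron port via the
    # allowed-address-pairs extension. Without this, Neutron's
    # anti-spoofing rules drop traffic sourced from the VIP.
    # Credentials, port ID, and VIP are all placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username='admin',
        password='secret',
        tenant_name='demo',
        auth_url='http://controller:5000/v2.0',
    )

    port_id = 'PORT-UUID-GOES-HERE'  # instance port that carries the VIP
    vrrp_vip = '10.0.0.100'          # shared address managed by keepalived

    neutron.update_port(port_id, {
        'port': {'allowed_address_pairs': [{'ip_address': vrrp_vip}]},
    })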

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas why Cisco was so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the commandlinefu.com site. It reminds me that I still have so much yet to learn.)
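
As a rough illustration of the same idea (this is not the linked one-liner itself), here’s a Python sketch that overwrites a block device with pseudo-random data. The device path is a placeholder, and the operation is destructive.

    # Sketch: scramble a block device by overwriting it with
    # pseudo-random data. NOT the commandlinefu one-liner, just an
    # illustration of the idea. DESTRUCTIVE: all data on the device
    # is lost. Requires root; the device path is a placeholder.
    import os

    DEVICE = '/dev/sdX'      # placeholder device to scramble
    CHUNK = 4 * 1024 * 1024  # write in 4 MB chunks

    with open(DEVICE, 'wb') as dev:
        try:
            while True:
                dev.write(os.urandom(CHUNK))
        except OSError:
            pass  # raised once the end of the device is reached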

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to run OpenStack Icehouse (via DevStack) together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too. (For a rough Python parallel to these patterns, see the sketch after this list.)
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
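
As promised above, here’s a rough Python parallel (via the requests library) to the kinds of cURL invocations those REST API articles cover; the endpoint, token, and payload are invented for illustration.

    # Sketch: 'requests' equivalents of common cURL patterns for
    # exercising a REST API. Endpoint, token, and payload are made up.
    import requests

    BASE = 'https://api.example.com/v1'
    HEADERS = {'X-Auth-Token': 'TOKEN-GOES-HERE'}

    # curl -H "X-Auth-Token: ..." $BASE/servers
    resp = requests.get(BASE + '/servers', headers=HEADERS)
    print(resp.status_code, resp.json())

    # curl -X POST -H "Content-Type: application/json" \
    #      -d '{"name": "test"}' $BASE/servers
    resp = requests.post(BASE + '/servers', headers=HEADERS,
                         json={'name': 'test'})
    resp.raise_for_status()

    # curl -X DELETE $BASE/servers/42
    requests.delete(BASE + '/servers/42', headers=HEADERS)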

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but it supports multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time waits for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Welcome to Technology Short Take #41, the latest in my series of random thoughts, articles, and links from around the Internet. Here’s hoping you find something useful!

Networking

  • Network Functions Virtualization (NFV) is a networking topic that is starting to get more and more attention (some may equate “attention” with “hype”; I’ll allow you to draw your own conclusion there). In any case, I liked how this article really hit upon what I personally feel is something many people are overlooking in NFV. Many vendors are simply rushing to provide virtualized versions of their solution without addressing the orchestration and automation side of the house. I’m looking forward to part 2 on this topic, in which the author plans to share more technical details.
  • Rob Sherwood, CTO of Big Switch, recently published a reasonably in-depth look at “modern OpenFlow” implementations and how they can leverage multiple tables in hardware. Some good information in here, especially on OpenFlow basics (good for those of you who aren’t familiar with OpenFlow).
  • Connecting Docker containers to Open vSwitch is one thing, but what about using Docker containers to run Open vSwitch in userspace? Read this.
  • Ivan knocks centralized SDN control planes in this post. It sounds like Ivan favors scale-out architectures, not scale-up architectures (which are typically what is seen in centralized control plane deployments).
  • Looking for more VMware NSX content? Anthony Burke has started a new series focusing on VMware NSX in pure vSphere environments. As far as I can tell, Anthony is up to 4 posts in the series so far. Check them out here: part 1, part 2, part 3, and part 4. Enjoy!

Servers/Hardware

  • Good friend Simon Seagrave is back to the online world again with this heads-up on a potential NIC issue with an HP ProLiant firmware update. The post also contains a link to a fix for the issue. Glad to see you back again, Simon!
  • Tom Howarth asks, “Is the x86 blade server dead?” (OK, so he didn’t use those words specifically. I’m paraphrasing for dramatic effect.) The basic premise of Tom’s position is that new technologies like server-side caching and VSAN/Ceph/Sanbolic (turning direct-attached storage into shared storage) will dramatically change the landscape of the data center. I would generally agree, although I’m not sure that I agree with Tom’s statement that “complexity is reduced” with these technologies. I think we’re just shifting the complexity to a different place, although it’s a place where I think we can better manage the complexity (and perhaps mask it). What do you think?

Security

Cloud Computing/Cloud Management

  • Juan Manuel Rey has launched a series of blog posts on deploying OpenStack with KVM and VMware NSX. He has three parts published so far; all good stuff. See part 1, part 2, and part 3.
  • Kyle Mestery brought to my attention (via Twitter) this list of the “best newly-available OpenStack guides and how-to’s”. It was good to see a couple of Cody Bunch’s articles on the list; Cody’s been producing some really useful OpenStack content recently.
  • I haven’t had the opportunity to use SaltStack yet, but I’m hearing good things about it. It’s always helpful (to me, at least) to be able to look at products in the context of solving a real-world problem, which is why seeing this post with details on using SaltStack to automate OpenStack deployment was helpful.
  • Here’s a heads-up on a potential issue with the vCAC 6.0.1.1 upgrade—the upgrade apparently changes some configuration files. The linked blog post provides more details on which files get changed. If you’re looking at doing this upgrade, read this to make sure you aren’t adversely affected.
  • Here’s a post with some additional information on OpenStack live migration that you might find useful.

Operating Systems/Applications

  • RHEL7, Docker, and Puppet together? Here’s a post on just such a use case (oh, I forgot to mention OpenStack’s involved, too).
  • Have you ever walked through a spider web because you didn’t see it ahead of time? (Not very fun.) Sometimes I feel that way with certain technologies or projects—like there are connections there with other technologies, projects, trends, etc., that aren’t quite “visible” just yet. That’s where I am right now with the recent hype around containers and how they are going to replace VMs. I’m not so sure I agree with that just yet…but I have more noodling to do on the topic.

Storage

  • “Server SAN” seems to be the name that is emerging to describe various technologies and architectures that create pools of storage from direct-attached storage (DAS). This would include products like VMware VSAN as well as projects like Ceph and others. Stu Miniman has a nice write-up on Server SAN over at Wikibon; if you’re not familiar with some of the architectures involved, that might be a good place to start. Also at Wikibon, David Floyer has a write-up on the rise of Server SAN that goes into a bit more detail on business and technology drivers, friction to adoption, and some recommendations.
  • Red Hat recently announced they were acquiring Inktank, the company behind the open source scale-out Ceph project. Jon Benedict, aka “Captain KVM,” weighs in with his thoughts on the matter. Of course, there’s no shortage of thoughts on the acquisition—a quick web search will prove that—but I find it interesting that none of the “big names” in storage social media had anything to say (not that I could find, anyway). Howard? Stephen? Chris? Martin? Bueller?

Virtualization

  • Doug Youd pulled together a nice summary of some of the issues and facts around routed vMotion (vMotion across layer 3 boundaries, such as across a Clos fabric/leaf-spine topology). It’s definitely worth a read (and not just because I get mentioned in the article, either—although that doesn’t hurt).
  • I’ve talked before—although it’s been a while—about Hyper-V’s choice to rely on host-level NIC teaming in order to provide network link redundancy to virtual machines. Ben Armstrong talks about another option, guest-level NIC teaming, in this post. I’m not so sure that using guest-level teaming is any better than relying on host-level NIC teaming; what’s really needed is a more full-featured virtual networking layer.
  • Want to run nested ESXi on vCHS? Well, it’s not supported…but William Lam shows you how anyway. Gotta love it!
  • Brian Graf shows you how to remove IP pools using PowerCLI.

Well, that’s it for this time around. As always, I welcome all courteous comments, so feel free to share your thoughts, ideas, rants, links, or feedback in the comments below.


Technology and Travel

Cody Bunch recently posted a quick round-up of what he carries when traveling, and just for fun I thought I’d do the same. Like Cody, I don’t know that I would consider myself a road warrior, but I have traveled a pretty fair amount. Here’s what I’m currently carrying when I hit the road:

  • Light laptop and tablet: After years of carrying around a 15″ MacBook Pro, then going down to a 13″ MacBook Pro, I have to say I’m pretty happy with the 13″ MacBook Air that I’m carrying now. Weight really does make a difference. I’m still toting the full-size iPad, but will probably switch to an iPad mini later in the year to save a bit more weight.
  • Bag: I settled on the Timbuk2 Commute messenger bag (see my write-up) and I’m quite pleased with it. A good bag makes a big difference when you’re mobile.
  • Backup battery: I’m carrying the NewTrent PowerPak 10.0 (NT100H). It may not be the best product out there, but it’s worked pretty well for me. It’s not too heavy and not too big, and will charge both phones and tablets.
  • Noise-canceling earphones: The Bose QC20 earphones (in-ear) are awesome. Naturally they let in a bit more noise than the bigger on-ear QC15 headphones, but the added noise is worth the tremendous decrease in size and weight.

On the software side, I’ll definitely echo Cody’s recommendation of Little Snitch; it’s an excellent product that I’ve used for years. You might also consider enabling the built-in firewall (see this write-up for enabling pf on OS X Mountain Lion; I haven’t tried it on Mavericks yet) for an added layer of network protection.

What about you, other road warriors out there? What are you carrying these days?

Update: Thanks to Ivan Pepelnjak, who pointed out that I had inadvertently swapped out the product names for the Bose earphones and headphones. That’s been corrected!


Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will go away entirely.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.
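
If you just want a quick-and-dirty local check along the lines that article describes, here’s a sketch. It only inspects the version string reported by the openssl binary (OpenSSL 1.0.1 through 1.0.1f are the affected releases), so distro-backported fixes can produce false positives.

    # Sketch: naive local Heartbleed check based only on the version
    # reported by the openssl binary. OpenSSL 1.0.1 through 1.0.1f are
    # the affected releases; distros often backport fixes without
    # bumping the version, so treat "possibly vulnerable" as a prompt
    # to dig deeper, not a verdict.
    import re
    import subprocess

    out = subprocess.check_output(['openssl', 'version']).decode()
    m = re.search(r'OpenSSL (1\.0\.1)([a-z]?)', out)

    if m and m.group(2) <= 'f':  # '' (plain 1.0.1) through 'f' match
        print('Possibly vulnerable:', out.strip())
    else:
        print('Version string looks OK:', out.strip())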

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.) Once you have a working image, uploading it to Glance is the easy part; see the sketch at the end of this list.
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources, you’re bound to find something useful there.
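
Following up on the Windows image item above: once the image is built, getting it into Glance is the easy part. Here’s a minimal sketch using python-glanceclient’s v1 API; the endpoint, token, and file name are placeholders for my lab setup.

    # Sketch: upload a freshly built Windows qcow2 image to Glance
    # via the v1 API. Endpoint, token, and file name are placeholders.
    from glanceclient import Client

    glance = Client('1', 'http://controller:9292',
                    token='AUTH-TOKEN-GOES-HERE')

    with open('win2008r2.qcow2', 'rb') as image_data:
        image = glance.images.create(
            name='win2008r2',
            disk_format='qcow2',
            container_format='bare',
            is_public=False,
            data=image_data,
        )
    print(image.id)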

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution then, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later), recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy. (For the API-level equivalent of creating a provider network, see the sketch after this list.)
  • Here’s a couple (first one, second one) of quick walk-throughs on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.
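
As referenced in the first bullet above, here’s a sketch of what creating a VLAN provider network looks like at the API level with python-neutronclient; the physical network label and VLAN ID are placeholders for a hypothetical deployment.

    # Sketch: create a VLAN provider network via python-neutronclient,
    # the API-level counterpart to the provider-network articles above.
    # 'physnet1' and VLAN 100 are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username='admin',
        password='secret',
        tenant_name='admin',
        auth_url='http://controller:5000/v2.0',
    )

    network = neutron.create_network({
        'network': {
            'name': 'vlan100-provider',
            'provider:network_type': 'vlan',
            'provider:physical_network': 'physnet1',
            'provider:segmentation_id': 100,
            'shared': True,
        },
    })
    print(network['network']['id'])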

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I wanted to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.

Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—a KVM guest running CentOS 6.3 and the Floodlight open source controller.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options”, Brent Salisbury—a rock star new breed network engineer emerging in the new world of SDN—discusses some practical deployment strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.

Servers/Hardware

  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)

Security

Nothing this time around, but I’ll keep my eyes peeled for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, and so it’s hard to remember the specific command and/or syntax when you do need one of these commands.
  • I wouldn’t have initially considered comparing Docker and Chef, but considering that I’m not an expert in either technology it could just be my limited understanding. However, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusions that it is entirely possible that we’ll see Docker and Chef being used together. However, as I stated, I’m not an expert in either technology, so my view may be incorrect. (I reserve the right to revise my view in the future.)

Storage

  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some ideas for enhancing FC error detection and notification with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.

Virtualization

That’s it for this time around, but feel free to continue the conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!


IDF 2013 Summary and Thoughts

I’m back home in Denver after spending a few days in San Francisco at Intel Developer Forum (IDF) 2013, so I thought I’d take a few minutes to sit down and share a summary of the event and my thoughts.

First, here are links to all the liveblogging I did while at the conference:

IDF 2013 Keynote, Day 1:
http://blog.scottlowe.org/2013/09/10/idf-2013-keynote-day-1/

Enhancing OpenStack with Intel Technologies for Public, Private, and Hybrid Cloud:
http://blog.scottlowe.org/2013/09/10/idf-2013-enhancing-openstack-with-intel-technologies/

IDF 2013 Keynote, Day 2:
http://blog.scottlowe.org/2013/09/11/idf-2013-keynote-day-2/

Rack Scale Architecture for Cloud:
http://blog.scottlowe.org/2013/09/11/idf-2013-rack-scale-architecture-for-cloud/

Virtualizing the Network to Enable a Software-Defined Infrastructure (SDI):
http://blog.scottlowe.org/2013/09/11/idf-2013-virtualizing-the-network-to-enable-sdi/

The Future of Software-Defined Networking with the Intel Open Network Platform Switch Reference Design:
http://blog.scottlowe.org/2013/09/12/idf-2013-future-of-sdn-with-the-intel-onp-switch-reference-design/

Enabling Network Function Virtualization and Software Defined Networking with the Intel Open Network Platform Server Reference Architecture Design:
http://blog.scottlowe.org/2013/09/12/idf-2013-enabling-nfvsdn-with-intel-onp-server-reference-design/

Overall, I enjoyed the event and found it quite useful. It appears to me that Intel has a three-pronged strategy:

  1. Expand the footprint of IA (Intel Architecture, what everyone else calls x86, x86_64, or x64) CPUs by moving into adjacent markets
  2. Extend Intel’s reach with non-IA hardware (FM6000 series, QuickAssist Server Acceleration Card [QASAC])
  3. Use software, especially open source software, to drive more development toward Intel-based solutions

I’m sure there’s probably more, but those are the ones that really stand out. You can see some evidence of these moves:

  • Intel’s Open Network Platform (ONP) Switch reference design (aka “Seacliff Trail”) helps drive #1 and #2; it contains an IA CPU for programmability and leverages the FM6000 series for high-speed networking functionality
  • Intel’s ONP Server reference design (“Sunrise Trail”) pushes Intel-based servers into markets they haven’t traditionally seen (telco/networking roles), especially in conjunction with strategy #3 above (as shown by the strong use of Intel DPDK to optimize Sunrise Trail for SDN/NFV applications)
  • Intel’s Avoton and newly-announced Quark families push Intel into new markets (micro-servers, tablets, phones, sensors) where they haven’t traditionally been a major player

All in all, it will be very interesting to see how things play out. As others have said, it’s definitely an interesting time to be in technology.

As with other relevant industry conferences (like VMworld, for example), one of the values of IDF is engaging in conversations with other professionals. I had a few of these conversations while at IDF:

  • I spent some time talking with an Intel employee about Intel’s Cache Acceleration Software (CAS), which came out of the acquisition of a company called Nevex. I wasn’t even aware that Intel was doing cache acceleration. Intel CAS operates at the operating system (OS) level, serving as a file-level cache on Windows and a block-level cache (with file awareness) on Linux. It also supports caching to a remote SSD (in a SAN, for example) so that you can still use vMotion in vSphere environments. In the near future, they’re looking at supporting cache clustering with a cache coherence algorithm that would allow you to use SSDs/flash from multiple servers as a single cache.
  • I had a brief conversation with an Intel engineer who specialized in SSDs (didn’t get to hit him up for some free Intel DC S3700s, though). We touched on a number of different areas, but one interesting statistic that came out of the conversation was the reality behind “running out of writes” on an SSD. (This refers to the number of times you can write data to an SSD, which is made out to be a pretty big deal by some folks.) He spoke of a test that wrote 45GB an hour to an SSD; even at that rate, it would have taken multiple decades of use before the SSD could no longer accept writes. (The quick arithmetic behind that claim appears after this list.)
  • Finally, I spent some time chatting with my friend Brian Johnson, who works in the networking group at Intel. There’s lots of cool stuff going on there, but—unfortunately—I can’t really discuss most of what he shared with me. Sorry folks! We did have an interesting discussion around the user experience, personal data, mobility, and ubiquitous connectivity. Who knows—maybe the next great startup will emerge out of our discussion! :-)
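
The arithmetic behind that endurance claim is easy to sanity-check. Here’s the back-of-the-envelope version; the 20-year horizon is my own assumption to make “multiple decades” concrete, not a number from the conversation.

    # Back-of-the-envelope check on the SSD endurance claim above.
    # 45 GB/hour came from the conversation; the 20-year horizon is
    # my own assumption.
    GB_PER_HOUR = 45
    HOURS_PER_YEAR = 24 * 365
    YEARS = 20

    total_tb = GB_PER_HOUR * HOURS_PER_YEAR * YEARS / 1000.0
    print('%.0f TB written over %d years' % (total_tb, YEARS))
    # ~7,884 TB (roughly 7.9 PB): an endurance rating in the
    # multi-petabyte range, plausible for a high-end data center SSD.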

Anyway, that’s it for my IDF 2013 coverage. I hope that some of the information I shared proves useful to you in some way. As usual, courteous comments (with vendor disclosures, where needful) are always welcome.

(Disclosure: I work for VMware, but was invited to attend IDF at Intel’s expense.)


This is session COMS003, titled “Enabling Network Function Virtualization and Software Defined Networking with the Intel Open Network Platform Server Reference Architecture Design.” (How’s that for a mouthful!) The speakers are Frank Schapfel, Senior Product Line Manager with Intel, and Brian Skerry, Open Networks System Architect with Intel. This session is slightly related to IDF 2013 session COMS002, which focused more on the Intel ONP Switch reference design.

Frank kicks the session off with a quick review of the agenda, then dives right into the content. He starts first with reviewing what SDN and NFV are; I won’t repeat all that here again since it’s already been covered multiple times. (See the liveblog from the COMS002 session for more details, if you need them.)

Next, Frank moves into Intel’s role in enabling SDN/NFV. The key takeaway is that Intel’s CPUs are gradually “eating away” at typically-proprietary functions like packet processing and signal processing. With these functions now possible in x86_64 CPUs, they can be moved into a VM to help achieve NFV. (It could be argued that full machine virtualization might not be the most efficient way of handling NFV. Lightweight containers might be more efficient.) According to Frank, once NFV has been addressed this enables SDN, which he describes as greater automation across the network via a separated control plane. Naturally, a series of Intel ingredients underpin this: Intel CPUs, Intel NICs, switch silicon (the FM6700), Intel DPDK, and Open Networking Software.

This leads Frank into a discussion of how Intel will address this market moving forward. This year, Intel has the Intel platform for Communications Infrastructure, leveraging Xeon and Avoton CPUs. Next year and in 2015, you can expect Intel to leverage the Haswell microarchitecture to refresh this platform. Beyond that, future microarchitectures will deliver more capabilities and more capacity that can be brought to bear on the SDN/NFV market.

At this point, Frank transitions into a more detailed and specific discussion of the ONP Server reference platform (code-named “Sunrise Trail”). The platform leverages Xeon E5-2600 v2 CPUs, plus a host of other Intel technologies (SR-IOV and packet acceleration via DPDK). Of particular note is the use of the Intel QuickAssist Services Acceleration Card (QASAC), which has its own PCI connection to the CPU cores and is designed to help accelerate tasks like encryption and compression. QASAC can offer up to 50Gbps of encryption/compression acceleration, with higher levels available via additional PCIe Gen 3 add-in cards.

Both Seacliff Trail (ONP Switch) and Sunrise Trail (ONP Server) will evolve over time as rack-scale architecture (RSA) matures and evolves as well. Eventually, Seacliff Trail and Sunrise Trail could merge as part of RSA (referred to as Intel ONP for Hybrid Data Plane). Note that the merging of ONP Server and ONP Switch is something that I postulated last year after IDF 2012.

Sunrise Trail will leverage one of a number of potential enterprise Linux distributions, integration with OpenStack, and the Intel DPDK for packet acceleration; various hypervisors will be supported (KVM and Hyper-V among them), along with support for OpenFlow (which will undoubtedly come via Open vSwitch [OVS]). For telco environments, ONP Server will likely leverage Wind River Systems’ real-time Linux distribution along with other components.

Frank now turns it over to Brian, who will discuss some of the software pieces involved in ONP Server. He first shows a high-level architecture (I tweeted a picture of this separately). Brian notes that this architecture does not map directly to the ETSI NFV architecture.

Some key challenges that this architecture faces:

  • Integration of legacy OSS/BSS systems
  • Element management needs to work in a virtualized environment
  • Infrastructure orchestration such as OpenStack has industry momentum, but challenges still remain
  • SDN controller architectures and the marketplace are still evolving
  • Service orchestration is being addressed through a number of organizations with lots of opportunity for commercial and open source solutions

Brian takes a moment to zoom in a bit on OpenStack as an infrastructure orchestrator. He calls out enhanced platform awareness (making OpenStack aware of underlying platform capabilities, such as TCP) and passthrough of PCI devices and VF assignment (when using SR-IOV). He really focuses on platform awareness, which makes sense since Intel needs to differentiate at the platform level.

The discussion now shifts to a more focused look at the software that actually runs inside Sunrise Trail. Brian mentions the importance of an Intel DPDK vSwitch (which is typically a DPDK-accelerated version of OVS). A DPDK-accelerated virtual switch is so important because the virtual appliances that NFV leverages will quickly become a bottleneck if the virtual switch isn’t being accelerated by the underlying hardware. Brian mentions some performance figures: stock OVS gets about 300K small packets per second, but he doesn’t yet provide any DPDK-accelerated numbers. Source code for DPDK acceleration is available at http://01.org, although it is missing some features (it does not yet have feature parity with stock OVS). Brian issues a call for contributors to their effort, but I wonder why they don’t just contribute to stock OVS and leverage that community.
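
To put that 300K packets-per-second figure in context, here’s a quick line-rate calculation for minimum-size frames on 10GbE; the overhead numbers are standard Ethernet framing, not anything Intel presented.

    # Context for the ~300Kpps stock-OVS figure above: 10GbE line rate
    # for minimum-size frames, using standard Ethernet overheads.
    LINK_BPS = 10e9
    # 64-byte frame + 8-byte preamble + 12-byte inter-frame gap
    BITS_PER_FRAME = (64 + 8 + 12) * 8

    line_rate_pps = LINK_BPS / BITS_PER_FRAME
    print('%.2f Mpps line rate' % (line_rate_pps / 1e6))  # ~14.88 Mpps

    # Stock OVS at ~300K small packets/sec is roughly 2% of 10GbE line
    # rate, which is why hardware/DPDK acceleration of the virtual
    # switch matters so much for NFV data planes.
    print('%.1f%% of line rate' % (300e3 / line_rate_pps * 100))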

DPDK also enables other functions like deep packet inspection (DPI) and fine-grained Quality of Service (QoS) control.

Brian now turns it back over to Frank, who provides more information on where attendees can learn more about Intel’s SDN/NFV enablement efforts. He points attendees to Intel’s Network Builders program, provides a summary of the key points from the session, and then opens for questions and answers.


This is IDF 2013 session CLDS001, titled “Rack Scale Architecture for Cloud.” The speaker is Mohan Kumar, a Sr Principal Engineer with Intel. Kumar works in the Server Platforms Group.

Kumar notes that Krzanich mentioned rack-scale architecture (RSA, not to be confused with a security company of the same name) as one of three pillars of accelerating the data center, and this session will dive a bit deeper on rack-scale architecture. He’ll start with the motivation for RSA, then provide an overview of RSA and how it works.

The motivation for developing RSA is really rooted in the vision of the “Internet of Things,” which Intel estimates will reach approximately 30 billion devices by 2020. This means there will be tremendous need for servers in data centers to support this vast number of connected devices. However, the current architectures aren’t sufficient. Resources are locked into individual servers, making it more difficult to shift resources as workloads change and adapt. (I’d contend that virtualization helps address most of this concern.) Thermal inefficiencies and a lack of service-based (software-defined?) configurability of resources are other motivations for RSA. (Again, I’d contend that the configurability of resources is mitigated somewhat by the extensive use of virtualization.) Finally, individual resources within a server can’t be upgraded. To address these concerns, Intel believes that a rack-level architecture is needed.

So where does RSA stand today? Today, RSA can offer shared power (a single power bus instead of multiple power supplies in each server), shared cooling, and rack management (more intelligence in the rack itself). In the near future, Intel wants RSA to include a “rack fabric,” using optical interconnects that allow for a much greater level of disaggregation and much greater modularity. The ultimate goal of RSA is completely modularized servers with pooled CPUs, pooled RAM, pooled I/O, and pooled storage. This is going to be a key point of Kumar’s presentation.

So what are the key Intel technologies involved in RSA?

  • Reference architectures and orchestration software
  • Intel’s Open Network Platform (ONP)
  • Storage technologies, like PCIe SSD and caching
  • Photonics and switch fabrics
  • CPUs and silicon (Atom, Xeon, Quark?)

Intel wants RSA to align with Open Compute Project (OCP) efforts as well. Kumar also mentions something called Scorpio, which I hadn’t heard of before (this is similar to OCP in some way).

In looking at how these components come together in RSA, Intel estimates that cloud operators would see the following benefits:

  • 3x reduction in cable requirements using silicon photonics
  • 2.5x improvement in network uplinks
  • 25x improvement in network downlinks
  • Up to 1.5x improvement in server/rack density
  • Up to 6x reduction in power provisioning

Most of these improvements come from the use of silicon photonics, according to Kumar.

Looking ahead into the future of RSA, what are some of the key problems that remain to be solved? Kumar points to the following challenges:

  1. There is no service-based configurability of memory. (What about virtualization here? I could see this argument for the virtualization hosts, but that scale will be vastly smaller than the scale for the VMs/instances themselves.) Kumar believes that pooled memory is the answer to this challenge.
  2. Similarly, there is no service-based configurability for direct-attached storage. (My same comments regarding the pervasive use of virtualization apply here as well.) Kumar’s response to this is a high-density Ethernet JBOD he calls a PBOD (pooled bunch of disks).

With RSA, a rack becomes the unit of scaling in a cloud environment. The management domain will aggregate multiple racks together in a pod. Looking specifically at the OCP implementation, within a rack the sub-unit of scaling is a tray. A tray consists of multiple nodes. The tray contains compute, memory, storage, and networking modules; a node is a compute module.

Diving a bit deeper on this idea, server CPU(s) will be connected to resource modules (memory, storage, networking) and will be managed by a tray manager. All this occurs within a tray. Between trays, Intel would look at using silicon photonics (or possibly a ToR switch); from there, the uplink goes out to the end-of-row (EoR) switch.

The resource modules (memory, storage, networking) are referred to as RSA pooled functions. A pooled memory controller would manage pooled memory. Pooled networking would be SDN-ready network connectivity. Pooled storage is the PBOD (Ethernet-connected JBOD, not using iSCSI but over straight Ethernet—are we talking ATA over Ethernet? Something else entirely?). The tray manager, mentioned earlier, ensures that resources are properly allocated and enforced.

Next, Kumar shifts his attention to pooled memory in particular. The key motivations for pooled memory include memory sharing, memory disaggregation, and right-sizing memory to workloads running on the node. If you were to also enable memory sharing—assign memory to two nodes at the same time—then you could enable new forms of innovation (think very fast VM migration or tightly-coupled OS clustering). It seems to me that memory sharing would require changes to operating systems and hypervisors in order for it to be supported, though.

Looking closer at how pooled memory works, it requires something called a pooled memory controller, which manages all the centrally pooled RAM. The pooled memory controller is responsible for managing memory partitions and allocation. (Would it be possible for this pooled memory controller to do “memory deduplication” too?) This is the piece that will enable shared memory partitions, but Kumar doesn’t elaborate on changes required to today’s OSes and hypervisors in order to support this functionality.

Kumar next shows a recorded demo of some of the RSA technologies in action.

At this point, Kumar shifts gears to discuss pooled storage in a bit more detail. The motivation for doing pooled storage is similar to the reasons for all of RSA—more flexibility in allocating resources, eliminating bottlenecks, and on-demand allocation.

Intel’s RSA addresses pooled storage through a PBOD, which would be accessed over Ethernet using RDSP (Remote DAS Protocol). Individual compute nodes will use RDSP to communicate with the PBOD controller, which acts like the pooled memory controller in that it handles partitioning storage and allocating storage to the individual compute nodes. Kumar tries to show a recorded demo of pooled storage but runs into a few technical issues.

At this point, Kumar provides a summary of the work that has been done toward RSA, reminds attendees that RSA technologies can be seen in the Technology Showcase, and opens the session up for questions and answers.


IDF 2013: Keynote, Day 2

This is a liveblog of the day 2 keynote at Intel Developer Forum (IDF) 2013 in San Francisco. (Here is the link for the liveblog from the day 1 keynote.)

The keynote starts a few minutes after 9am, following a moment of silence to observe 9/11. Following that, Ulmonth Smith (VP of Sales and Marketing) takes the stage to kick off the keynote. Smith takes a few moments to recount yesterday’s keynote, particularly calling out the Quark announcement. Today’s keynote speakers are Kirk Skaugen, Doug Fisher, and Dr. Hermann Eul. The focus of the keynote is going to be mobility.

The first to take the stage is Doug Fisher, VP and General Manager of the Software and Services Group. Fisher sets the stage for people interacting with multiple devices, and devices that are highly mobile, supported by software and services delivered over a ubiquitous network connection. Mobility isn’t just the device, it isn’t just the software and services, it isn’t just the ecosystem—it’s all of these things. He then introduces Hermann Eul.

Eul takes the stage; he’s the VP and General Manager of the Mobile and Communications Group at Intel. Eul believes that mobility has improved our complex lives in immeasurable ways, though the technology masks much of the complexity that is involved in mobility. He walks through an example of taking a picture of “the most frequently found animal on the Internet—the cat.” He then reviews the basic components of the mobile platform, which include not only hardware but also mobile software. Naturally, a great CPU is key to success. This leads Eul into a discussion of the Intel Silvermont core: built with 22nm Tri-Gate transistors, multi-core architecture, 64-bit support, and a wide dynamic power operating range. That discussion sets up today’s announcement: the introduction of the Bay Trail reference platform.

Bay Trail is a mobile computing experience reference architecture. It leverages a range of Intel technologies: next-gen Intel multi-core SoC, Intel HD graphics, on-demand performance with Intel Burst Technology 2.0, and a next-gen programmable ISP (Image Service Processor). Eul then leads into a live demo of a Bay Trail product. It appears it’s running some flavor of Windows. Following that demo, Jerry Shen (CEO of Asus) takes the stage to show off the Asus T100, a Bay Trail-based product that boasts a touchscreen IPS display, stereo audio, a detachable keyboard dock, and an 11-hour battery life.

Following the Asus demo, Victoria Molina—a fashion industry executive—takes the stage to talk about how technology has/will shape online shopping. Molina takes us through a quasi-live demo about virtual shopping software that leverages 3-D avatars and your personal measurements. As the demo proceeds, they show you a “fit view” that shows how tight or loose the garments will fit. The software also does a “virtual cat walk” that shows how the garments will look as you walk and move around. Following the final Bay Trail demo, Eul wraps up the discussion with a review of some of the OEMs that will be introducing Bay Trail-based products. At this point, he introduces Neil Hand from Dell to introduce his Bay Trail-based product. Hand shows a Windows 8-based 8" tablet from Dell, the start of a new family of products that will be branded Venue.

What’s next after Bay Trail? Eul shares some roadmap plans. Next up is the Merrifield platform, which will increase performance, graphics, and battery life. In 2014 will come Advanced LTE (A-LTE). Farther out is 14nm technology, called Airmont.

The final piece from Eul is a demonstration of Bay Trail and some bracelets that were distributed to the attendees, in which he uses an Intel-based Samsung tablet to control the bracelets, making them change colors, blink, and make patterns.

Now Kirk Skaugen takes the stage. Skaugen is a Senior VP and General Manager of the PC Client Group. He starts his portion of the keynote discussing the introduction of the Ultrabook, and how that form factor has evolved over the last few years to include things like touch support and 2-in-1 form factors. Skaugen takes some time to describe more fully the specifications around 2-in-1 devices, coming from hardware partners like Dell, HP, Lenovo, Panasonic, Sony, and Toshiba. This leads into a demonstration of a variety of 2-in-1 devices: sliders, fold-over designs, detachables (where the keyboard detaches), and “ferris wheel” designs where the screen flips. Now taking the stage is Tami Reller from Microsoft, whose software powered all the 2-in-1 demonstrations that Intel just showed. The keynote sort of digresses into a “Microsoft Q&A” for a few minutes before getting back on track with some Intel announcements.

From a more business-focused perspective, Intel announces 4th generation vPro-enabled Intel Core processors. Location awareness is also being integrated into vPro to enable location-based services (one example provided is automatically restricting access to confidential documentation when users leave the building). Intel also announced (this week) the Intel SSD Pro 1500. Additionally, Intel is announcing Intel Pro WiDi (Wireless Display) to better integrate wireless projectors. Finally, Intel is working with Cisco to eliminate passwords entirely. They are doing that via Intel Identity Password, which embeds keys into the hardware to enable passwordless VPNs.

Taking the stage now is Mario Müller, VP of IT Infrastructure at BMW. Müller talks about how Intel Atom CPUs are built into BMW cars, especially the new BMW i8 (BMW’s first all-electric car, if I heard correctly). He also refers to some new deployments within BMW that will leverage the new vPro-enabled Intel Core CPUs, many of which will be Ultrabooks. Müller indicates that 2-in-1 is useful, not for all employees, but certainly for select individuals who need that functionality.

Skaugen now announces Bay Trail M and Bay Trail D reference platforms. While Bay Trail (also referred to as Bay Trail T) is intended for tablets, the M and D platforms will help drive innovation in mobile and desktop form factors. After a quick look at some hardware prototypes, Skaugen takes a moment to look ahead at what Intel will be doing over the next year or so. He shows 30% reductions in power usage coming from Broadwell, which will be the 14nm technology Intel will introduce next year. From there, Skaugen shifts into a discussion of perceptual computing (3D support). He shows off a 3-D camera that can be embedded into the bezel of an Ultrabook, then shows a video of kids interacting with a prototype hardware and software combination leveraging Intel’s 3-D/perceptual computing support.

And now Doug Fisher returns to the stage. He starts his portion of the keynote by returning to the Intel-Microsoft partnership and focusing on innovations like fast start, longer battery life, touch- and sensor-awareness, on a highly responsive platform that also offers full compatibility around applications and devices. Part of Fisher’s presentation includes tools for developers to help make their applications aware of the 2-in-1 form factor, so that applications can automatically adjust their behavior and UI based on the form factor of the device on which they’re running.

Intel is also working closely with Google to enhance Android on Intel. This includes work on the Dalvik runtime, optimized drivers and firmware, key kernel contributions, and the NDK app bridging technology that will allow apps developed for other platforms (iOS?) to run on Android. Fisher next introduces Gonzague de Vallois of GameLoft, a game developer. Vallois talks about how they have been developing natively on Intel architecture and shows an example of a game they’ve written running on a Bay Trail T-based platform. The tools, techniques, and contributions that Intel have with Android are also being applied to Chrome OS. Fisher brings Sundar Pichai from Google on to the stage. Pichai is responsible for both Android and Chrome OS, and he talks about the momentum he’s seeing on both platforms.

Fisher says that Intel believes HTML5 to be an ideal mechanism for crossing platform boundaries, and so Intel is announcing a new version of their XDK for HTML5 development. This leads into a demonstration of using the Intel XDK (which stands for “cross-platform development kit”) to build a custom application that runs across multiple platforms. With that, he concludes the general session for day 2.

