iSCSI

Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will go away entirely.
  • YAOFC (Yet Another OpenFlow Controller), interesting only because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.
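
As a quick companion to that Heartbleed item, here is a rough triage sketch in Python: it simply checks whether the local OpenSSL version string falls in the range that shipped the bug (1.0.1 through 1.0.1f). This is only a first pass and not a substitute for the linked article; many distributions backport the fix without changing the version string, so treat a match as “needs further checking” rather than proof of vulnerability.

    # Rough Heartbleed triage: flag OpenSSL builds whose version string falls in
    # the affected range (1.0.1 through 1.0.1f). Version alone is not conclusive,
    # since distros often backport fixes without bumping the version number.
    import re
    import subprocess

    VULNERABLE_LETTERS = set("abcdef")  # 1.0.1a through 1.0.1f

    def openssl_version_string():
        # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
        return subprocess.check_output(["openssl", "version"]).decode()

    def possibly_heartbleed_vulnerable(version_output):
        match = re.search(r"OpenSSL\s+(\d+)\.(\d+)\.(\d+)([a-z]?)", version_output)
        if not match:
            return False  # could not parse; investigate manually
        major, minor, fix, letter = match.groups()
        if (major, minor, fix) != ("1", "0", "1"):
            return False
        # plain 1.0.1 (no letter) and 1.0.1a-f are the affected releases
        return letter == "" or letter in VULNERABLE_LETTERS

    if __name__ == "__main__":
        output = openssl_version_string()
        print(output.strip())
        print("Possibly vulnerable:", possibly_heartbleed_vulnerable(output))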

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). You might also check out Cloudbase.it, which offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.) Once you have an image built, see the short upload sketch at the end of this list.
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, as evidenced by this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources; you’re bound to find something useful there.
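
One more note on the Windows image item above: once the image itself is built, getting it into Glance is usually the easy part. Here is a minimal sketch of what that upload might look like; the image name and file path are hypothetical, and the exact flags can vary between python-glanceclient releases, so double-check them against your client’s help output.

    # Minimal sketch of pushing a finished Windows image into Glance by shelling
    # out to the glance CLI. Name and path are hypothetical; verify the flags
    # against your python-glanceclient version before relying on this.
    import subprocess

    def upload_windows_image(name="win2008r2-sp1", path="win2008r2.qcow2"):
        subprocess.check_call([
            "glance", "image-create",
            "--name", name,
            "--disk-format", "qcow2",      # matches how the image was built
            "--container-format", "bare",
            "--file", path,
        ])

    if __name__ == "__main__":
        upload_windows_image()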

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was years ago? It was a solid solution back then, but these days it feels a bit dated to me.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round arrived around 2006, when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later) recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out by discussing the building blocks of Neutron, then goes on to cover building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here are a couple of quick walk-throughs on installing OpenStack (first one, second one). They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula.org site, so keep that in mind as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications

Storage

Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I want to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


Welcome to Technology Short Take #32, the latest installment in my irregularly-published series of link collections, thoughts, rants, raves, and miscellaneous information. I try to keep the information linked to data center technologies like networking, storage, virtualization, and the like, but occasionally other items slip through. I hope you find something useful.

Networking

  • Ranga Maddipudi (@vCloudNetSec on Twitter) has put together two blog posts on vCloud Networking and Security’s App Firewall (part 1 and part 2). These two posts are detailed, hands-on, step-by-step guides to using the vCNS App firewall—good stuff if you aren’t familiar with the product or haven’t had the opportunity to really use it.
  • The sentiment behind this post isn’t unique to networking (or networking engineers), but that was the original audience so I’m including it in this section. Nick Buraglio climbs on his SDN soapbox to tell networking professionals that changes in the technology field are part of life—but then provides some specific examples of how this has happened in the past. I particularly appreciated the latter part, as it helps people relate to the fact that they have undergone notable technology transitions in the past but probably just don’t realize it. As I said, this doesn’t just apply to networking folks, but to everyone in IT. Good post, Nick.
  • Some good advice here on scaling/sizing VXLAN in VMware deployments (as well as some useful background information to help explain the advice).
  • Jason Edelman goes on a thought journey connecting some dots around network APIs, abstractions, and consumption models. I’ll let you read his post for all the details, but I do agree that it is important for the networking industry to converge on a consistent set of abstractions. Jason and I disagree on whether OpenStack Networking (formerly Quantum) should be the basis here; he says it shouldn’t be (it’s not well-known in the enterprise), while I say it should be (it already represents work created collaboratively by multiple vendors and allows for different back-end implementations).
  • Need a reasonable introduction to OpenFlow? This post fits the bill, and the author takes care to define OpenFlow as accurately and precisely as possible.
  • SDN, NFV—what’s the difference? This post does a reasonable job of explaining the differences (and the relationship) between SDN and NFV.

Servers/Hardware

  • Chris Wahl provides a quick overview of the HP Moonshot servers, HP’s new ARM-based offerings. I think that Chris may have accidentally overlooked the fact that these servers are not x86-based; therefore, a hypervisor such as vSphere is not supported. Linux distributions that offer ARM support—like Ubuntu, RHEL, and SuSE—are supported, however. The target market for this is massively parallel workloads that will benefit from having many different cores available. It will be interesting to see how the support of a “Tier 1” hardware vendor like HP affects the adoption of ARM in the enterprise.

Security

  • Ivan Pepelnjak talks about a demonstration of an attack based on VM BPDU spoofing. In vSphere 5.1, VMware addressed this potential issue with a feature called BPDU Filter. Check out how to configure BPDU Filter here.

Cloud Computing/Cloud Management

  • Check out this post for some vCloud Director and RHEL 6.x interoperability issues.
  • Nick Hardiman has a good write-up on the anatomy of an AWS CloudFormation template.
  • If you missed the OpenStack Summit in Portland, Cody Bunch has a reasonable collection of Summit summary posts here (as well as materials for his hands-on workshops here). I was also there, and I have some session live blogs available for your pleasure.
  • We’ve probably all heard the “pets vs. cattle” argument applied to virtual machines in a cloud computing environment, but Josh McKenty of Piston Cloud Computing asks whether it is now time to apply that thinking to the physical hosts as well. Considering that the IT industry still seems to be struggling with applying this line of thinking to virtual systems, I suspect it might be a while before it applies to physical servers. However, Josh’s arguments are valid, and definitely worth considering.
  • I have to give Rob Hirschfeld some credit for—as a member of the OpenStack Board—acknowledging that, in his words, “we’ve created such a love fest for OpenStack that I fear we are drinking our own kool aide.” Open, honest, transparent dealings and self-assessments are critically important for a project like OpenStack to succeed, so kudos to Rob for posting a list of some of the challenges facing the project as adoption, visibility, and development accelerate.

Operating Systems/Applications

Nothing this time around, but I’ll stay alert for items to add next time.

Storage

  • Nigel Poulton tackles the question of whether ASIC (application-specific integrated circuit) use in storage arrays elongates the engineering cycles needed to add new features. This “double-edged sword” argument is present in networking as well, but this is the first time I can recall seeing the question asked about modern storage arrays. While Nigel’s article specifically refers to the 3PAR ASIC and its relationship to “flash as cache” functionality, the broader question still stands: at what point do the drawbacks of ASICs begin to outweigh the benefits?
  • Quite some time ago I pointed readers to a post about Target Driven Zoning from Erik Smith at EMC. Erik recently announced that TDZ works after a successful test run in a lab. Awesome—here’s hoping the vendors involved will push this into the market.
  • Using iSER (iSCSI Extensions for RDMA) to accelerate iSCSI traffic seems to offer some pretty promising storage improvements (see this article), but I can’t help but feel like this is a really complex solution that may not offer a great deal of value moving forward. Is it just me?

Virtualization

  • Kevin Barrass has a blog post on the VMware Community site that shows you how to create VXLAN segments and then use Wireshark to decode and view the VXLAN traffic, all using VMware Workstation.
  • Andre Leibovici explains how Horizon View Multi-VLAN works and how to configure it.
  • Looking for a good list of virtualization and cloud podcasts? Look no further.
  • Need Visio stencils for VMware? Look no further.
  • It doesn’t look like it has changed much from previous versions, but nevertheless some people might find it useful: a “how to” on virtualization with KVM on CentOS 6.4.
  • Captain KVM (cute name, a take-off of Captain Caveman for those who didn’t catch it) has a couple of posts on maximizing 10Gb Ethernet on KVM and RHEV (the KVM post is here, the RHEV post is here). I’m not sure that I agree with his description of LACP bonds (“2 10GbE links become a single 20GbE link”), since any given flow in a LACP configuration can still only use 1 link out of the bond. It’s more accurate to say that aggregate bandwidth increases, but that’s a relatively minor nit overall. (See the short sketch after this list for an illustration of why a single flow stays on one link.)
  • Ben Armstrong has a write-up on how to install Hyper-V’s integration components when the VM is offline.
  • What are the differences between QuickPrep and Sysprep? Jason Boche’s got you covered.
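
To illustrate the LACP point from the Captain KVM item above: hash-based load balancing pins each flow to a single member link, so one flow never sees more than one link’s worth of bandwidth. The sketch below is deliberately simplified and the hash is hypothetical (real bonding drivers use configurable L2/L3/L4 policies), but the one-flow-one-link outcome is the same.

    # Simplified illustration of why a LACP bond means "more aggregate bandwidth,"
    # not "one fat link": each flow's headers are hashed and the flow is pinned to
    # a single member link for its lifetime.
    import zlib

    MEMBER_LINKS = ["10gbe-0", "10gbe-1"]

    def link_for_flow(src_ip, dst_ip, src_port, dst_port):
        # Hash the flow's identifying headers and map the result to one member link.
        key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
        return MEMBER_LINKS[zlib.crc32(key) % len(MEMBER_LINKS)]

    # Every packet of this one iSCSI flow lands on the same 10GbE link...
    print(link_for_flow("10.0.0.5", "10.0.0.9", 49152, 3260))
    # ...while a different flow may (or may not) hash to the other link.
    print(link_for_flow("10.0.0.6", "10.0.0.9", 49153, 3260))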

I suppose that’s enough information for now. As always, courteous comments are welcome, so feel free to add your thoughts in the comments below. Thanks for reading!


Welcome to Technology Short Take #31, my irregularly published series that takes a look at links, posts, articles, and thoughts from around the web related to core data center technologies. I hope that you find something useful!

Networking

  • Umair Hoodbhoy speculates in this post that the inclusion of Cisco’s ONE Controller in the recently-announced “Daylight” effort could mean the end for Big Switch’s Floodlight. (Umair’s play on words—”in Daylight there is no need for Floodlights”—is cute.)
  • Of course, Big Switch recently moved to “diversify,” if you will, away from just Floodlight with the introduction of Switch Light. As usual, Brent Salisbury has an excellent write-up on Switch Light, so I recommend reading his post. Switch Light seems like a good idea—more competition is always good, isn’t that what people say?—but I wonder how much cooperation Big Switch will get from the major networking vendors with regards to OpenFlow interoperability now that Big Switch is competing even more directly with them via Switch Light.
  • I think I might have mentioned this before (sorry if so), but here’s a good write-up on using the Edge Gateway CLI for monitoring and troubleshooting. Nice.
  • Greg Ferro examines a potential SDN use case (an OpenFlow use case) in the form of enterprise firewall migrations.
  • Just getting started in the networking field? Last year, Brent Salisbury put together a couple of great posts that help “refresh the basics” of networking. Part 1 covers Ethernet, IP, and TCP headers in Wireshark captures; part 2 pulls that together to show how the headers encapsulate in the OSI stack. If you’re not already familiar with this information, this is good reading.

Servers/Hardware

Nothing this time around, but I’ll stay alert for information I can include in the next Technology Short Take!

Security

  • Mounting guest disk images on the host? That’s a no-no from a security perspective—see here to learn why.
  • Mike Foley shared recently that the release candidate of the vSphere 5 Security Hardening Guide has been released. Check it out here.

Cloud Computing/Cloud Management

  • I haven’t had the chance to actually try it out myself, but Blueprint looks interesting. As the website describes it, it’s designed to “reverse engineer” servers so that you can migrate them into a configuration management system like Chef or Puppet.
  • Looking for a decent high-level overview of OpenStack and how it works? Check out this article titled “In a nutshell: How OpenStack works”. (As an aside, I think it’s awesome how Ken Pepple’s diagrams show up in all sorts of places. One day I hope my material proves as useful to folks.)
  • If you use Puppet for configuration management and want to deploy GlusterFS, be sure to check out this Puppet Forge module. I’ve tested it and it works as advertised.
  • This is an older article (published in May of last year), and it’s a bit on the lengthy side, but I like the tack the author uses. He describes cloud as the synthesis of many different forms of innovation within IT, pulling together things like open source, virtualization, distributed programming, NoSQL, DevOps/NoOps, distributed teams, dynamic languages, and Big Data (among others). He then goes on to provide examples of how organizations building or leveraging clouds are synthesizing these various independent technological innovations together. If you have a few minutes (as I said, it’s a bit on the lengthy side), I’d recommend reading it.

Operating Systems/Applications

  • This series is a bit older, but an interesting one nevertheless. Brian McClain, who was one of the presenters in a Cloud Foundry/BOSH session I liveblogged at VMworld 2012, has his own personal blog and posted a series of articles on using BOSH with vSphere. I hadn’t really considered how one might use BOSH for deploying (and managing) multi-VM applications on vSphere, but Brian provides some practical examples. Part 1 of the series is here, followed by part 2, part 3, part 4, and part 5.
  • Like using Markdown on OS X? You might find these handy.
  • Ah, the good old days of DOS…reborn as FreeDOS.
  • Go ahead, read up on YAML. You know you want to. Well, YAML is used in both Hiera (can be used with Puppet) and BOSH, after all.
  • Here’s another interesting tool that I haven’t had the opportunity to actually test myself. Oz looks like it could be quite useful—especially in virtualized/cloud computing environments—but I’m struggling to determine why I should use Oz instead of OS-specific mechanisms (like a kickstart file). If anyone has used Oz and can shed some light on this question, I’d appreciate it.
  • You may have heard that I recently switched from TextMate to BBEdit as my default OS X text editor (and therefore the tool whereby I do most of my content generation). As part of the switch, I found this to be helpful. (I might post a separate entry about the switch, if enough people seem interested in reading about it.)

Storage

Virtualization

That’s it for this time. I have plenty more links I wanted to share, but I figured I’d better not let this post get any longer. As always, courteous comments are welcome, so I invite you to participate in the conversation by adding your thoughts below.


Welcome to Technology Short Take #23, another collection of links and thoughts related to data center technologies like networking, storage, security, cloud computing, and virtualization. As usual, we have a fairly wide-ranging collection of items this time around. Enjoy!

Networking

  • A couple of days ago I learned that there are a couple open source implementations of LISP (Locator/ID Separation Protocol). There’s OpenLISP, which runs on FreeBSD, and there’s also a project called LISPmob that brings LISP to Linux. From what I can tell, LISPmob appears to be a bit more focused on the endpoint than OpenLISP.
  • In an earlier post on STT, I mentioned that STT’s re-use of the TCP header structure could cause problems with intermediate devices. It looks like someone has figured out how to allow STT through a Cisco ASA firewall; the configuration is here.
  • Jose Barreto posted a nice breakdown of SMB Multichannel, a bandwidth-enhancing feature of SMB 3.0 that will be included in Windows Server 2012. It is, unsurprisingly, only supported between two SMB 3.0-capable endpoints (which, at this time, means two Windows Server 2012 hosts). Hopefully additional vendors will adopt SMB 3.0 as a network storage protocol. Just don’t call it CIFS!
  • Reading this article, you might deduce that Ivan really likes overlay/tunneling protocols. I am, of course, far from a networking expert, but I do have to ask: at what point does it become necessary (if ever) to move some of the intelligence “deeper” into the stack? Networking experts everywhere advocate the “complex edge-simple core” design, but does it ever make sense to move certain parts of the edge’s complexity into the core? Do we hamper innovation by insisting that the core always remain simple? As I said, I’m not an expert, so perhaps these are stupid questions.
  • Massimo Re Ferre posted a good article on a typical VXLAN use case. Read this if you’re looking for a more concrete example of how VXLAN could be used in a typical enterprise data center.
  • Bruce Davie of Nicira helps explain the difference between VPNs and network virtualization; this is a nice companion article to his colleague’s post (which Bruce helped to author) on the difference between network virtualization and software-defined networking (SDN).
  • The folks at Nicira also collaborated on this post regarding software overhead of tunneling. The results clearly favor STT (which was designed to take advantage of NIC offloading) over GRE, but the authors do admit that as “GRE awareness” is added to the cards that protocol’s performance will improve.
  • Oh, and while we’re on the topic of SDN…you might have noticed that VMware has taken to using the term “software-defined” to describe many of the services that vSphere (and related products) provide. This includes the use of software-defined networking (SDN) to describe the functionality of vSwitches, distributed vSwitches, vShield, and other features. Personally, I think that the term software-based networking (SBN) is far more applicable than SDN to what VMware does. Is it just me?
  • Brad Hedlund wrote this post a few months ago, but I’m just now getting around to commenting about it. The gist of the article—forgive me if I munge it too much, Brad—is that the use of open source software components might dramatically change the shape/way/means in which networking protocols and standards are created and utilized. If two components are communicating over the network via open source components, is some sort of networking standard needed to avoid being “proprietary”? It’s an interesting thought, and goes to show the power of open source on the IT industry. Great post, Brad.
  • One more mention of OpenFlow/SDN: it’s great technology (and I’m excited about the possibilities that it creates), but it’s not a silver bullet for scalability.

Security

  • I came across this interesting post on a security attack based on VMDKs. It’s quite an interesting read, even if the probability of being able to actually leverage this attack vector is fairly low (as I understand it).

Storage

  • Chris Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not mistaken, I believe the VMkernel selection is based upon which VMkernel interface is listed in the first routing table entry for the subnet. This is something about which I wrote back in 2008, but I’m glad to see Chris bringing it to light again. (There’s a small sketch of this first-match behavior after this list.)
  • George Crump published this article on using DCB to enhance iSCSI. (Note: The article is quite favorable to Dell, and George discloses an affiliation with Dell at the end of the article.) One thing I did want to point out is that—if I recall correctly—the 802.1Qbb standard for Priority Flow Control only defines a single “no drop” class of service (CoS). Normally that CoS is assigned to FCoE traffic, but in an environment without FCoE you could assign it to iSCSI. In an environment with both, that could be a potential problem, as I see it. Feel free to correct me in the comments if my understanding is incorrect.
  • Microsoft is introducing data deduplication in Windows Server 2012, and here is a good post providing an introduction to Microsoft’s deduplication implementation.
  • SANRAD VXL looks interesting—anyone have any experience with it? Or more detailed technical information?
  • I really enjoyed Scott Drummonds’ recent storage performance analysis post. He goes pretty deep into some storage concepts and provides real-world, relevant information and recommendations. Good stuff.
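
Related to the NFS item above, the VMkernel selection behavior is easier to see as a first-match lookup over an ordered routing table. The sketch below is purely illustrative; the route entries and vmknic names are hypothetical, and real ESX routing involves more than this, but it shows why adding a second VMkernel port on the same subnet doesn’t balance the traffic.

    # Illustration of "first matching routing-table entry wins" VMkernel selection.
    # Ordering, not load balancing, decides which vmknic carries traffic to a
    # destination on a shared subnet.
    import ipaddress

    # Ordered "routing table": (destination network, VMkernel interface)
    ROUTE_TABLE = [
        ("192.168.50.0/24", "vmk1"),
        ("192.168.50.0/24", "vmk2"),  # never selected for this subnet
        ("0.0.0.0/0", "vmk0"),        # default route
    ]

    def vmkernel_for(target_ip):
        # First matching entry wins.
        addr = ipaddress.ip_address(target_ip)
        for network, vmk in ROUTE_TABLE:
            if addr in ipaddress.ip_network(network):
                return vmk
        return None

    print(vmkernel_for("192.168.50.25"))  # -> vmk1, no matter how many vmknics share the subnet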

Cloud Computing/Cloud Management

  • After moving CloudStack to the Apache Software Foundation, Citrix published this discourse on “open washing” and provides a set of questions to determine the “openness” of software projects with which you may become involved. While the article is clearly structured to favor Citrix and CloudStack, the underlying point—to understand exactly what “open source” means to your vendors—is valid and worth consideration.
  • Per the AWS blog, you can now export EC2 instances out of Amazon and into another environment, including VMware, Hyper-V, and Xen environments. I guess this kind of puts a dent in the whole “Hotel California” marketing play that some vendors have been using to describe Amazon.
  • Unless you’ve been hiding under a rock for the past few weeks, you’ve most likely heard about Nick Weaver’s Razor project. (If you haven’t heard about it, here’s Nick’s blog post on it.) To help with the adoption/use of Razor, Nick also recently announced an overview of the Razor API.

Virtualization

  • Frank Denneman continues to do a great job writing solid technical articles. The latest article to catch my eye (and I’m sure that I missed some) was this post on combining affinity rule types.
  • This is an interesting post on a vSphere 5 networking bug affecting iSCSI that was fixed in vSphere 5.0 Update 1.
  • Make a note of this VMware KB article regarding UDP traffic on Linux guests using VMXNET3; the workaround today is using E1000 instead.
  • This post is actually over a year old, but I just came across it: Luc Dekens posted a PowerCLI script that allows a user to find the maximum IOPS values over the last 5 minutes for a number of VMs. That’s handy. (BTW, I have fixed the error that kept me from seeing the post when it was first published—I’ve now subscribed to Luc’s blog.)
  • Want to use a Debian server to provide NFS for your VMware environment? Here is some information that might prove helpful.
  • Jeremy Waldrop of Varrow provides some information on creating a custom installation ISO for ESXi 5, Nexus 1000V, and PowerPath/VE. Cool!
  • Cormac Hogan continues to pump out some very useful storage-focused articles on the official VMware vSphere blog. For example, both the VMFS locking article and the article on extending an EagerZeroedThick disk were great posts. I sincerely hope that Cormac keeps up the great work.
  • Thanks to this Project Kronos page, I’ve been able to successfully set up XCP on Ubuntu Server 12.04 LTS. Here’s hoping it gets easier in future releases.
  • Chris Colotti takes on some vCloud Director “challenges”, mostly surrounding vShield Edge and vCloud Director’s reliance on vShield Edge for specific networking configurations. While I do agree with many of Chris’ points, I personally would disagree that using vSphere HA to protect vShield Edge is an acceptable configuration. I was also unable to find any articles that describe how to use vSphere FT to protect the deployed vShield appliances. Can anyone point out one or more of those articles? (Put them in the comments.)
  • Want to use Puppet to automate the deployment of vCenter Server? See here.

I guess it’s time to wrap up now, lest my “short take” get even longer than it already is! Thanks for reading this far, and I hope that I’ve shared something useful with you. Feel free to speak up in the comments if you have questions, thoughts, or clarifications.


I’ve had these FCoE-related articles sitting around in my Yojimbo database for a while, but I’m only now getting around to doing something with them. There’s some great information in these posts, but be sure to check the comments to the posts as well—there’s some equally good information to be found there as well.

FCoE Multi-hop: Why wait?
Re-examining FCoE and iSCSI Pros and Cons
FCoE vs. iSCSI: The Cagefight!
Gartner on FCoE. Whoa There, Sparky
8Gb Fibre Channel or 10Gb Ethernet w/ FCoE?


I was browsing through an EMC technical document titled “EMC CLARiiON Integration with VMware ESX Server” (download it here) a little while ago and I came across a phrase in the document that caught my attention:

“VMware ESX/ESXi support both Fibre Channel and iSCSI storage. However, VMware and EMC do not support connecting VMware ESX/ESXi servers to CLARiiON Fibre Channel and iSCSI devices on the same array simultaneously.”

What? No Fibre Channel and iSCSI from the same array to a VMware ESX/ESXi host simultaneously? That piqued my curiosity, so I contacted a few people within EMC to question the veracity of that statement. It turns out that the answer is more complicated than it might seem at first glance.

For those of you who aren’t interested in the deep technical details, here’s the short explanation behind this behavior:

  • VMware fully supports the use of both Fibre Channel and iSCSI from the same array to the same VMware ESX/ESXi host simultaneously.
  • VMware does not support presenting the same LUN via both protocols concurrently to the same host. (I qualified this directly with VMware.)
  • For a Celerra, you can use both Fibre Channel (via the CLARiiON side of the array) and iSCSI (via the Celerra side of the array) simultaneously. This is a fully supported configuration.
  • A CLARiiON array can easily present the same LUN via both Fibre Channel and iSCSI, but then VMware wouldn’t support it (see earlier bullet).
  • With a CLARiiON array, it is possible to present some LUNs via Fibre Channel and some LUNs via iSCSI to the same VMware ESX/ESXi host (i.e., LUN A via Fibre Channel and LUN B via iSCSI), but EMC will only support it if you file an RPQ. Without an RPQ, it’s an unsupported configuration. An RPQ, by the way, is a request to qualify a certain configuration for support.

I’m confident that some other array vendors out there will be very quick to jump on this post and harp on this limitation until the cows come home. I would just ask this question: is it really as big of a limitation as it seems? I’ll come back to that question in a moment.

With the short explanation in mind, here are the more in-depth details. If you like the longer, more technical explanation, then read on!

From EMC’s side, the root of the restriction about using both Fibre Channel and iSCSI devices on the same array simultaneously stems from the interaction of host registration and storage groups.

Host registration is a requirement in the CLARiiON world. In order to present storage to a host from a CLARiiON array, you must first register the host’s initiators with the array in Navisphere. Once the host has been registered, then you can proceed with presenting storage to that host. In theory the CLARiiON could operate without registering hosts and initiators, but EMC chose to require registration. EMC made this choice in order to help simplify host management.

Requiring host registration makes the CLARiiON a bit different from some of the other storage arrays on the market. It’s not better or worse—just different. (Remember, pros and cons come from every technology decision.)

If you’re like me, you’re probably wondering at this point how requiring host registration simplifies anything. Instead of having to manage multiple paths, multiple initiators, and individual hosts every time you want to present storage to a host, you only need to register the host—and all of its initiators—and then you can refer to that same object (the host) over and over again as needed. Yes, host registration does mean a bit more work up front, but the idea is that it will save some work down the road. I guess you can think of host registration kind of like defining aliases in your Fibre Channel zoning configuration: it’s a bit more work up front, but it simplifies things later down the road. If you didn’t create device aliases in your Fibre Channel switch, you’d end up having to re-enter Fibre Channel WWPNs multiple times. You create the aliases so that it’s easier later. The same applies to host registration. Again, it’s a matter of choices.

One might also say that registration is a security measure, albeit a weak one. Rather than allow just any Fibre Channel-attached or iSCSI-attached host to see storage, the array requires that it know about the host (via host registration) in order to present storage to the host. This provides an additional layer of security to ensure that only authorized hosts are presented storage from the array.

Now you have a fairly decent idea of why host registration is necessary. So how does host registration occur? Host registration can occur either manually or automatically. Starting with version 4.0, both VMware ESX and VMware ESXi will automatically register with a CLARiiON array running any recent version of FLARE (ESX 3i version 3.5 also supports this form of push registration). FLARE release 28 and earlier will show these hosts as “Manually registered, unmanaged”; starting with FLARE 29, these hosts are listed as “Manually registered, managed”. In either case, the registration occurs automatically. If the host is Fibre Channel-attached, then the Fibre Channel initiators will be included in the automatic registration. The same goes for iSCSI initiators. Normally, this is a good thing because it saves the administrator the extra steps of registering the host with the storage array. (Also, because VMware ESX/ESXi hosts register automatically, there is no need to install the Navisphere Agent.)

In this case, though, the automatic registration causes a problem. Why? This goes back to the second item I said I needed to discuss: storage groups. Specifically, storage groups have two characteristics that come into play here:

  1. First, any given host—not just VMware ESX/ESXi hosts, but all types of hosts—can only be connected to a single storage group at any given time.
  2. Second, while the CLARiiON can present Fibre Channel LUNs and iSCSI LUNs simultaneously (including presenting the same LUN via both protocols simultaneously), there is no way within a single storage group to specify which LUNs should be accessed via Fibre Channel and which LUNs should be accessed via iSCSI. That kind of per-LUN control matters because VMware won’t support accessing the same LUN via both protocols at the same time (see the earlier VMware support statement).

Do you see how all the pieces come together? The only way to control which LUNs should be presented via which protocol is to use multiple storage groups—but a host can only be in a single storage group at a time. With only a single host object for any given VMware ESX/ESXi host, that host can only see either Fibre Channel LUNs (by being in a storage group containing Fibre Channel LUNs) or iSCSI LUNs (by being in a storage group containing iSCSI LUNs), but not both. Hence, the statement in the CLARiiON document I referenced in the very beginning of this blog post that outlines using either Fibre Channel or iSCSI but not both. This behavior is required to enforce the single-protocol LUN access required by VMware.

As with all things, there is a workaround. Because it is a workaround, an RPQ is necessary to get full support.

To work around this problem, you’ll need to ignore the automatic host registration (or disable the automatic host registration) and instead create two manually registered “pseudo-hosts”: one with the Fibre Channel initiators and one with the iSCSI initiators. These “pseudo-hosts” will need fake IP addresses (if they both use the same IP address, Navisphere will treat them as the same host, thus defeating the purpose of the workaround). Put the Fibre Channel initiators into the Fibre Channel storage group(s), and put the iSCSI initiators into the iSCSI storage group(s). Each “pseudo-host” will be able to see LUNs presented to that storage group and therefore would see both Fibre Channel and iSCSI LUNs at the same time. And, as required by VMware, any given LUN would be accessed only via Fibre Channel or iSCSI but not both. Remember that you need to file an RPQ in order to get support on this configuration.
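
To make the constraint and the workaround a bit more concrete, here is a toy model in Python. This is not Navisphere automation or an EMC API, just an illustration of the rules described above with hypothetical names throughout: a registered host can join only one storage group, so splitting the initiators into two pseudo-host registrations is what allows Fibre Channel and iSCSI LUNs to reach the same physical host.

    # Toy model of the constraint and workaround described above. A registered
    # host joins at most one storage group; two "pseudo-hosts" can each join a
    # protocol-specific group. All names are hypothetical.

    class StorageGroup:
        def __init__(self, name, luns):
            self.name, self.luns = name, luns

    class RegisteredHost:
        def __init__(self, name, initiators, ip):
            self.name, self.initiators, self.ip = name, initiators, ip
            self.storage_group = None          # a host joins at most ONE group

        def join(self, group):
            if self.storage_group is not None:
                raise RuntimeError(f"{self.name} is already in {self.storage_group.name}")
            self.storage_group = group

    fc_group    = StorageGroup("ESX_FC",    luns={"LUN_A": "fc"})
    iscsi_group = StorageGroup("ESX_iSCSI", luns={"LUN_B": "iscsi"})

    # Automatic registration: one host object holding BOTH initiator types.
    esx_auto = RegisteredHost("esx01", ["fc-wwpn-1", "iqn-esx01"], ip="10.1.1.10")
    esx_auto.join(fc_group)
    # esx_auto.join(iscsi_group)  # raises: can't be in two storage groups at once

    # Workaround: two manually registered pseudo-hosts with fake, distinct IPs.
    esx_fc    = RegisteredHost("esx01-fc",    ["fc-wwpn-1"], ip="1.1.1.1")
    esx_iscsi = RegisteredHost("esx01-iscsi", ["iqn-esx01"], ip="1.1.1.2")
    esx_fc.join(fc_group)
    esx_iscsi.join(iscsi_group)   # works: each LUN is still single-protocol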

For VMware ESX/ESXi 4.0 hosts (and ESX 3i version 3.5 hosts), you can disable automatic registration using the Disk.EnableNaviReg advanced configuration option. Setting this value to 0 disables the automatic registration with Navisphere. (Here are screenshots for VMware ESX 3i and VMware ESX/ESXi 4.) If you disable the automatic registration, then you only need to manually register the Fibre Channel and iSCSI initiators as separate “pseudo-hosts” and you’re ready to go.
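
For the automatic-registration piece, here is a minimal sketch of what disabling it might look like from a classic ESX service console, assuming your release exposes the setting to esxcfg-advcfg at /Disk/EnableNaviReg. On ESXi, or if the path differs on your build, simply set Disk.EnableNaviReg to 0 through the vSphere Client’s advanced settings instead (as shown in the screenshots referenced above).

    # Minimal sketch, assuming a classic ESX service console where esxcfg-advcfg
    # exposes the advanced setting at /Disk/EnableNaviReg. Verify the path and
    # tool on your release before using anything like this.
    import subprocess

    def disable_navisphere_autoregistration():
        # -s sets a value; 0 disables automatic Navisphere registration
        subprocess.check_call(["esxcfg-advcfg", "-s", "0", "/Disk/EnableNaviReg"])

    def current_setting():
        # -g gets the current value, useful to confirm the change
        return subprocess.check_output(["esxcfg-advcfg", "-g", "/Disk/EnableNaviReg"]).decode()

    if __name__ == "__main__":
        disable_navisphere_autoregistration()
        print(current_setting())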

Let me reiterate again that if you are presenting iSCSI LUNs via the Celerra and not the CLARiiON, none of this applies. Presenting Fibre Channel LUNs via the CLARiiON and iSCSI LUNs via the Celerra to the same VMware ESX/ESXi host is fine. This workaround that I’ve described only applies when you want to present some LUNs via Fibre Channel and some LUNs via iSCSI from a CLARiiON to a single VMware ESX/ESXi host.

Earlier you’ll recall that I asked this question: is this really a limitation? There are a couple of viewpoints:

  • One viewpoint states there is no need for both Fibre Channel and iSCSI connectivity to the same array. Since you already have Fibre Channel connectivity to the array, what’s the point in using iSCSI? Conversely, if you already have iSCSI connectivity to an array, why invest in establishing Fibre Channel connectivity? Since you can’t use it for failover (that would violate the VMware support position), running another block protocol against the same array and same sets of disks doesn’t add a great deal of value.
  • A second viewpoint argues that the ability to provide a differentiation of service based on the different performance characteristics of Fibre Channel and iSCSI (and NFS, but we’re focusing on block protocols for this discussion) is valuable, and thus the need to be able to easily present LUNs via either protocol from the same array to the same host is a worthwhile function. There are a number of potential use cases here—test/development environments, Tier 2 applications, varying SLAs, etc. This is especially true if you are using different disk pools (fast Fibre Channel drives or EFDs vs. slower SATA drives) on the same array.

I can see both sides of the coin. Personally, I tend to side more with the second viewpoint and would prefer to see the CLARiiON have the ability to easily present Fibre Channel and iSCSI to the same host, especially when multiple disk pools are involved. I think that CLARiiON engineering is now evaluating this possibility; as more information emerges, I’ll be sure to keep you posted.

Courteous and professional comments, clarifications, or corrections are always welcome!


I had a reader contact me with a couple of questions, one of which I felt warranted a blog post. Paraphrased, the question was this: How do I make IP-based storage work with VMware vSphere on Cisco UCS?

At first glance, you might look at this question and scoff. Remember, though, that Cisco UCS does—at this time—have a few limitations that make this a bit more complicated than it might seem. Specifically:

  • Recall that the UCS 6100XP fabric interconnects only have two kinds of ports: server ports and uplink ports.
  • Server ports are southbound, meaning they can only connect to the I/O Modules running in the back of the blade chassis.
  • Uplink ports are northbound, meaning they can only connect to an upstream switch. They cannot be used to connect directly to another end host or directly to storage.

With this in mind, then, how does one connect IP-based storage to a Cisco UCS? In these scenarios, you must have another set of Ethernet switches between the 6100XP fabric interconnects and the target storage array. Further, since the 6100XP fabric interconnects require 10GbE uplinks and do not—at this time—offer any 1GbE uplink functionality, you need to have the right switches between the 6100XP fabric interconnects and the target storage array.

Naturally, the Nexus 5000 fits the bill quite nicely. You can use a pair of Nexus 5000 switches between the UCS 6100XP interconnects and the storage array. Dual-connect the 6100XP interconnects to the Nexus 5000 switches for redundancy and active-active data connections, and dual-connect the target storage array to the Nexus 5000 switches for redundancy and (depending upon the array) active-active data connections. It would look something like this:

[Diagram: IP-based storage with Cisco UCS. The UCS 6100XP fabric interconnects uplink to a pair of Nexus 5000 switches, which in turn connect to the storage array.]

From the VMware side of the house, since you’re using 10GbE end-to-end, it’s very unlikely that you’ll need to worry about bandwidth; that eliminates any concerns over multiple VMkernel ports on multiple subnets or using multiple NFS targets so as to be able to use link aggregation. (I’m not entirely sure you could use link aggregation with the 6100XP interconnects anyway. Anyone?) However, since you are talking Cisco UCS you’ll have only two 10GbE connections (unless you’re using the full width blade, which is unlikely). This means you’ll need to pay careful attention to the VMware vSwitch (or dvSwitch, or Nexus 1000V) configuration. In general, the recommendation in this sort of configuration is to place Service Console, VMotion, and IP-based storage traffic on one 10GbE uplink, place virtual machine traffic on the second 10GbE uplink, and use whatever mechanisms are available to preferentially specify which uplink should be used in the course of normal operation. This provides redundancy in the uplinks but some level of separation of traffic.
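
Purely as an illustration of that recommendation, here is the uplink layout expressed as data, with a quick sanity check that every traffic type prefers one uplink while keeping the other as standby. The port group and vmnic names are hypothetical, and the real configuration lives in the vSwitch/dvSwitch/Nexus 1000V teaming policy rather than in anything like this.

    # Illustrative only: the recommended traffic separation expressed as data.
    # Management/VMotion/IP storage prefer one 10GbE uplink, VM traffic prefers
    # the other, and each keeps the remaining uplink as standby for redundancy.
    UPLINKS = {"vmnic0", "vmnic1"}

    PORT_GROUPS = {
        "Service Console": {"active": "vmnic0", "standby": "vmnic1"},
        "VMotion":         {"active": "vmnic0", "standby": "vmnic1"},
        "IP Storage":      {"active": "vmnic0", "standby": "vmnic1"},
        "VM Traffic":      {"active": "vmnic1", "standby": "vmnic0"},
    }

    def validate(port_groups, uplinks):
        # Each traffic type should use both uplinks: one preferred, one standby.
        for name, policy in port_groups.items():
            assert {policy["active"], policy["standby"]} == uplinks, name
        print("Every traffic type keeps redundancy while preferring a single 10GbE uplink.")

    validate(PORT_GROUPS, UPLINKS)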

One quick side note: although I’m talking IP-based storage here, block-based storage fans need to remember that Cisco UCS does not—at this time—support northbound FCoE. That means that although you have FCoE support southbound, and FCoE support in the Nexus 5000, and possibly FCoE support in your storage arrays, you still can’t do end-to-end FCoE with Cisco UCS.

For those readers who are very familiar with Cisco UCS and Nexus, this will seem like a pretty simplistic post. However, we need to keep in mind that there are lots of readers out there who have not had the same level of exposure. Hopefully, this will help provide some guidance and food for thought.

(Of course, one could just buy a Vblock and not have to worry about putting all the pieces together…hey, can’t blame me for trying, right?)

Clarifications, questions, or suggestions are welcome in the comments below. Thanks!


Welcome to Virtualization Short Take #30, my irregularly posted collection of links and thoughts on virtualization. I hope you find something useful here!

  • I believe Jason Boche already mentioned this on his own blog (I couldn’t find a link) and also started this VMware Communities thread discussing the fact that the 8/6 patch breaks FT compatibility between ESX and ESXi hosts in the same cluster. This VMware KB article is now available with more information on the problem. What I’m hearing from VMware is that there is no short-term solution; the workaround is to use only ESX or only ESXi within a single cluster. (That said, I don’t recommend holding off on patching your hosts until the problem is fixed.)
  • And while we’re talking VMware FT, here’s a good document on VMware FT architecture and performance. (Eric Siebert’s Virtualization Pro blog post about VMware FT is really good, too.)
  • I’m also hearing reports that there are problems mixing ESX and ESXi in the same cluster when using host profiles. Theoretically, you should be able to use an ESX reference host and apply that to ESXi hosts, but in reality it’s not working so well.
  • If you’re using AppSpeed, you’ll need to manually turn off the AppSpeed sensor VMs in order to put ESX/ESXi hosts into Maintenance Mode. The sensor VM won’t VMotion off the host, so this prevents the host from entering Maintenance Mode.
  • Here’s another topic that I think has been mentioned elsewhere (looks like Duncan mentions it here), but SRM 1.0 Update 1 Patch 4 was released a couple of weeks ago and it includes a fix for customizing the IP addresses of Windows Server 2008 guest operating system instances.
  • Toward the end of August, VMware Infrastructure 3 support was added for NetApp MetroCluster (see this VMware KB article). Now, how about some VMware vSphere 4 support?
  • Most of you are aware by now (and if you aren’t aware, go buy a copy of my book so you will be aware) that you can use Storage VMotion to change virtual disks from thin provisioned to thick provisioned. The problem is this: the type of thick provisioned disk created when you do this via Storage VMotion is eagerzeroedthick, not zeroedthick. This means that it is not friendly to storage array thin provisioning! (See the sketch after this list for one way to clone such a disk back to zeroedthick.)
  • I’m still looking for a valid use case for this little trick, but it’s mentioned by both Duncan and Eric: the ability to present multiple cores per socket to a virtual machine. Duncan’s post is here; Eric’s post is here. As Eric points out, licensing is one potential use. Anyone have any other valid use cases?
  • Eric Sloof has a great post on dvSwitch caveats and best practices that is definitely worth reading.
  • Want to make linked clones work on vSphere? Tom Howarth points out in this post some information made available by William Lam. Both articles are worth a look.
  • Tom also posted some useful information on enabling firewall logging on VMware ESX hosts.
  • This post over on Aaron Sweemer’s blog was actually written by guest author John Blessing (aka @vTrooper on Twitter) and just goes to illustrate how difficult it can be to create a chargeback model.
  • Of course, the “Super iSCSI Friends” recently produced a multi-vendor post on using iSCSI with VMware vSphere, a great follow-up to the original multi-vendor VI3 post. Here’s Chad’s version of the multi-vendor vSphere and iSCSI post.
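
Following up on the Storage VMotion bullet above: if you do end up with an eagerzeroedthick VMDK and want a thin-provisioning-friendly zeroedthick copy, one option is to clone the disk into the desired format with vmkfstools and then repoint the VM at the new file. The paths below are hypothetical, the VM should be powered off first, and you should verify the available disk formats with vmkfstools on your particular ESX build.

    # Sketch of cloning an eagerzeroedthick VMDK back to zeroedthick with
    # vmkfstools. Paths are hypothetical; power the VM off first and repoint it
    # at the new disk afterward.
    import subprocess

    SRC = "/vmfs/volumes/datastore1/vm01/vm01.vmdk"
    DST = "/vmfs/volumes/datastore1/vm01/vm01-zeroedthick.vmdk"

    def clone_to_zeroedthick(src=SRC, dst=DST):
        subprocess.check_call([
            "vmkfstools",
            "-i", src,                 # clone/import the source disk
            "-d", "zeroedthick",       # target disk format
            dst,
        ])

    if __name__ == "__main__":
        clone_to_zeroedthick()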

That wraps it up for this time around. Thanks for reading, and feel free to submit any other useful or interesting links in the comments below.


I wanted to go ahead and get another issue of Virtualization Short Takes out the door before VMworld, as I suspect that I’ll be covered up both during and for some time after VMworld. So, here’s my latest collection of links and articles about virtualization, storage, and anything else I find interesting.

  • Chad Sakac brings up an important issue for EMC CLARiiON users also using vSphere and iSCSI; be sure to read the full post for all the details. Basically, this bug in the FLARE code puts us back to using multiple IP subnets to scale iSCSI traffic. Bummer. I imagine they’ll get it fixed up pretty quick, but until then it’s back to the old way of scaling IP-based storage traffic. Chad’s posts on VMware-storage integration (Part 1 and Part 2) are good reads as well.
  • Nick Triantos weighs in with a good post on how to configure ALUA support and Round Robin I/O in vSphere. This looks useful; too bad the old NetApp gear I have in the lab won’t run the latest Data ONTAP version, so I can’t test this myself. Oh, and you should also check out Nick’s post on the NetApp Collector and Analyzer for Virtual Environments, which looks like it might be a handy tool for sizing NetApp storage environments.
  • Duncan Epping points out a couple of issues related to VMFS block size in this post on snapshots and block size. Good find!
  • Ben Armstrong puts up a great post about competitive arguments. I have to say that I have a new respect for Ben after reading this post. He’d always presented himself very professionally, but his open approach to comparing virtualization products is very refreshing, and one that I wish more people would adopt. I’m particularly impressed that Ben quoted Proverbs 27:17 in his post.
  • Aaron Sweemer posted a newsletter from a co-worker on his site that has some great information. You should definitely have a look, I think you’ll find something useful there.
  • Rick Scherer posted the steps necessary to remove a rogue vCenter Chargeback plug-in. Useful, but I wish all plug-ins provided a mechanism like this.
  • Jason Nash brings to light a bug in Cisco Nexus 1000V when used in conjunction with CNAs. Be sure to have a look if this has any similarity to your environment. Like Jason, I have some Gen 1 Emulex CNAs so I may run into the same issue myself as I build out the Nexus hardware in the lab.
  • The Systems Engineer (no name provided) gives a handy one-line command to map ESX datastores to EMC CLARiiON LUNs. I’ll have to give this one a try once I get my CLARiiON up and running.
  • Somewhere along the way I picked up the URL to this VMware KB article about problems with iSCSI or NFS over an EtherChannel link. Hmmm, that looks interesting, but when you read the article it points out that the issue exists when you are using EtherChannel but the vSwitch is configured as “Route based on originating virtual port ID.” That’s a configuration mismatch—of course you’re going to have problems! Simply change the vSwitch to “Route based on ip hash” (the strongly recommended setting when using EtherChannel) and the problems go away.
  • Stevie Chambers (formerly of VMware, now with Cisco) posts about 10 technology advances since 2005. The article is mostly about the Intel Xeon 5500 CPUs and a couple other features specific to Cisco’s Unified Computing System (UCS); namely, the Palo adapter and the Catalina ASIC. While he wanders a bit, I think Stevie’s point is about how virtualization architects and operations staff need to understand the impact of these technologies and how they affect the virtualization solution—a useful point, indeed.
  • Paul Fazzone has a couple of great posts on the Cisco Nexus 1000V: first an article with an overview of VM network security with the Nexus 1000V, then a second article describing how the Nexus 1000V compares to multiple vSwitches. Both are good reads for people seeking a bit more information on deployment scenarios for the Nexus 1000V.
  • Computerworld posted this article about the 7 half-truths of virtualization. The underlying point behind all of these “half-truths” is that in order for an organization to really reap the benefits of virtualization, that organization needs to change, to adapt, and to grow with the virtualization initiative. If you just virtualize and don’t change anything else, your ROI will be limited at best. I particularly agree with #5: if you’re investigating VDI for short-term cost savings, you’re barking up the wrong tree.
  • This is kind of cool. I might put this on my home network.
  • I haven’t had the chance to talk with Arista yet, but I’m surprised that there hasn’t been more buzz around their announcement of vEOS. In fact, I had to hear about it (other than a very brief e-mail from Doug Gourlay) from a Cisco contact! How crazy is that? I suppose, as I mentioned on Twitter, that Arista is going to make a big push next week during VMworld 2009 in San Francisco.

That wraps up this edition of Virtualization Short Takes. Next week will be a busy week; look for lots of coverage from the conference in San Francisco as well as summaries of my vendor meetings (and there are lots of them!). Until then, take care!

