
Welcome to Technology Short Take #42, another installment in my ongoing series of irregularly published collections of news, items, thoughts, rants, raves, and tidbits from around the Internet, with a focus on data center-related technologies. Here’s hoping you find something useful!

Networking

  • Anthony Burke’s series on VMware NSX continues with part 5.
  • Aaron Rosen, a Neutron contributor, recently published a post about a Neutron extension called Allowed-Address-Pairs and how you can use it to create high availability instances using VRRP (via keepalived). Very cool stuff, in my opinion.
  • Bob McCouch has a post over at Network Computing (where I’ve recently started blogging as well—see my first post) discussing his view on how software-defined networking (SDN) will trickle down to small and mid-sized businesses. He makes comparisons among server virtualization, 10 Gigabit Ethernet, and SDN, and feels that in order for SDN to really hit this market it needs to be “not a user-facing feature, but rather a means to an end” (his words). I tend to agree—focusing on SDN is focusing on the mechanism, rather than focusing on the problems the mechanism can address.
  • Want or need to use multiple external networks in your OpenStack deployment? Lars Kellogg-Stedman shows you how in this post on multiple external networks with a single L3 agent.
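For those who want the gist of the allowed-address-pairs trick without clicking through, here’s my own rough sketch of the workflow. The port IDs and VIP below are hypothetical, and the neutron commands are echoed rather than executed so the sketch runs without an OpenStack endpoint:

```shell
# keepalived floats a VRRP VIP between two instances. By default, Neutron's
# anti-spoofing rules drop traffic for any address a port doesn't own, so the
# VIP has to be registered as an allowed address pair on both instance ports.
VIP=10.0.0.201
for PORT in primary-port-uuid backup-port-uuid; do
    echo neutron port-update "$PORT" \
        --allowed-address-pairs type=dict list=true ip_address="$VIP"
done
```

With that in place, keepalived can fail the VIP over between the instances and Neutron will pass the traffic either way.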

Servers/Hardware

  • There was some noise this past week about Cisco UCS moving into the top x86 blade server spot for North America in Q1 2014. Kevin Houston takes a moment to explore some ideas about why Cisco has been so successful in this post. I agree that Cisco had some innovative ideas in UCS—integrated management and server profiles come to mind—but my biggest beef with UCS right now is that it is still primarily a north/south (server-to-client) architecture in a world where east/west (server-to-server) traffic is becoming increasingly critical. Can UCS hold on in the face of a fundamental shift like that? I don’t know.

Security

  • Need to scramble some data on a block device? Check out this command. (I love the site. It reminds me that I still have so much yet to learn.)
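The linked command isn’t reproduced here, but the usual suspects for this job are dd and shred. Here’s a sketch that scrambles a scratch file instead of a real device—pointing of= at an actual /dev/sdX is destructive, so triple-check the target before you do:

```shell
# One pass of random data over the target (a scratch file here):
dd if=/dev/urandom of=scratch.img bs=1M count=4 status=none
# shred does the same job, with support for multiple overwrite passes:
shred -n 1 scratch.img
```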

Cloud Computing/Cloud Management

  • Want to play around with OpenDaylight and OpenStack? Brent Salisbury has a write-up on how to run OpenStack Icehouse (via DevStack) together with OpenDaylight.
  • Puppet Labs has released a module that allows users to programmatically (via Puppet) provision and configure Google Compute Engine (GCE) instances. More details are available in the Puppet Labs blog post.
  • I love how developers come up with these themes around certain projects. Case in point: “Heat” is the name of the project for orchestrating resources in OpenStack, HOT is the name for the format of Heat templates, and Flame is the name of a new project to automatically generate Heat templates.
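To make the theme concrete, here’s what a minimal HOT template looks like. The resource name and property values are illustrative, and it’s written out via heredoc so the sketch stays plain shell:

```shell
# Flame's job, as I understand it, is to generate templates shaped like this
# from resources that already exist in your cloud.
cat > server.yaml <<'EOF'
heat_template_version: 2013-05-23
description: Minimal HOT template that boots a single instance
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros      # illustrative image/flavor names
      flavor: m1.tiny
EOF
# You'd then launch it with something like: heat stack-create -f server.yaml demo
```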

Operating Systems/Applications

  • I can’t imagine that anyone has been immune to the onslaught of information on Docker, but here’s an article that might be helpful if you’re still looking for a quick and practical introduction.
  • Many of you are probably familiar with Razor, the project that former co-workers Nick Weaver and Tom McSweeney created when they were at EMC. Tom has since moved on to CSC (via the vCHS team at VMware) and has launched a “next-generation” version of Razor called Hanlon. Read more about Hanlon and why this is a new/separate project in Tom’s blog post here.
  • Looking for a bit of clarity around CoreOS and Project Atomic? I found this post by Major Hayden to be extremely helpful and informative. Both of these projects are on my radar, though I’ll probably focus on CoreOS first as the (currently) more mature solution.
  • Linux Journal has a nice multi-page write-up on Docker containers that might be useful if you are still looking to understand Docker’s basic building blocks.
  • I really enjoyed Donnie Berkholz’ piece on microservices and the migrating Unix philosophy. It was a great view into how composability can (and does) shift over time. Good stuff, I highly recommend reading it.
  • cURL is an incredibly useful utility, especially in today’s age of HTTP-based REST APIs. Here’s a list of 9 uses for cURL that are worth knowing. This article on testing REST APIs with cURL is handy, too.
  • And for something entirely different…I know that folks love to beat up AppleScript, but it’s cross-application tasks like this that make it useful.
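Riffing on the cURL item above, a few idioms worth committing to memory. To keep this snippet self-contained the “endpoint” is a local file:// URL; the POST example (with its hypothetical URL) is shown in comments:

```shell
# A stand-in payload so the example runs anywhere:
printf '{"status": "ok"}' > response.json

curl -s "file://$PWD/response.json"               # -s: silent (no progress bar)
curl -s -o saved.json "file://$PWD/response.json" # -o: save the body to a file

# Against a real endpoint (hypothetical URL), a JSON POST looks like:
#   curl -s -X POST -H 'Content-Type: application/json' \
#        -d '{"name": "demo"}' https://api.example.com/v1/items
# and -i includes response headers, handy when debugging status codes.
```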

Storage

  • Someone recently brought the open source Open vStorage project to my attention. Open vStorage compares itself to VMware VSAN, but supports multiple storage backends and multiple hypervisors. Like a lot of other solutions, it’s implemented as a VM that presents NFS back to the hypervisors. If anyone out there has used it, I’d love to hear your feedback.
  • Erik Smith at EMC has published a series of articles on “virtual storage networks.” There’s some interesting content there—I haven’t finished reading all of the posts yet, as I want to be sure to take the time to digest them properly. If you’re interested, I suggest starting out with his introductory post (which, strangely enough, wasn’t the first post in the series), then moving on to part 1, part 2, and part 3.

Virtualization

  • Did you happen to see this write-up on migrating a VMware Fusion VM to VMware’s vCloud Hybrid Service? For now—I believe there are game-changing technologies out there that will alter this landscape—one of the very tangible benefits of vCHS is its strong interoperability with your existing vSphere (and Fusion!) workloads.
  • Need a listing of the IP addresses in use by the VMs on a given Hyper-V host? Ben Armstrong shares a bit of PowerShell code that produces just such a listing. As Ben points out, this can be pretty handy when you’re trying to track down a particular VM.
  • vCenter Log Insight 2.0 was recently announced; Vladan Seget has a decent write-up. I’m thinking of putting this into my home lab soon for gathering event information from VMware NSX, OpenStack, and the underlying hypervisors. I just need more than 24 hours in a day…
  • William Lam has an article on lldpnetmap, a little-known utility for mapping ESXi interfaces to physical switches. As the name implies, this relies on LLDP, so switches that don’t support LLDP or that don’t have LLDP enabled won’t work correctly. Still, a useful utility to have in your toolbox.
  • Technology previews of the next versions of Fusion (Fusion 7) and Workstation (Workstation 11) are available; see Eric Sloof’s articles (here and here for Fusion and Workstation, respectively) for more details.
  • vSphere 4 (and associated pieces) are no longer under general support. Sad face, but time waits for no man (or product).
  • Having some problems with VMware Fusion’s networking? Cody Bunch channels his inner Chuck Norris to kick VMware Fusion networking in the teeth.
  • Want to preview OS X Yosemite? Check out William Lam’s guide to using Fusion or vSphere to preview the new OS X beta release.

I’d better wrap this up now, or it’s going to turn into one of Chad’s posts. (Just kidding, Chad!) Thanks for taking the time to read this far!


Welcome to Technology Short Take #41, the latest in my series of random thoughts, articles, and links from around the Internet. Here’s hoping you find something useful!

Networking

  • Network Functions Virtualization (NFV) is a networking topic that is starting to get more and more attention (some may equate “attention” with “hype”; I’ll allow you to draw your own conclusion there). In any case, I liked how this article really hit upon what I personally feel is something many people are overlooking in NFV. Many vendors are simply rushing to provide virtualized versions of their solution without addressing the orchestration and automation side of the house. I’m looking forward to part 2 on this topic, in which the author plans to share more technical details.
  • Rob Sherwood, CTO of Big Switch, recently published a reasonably in-depth look at “modern OpenFlow” implementations and how they can leverage multiple tables in hardware. Some good information in here, especially on OpenFlow basics (good for those of you who aren’t familiar with OpenFlow).
  • Connecting Docker containers to Open vSwitch is one thing, but what about using Docker containers to run Open vSwitch in userspace? Read this.
  • Ivan knocks centralized SDN control planes in this post. It sounds like Ivan favors scale-out architectures, not scale-up architectures (which are typically what is seen in centralized control plane deployments).
  • Looking for more VMware NSX content? Anthony Burke has started a new series focusing on VMware NSX in pure vSphere environments. As far as I can tell, Anthony is up to 4 posts in the series so far. Check them out here: part 1, part 2, part 3, and part 4. Enjoy!

Servers/Hardware

  • Good friend Simon Seagrave is back to the online world again with this heads-up on a potential NIC issue with an HP ProLiant firmware update. The post also contains a link to a fix for the issue. Glad to see you back again, Simon!
  • Tom Howarth asks, “Is the x86 blade server dead?” (OK, so he didn’t use those words specifically. I’m paraphrasing for dramatic effect.) The basic premise of Tom’s position is that new technologies like server-side caching and VSAN/Ceph/Sanbolic (turning direct-attached storage into shared storage) will dramatically change the landscape of the data center. I would generally agree, although I’m not sure that I agree with Tom’s statement that “complexity is reduced” with these technologies. I think we’re just shifting the complexity to a different place, although it’s a place where I think we can better manage the complexity (and perhaps mask it). What do you think?


Cloud Computing/Cloud Management

  • Juan Manuel Rey has launched a series of blog posts on deploying OpenStack with KVM and VMware NSX. He has three parts published so far; all good stuff. See part 1, part 2, and part 3.
  • Kyle Mestery brought to my attention (via Twitter) this list of the “best newly-available OpenStack guides and how-to’s”. It was good to see a couple of Cody Bunch’s articles on the list; Cody’s been producing some really useful OpenStack content recently.
  • I haven’t had the opportunity to use SaltStack yet, but I’m hearing good things about it. It’s always helpful (to me, at least) to be able to look at products in the context of solving a real-world problem, which is why seeing this post with details on using SaltStack to automate OpenStack deployment was helpful.
  • Here’s a heads-up on a potential issue with the vCAC upgrade—the upgrade apparently changes some configuration files. The linked blog post provides more details on which files get changed. If you’re looking at doing this upgrade, read this to make sure you aren’t adversely affected.
  • Here’s a post with some additional information on OpenStack live migration that you might find useful.

Operating Systems/Applications

  • RHEL7, Docker, and Puppet together? Here’s a post on just such a use case (oh, I forgot to mention OpenStack’s involved, too).
  • Have you ever walked through a spider web because you didn’t see it ahead of time? (Not very fun.) Sometimes I feel that way with certain technologies or projects—like there are connections there with other technologies, projects, trends, etc., that aren’t quite “visible” just yet. That’s where I am right now with the recent hype around containers and how they are going to replace VMs. I’m not so sure I agree with that just yet…but I have more noodling to do on the topic.

Storage

  • “Server SAN” seems to be the name that is emerging to describe various technologies and architectures that create pools of storage from direct-attached storage (DAS). This would include products like VMware VSAN as well as projects like Ceph and others. Stu Miniman has a nice write-up on Server SAN over at Wikibon; if you’re not familiar with some of the architectures involved, that might be a good place to start. Also at Wikibon, David Floyer has a write-up on the rise of Server SAN that goes into a bit more detail on business and technology drivers, friction to adoption, and some recommendations.
  • Red Hat recently announced they were acquiring Inktank, the company behind the open source scale-out Ceph project. Jon Benedict, aka “Captain KVM,” weighs in with his thoughts on the matter. Of course, there’s no shortage of thoughts on the acquisition—a quick web search will prove that—but I find it interesting that none of the “big names” in storage social media had anything to say (not that I could find, anyway). Howard? Stephen? Chris? Martin? Bueller?

Virtualization

  • Doug Youd pulled together a nice summary of some of the issues and facts around routed vMotion (vMotion across layer 3 boundaries, such as across a Clos fabric/leaf-spine topology). It’s definitely worth a read (and not just because I get mentioned in the article, either—although that doesn’t hurt).
  • I’ve talked before—although it’s been a while—about Hyper-V’s choice to rely on host-level NIC teaming in order to provide network link redundancy to virtual machines. Ben Armstrong talks about another option, guest-level NIC teaming, in this post. I’m not so sure that using guest-level teaming is any better than relying on host-level NIC teaming; what’s really needed is a more full-featured virtual networking layer.
  • Want to run nested ESXi on vCHS? Well, it’s not supported…but William Lam shows you how anyway. Gotta love it!
  • Brian Graf shows you how to remove IP pools using PowerCLI.

Well, that’s it for this time around. As always, I welcome all courteous comments, so feel free to share your thoughts, ideas, rants, links, or feedback in the comments below.


Technology and Travel

Cody Bunch recently posted a quick round-up of what he carries when traveling, and just for fun I thought I’d do the same. Like Cody, I don’t know that I would consider myself a road warrior, but I have traveled a pretty fair amount. Here’s what I’m currently carrying when I hit the road:

  • Light laptop and tablet: After years of carrying around a 15″ MacBook Pro, then going down to a 13″ MacBook Pro, I have to say I’m pretty happy with the 13″ MacBook Air that I’m carrying now. Weight really does make a difference. I’m still toting the full-size iPad, but will probably switch to an iPad mini later in the year to save a bit more weight.
  • Bag: I settled on the Timbuk2 Commute messenger bag (see my write-up) and I’m quite pleased with it. A good bag makes a big difference when you’re mobile.
  • Backup battery: I’m carrying the NewTrent PowerPak 10.0 (NT100H). It may not be the best product out there, but it’s worked pretty well for me. It’s not too heavy and not too big, and will charge both phones and tablets.
  • Noise-canceling earphones: The Bose QC20 earphones (in-ear) are awesome. Naturally they let in a bit more noise than the bigger on-ear QC15 headphones, but the added noise is worth the tremendous decrease in size and weight.

On the software side, I’ll definitely echo Cody’s recommendation of Little Snitch; it’s an excellent product that I’ve used for years. You might also consider enabling the built-in firewall (see this write-up for enabling pf on OS X Mountain Lion; I haven’t tried it on Mavericks yet) for an added layer of network protection.
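For the curious, here’s roughly what that looks like. The ruleset is a deliberately tiny illustration, and the pfctl lines are commented out since they’re OS X-specific and need root:

```shell
# Write out a minimal pf ruleset (illustrative, not a recommended policy):
cat > pf-demo.conf <<'EOF'
block in all              # drop unsolicited inbound traffic
pass out all keep state   # allow outbound, plus its return traffic
EOF
# Then, on the Mac itself:
#   sudo pfctl -f pf-demo.conf -e   # load the ruleset and enable pf
```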

What about you, other road warriors out there? What are you carrying these days?

Update: Thanks to Ivan Pepelnjak, who pointed out that I had inadvertently swapped out the product names for the Bose earphones and headphones. That’s been corrected!


Welcome to Technology Short Take #40. The content is a bit light this time around; I thought I’d give you, my readers, a little break. Hopefully there’s still some useful and interesting stuff here. Enjoy!

Networking

  • Bob McCouch has a nice write-up on options for VPNs to AWS. If you’re needing to build out such a solution, you might want to read his post for some additional perspectives.
  • Matthew Brender touches on a networking issue present in VMware ESXi with regard to VMkernel multi-homing. This is something others have touched on before (including myself, back in 2008—not 2006 as I tweeted one day), but Matt’s write-up is concise and to the point. You’ll definitely want to keep this consideration in mind for your designs. Another thing to consider: vSphere 5.5 introduces the idea of multiple TCP/IP stacks, each with its own routing table. As the ability to use multiple TCP/IP stacks extends throughout vSphere, it’s entirely possible this limitation will go away entirely.
  • YAOFC (Yet Another OpenFlow Controller) is interesting mainly because it focuses on issues of scale (tens of thousands of switches with hundreds of thousands of endpoints). See here for details.

Servers/Hardware

  • Intel recently announced a refresh of the E5 CPU line; Kevin Houston has more details here.

Security

  • This one slipped past me in the last Technology Short Take, so I wanted to be sure to include it here. Mike Foley—whom I’m sure many of you know—recently published an ESXi security whitepaper. His blog post provides more details, as well as a link to download the whitepaper.
  • The OpenSSL “Heartbleed” vulnerability has captured a great deal of attention (justifiably so). Here’s a quick article on how to assess if your Linux-based server is affected.
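A first-pass check is just comparing the OpenSSL version string against the affected range (my understanding: 1.0.1 through 1.0.1f were affected by CVE-2014-0160; 1.0.1g and later, and the 0.9.8/1.0.0 branches, were not). Treat this as a hint, not a verdict—distros often backport fixes without bumping the version string:

```shell
# Classify an OpenSSL version string against the Heartbleed-affected range:
heartbleed_range() {
    case "$1" in
        1.0.1|1.0.1[a-f]) echo "potentially vulnerable" ;;
        *)                echo "outside the affected range" ;;
    esac
}

# Check the local openssl binary (falls back to a blank version if absent):
heartbleed_range "$(openssl version 2>/dev/null | awk '{print $2}')"
# For a definitive answer, check your vendor's advisory against the package
# version (e.g. dpkg -l openssl or rpm -q openssl).
```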

Cloud Computing/Cloud Management

  • I recently built a Windows Server 2008 R2 image for use in my OpenStack home lab. This isn’t as straightforward as building a Linux image (no surprises there), but I did find a few good articles that helped along the way. If you find yourself needing to build a Windows image for OpenStack, check out creating a Windows image on OpenStack (via Gridcentric) and building a Windows image for OpenStack (via Brent Salisbury). There’s also a site that offers a version of cloud-init for Windows as well as some prebuilt evaluation images. (Note: I was unable to get the prebuilt images to download, but YMMV.)
  • Speaking of building OpenStack images, here’s a “how to” guide on building a Debian 7 cloud image for OpenStack.
  • Sean Roberts recently launched a series of blog posts about various OpenStack projects that he feels are important. The first project he highlights is Congress, a policy management project that has recently gotten a fair bit of attention (see a reference to Congress at the end of this recent article on the mixed messages from Cisco on OpFlex). In my opinion, Congress is a big deal, and I’m really looking forward to seeing how it evolves.
  • I have a related item below under Virtualization, but I wanted to point this out here: work is being done on a VIF driver to connect Docker containers to Open vSwitch (and thus to OpenStack Neutron). Very cool. See here for details.
  • I love that Cody Bunch thinks a lot like I do, like this quote from a recent post sharing some links on OpenStack Heat: “That generally means I’ve got way too many browser tabs open at the moment and need to shut some down. Thus, here comes a huge list of OpenStack links and resources.” Classic! Anyway, check out the list of Heat resources, you’re bound to find something useful there.

Operating Systems/Applications

  • A short while back I had a Twitter conversation about spinning up a Minecraft server for my kids in my OpenStack home lab. That led to a few other discussions, one of which was how cool it would be if you could use Heat autoscaling to scale Minecraft. Then someone sends me this.
  • Per the Microsoft Windows Server Team’s blog post, the Windows Server 2012 R2 Update is now generally available (there’s also a corresponding update for Windows 8.1).

Storage

  • Did you see that EMC released a virtual edition of VPLEX? It’s being called the “data plane” for software-defined storage. VPLEX is an interesting product, no doubt, and the introduction of a virtual edition is intriguing (but not entirely unexpected). I did find it unusual that the release of the virtual edition signalled the addition of a new feature called “MetroPoint”, which allows two sites to replicate back to a single site. See Chad Sakac’s blog post for more details.
  • This discussion on MPIO and in-guest iSCSI is a great reminder that designing solutions in a virtualized data center (or, dare I say it—a software-defined data center?) isn’t the same as designing solutions in a non-virtualized environment.

Virtualization

  • Ben Armstrong talks briefly about Hyper-V protected networks, which is a way to protect a VM against network outage by migrating the VM to a different host if a link failure occurs. This is kind of handy, but requires Windows Server clustering in order to function (since live migration in Hyper-V requires Windows Server clustering). A question for readers: is Windows Server clustering still much the same as it was in years past? It was a great solution in years past, but now it seems outdated.
  • At the same time, though, Microsoft is making some useful networking features easily accessible in Hyper-V. Two more of Ben’s articles show off the DHCP Guard and Router Guard features available in Hyper-V on Windows Server 2012.
  • There have been a pretty fair number of posts talking about nested ESXi (ESXi running as a VM on another hypervisor), either on top of ESXi or on top of VMware Fusion/VMware Workstation. What I hadn’t seen—until now—was how to get that working with OpenStack. Here’s how Mathias Ewald made it work.
  • And while we’re talking nested hypervisors, be sure to check out William Lam’s post on running a nested Xen hypervisor with VMware Tools on ESXi.
  • Check out this potential way to connect Docker containers with Open vSwitch (which then in turn opens up all kinds of other possibilities).
  • Jason Boche regales us with a tale of a vCenter 5.5 Update 1 upgrade that results in missing storage providers. Along the way, he also shares some useful information about Profile-Driven Storage in general.
  • Eric Gray shares information on how to prepare an ESXi ISO for PXE booting.
  • PowerCLI 5.5 R2 has some nice new features. Skip over to Alan Renouf’s blog to read up on what is included in this latest release.

I should close things out now, but I do have one final link to share. I really enjoyed Nick Marshall’s recent post about the power of a tweet. In the post, Nick shares how three tweets—one with Duncan Epping, one with Cody Bunch, and one with me—have dramatically altered his life and his career. It’s pretty cool, if you think about it.

Anyway, enough is enough. I hope that you found something useful here. I encourage readers to contribute to the discussion in the comments below. All courteous comments are welcome.


Welcome to Technology Short Take #39, in which I share a random assortment of links, articles, and thoughts from around the world of data center-related technologies. I hope you find something useful—or at least something interesting!

Networking

  • Jason Edelman has been talking about the idea of a Common Programmable Abstraction Layer (CPAL). He introduces the idea, then goes on to explore—as he puts it—the power of a CPAL. I can’t help but wonder if this is the right level at which to put the abstraction layer. Is the abstraction layer better served by being integrated into a cloud management platform, like OpenStack? Naturally, the argument then would be, “Not everyone will use a cloud management platform,” which is a valid argument. For those customers who won’t use a cloud management platform, I would then ask: will they benefit from a CPAL? I mean, if they aren’t willing to embrace the abstraction and automation that a cloud management platform brings, will abstraction and automation at the networking layer provide any significant benefit? I’d love to hear others’ thoughts on this.
  • Ethan Banks also muses on the need for abstraction.
  • Craig Matsumoto of SDN Central helps highlight a recent (and fairly significant) development in networking protocols—the submission of the Generic Network Virtualization Encapsulation (Geneve) proposal to the IETF. Jointly authored by VMware, Microsoft, Red Hat, and Intel, this new protocol proposal attempts to bring together the strengths of the various network virtualization encapsulation protocols out there today (VXLAN, STT, NVGRE). This is interesting enough that I might actually write up a separate blog post about it; stay tuned for that.
  • Lee Doyle provides an analysis of the market for network virtualization, which includes some introductory information for those who might be unfamiliar with what network virtualization is. I might contend that Open vSwitch (OVS) alone isn’t an option for network virtualization, but that’s just splitting hairs. Overall, this is a quick but worthy read if you are trying to get started in this space.
  • Don’t think this “software-defined networking” thing is going to take off? Read this, and then let me know what you think.
  • Chris Margret has a nice dissection of how bash completion works, particularly in regards to the Cumulus Networks implementation.
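As a toy illustration of the mechanism Chris dissects (the `greet` command and its word list are invented, and this is bash-specific):

```shell
#!/usr/bin/env bash
# `complete -F` registers a function that bash calls when you press Tab; the
# function inspects COMP_WORDS/COMP_CWORD (what's on the command line and
# where the cursor is) and fills COMPREPLY with candidate completions.
_greet_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "hello howdy hiya" -- "$cur") )
}
complete -F _greet_complete greet

# Simulate what bash does when you hit Tab after "greet ho":
COMP_WORDS=(greet ho); COMP_CWORD=1
_greet_complete
echo "${COMPREPLY[@]}"   # -> howdy
```

Real-world completion scripts (like the Cumulus one) are the same idea, just with the word list computed dynamically from the tool’s subcommands and state.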

Servers/Hardware

  • Via Kevin Houston, you can get more details on the Intel E7 v2 and new blade servers based on the new CPU. x86 marches on!
  • Another interesting tidbit regarding hardware: it seems as if we are now seeing the emergence of another round of “hardware offloads.” The first round came about around 2006 when Intel and AMD first started releasing their hardware assists for virtualization (Intel VT and AMD-V, respectively). That technology was only “so-so” at first (VMware ESX continued to use binary translation [BT] because it was still faster than the hardware offloads), but it quickly matured and is now leveraged by every major hypervisor on the market. This next round of hardware offloads seems targeted at network virtualization and related technologies. Case in point: a relatively small company named Netronome (I’ve spoken about them previously, first back in 2009 and again a year later), recently announced a new set of network interface cards (NICs) expressly designed to provide hardware acceleration for software-defined networking (SDN), network functions virtualization (NFV), and network virtualization solutions. You can get more details from the Netronome press release. This technology is actually quite interesting; I’m currently talking with Netronome about testing it with VMware NSX and will provide more details as that evolves.

Security

  • Ben Rossi tackles the subject of security in a software-defined world, talking about how best to integrate security into SDN-driven architectures and solutions. It’s a high-level article and doesn’t get into a great level of detail, but does point out some of the key things to consider.

Cloud Computing/Cloud Management

  • “Racker” James Denton has some nice articles on OpenStack Neutron that you might find useful. He starts out with discussing the building blocks of Neutron, then goes on to discuss building a simple flat network, using VLAN provider networks, and Neutron routers and the L3 agent. And if you need a breakdown of provider vs. tenant networks in Neutron, this post is also quite handy.
  • Here are a couple of quick walk-throughs (first one, second one) on installing OpenStack. They don’t provide any in-depth explanations of what’s going on, why you’re doing what you’re doing, or how it relates to the rest of the steps, but you might find something useful nevertheless.
  • Thinking of building your own OpenStack cloud in a home lab? Kevin Jackson—who along with Cody Bunch co-authored the OpenStack Cloud Computing Cookbook, 2nd Edition—has three articles up on his home OpenStack setup. (At least, I’ve only found three articles so far.) Part 1 is here, part 2 is here, and part 3 is here. Enjoy!
  • This post attempts to describe some of the core (mostly non-technical) differences between OpenStack and OpenNebula. It is published on the OpenNebula site, so keep that in mind, as it is (naturally) biased toward OpenNebula. It would be quite interesting to me to see a more technically-focused discussion of the two approaches (and, for that matter, let’s include CloudStack as well). Perhaps this already exists—does anyone know?
  • CloudScaling recently added a Google Compute Engine (GCE) API compatibility module to StackForge, to allow users to leverage the GCE API with OpenStack. See more details here.
  • Want to run Hyper-V in your OpenStack environment? Check this out. Also from the same folks is a version of cloud-init for Windows instances in cloud environments. I’m testing this in my OpenStack home lab now, and hope to have more information soon.

Operating Systems/Applications


Virtualization

  • Brendan Gregg of Joyent has an interesting write-up comparing virtualization performance between Zones (apparently referring to Solaris Zones, a form of OS virtualization/containerization), Xen, and KVM. I might disagree that KVM is a Type 2 hardware virtualization technology, pointing out that Xen also requires a Linux-based dom0 in order to function. (The distinction between a Type 1 that requires a general purpose OS in a dom0/parent partition and a Type 2 that runs on top of a general purpose OS is becoming increasingly blurred, IMHO.) What I did find interesting was that they (Joyent) run a ported version of KVM inside Zones for additional resource controls and security. Based on the results of his testing—performed using DTrace—it would seem that the “double-hulled virtualization” doesn’t really impact performance.
  • Pete Koehler—via Jason Langer’s blog—has a nice post on converting in-guest iSCSI volumes to native VMDKs. If you’re in a similar situation, check out the post for more details.
  • This is interesting. Useful, I’m not so sure about, but definitely interesting.
  • If you are one of the few people living under a rock who doesn’t know about PowerCLI, Alan Renouf is here to help.

It’s time to wrap up; this post has already run longer than usual. There was just so much information that I wanted to share with you! I’ll be back soon-ish with another post, but until then feel free to join (or start) the conversation by adding your thoughts, ideas, links, or responses in the comments below.


Welcome to Technology Short Take #36. In this episode, I’ll share a variety of links from around the web, along with some random thoughts and ideas along the way. I try to keep things related to the key technology areas you’ll see in today’s data centers, though I do stray from time to time. In any case, enough with the introduction—bring on the content! I hope you find something useful.

Networking

  • This post is a bit older, but still useful if you’re interested in learning more about OpenFlow and OpenFlow controllers. Nick Buraglio has put together a basic reference OpenFlow controller VM—a KVM guest running CentOS 6.3 with the open source Floodlight controller.
  • Paul Fries takes on defining SDN, breaking it down into two “flavors”: host dominant and network dominant. This is a reasonable way of grouping the various approaches to SDN (using SDN in the very loose industry sense, not the original control plane-data plane separation sense). I’d like to add to Paul’s analysis that it’s important to understand that, in reality, host dominant and network dominant systems can coexist. It’s not at all unreasonable to think that you might have a fabric controller that is responsible for managing/optimizing traffic flows across the physical transport network/fabric, and an overlay controller—like VMware NSX—that integrates tightly with the hypervisor(s) and workloads running on those hypervisors to create and manage logical connectivity and logical network services.
  • This is an older post from April 2013, but still useful, I think. In his article titled “OpenFlow Test Deployment Options”, Brent Salisbury—a rock star among the new breed of network engineers emerging in the world of SDN—discusses some practical deployment strategies for deploying OpenFlow into an existing network topology. One key statement that I really liked from this article was this one: “SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking.” New technologies don’t make talented folks who embrace change obsolete; if anything, these new technologies make them more valuable.
  • Great post by Ivan (is there a post by Ivan that isn’t great?) on flow table explosion with OpenFlow. He does a great job of explaining how OpenFlow works and why OpenFlow 1.3 is needed in order to see broader adoption of OpenFlow.


  • Intel announced the E5 2600 v2 series of CPUs back at Intel Developer Forum (IDF) 2013 (you can follow my IDF 2013 coverage by looking at posts with the IDF2013 tag). Kevin Houston followed up on that announcement with a useful post on vSphere compatibility with the E5 2600 v2. You can also get more details on the E5 2600 v2 itself in this related post by Kevin as well. (Although I’m just now catching Kevin’s posts, they were published almost immediately after the Intel announcements—thanks for the promptness, Kevin!)


Nothing this time around, but I’ll keep my eyes open for content to share with you in future posts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I found this refresher on some of the most useful apt-get/apt-cache commands to be helpful. I don’t use some of them on a regular basis, and so it’s hard to remember the specific command and/or syntax when you do need one of these commands.
  • I wouldn’t have initially considered comparing Docker and Chef, but considering that I’m not an expert in either technology it could just be my limited understanding. However, this post on why Docker and why not Chef does a good job of looking at ways that Docker could potentially replace certain uses for Chef. Personally, I tend to lean toward the author’s final conclusions that it is entirely possible that we’ll see Docker and Chef being used together. However, as I stated, I’m not an expert in either technology, so my view may be incorrect. (I reserve the right to revise my view in the future.)


  • Using Dell EqualLogic with VMFS? Better read this heads-up from Cormac Hogan and take the recommended action right away.
  • Erwin van Londen proposes some ideas for enhancing FC error detection and notification with the idea of making hosts more aware of path errors and able to “route” around them. It’s interesting stuff; as Erwin points out, though, even if the T11 accepted the proposal it would be a while before this capability showed up in actual products.


That’s it for this time around, but feel free to continue the conversation in the comments below. If you have any additional information to share regarding any of the topics I’ve mentioned, please take the time to add that information in the comments. Courteous comments are always welcome!


Vendor Meetings at VMworld 2013

This year at VMworld, I wasn’t in any of the breakout sessions because employees aren’t allowed to register for breakout sessions in advance; we have to wait in the standby line to see if we can get in at the last minute. So, I decided to meet with some vendors that seemed interesting and get some additional information on their products. Here’s the write-up on some of the vendor meetings I’ve attended while in San Francisco.

Jeda Networks

I’ve mentioned Jeda Networks before (see here), and I was pretty excited to have the opportunity to sit down with a couple of guys from Jeda to get more details on what they’re doing. Jeda Networks describes themselves as a “software-defined storage networking” company. Given my previous role at EMC (involved in storage) and my current role at VMware focused on network virtualization (which encompasses SDN), I was quite curious.

Basically, what Jeda Networks does is create a software-based FCoE overlay on an existing Ethernet network. Jeda accomplishes this by actually programming the physical Ethernet switches (they have a series of plug-ins for the various vendors and product lines; adding a new switch just means adding a new plug-in). In the future, when OpenFlow or its derivatives become more ubiquitous, I could see using those control plane technologies to accomplish the same task. It’s a fascinating idea, though I question how valuable a software-based FCoE overlay is in a world that seems to be rapidly moving everything to IP. Even so, I’m going to keep an eye on Jeda to see how things progress.

Diablo Technologies

Diablo was a new company to me; I hadn’t heard of them before their PR firm contacted me about a meeting while at VMworld. Diablo has created what they call Memory Channel Storage, which puts NAND flash on a DIMM. Basically, it makes high-capacity flash storage accessible via the CPU’s memory bus. To take advantage of high-capacity flash in the memory bus, Diablo supplies drivers for all the major operating systems (OSes), including ESXi, and what this driver does is modify the way that page swaps are handled. Instead of page swaps moving data from memory to disk—as would be the case in a traditional virtual memory system—the page swaps happen between DRAM on the memory bus and Diablo’s flash on the memory bus. This means that page swaps are extremely fast (on the level of microseconds, not the milliseconds typically seen with disks).

To use the extra capacity, then, administrators must essentially “overcommit” their hosts. Say your hosts had 64GB of (traditional) RAM installed, but 2TB of Diablo’s DIMM-based flash installed. You’d then allocate 2TB of memory to VMs, and the hypervisor would swap pages at extremely high speed between the DRAM and the DIMM-based flash. At that point, the system DRAM almost looks like another level of cache.
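To put rough numbers on that example (the capacities here are illustrative figures from the scenario above, not Diablo specifications):

```python
# Back-of-the-envelope math for the memory "overcommitment" example above.
# The capacities are hypothetical, not vendor specs.

DRAM_GB = 64          # traditional RAM installed in the host
FLASH_GB = 2048       # DIMM-based flash available for page swaps (2TB)

# Memory allocated to VMs versus the physical DRAM backing it
overcommit_ratio = FLASH_GB / DRAM_GB
print(f"Allocatable memory: {FLASH_GB} GB")
print(f"Overcommit ratio: {overcommit_ratio:.0f}:1")
# With page swaps landing on memory-bus flash (microseconds rather than
# milliseconds), the DRAM effectively behaves as a cache in front of the flash.
```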

This “overcommitment” technique could have some negative effects on existing monitoring systems that are unaware of the underlying hardware configuration. Memory utilization would essentially run at 100% constantly, though the speed of the DIMM-based flash on the memory bus would mean you wouldn’t take a performance hit.

In the future, Diablo is looking for ways to make their DIMM-based flash appear to an OS as addressable memory, so that the OS would just see 3.2TB (or whatever) of RAM, and access it accordingly. There are a number of technical challenges there, not the least of which is ensuring proper latency and performance characteristics. If they can resolve these technical challenges, we could be looking at a very different landscape in the near future. Consider the effects of cost-effective servers with 3TB (or more) of RAM installed. What effect might that have on modern data centers?


HyTrust is a company with whom I’ve been in contact for several years now (since early 2009). Although HyTrust has been profitable for some time now, they recently announced a new round of funding intended to help accelerate their growth (though they’re already on track to quadruple sales this year). I chatted with Eric Chiu, President and founder of HyTrust, and we talked about a number of areas. I was interested to learn that HyTrust had officially productized a proof-of-concept from 2010 leveraging Intel’s TPM/TXT functionality to perform attestation of ESXi hypervisors (this basically means that HyTrust can verify the integrity of the hypervisor as a trusted platform). They also recently introduced “two man” support; that is, support for actions to be approved or denied by a second party. For example, an administrator might try to delete a VM, but that deletion would need to be approved by a second party before it is allowed to proceed. HyTrust also continues to explore other integration points with related technologies, such as OpenStack, NSX, physical networking gear, and converged infrastructure. Be sure to keep an eye on HyTrust—I think they’re going to be doing some pretty cool things in the near future.


Vormetric interested me because they offer a data encryption product, and I was interested to see how—if at all—they integrated with VMware vSphere. It turns out they don’t integrate with vSphere at all, as their product is really more tightly integrated at the OS level. For example, their product runs natively as an agent/daemon/service on various UNIX platforms, various Linux distributions, and all recent versions of Windows Server. This gives them very fine-grained control over data access. Given their focus is on “protecting the data,” this makes sense. Vormetric also offers a few related products, like a key management solution and a certificate management solution.


SimpliVity is one of a number of vendors touting “hyperconvergence,” which—as far as I can tell—basically means putting storage and compute together on the same node. (If there is a better definition, please let me know.) In that regard, they could be considered similar to Nutanix. I chatted with one of the designers of the SimpliVity OmniCube. SimpliVity uses VM-based storage controllers that leverage VMDirectPath for accelerated access to the underlying hardware, and presents that underlying hardware back to the ESXi nodes as NFS storage. Their file system—developed during the 3 years they spent in stealth mode—abstracts away the hardware so that adding OmniCubes means adding both capacity and I/O (as well as compute). They use inline deduplication not only to reduce storage capacity, but especially to avoid having to write I/Os to the storage in the first place. (Capacity isn’t usually the issue; I/Os are typically the issue.) SimpliVity’s file system enables fast backups and fast clones; although they didn’t elaborate, I would assume they are using a pointer-based system (perhaps even an optimized content-addressed storage [CAS] model) that keeps them from having to copy large amounts of data around the system. This is what enables them to do global deduplication, backups from any system to any other system, and restores from any system to any other system (system here referring to an OmniCube).
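My guess about a pointer-based, content-addressed model can be made concrete with a toy sketch (purely hypothetical—SimpliVity hasn’t published these internals):

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed block store: blocks are keyed by their
    SHA-256 digest, so identical blocks are stored exactly once and
    duplicate writes never reach the backing storage."""

    def __init__(self):
        self.blocks = {}       # digest -> block data
        self.writes = 0        # count of actual back-end writes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:   # only new content costs an I/O
            self.blocks[digest] = data
            self.writes += 1
        return digest

    def clone(self, digests):
        # A "clone" is just a new list of pointers -- no data is copied,
        # which is what makes backups/clones fast in such a model.
        return list(digests)

store = ContentAddressedStore()
vm_disk = [store.put(b"block-A"), store.put(b"block-B"), store.put(b"block-A")]
backup = store.clone(vm_disk)   # instant: pointers only
print(store.writes)             # 2 -- the duplicate block-A write was elided
```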

In any case, SimpliVity looks very interesting due to its feature set. It will be interesting to see how they develop and mature.

SanDisk FlashSoft

This was probably one of the more fascinating meetings I had at the conference. SanDisk FlashSoft is a flash-based caching product that supports various OSes, including an in-kernel driver for ESXi. What made this product interesting was that SanDisk brought out one of the key architects behind the solution, who went through their design philosophy and the decisions they’d made in their architecture in great detail. It was a highly entertaining discussion.

More than just entertaining, though, it was really informative. FlashSoft aims to keep their caching layer as full of dirty data as possible, rather than seeking to flush dirty data right away. The advantage this offers is that if another change to that data comes, FlashSoft can discard the earlier change and only keep the latest change—thus eliminating I/Os to the back-end disks entirely. Further, by keeping as much data in their caching layer as possible, FlashSoft has a better ability to coalesce I/Os to the back-end, further reducing the I/O load. FlashSoft supports both write-through and write-back models, and leverages a cache coherency/consistency model that allows them to support write-back with VM migration without having to leverage the network (and without having to incur the CPU overhead that comes with copying data across the network). I very much enjoyed learning more about FlashSoft’s product and architecture. It’s just a shame that I don’t have any SSDs in my home lab that would benefit from FlashSoft.
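The dirty-data strategy described above can be sketched in a few lines (a drastic simplification of my own, not FlashSoft’s actual design):

```python
class WriteBackCache:
    """Toy write-back cache: dirty blocks are held as long as possible,
    so a rewrite of the same block replaces the pending data and the
    earlier version never generates a back-end I/O."""

    def __init__(self):
        self.dirty = {}           # block address -> latest pending data
        self.backend_writes = 0

    def write(self, addr, data):
        # Overwriting a dirty entry discards the superseded change --
        # this is the I/O elimination described above.
        self.dirty[addr] = data

    def flush(self):
        # Walking addresses in order models coalescing adjacent I/Os
        # before they hit the back-end disks.
        for addr in sorted(self.dirty):
            self.backend_writes += 1
        self.dirty.clear()

cache = WriteBackCache()
for _ in range(1000):
    cache.write(42, b"new data")  # 1000 writes to the same block...
cache.flush()
print(cache.backend_writes)       # ...cost only 1 back-end I/O
```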


My last meeting of the week was with a couple folks from SwiftStack. We sat down to chat about Swift, SwiftStack, and object storage, and discussed how they are seeing the adoption of Swift in lots of areas—not just with OpenStack, either. That seems to be a pretty common misconception (that OpenStack is required to use Swift). SwiftStack is working on some nice enhancements to Swift that hopefully will show up soon, including erasure coding support and greater policy support.

Summary and Wrap-Up

I really appreciate the time that each company took to meet with me and share the details of their particular solution. One key takeaway for me was that there is still lots of room for innovation. Very cool stuff is ahead of us—it’s an exciting time to be in technology!


Welcome to Technology Short Take #35, another in my irregular series of posts that collect various articles, links and thoughts regarding data center technologies. I hope that something in here is useful to you.


  • Art Fewell takes a deeper look at the increasingly important role of the virtual switch.
  • A discussion of “statefulness” brought me again to Ivan’s post on the spectrum of firewall statefulness. It’s so easy sometimes just to revert to “it’s stateful” or “it’s not stateful,” but the reality is that it’s not quite so black-and-white.
  • Speaking of state, I like this piece by Ivan as well.
  • I tend not to link to TechTarget posts any more than I have to, because invariably the articles end up going behind a login requirement just to read them. Even so, this Q&A session with Martin Casado on managing physical and virtual worlds in parallel might be worth going through the hassle.
  • This looks interesting.
  • VMware introduced VMware NSX recently at VMworld 2013. Cisco shared some thoughts on what they termed a “software-only” approach; naturally, they have a different vision for data center networking (and that’s OK). I was a bit surprised by some of the responses to Cisco’s piece (see here and here). In the end, though, I like Greg Ferro’s statement: “It is perfectly reasonable that both companies will ‘win’.” There’s room for a myriad of views on how to solve today’s networking challenges, and each approach has its advantages and disadvantages.


Nothing this time around, but I’ll watch for items to include in future editions. Feel free to send me links you think would be useful to include in the future!


  • I found this write-up on using OVS port mirroring with Security Onion for intrusion detection and network security monitoring to be quite useful.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • In past presentations I’ve referenced the terms “snowflake servers” and “phoenix servers,” which I borrowed from Martin Fowler. (I don’t know if Martin coined the terms or not, but you can get more information here and here.) Recently among some of Martin’s material I saw reference to yet another term: the immutable server. It’s an interesting construct: rather than managing the configuration of servers, you simply spin up new instances when you need a new configuration; existing configurations are never changed. More information on the use of the immutable server construct is also available here. I’d be interested to hear readers’ thoughts on this idea.
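To make the immutable server construct a bit more concrete, here’s a tiny sketch of my own (hypothetical, not tied to any particular tool):

```python
import uuid

class Server:
    def __init__(self, config):
        self.id = uuid.uuid4()
        self.config = dict(config)   # configuration frozen at build time

def mutable_update(server, key, value):
    # Traditional config management: change the running server in place.
    server.config[key] = value
    return server

def immutable_update(server, key, value):
    # Immutable server: never touch the existing instance; build a new
    # one from the desired configuration and replace the old instance.
    new_config = {**server.config, key: value}
    return Server(new_config)

old = Server({"nginx": "1.4"})
new = immutable_update(old, "nginx", "1.5")
print(old.config["nginx"], new.config["nginx"])  # 1.4 1.5 -- old untouched
print(old.id != new.id)                          # True -- a brand-new instance
```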


  • Chris Evans takes a look at ScaleIO, recently acquired by EMC, and speculates on where ScaleIO fits into the EMC family of products relative to the evolution of storage in the data center.
  • While I was at VMworld 2013, I had the opportunity to talk with SanDisk’s FlashSoft division about their flash caching product. It was quite an interesting discussion, so stay tuned for that update (it’s almost written; expect it in the next couple of days).


  • The rise of new converged (or, as some vendors like to call it, “hyperconverged”) architectures means that we have to consider the impact of these new architectures when designing vSphere environments that will leverage them. I found a few articles by fellow VCDX Josh Odgers that discuss the impact of Nutanix’s converged architecture on vSphere designs. If you’re considering the use of Nutanix, have a look at some of these articles (see here, here, and here).
  • Jonathan Medd shows how to clone a VM from a snapshot using PowerCLI. Also be sure to check out this post on the vSphere CloneVM API, which Jonathan references in his own article.
  • Andre Leibovici shares an unofficial way to disable the use of the SESparse disk format and revert to VMFS Sparse.
  • Forgot the root password to your ESXi 5.x host? Here’s a procedure for resetting the root password for ESXi 5.x that involves booting on a Linux CD. As is pointed out in the comments, it might actually be easier to rebuild the host.
  • vSphere 5.5 was all the rage at VMworld 2013, and there was a lot of coverage. One thing that I didn’t see much discussion around was what’s going on with the free version of ESXi. Vladan Seget gives a nice update on how free ESXi is changing with version 5.5.
  • I am loving the micro-infrastructure series by my VMware vSphere Design co-author, Forbes Guthrie. See it here, here, and here.

It’s time to wrap up now; I’ve already included more links than I normally include (although it doesn’t seem like it). In any case, I hope that something I’ve shared here is helpful, and feel free to share your own thoughts, ideas, and feedback in the comments below. Have a great day!


I’ve written before about adding an extra layer of network security to your Macintosh by leveraging the BSD-level ipfw firewall, in addition to the standard GUI firewall and additional third-party firewalls (like Little Snitch). In OS X Lion and OS X Mountain Lion, though, ipfw was deprecated in favor of pf, the powerful packet filter that I believe originated on OpenBSD. (OS X’s version of pf is ported from FreeBSD.) In this article, I’m going to show you how to use pf on OS X.

Note that this is just one way of leveraging pf, not necessarily the only way of doing it. I tested (and am currently using) this configuration on OS X Mountain Lion 10.8.3.

There are two basic pieces involved in getting pf up and running on OS X Mountain Lion:

  1. Putting pf configuration files in place.
  2. Creating a launchd item for pf.

Let’s look at each of these pieces in a bit more detail. We’ll start with the configuration files.

Putting Configuration Files in Place

OS X Mountain Lion comes with a barebones /etc/pf.conf preinstalled. This barebones configuration file references a single anchor found in the /etc/pf.anchors directory. This anchor, however, does not contain any actual pf rules; instead, it appears to be nothing more than a placeholder.

Since there is a configuration file already in place, you have two options ahead of you:

  1. You can overwrite the existing configuration file. The drawback of this approach is that a) Apple has been known to change this file during system updates, undoing your changes; and b) it could break future OS X functionality.

  2. You can bypass the existing configuration file. This is the approach I took, partly due to the reasons listed above and partly because I found that pfctl (the program used to manage pf) wouldn’t activate the filter rules when the existing configuration file was used. (It complained about improper order of lines in the existing configuration file.)

Note that some tools (like IceFloor) take the first approach and modify the existing configuration file.

I’ll assume you’re going to use option #2. What you’ll need, then, are (at a minimum) two configuration files:

  1. The pf configuration file you want it to parse on startup
  2. At least one anchor file that contains the various options and rules you want to pass to pf when it starts

Since we’re bypassing the existing configuration file, all you really need is an extremely simple configuration file that points to your anchor and loads it, like this:
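For example, a minimal configuration file might look like the following (the anchor name and file path here are placeholders of my own—substitute whatever names you prefer):

```
# Minimal pf configuration that just declares and loads one custom anchor
anchor "customrules"
load anchor "customrules" from "/etc/pf.anchors/custom.rules"
```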

The other file you need has the actual options and rules that will be passed to pf when it starts. You can get fancy here and use a separate file to define macros and tables, or you can bundle the macros and tables in with the rules. Whatever approach you take, be sure that you have the commands in this file in the right order: options, normalization, queueing, translation, and filtering. Failure to put things in the right order will cause pf not to enable and will leave your system without this additional layer of network protection.

A very simple set of rules in an anchor might look something like this:
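As an illustration only (these rules are a hypothetical starting point, not a recommendation—adjust them to fit your needs), an anchor file might contain:

```
# Hypothetical anchor file -- note the required order:
# options, normalization, queueing, translation, filtering

# Options
set block-policy drop
set skip on lo0

# Normalization
scrub in all

# Filtering: block inbound by default, keep state on outbound traffic
block in all
pass out all keep state

# Allow inbound SSH (example only -- remove if you don't run sshd)
pass in proto tcp from any to any port 22
```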

Naturally, you’d want to customize these rules to fit your environment. At the end of this article I provide some additional resources that might help with this task.

Once you have the configuration file in place and at least one anchor defined with rules (in the right order!), then you’re ready to move ahead with creating the launchd item for pf so that it starts automatically.

However, there is one additional thing you might want to do first—test your rules to be sure everything is correct. Use this command in a terminal window while running as an administrative user:

sudo pfctl -v -n -f <path to configuration file>

If this command reports errors, go back and fix them before proceeding.

Creating the launchd Item for pf

Creating the launchd item simply involves creating a properly-formatted XML file and placing it in /Library/LaunchDaemons. It must be owned by root; otherwise, it won’t be processed at all. If you aren’t clear on how to make sure it’s owned by root, go do a bit of reading on sudo and chown.

Here’s a launchd item you might use for pf:
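Here’s a sketch of what such a launchd item could look like (the label and the configuration file path are placeholders you’d replace with your own):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.pf</string>
    <key>ProgramArguments</key>
    <array>
        <string>/sbin/pfctl</string>
        <string>-e</string>
        <string>-f</string>
        <string>/etc/pf.anchors/custom.conf</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```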

A few notes about this launchd item:

  • You’ll want to change the last <string> item under the ProgramArguments key to properly reflect the path and filename of the custom configuration file you created earlier. In my case, I’m storing both the configuration file and the anchor in the /etc/pf.anchors directory.
  • As I stated earlier, you must ensure this file is owned by root once you put it into /Library/LaunchDaemons. It won’t work otherwise.
  • If you have additional parameters you want/need to pass to pfctl, add them as separate lines in the ProgramArguments array. Each individual argument on the command line must be a separate item in the array.

Once this file is in place with the right ownership, you can either use launchctl to load it or restart your computer. The robust pf firewall should now be running on your OS X Mountain Lion system. Enjoy!

Some Additional Resources

Finally, it’s important to note that I found a few different web sites helpful during my experimentations with pf on OS X. This write-up was written with Lion in mind, but applies equally well to Mountain Lion, and this site—while clearly focused on OpenBSD and FreeBSD—was nevertheless quite helpful as well.

It should go without saying, but I’ll say it nevertheless: courteous comments are welcome! Feel free to add your thoughts, ideas, questions, or corrections below.


Technology Short Take #30

Welcome to Technology Short Take #30. This Technology Short Take is a bit heavy on the networking side, but I suppose that’s understandable given my recent job change. Enjoy!


  • Ben Cherian, Chief Strategy Officer for Midokura, helps make a case for network virtualization. (Note: Midokura makes a network virtualization solution.) If you’re wondering about network virtualization and why there is a focus on it, this post might help shed some light. Given that it was written by a network virtualization vendor, it might seem a bit rah-rah, so keep that in mind.
  • Brent Salisbury has a fantastic series on OpenFlow. It’s so good I wish I’d written it. He starts out by discussing proactive vs. reactive flows, in which Brent explains that OpenFlow performance is less about OpenFlow and more about how flows are inserted into the hardware. Next, he tackles the concerns over the scale of flow-based forwarding in his post on coarse vs. fine flows. I love this quote from that article: “The second misnomer is, flow based forwarding does not scale. Bad designs are what do not scale.” Great statement! The third post in the series tackles what Brent calls hybrid SDN deployment strategies, and Brent provides some great design considerations for organizations looking to deploy an SDN solution. I’m looking forward to the fourth and final article in the series!
  • Also, if you’re looking for some additional context to the TCAM considerations that Brent discusses in his OpenFlow series, check out this Packet Pushers blog post on OpenFlow switching performance.
  • Another one from Brent, this time on Provider Bridging and Provider Backbone Bridging. Good explanation—it certainly helped me.
  • This article by Avi Chesla points out a potential security weakness in SDN, in the form of a DoS (Denial of Service) attack where many switching nodes request many flows from the central controller. It appears to me that this would only be an issue for networks using fine-grained, reactive flows. Am I wrong?
  • Scott Hogg has a nice list of 9 common Spanning Tree mistakes you shouldn’t make.
  • Schuberg Philis has a nice write-up of their CloudStack+NVP deployment here.
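Brent’s proactive-vs-reactive distinction from the series above can be sketched in a toy model (my own simplification for illustration only):

```python
class Controller:
    """Toy SDN controller: computes a forwarding decision for a flow."""
    def __init__(self):
        self.requests = 0
    def compute_flow(self, match):
        self.requests += 1           # each request is a controller round trip
        return f"forward({match})"

class Switch:
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}         # match -> action (the "hardware" table)

    def install_proactively(self, matches):
        # Proactive: flows are pushed down before any traffic arrives.
        for m in matches:
            self.flow_table[m] = self.controller.compute_flow(m)

    def packet_in(self, match):
        # Reactive: a table miss punts the packet to the controller,
        # adding latency and controller load for every new flow.
        if match not in self.flow_table:
            self.flow_table[match] = self.controller.compute_flow(match)
        return self.flow_table[match]

ctrl = Controller()
sw = Switch(ctrl)
sw.install_proactively(["10.0.0.0/24"])
for _ in range(1000):
    sw.packet_in("10.0.0.0/24")   # all table hits -- no controller involvement
print(ctrl.requests)               # 1: only the proactive install
```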


  • Alex Galbraith recently posted a two-part series on what he calls the “NanoLab,” a home lab built on the Intel NUC (“Next Unit of Computing”). It’s a good read for those of you looking for some very quiet and very small home lab equipment, and Alex does a good job of providing all the details. Check out part 1 here and part 2 here.
  • At first, I thought this article was written from a sarcastic point of view, but it turns out that Kevin Houston’s post on 5 reasons why you may not want blade servers is the real deal. It’s nice to see someone who focuses on blade servers opening up about why they aren’t necessarily the best fit for all situations.


  • Nick Buraglio has a good post on the potential impact of Arista’s new DANZ functionality on tap aggregation solutions in the security market. It will be interesting to see how this shapes up. BTW, Nick’s writing some pretty good content, so if you’re not subscribed to his blog I’d reconsider.

Cloud Computing/Cloud Management

  • Although this post is a bit older (it’s from September of last year), it’s still an interesting comparison of both OpenStack and CloudStack. Note that the author apparently works for Mirantis, which is a company that provides OpenStack consulting services. In spite of that fact, he manages to provide a reasonably balanced approach to comparing the two cloud management platforms. Both of them (I believe) have had releases since this time, so some of the points may not be valid any longer.
  • Are you a CloudStack fan? If so, you should probably check out this collection of links from Aaron Delp. Aaron’s focused a lot more on CloudStack now that he’s at Citrix, so he might be a good resource if that is your cloud management platform of choice.

Operating Systems/Applications

  • If you’re just now getting into the whole configuration management scene where tools like Puppet, Chef, and others play, you might find this article helpful. It walks through the difference between configuring a system imperatively and configuring a system declaratively (hint: Puppet, Chef, and others are declarative). It does presume a small bit of programming knowledge in the examples, but even as a non-programmer I found it useful.
  • Here’s a three-part series on beginning Puppet that you might find helpful as well (Part 1, Part 2, and Part 3).
  • If you’re a developer-type person, I would first ask why you’re reading my site, then I’d point you to this post on the AMQP, MQTT, and STOMP messaging protocols.
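The imperative/declarative distinction mentioned above can be shown in a few lines (my own toy sketch, not tied to Puppet, Chef, or any other particular tool):

```python
# Imperative: spell out each step to perform, in order.
def configure_imperatively(server):
    server.run("apt-get install -y nginx")
    server.run("cp nginx.conf /etc/nginx/nginx.conf")
    server.run("service nginx start")

# Declarative: describe the desired end state; the tool figures out
# which steps (if any) are needed to converge on it.
desired_state = {
    "package:nginx": "installed",
    "file:/etc/nginx/nginx.conf": "from nginx.conf",
    "service:nginx": "running",
}

def converge(current_state, desired_state):
    """Return only the resources that differ and thus need action --
    this is the heart of a declarative tool's convergence loop."""
    return {k: v for k, v in desired_state.items()
            if current_state.get(k) != v}

# If the system already matches the desired state, nothing happens:
print(converge(desired_state, desired_state))  # {}
# On a bare system, all three resources need action:
print(len(converge({}, desired_state)))        # 3
```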



  • Although these posts are storage-related, the real focus is on how the storage stack is implemented in a virtualization solution, which is why I’m putting them in this section. Cormac Hogan has a series going titled “Pluggable Storage Architecture (PSA) Deep Dive” (part 1 here, part 2 here, part 3 here). If you want more PSA information, you’d be hard-pressed to find a better source. Well worth reading for VMware admins and architects.
  • Chris Colotti shares information on a little-known vSwitch advanced setting that helps resolve an issue with multicast traffic and NICs in promiscuous mode in this post.
  • Frank Denneman reminds everyone in this post that the concurrent vMotion limit only goes to 8 concurrent vMotions when vSphere detects the NIC speed at 10Gbps. Anything less causes the concurrent limit to remain at 4. For those of you using solutions like HP VirtualConnect or similar that allow you to slice and dice a 10Gb link into smaller links, this is a design consideration you’ll want to be sure to incorporate. Good post Frank!
  • Interested in some OpenStack inception? See here. How about some oVirt inception? See here. What’s that? Not familiar with oVirt? No problem—see here.
  • Windows Backup has native Hyper-V support in Windows Server 2012. That’s cool, but are you surprised? I’m not.
  • Red Hat and IBM put out a press release today on improved I/O performance with RHEL 6.4 and KVM. The press release claims that a single KVM guest on RHEL 6.4 can support up to 1.5 million IOPS. (Cue timer until next virtualization vendor ups the ante…)
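Frank’s vMotion concurrency rule above reduces to a simple threshold, which can be captured in a tiny helper (my own sketch of the behavior he describes):

```python
def concurrent_vmotion_limit(detected_nic_gbps: float) -> int:
    """Return the concurrent vMotion limit for a vMotion NIC, per the
    behavior described above: 8 only when vSphere detects a full
    10Gbps link, otherwise 4."""
    return 8 if detected_nic_gbps >= 10 else 4

# A 10Gb link carved into smaller logical NICs (e.g., via HP
# VirtualConnect) is detected at the smaller speed -- so the limit drops.
print(concurrent_vmotion_limit(10))  # 8
print(concurrent_vmotion_limit(4))   # 4 -- e.g., a 4Gbps FlexNIC slice
```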

I guess I should wrap things up now, even though I still have more articles that I’d love to share with readers. Perhaps a “mini-TST”…

In any event, courteous comments are always welcome, so feel free to speak up below. Thanks for reading and I hope you’ve found something useful!

