Using GRE Tunnels with Open vSwitch

I’m back with another “how to” article on Open vSwitch (OVS), this time taking a look at using GRE (Generic Routing Encapsulation) tunnels with OVS. OVS can use GRE tunnels between hosts as a way of encapsulating traffic and creating an overlay network. In fact, OpenStack Quantum can (and does) leverage this functionality to help separate different “tenant networks” from one another. In this write-up, I’ll walk you through the process of configuring OVS to use a GRE tunnel to build an overlay network between two hypervisors running KVM.

Naturally, any sort of “how to” such as this always builds upon the work of others. In particular, I found a couple of Brent Salisbury’s articles (here and here) especially useful.

This process has 3 basic steps:

  1. Create an isolated bridge for VM connectivity.
  2. Create a GRE tunnel endpoint on each hypervisor.
  3. Add a GRE interface and establish the GRE tunnel.

These steps assume that you’ve already installed OVS on your Linux distribution of choice. I haven’t explicitly done a write-up on this, but there are numerous posts from a variety of authors (in this regard, Google is your friend).

We’ll start with an overview of the topology, then we’ll jump into the specific configuration steps.

Reviewing the Topology

The graphic below shows the basic topology of what we have going on here:

Topology overview

We have two hypervisors (CentOS 6.3 and KVM, in my case), both running OVS (an older version, 1.7.1). Each hypervisor has one OVS bridge with at least one physical interface associated with it (shown as br0 connected to eth0 in the diagram). As part of this process, you’ll create the other internal interfaces (the tep and gre interfaces), as well as the second, isolated bridge to which the VMs will connect. You’ll then create a GRE tunnel between the hypervisors and test VM-to-VM connectivity.

Creating an Isolated Bridge

The first step is to create the isolated OVS bridge to which the VMs will connect. I call this an “isolated bridge” because the bridge has no physical interfaces attached. (Side note: this idea of an isolated bridge is fairly common in OpenStack and NVP environments, where it’s usually called the integration bridge. The concept is the same.)

The command is very simple, actually:

ovs-vsctl add-br br2

Yes, that’s it. Feel free to substitute a different name for br2 in the command above, if you like, but just make note of the name as you’ll need it later.
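If you’d like to confirm the bridge was created, listing the bridges on the host is a quick check:

ovs-vsctl list-br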

To make things easier for myself, once I’d created the isolated bridge I then created a libvirt network for it so that it was dead-easy to attach VMs to this new isolated bridge.
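In case it’s useful, a libvirt network definition for an OVS bridge looks something like the sketch below. Treat it as a rough example rather than my exact configuration: the network name (ovs-isolated) and the file path are placeholders, and it assumes your libvirt build includes Open vSwitch support (0.9.11 or later, if memory serves).

<network>
  <!-- name and bridge below are placeholders; adjust to your environment -->
  <name>ovs-isolated</name>
  <forward mode='bridge'/>
  <bridge name='br2'/>
  <virtualport type='openvswitch'/>
</network>

Save that to a file (for example, /tmp/ovs-isolated.xml), then define and start the network:

virsh net-define /tmp/ovs-isolated.xml
virsh net-start ovs-isolated
virsh net-autostart ovs-isolated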

Configuring the GRE Tunnel Endpoint

The GRE tunnel endpoint is an interface on each hypervisor that will, as the name implies, serve as the endpoint for the GRE tunnel. My purpose in creating a separate GRE tunnel endpoint is to separate hypervisor management traffic from GRE traffic, thus allowing for an architecture that might leverage a separate management network (which is typically considered a recommended practice).

To create the GRE tunnel endpoint, I’m going to use the same technique I described in my post on running host management traffic through OVS. Specifically, we’ll create an internal interface and assign it an IP address.

To create the internal interface, use this command:

ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal

In your environment, you’ll substitute br0 with the name of the OVS bridge that has a physical interface attached; note that this is not the isolated bridge you created earlier, since the tunnel endpoint needs to sit on a bridge with physical connectivity. You could also use a different name than tep0. Since this name is essentially for human consumption only, use what makes sense to you. Since this is a tunnel endpoint, tep0 made sense to me.

Once the internal interface is established, assign it an IP address using ifconfig or ip, whichever you prefer. I’m still getting used to using ip (more on that in a future post, most likely), so I tend to use ifconfig, like this:

ifconfig tep0 192.168.200.20 netmask 255.255.255.0
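If you prefer ip, the equivalent would be along these lines:

ip addr add 192.168.200.20/24 dev tep0
ip link set tep0 up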

Obviously, you’ll want to use an IP addressing scheme that makes sense for your environment. One important note: don’t use the same subnet as you’ve assigned to other interfaces on the hypervisor, or else you won’t be able to control which interface the GRE tunnel originates (or terminates) on. This is because the Linux routing table on the hypervisor controls how the traffic is routed. (You could use source routing, a topic I plan to discuss in a future post, but that’s beyond the scope of this article.)
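Also keep in mind that an address assigned with ifconfig or ip won’t survive a reboot. On CentOS, if the OVS-supplied network scripts are installed, an ifcfg file roughly like this one (as /etc/sysconfig/network-scripts/ifcfg-tep0) should make the assignment persistent; consider it a sketch, not a tested configuration:

# assumes the Open vSwitch network scripts are installed
DEVICE=tep0
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.200.20
NETMASK=255.255.255.0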

Repeat this process on the other hypervisor, and be sure to make note of the IP addresses assigned to the GRE tunnel endpoint on each hypervisor; you’ll need those addresses shortly. Once you’ve established the GRE tunnel endpoint on each hypervisor, test connectivity between the endpoints using ping or a similar tool. If connectivity is good, you’re clear to proceed; if not, you’ll need to resolve that before moving on.

Establishing the GRE Tunnel

By this point, you’ve created the isolated bridge, established the GRE tunnel endpoints, and tested connectivity between those endpoints. You’re now ready to establish the GRE tunnel.

Use this command to add a GRE interface to the isolated bridge on each hypervisor:

ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre \
options:remote_ip=<GRE tunnel endpoint on other hypervisor>

Here, substitute the name of the isolated bridge you created earlier for br2, and feel free to use something other than gre0 for the interface name. I think using gre as the base name for the GRE interfaces makes sense, but run with what makes sense to you.
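As a concrete example, using the tep0 address assigned earlier on the first hypervisor (192.168.200.20) and assuming, hypothetically, 192.168.200.21 on the second hypervisor, you’d run this on the first hypervisor:

ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.200.21

And this on the second:

ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.200.20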

Once you repeat this command on both hypervisors, the GRE tunnel should be up and running. (Troubleshooting the GRE tunnel is one area where my knowledge is weak; anyone have any suggestions or commands that we can use here?)
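In the meantime, a couple of generic sanity checks are better than nothing. Running ovs-vsctl show should list the gre0 port with the remote_ip you configured, and while pinging across the tunnel you should see GRE-encapsulated traffic (IP protocol 47) leaving the physical interface:

ovs-vsctl show
tcpdump -i eth0 'ip proto 47'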

Testing VM Connectivity

As part of this process, I spun up an Ubuntu 12.04 server image on each hypervisor (using virt-install as I outlined here), attached each VM to the isolated bridge created earlier on that hypervisor, and assigned each VM an IP address from an entirely different subnet than the physical network was using (in this case, 10.10.10.x).
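If you’re following along, a virt-install command along these lines should do the trick; it’s only a sketch, and the VM name, disk path, ISO path, and the ovs-isolated libvirt network (from the sketch earlier) are all placeholders:

# names, paths, and the network name below are placeholders
virt-install --name web01 --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/web01.qcow2,size=10,format=qcow2 \
  --network network=ovs-isolated,model=virtio \
  --cdrom /path/to/ubuntu-12.04-server-amd64.iso \
  --graphics vnc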

Here’s the output of the route -n command on the Ubuntu guest, to show that it has no knowledge of the “external” IP subnet—it knows only about its own interfaces:

ubuntu:~ root$ route -n
Kernel IP routing table
Destination  Gateway       Genmask        Flags Metric Ref Use Iface
0.0.0.0      10.10.10.254  0.0.0.0        UG    100    0   0   eth0
10.10.10.0   0.0.0.0       255.255.255.0  U     0      0   0   eth0

Similarly, here’s the output of the route -n command on the CentOS host, showing that it has no knowledge of the guest’s IP subnet:

centos:~ root$ route -n
Kernel IP routing table
Destination  Gateway        Genmask        Flags Metric Ref Use Iface
192.168.2.0  0.0.0.0        255.255.255.0  U     0      0   0   tep0
192.168.1.0  0.0.0.0        255.255.255.0  U     0      0   0   mgmt0
0.0.0.0      192.168.1.254  0.0.0.0        UG    0      0   0   mgmt0

In my case, VM1 (named web01) was given 10.10.10.1; VM2 (named web02) was given 10.10.10.2. Once I went through the steps outlined above, I was able to successfully ping VM2 from VM1, as you can see in this screenshot:

VM-to-VM connectivity over GRE tunnel

(Although it’s not shown here, connectivity from VM2 to VM1 was obviously successful as well.)

“OK, that’s cool, but why do I care?” you might ask.

In this particular context, it’s a bit of a science experiment. However, if you take a step back and begin to look at the bigger picture, then (hopefully) something starts to emerge:

  • We can use an encapsulation protocol (GRE in this case, but it could have just as easily been STT or VXLAN) to isolate VM traffic from the physical network and from other VM traffic. (Think multi-tenancy.)
  • While this process was manual, think about some sort of controller (an OpenFlow controller, perhaps?) that could help automate this process based on its knowledge of the VM topology.
  • Using a virtualized router or virtualized firewall, I could easily provide connectivity into or out of this isolated (encapsulated) private network. (This is probably something I’ll experiment with later.)
  • What if we wrapped some sort of orchestration framework around this, to help deploy VMs, create networks, add routers/firewalls automatically, all based on the customer’s needs? (OpenStack Networking, anyone?)

Anyway, I hope this is helpful to someone. As always, I welcome feedback and suggestions for improvement, so feel free to speak up in the comments below. Vendor disclosures, where appropriate, are greatly appreciated. Thanks!


  1. Lennie’s avatar

    “We can use an encapsulation protocol (GRE in this case, but it could have just as easily been STT or VXLAN)”

    Actually, I think that isn’t completely true and this is just an implementation detail but:
    VXLAN support in Open vSwitch was added in version 1.10.0, which was released May 1st and thus didn’t make it into the Ubuntu release from a couple of days before that, or into a Red Hat-derived distribution yet.

    And STT isn’t in the Linux kernel or Open vSwitch as far as I know. Now that VMware bought Nicira, I hope someone is still working on that, because STT does have its advantages, the biggest being the ability to offload part of the work to the NIC. The code I did see was not yet able to take advantage of that, though.

    The OpenStack Quantum Open vSwitch plugin also only supports GRE at this point.

    In other Open vSwitch overlay news, CAPWAP dataplane support (some sort of experiment by the developers) was removed when they added VXLAN.

    In my test setup I went a bit further and changed the MTU of the NIC, Open vSwitch, and the VM, so the VM can have the full 1500.

    I hope to have time to try connecting sflow with the Host sFlow agent to see if I can make pretty graphs. Probably not just graphs for the network traffic, but also KVM/libvirt.

    I also still need to figure out how I’m gonna provide stateless NAT.

    For now, though, I’m focusing on the underlay network.

  2. Chris Bennett’s avatar

    Great article – I’ve really enjoyed following the Open vSwitch related posts you’ve been making.

    This might be a typo? ‘Using a virtualized firewall or virtualized firewall, [..]‘

  3. Truman’s avatar

    Awesomely written mate. Really good writeup on OVS and the magic of GRE tunnels.

  4. Dmitri Kalintsev’s avatar

    > ovs-vsctl add-port br2 tep0 -- set interface tep0 type=internal

    This should probably be “br0”, rather than “br2”?

  5. Dmitri Kalintsev’s avatar

    Also,

    > Using a virtualized firewall or virtualized firewall, I could easily provide connectivity into or out of this isolated (encapsulated) private network.

    probably had in mind a “router” instead of one of the “firewall”s ;)

  6. slowe’s avatar

    Lennie, my point was that the encapsulation protocol—which is what so many people spend so much time discussing—is but one part of the overall picture. VXLAN support is already in OVS (as you mentioned) and CAPWAP is out. Who knows what will be next? NVGRE? The real value of this sort of configuration is that you will be able to use whatever encapsulation method is best for your environment.

    Chris (and others), thanks for catching that typo! It’s now fixed.

    Dmitri, you are correct—the tunnel endpoint interface should be attached to a bridge with physical network connectivity. That would be br0, not br2, as you point out. Thanks for catching that error!

  7. Lennie’s avatar

    Scott, that is why I started my comment with: “and this is just an implementation detail but” because I’m sure most of it will be solved in 6 months to a year.

    For example I’ve seen the code and adding VXLAN to the OpenStack Quantum Open vSwitch plugin is probably just changing 10 lines of code where it only says TYPE_GRE now.

    I know it’s possible to swap encapsulations, and there is one I’ve been wanting to try that is also about a 10-line kernel code change: replacing UDP with DCCP in VXLAN. That would add congestion control. I keep wondering if that would help with sudden changes in the network.

    I think it might have a positive effect in preventing even more packet loss when some networking device fails and there are suddenly fewer links available, but I haven’t had the time to test it.

    I know the switches’ ECMP implementation might not like DCCP, but I don’t want to depend on that, I’m trying to depend on routing alone. Using DCCP would just be a test to see what the effect of congestion control would be in such a case. What I would like to see is someone add multipath like in MultiPath-TCP so you can better utilize all links and have fast failover. But that is a lot of work, that isn’t just an experiment anymore.

    So far no-one told me I’m an idiot for wanting to try these things, but no-one said it was a great idea either.

  8. Jared Evans’s avatar

    When using GRE tunnels, be aware of MTU size, etc., in case you encounter odd connectivity problems.

    http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a0080093f1f.shtml

  9. Sean’s avatar

    Scott:
    I’ve been trying to conceptualize what’s going on here from the perspective of the Linux OS because I’m considering a hypervisor design much like you presented.

    Based on your scenario, what is the use of br0 (and hence tep0) in your current deployment? Could you deploy this without br0 and use eth0 as the tunnel endpoint? I understand the concept of having a separate management network, but I’m assuming that you could use eth1 as your backbone network interface and have gre0 tied to that when you decided to decouple your management network from your backbone interface.

    Just trying to think this through. Thanks!

  10. Donny’s avatar

    “What if we wrapped some sort of orchestration framework around this, to help deploy VMs, create networks, add routers/firewalls automatically, all based on the customer’s needs? (OpenStack Networking, anyone?)”

    VCNS…

  11. Chris Paggen’s avatar

    What’s quite interesting from an educational/troubleshooting standpoint is to run “ovs-appctl dpif/dump-flows br2” to see the OpenFlow action that points VM traffic to the GRE tunnel (and the reverse flow).

    You can then repeat this with br0 and notice the IP Protocol 47 bi-directional flow being set up (the GRE tunnel).

  12. Sascha’s avatar

    Newbie question:

    Does this mean the GRE tunnel actually works like a patch cable between the two isolated bridges?

  13. Sascha’s avatar

    Also, what I don’t get: if br2 is an isolated bridge, how does it get traffic from gre0 (on br2) over to tep0 (on br0)?

  14. slowe’s avatar

    Sean, Sascha, I’ve written a new post on how various Open vSwitch (OVS) configurations affect traffic. Hopefully this will help clear up some of your questions. Have a look at the article here:

    http://blog.scottlowe.org/2013/05/15/examining-open-vswitch-traffic-patterns/

    Thanks!

  15. gn’s avatar

    All nice to link 2 stupid virtual machines. But what happens when you need to link 10 virtual machines on 10 hypervisors in one LAN? Do I need to build a full mesh of GRE tunnels? Does it support mGRE? Or daisy-chained GRE tunnels? If daisy-chained, when a middle hypervisor fails, it will split the VLAN; does it rebuild dynamically around this? (If so, what convergence time?) If not dynamically, I will have to add redundant GRE links manually, and there we have the old beast: STP. Will it run STP? What version? Who will configure it (you or the controller)? And so we are back to plain old networking problems…

  16. Lennie’s avatar

    @gn this is why normal GRE isn’t the preferred method for most, and why you see people proposing protocols like VXLAN, NVGRE, and STT. But GRE is widely available, so you can use it for examples and test setups. The protocol used doesn’t really matter much for learning how to configure it.

    Some people use LACP or similar solutions to create some redundancy.

    I actually run a routing daemon on my hypervisor so that if a link fails, it will reroute traffic over another link.

    To each their own.

    I’m sure in a few years a best common practice will present itself. Or maybe switches with OpenFlow will be commonplace and we’ll have found an effective and reliable way to run that.

  17. slowe’s avatar

    GN, this post is not intended to say that this is the way things should be done, but rather as an illustrative post that describes how you might use these technologies. Are these technologies the right technologies for every problem? No, of course not; that’s the basis of everyone’s favorite IT answer, “It depends.” This is just another tool in your toolbelt.

  18. kevin’s avatar

    Thanks for the nice post, but is it possible to extend it to more than 2 hosts?

  19. slowe’s avatar

    Kevin, it is certainly possible to extend this configuration to more than 2 hosts. You’d need to create a full-mesh configuration, where each host has a GRE tunnel to every other host.

  20. kevin’s avatar

    Slowe, thanks for your reply, but I wonder how this scales. If we have 5 hosts, don’t we need 4 endpoints on each host? Also, do we need these endpoints to be on different networks?
    For 100 hosts, 99 endpoints… oh no! Can you please clarify this? Do we have any other options? Expecting a reply.

  21. slowe’s avatar

    Kevin, you do not need a separate tunnel endpoint for each host-to-host tunnel. A single tunnel endpoint can be used to initiate/terminate all host-to-host tunnels.
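    For example, assuming a tunnel endpoint at 192.168.200.20 on this host and two other hosts at (hypothetically) 192.168.200.21 and 192.168.200.22, both tunnels hang off the same isolated bridge and use the same local endpoint:

    ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.200.21
    ovs-vsctl add-port br2 gre1 -- set interface gre1 type=gre options:remote_ip=192.168.200.22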

  22. kevin’s avatar

    Thanks a lot slowe, thanks for your tip :) and it worked, but I have found that if we have more than one isolated network, we need to have multiple endpoints. In my case I tried isolated1 and isolated2, but isolated2 didn’t work until I created another endpoint in a different network. Could you please say whether it’s possible with the same endpoint that we used for isolated1?

  23. ananth’s avatar

    Nice post, but why do we need the endpoint tep0? If we have an IP address on that bridge (in this case br0), we don’t need to specify an additional IP for tep0. In my case it worked without adding an additional IP.

  24. slowe’s avatar

    Ananth, we don’t have an IP address assigned to br0, but if you did it would work fine. To understand why, read this article:

    http://blog.scottlowe.org/2013/05/15/examining-open-vswitch-traffic-patterns/

    Thanks!

  25. Ahmed’s avatar

    I want to create a full-mesh topology with GRE. Each physical node has GRE tunnels to the other nodes. When I tested this with 5 nodes, my VMs didn’t respond to ping. However, in a star topology it works fine.
    My question now: does OVS support a mesh topology based on multiple GRE tunnels, or does GRE itself not work in a mesh topology?

  26. Denis’s avatar

    Hi, I’m trying to do the same thing with KVM + OVS, but I got stuck on GRE + OVS. I have CentOS 6.4 + OVS installed, and when I try to start OVS there are some errors in the log:
    2013-06-18T10:42:26Z|00004|dpif|WARN|system@ovs-system: failed to add gre0 as port: Address family not supported by protocol

    Tried to google that error, no luck; any help would be great.

    Thanks

  27. Denis’s avatar

    Problem SOLVED: you need to install Open vSwitch (and the kernel module) from source; don’t trust the RDO repository.

  28. fr33host’s avatar

    hi,

    Denis, how did you resolve this issue? Did you build a custom RPM from the openvswitch source, or some other way?

  29. othmane’s avatar

    Hi Scott,

    I’ve followed the tutorial and I could create one GRE tunnel.
    I want to know if it’s possible to set up multiple GRE tunnels on the same link.
    Actually, I have two hosts: Host 1 with 2 VMs (VM1 and VM2) and OVS1, and Host 2 with VM3, VM4, and OVS2.

    In OVS1 (10.180.121.41) I have three bridges: br0 attached to the physical interface eth0, br1 (192.168.10.2) connected to VM1 (192.168.10.4), and br2 (172.16.10.2) connected to VM2 (172.16.10.4).

    In OVS2 (10.180.121.70) I have three bridges too: br0 attached to the physical interface eth0, br1 (192.168.10.3) connected to VM3 (192.168.10.5), and br2 (172.16.10.3) connected to VM4 (172.16.10.5).

    I established a first GRE tunnel between VM1 and VM3 and everything worked fine! I could ping the two machines. However, when I try to establish a second GRE tunnel over the same link between VM2 and VM4, it doesn’t work. Actually I created the tunnel correctly, the configuration in “ovs-vsctl show” is correct, and I didn’t have any errors, but both the ping between 172.16.10.3 and 172.16.10.2 and the ping between 172.16.10.4 and 172.16.10.5 failed. When I remove br1 from both machines, the previous pings work. “route -n” is correct too. I was wondering if it is possible to establish multiple GRE tunnels over the same link.

    Any help will be appreciated, thanks in advance.

  30. Jinhwan’s avatar

    Thanks for your post ‘examining-open-vswitch-traffic-patterns’; I got how traffic goes out from host1 to host2 over GRE when I ping from a VM at host1 to a VM at host2. But I still can’t understand how traffic that arrives at the tep of host2 reaches br2, the isolated bridge. There is no connection between br0 at host2 and br2 at host2, so how does traffic find its way to br2 from br0? It looks like there is no rule that will route traffic at br0 to br2. I did ‘ovs-flows dump-flows’ on host2, but it only shows me the flow that comes in and out from eth0 to tep0 and forwards packets to gre0.

  31. Shane’s avatar

    So I have some developers using the latest and greatest OpenStack networking tools (Neutron, etc.) and noticed they are playing with this GRE option. One basic question I have: GRE is a non-broadcast, point-to-point kind of deal. It’s not apples to apples with VLANs. This model is great for two hosts (point to point), but what about 3 hosts, 4, 5? A routing protocol or static routes could direct traffic, but you’d need some way to synchronize all the hosts and a rulebook to play by (don’t use the same subnet, etc.).

    It seems to me something like MPLS would be more versatile with less logical configuration overhead. GRE is ancient technology designed for point-to-point networks. This can be done (and is), but I’m not sure it’s the most efficient.

  32. Buiosu’s avatar

    Hello Scott,

    is there a way to configure a bare Linux (Ubuntu) VM (no hypervisor) with OVS installed to serve as a tunnel endpoint for a machine/VM or whatever device is connected directly to its eth interface (having another eth as an interface to the Internet)?
    I would like to test software EoGRE and VXLAN tunnels, assuming that the endpoints are independent VMs or PCs.

    Regards

  33. Pasquale’s avatar

    I don’t think I get it…
    1) What is the purpose of tep0? I mean, I see no code line in which you use it. As it was created, the GRE tunnel now seems to pass directly to eth0.
    2) Now the VMs can communicate with each other, but what if VM1 wants to communicate with the external world, and so through eth0? It is not a special case: on VMs you typically install servers which have to interact with real clients…
    How do I link br1 to br0? I can’t find an answer…

  34. Shiva’s avatar

    Hello Scott,
    I have followed your tutorial on GRE tunneling. I was able to set it up last time; now I am having problems.
    Here is my setup: 2 host machines with 2 VMs.

    Hypervisor1
    ovs-vsctl add-br virbr3
    ifconfig virbr3 20.1.1.1 netmask 255.255.255.0
    ovs-vsctl add-port virbr3 gre2 -- set interface gre2 type=gre options:remote_ip=a.b.c.d
    VM1 IP on hypervisor1 – 20.1.1.20

    Hypervisor2
    ovs-vsctl add-br virbr3
    ifconfig virbr3 20.1.1.2 netmask 255.255.255.0
    ovs-vsctl add-port virbr3 gre2 -- set interface gre2 type=gre options:remote_ip=p.q.r.s

    VM2 IP on hypervisor2 – 20.1.1.30

    I am able to ping from VM1 to hypervisor 1 and from VM2 to hypervisor 2. The VMs are not able to ping any external IPs, though I am able to connect to the external world from the host with no issues.

    Can you please let me know what I am doing wrong here? Your help in this regard is greatly appreciated.

  35. Shiva’s avatar

    Just to add

    External bridge ovs-commands

    On Hypervisor1
    ovs-vsctl add-br br1
    ovs-vsctl add-port br1 eth0
    ifconfig eth0 0.0.0.0
    ifconfig br1 p.q.r.s 255.255.255.0

    On Hypervisor2
    ovs-vsctl add-br br1
    ovs-vsctl add-port br1 eth0
    ifconfig eth0 0.0.0.0
    ifconfig br1 a.b.c.d 255.255.255.0

  36. Shiva’s avatar

    Hello Scott, I am setting up two GRE tunnels between two hosts using the same external bridge (br1 in this case). I use virbr3 for internal communication. These are my config steps:

    Hypervisor 1:
    External communication:
    ovs-vsctl add-br br1
    ovs-vsctl add-port eth0
    ifconfig br1 p.q.r.s netmask 255.255.255.0
    Internal bridge for VM communication:
    ovs-vsctl add-br virbr3
    ovs-vsctl show
    ovs-vsctl add-port virbr3 gre2 -- set interface gre2 type=gre options:remote_ip:a.b.c.d

    Hypervisor 2:
    External communication:
    ovs-vsctl add-br br1
    ovs-vsctl add-port eth0
    ifconfig br1 a.b.c.d netmask 255.255.255.0
    Internal bridge for VM communication:
    ovs-vsctl add-br virbr3
    ovs-vsctl show
    ovs-vsctl add-port virbr3 gre2 -- set interface gre2 type=gre options:remote_ip:p.q.r.s

    I am not able to communicate with the outside world from the VMs. I am just able to reach the host on which a VM resides, and vice versa. Can you please let me know what I am missing here?

  37. slowe’s avatar

    Shiva, you can’t ping from the VMs on the GRE network to systems outside the GRE network without “something” to strip off the GRE headers. This configuration doesn’t provide that “something”, so connectivity from the VMs to systems not connected to the GRE overlay isn’t expected to work.

    If you’d like an idea of how to build the “something” that is needed, take a look at how OpenStack Neutron and the OVS plugin do it. OpenStack combines OVS, network namespaces, and iptables rules (among other things) to perform this function.

  38. Pasquale’s avatar

    Hello,
    I am a student and I am writing my thesis work on sdn.

    I wanted to ask you a thing about gre tunnels: I’d like to avoid this (working) configuration as I’ll have different openvswitches with a lot of connections and with this configuration I’d have to double everything, switches and ports.

    My idea was to simply attach eth0 to the openvswitch, anyway I noticed that the gre tunnel works as long as eth0 belongs to the same vlan.

    For my project I would need to have gre tunnels belonging to a specific vlan, while eth0 should have none…is there something I am doing wrong?

    If eth0 is not attached at all, it all works fine… maybe Open vSwitch requires at least an external network interface to make GRE tunnels work?

  39. zhi’s avatar

    Hello, othmane, have you worked your scenario out? I want to do pretty much the same thing you posted here but I haven’t started yet. Do you have suggestions or comments? Thanks for your help in advance…

  40. Tao Zhou’s avatar

    In your example, I don’t think you need to create a br0.
    As long as there’s connectivity between Hypervisor 1 and Hypervisor 2, you can achieve your target.
    I don’t understand why you create br0.

    Tao

  41. slowe’s avatar

    Tao, br0 is used in conjunction with tep0 to control the IP routing for the GRE traffic. tep0 is the IP interface by which the hypervisors communicate via GRE. You can most certainly skip br0 and tep0 and allow the Linux host routing stack to choose which IP interface to use.

  42. chandra sekhar’s avatar

    Hello Scott,
    I have created VMs on an Ubuntu 12.04 LTS system using the Xen hypervisor…
    I’m trying to set up GRE tunneling between two VMs using Open vSwitch v1.9.3. I have 2 hosts: host 1 has VM1 (192.168.1.4), xenbr0 (Xen hypervisor) (192.168.1.3), physical eth0 (10.12.1.10), and OVS1; host 2 has VM2 (192.168.1.6), xenbr0 (Xen hypervisor) (192.168.1.5), physical eth0 (10.12.1.12), and OVS2.

    I have understood your steps but the problem here is

    How do I bridge the ‘eth0’ of VM1 to the OVS1 of host1? I’m stuck here…

    Can you please let me know what I have to do here?
    Your help in this regard is greatly appreciated.

  43. Taha’s avatar

    Hello othmane,

    I’m working on the same setup but only one VM per host….
    Can you send the complete configuration commands for the your setup?
    and How you did the bridging between the ‘eth0? of VM to the OVS of host?

    Your help in this regard is greatly appreciated.

  44. Devesh’s avatar

    Hi Scott,
    I have three Ubuntu Linux machines. One I am using as a controller (the controller is not started). I am using the other two machines as Open vSwitches. I created one bridge on each Open vSwitch and attached one virtual machine to each bridge. I gave IPs in the 192.168.1.X range to br-int1, br-int2, VM1, and VM2. All 3 of my machines have 2 ethernet ports. Both OVS switches are connected to the controller machine, and both OVSes are also connected to each other.
    My question is: why am I not able to ping VM2 from VM1 and vice versa (the controller is not running)? In Wireshark I only see ARP packets; it seems like ARP is not resolving. Wireshark says,
    sometimes:
    “who has 192.168.1.181 says 192.168.1.31”
    and sometimes:
    “who has 192.168.1.181 says 0.0.0.0”.

    Thank you so much Scott.

  45. reachlin’s avatar

    After reading another of your fantastic posts (http://blog.scottlowe.org/2013/05/15/examining-open-vswitch-traffic-patterns/), I think br0 and tep0 are not mandatory if I only want to use GRE? Because traffic that hits br2 will go out from eth0?
