Revisiting VXLAN and Layer 3 Connectivity

7 December 2011 · Filed in Explanation
In my earlier post on VXLAN and Layer 3 connectivity, I had a fatal flaw in my thinking and in my diagrams that was corrected for me in the comments to that post. In this post, I want to revisit the idea of Layer 3 connectivity with VXLAN and include the corrected information (and new diagrams).
The “fatal flaw” was that I was working under the impression that we’d have to change network address translation (NAT) mappings on the vShield Edge (VSE) instance that was handling NAT for a particular VXLAN segment. As a result of this incorrect thinking, I stated that VXLAN broke Layer 3 connectivity. As it turns out, I was wrong.
Instead—and this makes perfect sense now that my flawed thinking was pointed out—the VSE instance continues to serve as the default Layer 3 gateway for the workload(s) inside the VXLAN segment.
Consider this diagram, which shows how a workload external to a VXLAN segment communicates with a workload inside a VXLAN segment:
Note that in this diagram, the Linux workload outside the VXLAN segment communicates via the VSE instance handling NAT for that particular VXLAN segment. The VSE instance (VSE 1) passes that communication to the internal workload, and the return traffic follows the same path. Layer 3 connectivity outside of the VXLAN segment is handled via traditional/normal Layer 2/3 methods.
Now consider this diagram, which shows the same communication, but after the Windows-based workload inside the VXLAN segment has now migrated to a different location:
Note that even though the Windows-based workload inside the VXLAN segment now resides on a completely separate VTEP (ESXi 2, in this case), the traffic from the Linux-based workload outside the VXLAN segment continues to move through VSE 1. That’s because VSE 1 is still the Layer 3 default gateway for the IP subnet inside the VXLAN segment. Therefore—and this is where I was wrong earlier—Layer 3 connectivity is not broken, but it does have to “horseshoe” across to the other data center and then back again, as illustrated above. This is the classic traffic pattern that we see with other overlay technologies, like OTV.
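To make the "horseshoe" concrete, here is a toy Python sketch of the path selection described above. This is purely illustrative, not anything from vShield or ESXi; the site names (`site-1`, `site-2`) are placeholders I've invented for the two data centers in the diagrams. The key point it models is that the VSE default gateway is pinned to its original site, so external traffic always enters there, even after the workload has migrated to a VTEP in the other site.

```python
def path_to_vxlan_workload(gateway_site: str, workload_site: str) -> list[str]:
    """Return the hops external traffic takes to reach a workload
    inside a VXLAN segment.

    The VSE instance acting as default Layer 3 gateway for the segment's
    subnet stays where it is; traffic must enter through it and is then
    carried over VXLAN to whichever VTEP currently hosts the workload.
    """
    hops = ["external-network", f"VSE-1@{gateway_site}"]
    if workload_site != gateway_site:
        # The "horseshoe": traffic crosses to the other data center
        # inside the VXLAN overlay, even though it entered at the
        # gateway's site, and the return traffic retraces the path.
        hops.append(f"vxlan-tunnel:{gateway_site}->{workload_site}")
    hops.append(f"workload@{workload_site}")
    return hops

# Before migration: the workload and its gateway are in the same site.
print(path_to_vxlan_workload("site-1", "site-1"))

# After vMotion to ESXi 2 in the other site: the gateway does not move
# with the workload, so an extra cross-site leg appears in the path.
print(path_to_vxlan_workload("site-1", "site-2"))
```

Running the two cases shows the post-migration path picking up the extra cross-data-center leg, which is exactly the suboptimal pattern that LISP-style solutions aim to eliminate.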
For me, while this addresses Layer 3 connectivity after a migration with VXLAN, it does bring up other questions:
How does one provide redundancy at the VSE level? Is there VRRP support in VSE, or an equivalent function?
Because Layer 3 connectivity is maintained, what now is the role of OTV? Is OTV relegated to handling Layer 2 extensions only for non-virtualized workloads?
How do we now propose to handle the “horseshoe” routing issue? It would seem to me that the only way to address this would be to port support for LISP (or an equivalent protocol) into VSE.
Feel free to post any questions, thoughts, or corrections in the comments below. Thanks!

Tags: Networking · OTV · VXLAN · Virtualization