
Docker Networking – SocketPlane

In the Docker Networking post I mentioned a few solutions for multi-host container connectivity. I covered Weave in a previous blog post, and in this post I'll describe my hands-on experience with SocketPlane, the company Docker acquired in March 2015.

The SocketPlane solution builds VXLAN tunnels between Open vSwitch endpoints on each host in order to connect containers running on different hosts. It also relies on multicast DNS to discover other SocketPlane members in the cluster; after the discovery phase, Consul is used to actually add the new member to the cluster.
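
Under the hood these tunnels are ordinary OVS VXLAN ports. As a rough sketch of what that means (the bridge and port names below mirror the ones we will meet later in this post; the exact commands SocketPlane runs internally are my assumption, not taken from its code):

# create a shared OVS bridge and add a VXLAN port pointing at a peer host
sudo ovs-vsctl add-br docker0-ovs
sudo ovs-vsctl add-port docker0-ovs vxlan-10.254.101.21 -- set interface vxlan-10.254.101.21 type=vxlan options:remote_ip=10.254.101.21

Each host ends up with one such vxlan port per peer, all attached to the same bridge, which is exactly what we will see in the ovs-vsctl output further down.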

Before we start with the hands-on portion of this post, a note on my setup: my host machine is running Mac OS X 10.8.5, with VirtualBox 4.3.16, Vagrant 1.7.2 and git installed.

Following the installation instructions on the SocketPlane GitHub page, I used the Vagrant option to bring up three VM hosts running Ubuntu 14.10. If you prefer not to use Vagrant, you can follow the manual option described in the same README.

Following the Vagrant install instructions, I simply cloned SocketPlane's repository to my machine.

yaniv-zadkas-macbook:SocketPlane yanzadka$ git clone https://github.com/socketplane/socketplane
Cloning into 'socketplane'...
remote: Counting objects: 2606, done.
remote: Total 2606 (delta 0), reused 0 (delta 0), pack-reused 2606
Receiving objects: 100% (2606/2606), 1.49 MiB | 991.00 KiB/s, done.
Resolving deltas: 100% (1058/1058), done.
Checking connectivity... done.

I then changed into the cloned directory and double-checked that no other Vagrant environment was running.

yaniv-zadkas-macbook:SocketPlane yanzadka$ cd socketplane
yaniv-zadkas-macbook:socketplane yanzadka$ vagrant global-status
id       name   provider state  directory                           
--------------------------------------------------------------------
There are no active Vagrant environments on this computer! Or,
you haven't destroyed and recreated Vagrant environments that were
started with an older version of Vagrant.

I then brought the environment up with the vagrant up command and waited around 15 minutes until "Done!!!" appeared. Note that the Vagrantfile in the socketplane directory can be edited to create more or fewer than three host VMs, as well as to run a different OS than Ubuntu 14.10.

yaniv-zadkas-macbook:socketplane yanzadka$ vagrant up
Bringing machine 'socketplane-1' up with 'virtualbox' provider...
Bringing machine 'socketplane-2' up with 'virtualbox' provider...
Bringing machine 'socketplane-3' up with 'virtualbox' provider...
==> socketplane-1: Box 'socketplane/ubuntu-14.10' could not be found. Attempting to find and install...
    socketplane-1: Box Provider: virtualbox
    socketplane-1: Box Version: >= 0
==> socketplane-1: Loading metadata for box 'socketplane/ubuntu-14.10'
    socketplane-1: URL: https://atlas.hashicorp.com/socketplane/ubuntu-14.10
==> socketplane-1: Adding box 'socketplane/ubuntu-14.10' (v0.1) for provider: virtualbox
    socketplane-1: Downloading: https://vagrantcloud.com/socketplane/boxes/ubuntu-14.10/versions/0.1/providers/virtualbox.box
==> socketplane-1: Successfully added box 'socketplane/ubuntu-14.10' (v0.1) for 'virtualbox'!
….
….
==> socketplane-3: dba162d5b0b7: 
==> socketplane-3: Download complete
==> socketplane-3: 7c5e9d5231cf: Pulling metadata
==> socketplane-3: 7c5e9d5231cf: Pulling fs layer
==> socketplane-3: 7c5e9d5231cf: 
==> socketplane-3: Download complete
==> socketplane-3: 7c5e9d5231cf: Download complete
==> socketplane-3: Status: Downloaded newer image for clusterhq/powerstrip:v0.0.1
==> socketplane-3: Done!!!
yaniv-zadkas-macbook:socketplane yanzadka$ 

Once the script finished, we can see three new VMs created in VirtualBox.

Let's check again what vagrant global-status reports.

yaniv-zadkas-macbook:socketplane yanzadka$ vagrant global-status
id       name          provider   state   directory                                                                                             
------------------------------------------------------------------------------------------------------------------------------------------------
0b5f0fa  socketplane-1 virtualbox running /Users/yanzadka/Documents/Career Training/OpenStack/OpenStack Blog/StackTutor/SocketPlane/socketplane 
f7b8e1e  socketplane-2 virtualbox running /Users/yanzadka/Documents/Career Training/OpenStack/OpenStack Blog/StackTutor/SocketPlane/socketplane 
7ce76c4  socketplane-3 virtualbox running /Users/yanzadka/Documents/Career Training/OpenStack/OpenStack Blog/StackTutor/SocketPlane/socketplane

With the three VM hosts running, we can now vagrant ssh into one of them to check what the script did (oh, and Ubuntu 15.04 is available!).

yaniv-zadkas-macbook:socketplane yanzadka$ vagrant ssh socketplane-1
Welcome to Ubuntu 14.10 (GNU/Linux 3.16.0-23-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
New release '15.04' available.
Run 'do-release-upgrade' to upgrade to it.

Running socketplane without arguments shows the options of this wrapper around the docker command.

vagrant@socketplane-1:~$ sudo socketplane 
NAME:
    socketplane - Install, Configure and Run SocketPlane

VERSION:
    0.1

USAGE:
    /usr/bin/socketplane  [command_options] [arguments...]

COMMANDS:
    help
            Help and usage

    install [unattended] [nopowerstrip]
            Install SocketPlane (installs docker and openvswitch)

    uninstall
            Remove Socketplane installation

…

    network agent start
            Starts an existing SocketPlane image if it is not already running

    network agent stop
            Stops a running SocketPlane image. This will not delete the local image

Let's check whether any container is running by using the socketplane info command.

vagrant@socketplane-1:~$ sudo socketplane info
{}

Moving on, we'll create our first container by using socketplane run with the interactive and detached options.

vagrant@socketplane-1:~$ sudo socketplane run -itd ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from ubuntu
...
968daed1b77026e41a217db7ea9660c124a3e3c55b648230ad217edeeb78b62f

Let's look at the socketplane info output now.

vagrant@socketplane-1:~$ sudo socketplane info
{
    "968daed1b77026e41a217db7ea9660c124a3e3c55b648230ad217edeeb78b62f": {
        "connection_details": {
            "gateway": "10.1.0.1",
            "ip": "10.1.0.2",
            "mac": "02:42:0a:01:00:02",
            "name": "ovs0c22366",
            "subnet": "/16"
        },
        "container_id": "968daed1b77026e41a217db7ea9660c124a3e3c55b648230ad217edeeb78b62f",
        "container_name": "/reverent_jones",
        "container_pid": "4235",
        "network": "default",
        "ovs_port_id": "ovs0c22366"
    }
}

We can see that the new container was allocated an address from the default 10.1.0.0/16 subnet rather than the 172.17.0.0/16 subnet Docker uses by default. More on that later.
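
For comparison, a container started with plain docker run on the same host (bypassing the socketplane wrapper) should still land on docker0 with a 172.17.x.x address. A quick, hypothetical check (the container name is just for illustration):

# start a container through plain docker and print its IP address
sudo docker run -itd --name plain-test ubuntu
sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' plain-test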

Next we'll vagrant ssh into another VM host and check our Consul members. Consul is an open source tool made by HashiCorp, the company behind Vagrant; SocketPlane uses it to join the other VM hosts to the SocketPlane cluster.

yaniv-zadkas-macbook:socketplane yanzadka$ vagrant ssh socketplane-2
Welcome to Ubuntu 14.10 (GNU/Linux 3.16.0-23-generic x86_64)
...
vagrant@socketplane-2:~$ consul members
Node           Address             Status  Type    Build  Protocol
socketplane-2  10.254.101.22:8301  alive   client  0.4.1  2
socketplane-1  10.254.101.21:8301  alive   server  0.4.1  2
socketplane-3  10.254.101.23:8301  alive   client  0.4.1  2
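
Consul also exposes an HTTP API, by default on port 8500; assuming SocketPlane's embedded agent keeps that default, the same membership information can be pulled with curl:

# query the local Consul agent's catalog over HTTP
curl -s http://localhost:8500/v1/catalog/nodes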

We'll create a container on host 2 using the same command we used on host 1.

vagrant@socketplane-2:~$ sudo socketplane run -itd ubuntu
….
Status: Downloaded newer image for ubuntu:latest
83974bc7eea55dde08d7c5ba0fa78ee56b7750bb6839bd0e82721ecb8f076551

Now it's time to check connectivity between the two containers we created on different hosts. We attach to the new container on host 2 and ping the container on host 1 (10.1.0.2).

vagrant@socketplane-2:~$ sudo socketplane attach 83974bc7eea55dde08d7c5ba0fa78ee56b7750bb6839bd0e82721ecb8f076551
root@83974bc7eea5:/# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ovs8b15404 Link encap:Ethernet  HWaddr 02:42:0a:01:00:03  
          inet addr:10.1.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f458:73ff:fecb:9499/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1440  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:516 (516.0 B)  TX bytes:516 (516.0 B)

root@83974bc7eea5:/# ping -c 3 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=64 time=8.59 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=64 time=3.19 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=64 time=3.78 ms

--- 10.1.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2015ms
rtt min/avg/max/mdev = 3.193/5.188/8.590/2.418 ms

The ping succeeded. ifconfig inside the container shows ovs8b15404, an OVS port connected to the OVS bridge created by the SocketPlane installation. Let's look at host 2's current interfaces.

vagrant@socketplane-2:~$ ifconfig
default   Link encap:Ethernet  HWaddr f2:1f:67:a8:d7:97  
          inet addr:10.1.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f01f:67ff:fea8:d797/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1440  Metric:1
          RX packets:34 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2484 (2.4 KB)  TX bytes:816 (816.0 B)

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:3467 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4152 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:142313 (142.3 KB)  TX bytes:10016692 (10.0 MB)

docker0-ovs Link encap:Ethernet  HWaddr 3a:8d:4d:1e:6c:43  
          inet6 addr: fe80::601f:eeff:fe1c:3b26/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3228 (3.2 KB)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:c7:e7:dd  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec7:e7dd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:325686 errors:0 dropped:0 overruns:0 frame:0
          TX packets:116804 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:288223335 (288.2 MB)  TX bytes:7390421 (7.3 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:b4:52:95  
          inet addr:10.254.101.22  Bcast:10.254.101.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feb4:5295/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:34517 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43960 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4023252 (4.0 MB)  TX bytes:4424480 (4.4 MB)

eth0 and eth1 were created by our Vagrant setup. The former is used for SSH access to the VMs and the latter for VXLAN tunnelling, which I'll get to in a moment. We can see that SocketPlane has left the docker0 Linux bridge as-is and created the default and docker0-ovs interfaces. Let's run some commands to better understand what we see above.

vagrant@socketplane-2:~$ sudo brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.56847afe9799	no		
vagrant@socketplane-2:~$ sudo ovs-vsctl show
68de8b9f-6e25-4e74-b2ca-d53173c2a3b3
    Manager "ptcp:6640"
        is_connected: true
    Bridge "docker0-ovs"
        Port "vxlan-10.254.101.21"
            Interface "vxlan-10.254.101.21"
                type: vxlan
                options: {remote_ip="10.254.101.21"}
        Port "ovs8b15404"
            tag: 1
            Interface "ovs8b15404"
                type: internal
        Port default
            tag: 1
            Interface default
                type: internal
        Port "vxlan-10.254.101.22"
            Interface "vxlan-10.254.101.22"
                type: vxlan
                options: {remote_ip="10.254.101.22"}
        Port "docker0-ovs"
            Interface "docker0-ovs"
                type: internal
        Port "vxlan-10.254.101.23"
            Interface "vxlan-10.254.101.23"
                type: vxlan
                options: {remote_ip="10.254.101.23"}
    ovs_version: "2.1.3"

We can see that docker0 is the only Linux bridge and that we have one OVS bridge, docker0-ovs. This OVS bridge has several ports:
Port "ovs8b15404" - the port we saw earlier inside the container.
Port "vxlan-10.254.101.21", Port "vxlan-10.254.101.22" and Port "vxlan-10.254.101.23" - VXLAN tunnel ports. VXLAN is a layer 2 tunnelling protocol supported by OVS; the ports pointing at 10.254.101.21 and 10.254.101.23 are the tunnel endpoints towards host 1 and host 3 respectively.
Port default and Port "docker0-ovs" - both created upon SocketPlane installation.
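
The tag: 1 entries are the VLAN tag SocketPlane assigned to its default network (we'll see the VLAN numbering again in the network list output below). If you want to dig deeper, the OpenFlow side of the bridge can be inspected with standard OVS tooling, for example:

# list the bridge ports and dump its OpenFlow forwarding rules
sudo ovs-vsctl list-ports docker0-ovs
sudo ovs-ofctl dump-flows docker0-ovs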

We can see VXLAN tunnelling in action if we run tcpdump on eth1 while pinging from the container on host 2 to the container on host 1.

vagrant@socketplane-2:~$ sudo tcpdump -i eth1 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
01:45:33.143481 IP 10.254.101.22.8301 > 10.254.101.23.8301: UDP, length 30
01:45:33.146884 IP 10.254.101.23.8301 > 10.254.101.22.8301: UDP, length 11
01:45:33.315687 IP 10.254.101.22.54775 > 10.254.101.21.4789: VXLAN, flags [I] (0x08), vni 0
IP 10.1.0.3 > 10.1.0.2: ICMP echo request, id 18, seq 145, length 64
01:45:33.321364 IP 10.254.101.21.40808 > 10.254.101.22.4789: VXLAN, flags [I] (0x08), vni 0
IP 10.1.0.2 > 10.1.0.3: ICMP echo reply, id 18, seq 145, length 64
01:45:33.667165 IP 10.254.101.23.8301 > 10.254.101.21.8301: UDP, length 30
01:45:33.670338 IP 10.254.101.21.8301 > 10.254.101.23.8301: UDP, length 11
01:45:33.956714 IP 10.254.101.21.8301 > 10.254.101.23.8301: UDP, length 30
01:45:33.960905 IP 10.254.101.23.8301 > 10.254.101.21.8301: UDP, length 11
01:45:34.141012 IP 10.254.101.22.8301 > 10.254.101.21.8301: UDP, length 30
01:45:34.145010 IP 10.254.101.21.8301 > 10.254.101.22.8301: UDP, length 11 

We can also see additional traffic between the three hosts on eth1 that is not VXLAN-encapsulated: the UDP packets on port 8301 are Consul's gossip traffic.
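
To watch only the encapsulated container traffic, the capture can be narrowed to the VXLAN UDP port (4789, as seen in the trace above):

# capture only VXLAN-encapsulated traffic on the tunnel interface
sudo tcpdump -i eth1 udp port 4789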

As with Weave, we can also specify which subnet we want to use for our containers, this time by using the socketplane network create command.

vagrant@socketplane-2:~$ sudo socketplane network create web 10.3.0.0/16
{
    "gateway": "10.3.0.1",
    "id": "web",
    "subnet": "10.3.0.0/16",
    "vlan": 2
} 

We can see the two networks SocketPlane now manages by running the network list command.

vagrant@socketplane-2:~$ sudo socketplane network list
[
    {
        "gateway": "10.1.0.1",
        "id": "default",
        "subnet": "10.1.0.0/16",
        "vlan": 1
    },
    {
        "gateway": "10.3.0.1",
        "id": "web",
        "subnet": "10.3.0.0/16",
        "vlan": 2
    }
]
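
A container can then be started on the new network. If I recall the wrapper's flags correctly, that is done with the -n option (this flag is taken from memory of the project's README, so verify with sudo socketplane help first):

# run a container attached to the "web" network instead of "default"
sudo socketplane run -n web -itd ubuntu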

References:
SocketPlane GitHub Page
SocketPlane Technology Preview Demo #1
