Overview

The Nuage libnetwork plugin allows users to create new networks of type Nuage. The plugin runs on every Docker host. Each Docker host, whether bare-metal or virtual, also has the VSP's Virtual Routing and Switching (VRS) component installed on it. VRS, a software agent, is the Nuage user-space component of standard Open vSwitch (OVS). It is responsible for forwarding traffic from the containers, performing the VXLAN encapsulation of layer-2 packets, and enforcing security policies. When creating a Docker container, the user can specify which Zone or Policy Group it belongs to. All endpoints in a given Zone adhere to the same set of security policies. The Nuage libnetwork plugin also provides a built-in IPAM driver, so IP address management is handled by the VSP.

The libnetwork plugin supports both local- and global-scope networks. The scope defines whether the network is propagated to all the nodes in your cluster.

In this blog, I provide step-by-step instructions on how to configure the Nuage SDN libnetwork plugin with Docker Engine for a multi-host deployment, using a Consul server as the backend store.

If you are interested in a single-host networking deployment with a Consul server (backend store), check out my other blog: Single-Host Networking, Docker Libnetwork and Nuage SDN.

Prerequisites

  • Nuage VSP Release 5.0R1 or later (other versions should work; the Docker libnetwork plugin has been supported by Nuage since 4.0R6.1)
  • Docker version 1.13 (required for Nuage 4.0R8, 4.0R9, 4.0R10, 5.0R1 and above)
  • CentOS 7.3 (other Linux distributions should work, but I only verified CentOS 7.3)
  • VSD (standalone or cluster) and at least one VSC must be deployed and operational
  • CentOS server with at least one NIC that has reachability to both the VSD and the VSC(s)
  • CentOS server must have internet access for updates
  • Each Docker node must have a unique hostname and IP address

Topology

[Topology diagram: docker_nuage_topology_geekysnippets]

Installation steps

1. First Step: Install Consul Server (see topology)

a. Install Docker v1.13

yum install -y yum-utils
yum-config-manager --add-repo https://packages.docker.com/1.13/yum/repo/main/centos/7
yum makecache fast
sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e"
yum install -y docker-engine
docker version

b. Start and enable docker

systemctl start docker
systemctl enable docker

c. Start the Consul server by running the Consul container. Note: the container will use the host's IP address to connect to the nodes. Three host ports are mapped to the Consul container: 8400 (RPC), 8500 (HTTP API/UI), and 8600 (mapped to container port 53/udp for DNS)

docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h consul1 progrium/consul -server -bootstrap
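Once the container is up, you can sanity-check the server through Consul's standard HTTP status API on the mapped port (8500); a non-empty leader address means the server bootstrapped correctly:

```shell
# The leader endpoint returns the address of the current cluster leader;
# a non-empty response (e.g. "172.17.0.2:8300") means bootstrap succeeded.
curl http://localhost:8500/v1/status/leader

# The catalog should list at least the consul service itself.
curl http://localhost:8500/v1/catalog/services
```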

 

2. Second Step: Install VRS and Docker Libnetwork on Nodes 1 and 2 (see topology)

a. Install Docker v1.13

yum install -y yum-utils
yum-config-manager --add-repo https://packages.docker.com/1.13/yum/repo/main/centos/7
yum makecache fast
sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e"
yum install -y docker-engine
docker version

b. Stop Docker (it will be started manually with cluster-store options in step h)

systemctl stop docker

c. Download the nuage-openvswitch package. The IP address below is just an example; replace it with the location of your Nuage packages

wget http://10.31.135.45:8080/nuage-openvswitch-5.0.1-12.el7.x86_64.rpm

d. Install the Nuage Open vSwitch package (on CentOS 7.3). If you are using Ubuntu, you need to download a different package

yum -y localinstall nuage-openvswitch-5.0.1-12.el7.x86_64.rpm

e. Edit /etc/default/openvswitch to add container support, the controllers' IP addresses, and the uplink interface of the host server.

vi /etc/default/openvswitch

PLATFORM="kvm, lxc"

ACTIVE_CONTROLLER=10.31.134.249
STANDBY_CONTROLLER=10.31.134.247

NETWORK_UPLINK_INTF=eth0

f. Restart Open vSwitch and verify that you have connectivity with the VSC controllers

systemctl restart openvswitch
systemctl status openvswitch
ovs-vsctl show
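Before digging into the OVS state, a quick reachability check against the controller addresses you configured in /etc/default/openvswitch can rule out basic routing problems (the IPs below match the example configuration above):

```shell
# Verify IP reachability to the active and standby VSC controllers
ping -c 3 10.31.134.249
ping -c 3 10.31.134.247
```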

g. Create the docker socket file as follows:

vi /usr/lib/systemd/system/docker.socket

[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
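After creating the unit file, reload systemd so it picks up the new socket unit, then enable and start it (standard systemd workflow for a new unit file):

```shell
# Reload unit definitions, then enable and activate the docker socket
systemctl daemon-reload
systemctl enable docker.socket
systemctl start docker.socket
```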

h. Run the new Docker daemon, replacing the Consul host IP address and the node IP address with yours.

On node1:

docker daemon -D --cluster-store=consul://172.30.30.46:8500 --cluster-advertise=172.30.30.17:2376 &

On node2:

docker daemon -D --cluster-store=consul://172.30.30.46:8500 --cluster-advertise=172.30.30.15:2376 &

By the way, the Docker daemon is very verbose with the -D (debug) option; if this bothers you, remove -D and set a quieter log level. For example:

docker daemon --cluster-store=consul://172.30.30.46:8500 --cluster-advertise=172.30.30.15:2376 --log-level=error &
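You can confirm that the daemon picked up the cluster-store settings with docker info, which in Docker 1.13 reports the cluster store and advertise address:

```shell
# Should print the Consul URL and this node's advertise address, e.g.:
#   Cluster Store: consul://172.30.30.46:8500
#   Cluster Advertise: 172.30.30.15:2376
docker info | grep -i cluster
```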

i. Download the nuage-docker-libnetwork plugin. The IP address below is just an example

wget http://10.31.135.45:8080/libnetwork-nuage-5.0.1-2.el7.x86_64.rpm

j. Install the Nuage libnetwork plugin on the CentOS 7.3 host

yum -y localinstall libnetwork-nuage-5.0.1-2.el7.x86_64.rpm

k. Modify two lines in the YAML configuration file (scope and VSD URL); leave everything else at its default. Note: the VSD IP address has to be changed to your VSD's. Scope is global when the installation is multi-host with an external Consul store.

vi /etc/default/libnetwork-nuage.yaml
scope: "global"
url: https://10.31.134.241:8443

l. Restart the Nuage libnetwork plugin

systemctl restart libnetwork-nuage
systemctl status libnetwork-nuage
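Docker discovers out-of-process plugins through sockets under /run/docker/plugins; after a successful restart you should see the Nuage plugin's socket there (the exact socket filename may vary between releases):

```shell
# List registered plugin sockets; a nuage entry should be present
ls -l /run/docker/plugins/
```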

m. Choose any of your nodes and create a new network with your VSD attributes. Replace the enterprise, domain, zone, subnet name, and subnet IP address with your values.

docker network create --driver=nuage --ipam-driver=nuage-ipam --ipam-opt organization=Enterprise1 --ipam-opt domain=DC_domain --ipam-opt zone=KVM --ipam-opt subnet="Dockers" --ipam-opt user=admin --subnet=192.1.60.0/24 --gateway=192.1.60.1 MyNuageNet

n. Choose any of your nodes and verify that the Docker network is configured correctly. Note: once you create a network, its info is synced automatically to all nodes

docker network ls
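For more detail than docker network ls provides, you can inspect the network on each node; the driver, IPAM driver, subnet, and gateway should match what you passed to docker network create:

```shell
# Shows the nuage driver, IPAM configuration, and any attached endpoints
docker network inspect MyNuageNet
```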

o. Now run a container attached to your Nuage network

docker run -d --net MyNuageNet nginx
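To confirm cross-host connectivity over the Nuage overlay, you can start one test container on each node on the same network and ping between them (the container names and the busybox image below are just examples):

```shell
# On node1: start a test container on the Nuage network
docker run -d --name c1 --net MyNuageNet busybox sleep 3600

# On node2: start a second container and note its IP address
docker run -d --name c2 --net MyNuageNet busybox sleep 3600
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c2

# Back on node1: ping c2's address through the overlay
# (replace 192.1.60.3 with the IP reported by the inspect command above)
docker exec c1 ping -c 3 192.1.60.3
```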