
Dockered DPDK: packaging Open vSwitch


I recently attended the NFV World Congress in San Jose, and had a great time talking to vendors about their solutions and current trends toward widespread NFV adoption. Intel’s hot new(ish) multicore programming framework – the Data Plane Development Kit, or DPDK – was part of the marketing spiel of almost everyone even remotely invested in the NFVI.  The main interest is in the poll mode driver, which dedicates a CPU core to polling devices rather than waiting for an interrupt to signal that a packet has arrived.  This has produced some amazing packet processing rates, such as a DPDK-accelerated Open vSwitch switching at 14.88 Mpps (10GbE line rate for 64-byte packets).

Since I’ve been working with Docker lately, I naturally started imagining what could be done by combining crazy-fast DPDK applications with the lightweight virtualization and deployment flexibility of Docker.  Many DPDK applications – such as Open vSwitch – impose requirements on the DPDK build that could break other applications relying on the same libraries.  This makes DPDK a great candidate for containerization, since we can give each application its very own tested build and run environment.

I was not, of course, the first to think of this – some Googling will turn up quite a few bits and pieces that were helpful in writing this post.  My goal here is to consolidate that information into a single tutorial and to explain the containerized DPDK framework that I have published to Dockerhub.

DPDK Framework in a Container

DPDK applications need to access a set of headers and libraries for compilation, so I decided to create a base container (Github, Dockerhub) with those resources.  Here’s the Dockerfile:

FROM ubuntu:vivid

RUN apt-get update && apt-get install -y --no-install-recommends \
  gcc build-essential make curl \
  && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV DPDK_VERSION=2.0.0 \
  RTE_SDK=/usr/src/dpdk

RUN curl -sSL http://dpdk.org/browse/dpdk/snapshot/dpdk-${DPDK_VERSION}.tar.gz | tar -xz; \
  mv dpdk-${DPDK_VERSION} ${RTE_SDK}

# don't build kernel modules
RUN sed -i s/CONFIG_RTE_EAL_IGB_UIO=y/CONFIG_RTE_EAL_IGB_UIO=n/ ${RTE_SDK}/config/common_linuxapp \
  && sed -i s/CONFIG_RTE_LIBRTE_KNI=y/CONFIG_RTE_LIBRTE_KNI=n/ ${RTE_SDK}/config/common_linuxapp

# don't build unnecessary stuff, can be reversed in dpdk_config.sh
RUN sed -i s/CONFIG_RTE_APP_TEST=y/CONFIG_RTE_APP_TEST=n/ ${RTE_SDK}/config/common_linuxapp \
  && sed -i s/CONFIG_RTE_TEST_PMD=y/CONFIG_RTE_TEST_PMD=n/ ${RTE_SDK}/config/common_linuxapp

# set RTE_TARGET by sourcing dpdk_env.sh, should be in the build dir of the child image
# must be sourced in beginning of any RUN instruction that needs it
ONBUILD COPY dpdk_env.sh ${RTE_SDK}/
ONBUILD COPY dpdk_config.sh ${RTE_SDK}/

ONBUILD RUN \
  . ${RTE_SDK}/dpdk_env.sh; \
  . ${RTE_SDK}/dpdk_config.sh; \
  cd ${RTE_SDK} \
  && make install T=${RTE_TARGET} \
  && make clean

Pretty basic stuff at first – get some packages, set the all-important RTE_SDK environment variable, grab the source.  One important detail is that the build must not rely on kernel headers; doing so would be seriously non-portable, since the uio and igb_uio kernel modules have to be built and installed by the host that will run the DPDK container.  We therefore configure the SDK to skip the kernel modules, so the build system never needs kernel headers installed.

The key part of this build script is deferring DPDK compilation to the moment the application image is built, so that the application can specify its requirements. This is done by requiring the DPDK application to provide dpdk_env.sh and dpdk_config.sh, which supply environment variables (such as RTE_TARGET) and a set of commands to run before compilation occurs. For example, Open vSwitch requires that DPDK be compiled with CONFIG_RTE_BUILD_COMBINE_LIBS=y, which is exactly the kind of tweak that belongs in dpdk_config.sh.
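
To make the contract concrete, here’s a minimal sketch of a child image for a hypothetical DPDK application – myapp and its Makefile are placeholders, but the mechanics are exactly what the ONBUILD triggers above set up:

FROM rakurai/dpdk:2.0.0-onbuild
# the ONBUILD COPY/RUN triggers have already pulled dpdk_env.sh and
# dpdk_config.sh from this build context and compiled DPDK by this point

# hypothetical application source; its Makefile would use RTE_SDK/RTE_TARGET
COPY . /usr/src/myapp
RUN . ${RTE_SDK}/dpdk_env.sh \
  && make -C /usr/src/myapp
CMD ["/usr/src/myapp/build/myapp"]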

DPDK Application in a Container

Now that the framework is there, it’s time to use it in an application.  In this post I will demonstrate Open vSwitch in a container (Github, Dockerhub), which could be plenty useful.  To begin, here are the dpdk_env.sh and dpdk_config.sh files, in that order:

#!/bin/bash
export RTE_TARGET=x86_64-native-linuxapp-gcc

#!/bin/bash

# OVS needs vhost library in DPDK, requires libfuse-dev to compile
apt-get update && apt-get install -y --no-install-recommends \
  libfuse-dev \
  && apt-get clean && rm -rf /var/lib/apt/lists/*

sed -i s/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/ $RTE_SDK/config/common_linuxapp
sed -i s/CONFIG_RTE_LIBRTE_VHOST=n/CONFIG_RTE_LIBRTE_VHOST=y/ $RTE_SDK/config/common_linuxapp
sed -i s/CONFIG_RTE_LIBRTE_VHOST_USER=y/CONFIG_RTE_LIBRTE_VHOST_USER=n/ $RTE_SDK/config/common_linuxapp

OVS has some special requirements for DPDK, which is kind of the point of putting it in a container, right? Here’s the Dockerfile to build it:

FROM rakurai/dpdk:2.0.0-onbuild

RUN apt-get update && apt-get install -y --no-install-recommends \
  autoconf automake libtool openssl libssl-dev python \
  && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

ENV OVS_DIR=/usr/src/ovs

RUN curl -ksSL http://github.com/openvswitch/ovs/archive/master.tar.gz | tar -xz; \
  mv ovs-master ${OVS_DIR}

RUN . ${RTE_SDK}/dpdk_env.sh; \
  cd ${OVS_DIR} \
  && ./boot.sh \
  && ./configure --with-dpdk=${RTE_SDK}/${RTE_TARGET} \
  && make install CFLAGS='-O3 -march=native' \
  && make clean

# create database configuration
RUN ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema

COPY run_ovs.sh run_ovs.sh
RUN chmod +x run_ovs.sh
CMD ["./run_ovs.sh"]
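
If you’d rather build this image yourself than pull it from Dockerhub, a plain docker build from the repository directory should do – the ONBUILD COPY triggers expect dpdk_env.sh and dpdk_config.sh to be present in the build context:

docker build -t rakurai/ovs-dpdk .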

The ONBUILD instructions in the DPDK Dockerfile are executed first, of course, which compiles the DPDK framework. Then we install more packages for OVS, fetch the source, and compile it against DPDK. In the last few lines, we copy in the final script, which starts everything OVS needs at runtime:

#!/bin/bash
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock

ovsdb-server --remote=punix:$DB_SOCK \
  --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
  --private-key=db:Open_vSwitch,SSL,private_key \
  --certificate=db:Open_vSwitch,SSL,certificate \
  --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
  --pidfile \
  --detach

ovs-vsctl --no-wait init
ovs-vswitchd --dpdk -c 0x1 -n 4 -- unix:$DB_SOCK --pidfile

Now, you could go a bit differently here, and the repository I linked to may change somewhat. One could argue that it’s more Dockerish to put ovsdb-server in its own container and then link the two. However, this is meant to be a self-contained example, so we’ll just go with this.
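
For the curious, a split deployment might look roughly like the following, sharing the OVSDB unix socket through a host volume; the image names and entrypoints here are hypothetical, not something my repository currently provides:

docker run -d --name ovsdb \
  -v /var/run/ovs:/usr/local/var/run/openvswitch \
  some/ovsdb-server
docker run -d --privileged \
  -v /var/run/ovs:/usr/local/var/run/openvswitch \
  -v /mnt/huge:/mnt/huge --device=/dev/uio0:/dev/uio0 \
  some/ovs-vswitchd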

Running Open vSwitch

Before we start it up, we need to fulfill some prerequisites. I won’t go into detail on the how and why; please see the DPDK Getting Started Guide and the OVS-DPDK installation guide.  OVS requires 1GB huge pages, so your /etc/default/grub needs at least these options:

GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1GB hugepagesz=1G hugepages=1"

followed by an update-grub and a reboot. You also need to mount the huge pages:

mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge
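
If you prefer to make the mount persistent, the /etc/fstab equivalent is a line like this (same mount point and page size as above):

nodev /mnt/huge hugetlbfs pagesize=1G 0 0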

Next, build the UIO kernel modules on the host system and insert them. Download and extract DPDK on the host, then run the dpdk/tools/setup.sh script: build the x86_64-native-linuxapp-gcc target (currently option 9), then insert the igb_uio module (currently option 12). Finally, bind one of your interfaces with option 18; you’ll have to bring that interface down first.
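
If you’d rather skip the interactive menu, the manual equivalent looks roughly like this – eth1 is just an example interface, and the bind script name is as shipped in DPDK 2.0.0:

cd dpdk-2.0.0
make install T=x86_64-native-linuxapp-gcc
sudo modprobe uio
sudo insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
sudo ip link set eth1 down
sudo ./tools/dpdk_nic_bind.py --bind=igb_uio eth1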

Now you can start the container. Here’s what I used:

docker run --privileged -v /mnt/huge:/mnt/huge --device=/dev/uio0:/dev/uio0 rakurai/ovs-dpdk

This gives the container access to the huge page mount and to the uio0 device that you just bound to the UIO driver. I found that I needed to run the container with --privileged to access parts of the /dev/uio0 filesystem, though it appears that some people are able to get around this. I will update this post if I find out how to run the container without --privileged.

If all goes well, you now have DPDK-accelerated OVS running in a container, and you can go about adding interfaces to the container, adding them to OVS, and setting up rules for forwarding packets at ludicrous speeds. Good luck, and please let me know how it works out for you!
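
As a starting point, the standard bridge setup from the OVS-DPDK installation guide looks like this (run inside the container; dpdk0 corresponds to the first DPDK-bound port, and the netdev datapath keeps forwarding in userspace):

ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk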

Links

DPDK base Docker container – rakurai/dpdk (Github, Dockerhub)
Open vSwitch Docker container – rakurai/ovs-dpdk (Github, Dockerhub)
DPDK Getting Started Guide
OVS-DPDK installation guide

