Needing a means to provide some network automation tools to my team, I decided to see if Docker would fit the bill. Read on to learn how to build your own Docker container for network automation.
Earlier this year, I finally learned Docker because a use case came along. It was far easier to learn how to use Docker than to dive into all the details of an application I wanted to deploy. Using Docker allows you to leverage pre-built, ready-to-use container images. If an application consists of multiple components (e.g. a database, a front-end and a back-end), you can deploy multiple containers and link them together.
Another use case, the subject of this post, is building custom containers. I want to leverage some network automation tools like Nornir, Ansible and Netmiko. Because a lot of these tools are built on Python and meant to run in a Linux environment, you can get into dependency hell very quickly. By isolating your tools in a container, you can circumvent this problem.
The procedure as described in this post was tested using a fresh Debian 9.5 VM. I’ve prepared this VM with an installation of Docker Community Edition, following the Docker CE installation instructions in the official Docker documentation. Note that I executed all of this as root, which is convenient when working with Docker. There will be another post later about the security implications of using a container in the way described here.
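At the time of writing, the documented steps for Debian boiled down to roughly the following (a sketch only; check the official documentation for the current package names and repository URL):

apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
# Add Docker's official GPG key and the stable repository
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
# Install Docker CE itself
apt-get update && apt-get install -y docker-ce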
When the install is done, run the following command to verify that your Docker installation is working:
docker run hello-world
The command should pull the hello-world container from Docker Hub (the Docker registry containing the ready-to-use container images), spin it up and have it generate some output to your screen. This command will run the container in the foreground, then exit when done. The container will still exist on your system, as can be viewed by running this command:
docker ps -a
The container image will also be available from your local Docker repository now. You can inspect the images that are available locally with this command:
docker image ls
Here is an example after running the hello-world container, then inspecting the local containers and images:
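Roughly, the output of those two commands looks like this (the IDs, generated container name and dates are placeholders; yours will differ):

docker ps -a
CONTAINER ID     IMAGE         COMMAND    CREATED          STATUS                      PORTS   NAMES
<container id>   hello-world   "/hello"   10 seconds ago   Exited (0) 9 seconds ago            <generated name>

docker image ls
REPOSITORY    TAG      IMAGE ID     CREATED   SIZE
hello-world   latest   <image id>   <date>    <size>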
Removing a container works like this (the -f flag can remove a container that is still running):
docker rm -f [container name or ID]
Removing an image works like this:
docker rmi [image name or ID]
The example above showed briefly how to use a pre-built container from Docker Hub. It is also possible to build your own container, using a container from Docker Hub as a base. The following example will extend a Docker Python image (located at library/python in Docker Hub). To start, make a fresh directory and place two empty files with the following names in it:

requirements.txt
Dockerfile
requirements.txt lists the Python packages that will be put into the new container image. These should all be installable with pip. To get the tools mentioned earlier, the file can be as simple as:
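nornir
ansible
netmiko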
You can of course add any package you want. Next, we need to write the Dockerfile, which contains the actual build instructions; those also reference the requirements.txt file. My Dockerfile looks like this:
# Use an official Python runtime as a parent image (python:3 is an example tag)
FROM python:3
# Set the working directory to /install
WORKDIR /install
# Copy the current directory contents into the container at /install
ADD . /install
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Set new default WORKDIR (/projects is a placeholder; pick any path you like)
WORKDIR /projects
The files for this project are also available in my GitHub repository. Let’s walk through the Dockerfile statement by statement:
- 1: The “FROM” statement loads the image we are going to extend
- 2: The “WORKDIR” statement sets the directory inside the container that the next operations will be executed in
- 3: The “ADD .” statement copies the contents of the current directory on the host system to the specified directory in the Docker container (in this case the WORKDIR we’ve just specified above the ADD statement)
- 4: The “RUN” statement executes a command in the container. In this case, it’s a pip install command that loads the package names from the requirements.txt file and installs them in the container
- 5: Another “WORKDIR” statement. When you start using the container later, the default directory you start in will be the last WORKDIR set in the Dockerfile. In my next post, I will discuss different ways of using the custom container and clarify why this is practical
Now the only thing left to do is to actually build the image. In this case, the command syntax is the following:
docker build -f ./Dockerfile -t automator .
Breaking the command down, here’s what’s happening. The “docker build” part is self-explanatory. After that, we have to reference the Dockerfile we’ve created (done here with the -f flag). The “-t automator” part sets the image name. The dot at the end is mandatory because “docker build” expects a path to use as the build context. In this case, the command is run straight from the folder containing the Dockerfile, so we can just add a dot. When we run this command, Docker will first grab the Python base image from Docker Hub. This image is then started as a new container, in which the instructions from the Dockerfile are performed. After these instructions, the Python container holds the changes and will be saved as a new image, ready for use. When the build command finishes, you should have both the Python image and your new custom image in the local repo:
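Something like this (IDs, dates and sizes are placeholders):

docker image ls
REPOSITORY   TAG      IMAGE ID     CREATED          SIZE
automator    latest   <image id>   10 seconds ago   <size>
python       3        <image id>   <date>           <size>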
Now it’s time to demonstrate that this new image is actually usable. We can fire up a fresh container using our new image. By starting it in detached interactive mode (using the flags “-dit”), the container is started and will keep running in the background:
docker run -dit --name automator_container automator
We can connect to the container by using this command:
docker exec -it automator_container /bin/bash
The command above connects us to a bash shell inside the container. In this environment, all the tools that were installed during the build process are available to us. Running some commands like “ansible --version” or “pip list” will verify this:
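For example, the session could look something like this (the container hostname and the version numbers are placeholders; the versions are whatever pip resolved at build time):

root@<container id>:/projects# ansible --version
ansible <version>
root@<container id>:/projects# pip list
Package   Version
--------- ---------
ansible   <version>
netmiko   <version>
nornir    <version>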
If you’ve made it this far, to the end of the post, you now know some basics about working with Docker. You should also have a functional container that’s usable for your network automation projects. However, needing to connect to the shell of the container might not be the most practical way to run your own projects. Also, we still need a way to get the actual project data into the container.
The next post will show how to do just that, and will also show how to take some security measures so non-root users can use the container in a shared system (for instance a management server) in a safe way.
Thanks for reading, feel free to leave comments or questions, and please stick around for the next post.