
OpenStack and Docker

Gluster
2014-03-07

Introduction

I recently tried to set up OpenStack with docker as the hypervisor on a single node, and I ran into mountains of trouble.  I tried with DevStack and failed entirely, using both the master branch and stable/havana.  After much work I was able to launch a container, but the network was not right.  Ultimately I found a path that worked.  This post explains how I did it.

Create the base image

CentOS 6.5

The first step is to have a VM that can support this.  Because I was using RDO this needed to be a Red Hat derivative.  I originally chose a stock vagrant CentOS 6.5 VM.  I got everything set up and then ran out of disk space (many bad words were said).  Thus I used packer and the templates here to create a CentOS VM with 40GB of disk space.  I had to change the “disk_size” value under “builders” to something larger than 40000. Then I ran the build.
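For reference, the edit lands in the builders section of the template. This is just a sketch with the template's other settings elided; packer's virtualbox-iso builder takes disk_size in megabytes, so anything over 40000 gives roughly the 40GB I was after:

"builders": [
  {
    "type": "virtualbox-iso",
    "disk_size": 40960,
    ...
  }
]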

packer build template.json

When this completed I had a centos-6.5 vagrant box ready to boot.

Vagrant

I wanted to manage this VM with Vagrant, and because OpenStack is fairly intolerant of HOST_IP changes I had to inject an interface with a static IP address.  Below is the Vagrantfile I used:

Vagrant.configure("2") do |config|
   config.vm.box = "centos-6.5-base"
   config.vm.network :private_network, ip: "172.16.129.26"
   ES_OS_MEM = 3072
   ES_OS_CPUS = 1
   config.vm.hostname = "rdodocker"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", ES_OS_MEM]
    vb.customize ["modifyvm", :id, "--cpus", ES_OS_CPUS]
  end
end

After running vagrant up to boot this VM I got the following error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/sbin/ifdown eth1 2> /dev/null
Stdout from the command:

Stderr from the command:

Thankfully this was easily solved.  I sshed into the VM with vagrant ssh, and then ran the following:

cat <<EOM | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1 >/dev/null
DEVICE="eth1"
ONBOOT="yes"
TYPE="Ethernet"
EOM
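With that file in place the interface can be brought up and checked right away, without waiting for a reboot:

sudo ifup eth1
ip addr show eth1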

After that I exited from the ssh session and repackaged the VM with vagrant package --output centos-6.5-goodnet.box.  I added the new box to vagrant and altered my Vagrantfile to boot it (the command sequence is sketched below).  I now had a base image on which I could install OpenStack.
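The repackaging sequence was roughly the following; centos-6.5-goodnet is just the name I chose to register the box under:

vagrant package --output centos-6.5-goodnet.box
vagrant box add centos-6.5-goodnet centos-6.5-goodnet.box
# point config.vm.box at "centos-6.5-goodnet" in the Vagrantfile, then
vagrant destroy -f && vagrant up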

RDO

Through much trial and error I came to the conclusion that I needed the icehouse development release of RDO.  Unfortunately this alone was not enough to properly handle docker.  I also had to install nova from the master branch into a python virtualenv and reconfigure the box to use that nova code.  This section has the specifics of what I did.

RDO Install

I followed the instructions for installing RDO that are here, only instead of running packstack --allinone I used a custom answer file.  I generated a template answer file with the command packstack --gen-answer-file=~/answers.txt.  Then I opened that file and substituted every IP address with the IP address that vagrant was injecting into my VM (in my case this was 172.16.129.26).  I also set the following:

CONFIG_NEUTRON_INSTALL=n

This is very important: the docker driver does not work with neutron (I learned this the hard way).  I then installed RDO with the command packstack --answer-file=answers.txt.
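As an aside, the IP substitution step is easy to script.  Something along these lines does it, where 10.0.2.15 is just a stand-in for whatever address packstack put into the generated file:

sed -i 's/10.0.2.15/172.16.129.26/g' ~/answers.txt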

Docker

Once RDO was installed and working (without docker) I set up the VM such that nova would use docker.  The instructions here are basically what I followed.  Here is my command set:

sudo yum -y install docker-io
sudo service docker start
sudo chkconfig docker on
sudo yum -y install docker-registry
sudo usermod -G docker nova
sudo service redis start
sudo chkconfig redis on
sudo service docker-registry start
sudo chkconfig docker-registry on

I edited the file /etc/sysconfig/docker-registry and added the following:

export SETTINGS_FLAVOR=openstack
export REGISTRY_PORT=5042 
. /root/keystonerc_admin 
export OS_GLANCE_URL=http://172.16.129.26:9292

Note that some of the values in that file were already set.  I removed those entries.
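After changing /etc/sysconfig/docker-registry the registry has to be restarted to pick up the new settings.  With SETTINGS_FLAVOR=openstack the registry stores its images in glance, so pushing an image through it is what makes that image available to nova.  Roughly, with samalba/hipache standing in for whatever image you actually want:

sudo service docker-registry restart
sudo docker pull samalba/hipache
sudo docker tag samalba/hipache localhost:5042/hipache
sudo docker push localhost:5042/hipache
. /root/keystonerc_admin
glance image-list

The last command should show the pushed image; its name is what nova boot takes as --image later on.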

OpenStack for Docker Configuration

I changed this entry in /etc/nova/nova.conf:

compute_driver = docker.DockerDriver

and I set this value (uncommenting it) in /etc/glance/glance-api.conf:

container_formats = ami,ari,aki,bare,ovf,docker
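Neither daemon rereads its configuration on its own, so after these edits the affected services need a restart, e.g.:

sudo service openstack-nova-compute restart
sudo service openstack-glance-api restart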

Hand Rolled Nova

Unfortunately docker does not work with the current release of RDO icehouse, so I had to get the latest code from the master branch of nova and install it.  To be safe I put it in its own python virtualenv.  Installing nova like this requires a lot of dependencies; here is the yum command I used to install what I needed (and in some cases just wanted).

yum update
yum install -y telnet git libxslt-devel libffi-devel python-virtualenv mysql-devel

Then I installed nova and its package-specific dependencies into a virtualenv that I created.  The command sequence is below:

git clone https://github.com/openstack/nova.git
cd nova
virtualenv --no-site-packages /usr/local/OPENSTACKVE
source /usr/local/OPENSTACKVE/bin/activate
pip install -r requirements.txt
pip install qpid-python
pip install mysql-python
python setup.py install

At this point I had an updated version of the nova software, but it was running against an old version of the database schema.  Thus I had to run:

nova-manage db sync
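One caveat: the migrations need to come from the new code, so make sure the nova-manage being run is the one from the virtualenv, either with the virtualenv activated or by full path:

/usr/local/OPENSTACKVE/bin/nova-manage db sync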

The final step was to change all of the nova startup scripts to point at the code in the virtualenv instead of the code installed on the system.  I did this by opening every file at /etc/init.d/openstack-nova-* and changing the exec="/usr/bin/nova-$suffix" line to exec="/usr/local/OPENSTACKVE/bin/nova-$suffix" (a scripted version of this edit is sketched below).  I then rebooted the VM and was FINALLY all set to launch docker containers that I could ssh into!
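Editing each init script by hand works, but the same change can be made in one shot with sed; check one of the files afterwards to confirm the substitution took:

sudo sed -i 's|exec="/usr/bin/nova-|exec="/usr/local/OPENSTACKVE/bin/nova-|' /etc/init.d/openstack-nova-*

Once the VM is back up, booting a container follows the normal nova workflow.  Assuming an image pushed through the registry as sketched earlier (hipache is just the example name), something like this launches one:

. /root/keystonerc_admin
nova boot --image hipache --flavor m1.small dockertest
nova list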
