All posts tagged OpenStack


on April 10, 2014

How to govern a project on the scale of OpenStack

Managing collaborative open source projects

How an open source project is governed can matter just as much as the features it supports, the speed at which it runs, or the code that underlies it. Some open source projects have what we might call a "benevolent dictator for life." Others are outgrowths of corporate projects that, while open, still have their goals and code driven by the company that manages them. And of course, there are thousands of projects out there that are written and managed by a single person or a small group of people for whom governance is less of an issue than ensuring project sustainability.


on March 7, 2014

OpenStack and Docker

Introduction

I recently tried to set up OpenStack with docker as the hypervisor on a single node and I ran into mountains of trouble.  I tried with DevStack and entirely failed using both the master branch and stable/havana.  After much work I was able to launch a container, but the network was not right.  Ultimately I found a path that worked.  This post explains how I did it.

Create the base image

CentOS 6.5

The first step is to have a VM that can support this.  Because I was using RDO this needed to be a Red Hat derivative.  I originally chose a stock vagrant CentOS 6.5 VM.  I got everything set up and then ran out of disk space (many bad words were said).  Thus I used packer and the templates here to create a CentOS VM with 40GB of disk space.  I had to change the “disk_size” value under “builders” to something larger than 40000. Then I ran the build.

packer build template.json

When this completed I had a centos-6.5 vagrant box ready to boot.
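For reference, the change lives in the template's "builders" section; the relevant entry ends up looking roughly like this (a sketch only; I'm assuming the virtualbox-iso builder from those templates, and the value is in megabytes):

"builders": [
  {
    "type": "virtualbox-iso",
    "disk_size": 45000
  }
]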

Vagrant

I wanted to manage this VM with Vagrant, and because OpenStack is fairly intolerant of HOST_IP changes I had to inject an interface with a static IP address.  Below is the Vagrantfile I used:

Vagrant.configure("2") do |config|
  config.vm.box = "centos-6.5-base"
  config.vm.network :private_network, ip: "172.16.129.26"
  ES_OS_MEM = 3072
  ES_OS_CPUS = 1
  config.vm.hostname = "rdodocker"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", ES_OS_MEM]
    vb.customize ["modifyvm", :id, "--cpus", ES_OS_CPUS]
  end
end

After running vagrant up to boot this VM I got the following error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/sbin/ifdown eth1 2> /dev/null
Stdout from the command:

Stderr from the command:

Thankfully this was easily solved.  I sshed into the VM with vagrant ssh, and then ran the following:

cat <<EOM | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1 >/dev/null
DEVICE="eth1"
ONBOOT="yes"
TYPE="Ethernet"
EOM

After that I exited from the ssh session and repackaged the VM with vagrant package --output centos-6.5-goodnet.box. I added the new box to vagrant and altered my Vagrantfile to boot it.  I now had a base image on which I could install OpenStack.
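For completeness, the repackaging steps look roughly like this (a sketch; the box name given to vagrant box add is my own placeholder, not from the original post):

# on the host, after leaving the vagrant ssh session
vagrant package --output centos-6.5-goodnet.box
vagrant box add centos-6.5-goodnet centos-6.5-goodnet.box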

RDO

Through much trial and error I came to the conclusion that I needed the icehouse development release of RDO.  Unfortunately this alone was not enough to properly handle docker.  I also had to install nova from the master branch into a python virtualenv and reconfigure the box to use that nova code.  This section has the specifics of what I did.

RDO Install

I followed the instructions for installing RDO that are here, only instead of running packstack --allinone I used a custom answer file.  I generated a template answer file with the command packstack --gen-answer-file=~/answers.txt.  Then I opened that file and replaced every IP address with the IP address that vagrant was injecting into my VM (in my case this was 172.16.129.26).  I also set the following:

CONFIG_NEUTRON_INSTALL=n

This is very important.  The docker driver does not work with neutron (I learned this the hard way).  I then installed RDO with the command packstack --answer-file=answers.txt.
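Put together, the answer-file workflow looks roughly like this (a sketch; the sed commands are illustrative, and 10.0.2.15 stands in for whatever address packstack auto-detected on the NAT interface):

packstack --gen-answer-file=~/answers.txt
# swap every auto-detected address for the static IP vagrant injects
sed -i 's/10\.0\.2\.15/172.16.129.26/g' ~/answers.txt
# keep nova-network; the docker driver does not work with neutron
sed -i 's/^CONFIG_NEUTRON_INSTALL=.*/CONFIG_NEUTRON_INSTALL=n/' ~/answers.txt
packstack --answer-file=~/answers.txt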

Docker

Once RDO was installed and working (without docker) I set up the VM such that nova would use docker.  The instructions here are basically what I followed.  Here is my command set:

sudo yum -y install docker-io
sudo service docker start
sudo chkconfig docker on
sudo yum -y install docker-registry
sudo usermod -aG docker nova
sudo service redis start
sudo chkconfig redis on
sudo service docker-registry start
sudo chkconfig docker-registry on

I edited the file /etc/sysconfig/docker-registry and added the following:

export SETTINGS_FLAVOR=openstack
export REGISTRY_PORT=5042 
. /root/keystonerc_admin 
export OS_GLANCE_URL=http://172.16.129.26:9292

Note that some of the values in that file were already set.  I removed those entries.

OpenStack for Docker Configuration

I changed this entry in /etc/nova/nova.conf:

compute_driver = docker.DockerDriver

and set this value (uncommenting it) in /etc/glance/glance-api.conf:

container_formats = ami,ari,aki,bare,ovf,docker
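If you would rather script those two edits, the openstack-config helper from the openstack-utils package can do it (a sketch; it assumes both options live in the [DEFAULT] section, which is where they sit in stock RDO config files):

sudo openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver docker.DockerDriver
sudo openstack-config --set /etc/glance/glance-api.conf DEFAULT container_formats ami,ari,aki,bare,ovf,docker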

Hand Rolled Nova

Unfortunately docker does not work with the current release of RDO icehouse.  Therefore I had to get the latest code from the master branch of nova.  Further I had to install it.  To be safe I put it in its own python virtualenv.  In order to install nova like this a lot of dependencies must be installed.  Here is the yum command I used to install what I needed (and in some cases just wanted).

yum update
yum install -y telnet git libxslt-devel libffi-devel python-virtualenv mysql-devel

Then I installed nova and its package specific dependencies into a virtualenv that I created.  The command sequence is below:

git clone https://github.com/openstack/nova.git
cd nova
virtualenv --no-site-packages /usr/local/OPENSTACKVE
source /usr/local/OPENSTACKVE/bin/activate
pip install -r requirements.txt
pip install qpid-python
pip install mysql-python
python setup.py install

At this point I had an updated version of the nova software, but it was running against an old version of the database.  Thus I had to run:

nova-manage db sync

The final step was to change all of the nova startup scripts to point to the code in the virtualenv instead of the code installed to the system.  I did this by opening up every file at /etc/init.d/openstack-nova-* and changing the exec="/usr/bin/nova-$suffix" line to exec="/usr/local/OPENSTACKVE/bin/nova-$suffix".  I then rebooted the VM and I was FINALLY all set to launch docker containers that I could ssh into!
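That edit can be made in one pass with sed (a sketch; the -i.bak flag keeps backups, and $suffix is deliberately left unexpanded because it is literal text inside the init scripts):

sudo sed -i.bak 's|exec="/usr/bin/nova-$suffix"|exec="/usr/local/OPENSTACKVE/bin/nova-$suffix"|' /etc/init.d/openstack-nova-*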


on February 28, 2014

Vote for Gluster-related OpenStack Summit Talks

Here are the Gluster-related abstracts that have been submitted for the OpenStack Summit in May in Atlanta. Check them out and vote!

  • Use Case: OpenStack + GlusterFS on TryStack.org
    • “The Gluster community has made huge strides to support backing an OpenStack installation’s storage with GlusterFS. TryStack.org has implemented GlusterFS as its storage backend.

      In this presentation we’ll walk through the configuration details to implement GlusterFS as OpenStack’s storage backend.”

  • A Technical Tour of OpenStack Swift Object Storage Volume Extensions
    • “Take developers through a tour of existing DiskFile backends for OpenStack Object Storage (Swift). The DiskFile interface in Swift is an API for changing how objects are stored on storage volumes. Swift provides a default implementation over XFS (Posix) and a reference in-memory example to help folks get started.”
  • Manila: The OpenStack File Share Service – Technology Deep Dive
    • “This presentation introduces Manila, the new OpenStack File Shares Service. Manila is a community-driven project that presents the management of file shares (e.g. NFS, CIFS) as a core service to OpenStack. Manila currently works with NetApp, Red Hat Storage (GlusterFS) and IBM GPFS (along with a reference implementation based on a Linux NFS server).”
  • Sharing GlusterFS Storage Servers with OpenStack Compute Nodes via Docker
    • “The main focus of this session will be to explain how Docker can be leveraged to utilize unused cycles on GlusterFS storage nodes for additional compute nodes in an OpenStack environment. Docker is an application container and can host both GlusterFS storage nodes and OpenStack compute nodes on a single physical server.”
  • Best practices for deploying and using Gluster for Storage in OpenStack environments
    • “Gluster has a number of exciting new features such as NUFA (Non Uniform File Access), Granular geo-replication, Unified file, block & object storage access and data tiering.

      In this presentation we discuss these new features and introduce best practices, based on our own experiences as well as those of customers, for deploying and using Gluster in OpenStack environments.”

  • Extending GlusterFS for OpenStack
    • “There is a need to extend GlusterFS storage availability to other operating systems and hypervisors. In this session, you will learn about a generalized block solution for Gluster that works for any block-based application (Xen, Hyper-V, VirtualBox, VMware, tape). We will compare different interconnect choices between the GlusterFS server and OpenStack client, such as iSCSI, FCoE, and ‘gluster native’.”
  • Breaking the Mold with OpenStack Swift and GlusterFS
    • “Red Hat uses OpenStack Swift as the object storage interface to GlusterFS. Instead of reimplementing the Swift API, Red Hat is participating in the OpenStack Swift community to ensure that GlusterFS can take full advantage of the latest Swift features. This is absolutely the right way to pair Swift with another storage system.”

 

on February 13, 2014

Change the Axis

The other day, I was talking to a colleague about the debate within OpenStack about whether to chase Amazon's AWS (what another colleague called the "failed Eucalyptus strategy") or forge its own path. It reminded me of an idea that was given to me years ago. I can't take credit for the idea, but I can't remember who I got it from so I'll do my best to represent it visually myself. Consider the following image.

[Image: two diverging axes, with progress measured against the competitor's axis (the dotted line)]

Let's say your competitor is driving toward a Seattle feature set - coffee, grunge, rain. You have a slightly different vision, or perhaps just a different execution, that leads toward more of an LA feature set - fewer evergreens, more palm trees. If you measure yourself by progress toward your opponent's goals (the dotted line), you're going to lose. That's true even if you actually make better progress toward your goals. You're just playing one game and expecting to win another. That might seem like an obviously stupid thing to do, but an amazing number of companies and projects end up doing just that. I'll let someone with more of a stake in the OpenStack debate decide whether that applies to them. Now, consider a slightly different picture.

[Image: the same axes, with a second line added measuring the competitor's progress against our own axis]

Here, we've drawn a second line to compare our competitor's progress against our yardstick. Quite predictably, now they're the ones who are behind. Isn't that so much better? If you're building a different product, you need to communicate why you're aiming at the right target and shift the debate to who's hitting it. In other words, change the axis.

I don't mean to say that copying someone else's feature set is always a mistake. If you think you can execute on their vision better than they can, that's great. Bigger companies do this to smaller companies all the time. At Revivio, we weren't afraid of other small competitors. We were afraid of some big company like EMC getting serious about what we were doing, then beating us with sheer weight of numbers and marketing muscle. Occasionally things even go the other way, when a smaller team takes advantage of agility and new technology to out-execute a larger legacy-bound competitor. The real point is not that one strategy's better, but that you can't mix them. You can't send execution in one direction and messaging in another. You have to pick one, and stick to it, or else you'll always be perceived as falling short.

on January 9, 2014

Configuring OpenStack Havana Cinder, Nova and Glance to run on GlusterFS

Configuring Glance, Cinder and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you’ve already got GlusterFS up and running.

So let’s first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for Cinder, Glance and Nova.…


on January 6, 2014

Distributed Storage – too complicated to try?

The thing about distributed storage is that all the pieces that make the magic happen are.....well, distributed! The distributed nature of the components can represent a significant hurdle for people looking to evaluate whether distributed storage is right for them. Not only do people have to set up multiple servers, but they also have to get to grips with services/daemons, new terms and potentially clustering complexity.

So what can be done?

Well the first thing is to look for a distributed storage architecture that tries to make things simple in the first place...life's too short for unnecessary complexity.

The next question is: "Does the platform provide an easy-to-use and easy-to-understand deployment tool?"

Confession time - I'm involved with the gluster community. A while ago I started a project called gluster-deploy, which aims to make the first-time configuration of a gluster cluster child's play. I originally blogged about an early release of the tool in October, so perhaps now is a good time to revisit the project and see how easy it is to get started with gluster (a completely unbiased view, naturally!)

At a high level, all distributed storage platforms consist of a minimum of two layers:
  • cluster layer - binding the servers together into a single namespace
  • aggregated disk capacity - pooling storage from each of the servers together to present easy-to-consume capacity to the end user/applications
So the key thing is to deliver usable capacity as quickly and as pain-free as possible - whilst ensuring that the storage platform is configured correctly. Now I could proceed to show you a succession of screenshots of gluster-deploy in action - but to prevent 'death-by-screenshot' syndrome, I'll refrain from that and just pick out the highlights.

I won't cover installing the gluster RPMs, but I will point out that if you're using Fedora they are in the standard repository; if you're not, head on over to the gluster download site, download.gluster.org.
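On a Fedora host, for example, getting gluster installed and running looks roughly like this (a sketch; package and service names assume the stock Fedora repositories):

yum install -y glusterfs glusterfs-server
service glusterd start
chkconfig glusterd on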




So let's assume that you have several servers available; each one has an unused disk and gluster installed and started. If you grab the gluster-deploy tool from the gluster-deploy link above, you'll have a tar.gz archive that you can untar onto one of your test nodes. Log in to one of the nodes as 'root' and untar the archive:

>tar xvzf gluster-deploy.tar.gz && cd gluster-deploy

This will untar the archive and place you in the gluster-deploy directory, so before we run it let's take a look at the options the program supports:

[root@mynode gluster-deploy]# ./gluster-deploy.py -h
Usage: gluster-deploy.py [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -n, --no-password     Skip access key checking (debug only)
  -p PORT, --port=PORT  Port to run UI on (> 1024)
  -f CFGFILE, --config-file=CFGFILE
                        Config file providing server list bypassing subnet
                        scan


OK, so there is some tweaking we can do, but for now let's just run it.

[root@mynode gluster-deploy]# ./gluster-deploy.py

gluster-deploy starting

    Configuration file
        -> Not supplied, UI will perform subnet selection/scan

    Web server details:
        Access key  - pf20hyK8p28dPgIxEaExiVm2i6
        Web Address -
            http://192.168.122.144:8080/

    Setup Progress


Taking the URL displayed in the CLI and pasting it into a browser starts the configuration process.


The deployment tool basically walks through a series of pages that gather some information about how we'd like our cluster and storage to look. Once the information is gathered, the tool then does all the leg-work across the cluster nodes to complete the configuration, resulting in a working cluster and a volume ready to receive application data.
At a high level, gluster-deploy performs the following tasks (a rough CLI equivalent of these steps is sketched after the list):

- Build the cluster;
  • via a subnet scan - the user chooses which subnet to scan (based on the subnets seen on the server running the tool) 
OR
  • via a config file that supplies the nodes to use in the cluster (-f invocation parameter) 
- Configure passwordless login across the nodes, enabling automation 

- Perform disk discovery. Any unused disks are shown in the UI

- You then choose which of the discovered disks you want gluster to use

- Once the disks are selected, you define how you want the disks managed
  • lvm (default)
  • lvm with dm-thinp
  • btrfs (not supported yet, but soon!)
  NB When you choose to use snapshot support (lvm with dm-thinp or btrfs), confirmation is required since these are 'future' features, typically there for developers.

- Once the format is complete, you define the volume that you want gluster to present to your application(s). The volume create process includes 'some' intelligence to make life a little easier
  • tuning presets are provided for common gluster workloads like OpenStack Cinder and Glance, oVirt/RHEV, and Hadoop
  • distributed and distributed-replicated volume types are supported
  • for volumes that use replication, the UI prevents disks (bricks) from the same server being assigned to the same replica set
  • UI shows a summary of the capacity expectation for the volume given the brick configuration and replication overheads
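For context, here is roughly the manual gluster workflow these steps automate, a sketch using hypothetical hostnames (node1 to node4) and device names:

# build the cluster (run from node1)
gluster peer probe node2
gluster peer probe node3
gluster peer probe node4
# prepare a brick on each node
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /bricks/brick1
mount /dev/sdb /bricks/brick1
mkdir -p /bricks/brick1/data
# create and start a two-way replicated volume across the four nodes
gluster volume create myvol replica 2 node1:/bricks/brick1/data node2:/bricks/brick1/data node3:/bricks/brick1/data node4:/bricks/brick1/data
gluster volume start myvol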
Now, let's take a closer look at what you can expect to see during these phases.


The image above shows the results from the subnet scan. Four nodes have been discovered on the selected subnet that have gluster running on them. You then select which nodes you want from the left hand 'box' and click the 'arrow' icon to add them to the cluster nodes. Once you're happy, click 'Create'.

Passwordless login is a feature of ssh, which enables remote login by shared public keys. This capability is used by the tool to enable automation across the nodes.



With the public keys in place, the tool can scan for 'free' disks.


Choosing the disks to use is just a simple checkbox, and if they all look right - just click on the checkbox in the table heading. Understanding which disks to use is phase one, the next step is to confirm how you want to manage these disks (which at a low level defines the characteristics for the Logical Volume Manager)


Clicking on the "Build Bricks" button initiates a format process across the servers to prepare the disks, building the low-level filesystem and updating each node's filesystem table (fstab). These bricks then become the component parts of the gluster volume that gets mounted by the users or applications.

Volumes can be tuned/optimised for different workloads, so the tool has a number of presets to choose from. Choose a 'Use case' that best fits your workload, and then a volume type (distributed or replicated) that meets your data availability requirements. Now you can see a list of bricks on the left and an empty table on the right. Select which bricks you want in the volume and click the arrow to add them to the table. A Volume Summary is presented at the bottom of the page showing you what will be built (space usable, brick count, fault tolerance). Once you're happy, simply click the "Create" button.



The volume will be created and started, making it available to clients straight away. In my test environment the time to configure the cluster and storage was under a minute...

So, if you can use a mouse and a web browser, you can now configure and enjoy the gluster distributed filesystem: no drama...no stress...no excuses!


For a closer look at the tool's workflow, I've posted a video to YouTube.



In a future post, I'll show you how to use foreman to simplify the provisioning of the gluster nodes themselves.
on December 19, 2013

Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO)

The OpenCompute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10Gb NICs and a boatload of locally attached cheap storage!

In preparation for deploying Red Hat RDO on RHEL, the distributed filesystem I chose was GlusterFS.…
