all posts tagged openstack
Here are the Gluster-related abstracts that have been submitted for the OpenStack Summit in May in Atlanta. Check them out and vote!
- Use Case: OpenStack + GlusterFS on TryStack.org
- “The Gluster community has made huge strides to support backing an OpenStack installation’s storage with GlusterFS. TryStack.org has implemented GlusterFS as its storage backend.
In this presentation we’ll walk through the configuration details to implement GlusterFS as OpenStack’s storage backend.”
- A Technical Tour of OpenStack Swift Object Storage Volume Extensions
- “Take developers through a tour of existing DiskFile backends for OpenStack Object Storage (Swift). The DiskFile interface in Swift is an API for changing how objects are stored on storage volumes. Swift provides a default implementation over XFS (Posix) and a reference in-memory example to help folks get started.”
- Manila: The OpenStack File Share Service – Technology Deep Dive
- “This presentation introduces Manila, the new OpenStack File Shares Service. Manila is a community-driven project that presents the management of file shares (e.g. NFS, CIFS) as a core service to OpenStack. Manila currently works with NetApp, Red Hat Storage (GlusterFS) and IBM GPFS (along with a reference implementation based on a Linux NFS server).”
- Sharing GlusterFS Storage Servers with OpenStack Compute Nodes via Docker
- “The main focus of this session will be to explain how Docker can be leveraged to utilize unused cycles on GlusterFS storage nodes for additional compute nodes in an OpenStack environment. Docker is an application container and can host both GlusterFS storage nodes and OpenStack compute nodes in a single physical server.”
- Best practices for deploying and using Gluster for Storage in OpenStack environments
- “Gluster has a number of exciting new features such as NUFA (Non Uniform File Access), Granular geo-replication, Unified file, block & object storage access and data tiering.
In this presentation we discuss these new features and introduce best practices, based on our own experiences as well as those of customers, for deploying and using Gluster in OpenStack environments.”
- Extending GlusterFS for OpenStack
- “There is a need to extend GlusterFS storage availability to other operating systems and hypervisors. In this session, you will learn about a generalized block solution for Gluster that works for any block-based application (Xen, Hyper-V, VirtualBox, VMware, tape). We will compare different interconnect choices between the GlusterFS server and OpenStack client, such as iSCSI, FCoE, and ‘gluster native’.”
- Breaking the Mold with OpenStack Swift and GlusterFS
- “Red Hat uses OpenStack Swift as the object storage interface to GlusterFS. Instead of reimplementing the Swift API, Red Hat is participating in the OpenStack Swift community to ensure that GlusterFS can take full advantage of the latest Swift features. This is absolutely the right way to pair Swift with another storage system.”
The other day, I was talking to a colleague about the debate within OpenStack
about whether to chase Amazon's AWS (what another colleague called the "failed
Eucalyptus strategy") or forge its own path. It reminded me of an idea that was
given to me years ago. I can't take credit for the idea, but I can't remember
who I got it from so I'll do my best to represent it visually myself. Consider
the following image.
Let's say your competitor is driving toward a Seattle feature set - coffee,
grunge, rain. You have a slightly different vision, or perhaps just a different
execution, that leads toward more of an LA feature set - fewer evergreens, more
palm trees. If you measure yourself by progress toward your opponent's goals
(the dotted line), you're going to lose. That's true even if you actually
make better progress toward your goals. You're just playing one game and
expecting to win another. That might seem like an obviously stupid thing to do,
but an amazing number of companies and projects end up doing just that. I'll
let someone with more of a stake in the OpenStack debate decide whether that
applies to them. Now, consider a slightly different picture.
Here, we've drawn a second line to compare our competitor's progress against
our yardstick. Quite predictably, now they're the ones who are behind. Isn't
that so much better? If you're building a different product, you need to
communicate why you're aiming at the right target and shift the debate to who's
hitting it. In other words, change the axis.
I don't mean to say that copying someone else's feature set is always a mistake.
If you think you can execute on their vision better than they can, that's great.
Bigger companies do this to smaller companies all the time. At Revivio, we
weren't afraid of other small competitors. We were afraid of some big company
like EMC getting serious about what we were doing, then beating us with sheer
weight of numbers and marketing muscle. Occasionally things even go the other
way, when a smaller team takes advantage of agility and new technology to
out-execute a larger legacy-bound competitor. The real point is not that one
strategy's better, but that you can't mix them. You can't send execution in one
direction and messaging in another. You have to pick one, and stick to it, or
else you'll always be perceived as falling short.
Configuring Glance, Cinder and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you’ve already got GlusterFS up and running.
So let’s first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for Cinder, Glance and Nova.… Read the rest
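As a rough illustration of the kind of setup that post walks through (the server and volume names below are assumptions, not taken from the post), the Glance and Nova Gluster volumes typically end up mounted where each service expects its data, while Cinder mounts its own volume based on its shares file:
# example /etc/fstab entries on the OpenStack nodes (names are assumptions)
gluster-node1:/glance-vol /var/lib/glance/images glusterfs defaults,_netdev 0 0
gluster-node1:/nova-vol /var/lib/nova/instances glusterfs defaults,_netdev 0 0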
The post Configuring OpenStack Havana Cinder, Nova and Glance to run on GlusterFS appeared first on vmware admins.
The thing about distributed storage is that all the pieces that make the magic happen are.....well, distributed! The distributed nature of the components can represent a significant hurdle for people looking to evaluate whether distributed storage is right for them. Not only do people have to set up multiple servers, but they also have to get to grips with services/daemons, new terms and potentially clustering complexity.
So what can be done?
Well, the first thing is to look for a distributed storage architecture that tries to make things simple in the first place... life's too short for unnecessary complexity.
The next question is: "Does the platform provide an easy-to-use and easy-to-understand deployment tool?"
Confession time - I'm involved with the gluster community. A while ago I started a project called gluster-deploy, which aims to make the first-time configuration of a gluster cluster child's play. I originally blogged about an early release of the tool in October, so perhaps now is a good time to revisit the project and see how easy it is to get started with gluster (completely unbiased view, naturally!)
At a high level, all distributed storage platforms consist of a minimum of two layers:
- cluster layer - binding the servers together, into a single namespace
- aggregated disk capacity - pooling storage from each of the servers together to present easy to consume capacity to the end user/applications
So the key thing is to deliver usable capacity as quickly and as pain-free as possible - whilst ensuring that the storage platform is configured correctly. Now I could proceed to show you a succession of screenshots of gluster-deploy in action - but to prevent 'death-by-screenshot' syndrome, I'll refrain from that and just pick out the highlights.
I won't cover installing the gluster RPMs, but I will point out that if you're using Fedora, they are in the standard repository; if you're not, head on over to the gluster download site, download.gluster.org.
So let's assume that you have several servers available; each one has an unused disk and gluster installed and started. If you grab the gluster-deploy tool from the gluster-deploy link above, you'll have a tar.gz archive that you can untar onto one of your test nodes. Log in to one of the nodes as 'root' and untar the archive:
>tar xvzf gluster-deploy.tar.gz && cd gluster-deploy
This will untar the archive and place you in the gluster-deploy directory, so before we run it, let's take a look at the options the program supports:
[root@mynode gluster-deploy]# ./gluster-deploy.py -h
Usage: gluster-deploy.py [options]
--version show program's version number and exit
-h, --help show this help message and exit
-n, --no-password Skip access key checking (debug only)
-p PORT, --port=PORT Port to run UI on (> 1024)
-f CFGFILE, --config-file=CFGFILE
Config file providing server list bypassing subnet
OK, so there is some tweaking we can do, but for now let's just run it.
[root@mynode gluster-deploy]# ./gluster-deploy.py
-> Not supplied, UI will perform subnet selection/scan
Web server details:
Access key - pf20hyK8p28dPgIxEaExiVm2i6
Web Address -
Taking the URL displayed in the CLI and pasting it into a browser starts the configuration process.
The deployment tool basically walks through a series of pages that gather some information about how we'd like our cluster and storage to look. Once the information is gathered, the tool then does all the leg-work across the cluster nodes to complete the configuration, resulting in a working cluster and a volume ready to receive application data.
At a high level, gluster-deploy performs the following tasks:
- Build the cluster, either:
  - via a subnet scan - the user chooses which subnet to scan (based on the subnets seen on the server running the tool)
  - via a config file that supplies the nodes to use in the cluster (the -f invocation parameter)
- Configure passwordless login across the nodes, enabling automation
- Perform disk discovery - any unused disk is shown in the UI
  - You then choose which of the discovered disks you want gluster to use
  - Once the disks are selected, you define how you want the disks managed:
    - lvm (default)
    - lvm with dm-thinp
    - btrfs (not supported yet, but soon!)
  - NB: when you choose to use snapshot support (lvm with dm-thinp or btrfs), confirmation is required since these are 'future' features, typically there for developers.
- Once the format is complete, you define the volume that you want gluster to present to your application(s). The volume create process includes 'some' intelligence to make life a little easier:
  - tuning presets are provided for common gluster workloads like OpenStack Cinder and Glance, oVirt/RHEV, and Hadoop
  - distributed and distributed-replicated volume types are supported
  - for volumes that use replication, the UI prevents disks (bricks) from the same server being assigned to the same replica set
  - the UI shows a summary of the capacity expectation for the volume, given the brick configuration and replication overheads
Now, let's take a closer look at what you can expect to see during these phases.
The image above shows the results from the subnet scan. Four nodes have been discovered on the selected subnet that have gluster running on them. You then select which nodes you want from the left hand 'box' and click the 'arrow' icon to add them to the cluster nodes. Once you're happy, click 'Create'.
Passwordless login is a feature of ssh, which enables remote login by shared public keys. This capability is used by the tool to enable automation across the nodes.
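Outside the tool, the same passwordless setup can be done by hand; a minimal sketch, with node names assumed for illustration:
# generate a key pair on the node running gluster-deploy (accept the defaults)
ssh-keygen -t rsa
# push the public key to each of the other nodes
ssh-copy-id root@gluster-node2
ssh-copy-id root@gluster-node3
ssh-copy-id root@gluster-node4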
With the public keys in place, the tool can scan for 'free' disks.
Choosing the disks to use is just a simple checkbox, and if they all look right, just click on the checkbox in the table heading. Choosing which disks to use is phase one; the next step is to confirm how you want to manage these disks (which, at a low level, defines the characteristics for the Logical Volume Manager).
Clicking on the "Build Bricks" button initiates a format process across the servers to prepare the disks, building the low-level filesystem and updating each node's filesystem table (fstab). These bricks then become the component parts of the gluster volume that gets mounted by the users or applications.
Volumes can be tuned/optimised for different workloads, so the tool has a number of presets to choose from. Choose a 'Use case' that best fits your workload, and then a volume type (distributed or replicated) that meets your data availability requirements. Now you can see a list of bricks on the left and an empty table on the right. Select which bricks you want in the volume and click the arrow to add them to the table. A Volume Summary is presented at the bottom of the page showing you what will be built (space usable, brick count, fault tolerance). Once you're happy, simply click the "Create" button.
The volume will be created and started, making it available to clients straight away. In my test environment, the time to configure the cluster and storage was under a minute...
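For a sense of what the tool is automating, here is a rough sketch of the equivalent manual steps; the hostnames, device names and volume name are assumptions for illustration only:
# on each node, turn the unused disk into an XFS-formatted brick
pvcreate /dev/sdb
vgcreate vg_bricks /dev/sdb
lvcreate -l 100%FREE -n lv_brick1 vg_bricks
mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1
mkdir -p /bricks/brick1
echo "/dev/vg_bricks/lv_brick1 /bricks/brick1 xfs defaults 0 0" >> /etc/fstab
mount /bricks/brick1
mkdir -p /bricks/brick1/data

# from one node, build the trusted pool
gluster peer probe gluster-node2
gluster peer probe gluster-node3
gluster peer probe gluster-node4

# create and start a 2-way replicated volume (bricks are paired in the order given)
gluster volume create myvol replica 2 \
    gluster-node1:/bricks/brick1/data gluster-node2:/bricks/brick1/data \
    gluster-node3:/bricks/brick1/data gluster-node4:/bricks/brick1/data
gluster volume start myvol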
So, if you can use a mouse and a web browser, you can now configure and enjoy the gluster distributed filesystem : no drama...no stress...no excuses!
For a closer look at the tool's workflow, I've posted a video to YouTube.
In a future post, I'll show you how to use Foreman to simplify the provisioning of the gluster nodes themselves.
OpenCompute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10Gb NICs and a boatload of locally attached cheap storage!
In preparation for deploying Red Hat RDO on RHEL, the distributed filesystem I chose was GlusterFS.… Read the rest
The post Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO) appeared first on vmware admins.
What a weird title for a podcast episode, you think. Actually, nothing could be further from the truth. Ian and I have worked together in two lives. In 2007 I took a sabbatical to go do hush hush secret stuff … Continue reading
If you’ve been following the Gluster and Ceph communities for any length of time, you know that we have similar visions for open software-defined storage and are becoming more competitive with each passing day. We have been rivals in a similar space for some time, but on friendly terms – and for a couple of simple reasons: 1.) we each have common enemies in the form of proprietary big storage and 2.) we genuinely like each other. I’m a Sage Weil fan and have long been an admirer of his work. Ross Turk and Neil Levine are also members of the Inktank clan whom I respect and vouch for on a regular basis. There are others I’m forgetting, and I hope they don’t take it personally!
So you can imagine the internal debate I had when presented with the first results of a Red Hat Storage comparison with Ceph in a set of benchmarks commissioned by the Red Hat Storage product marketing group (for reference, they’re located here). If you saw my presentations at the OpenStack Summit in Hong Kong, then you know I went with it, and I’m glad I did. While the Ceph guys have been very good about not spouting FUD and focusing instead on the bigger picture – taking down EvilMaChines, for example – others in the clan of OpenStack hangers-on have not been so exemplary.
I don’t know who, exactly, the Red Hat Storage marketing group was targeting with the benchmarks, but I am targeting a very specific audience, and it isn’t anyone associated with Inktank or the Ceph project. I am targeting all the people in the OpenStack universe who wrote us off and wanted to declare the storage wars over. I’m also a bit tired of the inexplicable phrase that “Ceph is faster than Gluster”, often said with no qualification, which I’ve known for some time was not true. It’s that truism, spouted by some moustachioed cloudy hipsters at an OpenStack meetup, that rankles me – almost as much as someone asking me in a public forum why we shouldn’t all ditch Gluster for Ceph. The idea that one is unequivocally faster or better than the other is completely ridiculous – almost as ridiculous as the thought that hipsters in early 20th century drag are trusted experts at evaluating technology. The benchmarks in question do not end any debates. On the contrary, they are just the beginning.
I felt uneasy when I saw Sage show up at our Gluster Cloud Night in Hong Kong, because I really didn’t intend for this to be an “In yo’ face!” type of event. I did not know beforehand that he would be there, but even if I had, I wouldn’t have changed my decision to show the results. The “Ceph is faster” truism had become one of those things that everyone “knows” without the evidence to support it, and the longer we let it go unopposed, the more likely it was to become a self-fulfilling prophecy. Also, while we may have common enemies, it has become increasingly clear that the OpenStack universe would really prefer to converge around a single storage technology, and I will not let that happen without a fight.
We’ve radically improved GlusterFS and the Gluster Community over the last couple of years, and we are very proud of our work. We don’t have to take a back seat to anyone; we don’t have to accept second place to anyone; and we’re not going to. In the end, it’s very clear who the winners of this rivalry will be. It won’t be Ceph, and it won’t be Gluster. It will be you, the users and developers, who will benefit from the two open source heavyweights scratching and clawing their way to the top of the heap. Rejoice and revel in your victory, because we work for you.
To see the benchmark results for yourself, see the Red Hat Storage blog post on the subject.
To see the VAR Guy’s take, see this article.
I was at the Gluster London Community event in London yesterday and listened to speakers there talk about Gluster and demystifying what it is and also how it has made an impact in the storage world. One of the speakers … Continue reading
The Gluster Community would like to congratulate the OpenStack Foundation and developers on the Havana release. With performance-boosting enhancements for OpenStack Block Storage (Cinder), Compute (Nova) and Image Service (Glance), as well as a native template language for OpenStack Orchestration (Heat), the OpenStack Havana release points the way to continued momentum for the OpenStack community. The many storage-related features in the Havana release coupled with the growing scope of typical OpenStack deployments demonstrate the need for scale-out, open software-defined storage solutions. The fusion of GlusterFS open software-defined storage with OpenStack software is a match made in cloud heaven.
Naturally, the Gluster Community would like to focus on OpenStack enhancements that pertain directly to our universe:
- OpenStack Image Service (Glance)
- OpenStack Cinder can now be used as a block-storage back-end for the Image Service. For Gluster users, this means that Glance can point to the same image as Cinder, so it is not necessary to copy the entire image before deploying, saving some valuable time.
- OpenStack Compute (Nova)
- OpenStack integration with GlusterFS utilizing the QEMU/libgfapi integration reduces kernel-space to user-space context switching, significantly boosting performance.
- When connecting to NFS- or GlusterFS-backed volumes, Nova now uses the mount options set in the Cinder configuration. Previously, the mount options had to be set on each Compute node that would access the volumes. This allows operators to more easily automate the scaling of their storage platforms.
- QEMU-assisted snapshotting is now used to provide the ability to create Cinder volume snapshots on backends such as GlusterFS (a sketch of the relevant Cinder and Nova settings follows this list).
- OpenStack Orchestration (Heat)
- Initial support for native template language (HOT). For OpenStack operators, this presents an easier way to orchestrate services in application stacks.
- OpenStack Object Storage (Swift)
- There is nothing in the OpenStack Havana release notes pertaining to GlusterFS and Swift integration, but we always like to talk about the fruits of our collaboration with Swift developers. We are dedicated to using the upstream Swift project API/proxy layer in our integration, and the Swift team has been a pleasure to work with, so kudos to them.
- OpenStack Data processing (Savanna)
- This incubating project enables users to easily provision and manage Apache Hadoop clusters on OpenStack. It’s a joint project between Red Hat, Mirantis and Hortonworks, and it points the way towards “Analytics as a Service”. It’s not an official part of OpenStack releases yet, but it’s come very far very quickly, and we’re excited about the data processing power it will spur.
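To make the Cinder and Nova items above more concrete, here is a hedged sketch of the Havana-era settings involved; the shares file path, host and volume names are assumptions, not values taken from the release notes:
# /etc/cinder/cinder.conf - use the GlusterFS volume driver
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = /var/lib/cinder/volumes

# /etc/cinder/glusterfs_shares - one Gluster volume per line; Nova reuses the
# mount options defined in the Cinder configuration when attaching volumes
gluster-node1:/cinder-vol

# /etc/nova/nova.conf - let QEMU talk to Gluster directly via libgfapi
qemu_allowed_storage_drivers = gluster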
To give an idea of the performance improvements in the GlusterFS-QEMU integration that Nova now takes advantage of, consider the early benchmarks below published by Bharata Rao, a developer at IBM’s Linux Technology Center.
[FIO READ and FIO WRITE benchmark tables: QEMU GlusterFS block driver (FUSE bypass) vs. “Base”, where “Base” refers to an operation directly on a disk filesystem.]
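For readers who want to run this kind of comparison themselves, a representative fio invocation is sketched below; the job parameters and mount path are assumptions, not the settings used in the original benchmark:
# sequential read test against files on the mounted Gluster volume
fio --name=seqread --rw=read --bs=64k --size=1g --numjobs=4 \
    --directory=/mnt/glustervol --ioengine=libaio --direct=1 --group_reporting

# sequential write test with the same parameters
fio --name=seqwrite --rw=write --bs=64k --size=1g --numjobs=4 \
    --directory=/mnt/glustervol --ioengine=libaio --direct=1 --group_reporting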
Havana vs. Pre-Havana
This is a snapshot of the differences between the Havana and Grizzly releases with GlusterFS.

| Component | Grizzly (pre-Havana) | Havana |
| --- | --- | --- |
| Glance | Could point to filesystem images mounted with GlusterFS, but had to copy the VM image to deploy it | Can now point to the Cinder interface, removing the need to copy the image |
| Cinder | Integrated with GlusterFS, but only with FUSE-mounted volumes | Can now use the libgfapi-QEMU integration for KVM hypervisors |
| Nova | No integration with GlusterFS | Can now use the libgfapi-QEMU integration |
| Swift | GlusterFS maintained a separate repository of changes to the Swift proxy layer | Swift patches now merged upstream, providing a cleaner break between API and implementation |
The Orchestration feature we are excited about is not Gluster-specific, but has several touch points with GlusterFS, especially in light of the newly-introduced Manila FaaS project for OpenStack (https://launchpad.net/manila). Imagine being able to orchestrate all of your storage services with Heat, building the ultimate in scale-out cloud applications with open software-defined storage that scales with your application as needed.
We’re very excited about the Havana release and we look forward to working with the global OpenStack community on this and future releases. Download the latest GlusterFS version, GlusterFS 3.4, from the Gluster Community at gluster.org, and check out the performance with a GlusterFS 3.4-backed OpenStack cloud.
This week’s podcast is with Klaus Oxdal of Red Hat Nordics. Klaus is a good friend of mine who I’ve worked closely with for 3 1/2 years or so, and he eats, lives and breathes Red Hat. Much like my … Continue reading