Howto: Using UFO (swift) — A Quick Setup Guide

Gluster
September 17, 2012

This guide sets up GlusterFS Unified File and Object (UFO) storage on a single-node (single-brick) Gluster server, using the RPMs in my YUM repo at http://repos.fedorapeople.org/repos/kkeithle/glusterfs/. This repo contains RPMs for Fedora 16, Fedora 17, and RHEL 6. Alternatively, you may use the glusterfs-3.4.0beta1 RPMs from the GlusterFS YUM repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta1/

1. Add the repo to your system. See the README file there for instructions. N.B. If you’re using CentOS or some other RHEL clone, you’ll also need to add the Fedora EPEL repo; see http://fedoraproject.org/wiki/EPEL.
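As a sketch of step 1 on CentOS 6 (the .repo file name below is an assumption; the README in the repo is authoritative):

```
# Add the Fedora EPEL repo (needed on RHEL clones such as CentOS)
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

# Drop a .repo file for the kkeithle glusterfs repo into yum's config;
# the file name here is hypothetical -- see the repo's README for the
# exact instructions
wget -O /etc/yum.repos.d/kkeithle-glusterfs.repo \
    http://repos.fedorapeople.org/repos/kkeithle/glusterfs/glusterfs.repo
```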

2. Install glusterfs and UFO (remember to enable the new repo first):

  • glusterfs-3.3.1 or glusterfs-3.4.0beta1 on Fedora 17 and Fedora 18: `yum install glusterfs glusterfs-server glusterfs-fuse glusterfs-swift glusterfs-swift-account glusterfs-swift-container glusterfs-swift-object glusterfs-swift-proxy glusterfs-ufo`
  • glusterfs-3.4.0beta1 on Fedora 19, RHEL 6, and CentOS 6: `yum install glusterfs glusterfs-server glusterfs-fuse openstack-swift openstack-swift-account openstack-swift-container openstack-swift-object openstack-swift-proxy glusterfs-ufo`

3. Start glusterfs:

  • On Fedora 17, Fedora 18: `systemctl start glusterd.service`
  • On Fedora 16 or RHEL 6: `service glusterd start`
  • On CentOS 6.x: `/etc/init.d/glusterd start`

4. Create a glusterfs volume:
`gluster volume create $myvolname $myhostname:$pathtobrick`

5. Start the glusterfs volume:
`gluster volume start $myvolname`
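As a concrete example of steps 4 and 5, with hypothetical values (a host named gluster1.example.com and a brick directory /export/brick):

```
myvolname=myvol
myhostname=gluster1.example.com   # hypothetical host name
pathtobrick=/export/brick         # hypothetical brick directory

gluster volume create $myvolname $myhostname:$pathtobrick
gluster volume start $myvolname
gluster volume info $myvolname    # confirm "Status: Started"
```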

6. Create a self-signed cert for UFO:
`cd /etc/swift; openssl req -new -x509 -nodes -out cert.crt -keyout cert.key`
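If you want to skip openssl’s interactive prompts, the certificate subject can be supplied on the command line; the CN below is an assumption and should match the hostname clients will connect to:

```
cd /etc/swift
openssl req -new -x509 -nodes -out cert.crt -keyout cert.key \
    -subj "/CN=$myhostname"
```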

7. Fix up some files in /etc/swift:

  • `mv swift.conf-gluster swift.conf`
  • `mv fs.conf-gluster fs.conf`
  • `mv proxy-server.conf-gluster proxy-server.conf`
  • `mv account-server/1.conf-gluster account-server/1.conf`
  • `mv container-server/1.conf-gluster container-server/1.conf`
  • `mv object-server/1.conf-gluster object-server/1.conf`
  • `rm {account,container,object}-server.conf`

8. Configure UFO (edit /etc/swift/proxy-server.conf):

  • Add your cert and key to the [DEFAULT] section:
    `bind_port = 443`
    `cert_file = /etc/swift/cert.crt`
    `key_file = /etc/swift/cert.key`
  • Add one or more users of the gluster volume to the [filter:tempauth] section:
    `user_$myvolname_$username=$password .admin`
  • Add the memcache address to the [filter:cache] section:
    `memcache_servers = 127.0.0.1:11211`
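Putting step 8 together, the edited sections of proxy-server.conf might look like this sketch (myvol, jdoe, and secret are hypothetical values; the `use = egg:swift#...` lines come from the stock file and are shown only for orientation):

```
[DEFAULT]
bind_port = 443
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key

[filter:tempauth]
use = egg:swift#tempauth
# user_<volume>_<name> = <password> <group>
user_myvol_jdoe = secret .admin

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
```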

9. Generate builders:
`/usr/bin/gluster-swift-gen-builders $myvolname`

10. Start memcached:

  • On Fedora 17: `systemctl start memcached.service`
  • On Fedora 16 or RHEL 6: `service memcached start`
  • On CentOS 6.x: `/etc/init.d/memcached start`

11. Start UFO:

`swift-init main start`

» This has bitten me more than once: if you ssh -X into the machine running swift, sshd will likely already be using ports 6010, 6011, and 6012 for X11 forwarding, and the swift processes will collide with it when they try to bind those ports. «
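Before starting swift, you can check whether anything (e.g. sshd’s X11 forwarding) already holds those ports; a quick sketch on Linux:

```
# show TCP listeners on the ports swift's servers use in this setup
netstat -tlnp | grep -E ':601[012]'
```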

12. Get authentication token from UFO:
`curl -v -H "X-Storage-User: $myvolname:$username" -H "X-Storage-Pass: $password" -k https://$myhostname:443/auth/v1.0`
(the X-Auth-Token header in the response carries a token similar to AUTH_tk2c69b572dd544383b352d0f0d61c2e6d)
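To capture the token in a shell variable for the following steps (a sketch; it parses the X-Auth-Token header out of curl’s -i output):

```
authtoken=$(curl -si -H "X-Storage-User: $myvolname:$username" \
    -H "X-Storage-Pass: $password" -k https://$myhostname:443/auth/v1.0 \
    | awk '/^X-Auth-Token:/ {print $2}' | tr -d '\r')
echo "$authtoken"
```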

13. Create a container:
`curl -v -X PUT -H "X-Auth-Token: $authtoken" https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername -k`

14. List containers:
`curl -v -X GET -H "X-Auth-Token: $authtoken" https://$myhostname:443/v1/AUTH_$myvolname -k`

15. Upload a file to a container:

`curl -v -X PUT -T $filename -H "X-Auth-Token: $authtoken" -H "Content-Length: $filelen" https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k`
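$filelen is just the file’s size in bytes, e.g.:

```
filelen=$(stat -c %s "$filename")   # GNU stat; file size in bytes
```

(curl usually sets Content-Length itself when uploading a regular file with -T, so the explicit header is mostly belt-and-braces.)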

16. Download the file:

`curl -v -X GET -H "X-Auth-Token: $authtoken" https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k > $filename`

More information and examples are available from

=======================================================================

N.B. We (Red Hat, Gluster) generally recommend using xfs for brick volumes; or if you’re feeling brave, btrfs. If you’re using ext4 be aware of the ext4 issue* and if you’re using ext3 make sure you mount it with -o user_xattr.

* http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
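For an ext3 brick, the user_xattr mount option from the note above can be applied on the command line or in /etc/fstab (the device and mount point here are hypothetical):

```
# one-off mount
mount -o user_xattr /dev/sdb1 /export/brick

# or persistently, via an /etc/fstab entry
/dev/sdb1  /export/brick  ext3  defaults,user_xattr  0 2
```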
