Gluster can have trouble delivering good performance for small file workloads. This problem is acute for features such as tiering and RDMA, which employ expensive hardware such as SSDs or infiniband. In such workloads the hardware’s benefits are unrealized, so there is little return on the investment. A major contributing factor to this problem has …Read more
Back in February of this year Martin Kletzander gave a talk at devconf.cz on GCC plug-ins. It would seem that gcc plug-ins are a feature that has gone largely overlooked for many years. I came back from DevConf inspired to try it out. A quick search showed me I was not alone – a colleague …Read more
Important happenings for Gluster this month: GlusterFS-3.9rc1 is out for testing! Gluster 3.8.4 is released, users are advised to update: http://blog.gluster.org/2016/09/glusterfs-3-8-4-is-available-gluster-users-are-advised-to-update/ Gluster Developer Summit: Next week, Gluster Developer Summit happens in Berlin from October 6 through 7th. Our schedule: https://www.gluster.org/events/schedule2016/ We will be recording scheduled talks, and posting them to our YouTube channel! Gluster-users: Amudhan …Read more
Even though the last release 3.8 was just two weeks ago, we’re sticking to the release schedule and have 3.8.4 ready for all our current and future users. As with all updates, we advise users of previous versions to upgrade to the latest and greatest. …
So let’s look at how this is done.
[Slice]
CPUQuota=200%
# cd /etc/systemd/system
# mkdir glusterd.service.d
# echo -e "[Service]\nCPUAccounting=true\nSlice=glusterfs.slice" > glusterd.service.d/override.conf
# systemctl daemon-reload
# systemctl stop glusterd
# killall glusterfsd && killall glusterfs
# systemctl daemon-reload
# systemctl start glusterd
# systemctl show glusterd | grep slice
Slice=glusterfs.slice
ControlGroup=/glusterfs.slice/glusterd.service
Wants=glusterfs.slice
After=rpcbind.service glusterfs.slice systemd-journald.socket network.target basic.target
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 19
├─glusterfs.slice
│ └─glusterd.service
│   ├─ 867 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
│   ├─1231 /usr/sbin/glusterfsd -s server-1 --volfile-id repl.server-1.bricks-brick-repl -p /var/lib/glusterd/vols/repl/run/server-1-bricks-brick-repl.pid
│   └─1305 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log
├─user.slice
│ └─user-0.slice
│   └─session-1.scope
│     ├─2075 sshd: root@pts/0
│     ├─2078 -bash
│     ├─2146 systemd-cgls
│     └─2147 less
└─system.slice
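With CPUAccounting enabled, per-slice usage can also be read straight out of the cgroup filesystem. A minimal sketch, assuming the cgroup-v1 cpuacct hierarchy used by systemd of this vintage; the path and helper are my own, not part of Gluster:

```python
def read_slice_cpu_usage(slice_name="glusterfs.slice",
                         root="/sys/fs/cgroup/cpuacct"):
    """Return cumulative CPU time (in nanoseconds) charged to a slice.

    Assumes a cgroup-v1 layout where the slice appears directly under
    the cpuacct controller, matching the systemd-cgls output above.
    """
    path = "{0}/{1}/cpuacct.usage".format(root, slice_name)
    with open(path) as f:
        return int(f.read().strip())
```

Sampling this value twice and dividing by the wall-clock interval gives the slice's effective CPU consumption, which is handy for checking that the quota is actually biting.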
# systemctl set-property glusterfs.slice CPUQuota=350%
[Unit]
Description=CPU soak task

[Service]

[Install]
With that said…happy hacking 🙂
Gluster Eventing is a new feature developed as part of the Gluster.Next initiative. It provides near real-time notifications and alerts for Gluster cluster state changes.
Websocket APIs to consume events will be added later. For now, we emit events via another popular mechanism called "Webhooks". (Many popular products provide notifications via Webhooks: GitHub, Atlassian, Dropbox, and many more.)
Webhooks are similar to callbacks (over HTTP): on an event, Gluster calls the configured Webhook URL via POST. The Webhook itself is a web server listening on a URL, and it can be deployed outside the cluster; Gluster nodes only need to be able to reach it on the configured port. We will discuss adding and testing a Webhook later.
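Conceptually, the delivery side is just an HTTP POST of a JSON payload to the registered URL. A sketch of that idea using only the Python standard library; the helper names and field values here are illustrative, not Gluster's actual implementation:

```python
import json
import urllib.request

def build_event_request(webhook_url, nodeid, ts, event, message):
    """Build the POST request that would carry one event to a webhook."""
    payload = json.dumps({
        "nodeid": nodeid,
        "ts": ts,
        "event": event,
        "message": message,
    }).encode("utf-8")
    return urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def emit_event(req):
    # Fire the webhook; in Gluster this would happen on each state change.
    return urllib.request.urlopen(req)
```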
An example Webhook written in Python:
from flask import Flask, request

app = Flask(__name__)

@app.route("/listen", methods=["POST"])
def events_listener():
    gluster_event = request.json
    if gluster_event is None:
        # No event to process, may be a test call
        return "OK"

    # Process gluster_event
    # {
    #  "nodeid": NODEID,
    #  "ts": EVENT_TIMESTAMP,
    #  "event": EVENT_TYPE,
    #  "message": EVENT_DATA
    # }
    return "OK"

app.run(host="0.0.0.0", port=9000)
The Eventing feature is not yet available in any release; the patch is under review in upstream master (http://review.gluster.org/14248). Anybody interested in trying it out can cherry-pick the patch from review.gluster.org:
git clone http://review.gluster.org/glusterfs
cd glusterfs
git fetch http://review.gluster.org/glusterfs refs/changes/48/14248/5
git checkout FETCH_HEAD
git checkout -b <YOUR_BRANCH_NAME>
./autogen.sh
./configure
make
make install
Start Eventing using:
gluster-eventing start
Other available commands are stop, restart, reload and status. Run gluster-eventing --help for more details.
Now Gluster can send out notifications via Webhooks. Set up a web server that listens for POST requests and register its URL with Gluster Eventing. That's all:
gluster-eventing webhook-add <MY_WEB_SERVER_URL>
For example, if my web server is running at http://192.168.122.188:9000/listen, register it using:

gluster-eventing webhook-add http://192.168.122.188:9000/listen
We can also test whether the web server is accessible from all Gluster nodes using the webhook-test subcommand:
gluster-eventing webhook-test http://192.168.122.188:9000/listen
The initial patch covers only basic events; I will add more once it is merged. The following events are available now:
Volume Create
Volume Delete
Volume Start
Volume Stop
Peer Attach
Peer Detach
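On the receiving side, these event types lend themselves to a simple dispatch table. A sketch, assuming the "event" field carries strings such as "VOLUME_CREATE" (the exact strings are defined in the patch, and the handler here is a placeholder of mine):

```python
def on_volume_create(message):
    # Placeholder handler: react to a newly created volume.
    print("volume created:", message)

# One entry per event type we care about; unknown events fall through.
HANDLERS = {
    "VOLUME_CREATE": on_volume_create,
}

def dispatch(gluster_event):
    """Route an incoming event dict to its handler; return True if handled."""
    handler = HANDLERS.get(gluster_event.get("event"))
    if handler is None:
        return False
    handler(gluster_event.get("message"))
    return True
```

Calling dispatch() from inside the events_listener above keeps the Flask route itself trivial.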
I created a small demo to show this eventing feature; it uses the web server included with the patch for testing. (The laptop hostname is sonne.)
/usr/share/glusterfs/scripts/eventsdash.py --port 8080
Log in to a Gluster node and start the eventing:
gluster-eventing start
gluster-eventing webhook-add http://sonne:8080/listen
Then log in to the VMs and run Gluster commands to probe/detach peers, create volumes, start them, and so on, and observe the real-time notifications where eventsdash is running.
Example:

ssh root@fvm1
gluster peer probe fvm2
gluster volume create gv1 fvm1:/bricks/b1 fvm2:/bricks/b2 force
gluster volume start gv1
gluster volume stop gv1
gluster volume delete gv1
gluster peer detach fvm2
The demo also includes a Web UI which refreshes automatically when something changes in the cluster. (I am still fine-tuning this UI; it is not yet available with the patch, but it will soon be available as a separate repo on my GitHub.)
Will this feature be available in the 3.8 release?
Sadly, no. I couldn't get it merged before the 3.8 feature freeze 🙁
Is it possible to create a simple Gluster dashboard outside the cluster?
It is possible. Along with the events, we also need REST APIs to get more information from the cluster or to perform actions on it. (WIP REST APIs are available here.)
Is it possible to filter only alerts or critical notifications?
Thanks to Kotresh for the suggestion. Yes, it is possible to add event_type and event_group information to the dict so that events can be filtered easily. (Not available yet, but I will add this once the patch is merged in master.)
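Until that metadata lands, filtering can be done on the receiving end by whitelisting event types. A minimal sketch; which events count as critical, and the event-name strings themselves, are assumptions of mine:

```python
# Events we treat as critical for alerting purposes (illustrative set).
CRITICAL_EVENTS = {"VOLUME_STOP", "VOLUME_DELETE", "PEER_DETACH"}

def should_alert(gluster_event):
    """Return True when the incoming event warrants an alert."""
    return gluster_event.get("event") in CRITICAL_EVENTS
```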
Is documentation available to learn more about the eventing design and internals?
The design spec is available here (it also discusses Websockets, though we don't have Websocket support yet). Usage documentation is available in the commit message of the patch (http://review.gluster.org/14248).
Comments and Suggestions Welcome.
Tiering is a powerful feature in Gluster. It divides the available storage into two parts: the hot tier populated by small fast storage devices like SSDs or a RAMDisk, and the cold tier populated by large slow devices like mechanical HDDs. By placing most recently accessed files in the hot tier, Gluster can quickly process …Read more
Important happenings for Gluster this month: 3.7.14 released 3.8.3 released CFP for Gluster Developer Summit open until August 31st gluster-users: [Gluster-users] release-3.6 end of life http://www.gluster.org/pipermail/gluster-users/2016-August/028078.html – Joe requests a review of the 3.6 EOL proposal [Gluster-users] The out-of-order GlusterFS 3.8.3 release addresses a usability regression http://www.gluster.org/pipermail/gluster-users/2016-August/028155.html Niels de Vos announces 3.8.3 [Gluster-users] GlusterFS-3.7.14 released …Read more
Important happenings for Gluster this month: First stable update for 3.8 is available, GlusterFS 3.8.1 fixes several bugs Gluster Developers Summit: October 6, 7, 2016 directly following LinuxCon Berlin This is an invite-only event, but you can apply for an invitation. Deadline for application is July 31, 2016. Apply for an invitation: http://goo.gl/forms/JOEzoimW9qVV4jdz1 Gluster-users: Aravinda …Read more
Announcing 3.8! As of June 14, 3.8 is released for general use. The 3.8 release focuses on: containers with inclusion of Heketi hyperconvergence ecosystem integration protocol improvements with NFS Ganesha http://blog.gluster.org/2016/06/glusterfs-3-8-released/ Please note that this release also marks the end of updates for Gluster 3.5. Upcoming Events: Red Hat Summit, June 28-30 Gluster related talks: …Read more
Important happenings for Gluster this month: We’re closing in on a 3.8 release, with release candidate 2 released on May 24th. (http://www.gluster.org/pipermail/gluster-devel/2016-May/049642.html) Our 3.8 roadmap of features is available at: https://www.gluster.org/community/roadmap/3.8/ Our current timeline is to have a release in June, so update your release notes! Gluster Developers Summit: October 6, 7 directly following LinuxCon …Read more
Lots of people at Vault this week! Last month’s newsletter included a list of all of the Gluster related talks, so if you’re here, come by the Red Hat booth and say hi! New things: 3.7.11 Released: https://www.gluster.org/pipermail/gluster-devel/2016-April/049155.html We’ve got a new Events (https://www.gluster.org/events/) page to replace the publicpad. (https://public.pad.fsfe.org/p/gluster-events) – Feel free to add …Read more
This is first post in a series of blog posts where we will discuss deduplication in a distributed data system. Here we will discuss different types of deduplication and various approaches taken in a distributed data systems to get optimum performance and storage efficiency. But first basics of deduplication! “Data deduplication (often called “intelligent compression” […]
What a busy month this past month for Gluster! We’ve got updates from SCaLE, FOSDEM, our Developer Gatherings in Brno, DevConf, noteworthy threads from the mailing lists, and upcoming events. From SCaLE: Richard Wareing gave a talk at the Southern California Linux Expo about Scaling Gluster at Facebook. More at Scaling GlusterFS at Facebook From …Read more
In November 2015, we did our annual Gluster Community Survey, and we had some great responses and turnout! We’ve taken some of the highlights and distilled them down for our overall community to review. Some interesting things: 68% of respondents have been using Gluster for less than 2 years. 3 shall be the number:The most …Read more
I have been working in GlusterFS for quite some time now. As you might know that GlusterFS is an open source distributed file-system. What differentiate Gluster from other distributed file-system is its scale-out nature, data access without metad…
Sometimes the solutions we put in place turn out even better than what we originally hoped. That could sum of the experience of Belgian Internet Service Provider RIS Belgium,which turned to Gluster to solve the problem of distributed storage and ended up getting more benefit from the solution than they expected. Initially RIS, a web …Read more
NFS-Ganesha 2.3 is rapidly winding down to release and it has a bunch of new things in it that make it fairly compelling. A lot of people are also starting to use Red Hat Gluster Storage with the NFS-Ganesha NFS server that is part of that package. Setting up a highly available NFS-Ganesha system using …Read more
We’ve just wrapped up a great week at LinuxCon Europe 2016 in Dublin with a great showing from the Gluster community! BitRot Detection in GlusterFS – Gaurav Garg, Red Hat & Venky Shankar Advancements in Automatic File Replication in Gluster – Ravishankar N Gluster for Sysadmins – Dustin Black Open Storage in the Enterprise with …Read more
Since we did not have any weekly Gluster news go out in September, this post tries to capture and summarize action from the entire month of September 2015. == General News == GlusterFS won yet another Bossie in the open source platforms, infrastructure, management, and orchestration software category. Long time users of the project might …Read more