Aggregated news from external sources
We occasionally find leaks in Gluster via bugs filed by users and customers.
We clearly benefit from checking for memory leaks and address corruption
ourselves. The usual way has been to run Gluster under Valgrind. The difference
with ASAN is that we can compile the binary with ASAN instrumentation, and then
anyone can run their tests on top of that binary; it should crash whenever it
hits a memory leak or memory corruption. We've already fixed at least one bug
with the help of an ASAN traceback.
Here’s how you run Gluster under ASAN.
./autogen.sh
./configure --enable-gnfs --enable-debug --silent --sanitize=address
You need to make sure you have
libasan installed or else this might error out.
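Put together, the full sequence looks something like the sketch below. The `autogen.sh` and `configure` invocations are from the post; the package-install, `make`, and `make install` steps are the standard autotools workflow (the `dnf` package name and `-j4` parallelism are illustrative assumptions, not something the post specifies).

```shell
# Install the ASAN runtime first, or configure/compile may error out.
# (Package name assumed for a Fedora-style system.)
sudo dnf install -y libasan

# From a Gluster source checkout: generate the build system and configure
# with ASAN instrumentation (--sanitize=address passes -fsanitize=address
# down to the compiler).
./autogen.sh
./configure --enable-gnfs --enable-debug --silent --sanitize=address

# Then compile and install as usual.
make -j4
sudo make install
```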
Once this is done, compile and install as you normally would, then run tests
and see how it behaves. There are problems with this approach, though. If
there's a leak in the CLI, ASAN is going to complain about it every single
time. That noise doesn't mean fixing it is important: the Gluster CLI is going
away soon, and it isn't a long-running daemon anyway. It starts, does its job,
and exits immediately.
The tricky part is that ASAN catches memory you've forgotten to free; it does
not catch memory you've allocated unnecessarily. In the near future, I want to
create downloadable RPMs which you can download and run tests against.
Source: nigelb (Building Gluster with Address Sanitizer)
Progress cannot be made without change. As technologists, we recognize this every day. Most of the time, these changes are iterative: progressive additions of features to projects like Gluster. Sometimes those changes are small, and sometimes not. And that’s, of course, just talking about our project. But one of the...
Upcoming Community Happy Hour at Red Hat Summit! Tue, May 7, 2019, 6:30 PM – 7:30 PM EDT. https://cephandglusterhappyhour_rhsummit.eventbrite.com has all the details.
Gluster 7 Roadmap: discussion kicked off for our 7 roadmap on the mailing lists; see [Gluster-users] GlusterFS v7.0 (and v8.0) roadmap discussion, https://lists.gluster.org/pipermail/gluster-users/2019-March/036139.html, for more details. Community...