<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace"><div class="gmail_default" style="font-family:monospace,monospace">Okay, I think this has to do with Gluster NFS even though I was not accessing the Gluster volume via NFS.<br><br></div><div class="gmail_default" style="font-family:monospace,monospace">Directly on the bricks the files look like this:<br><br></div><div class="gmail_default" style="font-family:monospace,monospace">Hgluster01<br></div><div class="gmail_default" style="font-family:monospace,monospace"><div style="margin-left:40px">-r--r--r--. 2 602 602 832M Apr 3 06:17 scan_89.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_90.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_91.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_92.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_94.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_95.tar.bak<br>-r--r--r--. 2 602 602 839M Apr 3 11:39 scan_96.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_98.tar.bak<br>---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_99.tar.bak<br><br></div>Hgluster02<br><div style="margin-left:40px">---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_89.tar.bak<br>-r--r--r--. 2 602 602 869939200 Apr 3 07:36 scan_90.tar.bak<br>-r--r--r--. 2 602 602 868331520 Apr 3 09:36 scan_91.tar.bak<br>-r--r--r--. 2 602 602 870092800 Apr 3 09:37 scan_92.tar.bak<br>-r--r--r--. 2 602 602 875448320 Apr 3 09:39 scan_93.tar.bak<br>-r--r--r--. 2 602 602 870656000 Apr 3 09:40 scan_94.tar.bak<br>-r--r--r--. 2 602 602 869396480 Apr 3 11:38 scan_95.tar.bak<br>-r--r--r--. 2 602 602 881858560 Apr 3 11:40 scan_97.tar.bak<br>-r--r--r--. 2 602 602 868188160 Apr 3 11:41 scan_98.tar.bak<br>-r--r--r--. 2 602 602 865382400 Apr 3 13:32 scan_99.tar.bak<br><br></div>So
I turned off NFS and, from a client, tried to move the files to see if
that would get rid of these weird nfsnobody files. It didn't work: after
the move, the new directory still had all of
the nfsnobody files on the bricks.<br><br></div><div class="gmail_default" style="font-family:monospace,monospace">Next,
I used rsync from the same client to copy all of the files to a new
directory and, lo and behold, the nfsnobody files were gone. I tested a
Bareos backup job and the data was read without issue from both nodes.
There were zero empty files.<br><br></div>I guess I will blame Gluster NFS for this one? If you need any more information from me, I would be happy to oblige.</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><span style="font-family:monospace,monospace"><font size="1">___________________________________________<br>¯\_(ツ)_/¯<br>Ryan Clough<br>Information Systems<br><a href="http://www.decisionsciencescorp.com/" target="_blank">Decision Sciences International Corporation</a></font></span><span style="font-family:"Calibri","sans-serif";color:#1f497d"><a href="http://www.decisionsciencescorp.com/" target="_blank"><span style="color:blue"></span></a></span></div></div></div></div></div>
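P.S. For anyone wanting to check for the same condition: the zero-byte, mode-1000 ("---------T") entries in the listings above match the on-disk shape of DHT link files. Below is a minimal sketch of how to spot them, demonstrated on a scratch directory rather than a live brick; on a real setup you would point BRICK at the brick path (e.g. /gluster_data from this thread).

```shell
# Recreate the "---------T" pattern in a scratch dir, then search for such files.
# On a real setup, set BRICK to the brick path (e.g. /gluster_data) instead.
BRICK=$(mktemp -d)
touch "$BRICK/scan_90.tar.bak" && chmod 1000 "$BRICK/scan_90.tar.bak"  # zero-byte, sticky bit only
touch "$BRICK/scan_89.tar.bak" && chmod 444 "$BRICK/scan_89.tar.bak"   # ordinary read-only file for contrast

# Candidate DHT link files: regular files, zero length, mode exactly 1000.
find "$BRICK" -type f -perm 1000 -size 0
```

On a live brick, a genuine link file should also carry the trusted.glusterfs.dht.linkto xattr (readable with `getfattr -n trusted.glusterfs.dht.linkto <file>`), naming the brick that actually holds the data; a zero-byte sticky file without that xattr is a different problem.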
<br><div class="gmail_quote">On Wed, Jul 29, 2015 at 5:53 AM, Pranith Kumar Karampuri <span dir="ltr"><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
hi Ryan,<br>
What do you see in the logs of glusterfs mount and bricks? Do
you think it is possible for you to attach those logs to this thread
so that we can see what could be going on?<br>
<br>
Pranith<div><div class="h5"><br>
<br>
<div>On 07/28/2015 02:32 AM, Ryan Clough
wrote:<br>
</div>
</div></div><blockquote type="cite"><div><div class="h5">
<div dir="ltr">
<div class="gmail_default" style="font-family:monospace,monospace">
<div class="gmail_default" style="font-family:monospace,monospace">Hello,<br>
<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">I have cross-posted
this question in the bareos-users mailing list.<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace"><br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Wondering if anyone
has tried this, because I am unable to back up data that is
mounted via Gluster FUSE or Gluster NFS. Basically, I have
the Gluster volume mounted on the Bareos Director, which also
has the tape changer attached.<br>
<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Here is some
information about versions:<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Bareos version
14.2.2<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Gluster version
3.7.2<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Scientific Linux
version 6.6<br>
<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Our Gluster volume
consists of two nodes in a distribute-only configuration. Here is the
configuration of our volume:<br>
[root@hgluster02 ~]# gluster volume info<br>
<br>
Volume Name: export_volume<br>
Type: Distribute<br>
Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb<br>
Status: Started<br>
Number of Bricks: 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: hgluster01:/gluster_data<br>
Brick2: hgluster02:/gluster_data<br>
Options Reconfigured:<br>
performance.io-thread-count: 24<br>
server.event-threads: 20<br>
client.event-threads: 4<br>
performance.readdir-ahead: on<br>
features.inode-quota: on<br>
features.quota: on<br>
nfs.disable: off<br>
auth.allow:
192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*<br>
server.allow-insecure: on<br>
server.root-squash: on<br>
performance.read-ahead: on<br>
features.quota-deem-statfs: on<br>
diagnostics.brick-log-level: WARNING<br>
<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">When I try to back up
a directory from a Gluster FUSE or Gluster NFS mount and
monitor the network traffic, I only see data being
pulled from the hgluster01 brick. When the job finishes,
Bareos thinks that it completed without error, but the
messages for the job include lots and lots of permission-denied
errors like this:<br>
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot
open
"/export/rclough/psdv-2014-archives-2/scan_111.tar.bak":
ERR=Permission denied.<br>
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot
open "/export/rclough/psdv-2014-archives-2/run_219.tar.bak":
ERR=Permission denied.<br>
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot
open
"/export/rclough/psdv-2014-archives-2/scan_112.tar.bak":
ERR=Permission denied.<br>
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot
open "/export/rclough/psdv-2014-archives-2/run_220.tar.bak":
ERR=Permission denied.<br>
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot
open
"/export/rclough/psdv-2014-archives-2/scan_114.tar.bak":
ERR=Permission denied.<br>
<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">At first I thought
this might be a root-squash problem but, if I try to
read/copy a file using the root user from the Bareos server
that is trying to do the backup, I can read files just fine.<br>
<br>
</div>
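<div class="gmail_default" style="font-family:monospace,monospace">(For what it's worth, reading as root from a client would not rule root-squash out entirely, since squashing can also affect Gluster's own internal file operations on the bricks. One low-risk way to test the theory, assuming a brief window where squashing can be relaxed is acceptable, would be:</div>

```
# Hypothetical test, using the volume name from this thread:
# temporarily disable root-squash, rerun the backup, then re-enable it.
gluster volume set export_volume server.root-squash off
# ... run the Bareos job and check for ERR=Permission denied ...
gluster volume set export_volume server.root-squash on
```

<div class="gmail_default" style="font-family:monospace,monospace">If the errors disappear with squashing off, that points at root-squash rather than Bareos.)<br></div>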
<div class="gmail_default" style="font-family:monospace,monospace">When the job
finishes, it reports that it finished "OK -- with warnings"
but, again, the log for the job is filled with
"ERR=Permission denied" messages. In my opinion, this job
did not finish OK and should be marked Failed. Some of the files
from the HGluster02 brick are backed up, but all of the ones
with permission errors are not. When I restore the job, all
of the files with permission errors are empty.<br>
<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace">Has anyone
successfully used Bareos to back up data from Gluster mounts?
This is an important use case for us, because this is the
largest single volume we have for staging large amounts
of data to be archived.<br>
</div>
<div class="gmail_default" style="font-family:monospace,monospace"><br>
</div>
Thank you for your time,</div>
<div>
<div>
<div dir="ltr">
<div>
<div dir="ltr"><span style="font-family:monospace,monospace"><font size="1">___________________________________________<br>
¯\_(ツ)_/¯<br>
Ryan Clough<br>
Information Systems<br>
<a href="http://www.decisionsciencescorp.com/" target="_blank">Decision Sciences International
Corporation</a></font></span><span style="font-family:"Calibri","sans-serif";color:rgb(31,73,125)"><a href="http://www.decisionsciencescorp.com/" target="_blank"><span style="color:blue"></span></a></span></div>
</div>
</div>
</div>
</div>
</div>
<br>
</div></div><span><font color="#888888">This email and its contents are
confidential. If you are not the intended recipient, please do
not disclose or use the information within this email or its
attachments. If you have received this email in error, please
report the error to the sender by return email and delete this
communication from your records.</font></span>
<br>
<fieldset></fieldset>
<br>
<pre>_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</div>
</blockquote></div><br></div>
<br>