<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Thanks Joe,<br>
for the answers!<br>
<br>
Apparently I was not clear enough about the set-up.<br>
The Gluster cluster consists of 3 nodes with 14 bricks each. The
bricks are formatted as xfs and mounted locally as xfs. There is one
volume, type: Distributed-Replicate (replica 2), configured so
that bricks are mirrored on two different nodes.<br>
<br>
The NFS mounts which were alive but not in use during the reboot
when the problem started are from clients (2 XenServer machines
configured as a pool - a shared storage set-up). The comparisons I
give below are between (other) clients mounting via either
glusterfs or NFS. The problem is similar, with the exception that
the first listing (via ls) after a fresh mount via NFS actually
does find the files with data; a second listing only finds the
0-bit file with the same name.<br>
<br>
So can all the 0-bit files in mode 0644 be safely removed?<br>
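If so, I would first just list them before deleting anything, with
something like this (assuming /export is the brick root on every
node, as in the listings below):<br>
find /export/* -path '*/.glusterfs' -prune -o -type f -size 0
-perm 0644 -print<br>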
<br>
Why do I see three files with the same name (and modification
timestamp etc.) via either a glusterfs or NFS mount from a client?
Deleting one of the three will probably not solve the issue
either; this looks to me like an indexing issue in the gluster
cluster.<br>
<br>
How do I get Gluster to replicate the files correctly: only two
copies of the same file, not three, and on two bricks on
different machines?<br>
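Would a full self-heal be the right tool here, or would it just
propagate the 0-bit copies? I.e. something like this, with sr_vol01
being the volume name that appears in the xattrs below:<br>
gluster volume heal sr_vol01 full<br>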
<br>
Cheers,<br>
Olav<br>
<br>
<br>
<br>
<br>
On 20/02/15 21:51, Joe Julian wrote:<br>
</div>
<blockquote cite="mid:54E79E41.2010704@julianfamily.org" type="cite">
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<br>
<div class="moz-cite-prefix">On 02/20/2015 12:21 PM, Olav Peeters
wrote:<br>
</div>
<blockquote cite="mid:54E7974B.5080607@gmail.com" type="cite">
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
<div class="moz-cite-prefix">Let's take one file
(3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd) as an example...<br>
On the 3 nodes, where all bricks are formatted as XFS and
mounted in /export, 272b2366-dfbf-ad47-2a0f-5d5cc40863e3 is
the mount point of an NFS shared storage connection from
the XenServer machines:<br>
</div>
</blockquote>
Did I just read this correctly? Your bricks are NFS mounts? i.e.,
GlusterFS Client <-> GlusterFS Server <-> NFS
<-> XFS<br>
<blockquote cite="mid:54E7974B.5080607@gmail.com" type="cite">
<div class="moz-cite-prefix"> <br>
[root@gluster01 ~]# find
/export/*/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/ -name '300*'
-exec ls -la {} \;<br>
-rw-r--r--. 2 root root 44332659200 Feb 17 23:55
/export/brick13gfs01/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
</div>
</blockquote>
Presumably, this is the actual file.<br>
<blockquote cite="mid:54E7974B.5080607@gmail.com" type="cite">
<div class="moz-cite-prefix"> -rw-r--r--. 2 root root 0 Feb 18
00:51
/export/brick14gfs01/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
</div>
</blockquote>
This is not a linkfile. Note it's mode 0644. How it got there with
those permissions would be a matter of history and would require
information that's probably lost.<br>
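For comparison, an actual DHT linkfile would be mode 1000 (shown
as ---------T by ls) and would carry the
trusted.glusterfs.dht.linkto xattr, which you could check on any
suspect file with something like:<br>
getfattr -n trusted.glusterfs.dht.linkto -e text
/export/brick14gfs01/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>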
<blockquote cite="mid:54E7974B.5080607@gmail.com" type="cite">
<div class="moz-cite-prefix"> <br>
[root@gluster02 ~]# find
/export/*/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/ -name '300*'
-exec ls -la {} \;<br>
-rw-r--r--. 2 root root 44332659200 Feb 17 23:55
/export/brick13gfs02/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
<br>
[root@gluster03 ~]# find
/export/*/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/ -name '300*'
-exec ls -la {} \;<br>
-rw-r--r--. 2 root root 44332659200 Feb 17 23:55
/export/brick13gfs03/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 2 root root 0 Feb 18 00:51
/export/brick14gfs03/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
</div>
</blockquote>
Same analysis as above.<br>
<blockquote cite="mid:54E7974B.5080607@gmail.com" type="cite">
<div class="moz-cite-prefix"> <br>
So: 3 files with data, plus 2 0-bit files with the same name.<br>
<br>
Checking the 0-bit files:<br>
[root@gluster01 ~]# getfattr -m . -d -e hex
/export/brick14gfs01/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
getfattr: Removing leading '/' from absolute path names<br>
# file:
export/brick14gfs01/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-34=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-35=0x000000000000000000000000<br>
trusted.gfid=0xaefd184508414a8f8408f1ab8aa7a417<br>
<br>
[root@gluster03 ~]# getfattr -m . -d -e hex
/export/brick14gfs03/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
getfattr: Removing leading '/' from absolute path names<br>
# file:
export/brick14gfs03/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-34=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-35=0x000000000000000000000000<br>
trusted.gfid=0xaefd184508414a8f8408f1ab8aa7a417<br>
<br>
This is not a glusterfs link file since there is no
"trusted.glusterfs.dht.linkto", am I correct? <br>
</div>
</blockquote>
You are correct.<br>
<blockquote cite="mid:54E7974B.5080607@gmail.com" type="cite">
<div class="moz-cite-prefix"> <br>
And checking the "good" files:<br>
<br>
# file:
export/brick13gfs01/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-32=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-33=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-34=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-35=0x000000010000000100000000<br>
trusted.gfid=0xaefd184508414a8f8408f1ab8aa7a417<br>
<br>
[root@gluster02 ~]# getfattr -m . -d -e hex
/export/brick13gfs02/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
getfattr: Removing leading '/' from absolute path names<br>
# file:
export/brick13gfs02/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-32=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-33=0x000000000000000000000000<br>
trusted.gfid=0xaefd184508414a8f8408f1ab8aa7a417<br>
<br>
[root@gluster03 ~]# getfattr -m . -d -e hex
/export/brick13gfs03/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
getfattr: Removing leading '/' from absolute path names<br>
# file:
export/brick13gfs03/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000<br>
trusted.afr.dirty=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-40=0x000000000000000000000000<br>
trusted.afr.sr_vol01-client-41=0x000000000000000000000000<br>
trusted.gfid=0xaefd184508414a8f8408f1ab8aa7a417<br>
<br>
<br>
<br>
Seen from a client via a glusterfs mount:<br>
[root@client ~]# ls -al
/mnt/glusterfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/300*<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/glusterfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/glusterfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/glusterfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
<br>
<br>
<br>
Via NFS (just after unmounting and remounting the
volume):<br>
[root@client ~]# ls -al
/mnt/nfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/300*
<br>
-rw-r--r--. 1 root root 44332659200 Feb 17 23:55
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 44332659200 Feb 17 23:55
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 44332659200 Feb 17 23:55
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
<br>
Doing the same list a couple of seconds later:<br>
[root@client ~]# ls -al
/mnt/nfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/300*<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
And again, and again, and again:<br>
[root@client ~]# ls -al
/mnt/nfs/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/300*<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
-rw-r--r--. 1 root root 0 Feb 18 00:51
/mnt/test/272b2366-dfbf-ad47-2a0f-5d5cc40863e3/3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd<br>
<br>
This really seems odd. Why do we get to see the real data files
only once?<br>
<br>
It seems more and more that this crazy file duplication (and
the writing of sticky-bit files) was actually triggered by
rebooting one of the three nodes while there was still an
active NFS connection (even though there was no data exchange
at all): all 0-bit files (of the non-sticky-bit type) were
created at either 00:51 or 00:41, the exact moment one of
the three nodes in the cluster was rebooted. This would mean
that replication with GlusterFS currently creates hardly any
redundancy. Quite the opposite: if one of the machines goes
down, all of your data gets seriously disorganised. I am busy
configuring a test installation to see how this can best be
reproduced for a bug report...<br>
<br>
Does anyone have a suggestion how best to get rid of the
duplicates, or rather to get this mess organised the way it
should be? This is a cluster with millions of files. A rebalance
does not fix the issue, nor does a rebalance fix-layout. Since
this is a replicated volume, all files should be there 2x, not
3x. Can I safely just remove all the 0-bit files outside of
the .glusterfs directory, including the sticky-bit files?<br>
<br>
The empty 0-bit files outside of .glusterfs I can probably
safely remove on every brick like this:<br>
find /export/* -path '*/.glusterfs' -prune -o -type f -size 0
-perm 1000 -exec rm {} \;<br>
no?<br>
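I would do a dry run first, swapping the -exec rm for -ls, to
double-check modes and sizes before anything is deleted:<br>
find /export/* -path '*/.glusterfs' -prune -o -type f -size 0
-perm 1000 -ls<br>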
<br>
Thanks!<br>
<br>
Cheers,<br>
Olav<br>
On 18/02/15 22:10, Olav Peeters wrote:<br>
</div>
<blockquote cite="mid:54E4FFAB.8040203@gmail.com" type="cite">
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
<div class="moz-cite-prefix">Thanks Tom and Joe,<br>
for the fast response!<br>
<br>
Before I started my upgrade I stopped all clients using the
volume and stopped all VMs with VHDs on the volume, but I
guess (and this may be the missing thing to reproduce this
in a lab) I did not detach the NFS shared storage mount from
the XenServer pool to this volume, since that is an extremely
risky business. I also did not stop the volume. This, I guess,
was a bit stupid, but since I had done upgrades in the past this
way without any issues I skipped this step (a really bad
habit). I'll make amends and file a proper bug report :-). I
agree with you Joe, this should never happen, even when
someone ignores the advice of stopping the volume. If it
were also necessary to detach shared storage NFS
connections to a volume, then frankly glusterfs would be
unusable in a private cloud. No one can afford downtime of the
whole infrastructure just for a glusterfs upgrade. Ideally a
replicated gluster volume should even be able to remain
online and in use during (at least a minor version) upgrade.<br>
<br>
I don't know whether a heal was maybe busy when I started
the upgrade. I forgot to check. I did check the CPU activity
on the gluster nodes, which was very low (in the 0.0X range
via top), so I doubt it. I will add this to the bug report
as a suggestion should they not be able to reproduce with an
open NFS connection.<br>
<br>
By the way, is it sufficient to do:<br>
service glusterd stop<br>
service glusterfsd stop<br>
and then a:<br>
ps aux | grep gluster<br>
to see if everything has stopped, and kill any leftovers
should this be necessary?<br>
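Or would something like:<br>
pgrep -lf gluster<br>
be a simpler check for leftover gluster processes?<br>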
<br>
For the fix, do you agree that if I run e.g.:<br>
find /export/* -type f -size 0 -perm 1000 -exec /bin/rm {}
\;<br>
on every node (with /export being the location of all my
bricks), this will be safe, also in a replicated set-up?<br>
No necessary 0-bit files will be deleted in e.g. the
.glusterfs of every brick?<br>
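If they would be, I guess I should prune .glusterfs explicitly,
i.e.:<br>
find /export/* -path '*/.glusterfs' -prune -o -type f -size 0
-perm 1000 -exec /bin/rm {} \;<br>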
<br>
Thanks for your support!<br>
<br>
Cheers,<br>
Olav<br>
<br>
<br>
<br>
<br>
<br>
On 18/02/15 20:51, Joe Julian wrote:<br>
</div>
<blockquote cite="mid:54E4ED3D.2070707@julianfamily.org"
type="cite">
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
<br>
<div class="moz-cite-prefix">On 02/18/2015 11:43 AM, <a
moz-do-not-send="true" class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a>
wrote:<br>
</div>
<blockquote
cite="mid:20150218124308.b2b02683b6fce9ed61e10e2e9bfae354.a3d1725a9b.mailapi@email04.secureserver.net"
type="cite">
<div>Hi Olav,</div>
<div><br class="Apple-interchange-newline">
I have a hunch that our problem was caused by improper
unmounting of the gluster volume, and have since found
that the proper order should be: kill all jobs using
volume -> unmount volume on clients -> gluster
volume stop -> stop gluster service (if necessary)</div>
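<div>A rough sketch of that order (the volume name myvol and the
mount point are placeholders for whatever you use):<br>
umount /mnt/myvol # on each client<br>
gluster volume stop myvol # on one of the servers<br>
service glusterd stop # on each server, if necessary<br>
</div>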
<div> </div>
<div>In my case, I wrote a Python script to find duplicate
files on the mounted volume, then delete the
corresponding link files on the bricks (making sure to
also delete files in the .glusterfs directory)</div>
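<div>Something along these lines might also surface the
duplicates directly on a mounted volume, since they show up as
repeated identical paths in a listing. An untested sketch, with
/mnt/myvol as a placeholder mount point:<br>
find /mnt/myvol -type f | sort | uniq -d<br>
</div>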
<div> </div>
<div>However, your find command was also suggested to me
and I think it's a simpler solution. I believe removing
all link files (even ones that are not causing
duplicates) is fine, since on the next file access gluster
will do a lookup on all bricks and recreate any link
files if necessary. Hopefully a gluster expert can chime
in on this point as I'm not completely sure.</div>
</blockquote>
<br>
You are correct.<br>
<br>
<blockquote
cite="mid:20150218124308.b2b02683b6fce9ed61e10e2e9bfae354.a3d1725a9b.mailapi@email04.secureserver.net"
type="cite">
<div> </div>
<div>Keep in mind your setup is somewhat different from
mine, as I have only 5 bricks with no replication.</div>
<div> </div>
<div>Regards,</div>
<div>Tom</div>
<div> </div>
<blockquote class="threadBlockQuote" style="border-left:
2px solid #C2C2C2; padding-left: 3px; margin-left: 4px;">---------
Original Message ---------
<div>Subject: Re: [Gluster-users] Hundreds of duplicate
files<br>
From: "Olav Peeters" <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:opeeters@gmail.com"><opeeters@gmail.com></a><br>
Date: 2/18/15 10:52 am<br>
To: <a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>,
<a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a><br>
<br>
<div class="moz-cite-prefix">Hi all,<br>
I'm having this problem after upgrading from 3.5.3 to
3.6.2.<br>
At the moment I am still waiting for a heal to
finish (on a 31TB volume with 42 bricks, replicated
over three nodes).<br>
<br>
Tom,<br>
how did you remove the duplicates?<br>
with 42 bricks I will not be able to do this
manually..<br>
Did a:<br>
find $brick_root -type f -size 0 -perm 1000 -exec
/bin/rm {} \;<br>
work for you?<br>
<br>
Should this type of thing ideally not be checked and
mended by a heal?<br>
<br>
Does anyone have an idea yet how this happens in the
first place? Can it be connected to upgrading?<br>
<br>
Cheers,<br>
Olav<br>
<pre class="moz-signature"> </pre>
On 01/01/15 03:07, <a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a>
wrote:</div>
<blockquote
cite="mid:20141231190720.b2b02683b6fce9ed61e10e2e9bfae354.1100adfcd4.mailapi@email04.secureserver.net">
<div>No, the files can be read on a newly mounted
client! I went ahead and deleted all of the link
files associated with these duplicates, and then
remounted the volume. The problem is fixed!</div>
<div>Thanks again for the help, Joe and Vijay.</div>
<div> </div>
<div>Tom</div>
<div> </div>
<blockquote class="threadBlockQuote"
style="border-left: 2px solid #C2C2C2;
padding-left: 3px; margin-left: 4px;">---------
Original Message ---------
<div>Subject: Re: [Gluster-users] Hundreds of
duplicate files<br>
From: "Vijay Bellur" <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:vbellur@redhat.com"><vbellur@redhat.com></a><br>
Date: 12/28/14 3:23 am<br>
To: <a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a>,
<a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
<br>
On 12/28/2014 01:20 PM, <a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a>
wrote:<br>
> Hi Vijay,<br>
> Yes the files are still readable from the
.glusterfs path.<br>
> There is no explicit error. However, trying
to read a text file in<br>
> python simply gives me null characters:<br>
><br>
> >>>
open('ott_mf_itab').readlines()<br>
>
['\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00']<br>
><br>
> And reading binary files does the same<br>
><br>
<br>
Is this behavior seen with a freshly mounted
client too?<br>
<br>
-Vijay<br>
<br>
> --------- Original Message ---------<br>
> Subject: Re: [Gluster-users] Hundreds of
duplicate files<br>
> From: "Vijay Bellur" <a
moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:vbellur@redhat.com"><vbellur@redhat.com></a><br>
> Date: 12/27/14 9:57 pm<br>
> To: <a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a>,
<a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
><br>
> On 12/28/2014 10:13 AM, <a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:tbenzvi@3vgeomatics.com">tbenzvi@3vgeomatics.com</a>
wrote:<br>
> > Thanks Joe, I've read your blog post
as well as your post<br>
> regarding the<br>
> > .glusterfs directory.<br>
> > I found some unneeded duplicate files
which were not being read<br>
> > properly. I then deleted the link file
from the brick. This always<br>
> > removes the duplicate file from the
listing, but the file does not<br>
> > always become readable. If I also
delete the associated file in the<br>
> > .glusterfs directory on that brick,
then some more files become<br>
> > readable. However this solution still
doesn't work for all files.<br>
> > I know the file on the brick is not
corrupt as it can be read<br>
> directly<br>
> > from the brick directory.<br>
><br>
> For files that are not readable from the
client, can you check if the<br>
> file is readable from the .glusterfs/ path?<br>
><br>
> What is the specific error that is seen
while trying to read one such<br>
> file from the client?<br>
><br>
> Thanks,<br>
> Vijay<br>
><br>
><br>
><br>
>
_______________________________________________<br>
> Gluster-users mailing list<br>
> <a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a moz-do-not-send="true"
class="moz-txt-link-freetext"
href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
></div>
</blockquote>
<br>
</blockquote>
</div>
</blockquote>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>