[Gluster-users] afr with gluster command line - possible?

lejeczek peljasz at yahoo.co.uk
Mon Apr 23 17:31:29 UTC 2012


what I'm saying is that it was possible in previous 
versions; I don't know when this changed.
namely, the (FUSE) client would mount just one 
brick/server, and the bricks would then sort replication 
out between themselves.
it used to work this way, right? I'm sure I had it set up 
this way, AFR + unify.

what I'm asking is whether something similar is possible 
in 3.2.x?
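For reference, the old-style setup I mean looked roughly like this hand-written server volfile - a sketch from memory, the exact translator names and options varied between releases - with cluster/afr loaded on the server side so a client only needed to know one endpoint:

```
# Sketch of a legacy server-side AFR volfile (pre-glusterd era).
# Paths, hostnames and option names here are illustrative only.

volume posix
  type storage/posix
  option directory /export/brick
end-volume

volume remote-peer
  type protocol/client
  option transport-type tcp
  option remote-host boxB          # the other brick, over the fast net
  option remote-subvolume posix
end-volume

volume afr
  type cluster/afr                 # replication done on the server
  subvolumes posix remote-peer
end-volume

volume server
  type protocol/server
  option transport-type tcp
  subvolumes afr
end-volume
```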

I'm curious why this changed - it is now the client that 
looks after replication.
I did some quick manual tampering with the FUSE configs, 
and I see that if the client is told to mount only one 
brick, say the one local to the client, then that client's 
operations do not get replicated over to the other 
brick(s).
what's the benefit of having the client do it, and what is 
the logic behind it? before, glusterd/the server side would 
look after such a volume, and one would think that is the 
best place to put this trust and knowledge - let the 
servers manage the replicated volume and let a client jump 
in wherever convenient.
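For comparison, in the current client-side model the mount command names only one server, but that server is used just to fetch the volume definition; the FUSE client then opens its own connection to every brick and drives replication itself. A sketch, assuming a replica volume named `myvol` on hosts `server1`/`server2` (the `backupvolfile-server` mount option, where the mount script supports it, only covers volfile fetching, not data traffic):

```shell
# Fetch the volfile from server1; the client then connects to ALL
# bricks listed in it, so every replicated write goes out twice
# over the client's own network links.
mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/gluster

# This write is sent by the client to both bricks directly:
echo test > /mnt/gluster/file
```

This is why mounting "just one brick" no longer gives you replication: the brick processes themselves no longer forward writes to each other on the normal I/O path.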

and what about setups where a client cannot access a 
certain brick because that brick is out of the client's 
reach?

one practical test case, which I'd imagine is common - is 
it possible?


                    CLIENTs
             <very congested net>
            /                    \
      IP Ab                       IP Bb
     brick A                     brick B
      IP Aa <- very fast net -> IP Ba


one would think clients access Ab & Bb, and replication 
happens via Aa & Ba.
I guess it goes as deep as the inner workings of gluster 
itself; for me, in the past (AFR) this worked really nicely 
when some clients had access only to one brick (on a 
separate net) whereas others could reach the second, and 
the bricks had a fast link to each other, over which the 
replication happened.
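One partial workaround sometimes used for a two-network layout like the one above is to create the volume with hostnames rather than raw IPs, and let each side resolve those names to the interface it can actually reach. A sketch, with assumed names `brickA`/`brickB` and illustrative addresses - note this is an /etc/hosts trick, not a supported multi-homing feature:

```shell
# Create the replicated volume using hostnames (run on one server):
gluster volume create myvol replica 2 brickA:/export/brick brickB:/export/brick
gluster volume start myvol

# Name resolution differs per network:
#   on the servers, /etc/hosts maps the names to the fast net (Aa/Ba)
#   on the clients, /etc/hosts maps the names to the client net (Ab/Bb)
```

The caveat, given client-side AFR, is that ordinary replica writes still travel from the client to both bricks over the client net; only internal server-to-server traffic such as self-heal benefits from the fast link.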

maybe I would be better off with some other 'mirroring' 
solution - can anybody recommend anything?
cheers

On 23/04/12 17:13, Brian Candler wrote:
> On Mon, Apr 23, 2012 at 04:44:32PM +0100, lejeczek wrote:
>>     yes, precisely
>>     in the past I had running AFRs, this way
>>      box A loopback client ->  box A server<->  box B server<- box B
>>     loopback client
>>     but similarly replace local loopback client with legitimate separate
>>     client that would have only access to one brick's one NIC
> Sorry, you've lost me again. What do you mean by a "loopback client" or a
> "legitimate client"?
>
> A client is a client, and a brick is a brick.
>
> If you happen to run a client on the same physical hardware as a brick, that
> makes no difference to the architecture.
>
> So if you have server1 and server2, both running as bricks, that's fine.
> Then you create an AFR volume out of those two bricks, that's fine too.
> Then if a [FUSE native] client mounts this volume it will talk to both
> bricks, that's fine too.
>
> If the client happens to be located on server1, then it makes no difference
> - it too will talk to server1 and server2.
>
> Your diagram suggests that "box A server" (brick) talks to "box B server",
> but I don't think it does, unless you're doing NFS. More accurately I'd
> say it's like this:
>
>       +---------+     +----------+
>       | client  |     |  .client |
>       |    |  `---------'-.  |   |
>       |    |  ,---------'  | |   |
>       |    v v  |     |    v v   |
>       |  brick  |     |   brick  |
>       +---------+     +----------+
>
>>     the simple idea was the client did not have to know about all the
>>     bricks/servers
> The native FUSE client learns this from the volume info and configures
> itself automatically.  Is there a problem with this?
>
>>     and I'd think this would be what most of us would like; there would
>>     be quite a few situations where this is greatly helpful
>>     nowadays this seems impossible, or am I wrong?
> Unfortunately I don't understand what you're asking for, so I can't really
> give any suggestions!
>
> Regards,
>
> Brian.
>

