[Gluster-users] HadoopFS-like gluster setup

Peng Zhao blackass at gmail.com
Wed Jul 1 10:43:18 UTC 2009


BTW, has anyone configured Gluster like HDFS (unify with automatic
replication)? Could someone share their volfile here?
I think I'm not the only fan waiting here ;-)
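The shape I'm after is roughly this on the client side. Just a sketch to
make the question concrete; node-* and ns-node are placeholders for
protocol/client volumes:

volume pair-a
 type cluster/replicate
 subvolumes node-0 node-1
end-volume

volume pair-b
 type cluster/replicate
 subvolumes node-2 node-3
end-volume

volume unified
 type cluster/unify
 option scheduler rr
 option namespace ns-node
 subvolumes pair-a pair-b
end-volume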
Gnep
On Wed, Jul 1, 2009 at 6:39 PM, Peng Zhao <blackass at gmail.com> wrote:

> OK, my mistake: the fuse kernel module was missing. I built it and ran
> modprobe fuse. The previous error is gone, but there are some new ones.
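>
> For reference, this is roughly how I checked and loaded it, assuming a
> stock 2.6 kernel with fuse built as a module:
>
> modinfo fuse          # is the module available at all?
> modprobe fuse         # load it
> lsmod | grep fuse     # confirm it is loaded
> ls -l /dev/fuse       # device node the glusterfs client needs
>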
> Here are the DEBUG-level messages:
> [2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator:
> dlsym(notify) on /usr/lib64/glusterfs/2.0.2/xlator/features/locks.so:
> undefined symbol: notify -- neglecting
> [2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator:
> dlsym(notify) on
> /usr/lib64/glusterfs/2.0.2/xlator/performance/io-threads.so: undefined
> symbol: notify -- neglecting
> [2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator:
> dlsym(notify) on
> /usr/lib64/glusterfs/2.0.2/xlator/performance/write-behind.so: undefined
> symbol: notify -- neglecting
> [2009-07-01 18:36:25] D [xlator.c:634:xlator_set_type] xlator:
> dlsym(notify) on /usr/lib64/glusterfs/2.0.2/xlator/performance/io-cache.so:
> undefined symbol: notify -- neglecting
> [2009-07-01 18:36:25] D [glusterfsd.c:1179:main] glusterfs: running in pid
> 6874
> [2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-0:
> defaulting frame-timeout to 30mins
> [2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-0:
> defaulting ping-timeout to 10
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-1:
> defaulting frame-timeout to 30mins
> [2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-1:
> defaulting ping-timeout to 10
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-2:
> defaulting frame-timeout to 30mins
> [2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-2:
> defaulting ping-timeout to 10
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-3:
> defaulting frame-timeout to 30mins
> [2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-3:
> defaulting ping-timeout to 10
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [unify.c:4288:init] unified: namespace node
> specified as compute-5-4
> [2009-07-01 18:36:25] D [scheduler.c:48:get_scheduler] scheduler: attempt
> to load file rr.so
> [2009-07-01 18:36:25] D [unify.c:4320:init] unified: Child node count is 2
> [2009-07-01 18:36:25] D [rr-options.c:188:rr_options_validate] rr: using
> scheduler.limits.min-free-disk = 15 [default]
> [2009-07-01 18:36:25] D [rr-options.c:216:rr_options_validate] rr: using
> scheduler.refresh-interval = 10 [default]
> [2009-07-01 18:36:25] D [client-protocol.c:5948:init] compute-5-4:
> defaulting frame-timeout to 30mins
> [2009-07-01 18:36:25] D [client-protocol.c:5959:init] compute-5-4:
> defaulting ping-timeout to 10
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-4: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [write-behind.c:1859:init] writebehind: disabling
> write-behind for first 1 bytes
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-0: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-0: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-1: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-1: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-2: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-2: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-3: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [client-protocol.c:6276:notify] compute-5-3: got
> GF_EVENT_PARENT_UP, attempting connect on transport
> [2009-07-01 18:36:25] D [io-threads.c:2280:init] brick: io-threads:
> Autoscaling: off, min_threads: 16, max_threads: 16
> [2009-07-01 18:36:25] D [transport.c:141:transport_load] transport: attempt
> to load file /usr/lib64/glusterfs/2.0.2/transport/socket.so
> [2009-07-01 18:36:25] E [socket.c:206:__socket_server_bind] server: binding
> to failed: Address already in use
> [2009-07-01 18:36:25] E [socket.c:209:__socket_server_bind] server: Port is
> already in use
> [2009-07-01 18:36:25] E [server-protocol.c:7631:init] server: failed to
> bind/listen on socket
> [2009-07-01 18:36:25] E [xlator.c:736:xlator_init_rec] xlator:
> Initialization of volume 'server' failed, review your volfile again
> [2009-07-01 18:36:25] E [glusterfsd.c:498:_xlator_graph_init] glusterfs:
> initializing translator failed
> [2009-07-01 18:36:25] E [glusterfsd.c:1191:main] glusterfs: translator
> initialization failed. exiting
> [root at compute-5-0 gluster]# ps aux | grep gluster
>
> I think my volfile is wrong, though I don't know where the problem is.
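> (Looking at the errors again: "Address already in use" makes me think the
> protocol/server section in this same volfile is the culprit. The running
> glusterfsd already holds the listen port, and the mount then tries to
> start a second server from the same file. If that is right, the usual
> layout is a split: a server.vol with the posix, locks, io-threads and
> protocol/server volumes, started via glusterfsd, and a client.vol with the
> protocol/client, replicate, unify and performance volumes, used for the
> mount. Something like this, file names being just examples:
>
> glusterfsd -f /etc/glusterfs/server.vol
> glusterfs -f /etc/glusterfs/client.vol /mnt/gluster
>
> I could be wrong, of course.)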
> BR,
> Gnep
>
> On Wed, Jul 1, 2009 at 2:41 PM, Shehjar Tikoo <shehjart at gluster.com> wrote:
>
>> Peng Zhao wrote:
>>
>>> Hi, all,
>>> I'm new to Gluster, but I find it interesting. I want to set up Gluster
>>> in a way similar to HDFS.
>>> Here is my sample volfile:
>>> volume posix
>>>  type storage/posix
>>>  option directory /data1/gluster
>>> end-volume
>>>
>>> volume locks
>>>  type features/locks
>>>  subvolumes posix
>>> end-volume
>>>
>>> volume brick
>>>  type performance/io-threads
>>>  subvolumes locks
>>> end-volume
>>>
>>> volume server
>>>  type protocol/server
>>>  option transport-type tcp
>>>  option auth.addr.brick.allow *
>>>  subvolumes brick
>>> end-volume
>>>
>>> volume compute-5-0
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host compute-5-0
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume compute-5-1
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host compute-5-1
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume compute-5-2
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host compute-5-2
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume compute-5-3
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host compute-5-3
>>>  option remote-subvolume brick
>>> end-volume
>>>
>>> volume compute-5-4
>>>  type protocol/client
>>>  option transport-type tcp
>>>  option remote-host compute-5-4
>>>  option remote-subvolume brick-ns
>>> end-volume
>>>
>>> volume primary
>>>  type cluster/replicate
>>>  option local-volume-name primary
>>>  subvolumes compute-5-0 compute-5-1
>>> end-volume
>>>
>>> volume secondary
>>>  type cluster/replicate
>>>  option local-volume-name secondary
>>>  subvolumes compute-5-2 compute-5-3
>>> end-volume
>>>
>>> volume unified
>>>  type cluster/unify
>>>  option scheduler rr
>>>  option local-volume-name unified          # do I need this?
>>>  option namespace compute-5-4           # do I need this?
>>>  subvolumes primary secondary
>>> end-volume
>>>
>>> volume writebehind
>>>  type performance/write-behind
>>>  option cache-size 1MB
>>>  subvolumes unified
>>> end-volume
>>>
>>> volume cache
>>>  type performance/io-cache
>>>  option cache-size 512MB
>>>  subvolumes writebehind
>>> end-volume
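>>>
>>> (On the two "do I need this?" comments above: from what I have read,
>>> cluster/unify does need the namespace option, while I believe
>>> local-volume-name can be dropped. A related thing I am unsure about:
>>> compute-5-4 is asked for remote-subvolume brick-ns, so its server
>>> volfile would have to export a separate namespace brick, roughly like
>>> this, assuming /data1/gluster-ns as the namespace directory:
>>>
>>> volume posix-ns
>>>  type storage/posix
>>>  option directory /data1/gluster-ns
>>> end-volume
>>>
>>> volume brick-ns
>>>  type features/locks
>>>  subvolumes posix-ns
>>> end-volume
>>>
>>> with brick-ns added to that server's subvolumes and an
>>> auth.addr.brick-ns.allow rule.)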
>>>
>>> The glusterfsd server is up and running with no error messages in its
>>> logs. However, I got errors when I tried to mount:
>>> [2009-07-01 09:37:36] E [xlator.c:736:xlator_init_rec] xlator:
>>> Initialization of volume 'fuse' failed, review your volfile again
>>> [2009-07-01 09:37:36] E [glusterfsd.c:498:_xlator_graph_init] glusterfs:
>>> initializing translator failed
>>> [2009-07-01 09:37:36] E [glusterfsd.c:1191:main] glusterfs: translator
>>> initialization failed. exiting
>>>
>>> I guess this is a very common question. Does anyone have any ideas?
>>> BR,
>>>
>>
>> Try generating the log file with the log level set to DEBUG. You
>> can do so by using the "-L DEBUG" command line parameter.
>>
>> The debug log level will give us a better idea of what
>> exactly is failing.
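>>
>> For example, something like this, with placeholder paths:
>>
>> glusterfs -L DEBUG -l /tmp/glusterfs.log -f /etc/glusterfs/client.vol /mnt/gluster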
>>
>> -Shehjar
>>
>>> Gnep
>>>
>>
>>
>