<p dir="ltr">-Atin<br>
Sent from one plus one<br>
On Jan 18, 2016 11:41 AM, "Mark Chaney" <<a href="mailto:mail@lists.macscr.com">mail@lists.macscr.com</a>> wrote:<br>
><br>
> I have a two-node cluster set up with iSCSI, using image files stored on the Gluster cluster as LUNs. They do appear to be syncing, but I have a few questions and would appreciate any help you can give me. Thanks for your time!<br>
><br>
> 1) Why does the second brick show as N for Online?<br>
> 2) Why is the self-heal daemon shown as N/A? How can I correct that, if it needs to be corrected?<br>
The SHD doesn't need to listen on any specific port, and it's showing online, so no issues there.<br>
From the status output, it looks like the brick hasn't started on the gluster2 node. Could you check/send the glusterd and brick logs from the gluster2 node?<br>
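For anyone following along, a sketch of what that check might look like. This assumes the default log location of /var/log/glusterfs and uses the volume and brick names from the status output below; the forced start is the usual way to respawn a brick process that has died:<br>

```shell
# On gluster2: inspect the glusterd log (default location)
tail -n 100 /var/log/glusterfs/glusterd.log

# Brick log names are derived from the brick path: strip the leading
# '/' and replace the remaining '/' characters with '-'.
brick=/var/gluster-storage
log="/var/log/glusterfs/bricks/$(printf '%s' "${brick#/}" | tr '/' '-').log"
tail -n 100 "$log"

# If the brick process simply died, a forced start respawns it
# without affecting the bricks that are already running:
gluster volume start volume1 force
```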
> 3) Should I really be mounting the Gluster volumes on each Gluster node for iSCSI access, or should I be accessing /var/gluster-storage directly?<br>
> 4) If I only have about 72 GB of files stored in Gluster, why is each Gluster host using about 155 GB? Are duplicates stored somewhere, and why?<br>
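[Editor's note, not from the thread: one thing worth ruling out for question 4 is double-counting of the brick's .glusterfs metadata tree, which consists largely of hardlinks to the stored files. Hardlinks share a single inode and consume no extra blocks, which a quick local experiment confirms:]<br>

```shell
# Create a 1 MiB file, hardlink it, and measure the directory:
# the total stays ~1 MiB because both names point at one inode.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/data" bs=1M count=1 2>/dev/null
ln "$tmp/data" "$tmp/link"   # second name, same inode, no extra space
du -sk "$tmp"                # still roughly 1024K, not 2048K
rm -rf "$tmp"
```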
><br>
> root@gluster1:~# gluster volume status volume1<br>
> Status of volume: volume1<br>
> Gluster process TCP Port RDMA Port Online Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick gluster1:/var/gluster-storage 49152 0 Y 3043<br>
> Brick gluster2:/var/gluster-storage N/A N/A N N/A<br>
> NFS Server on localhost 2049 0 Y 3026<br>
> Self-heal Daemon on localhost N/A N/A Y 3034<br>
> NFS Server on gluster2 2049 0 Y 2738<br>
> Self-heal Daemon on gluster2 N/A N/A Y 2743<br>
><br>
> Task Status of Volume volume1<br>
> ------------------------------------------------------------------------------<br>
> There are no active volume tasks<br>
><br>
> root@gluster1:~# gluster peer status<br>
> Number of Peers: 1<br>
><br>
> Hostname: gluster2<br>
> Uuid: abe7ee21-bea9-424f-ac5c-694bdd989d6b<br>
> State: Peer in Cluster (Connected)<br>
> root@gluster1:~#<br>
> root@gluster1:~# mount | grep gluster<br>
> gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)<br>
><br>
><br>
> root@gluster2:~# gluster volume status volume1<br>
> Status of volume: volume1<br>
> Gluster process TCP Port RDMA Port Online Pid<br>
> ------------------------------------------------------------------------------<br>
> Brick gluster1:/var/gluster-storage 49152 0 Y 3043<br>
> Brick gluster2:/var/gluster-storage N/A N/A N N/A<br>
> NFS Server on localhost 2049 0 Y 2738<br>
> Self-heal Daemon on localhost N/A N/A Y 2743<br>
> NFS Server on gluster1.mgr.example.com 2049 0 Y 3026<br>
> Self-heal Daemon on gluster1.mgr.example.com N/A N/A Y 3034<br>
><br>
> Task Status of Volume volume1<br>
> ------------------------------------------------------------------------------<br>
> There are no active volume tasks<br>
><br>
> root@gluster2:~# gluster peer status<br>
> Number of Peers: 1<br>
><br>
> Hostname: gluster1.mgr.example.com<br>
> Uuid: dff9118b-a2bd-4cd8-b562-0dfdbd2ea8a3<br>
> State: Peer in Cluster (Connected)<br>
> root@gluster2:~#<br>
> root@gluster2:~# mount | grep gluster<br>
> gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)<br>
> root@gluster2:~#<br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</p>