<div dir="ltr"><span style="font-size:12.8px">i update from 3.7.6 to 3.7.8</span><div style="font-size:12.8px">I am on Centos 7.2</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px"><div>[root@compute1 ~]# gluster volume status</div><div>Status of volume: vol_cinder</div><div>Gluster process                             TCP Port  RDMA Port  Online  Pid</div><div>------------------------------------------------------------------------------</div><div>Brick 172.16.10.2:/glusterfs-cinder         49156     0          Y       2515 </div><div>Brick 172.16.10.3:/glusterfs-cinder         49156     0          Y       2235 </div><div>NFS Server on localhost                     2049      0          Y       2492 </div><div>Self-heal Daemon on localhost               N/A       N/A        Y       2497 </div><div>NFS Server on compute2                      2049      0          Y       2224 </div><div>Self-heal Daemon on compute2                N/A       N/A        Y       2264 </div><div> </div><div>Task Status of Volume vol_cinder</div><div>------------------------------------------------------------------------------</div><div>There are no active volume tasks</div><div> </div><div>Status of volume: vol_glances</div><div>Gluster process                             TCP Port  RDMA Port  Online  Pid</div><div>------------------------------------------------------------------------------</div><div>Brick 172.16.10.2:/glusterfs-glances        49153     0          Y       2521 </div><div>Brick 172.16.10.3:/glusterfs-glances        49153     0          Y       2236 </div><div>NFS Server on localhost                     2049      0          Y       2492 </div><div>Self-heal Daemon on localhost               N/A       N/A        Y       2497 </div><div>NFS Server on compute2                      2049      0          Y       2224 </div><div>Self-heal Daemon on compute2                N/A       N/A        Y       2264 </div><div> </div><div>Task Status of Volume vol_glances</div><div>------------------------------------------------------------------------------</div><div>There are no active volume tasks</div><div> </div><div>Status of volume: vol_instances</div><div>Gluster process                             TCP Port  RDMA Port  Online  Pid</div><div>------------------------------------------------------------------------------</div><div>Brick 172.16.10.2:/glusterfs-instances      49152     0          Y       2507 </div><div>Brick 172.16.10.3:/glusterfs-instances      49152     0          Y       2265 </div><div>NFS Server on localhost                     2049      0          Y       2492 </div><div>Self-heal Daemon on localhost               N/A       N/A        Y       2497 </div><div>NFS Server on compute2                      2049      0          Y       2224 </div><div>Self-heal Daemon on compute2                N/A       N/A        Y       2264 </div><div> </div><div>Task Status of Volume vol_instances</div><div>------------------------------------------------------------------------------</div><div>There are no active volume tasks</div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-02-16 9:31 GMT+00:00 Niels de Vos <span dir="ltr">&lt;<a href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div>On Tue, Feb 16, 2016 at 08:35:05AM +0000, ousmane sanogo wrote:<br>
> > Hello, I updated my gluster nodes yesterday.
> > I am using OpenStack Cinder with Gluster,
> > and I got these warnings after the update:
> >
> > warning: /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.2.glusterfs-cinder.vol saved as /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.2.glusterfs-cinder.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_cinder/vol_cinder.tcp-fuse.vol saved as /var/lib/glusterd/vols/vol_cinder/vol_cinder.tcp-fuse.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.3.glusterfs-cinder.vol saved as /var/lib/glusterd/vols/vol_cinder/vol_cinder.172.16.10.3.glusterfs-cinder.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_cinder/trusted-vol_cinder.tcp-fuse.vol saved as /var/lib/glusterd/vols/vol_cinder/trusted-vol_cinder.tcp-fuse.vol.rpmsave
> > warning: /var/lib/glusterd/vols/vol_instances/vol_instances.172.16.10.2.glusterfs-instances.vol saved as /var/lib/glusterd/vols/vol_instances/vol_instances.172.16.10.2.glusterfs-instances.vol.rpmsave
> > Warning: glusterd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
> >
> > I ran "systemctl daemon-reload" on the nodes.
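
Those .rpmsave warnings are standard RPM behaviour: the pre-upgrade volfiles
were saved aside before being replaced, and glusterd normally regenerates its
volfiles itself, so they are usually harmless. To see what the upgrade
actually changed, a quick sketch using the paths from the warnings above:

    # Compare each saved pre-upgrade volfile against its current replacement:
    for f in /var/lib/glusterd/vols/*/*.rpmsave; do
        diff -u "$f" "${f%.rpmsave}"    # old (saved) vs. current volfile
    done
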
> > On node 1:
> > [root@compute1 ~]# tail /var/log/glusterfs/var-lib-nova-mnt-7e2fea33428149438b876dd122157f27.log -f
> > [2016-02-15 19:56:44.459473] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vol_cinder-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> > [2016-02-15 19:56:44.459644] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vol_cinder-client-0: Connected to vol_cinder-client-0, attached to remote volume '/glusterfs-cinder'.
> > [2016-02-15 19:56:44.459658] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vol_cinder-client-0: Server and Client lk-version numbers are not same, reopening the fds
> > [2016-02-15 19:56:44.459666] I [MSGID: 114042] [client-handshake.c:1056:client_post_handshake] 0-vol_cinder-client-0: 1 fds open - Delaying child_up until they are re-opened
> > [2016-02-15 19:56:44.459882] I [MSGID: 114041] [client-handshake.c:678:client_child_up_reopen_done] 0-vol_cinder-client-0: last fd open'd/lock-self-heal'd - notifying CHILD-UP
> > [2016-02-15 19:56:44.459944] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vol_cinder-client-0: Server lk version = 1
> > [2016-02-15 19:57:32.625556] I [MSGID: 108031] [afr-common.c:1782:afr_local_discovery_cbk] 0-vol_cinder-replicate-0: selecting local read_child vol_cinder-client-0
> > [2016-02-15 20:08:23.876756] I [fuse-bridge.c:4984:fuse_thread_proc] 0-fuse: unmounting /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27
> > [2016-02-15 20:08:23.918397] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7fee5b957dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fee5cfc2855] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7fee5cfc26d9] ) 0-: received signum (15), shutting down
> > [2016-02-15 20:08:23.918424] I [fuse-bridge.c:5683:fini] 0-fuse: Unmounting '/var/lib/nova/mnt/7e2fea33428149438b876dd122157f27'.
> >
> > [root@compute1 ~]# tail /var/log/glusterfs/var-lib-nova-instances.log -f
> > [2016-02-16 08:32:32.292641] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 62606120: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-16 08:32:32.292700] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 62606122: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-16 08:32:32.292756] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 62606124: WRITE => -1 (Transport endpoint is not connected)
> >
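
Those WRITE => -1 (ENOTCONN) failures mean the FUSE client for vol_instances
has lost its connections to the bricks. Two quick checks, as a sketch:

    # Is the glusterfs FUSE mount still present?
    grep fuse.glusterfs /proc/mounts
    # Do the bricks still see any connected clients?
    gluster volume status vol_instances clients
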
> > On node 2:
> >
> > [root@compute2 ~]# tail /var/log/glusterfs/var-lib-nova-mnt-7e2fea33428149438b876dd122157f27.log -f
> > [2016-02-15 19:56:47.042442] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 15109968: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-15 19:56:47.047263] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 15109970: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-15 19:56:47.047339] W [fuse-bridge.c:2292:fuse_writev_cbk] 0-glusterfs-fuse: 15109972: WRITE => -1 (Transport endpoint is not connected)
> > [2016-02-15 19:57:03.118138] I [MSGID: 108031] [afr-common.c:1782:afr_local_discovery_cbk] 0-vol_cinder-replicate-0: selecting local read_child vol_cinder-client-1
> > [2016-02-15 20:07:19.303007] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 15109995: FSYNC() ERR => -1 (Transport endpoint is not connected)
> > [2016-02-15 20:07:19.318493] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 15109996: FSYNC() ERR => -1 (Transport endpoint is not connected)
> > [2016-02-15 20:07:19.318601] W [fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 15109997: FLUSH() ERR => -1 (Transport endpoint is not connected)
> > [2016-02-15 20:07:20.264111] I [fuse-bridge.c:4984:fuse_thread_proc] 0-fuse: unmounting /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27
> > [2016-02-15 20:07:20.264361] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f853d29fdc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f853e90a855] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f853e90a6d9] ) 0-: received signum (15), shutting down
> > [2016-02-15 20:07:20.264381] I [fuse-bridge.c:5683:fini] 0-fuse: Unmounting '/var/lib/nova/mnt/7e2fea33428149438b876dd122157f27'.
> >
> > I restarted glusterd and glusterfsd, but I cannot mount /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27 again.
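
One way to get more detail on the failing mount is to retry it by hand with a
debug log. A sketch, assuming the vol_cinder share and mount point from the
logs above (the log file name here is only an example):

    mount -t glusterfs \
        -o log-level=DEBUG,log-file=/var/log/glusterfs/manual-mount.log \
        172.16.10.2:/vol_cinder /var/lib/nova/mnt/7e2fea33428149438b876dd122157f27
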
> Some things that are missing:
> - What version of glusterfs packages were upgraded to what version?
> - Which OS/distribution?
> - Does 'gluster volume status' show any missing processes?
>
> If not all brick processes are running, you probably should check the
> logs of those processes (/var/log/glusterfs/bricks/*.log).
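
One way to scan those brick logs, as a rough sketch:

    # Show recent warning/error records from all brick logs:
    grep -hE ' [EW] ' /var/log/glusterfs/bricks/*.log | tail -n 50
    # Confirm the brick processes (glusterfsd) are actually running:
    pgrep -af glusterfsd
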
>
> The messages that you posted suggest that some (or all?) of the bricks
> are not reachable. This causes the mounting to fail.
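
Since the 'gluster volume status' output at the top shows every brick online,
it may be client-side connectivity instead. A simple reachability check from
the client (brick ports taken from the status output; 24007 is glusterd's
well-known port):

    for port in 24007 49152 49153 49156; do
        timeout 2 bash -c "echo > /dev/tcp/172.16.10.2/$port" \
            && echo "tcp/$port reachable" || echo "tcp/$port NOT reachable"
    done
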
>
> HTH,
> Niels