Discussion:
Unexplained NFS mount hangs
Daniel Stickney
2009-04-13 15:24:06 UTC
Permalink
Hi all,

I am investigating some NFS mount hangs that we have started to see over the past month on some of our servers. The behavior is that the client mount hangs and needs to be manually unmounted (forcefully, with 'umount -f') and remounted to make it work. There are about 85 clients mounting a partition over NFS. About 50 of the clients are running Fedora Core 3 with kernel 2.6.11-1.27_FC3smp; not one of these 50 has ever had this mount hang. The other 35 are CentOS 5.2 with kernel 2.6.27, compiled from source. The mount hangs are inconsistent and so far I don't know how to trigger them on demand. The timing of the hangs, as noted by the timestamps in /var/log/messages, varies. Not all of the 35 CentOS clients have their mounts hang at the same time, and the NFS server continues operating apparently normally for all other clients. Normally maybe 5 clients have a mount hang per week, on different days and mostly at different times. Now and then we might see a cluster of a few clients have their mounts hang at the exact same time, but this is not consistent. In /var/log/messages we see:

Apr 12 02:04:12 worker120 kernel: nfs: server broker101 not responding, still trying

One very interesting aspect of this behavior is that the load value on the client with the hung mount immediately spikes to its normal value plus 16.00. We have also seen client load spike to the normal value plus 30.00. These discrete load-value jumps might be a good hint.
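(Aside, not from the original report: on Linux, each process blocked in uninterruptible sleep, state 'D', on the hung mount contributes 1.00 to the load average, which would explain these discrete +16.00 / +30.00 jumps. A rough sketch for listing such tasks, assuming a Linux /proc layout as described in proc(5):)

```python
import os

def d_state_tasks():
    """List (pid, comm) for tasks in uninterruptible sleep (state 'D').

    Each 'D'-state task adds 1.00 to the Linux load average, so a hung
    hard NFS mount with N blocked processes produces a +N.00 load jump.
    """
    tasks = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/stat" % pid) as f:
                data = f.read()
            # Field 2 is "(comm)" and may itself contain spaces and
            # parentheses; field 3 (just after the last ')') is the state.
            comm = data[data.index("(") + 1 : data.rindex(")")]
            state = data[data.rindex(")") + 2 :].split()[0]
            if state == "D":
                tasks.append((int(pid), comm))
        except (OSError, ValueError):
            continue  # process exited mid-scan, or unreadable entry
    return tasks
```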

Running 'df' prints some output and then hangs when it reaches the hung mount point. 'mount -v' shows the mount point as normal. When an NFS server is rebooted, we are used to seeing the client log an "nfs: server ___________ not responding, still trying" message, then an "nfs: server __________ OK" message when it comes back online. With this issue there is never an "OK" message, even though the NFS server is still functioning for all other NFS clients. On a client with a hung NFS mount, running 'rpcinfo -p' and 'showmount -e' against the NFS server shows that RPC and NFS appear to be functioning between client and server even during the issue.


# rpcinfo -p broker101
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100021    1   udp  32779  nlockmgr
    100021    3   udp  32779  nlockmgr
    100021    4   udp  32779  nlockmgr
    100021    1   tcp  60389  nlockmgr
    100021    3   tcp  60389  nlockmgr
    100021    4   tcp  60389  nlockmgr
    100011    1   udp    960  rquotad
    100011    2   udp    960  rquotad
    100011    1   tcp    963  rquotad
    100011    2   tcp    963  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp    995  mountd
    100005    1   tcp    998  mountd
    100005    2   udp    995  mountd
    100005    2   tcp    998  mountd
    100005    3   udp    995  mountd
    100005    3   tcp    998  mountd


# showmount -e broker101
Export list for broker101:
/mnt/sdc1 *
/mnt/sdb1 *
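Because 'df' itself blocks on the hung mount, any automated check has to touch the mount from a separate, killable child process. A minimal sketch of such a probe (my illustration, not part of this report; the path and timeout are placeholders):

```python
import multiprocessing
import os
import queue

def _stat_path(path, q):
    try:
        os.statvfs(path)  # blocks indefinitely on a wedged hard NFS mount
        q.put(True)
    except OSError:
        q.put(False)

def mount_responds(path, timeout=5.0):
    """Return True if statvfs(path) completes within `timeout` seconds.

    The stat runs in a child process so the parent can give up and kill
    it when the mount is hung, instead of blocking in 'D' state itself.
    """
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=_stat_path, args=(path, q))
    p.start()
    p.join(timeout)
    if p.is_alive():          # still blocked: treat the mount as hung
        p.terminate()
        p.join()
        return False
    try:
        return q.get(timeout=1.0)
    except queue.Empty:
        return False
```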


It is confusing that the NFS client doesn't recover automatically. Whatever the issue is, it is evidently blocking the kernel from seeing that the NFS server is live and functioning once the problem is triggered.

I'm running low on ideas of how to resolve this. One idea I have is to modify some NFS client timeout values, but I don't have a specific reason to think this will resolve the problem. Right now the values are:

# sysctl -a | grep -i nfs
fs.nfs.nlm_grace_period = 0
fs.nfs.nlm_timeout = 10
fs.nfs.nlm_udpport = 0
fs.nfs.nlm_tcpport = 0
fs.nfs.nsm_use_hostnames = 0
fs.nfs.nsm_local_state = 0
fs.nfs.nfs_callback_tcpport = 0
fs.nfs.idmap_cache_timeout = 600
fs.nfs.nfs_mountpoint_timeout = 500
fs.nfs.nfs_congestion_kb = 65152
sunrpc.nfs_debug = 0
sunrpc.nfsd_debug = 0


I've turned on NFS debugging, but there was a tremendous amount of output because of the NFS client's activity on several different (and working) NFS mount points. I can capture and supply this output again if it would be helpful. Has anyone seen this behavior before, and does anyone have any suggestions for how this might be resolved?

Thanks for your time,

Daniel Stickney
Operations Manager - Systems and Network Engineer
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo-***@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Chuck Lever
2009-04-13 16:12:47 UTC
Permalink
Post by Daniel Stickney
[...]
Apr 12 02:04:12 worker120 kernel: nfs: server broker101 not
responding, still trying
Are these NFS/UDP or NFS/TCP mounts?

If you use a different kernel (say, 2.6.26) on the CentOS systems, do
the hangs go away?
--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




Rudy Zijlstra
2009-04-13 16:18:57 UTC
Permalink
On Monday 2009-04-13 at 12:12 [timezone -0400], Chuck Lever wrote:
Post by Chuck Lever
Post by Daniel Stickney
[...]
Are these NFS/UDP or NFS/TCP mounts?
If you use a different kernel (say, 2.6.26) on the CentOS systems, do
the hangs go away?
Hi Chuck,

In my case NFS/TCP.

I have tried most 2.6.2x kernels; it may take a week or longer for them
to hang, but hang they do :(

I have been fighting with this one since at least 2.6.24, and probably
2.6.22.

The reader that was hanging last week is running 2.6.26.

Rudy

Chuck Lever
2009-04-13 16:38:33 UTC
Permalink
Post by Rudy Zijlstra
[...]
Hi Chuck,
In my case NFS/TCP.
I have tried most 2.6.2x kernels, it may take a week or longer for them
to hang, but hang they do :(
have been fighting with this one since at least 2.6.24, and probably
2.6.22
The reader that was hanging last week, is running 2.6.26
If you run "netstat --ip" on a client that has a hanging NFS mount
point, what does it show?

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com
Daniel Stickney
2009-04-13 16:47:59 UTC
Permalink
On Mon, 13 Apr 2009 12:12:47 -0400
Post by Chuck Lever
[...]
Are these NFS/UDP or NFS/TCP mounts?
If you use a different kernel (say, 2.6.26) on the CentOS systems, do
the hangs go away?
Hi Chuck,

Thanks for your reply. The mounts are NFSv3 over TCP. We have not tried a different kernel (because of the number of servers that would need to be upgraded), but that is next on the to-do list. I wanted to explore the possibility that some other change might resolve the issue first, but I am getting close to launching the kernel upgrades. (The prepackaged RHEL/CentOS 2.6.18* kernels have other NFS client problems with attribute caching which really mess things up, which is why we have had to compile from source.)

To add a little more info, in a post on April 10th titled "NFSv3 Client Timeout on 2.6.27" Bryan mentioned that his client socket was in state FIN_WAIT2, and server in CLOSE_WAIT, which is exactly what I am seeing here.

tcp 0 0 worker120.cluster:944 broker101.cluster:nfs FIN_WAIT2

This is especially interesting because the original nfs "server not responding" message was about 32 hours ago. On this same client, all other NFS mounts to other servers are showing state "established".
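A quick way to flag these wedged sockets across a fleet is to scan netstat output for NFS connections that are not ESTABLISHED. A small sketch (my illustration; it assumes the classic six-column netstat TCP line layout):

```python
def stuck_nfs_sockets(netstat_output):
    """Return (local, remote, state) for NFS TCP connections not ESTABLISHED.

    Matches remote endpoints shown as ':nfs' (resolved service name) or
    ':2049' (numeric), which is where a lingering FIN_WAIT2 socket like
    the one above would show up.
    """
    stuck = []
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) < 6 or fields[0] != "tcp":
            continue
        local, remote, state = fields[3], fields[4], fields[5]
        if (remote.endswith(":nfs") or remote.endswith(":2049")) \
                and state != "ESTABLISHED":
            stuck.append((local, remote, state))
    return stuck
```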

-Daniel
Daniel Stickney
Operations Manager - Systems and Network Engineer
Chuck Lever
2009-04-13 17:08:14 UTC
Permalink
Post by Daniel Stickney
[...]
To add a little more info, in a post on April 10th titled "NFSv3
Client Timeout on 2.6.27" Bryan mentioned that his client socket was
in state FIN_WAIT2, and server in CLOSE_WAIT, which is exactly what
I am seeing here.
tcp 0 0 worker120.cluster:944
broker101.cluster:nfs FIN_WAIT2
This is especially interesting because the original nfs "server not
responding" message was about 32 hours ago. On this same client, all
other NFS mounts to other servers are showing state "established".
Poking around in git, I see this recent commit:

commit 2a9e1cfa23fb62da37739af81127dab5af095d99
Author: Trond Myklebust <Trond.Myklebust-HgOvQuBEEgTQT0dZR+***@public.gmane.org>
Date:   Tue Oct 28 15:21:39 2008 -0400

    SUNRPC: Respond promptly to server TCP resets

    If the server sends us an RST error while we're in the TCP_ESTABLISHED
    state, then that will not result in a state change, and so the RPC client
    ends up hanging forever (see
    http://bugzilla.kernel.org/show_bug.cgi?id=11154)

    We can intercept the reset by setting up an sk->sk_error_report callback,
    which will then allow us to initiate a proper shutdown and retry...

    We also make sure that if the send request receives an ECONNRESET,
    then we shutdown too...

    Signed-off-by: Trond Myklebust <Trond.Myklebust-HgOvQuBEEgTQT0dZR+***@public.gmane.org>

Which may address part of the problem. If I'm reading the output of
"git describe" correctly, this one should be in 2.6.28.

There are a whole series of commits in this area that went upstream
about a month ago. It's not clear if these are also necessary to
address the problem. But they would be in 2.6.30-rc1.
--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




Rudy Zijlstra
2009-04-13 19:25:07 UTC
Permalink
On Monday 2009-04-13 at 13:08 [timezone -0400], Chuck Lever wrote:
Post by Chuck Lever
[...]
commit 2a9e1cfa23fb62da37739af81127dab5af095d99
Date: Tue Oct 28 15:21:39 2008 -0400
SUNRPC: Respond promptly to server TCP resets
If the server sends us an RST error while we're in the TCP_ESTABLISHED
state, then that will not result in a state change, and so the RPC client
ends up hanging forever (see
http://bugzilla.kernel.org/show_bug.cgi?id=11154)
We can intercept the reset by setting up an sk->sk_error_report callback,
which will then allow us to initiate a proper shutdown and retry...
We also make sure that if the send request receives an ECONNRESET,
then we shutdown too...
Which may address part of the problem. If I'm reading the output of
"git describe" correctly, this one should be in 2.6.28.
Chuck,

thanks a lot for this.

I'm 90% certain I've had a hang on 2.6.28.2 (currently running 2.6.28.7
since March 14th on the writing clients).

Too many HW-related problems in the past weeks to be fully certain (and
I've been travelling too much, and thus not at home).
Post by Chuck Lever
There are a whole series of commits in this area that went upstream
about a month ago. It's not clear if these are also necessary to
address the problem. But they would be in 2.6.30-rc1.
OK, I'll switch to 2.6.30 on all clients once it is out. I prefer to wait
for the release, as they are production-type machines.

If I get a hang, I'll check with "netstat --ip".

Cheers,

Rudy


Rudy Zijlstra
2009-04-14 09:16:23 UTC
Permalink
On Monday 2009-04-13 at 21:25 [timezone +0200], Rudy Zijlstra wrote:
Post by Rudy Zijlstra
[...]
OK, i'll switch to 2.6.30 on all clients once it is out. Prefer to wait
for release, as they are production type machines.
If i get a hang, i'll check with "netstat --ip"
Just now one of my 2.6.28.7 machines is hanging.
netstat on the client shows:
tcp        0      0 mythm.romunt.nl:1020    repeater.romunt.nl:nfsd FIN_WAIT2
tcp       76      0 mythm.romunt.nl:6544    repeater.romunt.n:53854 ESTABLISHED


and on the server I find:
tcp        1      0 repeater.romunt.nl:nfsd mythm.romunt.nl:1020    CLOSE_WAIT
tcp        0      0 repeater.romunt.n:53854 mythm.romunt.nl:6544    FIN_WAIT2


Cheers,

Rudy

Trond Myklebust
2009-04-14 12:31:26 UTC
Permalink
Post by Rudy Zijlstra
[...]
Just now one of my 2.6.28.7 machines is hanging.
tcp 0 0 mythm.romunt.nl:1020 repeater.romunt.nl:nfsd FIN_WAIT2
tcp 76 0 mythm.romunt.nl:6544 repeater.romunt.n:53854 ESTABLISHED
tcp 1 0 repeater.romunt.nl:nfsd mythm.romunt.nl:1020 CLOSE_WAIT
tcp 0 0 repeater.romunt.n:53854 mythm.romunt.nl:6544 FIN_WAIT2
Which shows that the NFS server is failing to close the tcp connection
after the client has closed on its side.
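The pairing Trond points to can be filtered out of netstat mechanically. A minimal sketch, assuming the default `netstat -tan` column layout (local address in field 4, state in field 6); the sample lines are modeled on the server-side output quoted above:

```shell
# Sample lines modeled on the server-side netstat output in this thread.
sample='tcp 1 0 repeater.romunt.nl:nfsd mythm.romunt.nl:1020 CLOSE_WAIT
tcp 0 0 repeater.romunt.n:53854 mythm.romunt.nl:6544 FIN_WAIT2'
# Keep only nfsd connections the server has failed to close (CLOSE_WAIT):
printf '%s\n' "$sample" | awk '$4 ~ /:nfsd$/ && $6 == "CLOSE_WAIT"'
```

On a live server the same filter would be fed from `netstat -tan` directly; a connection that sits in CLOSE_WAIT indefinitely means the server received the client's FIN but never called close() on its side.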

You probably want to apply this patch to your server:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git&a=commitdiff&h=69b6ba3712b796a66595cfaf0a5ab4dfe1cf964a


Trond

Rudy Zijlstra
2009-04-14 12:37:36 UTC
Permalink
On Tuesday 14-04-2009 at 08:31 [timezone -0400], Trond wrote:
Post by Trond Myklebust
Which shows that the NFS server is failing to close the tcp connection
after the client has closed on its side.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git&a=commitdiff&h=69b6ba3712b796a66595cfaf0a5ab4dfe1cf964a
Trond
Hi Trond

Thanks, would an upgrade to 2.6.29.1 also work?

Thanks,

Rudy

Trond Myklebust
2009-04-14 12:40:45 UTC
Permalink
Post by Rudy Zijlstra
Hi Trond
Thanks, would an upgrade to 2.6.29.1 also work?
Yes. That same patch should also be in 2.6.29.

Cheers
Trond

Daniel Stickney
2009-04-13 20:35:49 UTC
Permalink
On Mon, 13 Apr 2009 13:08:14 -0400
Post by Chuck Lever
Are these NFS/UDP or NFS/TCP mounts?
If you use a different kernel (say, 2.6.26) on the CentOS systems,
do the hangs go away?
Hi Chuck,
Thanks for your reply. The mounts are NFSv3 over TCP. We have not
tried a different kernel (because of the number of servers to be
upgraded), but that is next on the ToDo list. Wanted to explore the
possibility that some other change might resolve the issue, but I
am getting close to launching the kernel upgrades. (The
prepackaged RHEL/CentOS 2.6.18* kernels have other NFS client
problems with attribute caching which really mess things up, so
that is why we have had to compile from source)
To add a little more info, in a post on April 10th titled "NFSv3
Client Timeout on 2.6.27" Bryan mentioned that his client socket
was in state FIN_WAIT2, and server in CLOSE_WAIT, which is exactly
what I am seeing here.
tcp 0 0 worker120.cluster:944 broker101.cluster:nfs FIN_WAIT2
This is especially interesting because the original nfs "server
not responding" message was about 32 hours ago. On this same
client, all other NFS mounts to other servers are showing state
"established".
commit 2a9e1cfa23fb62da37739af81127dab5af095d99
Date: Tue Oct 28 15:21:39 2008 -0400

SUNRPC: Respond promptly to server TCP resets

If the server sends us an RST error while we're in the TCP_ESTABLISHED
state, then that will not result in a state change, and so the RPC client
ends up hanging forever (see
http://bugzilla.kernel.org/show_bug.cgi?id=11154)

We can intercept the reset by setting up an sk->sk_error_report callback,
which will then allow us to initiate a proper shutdown and retry...

We also make sure that if the send request receives an ECONNRESET, then we
shutdown too...
Which may address part of the problem. If I'm reading the output of
"git describe" correctly, this one should be in 2.6.28.
There are a whole series of commits in this area that went upstream
about a month ago. It's not clear if these are also necessary to
address the problem. But they would be in 2.6.30-rc1.
Thanks Chuck. Reading through the bug reports, I also now believe that a
kernel upgrade is the most likely resolution to this problem. I
appreciate your time!

-Daniel
Post by Daniel Stickney
One very interesting aspect of this behavior is that the load
value on the client with the hung mount immediately spikes to
(16.00)+ (normal load value). We have also seen client load
spikes to (30.00)+
(normal load value). These discrete load value increases might be
a good hint.
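One plausible reading of those discrete jumps (an assumption, not something established in this thread): each task blocked in uninterruptible sleep ("D" state, the usual state for a process stuck waiting on NFS I/O) contributes 1.00 to the Linux load average, so a +16.00 jump would correspond to 16 stuck tasks. A sketch of counting them from ps-style output; the sample lines here are made up:

```shell
# Made-up sample; on a real client this would come from: ps -eo pid,stat,comm
sample='PID STAT COMMAND
2101 D    df
2102 D    du
2103 Ss   bash'
# Count tasks in uninterruptible sleep ("D"); each adds 1.00 to the load.
printf '%s\n' "$sample" | awk 'NR > 1 && $2 ~ /^D/ { n++ } END { print n+0 }'
```

If the count matches the size of the load spike, the spike is just an artifact of processes piling up on the hung mount rather than real CPU work.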
Running 'df' prints some output and then hangs when it reaches the
hung mount point. 'mount -v' shows the mount point like normal.
When an NFS server is rebooted, we are used to seeing the client
log a "nfs: server ___________ not responding, still trying",
then a "nfs: server __________ OK" message when it comes back
online. With this issue there is never an "OK" message even
though the NFS server is still functioning for all other NFS
clients. On a client which has a hung NFS mount, running 'rpcinfo
-p' and 'showmount -e' against the NFS server shows that RPC and
NFS appear to be functioning between client and server even
during the issue.
# rpcinfo -p broker101
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100021 1 udp 32779 nlockmgr
100021 3 udp 32779 nlockmgr
100021 4 udp 32779 nlockmgr
100021 1 tcp 60389 nlockmgr
100021 3 tcp 60389 nlockmgr
100021 4 tcp 60389 nlockmgr
100011 1 udp 960 rquotad
100011 2 udp 960 rquotad
100011 1 tcp 963 rquotad
100011 2 tcp 963 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100005 1 udp 995 mountd
100005 1 tcp 998 mountd
100005 2 udp 995 mountd
100005 2 tcp 998 mountd
100005 3 udp 995 mountd
100005 3 tcp 998 mountd
# showmount -e broker101
/mnt/sdc1 *
/mnt/sdb1 *
It is confusing that the NFS client doesn't recover automatically.
Whatever the issue is, it is evidently blocking the kernel from
seeing that the NFS server is live and functioning after the issue
is triggered.
I'm running low on ideas of how to resolve this. One idea I have
is to modify some NFS client timeout values, but I don't have a
specific reason to think this will resolve the problem. Right now:
# sysctl -a | grep -i nfs
fs.nfs.nlm_grace_period = 0
fs.nfs.nlm_timeout = 10
fs.nfs.nlm_udpport = 0
fs.nfs.nlm_tcpport = 0
fs.nfs.nsm_use_hostnames = 0
fs.nfs.nsm_local_state = 0
fs.nfs.nfs_callback_tcpport = 0
fs.nfs.idmap_cache_timeout = 600
fs.nfs.nfs_mountpoint_timeout = 500
fs.nfs.nfs_congestion_kb = 65152
sunrpc.nfs_debug = 0
sunrpc.nfsd_debug = 0
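For what it's worth, the RPC retransmit behaviour behind the "not responding" message is governed by per-mount options (timeo, in tenths of a second, and retrans) rather than by these sysctls. A hypothetical fstab line, with illustrative values only and not a known fix for this hang:

```
broker101:/mnt/sdb1  /mnt/broker101  nfs  tcp,hard,intr,timeo=600,retrans=2  0  0
```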
I've turned on NFS debugging but there was a tremendous amount of
output because of the NFS client's activity on several different
(and working) NFS mount points. I can capture and supply this
output again if it would be helpful. Has anyone seen this
behavior before, and does anyone have any suggestions for how
it might be resolved?
Thanks for your time,
Daniel Stickney
Operations Manager - Systems and Network Engineer
Bryan McLellan
2009-04-13 23:11:35 UTC
Permalink
Post by Daniel Stickney
To add a little more info, in a post on April 10th titled "NFSv3 Client Timeout on 2.6.27" Bryan mentioned that his client socket was in state FIN_WAIT2, and server in CLOSE_WAIT, which is exactly what I am seeing here.
Since my problems originated after upgrading to Ubuntu intrepid in a
'etch -> hardy -> intrepid' cycle, and hardy contained 2.6.24, I
wonder if the regression was in:

commit e06799f958bf7f9f8fae15f0c6f519953fb0257c
Author: Trond Myklebust <Trond.Myklebust-HgOvQuBEEgTQT0dZR+***@public.gmane.org>
Date: Mon Nov 5 15:44:12 2007 -0500

SUNRPC: Use shutdown() instead of close() when disconnecting a TCP socket

By using shutdown() rather than close() we allow the RPC client to wait
for the TCP close handshake to complete before we start trying to reconnect
using the same port.
We use shutdown(SHUT_WR) only instead of shutting down both directions,
however we wait until the server has closed the connection on its side.

Signed-off-by: Trond Myklebust <Trond.Myklebust-HgOvQuBEEgTQT0dZR+***@public.gmane.org>

$ git describe e06799f958bf7f9f8fae15f0c6f519953fb0257c --contains
v2.6.25-rc1~1146^2~105

I came in today to find that the one machine outside of production
that was hung that I could toy with eventually fixed itself, albeit
five days later.

Apr 8 12:42:34 bvt-was02 kernel: [3706362.490101] nfs: server
file01.prod.example.com not responding, still trying
Apr 13 12:09:59 bvt-was02 kernel: [4136407.174292] nfs: server
file01.prod.example.com OK
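Those two kernel timestamps can be checked against the "five days" figure. A small sketch; the sed pattern assumes the exact bracketed-uptime log format shown, and the sample is just the two lines above:

```shell
log='Apr  8 12:42:34 bvt-was02 kernel: [3706362.490101] nfs: server file01.prod.example.com not responding, still trying
Apr 13 12:09:59 bvt-was02 kernel: [4136407.174292] nfs: server file01.prod.example.com OK'
# Pull out the kernel uptime stamps; their difference is the hang length.
printf '%s\n' "$log" | sed -n 's/.*\[\([0-9.]*\)\].*/\1/p'
# 4136407 - 3706362 = 430045 seconds, i.e. just under 5 days:
echo $(( (4136407 - 3706362) / 86400 ))   # whole days -> 4
```

So the hang really did last just short of five full days before the client recovered on its own.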

It looks like there are a lot of additional timeouts added in
2.6.30-rc1, so perhaps I'll compile from source and wait to see if
this happens again on the test machines.

Bryan
Kasparek Tomas
2009-04-14 13:34:23 UTC
Permalink
Post by Bryan McLellan
Since my problems originated after upgrading to Ubuntu intrepid in a
'etch -> hardy -> intrepid' cycle, and hardy contained 2.6.24, I
commit e06799f958bf7f9f8fae15f0c6f519953fb0257c
Date: Mon Nov 5 15:44:12 2007 -0500
SUNRPC: Use shutdown() instead of close() when disconnecting a TCP socket
By using shutdown() rather than close() we allow the RPC client to wait
for the TCP close handshake to complete before we start trying to reconnect
using the same port.
We use shutdown(SHUT_WR) only instead of shutting down both directions,
however we wait until the server has closed the connection on its side.
Hi,

Probably yes; I have several other problems with this. See my previous post
about lockd issues and the DoS on the server by repeating clients. The latest
info is that Trond knows about the DoS problem, and when that is solved I will
try to do everything I can to solve the lockd problem. I need to stay with
2.6.27, so it is crucial for me to solve all of these.

Bye

--

Tomas Kasparek, PhD student E-mail: kasparek-***@public.gmane.org
CVT FIT VUT Brno, L127 Web: http://www.fit.vutbr.cz/~kasparek
Bozetechova 1, 612 66 Fax: +420 54114-1270
Brno, Czech Republic Phone: +420 54114-1220

jabber: tomas.kasparek-***@public.gmane.org
GPG: 2F1E 1AAF FD3B CFA3 1537 63BD DCBE 18FF A035 53BC

Rudy Zijlstra
2009-04-13 16:15:25 UTC
Permalink
Hi Daniel,

On Monday 13-04-2009 at 09:24 [timezone -0600], Daniel wrote:
Post by Daniel Stickney
Apr 12 02:04:12 worker120 kernel: nfs: server broker101 not responding, still trying
This matches very well with my own experience.

For a long time I was thinking this was write-related, but recently I
had a hang on a reading client.

My application is streaming video, and I can have about 40Mbps hitting
the file server while reading about 16Mbps at the same time.

The reading clients only read, they do not write. Most of the hangs I
see are from the writers, and now once from a reader.

I have tried several recent kernels, and never been able to find a
relation to the kernel on the file server. From my experiments, all
2.6.2x kernels are affected. I cannot reproduce at will though; it is
a matter of waiting till it happens.

Thanks,


Rudy


