Discussion:
[PATCH v2 0/2] Fix NFS client state recovery
Trond Myklebust
2014-09-28 03:54:56 UTC
- Client should not declare unreclaimed locks to be stale just because
SETCLIENTID_CONFIRM or CREATE_SESSION fail with NFS4ERR_STALE_CLIENTID
- Fix open/lock state recovery error handling

Trond Myklebust (2):
NFSv4: Fix lock recovery when CREATE_SESSION/SETCLIENTID_CONFIRM fails
NFSv4: fix open/lock state recovery error handling

fs/nfs/nfs4state.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
--
1.9.3

Trond Myklebust
2014-09-28 03:54:57 UTC
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.

RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.

Cc: stable-***@public.gmane.org
Signed-off-by: Trond Myklebust <trond.myklebust-7I+n7zu2hftEKMMhf/***@public.gmane.org>
---
fs/nfs/nfs4state.c | 1 -
1 file changed, 1 deletion(-)

diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 22fe35104c0c..26d510d11efd 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
break;
case -NFS4ERR_STALE_CLIENTID:
clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
break;
case -NFS4ERR_CLID_INUSE:
--
1.9.3
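
For anyone skimming the thread, this is what the NFS4ERR_STALE_CLIENTID case
in nfs4_handle_reclaim_lease_error() looks like with the patch applied. It is
reconstructed from the hunk above; the other cases and the function's return
value are elided:

static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
{
	switch (status) {
	/* ... other error cases elided ... */
	case -NFS4ERR_STALE_CLIENTID:
		/* The lease was not confirmed, so stop waiting on it ... */
		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
		/*
		 * ... but leave the per-state reboot-reclaim markers alone and
		 * simply restart reboot recovery.  Anything that turns out to
		 * be unreclaimable should get NFS4ERR_NO_GRACE back from the
		 * server and fall back to nograce recovery at that point.
		 */
		nfs4_state_start_reclaim_reboot(clp);
		break;
	/* ... */
	}
	/* ... */
}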

Trond Myklebust
2014-09-28 03:54:58 UTC
The current open/lock state recovery unfortunately does not handle errors
such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping,
it just proceeds as if the state manager is finished recovering.
This patch ensures that we loop back, handle higher priority errors
and complete the open/lock state recovery.

Cc: stable-***@public.gmane.org
Signed-off-by: Trond Myklebust <trond.myklebust-7I+n7zu2hftEKMMhf/***@public.gmane.org>
---
fs/nfs/nfs4state.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 26d510d11efd..87b2d0e79797 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1705,7 +1705,8 @@ restart:
if (status < 0) {
set_bit(ops->owner_flag_bit, &sp->so_flags);
nfs4_put_state_owner(sp);
- return nfs4_recovery_handle_error(clp, status);
+ status = nfs4_recovery_handle_error(clp, status);
+ return (status != 0) ? status : -EAGAIN;
}

nfs4_put_state_owner(sp);
@@ -1714,7 +1715,7 @@ restart:
spin_unlock(&clp->cl_lock);
}
rcu_read_unlock();
- return status;
+ return 0;
}

static int nfs4_check_lease(struct nfs_client *clp)
@@ -2365,14 +2366,11 @@ static void nfs4_state_manager(struct nfs_client *clp)
section = "reclaim reboot";
status = nfs4_do_reclaim(clp,
clp->cl_mvops->reboot_recovery_ops);
- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
- continue;
- nfs4_state_end_reclaim_reboot(clp);
- if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
+ if (status == -EAGAIN)
continue;
if (status < 0)
goto out_error;
+ nfs4_state_end_reclaim_reboot(clp);
}

/* Now recover expired state... */
@@ -2380,9 +2378,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
section = "reclaim nograce";
status = nfs4_do_reclaim(clp,
clp->cl_mvops->nograce_recovery_ops);
- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
- test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
+ if (status == -EAGAIN)
continue;
if (status < 0)
goto out_error;
--
1.9.3
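
To make the new control flow easier to follow, here is a rough sketch pieced
together from the hunks above (everything else elided): nfs4_do_reclaim() now
reports "recovery was interrupted, please loop" as -EAGAIN, and
nfs4_state_manager() keys off that single value instead of re-testing the
individual NFS4CLNT_* state bits after each reclaim pass:

/* tail of the nfs4_do_reclaim() error path */
if (status < 0) {
	set_bit(ops->owner_flag_bit, &sp->so_flags);
	nfs4_put_state_owner(sp);
	status = nfs4_recovery_handle_error(clp, status);
	/*
	 * 0 means the error was translated into new recovery work,
	 * so ask the state manager to come back around.
	 */
	return (status != 0) ? status : -EAGAIN;
}

/* reboot-reclaim section of nfs4_state_manager() */
section = "reclaim reboot";
status = nfs4_do_reclaim(clp, clp->cl_mvops->reboot_recovery_ops);
if (status == -EAGAIN)
	continue;	/* higher-priority recovery was scheduled; loop back */
if (status < 0)
	goto out_error;
nfs4_state_end_reclaim_reboot(clp);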

Anna Schumaker
2014-10-01 13:57:00 UTC
Post by Trond Myklebust
The current open/lock state recovery unfortunately does not handle errors
such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping,
it just proceeds as if the state manager is finished recovering.
This patch ensures that we loop back, handle higher priority errors
and complete the open/lock state recovery.
---
fs/nfs/nfs4state.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 26d510d11efd..87b2d0e79797 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
if (status < 0) {
set_bit(ops->owner_flag_bit, &sp->so_flags);
nfs4_put_state_owner(sp);
- return nfs4_recovery_handle_error(clp, status);
+ status = nfs4_recovery_handle_error(clp, status);
+ return (status != 0) ? status : -EAGAIN;
Why is the ternary better than doing something like this?

if (status == 0)
return -EAGAIN;
return status;

Anna
Post by Trond Myklebust
}
nfs4_put_state_owner(sp);
spin_unlock(&clp->cl_lock);
}
rcu_read_unlock();
- return status;
+ return 0;
}
static int nfs4_check_lease(struct nfs_client *clp)
@@ -2365,14 +2366,11 @@ static void nfs4_state_manager(struct nfs_client *clp)
section = "reclaim reboot";
status = nfs4_do_reclaim(clp,
clp->cl_mvops->reboot_recovery_ops);
- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
- continue;
- nfs4_state_end_reclaim_reboot(clp);
- if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
+ if (status == -EAGAIN)
continue;
if (status < 0)
goto out_error;
+ nfs4_state_end_reclaim_reboot(clp);
}
/* Now recover expired state... */
@@ -2380,9 +2378,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
section = "reclaim nograce";
status = nfs4_do_reclaim(clp,
clp->cl_mvops->nograce_recovery_ops);
- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
- test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
+ if (status == -EAGAIN)
continue;
if (status < 0)
goto out_error;
Trond Myklebust
2014-10-01 18:48:57 UTC
On Wed, Oct 1, 2014 at 9:57 AM, Anna Schumaker
Post by Anna Schumaker
Post by Trond Myklebust
The current open/lock state recovery unfortunately does not handle errors
such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping,
it just proceeds as if the state manager is finished recovering.
This patch ensures that we loop back, handle higher priority errors
and complete the open/lock state recovery.
---
fs/nfs/nfs4state.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 26d510d11efd..87b2d0e79797 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
if (status < 0) {
set_bit(ops->owner_flag_bit, &sp->so_flags);
nfs4_put_state_owner(sp);
- return nfs4_recovery_handle_error(clp, status);
+ status = nfs4_recovery_handle_error(clp, status);
+ return (status != 0) ? status : -EAGAIN;
Why is the ternary better than doing something like this?
if (status == 0)
return -EAGAIN;
return status;
It is 2 lines shorter, yet just as legible in this situation.
--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

trond.myklebust-7I+n7zu2hftEKMMhf/***@public.gmane.org
Anna Schumaker
2014-10-01 13:49:48 UTC
Post by Trond Myklebust
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.
RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.
Has our handling of NFS4ERR_NO_GRACE been tested in this situation?

Anna
Post by Trond Myklebust
---
fs/nfs/nfs4state.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 22fe35104c0c..26d510d11efd 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
break;
clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
break;
Trond Myklebust
2014-10-01 18:38:00 UTC
On Wed, Oct 1, 2014 at 9:49 AM, Anna Schumaker
Post by Anna Schumaker
Post by Trond Myklebust
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.
RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.
Has our handling of NFS4ERR_NO_GRACE been tested in this situation?
nfs4_reclaim_open_state() will handle it by calling
nfs4_state_mark_reclaim_nograce(), which clears the flags enabling
reboot recovery, and sets the NFS_STATE_RECLAIM_NOGRACE flag to force
state reclaim using standard OPEN and LOCK.
--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

trond.myklebust-7I+n7zu2hftEKMMhf/***@public.gmane.org
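
To illustrate the fallback described above, here is a loose sketch of what
"mark this state for nograce recovery" amounts to. Only NFS_STATE_RECLAIM_NOGRACE
and NFS4CLNT_RECLAIM_NOGRACE are taken from the discussion and the patches; the
helper name and the NFS_STATE_RECLAIM_REBOOT flag are illustrative assumptions,
not the actual nfs4_state_mark_reclaim_nograce() implementation:

/* illustrative sketch only */
static void mark_state_reclaim_nograce(struct nfs_client *clp,
				       struct nfs4_state *state)
{
	/* this state is no longer eligible for reboot (reclaim) recovery */
	clear_bit(NFS_STATE_RECLAIM_REBOOT, &state->flags);
	/* recover it with ordinary OPEN and LOCK requests instead */
	set_bit(NFS_STATE_RECLAIM_NOGRACE, &state->flags);
	/* and tell the state manager there is nograce work queued */
	set_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state);
}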
Jeff Layton
2014-10-01 18:32:49 UTC
On Sat, 27 Sep 2014 23:54:57 -0400
Post by Trond Myklebust
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.
RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.
---
fs/nfs/nfs4state.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 22fe35104c0c..26d510d11efd 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
break;
clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
break;
What distinguishes between the v4.0 and v4.1+ case here?

For v4.1+, we do want the client to just try to reclaim everything that
it can. For v4.0 though, we need to be a little more careful. Consider:


Client                                      Server
===================================================================
SETCLIENTID
OPEN (O1)
LOCK (L1)
                                            reboot (B1)

RENEW (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN(reclaim O1) (NFS4_OK)

                === NETWORK PARTITION ===
                                            Grace period is lifted, but client1's
                                            lease hasn't expired yet

                                            Lock that conflicts with L1 is handed
                                            out to client2

                                            reboot (B2)
                === PARTITION HEALS ===
LOCK(reclaim L1) (NFS4ERR_STALE_CLIENTID)

SETCLIENTID
OPEN (reclaim O1) (NFS4_OK)
LOCK (reclaim L1) (NFS4_OK)


Now we have a conflict. I think that the client should not try to
reclaim L1 after B2 in the v4.0 case. Do we need to do something
to handle the v4.0 vs. v4.1+ cases differently here?
--
Jeff Layton <jlayton-7I+n7zu2hftEKMMhf/***@public.gmane.org>
Trond Myklebust
2014-10-01 18:46:43 UTC
Post by Jeff Layton
On Sat, 27 Sep 2014 23:54:57 -0400
Post by Trond Myklebust
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.
RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.
---
fs/nfs/nfs4state.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 22fe35104c0c..26d510d11efd 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
break;
clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
break;
What distinguishes between the v4.0 and v4.1+ case here?
Nothing. They are actually supposed to be handled identically here.
nfs4_handle_reclaim_lease_error() is called if and only if the
SETCLIENTID_CONFIRM (v4.0) or CREATE_SESSION (v4.1) fails. At that
point we have not yet sent out any OPEN or LOCK reclaim requests, so
the failure really results in an all-or-nothing reclaim for both
NFSv4.0 and NFSv4.1.
Post by Jeff Layton
For v4.1+, we do want the client to just try to reclaim everything that
it can. For v4.0 though, we need to be a little more careful. Consider:
Client                                      Server
===================================================================
SETCLIENTID
OPEN (O1)
LOCK (L1)
                                            reboot (B1)
RENEW (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN(reclaim O1) (NFS4_OK)
                === NETWORK PARTITION ===
                                            Grace period is lifted, but client1's
                                            lease hasn't expired yet
                                            Lock that conflicts with L1 is handed
                                            out to client2
                                            reboot (B2)
                === PARTITION HEALS ===
LOCK(reclaim L1) (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN (reclaim O1) (NFS4_OK)
LOCK (reclaim L1) (NFS4_OK)
Now we have a conflict. I think that the client should not try to
reclaim L1 after B2 in the v4.0 case. Do we need to do something
to handle the v4.0 vs. v4.1+ cases differently here?
This patch does not change the existing handling of
NFS4ERR_STALE_CLIENTID above, so the NFSv4.0 code will continue to
work as before.

The reason why NFSv4.1 will not need changing above is because the
SEQUENCE op that we send instead of RENEW will receive a
NFS4ERR_DEADSESSION or NFS4ERR_BADSESSION instead of the stale
clientid error.
--
Trond Myklebust

Linux NFS client maintainer, PrimaryData

trond.myklebust-7I+n7zu2hftEKMMhf/***@public.gmane.org
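
Put another way, and purely as an illustration of the point rather than code
from the patch: the error that kicks off recovery differs by minor version, so
the v4.0 renewal path in Jeff's scenario is not what a v4.1+ client sees. A
hypothetical dispatch of a lease-renewal error might look like:

switch (renew_status) {
case -NFS4ERR_STALE_CLIENTID:
	/* v4.0: RENEW rejected, the clientid is gone */
	nfs4_state_start_reclaim_reboot(clp);
	break;
case -NFS4ERR_BADSESSION:
case -NFS4ERR_DEADSESSION:
	/* v4.1+: SEQUENCE rejected, the session is gone */
	set_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state);
	break;
}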
Jeff Layton
2014-10-01 18:54:56 UTC
On Wed, 1 Oct 2014 14:46:43 -0400
Post by Trond Myklebust
Post by Jeff Layton
On Sat, 27 Sep 2014 23:54:57 -0400
Post by Trond Myklebust
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.
RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.
---
fs/nfs/nfs4state.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 22fe35104c0c..26d510d11efd 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
break;
clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
break;
What distinguishes between the v4.0 and v4.1+ case here?
Nothing. They are actually supposed to be handled identically here.
nfs4_handle_reclaim_lease_error() is called if and only if the
SETCLIENTID_CONFIRM (v4.0) or CREATE_SESSION (v4.1) fails. At that
point we have not yet sent out any OPEN or LOCK reclaim requests, so
the failure really results in an all-or-nothing reclaim for both
NFSv4.0 and NFSv4.1.
Post by Jeff Layton
For v4.1+, we do want the client to just try to reclaim everything that
it can. For v4.0 though, we need to be a little more careful. Consider:
Client                                      Server
===================================================================
SETCLIENTID
OPEN (O1)
LOCK (L1)
                                            reboot (B1)
RENEW (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN(reclaim O1) (NFS4_OK)
                === NETWORK PARTITION ===
                                            Grace period is lifted, but client1's
                                            lease hasn't expired yet
                                            Lock that conflicts with L1 is handed
                                            out to client2
                                            reboot (B2)
                === PARTITION HEALS ===
LOCK(reclaim L1) (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN (reclaim O1) (NFS4_OK)
LOCK (reclaim L1) (NFS4_OK)
Now we have a conflict. I think that the client should not try to
reclaim L1 after B2 in the v4.0 case. Do we need to do something
to handle the v4.0 vs. v4.1+ cases differently here?
This patch does not change the existing handling of
NFS4ERR_STALE_CLIENTID above, so the NFSv4.0 code will continue to
work as before.
The reason why NFSv4.1 will not need changing above is because the
SEQUENCE op that we send instead of RENEW will receive a
NFS4ERR_DEADSESSION or NFS4ERR_BADSESSION instead of the stale
clientid error.
Ahh ok. Got it!

Acked-by: Jeff Layton <jlayton-7I+n7zu2hftEKMMhf/***@public.gmane.org>