
NFSv4: fix a mount deadlock in NFS v4.1 client

Message ID 20240906-nfs-mount-deadlock-fix-v1-1-ea1aef533f9c@google.com (mailing list archive)
State New
Series NFSv4: fix a mount deadlock in NFS v4.1 client

Commit Message

Oleksandr Tymoshenko Sept. 6, 2024, 12:57 a.m. UTC
nfs41_init_clientid does not signal a failure condition from
nfs4_proc_exchange_id and nfs4_proc_create_session to the client, which may
leave the mount syscall blocked indefinitely in the following stack trace:
  nfs_wait_client_init_complete
  nfs41_discover_server_trunking
  nfs4_discover_server_trunking
  nfs4_init_client
  nfs4_set_client
  nfs4_create_server
  nfs4_try_get_tree
  vfs_get_tree
  do_new_mount
  __se_sys_mount

leaving the client stuck in an uninitialized state.

In addition, all subsequent mount calls also get blocked in
nfs_match_client, waiting for the uninitialized client to finish
initialization:
  nfs_wait_client_init_complete
  nfs_match_client
  nfs_get_client
  nfs4_set_client
  nfs4_create_server
  nfs4_try_get_tree
  vfs_get_tree
  do_new_mount
  __se_sys_mount

To avoid this situation, propagate the error condition to the mount thread
and let the mount syscall fail properly.
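
For context, the hang comes from the client-ready handshake between the state
manager and the mount threads. A simplified paraphrase of the relevant helpers
in fs/nfs/client.c (illustrative only; the exact source may differ between
kernel versions):

/* Marking the client ready (or failed) wakes up everyone waiting on it. */
void nfs_mark_client_ready(struct nfs_client *clp, int state)
{
	smp_wmb();
	clp->cl_cons_state = state;	/* NFS_CS_READY (0) or a negative errno */
	wake_up_all(&nfs_client_active_wq);
}

/* Mount threads block here until cl_cons_state drops to NFS_CS_READY or
 * below, i.e. until the client is either ready or marked with an error. */
int nfs_wait_client_init_complete(const struct nfs_client *clp)
{
	return wait_event_killable(nfs_client_active_wq,
				   clp->cl_cons_state <= NFS_CS_READY);
}

If nfs41_init_clientid returns on its error path without ever calling
nfs_mark_client_ready, cl_cons_state never reaches NFS_CS_READY or a negative
value, so every thread waiting in nfs_wait_client_init_complete sleeps
indefinitely.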

Signed-off-by: Oleksandr Tymoshenko <ovt@google.com>
---
 fs/nfs/nfs4state.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


---
base-commit: ad618736883b8970f66af799e34007475fe33a68
change-id: 20240906-nfs-mount-deadlock-fix-55c14b38e088

Best regards,

Comments

Trond Myklebust Sept. 8, 2024, 4:48 p.m. UTC | #1
On Fri, 2024-09-06 at 00:57 +0000, Oleksandr Tymoshenko wrote:
> nfs41_init_clientid does not signal a failure condition from
> nfs4_proc_exchange_id and nfs4_proc_create_session to a client which
> may
> lead to mount syscall indefinitely blocked in the following stack
> trace:
>   nfs_wait_client_init_complete
>   nfs41_discover_server_trunking
>   nfs4_discover_server_trunking
>   nfs4_init_client
>   nfs4_set_client
>   nfs4_create_server
>   nfs4_try_get_tree
>   vfs_get_tree
>   do_new_mount
>   __se_sys_mount
> 
> and the client stuck in uninitialized state.
> 
> In addition to this all subsequent mount calls would also get blocked
> in
> nfs_match_client waiting for the uninitialized client to finish
> initialization:
>   nfs_wait_client_init_complete
>   nfs_match_client
>   nfs_get_client
>   nfs4_set_client
>   nfs4_create_server
>   nfs4_try_get_tree
>   vfs_get_tree
>   do_new_mount
>   __se_sys_mount
> 
> To avoid this situation propagate error condition to the mount thread
> and let mount syscall fail properly.
> 
> Signed-off-by: Oleksandr Tymoshenko <ovt@google.com>
> ---
>  fs/nfs/nfs4state.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index 877f682b45f2..54ad3440ad2b 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -335,8 +335,8 @@ int nfs41_init_clientid(struct nfs_client *clp, const struct cred *cred)
>  	if (!(clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R))
>  		nfs4_state_start_reclaim_reboot(clp);
>  	nfs41_finish_session_reset(clp);
> -	nfs_mark_client_ready(clp, NFS_CS_READY);
>  out:
> +	nfs_mark_client_ready(clp, status == 0 ? NFS_CS_READY : status);
>  	return status;
>  }

NACK. This will break all sorts of recovery scenarios, because it
doesn't distinguish between an initial 'mount' and a server reboot
recovery situation.
Even in the case where we are in the initial mount, it also doesn't
distinguish between transient errors such as NFS4ERR_DELAY or reboot
errors such as NFS4ERR_STALE_CLIENTID, etc.

Exactly what is the scenario that is causing your hang? Let's try to
address that with a more targeted fix.
Oleksandr Tymoshenko Sept. 9, 2024, 4:36 p.m. UTC | #2
>> nfs41_init_clientid does not signal a failure condition from
>> nfs4_proc_exchange_id and nfs4_proc_create_session to a client which
>> may
>> lead to mount syscall indefinitely blocked in the following stack

> NACK. This will break all sorts of recovery scenarios, because it
> doesn't distinguish between an initial 'mount' and a server reboot
> recovery situation.
> Even in the case where we are in the initial mount, it also doesn't
> distinguish between transient errors such as NFS4ERR_DELAY or reboot
> errors such as NFS4ERR_STALE_CLIENTID, etc.

> Exactly what is the scenario that is causing your hang? Let's try to
> address that with a more targeted fix.

The scenario is as follows: there are several NFS servers and several
production machines with multiple NFS mounts. This is a containerized
multi-tenant workflow, so every tenant gets its own NFS mount to access their
data. At some point nfs41_init_clientid fails in the initial mount.nfs call,
and all subsequent mount.nfs calls just hang in nfs_wait_client_init_complete
until the original one, where nfs4_proc_exchange_id has failed, is killed.

The cause of the nfs41_init_clientid failure in the production case is a
timeout (error 110, i.e. ETIMEDOUT). The following error message is observed
in the logs:
  NFS: state manager: lease expired failed on NFSv4 server <ip> with error 110
Oleksandr Tymoshenko Sept. 9, 2024, 5:46 p.m. UTC | #3
>> nfs41_init_clientid does not signal a failure condition from
>> nfs4_proc_exchange_id and nfs4_proc_create_session to a client which
>> may
>> lead to mount syscall indefinitely blocked in the following stack

> NACK. This will break all sorts of recovery scenarios, because it
> doesn't distinguish between an initial 'mount' and a server reboot
> recovery situation.
> Even in the case where we are in the initial mount, it also doesn't
> distinguish between transient errors such as NFS4ERR_DELAY or reboot
> errors such as NFS4ERR_STALE_CLIENTID, etc.

> Exactly what is the scenario that is causing your hang? Let's try to
> address that with a more targeted fix.

(re-sending with the correct subject; the previous mistake was due to a failure in my tooling)

The scenario is as follows: there are several NFS servers and several
production machines with multiple NFS mounts. This is a containerized
multi-tenant workflow, so every tenant gets its own NFS mount to access their
data. At some point nfs41_init_clientid fails in the initial mount.nfs call,
and all subsequent mount.nfs calls just hang in nfs_wait_client_init_complete
until the original one, where nfs4_proc_exchange_id has failed, is killed.

The cause of the nfs41_init_clientid failure in the production case is a
timeout (error 110, i.e. ETIMEDOUT). The following error message is observed
in the logs:
  NFS: state manager: lease expired failed on NFSv4 server <ip> with error 110
Trond Myklebust Sept. 9, 2024, 5:56 p.m. UTC | #4
On Mon, 2024-09-09 at 16:36 +0000, Oleksandr Tymoshenko wrote:
> > > nfs41_init_clientid does not signal a failure condition from
> > > nfs4_proc_exchange_id and nfs4_proc_create_session to a client
> > > which
> > > may
> > > lead to mount syscall indefinitely blocked in the following stack
> 
> > NACK. This will break all sorts of recovery scenarios, because it
> > doesn't distinguish between an initial 'mount' and a server reboot
> > recovery situation.
> > Even in the case where we are in the initial mount, it also doesn't
> > distinguish between transient errors such as NFS4ERR_DELAY or
> > reboot
> > errors such as NFS4ERR_STALE_CLIENTID, etc.
> 
> > Exactly what is the scenario that is causing your hang? Let's try
> > to
> > address that with a more targeted fix.
> 
> The scenario is as follows: there are several NFS servers and several
> production machines with multiple NFS mounts. This is a containerized
> multi-tennant workflow so every tennant gets its own NFS mount to
> access their
> data. At some point nfs41_init_clientid fails in the initial
> mount.nfs call
> and all subsequent mount.nfs calls just hang in
> nfs_wait_client_init_complete
> until the original one, where nfs4_proc_exchange_id has failed, is
> killed.
> 
> The cause of the nfs41_init_clientid failure in the production case
> is a timeout.
> The following error message is observed in logs:
>   NFS: state manager: lease expired failed on NFSv4 server <ip> with
> error 110
> 

How about something like the following fix then?
8<-----------------------------------------------
From eb402b489bb0d0ada1a3dd9101d4d7e193402e46 Mon Sep 17 00:00:00 2001
Message-ID: <eb402b489bb0d0ada1a3dd9101d4d7e193402e46.1725904471.git.trond.myklebust@hammerspace.com>
From: Trond Myklebust <trond.myklebust@hammerspace.com>
Date: Mon, 9 Sep 2024 13:47:07 -0400
Subject: [PATCH] NFSv4: Fail mounts if the lease setup times out

If the server is down when the client is trying to mount, so that the
calls to exchange_id or create_session fail, then we should allow the
mount system call to fail rather than hang and block other mount/umount
calls.

Reported-by: Oleksandr Tymoshenko <ovt@google.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 fs/nfs/nfs4state.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 30aba1dedaba..59dcdf9bc7b4 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -2024,6 +2024,12 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
 		nfs_mark_client_ready(clp, -EPERM);
 		clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
 		return -EPERM;
+	case -ETIMEDOUT:
+		if (clp->cl_cons_state == NFS_CS_SESSION_INITING) {
+			nfs_mark_client_ready(clp, -EIO);
+			return -EIO;
+		}
+		fallthrough;
 	case -EACCES:
 	case -NFS4ERR_DELAY:
 	case -EAGAIN:
Oleksandr Tymoshenko Sept. 9, 2024, 11:06 p.m. UTC | #5
On Mon, Sep 9, 2024 at 10:56 AM Trond Myklebust <trondmy@hammerspace.com> wrote:
>
> On Mon, 2024-09-09 at 16:36 +0000, Oleksandr Tymoshenko wrote:
> > > > nfs41_init_clientid does not signal a failure condition from
> > > > nfs4_proc_exchange_id and nfs4_proc_create_session to a client
> > > > which
> > > > may
> > > > lead to mount syscall indefinitely blocked in the following stack
> >
> > > NACK. This will break all sorts of recovery scenarios, because it
> > > doesn't distinguish between an initial 'mount' and a server reboot
> > > recovery situation.
> > > Even in the case where we are in the initial mount, it also doesn't
> > > distinguish between transient errors such as NFS4ERR_DELAY or
> > > reboot
> > > errors such as NFS4ERR_STALE_CLIENTID, etc.
> >
> > > Exactly what is the scenario that is causing your hang? Let's try
> > > to
> > > address that with a more targeted fix.
> >
> > The scenario is as follows: there are several NFS servers and several
> > production machines with multiple NFS mounts. This is a containerized
> > multi-tennant workflow so every tennant gets its own NFS mount to
> > access their
> > data. At some point nfs41_init_clientid fails in the initial
> > mount.nfs call
> > and all subsequent mount.nfs calls just hang in
> > nfs_wait_client_init_complete
> > until the original one, where nfs4_proc_exchange_id has failed, is
> > killed.
> >
> > The cause of the nfs41_init_clientid failure in the production case
> > is a timeout.
> > The following error message is observed in logs:
> >   NFS: state manager: lease expired failed on NFSv4 server <ip> with
> > error 110
> >
>
> How about something like the following fix then?
> 8<-----------------------------------------------
> From eb402b489bb0d0ada1a3dd9101d4d7e193402e46 Mon Sep 17 00:00:00 2001
> Message-ID: <eb402b489bb0d0ada1a3dd9101d4d7e193402e46.1725904471.git.trond.myklebust@hammerspace.com>
> From: Trond Myklebust <trond.myklebust@hammerspace.com>
> Date: Mon, 9 Sep 2024 13:47:07 -0400
> Subject: [PATCH] NFSv4: Fail mounts if the lease setup times out
>
> If the server is down when the client is trying to mount, so that the
> calls to exchange_id or create_session fail, then we should allow the
> mount system call to fail rather than hang and block other mount/umount
> calls.
>
> Reported-by: Oleksandr Tymoshenko <ovt@google.com>
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> ---
>  fs/nfs/nfs4state.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index 30aba1dedaba..59dcdf9bc7b4 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -2024,6 +2024,12 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
>                 nfs_mark_client_ready(clp, -EPERM);
>                 clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>                 return -EPERM;
> +       case -ETIMEDOUT:
> +               if (clp->cl_cons_state == NFS_CS_SESSION_INITING) {
> +                       nfs_mark_client_ready(clp, -EIO);
> +                       return -EIO;
> +               }
> +               fallthrough;
>         case -EACCES:
>         case -NFS4ERR_DELAY:
>         case -EAGAIN:
> --

This patch fixes the issue in my simulated environment. ETIMEDOUT is the
error code that was observed in the production environment, but I guess it is
not the only possible one. Would it make sense to handle all error conditions
in the NFS_CS_SESSION_INITING state, or are there others that are recoverable?
Trond Myklebust Sept. 10, 2024, 12:22 a.m. UTC | #6
On Mon, 2024-09-09 at 16:06 -0700, Oleksandr Tymoshenko wrote:
> On Mon, Sep 9, 2024 at 10:56 AM Trond Myklebust
> <trondmy@hammerspace.com> wrote:
> > 
> > On Mon, 2024-09-09 at 16:36 +0000, Oleksandr Tymoshenko wrote:
> > > > > nfs41_init_clientid does not signal a failure condition from
> > > > > nfs4_proc_exchange_id and nfs4_proc_create_session to a
> > > > > client
> > > > > which
> > > > > may
> > > > > lead to mount syscall indefinitely blocked in the following
> > > > > stack
> > > 
> > > > NACK. This will break all sorts of recovery scenarios, because
> > > > it
> > > > doesn't distinguish between an initial 'mount' and a server
> > > > reboot
> > > > recovery situation.
> > > > Even in the case where we are in the initial mount, it also
> > > > doesn't
> > > > distinguish between transient errors such as NFS4ERR_DELAY or
> > > > reboot
> > > > errors such as NFS4ERR_STALE_CLIENTID, etc.
> > > 
> > > > Exactly what is the scenario that is causing your hang? Let's
> > > > try
> > > > to
> > > > address that with a more targeted fix.
> > > 
> > > The scenario is as follows: there are several NFS servers and
> > > several
> > > production machines with multiple NFS mounts. This is a
> > > containerized
> > > multi-tennant workflow so every tennant gets its own NFS mount to
> > > access their
> > > data. At some point nfs41_init_clientid fails in the initial
> > > mount.nfs call
> > > and all subsequent mount.nfs calls just hang in
> > > nfs_wait_client_init_complete
> > > until the original one, where nfs4_proc_exchange_id has failed,
> > > is
> > > killed.
> > > 
> > > The cause of the nfs41_init_clientid failure in the production
> > > case
> > > is a timeout.
> > > The following error message is observed in logs:
> > >   NFS: state manager: lease expired failed on NFSv4 server <ip>
> > > with
> > > error 110
> > > 
> > 
> > How about something like the following fix then?
> > 8<-----------------------------------------------
> > From eb402b489bb0d0ada1a3dd9101d4d7e193402e46 Mon Sep 17 00:00:00 2001
> > Message-ID: <eb402b489bb0d0ada1a3dd9101d4d7e193402e46.1725904471.git.trond.myklebust@hammerspace.com>
> > From: Trond Myklebust <trond.myklebust@hammerspace.com>
> > Date: Mon, 9 Sep 2024 13:47:07 -0400
> > Subject: [PATCH] NFSv4: Fail mounts if the lease setup times out
> > 
> > If the server is down when the client is trying to mount, so that
> > the
> > calls to exchange_id or create_session fail, then we should allow
> > the
> > mount system call to fail rather than hang and block other
> > mount/umount
> > calls.
> > 
> > Reported-by: Oleksandr Tymoshenko <ovt@google.com>
> > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > ---
> >  fs/nfs/nfs4state.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> > index 30aba1dedaba..59dcdf9bc7b4 100644
> > --- a/fs/nfs/nfs4state.c
> > +++ b/fs/nfs/nfs4state.c
> > @@ -2024,6 +2024,12 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
> >                 nfs_mark_client_ready(clp, -EPERM);
> >                 clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
> >                 return -EPERM;
> > +       case -ETIMEDOUT:
> > +               if (clp->cl_cons_state == NFS_CS_SESSION_INITING) {
> > +                       nfs_mark_client_ready(clp, -EIO);
> > +                       return -EIO;
> > +               }
> > +               fallthrough;
> >         case -EACCES:
> >         case -NFS4ERR_DELAY:
> >         case -EAGAIN:
> > --
> 
> This patch fixes the issue in my simulated environment. ETIMEDOUT is
> the error code that
> was observed in the production env but I guess it's not the only
> possible one. Would it make
> sense to handle all error conditions in the NFS_CS_SESSION_INITING
> state or are there
> some others that are recoverable?
> 

The only other one that I think might want to be treated similarly is the
EACCES error above, because it is only returned when there is a problem with
your RPCSEC_GSS/krb5 credential. I was thinking of changing that one too in
the same patch, but came to the conclusion that it would be better to treat
the two issues with separate fixes.

The other error conditions are all supposed to be transient NFS-level errors.
They should not be treated as fatal.
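
A follow-up for the EACCES case along those lines (hypothetical sketch only,
not a submitted patch) could mirror the ETIMEDOUT hunk above and key on the
same NFS_CS_SESSION_INITING check in nfs4_handle_reclaim_lease_error:

	/* Hypothetical sketch -- not a submitted patch. EACCES during the
	 * initial mount points at a broken RPCSEC_GSS/krb5 credential, so
	 * fail the mount; outside of mount, keep the existing handling.
	 */
	case -EACCES:
		if (clp->cl_cons_state == NFS_CS_SESSION_INITING) {
			nfs_mark_client_ready(clp, -EACCES);
			return -EACCES;
		}
		fallthrough;	/* fall through to the transient-error cases */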
Oleksandr Tymoshenko Sept. 10, 2024, 9:08 p.m. UTC | #7
On Mon, Sep 9, 2024 at 5:22 PM Trond Myklebust <trondmy@hammerspace.com> wrote:
>
> On Mon, 2024-09-09 at 16:06 -0700, Oleksandr Tymoshenko wrote:
> > On Mon, Sep 9, 2024 at 10:56 AM Trond Myklebust
> > <trondmy@hammerspace.com> wrote:
> > >
> > > On Mon, 2024-09-09 at 16:36 +0000, Oleksandr Tymoshenko wrote:
> > > > > > nfs41_init_clientid does not signal a failure condition from
> > > > > > nfs4_proc_exchange_id and nfs4_proc_create_session to a
> > > > > > client
> > > > > > which
> > > > > > may
> > > > > > lead to mount syscall indefinitely blocked in the following
> > > > > > stack
> > > >
> > > > > NACK. This will break all sorts of recovery scenarios, because
> > > > > it
> > > > > doesn't distinguish between an initial 'mount' and a server
> > > > > reboot
> > > > > recovery situation.
> > > > > Even in the case where we are in the initial mount, it also
> > > > > doesn't
> > > > > distinguish between transient errors such as NFS4ERR_DELAY or
> > > > > reboot
> > > > > errors such as NFS4ERR_STALE_CLIENTID, etc.
> > > >
> > > > > Exactly what is the scenario that is causing your hang? Let's
> > > > > try
> > > > > to
> > > > > address that with a more targeted fix.
> > > >
> > > > The scenario is as follows: there are several NFS servers and
> > > > several
> > > > production machines with multiple NFS mounts. This is a
> > > > containerized
> > > > multi-tennant workflow so every tennant gets its own NFS mount to
> > > > access their
> > > > data. At some point nfs41_init_clientid fails in the initial
> > > > mount.nfs call
> > > > and all subsequent mount.nfs calls just hang in
> > > > nfs_wait_client_init_complete
> > > > until the original one, where nfs4_proc_exchange_id has failed,
> > > > is
> > > > killed.
> > > >
> > > > The cause of the nfs41_init_clientid failure in the production
> > > > case
> > > > is a timeout.
> > > > The following error message is observed in logs:
> > > >   NFS: state manager: lease expired failed on NFSv4 server <ip>
> > > > with
> > > > error 110
> > > >
> > >
> > > How about something like the following fix then?
> > > 8<-----------------------------------------------
> > > From eb402b489bb0d0ada1a3dd9101d4d7e193402e46 Mon Sep 17 00:00:00 2001
> > > Message-ID: <eb402b489bb0d0ada1a3dd9101d4d7e193402e46.1725904471.git.trond.myklebust@hammerspace.com>
> > > From: Trond Myklebust <trond.myklebust@hammerspace.com>
> > > Date: Mon, 9 Sep 2024 13:47:07 -0400
> > > Subject: [PATCH] NFSv4: Fail mounts if the lease setup times out
> > >
> > > If the server is down when the client is trying to mount, so that
> > > the
> > > calls to exchange_id or create_session fail, then we should allow
> > > the
> > > mount system call to fail rather than hang and block other
> > > mount/umount
> > > calls.
> > >
> > > Reported-by: Oleksandr Tymoshenko <ovt@google.com>
> > > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > > ---
> > >  fs/nfs/nfs4state.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > >
> > > diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> > > index 30aba1dedaba..59dcdf9bc7b4 100644
> > > --- a/fs/nfs/nfs4state.c
> > > +++ b/fs/nfs/nfs4state.c
> > > @@ -2024,6 +2024,12 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
> > >                 nfs_mark_client_ready(clp, -EPERM);
> > >                 clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
> > >                 return -EPERM;
> > > +       case -ETIMEDOUT:
> > > +               if (clp->cl_cons_state == NFS_CS_SESSION_INITING) {
> > > +                       nfs_mark_client_ready(clp, -EIO);
> > > +                       return -EIO;
> > > +               }
> > > +               fallthrough;
> > >         case -EACCES:
> > >         case -NFS4ERR_DELAY:
> > >         case -EAGAIN:
> > > --
> >
> > This patch fixes the issue in my simulated environment. ETIMEDOUT is
> > the error code that
> > was observed in the production env but I guess it's not the only
> > possible one. Would it make
> > sense to handle all error conditions in the NFS_CS_SESSION_INITING
> > state or are there
> > some others that are recoverable?
> >
>
> The only other one that I'm thinking might want to be treated similarly
> is the above EACCES error. That's because that is only returned if
> there is a problem with your RPCSEC_GSS/krb5 credential. I was thinking
> of changing that one too in the same patch, but came to the conclusion
> it would be better to treat the two issues with separate fixes.
>
> The other error conditions are all supposed to be transient NFS level
> errors. They should not be treated as fatal.

Sounds good. Will you submit this patch to the mainline kernel? If so, please
add me to Cc. Thanks for looking into this.

Patch

diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 877f682b45f2..54ad3440ad2b 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -335,8 +335,8 @@  int nfs41_init_clientid(struct nfs_client *clp, const struct cred *cred)
 	if (!(clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R))
 		nfs4_state_start_reclaim_reboot(clp);
 	nfs41_finish_session_reset(clp);
-	nfs_mark_client_ready(clp, NFS_CS_READY);
 out:
+	nfs_mark_client_ready(clp, status == 0 ? NFS_CS_READY : status);
 	return status;
 }