
[v3,2/3] nfsd: Initial implementation of NFSv4 Courteous Server

Message ID 20210916182212.81608-3-dai.ngo@oracle.com (mailing list archive)
State New
Series: nfsd: Initial implementation of NFSv4 Courteous Server

Commit Message

Dai Ngo Sept. 16, 2021, 6:22 p.m. UTC
Currently an NFSv4 client must maintain its lease by using at least one
of its state tokens or, failing that, by issuing a RENEW (4.0) or a
singleton SEQUENCE (4.1) at least once during each lease period. If the
client fails to renew the lease, for any reason, the Linux server expunges
the state tokens immediately upon detecting the failure to renew the
lease and begins returning NFS4ERR_EXPIRED should the client reconnect
and attempt to use the (now) expired state.

The default lease period for the Linux server is 90 seconds.  The typical
client cuts that in half and will issue a lease renewing operation every
45 seconds. The 90-second lease period is very short considering the
potential for moderately long-term network partitions.  A network partition
refers to any loss of network connectivity between the NFS client and the
NFS server, regardless of its root cause.  This includes NIC failures, NIC
driver bugs, network misconfigurations & administrative errors, routers &
switches crashing and/or having software updates applied, even down to
cables being physically pulled.  In most cases, these network failures are
transient, although the duration is unknown.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server.  A Courteous Server continues to recognize
previously generated state tokens as valid until conflict arises between
the expired state and the requests from another client, or the server
reboots.

The initial implementation of the Courteous Server will do the following:

. when the laundromat thread detects an expired client: if that client
still has established state on the Linux server and there are no waiters
for the client's locks, mark the client as a COURTESY_CLIENT and skip
destroying the client and all its states; otherwise destroy the client as
usual.

. when an OPEN request conflicts with a COURTESY_CLIENT, destroy the
expired client and all its states, skip the delegation recall, then allow
the conflicting request to succeed.

. when a LOCK/LOCKT, NLM LOCK/TEST, or local lock request conflicts
with a COURTESY_CLIENT, destroy the expired client and all its states,
then allow the conflicting request to succeed.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/nfsd/nfs4state.c        | 155 ++++++++++++++++++++++++++++++++++++++++++++-
 fs/nfsd/state.h            |   3 +
 include/linux/sunrpc/svc.h |   1 +
 3 files changed, 156 insertions(+), 3 deletions(-)

Comments

J. Bruce Fields Sept. 22, 2021, 9:14 p.m. UTC | #1
On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
>  		seq_puts(m, "status: confirmed\n");
>  	else
>  		seq_puts(m, "status: unconfirmed\n");
> +	seq_printf(m, "courtesy client: %s\n",
> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
> +	seq_printf(m, "last renew: %lld secs\n",

I'd rather keep any units to the left of the colon.  Also, "last renew"
suggests to me that it's the absolute time of the last renew.  Maybe
"seconds since last renew:" ?

> +		ktime_get_boottime_seconds() - clp->cl_time);
>  	seq_printf(m, "name: ");
>  	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>  	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
> @@ -4652,6 +4662,42 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
>  	nfsd4_run_cb(&dp->dl_recall);
>  }
>  
> +/*
> + * If the conflict happens due to a NFSv4 request then check for
> + * courtesy client and set rq_conflict_client so that upper layer
> + * can destroy the conflict client and retry the call.
> + */
> +static bool
> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> +{
> +	struct svc_rqst *rqst;
> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +	bool ret = false;
> +
> +	if (!i_am_nfsd()) {
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +			return true;
> +		}
> +		return false;
> +	}
> +	rqst = kthread_data(current);
> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> +		return false;
> +	rqst->rq_conflict_client = NULL;
> +
> +	spin_lock(&nn->client_lock);
> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
> +				!mark_client_expired_locked(clp)) {
> +		rqst->rq_conflict_client = clp;
> +		ret = true;
> +	}
> +	spin_unlock(&nn->client_lock);

Check whether this is safe; I think the flc_lock may be taken inside of
this lock elsewhere, resulting in a potential deadlock?

rqst doesn't need any locking as it's only being used by this thread, so
it's the client expiration stuff that's the problem, I guess.

--b.
Dai Ngo Sept. 22, 2021, 10:16 p.m. UTC | #2
On 9/22/21 2:14 PM, J. Bruce Fields wrote:
> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>> @@ -2389,6 +2395,10 @@ static int client_info_show(struct seq_file *m, void *v)
>>   		seq_puts(m, "status: confirmed\n");
>>   	else
>>   		seq_puts(m, "status: unconfirmed\n");
>> +	seq_printf(m, "courtesy client: %s\n",
>> +		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
>> +	seq_printf(m, "last renew: %lld secs\n",
> I'd rather keep any units to the left of the colon.  Also, "last renew"
> suggests to me that it's the absolute time of the last renew.  Maybe
> "seconds since last renew:" ?

will fix in v4.

>
>> +		ktime_get_boottime_seconds() - clp->cl_time);
>>   	seq_printf(m, "name: ");
>>   	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
>>   	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
>> @@ -4652,6 +4662,42 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
>>   	nfsd4_run_cb(&dp->dl_recall);
>>   }
>>   
>> +/*
>> + * If the conflict happens due to a NFSv4 request then check for
>> + * courtesy client and set rq_conflict_client so that upper layer
>> + * can destroy the conflict client and retry the call.
>> + */
>> +static bool
>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>> +{
>> +	struct svc_rqst *rqst;
>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +	bool ret = false;
>> +
>> +	if (!i_am_nfsd()) {
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>> +			return true;
>> +		}
>> +		return false;
>> +	}
>> +	rqst = kthread_data(current);
>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>> +		return false;
>> +	rqst->rq_conflict_client = NULL;
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
>> +				!mark_client_expired_locked(clp)) {
>> +		rqst->rq_conflict_client = clp;
>> +		ret = true;
>> +	}
>> +	spin_unlock(&nn->client_lock);
> Check whether this is safe; I think the flc_lock may be taken inside of
> this lock elsewhere, resulting in a potential deadlock?
>
> rqst doesn't need any locking as it's only being used by this thread, so
> it's the client expiration stuff that's the problem, I guess.

mark_client_expired_locked needs to acquire cl_lock. I think the lock
ordering is ok, client_lock -> cl_lock. nfsd4_exchange_id uses this
lock ordering.

I will submit v4 patch with the fix in client_info_show and also new code
for handling NFSv4 share reservation conflicts with courtesy clients.

Thanks Bruce,

-Dai

>
> --b.
J. Bruce Fields Sept. 23, 2021, 1:18 a.m. UTC | #3
On Wed, Sep 22, 2021 at 03:16:34PM -0700, dai.ngo@oracle.com wrote:
> 
> On 9/22/21 2:14 PM, J. Bruce Fields wrote:
> >On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> >>+/*
> >>+ * If the conflict happens due to a NFSv4 request then check for
> >>+ * courtesy client and set rq_conflict_client so that upper layer
> >>+ * can destroy the conflict client and retry the call.
> >>+ */
> >>+static bool
> >>+nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> >>+{
> >>+	struct svc_rqst *rqst;
> >>+	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> >>+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> >>+	bool ret = false;
> >>+
> >>+	if (!i_am_nfsd()) {
> >>+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> >>+			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> >>+			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> >>+			return true;
> >>+		}
> >>+		return false;
> >>+	}
> >>+	rqst = kthread_data(current);
> >>+	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> >>+		return false;
> >>+	rqst->rq_conflict_client = NULL;
> >>+
> >>+	spin_lock(&nn->client_lock);
> >>+	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
> >>+				!mark_client_expired_locked(clp)) {
> >>+		rqst->rq_conflict_client = clp;
> >>+		ret = true;
> >>+	}
> >>+	spin_unlock(&nn->client_lock);
> >Check whether this is safe; I think the flc_lock may be taken inside of
> >this lock elsewhere, resulting in a potential deadlock?
> >
> >rqst doesn't need any locking as it's only being used by this thread, so
> >it's the client expiration stuff that's the problem, I guess.
> 
> mark_client_expired_locked needs to acquire cl_lock. I think the lock
> ordering is ok, client_lock -> cl_lock. nfsd4_exchange_id uses this
> lock ordering.

It's flc_lock (see locks.c) that I'm worried about.  I've got a lockdep
warning here, taking a closer look....

nfsd4_release_lockowner takes clp->cl_lock and then flc_lock.

Here we're taking flc_lock and then client_lock.

As you say, elsewhere client_lock is taken and then cl_lock.

So that's the loop, I think.

--b.
J. Bruce Fields Sept. 23, 2021, 1:34 a.m. UTC | #4
On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> +/*
> + * If the conflict happens due to a NFSv4 request then check for
> + * courtesy client and set rq_conflict_client so that upper layer
> + * can destroy the conflict client and retry the call.
> + */

I think we need a different approach.  Wouldn't we need to take a
reference on clp when we assign to rq_conflict_client?

What happens if there are multiple leases found in the loop in
__break_lease?

It doesn't seem right that we'd need to treat the case where nfsd is the
breaker differently the case where it's another process.

I'm not sure what to suggest instead, though....  Maybe as with locks we
need two separate callbacks, one that tests whether there's a courtesy
client that needs removing, one that does it after we've dropped the
lock.

--b.

> +static bool
> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
> +{
> +	struct svc_rqst *rqst;
> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +	bool ret = false;
> +
> +	if (!i_am_nfsd()) {
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
> +			return true;
> +		}
> +		return false;
> +	}
> +	rqst = kthread_data(current);
> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
> +		return false;
> +	rqst->rq_conflict_client = NULL;
> +
> +	spin_lock(&nn->client_lock);
> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
> +				!mark_client_expired_locked(clp)) {
> +		rqst->rq_conflict_client = clp;
> +		ret = true;
> +	}
> +	spin_unlock(&nn->client_lock);
> +	return ret;
> +}
> +
>  /* Called from break_lease() with i_lock held. */
>  static bool
>  nfsd_break_deleg_cb(struct file_lock *fl)
> @@ -4660,6 +4706,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>  	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>  	struct nfs4_file *fp = dp->dl_stid.sc_file;
>  
> +	if (nfsd_check_courtesy_client(dp))
> +		return false;
>  	trace_nfsd_cb_recall(&dp->dl_stid);
>  
>  	/*
> @@ -5279,6 +5327,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
>  	 */
>  }
>  
> +static bool
> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
> +{
> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
> +
> +	spin_lock(&nn->client_lock);
> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
> +			mark_client_expired_locked(clp)) {
> +		spin_unlock(&nn->client_lock);
> +		return false;
> +	}
> +	spin_unlock(&nn->client_lock);
> +	expire_client(clp);
> +	return true;
> +}
> +
>  __be32
>  nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
>  {
> @@ -5328,7 +5392,13 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
>  			goto out;
>  		}
>  	} else {
> +		rqstp->rq_conflict_client = NULL;
>  		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
> +		if (status == nfserr_jukebox && rqstp->rq_conflict_client) {
> +			if (nfs4_destroy_courtesy_client(rqstp->rq_conflict_client))
> +				status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
> +		}
> +
>  		if (status) {
>  			stp->st_stid.sc_type = NFS4_CLOSED_STID;
>  			release_open_stateid(stp);
> @@ -5562,6 +5632,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>  }
>  #endif
>  
> +static
> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
> +{
> +	int i;
> +	struct nfs4_stateowner *so, *tmp;
> +	struct nfs4_lockowner *lo;
> +	struct nfs4_ol_stateid *stp;
> +	struct nfs4_file *nf;
> +	struct inode *ino;
> +	struct file_lock_context *ctx;
> +	struct file_lock *fl;
> +
> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
> +		/* scan each lock owner */
> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
> +				so_strhash) {
> +			if (so->so_is_open_owner)
> +				continue;
> +
> +			/* scan lock states of this lock owner */
> +			lo = lockowner(so);
> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
> +					st_perstateowner) {
> +				nf = stp->st_stid.sc_file;
> +				ino = nf->fi_inode;
> +				ctx = ino->i_flctx;
> +				if (!ctx)
> +					continue;
> +				/* check each lock belongs to this lock state */
> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
> +					if (fl->fl_owner != lo)
> +						continue;
> +					if (!list_empty(&fl->fl_blocked_requests))
> +						return true;
> +				}
> +			}
> +		}
> +	}
> +	return false;
> +}
> +
>  static time64_t
>  nfs4_laundromat(struct nfsd_net *nn)
>  {
> @@ -5577,7 +5688,9 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	};
>  	struct nfs4_cpntf_state *cps;
>  	copy_stateid_t *cps_t;
> +	struct nfs4_stid *stid;
>  	int i;
> +	int id = 0;
>  
>  	if (clients_still_reclaiming(nn)) {
>  		lt.new_timeo = 0;
> @@ -5598,8 +5711,33 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	spin_lock(&nn->client_lock);
>  	list_for_each_safe(pos, next, &nn->client_lru) {
>  		clp = list_entry(pos, struct nfs4_client, cl_lru);
> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> +			goto exp_client;
> +		}
> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
> +				goto exp_client;
> +			/*
> +			 * after umount, v4.0 client is still
> +			 * around waiting to be expired
> +			 */
> +			if (clp->cl_minorversion)
> +				continue;
> +		}
>  		if (!state_expired(&lt, clp->cl_time))
>  			break;
> +		spin_lock(&clp->cl_lock);
> +		stid = idr_get_next(&clp->cl_stateids, &id);
> +		spin_unlock(&clp->cl_lock);
> +		if (stid && !nfs4_anylock_conflict(clp)) {
> +			/* client still has states */
> +			clp->courtesy_client_expiry =
> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
> +			continue;
> +		}
> +exp_client:
>  		if (mark_client_expired_locked(clp))
>  			continue;
>  		list_add(&clp->cl_lru, &reaplist);
> @@ -5679,9 +5817,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>  	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>  }
>  
> -static struct workqueue_struct *laundry_wq;
> -static void laundromat_main(struct work_struct *);
> -
>  static void
>  laundromat_main(struct work_struct *laundry)
>  {
> @@ -6486,6 +6621,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
>  		lock->fl_end = OFFSET_MAX;
>  }
>  
> +/* return true if lock was expired else return false */
> +static bool
> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
> +{
> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
> +	struct nfs4_client *clp = lo->lo_owner.so_client;
> +
> +	if (testonly)
> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
> +			true : false;
> +	return nfs4_destroy_courtesy_client(clp);
> +}
> +
>  static fl_owner_t
>  nfsd4_fl_get_owner(fl_owner_t owner)
>  {
> @@ -6533,6 +6681,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
>  	.lm_notify = nfsd4_lm_notify,
>  	.lm_get_owner = nfsd4_fl_get_owner,
>  	.lm_put_owner = nfsd4_fl_put_owner,
> +	.lm_expire_lock = nfsd4_fl_expire_lock,
>  };
>  
>  static inline void
> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
> index e73bdbb1634a..93e30b101578 100644
> --- a/fs/nfsd/state.h
> +++ b/fs/nfsd/state.h
> @@ -345,6 +345,8 @@ struct nfs4_client {
>  #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
>  #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
>  					 1 << NFSD4_CLIENT_CB_KILL)
> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
>  	unsigned long		cl_flags;
>  	const struct cred	*cl_cb_cred;
>  	struct rpc_clnt		*cl_cb_client;
> @@ -385,6 +387,7 @@ struct nfs4_client {
>  	struct list_head	async_copies;	/* list of async copies */
>  	spinlock_t		async_lock;	/* lock for async copies */
>  	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
> +	int			courtesy_client_expiry;
>  };
>  
>  /* struct nfs4_client_reset
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 064c96157d1f..349bf7bf20d2 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -306,6 +306,7 @@ struct svc_rqst {
>  						 * net namespace
>  						 */
>  	void **			rq_lease_breaker; /* The v4 client breaking a lease */
> +	void			*rq_conflict_client;
>  };
>  
>  #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
> -- 
> 2.9.5
Dai Ngo Sept. 23, 2021, 5:09 p.m. UTC | #5
On 9/22/21 6:18 PM, J. Bruce Fields wrote:
> On Wed, Sep 22, 2021 at 03:16:34PM -0700, dai.ngo@oracle.com wrote:
>> On 9/22/21 2:14 PM, J. Bruce Fields wrote:
>>> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>>>> +/*
>>>> + * If the conflict happens due to a NFSv4 request then check for
>>>> + * courtesy client and set rq_conflict_client so that upper layer
>>>> + * can destroy the conflict client and retry the call.
>>>> + */
>>>> +static bool
>>>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>>>> +{
>>>> +	struct svc_rqst *rqst;
>>>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>>>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>>>> +	bool ret = false;
>>>> +
>>>> +	if (!i_am_nfsd()) {
>>>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>>>> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>>>> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>>>> +			return true;
>>>> +		}
>>>> +		return false;
>>>> +	}
>>>> +	rqst = kthread_data(current);
>>>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>>>> +		return false;
>>>> +	rqst->rq_conflict_client = NULL;
>>>> +
>>>> +	spin_lock(&nn->client_lock);
>>>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
>>>> +				!mark_client_expired_locked(clp)) {
>>>> +		rqst->rq_conflict_client = clp;
>>>> +		ret = true;
>>>> +	}
>>>> +	spin_unlock(&nn->client_lock);
>>> Check whether this is safe; I think the flc_lock may be taken inside of
>>> this lock elsewhere, resulting in a potential deadlock?
>>>
>>> rqst doesn't need any locking as it's only being used by this thread, so
>>> it's the client expiration stuff that's the problem, I guess.
>> mark_client_expired_locked needs to acquire cl_lock. I think the lock
>> ordering is ok, client_lock -> cl_lock. nfsd4_exchange_id uses this
>> lock ordering.
> It's flc_lock (see locks.c) that I'm worried about.  I've got a lockdep
> warning here, taking a closer look....
>
> nfsd4_release_lockowner takes clp->cl_lock and then flc_lock.
>
> Here we're taking flc_lock and then client_lock.
>
> As you say, elsewhere client_lock is taken and then cl_lock.
>
> So that's the loop, I think.

Thanks Bruce, I see the deadlock. We will need a new approach for this.

-Dai

>
> --b.
Dai Ngo Sept. 23, 2021, 5:09 p.m. UTC | #6
On 9/22/21 6:34 PM, J. Bruce Fields wrote:
> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>> +/*
>> + * If the conflict happens due to a NFSv4 request then check for
>> + * courtesy client and set rq_conflict_client so that upper layer
>> + * can destroy the conflict client and retry the call.
>> + */
> I think we need a different approach.

I think nfsd_check_courtesy_client is used to handle conflicts with
delegations. So instead of using rq_conflict_client to let the caller
know about the conflict and destroy the courtesy client, as the current
patch does, we can ask the laundromat thread to do the destroy. In that
case, nfs4_get_vfs_file in nfsd4_process_open2 will either return no
error, because the laundromat destroyed the courtesy client, or it gets
nfserr_jukebox, which causes the NFS client to retry. By the time
the retry comes the courtesy client should already be destroyed.

>   Wouldn't we need to take a
> reference on clp when we assign to rq_conflict_client?

we won't need rq_conflict_client with the new approach.

>
> What happens if there are multiple leases found in the loop in
> __break_lease?

this should no longer be a problem also.

>
> It doesn't seem right that we'd need to treat the case where nfsd is the
> breaker differently the case where it's another process.
>
> I'm not sure what to suggest instead, though....  Maybe as with locks we
> need two separate callbacks, one that tests whether there's a courtesy
> client that needs removing, one that does it after we've dropped the

I will try the new approach if you don't see any obvious problems
with it.

-Dai

> lock.
>
> --b.
>
>> +static bool
>> +nfsd_check_courtesy_client(struct nfs4_delegation *dp)
>> +{
>> +	struct svc_rqst *rqst;
>> +	struct nfs4_client *clp = dp->dl_recall.cb_clp;
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +	bool ret = false;
>> +
>> +	if (!i_am_nfsd()) {
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
>> +			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
>> +			return true;
>> +		}
>> +		return false;
>> +	}
>> +	rqst = kthread_data(current);
>> +	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
>> +		return false;
>> +	rqst->rq_conflict_client = NULL;
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
>> +				!mark_client_expired_locked(clp)) {
>> +		rqst->rq_conflict_client = clp;
>> +		ret = true;
>> +	}
>> +	spin_unlock(&nn->client_lock);
>> +	return ret;
>> +}
>> +
>>   /* Called from break_lease() with i_lock held. */
>>   static bool
>>   nfsd_break_deleg_cb(struct file_lock *fl)
>> @@ -4660,6 +4706,8 @@ nfsd_break_deleg_cb(struct file_lock *fl)
>>   	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
>>   	struct nfs4_file *fp = dp->dl_stid.sc_file;
>>   
>> +	if (nfsd_check_courtesy_client(dp))
>> +		return false;
>>   	trace_nfsd_cb_recall(&dp->dl_stid);
>>   
>>   	/*
>> @@ -5279,6 +5327,22 @@ static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
>>   	 */
>>   }
>>   
>> +static bool
>> +nfs4_destroy_courtesy_client(struct nfs4_client *clp)
>> +{
>> +	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
>> +
>> +	spin_lock(&nn->client_lock);
>> +	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
>> +			mark_client_expired_locked(clp)) {
>> +		spin_unlock(&nn->client_lock);
>> +		return false;
>> +	}
>> +	spin_unlock(&nn->client_lock);
>> +	expire_client(clp);
>> +	return true;
>> +}
>> +
>>   __be32
>>   nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
>>   {
>> @@ -5328,7 +5392,13 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
>>   			goto out;
>>   		}
>>   	} else {
>> +		rqstp->rq_conflict_client = NULL;
>>   		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
>> +		if (status == nfserr_jukebox && rqstp->rq_conflict_client) {
>> +			if (nfs4_destroy_courtesy_client(rqstp->rq_conflict_client))
>> +				status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
>> +		}
>> +
>>   		if (status) {
>>   			stp->st_stid.sc_type = NFS4_CLOSED_STID;
>>   			release_open_stateid(stp);
>> @@ -5562,6 +5632,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
>>   }
>>   #endif
>>   
>> +static
>> +bool nfs4_anylock_conflict(struct nfs4_client *clp)
>> +{
>> +	int i;
>> +	struct nfs4_stateowner *so, *tmp;
>> +	struct nfs4_lockowner *lo;
>> +	struct nfs4_ol_stateid *stp;
>> +	struct nfs4_file *nf;
>> +	struct inode *ino;
>> +	struct file_lock_context *ctx;
>> +	struct file_lock *fl;
>> +
>> +	for (i = 0; i < OWNER_HASH_SIZE; i++) {
>> +		/* scan each lock owner */
>> +		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
>> +				so_strhash) {
>> +			if (so->so_is_open_owner)
>> +				continue;
>> +
>> +			/* scan lock states of this lock owner */
>> +			lo = lockowner(so);
>> +			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
>> +					st_perstateowner) {
>> +				nf = stp->st_stid.sc_file;
>> +				ino = nf->fi_inode;
>> +				ctx = ino->i_flctx;
>> +				if (!ctx)
>> +					continue;
>> +				/* check each lock belongs to this lock state */
>> +				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
>> +					if (fl->fl_owner != lo)
>> +						continue;
>> +					if (!list_empty(&fl->fl_blocked_requests))
>> +						return true;
>> +				}
>> +			}
>> +		}
>> +	}
>> +	return false;
>> +}
>> +
>>   static time64_t
>>   nfs4_laundromat(struct nfsd_net *nn)
>>   {
>> @@ -5577,7 +5688,9 @@ nfs4_laundromat(struct nfsd_net *nn)
>>   	};
>>   	struct nfs4_cpntf_state *cps;
>>   	copy_stateid_t *cps_t;
>> +	struct nfs4_stid *stid;
>>   	int i;
>> +	int id = 0;
>>   
>>   	if (clients_still_reclaiming(nn)) {
>>   		lt.new_timeo = 0;
>> @@ -5598,8 +5711,33 @@ nfs4_laundromat(struct nfsd_net *nn)
>>   	spin_lock(&nn->client_lock);
>>   	list_for_each_safe(pos, next, &nn->client_lru) {
>>   		clp = list_entry(pos, struct nfs4_client, cl_lru);
>> +		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> +			goto exp_client;
>> +		}
>> +		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
>> +			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
>> +				goto exp_client;
>> +			/*
>> +			 * after umount, v4.0 client is still
>> +			 * around waiting to be expired
>> +			 */
>> +			if (clp->cl_minorversion)
>> +				continue;
>> +		}
>>   		if (!state_expired(&lt, clp->cl_time))
>>   			break;
>> +		spin_lock(&clp->cl_lock);
>> +		stid = idr_get_next(&clp->cl_stateids, &id);
>> +		spin_unlock(&clp->cl_lock);
>> +		if (stid && !nfs4_anylock_conflict(clp)) {
>> +			/* client still has states */
>> +			clp->courtesy_client_expiry =
>> +				ktime_get_boottime_seconds() + courtesy_client_expiry;
>> +			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
>> +			continue;
>> +		}
>> +exp_client:
>>   		if (mark_client_expired_locked(clp))
>>   			continue;
>>   		list_add(&clp->cl_lru, &reaplist);
>> @@ -5679,9 +5817,6 @@ nfs4_laundromat(struct nfsd_net *nn)
>>   	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
>>   }
>>   
>> -static struct workqueue_struct *laundry_wq;
>> -static void laundromat_main(struct work_struct *);
>> -
>>   static void
>>   laundromat_main(struct work_struct *laundry)
>>   {
>> @@ -6486,6 +6621,19 @@ nfs4_transform_lock_offset(struct file_lock *lock)
>>   		lock->fl_end = OFFSET_MAX;
>>   }
>>   
>> +/* return true if lock was expired else return false */
>> +static bool
>> +nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
>> +{
>> +	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
>> +	struct nfs4_client *clp = lo->lo_owner.so_client;
>> +
>> +	if (testonly)
>> +		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
>> +			true : false;
>> +	return nfs4_destroy_courtesy_client(clp);
>> +}
>> +
>>   static fl_owner_t
>>   nfsd4_fl_get_owner(fl_owner_t owner)
>>   {
>> @@ -6533,6 +6681,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops  = {
>>   	.lm_notify = nfsd4_lm_notify,
>>   	.lm_get_owner = nfsd4_fl_get_owner,
>>   	.lm_put_owner = nfsd4_fl_put_owner,
>> +	.lm_expire_lock = nfsd4_fl_expire_lock,
>>   };
>>   
>>   static inline void
>> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
>> index e73bdbb1634a..93e30b101578 100644
>> --- a/fs/nfsd/state.h
>> +++ b/fs/nfsd/state.h
>> @@ -345,6 +345,8 @@ struct nfs4_client {
>>   #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
>>   #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
>>   					 1 << NFSD4_CLIENT_CB_KILL)
>> +#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
>> +#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
>>   	unsigned long		cl_flags;
>>   	const struct cred	*cl_cb_cred;
>>   	struct rpc_clnt		*cl_cb_client;
>> @@ -385,6 +387,7 @@ struct nfs4_client {
>>   	struct list_head	async_copies;	/* list of async copies */
>>   	spinlock_t		async_lock;	/* lock for async copies */
>>   	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
>> +	int			courtesy_client_expiry;
>>   };
>>   
>>   /* struct nfs4_client_reset
>> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
>> index 064c96157d1f..349bf7bf20d2 100644
>> --- a/include/linux/sunrpc/svc.h
>> +++ b/include/linux/sunrpc/svc.h
>> @@ -306,6 +306,7 @@ struct svc_rqst {
>>   						 * net namespace
>>   						 */
>>   	void **			rq_lease_breaker; /* The v4 client breaking a lease */
>> +	void			*rq_conflict_client;
>>   };
>>   
>>   #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
>> -- 
>> 2.9.5
J. Bruce Fields Sept. 23, 2021, 7:32 p.m. UTC | #7
On Thu, Sep 23, 2021 at 10:09:35AM -0700, dai.ngo@oracle.com wrote:
> On 9/22/21 6:34 PM, J. Bruce Fields wrote:
> >On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
> >>+/*
> >>+ * If the conflict happens due to a NFSv4 request then check for
> >>+ * courtesy client and set rq_conflict_client so that upper layer
> >>+ * can destroy the conflict client and retry the call.
> >>+ */
> >I think we need a different approach.
> 
> I think nfsd_check_courtesy_client is used to handle conflicts with
> delegations. So instead of using rq_conflict_client to let the caller
> know about the conflict and destroy the courtesy client, as the current
> patch does, we can ask the laundromat thread to do the destroy.

I can't see right now why that wouldn't work.

> In that case,
> nfs4_get_vfs_file in nfsd4_process_open2 will either return no error,
> because the laundromat destroyed the courtesy client, or it gets
> nfserr_jukebox, which causes the NFS client to retry. By the time
> the retry comes the courtesy client should already be destroyed.

Make sure this works for local (non-NFS) lease breakers as well.  I
think that mainly means making sure the !O_NONBLOCK case of
__break_lease works.

--b.
Dai Ngo Sept. 24, 2021, 8:53 p.m. UTC | #8
On 9/23/21 12:32 PM, J. Bruce Fields wrote:
> On Thu, Sep 23, 2021 at 10:09:35AM -0700, dai.ngo@oracle.com wrote:
>> On 9/22/21 6:34 PM, J. Bruce Fields wrote:
>>> On Thu, Sep 16, 2021 at 02:22:11PM -0400, Dai Ngo wrote:
>>>> +/*
>>>> + * If the conflict happens due to an NFSv4 request then check for
>>>> + * courtesy client and set rq_conflict_client so that upper layer
>>>> + * can destroy the conflict client and retry the call.
>>>> + */
>>> I think we need a different approach.
>> I think nfsd_check_courtesy_client is used to handle conflicts with
>> delegations. So instead of using rq_conflict_client to let the caller
>> know about and destroy the courtesy client as the current patch does,
>> we can ask the laundromat thread to do the destroy.
> I can't see right now why that wouldn't work.
>
>> In that case,
>> nfs4_get_vfs_file in nfsd4_process_open2 will either return no error,
>> since the laundromat destroyed the courtesy client, or it gets
>> nfserr_jukebox, which causes the NFS client to retry. By the time
>> the retry comes the courtesy client should already be destroyed.
> Make sure this works for local (non-NFS) lease breakers as well.  I
> think that mainly means making sure the !O_NONBLOCK case of
> __break_lease works.

Yes, local lease breakers use !O_NONBLOCK. In this case __break_lease
will call lm_break and then wait for all lease conflicts to be resolved
before returning to the caller.

-Dai

>
> --b.
diff mbox series

Patch

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 42356416f0a0..54e5317f00f1 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -125,6 +125,11 @@  static void free_session(struct nfsd4_session *);
 static const struct nfsd4_callback_ops nfsd4_cb_recall_ops;
 static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops;
 
+static struct workqueue_struct *laundry_wq;
+static void laundromat_main(struct work_struct *);
+
+static int courtesy_client_expiry = (24 * 60 * 60);	/* in secs */
+
 static bool is_session_dead(struct nfsd4_session *ses)
 {
 	return ses->se_flags & NFS4_SESSION_DEAD;
@@ -172,6 +177,7 @@  renew_client_locked(struct nfs4_client *clp)
 
 	list_move_tail(&clp->cl_lru, &nn->client_lru);
 	clp->cl_time = ktime_get_boottime_seconds();
+	clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
 }
 
 static void put_client_renew_locked(struct nfs4_client *clp)
@@ -2389,6 +2395,10 @@  static int client_info_show(struct seq_file *m, void *v)
 		seq_puts(m, "status: confirmed\n");
 	else
 		seq_puts(m, "status: unconfirmed\n");
+	seq_printf(m, "courtesy client: %s\n",
+		test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? "yes" : "no");
+	seq_printf(m, "last renew: %lld secs\n",
+		ktime_get_boottime_seconds() - clp->cl_time);
 	seq_printf(m, "name: ");
 	seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len);
 	seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion);
@@ -4652,6 +4662,42 @@  static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
 	nfsd4_run_cb(&dp->dl_recall);
 }
 
+/*
+ * If the conflict happens due to an NFSv4 request then check for
+ * courtesy client and set rq_conflict_client so that upper layer
+ * can destroy the conflict client and retry the call.
+ */
+static bool
+nfsd_check_courtesy_client(struct nfs4_delegation *dp)
+{
+	struct svc_rqst *rqst;
+	struct nfs4_client *clp = dp->dl_recall.cb_clp;
+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+	bool ret = false;
+
+	if (!i_am_nfsd()) {
+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
+			set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
+			mod_delayed_work(laundry_wq, &nn->laundromat_work, 0);
+			return true;
+		}
+		return false;
+	}
+	rqst = kthread_data(current);
+	if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
+		return false;
+	rqst->rq_conflict_client = NULL;
+
+	spin_lock(&nn->client_lock);
+	if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) &&
+				!mark_client_expired_locked(clp)) {
+		rqst->rq_conflict_client = clp;
+		ret = true;
+	}
+	spin_unlock(&nn->client_lock);
+	return ret;
+}
+
 /* Called from break_lease() with i_lock held. */
 static bool
 nfsd_break_deleg_cb(struct file_lock *fl)
@@ -4660,6 +4706,8 @@  nfsd_break_deleg_cb(struct file_lock *fl)
 	struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
 	struct nfs4_file *fp = dp->dl_stid.sc_file;
 
+	if (nfsd_check_courtesy_client(dp))
+		return false;
 	trace_nfsd_cb_recall(&dp->dl_stid);
 
 	/*
@@ -5279,6 +5327,22 @@  static void nfsd4_deleg_xgrade_none_ext(struct nfsd4_open *open,
 	 */
 }
 
+static bool
+nfs4_destroy_courtesy_client(struct nfs4_client *clp)
+{
+	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+
+	spin_lock(&nn->client_lock);
+	if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ||
+			mark_client_expired_locked(clp)) {
+		spin_unlock(&nn->client_lock);
+		return false;
+	}
+	spin_unlock(&nn->client_lock);
+	expire_client(clp);
+	return true;
+}
+
 __be32
 nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nfsd4_open *open)
 {
@@ -5328,7 +5392,13 @@  nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
 			goto out;
 		}
 	} else {
+		rqstp->rq_conflict_client = NULL;
 		status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+		if (status == nfserr_jukebox && rqstp->rq_conflict_client) {
+			if (nfs4_destroy_courtesy_client(rqstp->rq_conflict_client))
+				status = nfs4_get_vfs_file(rqstp, fp, current_fh, stp, open);
+		}
+
 		if (status) {
 			stp->st_stid.sc_type = NFS4_CLOSED_STID;
 			release_open_stateid(stp);
@@ -5562,6 +5632,47 @@  static void nfsd4_ssc_expire_umount(struct nfsd_net *nn)
 }
 #endif
 
+static
+bool nfs4_anylock_conflict(struct nfs4_client *clp)
+{
+	int i;
+	struct nfs4_stateowner *so, *tmp;
+	struct nfs4_lockowner *lo;
+	struct nfs4_ol_stateid *stp;
+	struct nfs4_file *nf;
+	struct inode *ino;
+	struct file_lock_context *ctx;
+	struct file_lock *fl;
+
+	for (i = 0; i < OWNER_HASH_SIZE; i++) {
+		/* scan each lock owner */
+		list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
+				so_strhash) {
+			if (so->so_is_open_owner)
+				continue;
+
+			/* scan lock states of this lock owner */
+			lo = lockowner(so);
+			list_for_each_entry(stp, &lo->lo_owner.so_stateids,
+					st_perstateowner) {
+				nf = stp->st_stid.sc_file;
+				ino = nf->fi_inode;
+				ctx = ino->i_flctx;
+				if (!ctx)
+					continue;
+				/* check each lock belongs to this lock state */
+				list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+					if (fl->fl_owner != lo)
+						continue;
+					if (!list_empty(&fl->fl_blocked_requests))
+						return true;
+				}
+			}
+		}
+	}
+	return false;
+}
+
 static time64_t
 nfs4_laundromat(struct nfsd_net *nn)
 {
@@ -5577,7 +5688,9 @@  nfs4_laundromat(struct nfsd_net *nn)
 	};
 	struct nfs4_cpntf_state *cps;
 	copy_stateid_t *cps_t;
+	struct nfs4_stid *stid;
 	int i;
+	int id = 0;
 
 	if (clients_still_reclaiming(nn)) {
 		lt.new_timeo = 0;
@@ -5598,8 +5711,33 @@  nfs4_laundromat(struct nfsd_net *nn)
 	spin_lock(&nn->client_lock);
 	list_for_each_safe(pos, next, &nn->client_lru) {
 		clp = list_entry(pos, struct nfs4_client, cl_lru);
+		if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) {
+			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
+			goto exp_client;
+		}
+		if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
+			if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
+				goto exp_client;
+			/*
+			 * after umount, v4.0 client is still
+			 * around waiting to be expired
+			 */
+			if (clp->cl_minorversion)
+				continue;
+		}
 		if (!state_expired(&lt, clp->cl_time))
 			break;
+		spin_lock(&clp->cl_lock);
+		stid = idr_get_next(&clp->cl_stateids, &id);
+		spin_unlock(&clp->cl_lock);
+		if (stid && !nfs4_anylock_conflict(clp)) {
+			/* client still has states */
+			clp->courtesy_client_expiry =
+				ktime_get_boottime_seconds() + courtesy_client_expiry;
+			set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
+			continue;
+		}
+exp_client:
 		if (mark_client_expired_locked(clp))
 			continue;
 		list_add(&clp->cl_lru, &reaplist);
@@ -5679,9 +5817,6 @@  nfs4_laundromat(struct nfsd_net *nn)
 	return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT);
 }
 
-static struct workqueue_struct *laundry_wq;
-static void laundromat_main(struct work_struct *);
-
 static void
 laundromat_main(struct work_struct *laundry)
 {
@@ -6486,6 +6621,19 @@  nfs4_transform_lock_offset(struct file_lock *lock)
 		lock->fl_end = OFFSET_MAX;
 }
 
+/* return true if lock was expired else return false */
+static bool
+nfsd4_fl_expire_lock(struct file_lock *fl, bool testonly)
+{
+	struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
+	struct nfs4_client *clp = lo->lo_owner.so_client;
+
+	if (testonly)
+		return test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ?
+			true : false;
+	return nfs4_destroy_courtesy_client(clp);
+}
+
 static fl_owner_t
 nfsd4_fl_get_owner(fl_owner_t owner)
 {
@@ -6533,6 +6681,7 @@  static const struct lock_manager_operations nfsd_posix_mng_ops  = {
 	.lm_notify = nfsd4_lm_notify,
 	.lm_get_owner = nfsd4_fl_get_owner,
 	.lm_put_owner = nfsd4_fl_put_owner,
+	.lm_expire_lock = nfsd4_fl_expire_lock,
 };
 
 static inline void
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index e73bdbb1634a..93e30b101578 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -345,6 +345,8 @@  struct nfs4_client {
 #define NFSD4_CLIENT_UPCALL_LOCK	(5)	/* upcall serialization */
 #define NFSD4_CLIENT_CB_FLAG_MASK	(1 << NFSD4_CLIENT_CB_UPDATE | \
 					 1 << NFSD4_CLIENT_CB_KILL)
+#define NFSD4_COURTESY_CLIENT		(6)	/* be nice to expired client */
+#define NFSD4_DESTROY_COURTESY_CLIENT	(7)
 	unsigned long		cl_flags;
 	const struct cred	*cl_cb_cred;
 	struct rpc_clnt		*cl_cb_client;
@@ -385,6 +387,7 @@  struct nfs4_client {
 	struct list_head	async_copies;	/* list of async copies */
 	spinlock_t		async_lock;	/* lock for async copies */
 	atomic_t		cl_cb_inflight;	/* Outstanding callbacks */
+	int			courtesy_client_expiry;
 };
 
 /* struct nfs4_client_reset
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 064c96157d1f..349bf7bf20d2 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -306,6 +306,7 @@  struct svc_rqst {
 						 * net namespace
 						 */
 	void **			rq_lease_breaker; /* The v4 client breaking a lease */
+	void			*rq_conflict_client;
 };
 
 #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)