From patchwork Sat Aug 10 19:59:57 2024
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13759616
From: cel@kernel.org
To:
Cc: , pvorel@suse.cz, sherry.yang@oracle.com, calum.mackay@oracle.com,
	kernel-team@fb.com, ltp@lists.linux.it, Chuck Lever, Jeff Layton
Subject: [PATCH 6.1.y 06/18] NFSD: Refactor the duplicate reply cache shrinker
Date: Sat, 10 Aug 2024 15:59:57 -0400
Message-ID: <20240810200009.9882-7-cel@kernel.org>
X-Mailer: git-send-email 2.45.1
In-Reply-To: <20240810200009.9882-1-cel@kernel.org>
References: <20240810200009.9882-1-cel@kernel.org>
X-Mailing-List: linux-nfs@vger.kernel.org

From: Chuck Lever

[ Upstream commit c135e1269f34dfdea4bd94c11060c83a3c0b3c12 ]

Avoid holding the bucket lock while freeing cache entries. This change
also caps the number of entries that are freed when the shrinker calls
to reduce the shrinker's impact on the cache's effectiveness.
Reviewed-by: Jeff Layton
[ cel: adjusted to apply to v6.1.y -- this one might not be necessary ]
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfscache.c | 85 ++++++++++++++++++++++------------------------
 1 file changed, 40 insertions(+), 45 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index b553f2cece58..049565bbef2d 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -310,51 +310,16 @@ nfsd_prune_bucket_locked(struct nfsd_net *nn, struct nfsd_drc_bucket *b,
 	}
 }
 
-static long prune_bucket(struct nfsd_drc_bucket *b, struct nfsd_net *nn,
-			 unsigned int max)
-{
-	struct svc_cacherep *rp, *tmp;
-	long freed = 0;
-
-	list_for_each_entry_safe(rp, tmp, &b->lru_head, c_lru) {
-		/*
-		 * Don't free entries attached to calls that are still
-		 * in-progress, but do keep scanning the list.
-		 */
-		if (rp->c_state == RC_INPROG)
-			continue;
-		if (atomic_read(&nn->num_drc_entries) <= nn->max_drc_entries &&
-		    time_before(jiffies, rp->c_timestamp + RC_EXPIRE))
-			break;
-		nfsd_reply_cache_free_locked(b, rp, nn);
-		if (max && freed++ > max)
-			break;
-	}
-	return freed;
-}
-
-/*
- * Walk the LRU list and prune off entries that are older than RC_EXPIRE.
- * Also prune the oldest ones when the total exceeds the max number of entries.
+/**
+ * nfsd_reply_cache_count - count_objects method for the DRC shrinker
+ * @shrink: our registered shrinker context
+ * @sc: garbage collection parameters
+ *
+ * Returns the total number of entries in the duplicate reply cache. To
+ * keep things simple and quick, this is not the number of expired entries
+ * in the cache (ie, the number that would be removed by a call to
+ * nfsd_reply_cache_scan).
  */
-static long
-prune_cache_entries(struct nfsd_net *nn)
-{
-	unsigned int i;
-	long freed = 0;
-
-	for (i = 0; i < nn->drc_hashsize; i++) {
-		struct nfsd_drc_bucket *b = &nn->drc_hashtbl[i];
-
-		if (list_empty(&b->lru_head))
-			continue;
-		spin_lock(&b->cache_lock);
-		freed += prune_bucket(b, nn, 0);
-		spin_unlock(&b->cache_lock);
-	}
-	return freed;
-}
-
 static unsigned long
 nfsd_reply_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 {
@@ -364,13 +329,43 @@ nfsd_reply_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 	return atomic_read(&nn->num_drc_entries);
 }
 
+/**
+ * nfsd_reply_cache_scan - scan_objects method for the DRC shrinker
+ * @shrink: our registered shrinker context
+ * @sc: garbage collection parameters
+ *
+ * Free expired entries on each bucket's LRU list until we've released
+ * nr_to_scan freed objects. Nothing will be released if the cache
+ * has not exceeded it's max_drc_entries limit.
+ *
+ * Returns the number of entries released by this call.
+ */
 static unsigned long
 nfsd_reply_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
 	struct nfsd_net *nn = container_of(shrink,
 				struct nfsd_net, nfsd_reply_cache_shrinker);
+	unsigned long freed = 0;
+	LIST_HEAD(dispose);
+	unsigned int i;
 
-	return prune_cache_entries(nn);
+	for (i = 0; i < nn->drc_hashsize; i++) {
+		struct nfsd_drc_bucket *b = &nn->drc_hashtbl[i];
+
+		if (list_empty(&b->lru_head))
+			continue;
+
+		spin_lock(&b->cache_lock);
+		nfsd_prune_bucket_locked(nn, b, 0, &dispose);
+		spin_unlock(&b->cache_lock);
+
+		freed += nfsd_cacherep_dispose(&dispose);
+		if (freed > sc->nr_to_scan)
+			break;
+	}
+
+	trace_nfsd_drc_gc(nn, freed);
+	return freed;
 }
 
 /**
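
For readers who want the locking pattern in isolation: the sketch below is a
plain userspace C illustration (not the kernel code itself) of the approach the
patch takes, where entries are unhooked onto a private dispose list while the
bucket lock is held and are only freed after the lock is dropped. The names
(cache_entry, bucket, prune_bucket_locked, dispose_entries) are illustrative
and are not the identifiers used in fs/nfsd/nfscache.c.

/*
 * Userspace sketch of "collect under the lock, free outside the lock".
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct cache_entry {
	struct cache_entry *next;
	int id;
};

struct bucket {
	pthread_mutex_t lock;
	struct cache_entry *head;	/* LRU list, oldest first */
};

/*
 * Move up to @max entries (0 means "no cap") off the bucket's list onto a
 * private dispose list. Only cheap list manipulation happens under the lock.
 */
static unsigned long prune_bucket_locked(struct bucket *b, unsigned long max,
					 struct cache_entry **dispose)
{
	unsigned long moved = 0;

	while (b->head && (max == 0 || moved < max)) {
		struct cache_entry *e = b->head;

		b->head = e->next;
		e->next = *dispose;
		*dispose = e;
		moved++;
	}
	return moved;
}

/* Free the dispose list without holding any bucket lock. */
static unsigned long dispose_entries(struct cache_entry *dispose)
{
	unsigned long freed = 0;

	while (dispose) {
		struct cache_entry *next = dispose->next;

		free(dispose);
		dispose = next;
		freed++;
	}
	return freed;
}

int main(void)
{
	struct bucket b = { .lock = PTHREAD_MUTEX_INITIALIZER, .head = NULL };
	struct cache_entry *dispose = NULL;
	unsigned long freed;
	int i;

	/* Populate the bucket with a few entries. */
	for (i = 0; i < 5; i++) {
		struct cache_entry *e = malloc(sizeof(*e));

		e->id = i;
		e->next = b.head;
		b.head = e;
	}

	/* Hold the lock only long enough to unhook the entries... */
	pthread_mutex_lock(&b.lock);
	prune_bucket_locked(&b, 0, &dispose);
	pthread_mutex_unlock(&b.lock);

	/* ...then do the potentially slow freeing with the lock dropped. */
	freed = dispose_entries(dispose);
	printf("freed %lu entries\n", freed);
	return 0;
}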
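
The count_objects/scan_objects split described by the new kernel-doc comments
follows the generic shrinker contract: count_objects returns a cheap estimate
of reclaimable objects, and scan_objects releases up to sc->nr_to_scan of them
and reports how many it freed. The fragment below is a hypothetical, minimal
module written against the v6.1-era shrinker API that this backport targets
(register_shrinker() taking a name string); it is not part of this patch and
the demo_* names are made up.

// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include <linux/shrinker.h>
#include <linux/atomic.h>

static atomic_long_t demo_nr_objects;

static unsigned long demo_count(struct shrinker *shrink,
				struct shrink_control *sc)
{
	/* Cheap estimate only; returning 0 means there is nothing to do. */
	return atomic_long_read(&demo_nr_objects);
}

static unsigned long demo_scan(struct shrinker *shrink,
			       struct shrink_control *sc)
{
	unsigned long freed = 0;

	/* Release up to sc->nr_to_scan objects, then report what was freed. */
	while (freed < sc->nr_to_scan &&
	       atomic_long_add_unless(&demo_nr_objects, -1, 0))
		freed++;

	return freed ? freed : SHRINK_STOP;
}

static struct shrinker demo_shrinker = {
	.count_objects	= demo_count,
	.scan_objects	= demo_scan,
	.seeks		= DEFAULT_SEEKS,
};

static int __init demo_init(void)
{
	atomic_long_set(&demo_nr_objects, 1024);
	return register_shrinker(&demo_shrinker, "demo-cache");
}

static void __exit demo_exit(void)
{
	unregister_shrinker(&demo_shrinker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");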