From patchwork Mon Jul 21 19:11:42 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Trond Myklebust
X-Patchwork-Id: 4597861
From: Trond Myklebust
To: linux-nfs@vger.kernel.org
Subject: [PATCH 2/2] NFS: Enforce an upper limit on the number of cached access calls
Date: Mon, 21 Jul 2014 15:11:42 -0400
Message-Id: <1405969902-11477-2-git-send-email-trond.myklebust@primarydata.com>
In-Reply-To: <1405969902-11477-1-git-send-email-trond.myklebust@primarydata.com>
References: <1405969902-11477-1-git-send-email-trond.myklebust@primarydata.com>
X-Mailing-List: linux-nfs@vger.kernel.org

This may be used to limit the number of cached credentials building up
inside the access cache.
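As a usage sketch (not part of this patch): because the limit is exposed via module_param() with mode 0644, it should be tunable at runtime through the standard sysfs path for the nfs module. The path below is an assumption derived from the usual module_param convention, and the value 4096 is a hypothetical example.

```shell
# Hypothetical example: cap the NFS access cache at 4096 entries.
# Path assumed from the standard module_param sysfs convention;
# the 0644 mode makes it world-readable, root-writable.
echo 4096 > /sys/module/nfs/parameters/nfs_access_max_cachesize
cat /sys/module/nfs/parameters/nfs_access_max_cachesize
```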
Signed-off-by: Trond Myklebust
---
 fs/nfs/dir.c | 40 ++++++++++++++++++++++++++++++++++------
 1 file changed, 34 insertions(+), 6 deletions(-)

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 4a3d4ef76127..285392e2c946 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -2028,6 +2028,10 @@ static DEFINE_SPINLOCK(nfs_access_lru_lock);
 static LIST_HEAD(nfs_access_lru_list);
 static atomic_long_t nfs_access_nr_entries;
 
+static unsigned long nfs_access_max_cachesize = ULONG_MAX;
+module_param(nfs_access_max_cachesize, ulong, 0644);
+MODULE_PARM_DESC(nfs_access_max_cachesize, "NFS access maximum total cache length");
+
 static void nfs_access_free_entry(struct nfs_access_entry *entry)
 {
 	put_rpccred(entry->cred);
@@ -2049,18 +2053,13 @@ static void nfs_access_free_list(struct list_head *head)
 }
 
 unsigned long
-nfs_access_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
+nfs_do_access_cache_scan(unsigned int nr_to_scan)
 {
 	LIST_HEAD(head);
 	struct nfs_inode *nfsi, *next;
 	struct nfs_access_entry *cache;
-	int nr_to_scan = sc->nr_to_scan;
-	gfp_t gfp_mask = sc->gfp_mask;
 	long freed = 0;
 
-	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
-		return SHRINK_STOP;
-
 	spin_lock(&nfs_access_lru_lock);
 	list_for_each_entry_safe(nfsi, next, &nfs_access_lru_list, access_cache_inode_lru) {
 		struct inode *inode;
@@ -2094,11 +2093,39 @@ remove_lru_entry:
 }
 
 unsigned long
+nfs_access_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+	int nr_to_scan = sc->nr_to_scan;
+	gfp_t gfp_mask = sc->gfp_mask;
+
+	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
+		return SHRINK_STOP;
+	return nfs_do_access_cache_scan(nr_to_scan);
+}
+
+
+unsigned long
 nfs_access_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 {
 	return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries));
 }
 
+static void
+nfs_access_cache_enforce_limit(void)
+{
+	long nr_entries = atomic_long_read(&nfs_access_nr_entries);
+	unsigned long diff;
+	unsigned int nr_to_scan;
+
+	if (nr_entries < 0 || nr_entries <= nfs_access_max_cachesize)
+		return;
+	nr_to_scan = 100;
+	diff = nr_entries - nfs_access_max_cachesize;
+	if (diff < nr_to_scan)
+		nr_to_scan = diff;
+	nfs_do_access_cache_scan(nr_to_scan);
+}
+
 static void __nfs_access_zap_cache(struct nfs_inode *nfsi, struct list_head *head)
 {
 	struct rb_root *root_node = &nfsi->access_cache;
@@ -2244,6 +2271,7 @@ void nfs_access_add_cache(struct inode *inode, struct nfs_access_entry *set)
 				&nfs_access_lru_list);
 		spin_unlock(&nfs_access_lru_lock);
 	}
+	nfs_access_cache_enforce_limit();
}
 EXPORT_SYMBOL_GPL(nfs_access_add_cache);