From patchwork Thu Dec 5 11:00:51 2013
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 3287291
From: Jeff Layton
To: linux-nfs@vger.kernel.org
Cc: Christoph Hellwig, "J. Bruce Fields"
Subject: [PATCH RFC 1/3] nfsd: don't try to reuse an expired DRC entry off the list
Date: Thu, 5 Dec 2013 06:00:51 -0500
Message-Id: <1386241253-5781-2-git-send-email-jlayton@redhat.com>
In-Reply-To: <1386241253-5781-1-git-send-email-jlayton@redhat.com>
References: <1386241253-5781-1-git-send-email-jlayton@redhat.com>

Currently when we are processing a request, we try to scrape an expired
or over-limit entry off the list in preference to allocating a new one
from the slab. This is unnecessarily complicated. Just use the slab
layer.
Signed-off-by: Jeff Layton
---
 fs/nfsd/nfscache.c | 36 ++++--------------------------------
 1 file changed, 4 insertions(+), 32 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index b6af150..f8f060f 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -132,13 +132,6 @@ nfsd_reply_cache_alloc(void)
 }
 
 static void
-nfsd_reply_cache_unhash(struct svc_cacherep *rp)
-{
-	hlist_del_init(&rp->c_hash);
-	list_del_init(&rp->c_lru);
-}
-
-static void
 nfsd_reply_cache_free_locked(struct svc_cacherep *rp)
 {
 	if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base) {
@@ -416,22 +409,8 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
 
 	/*
 	 * Since the common case is a cache miss followed by an insert,
-	 * preallocate an entry. First, try to reuse the first entry on the LRU
-	 * if it works, then go ahead and prune the LRU list.
+	 * preallocate an entry.
 	 */
-	spin_lock(&cache_lock);
-	if (!list_empty(&lru_head)) {
-		rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
-		if (nfsd_cache_entry_expired(rp) ||
-		    num_drc_entries >= max_drc_entries) {
-			nfsd_reply_cache_unhash(rp);
-			prune_cache_entries();
-			goto search_cache;
-		}
-	}
-
-	/* No expired ones available, allocate a new one. */
-	spin_unlock(&cache_lock);
 	rp = nfsd_reply_cache_alloc();
 	spin_lock(&cache_lock);
 	if (likely(rp)) {
@@ -439,7 +418,9 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
 		drc_mem_usage += sizeof(*rp);
 	}
 
-search_cache:
+	/* go ahead and prune the cache */
+	prune_cache_entries();
+
 	found = nfsd_cache_search(rqstp, csum);
 	if (found) {
 		if (likely(rp))
@@ -453,15 +434,6 @@ search_cache:
 		goto out;
 	}
 
-	/*
-	 * We're keeping the one we just allocated. Are we now over the
-	 * limit? Prune one off the tip of the LRU in trade for the one we
-	 * just allocated if so.
-	 */
-	if (num_drc_entries >= max_drc_entries)
-		nfsd_reply_cache_free_locked(list_first_entry(&lru_head,
-				struct svc_cacherep, c_lru));
-
 	nfsdstats.rcmisses++;
 	rqstp->rq_cacherep = rp;
 	rp->c_state = RC_INPROG;