From patchwork Fri Jun 20 18:56:52 2014
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 4391671
From: Jeff Layton
To: stable@vger.kernel.org
Cc: linux-nfs@vger.kernel.org, bfields@fieldses.org
Subject: [stable PATCH] nfsd: don't try to reuse an expired DRC entry off the list
Date: Fri, 20 Jun 2014 14:56:52 -0400
Message-Id: <1403290612-15341-1-git-send-email-jlayton@primarydata.com>
X-Mailer: git-send-email 1.9.3

From: Jeff Layton

This is commit a0ef5e19684f0447da9ff0654a12019c484f57ca in mainline.

While the commit message below doesn't lay this out, we've subsequently
found that there are some cases where an entry that's still in use can
be freed prematurely if a particular operation takes a *very* long time
(on the order of minutes) and/or the server is very busy and doesn't
have a lot of memory dedicated to the DRC. This patch eliminates that
possibility, so it's actually more than just a cleanup.

The regression crept in with v3.9, and this patch went into mainline in
v3.14. Please apply it to any stable kernel between those two mainline
releases.

Original patch description follows:

-------------------------------[snip]----------------------------

Currently when we are processing a request, we try to scrape an expired
or over-limit entry off the list in preference to allocating a new one
from the slab. This is unnecessarily complicated. Just use the slab
layer.

Signed-off-by: Jeff Layton
Signed-off-by: J. Bruce Fields
Acked-by: J. Bruce Fields
---
 fs/nfsd/nfscache.c | 36 ++++--------------------------------
 1 file changed, 4 insertions(+), 32 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index ec8d97ddc635..02e8e9ad5750 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -129,13 +129,6 @@ nfsd_reply_cache_alloc(void)
 }
 
 static void
-nfsd_reply_cache_unhash(struct svc_cacherep *rp)
-{
-	hlist_del_init(&rp->c_hash);
-	list_del_init(&rp->c_lru);
-}
-
-static void
 nfsd_reply_cache_free_locked(struct svc_cacherep *rp)
 {
 	if (rp->c_type == RC_REPLBUFF && rp->c_replvec.iov_base) {
@@ -402,22 +395,8 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
 
 	/*
 	 * Since the common case is a cache miss followed by an insert,
-	 * preallocate an entry. First, try to reuse the first entry on the LRU
-	 * if it works, then go ahead and prune the LRU list.
+	 * preallocate an entry.
 	 */
-	spin_lock(&cache_lock);
-	if (!list_empty(&lru_head)) {
-		rp = list_first_entry(&lru_head, struct svc_cacherep, c_lru);
-		if (nfsd_cache_entry_expired(rp) ||
-		    num_drc_entries >= max_drc_entries) {
-			nfsd_reply_cache_unhash(rp);
-			prune_cache_entries();
-			goto search_cache;
-		}
-	}
-
-	/* No expired ones available, allocate a new one. */
-	spin_unlock(&cache_lock);
 	rp = nfsd_reply_cache_alloc();
 	spin_lock(&cache_lock);
 	if (likely(rp)) {
@@ -425,7 +404,9 @@ nfsd_cache_lookup(struct svc_rqst *rqstp)
 		drc_mem_usage += sizeof(*rp);
 	}
 
-search_cache:
+	/* go ahead and prune the cache */
+	prune_cache_entries();
+
 	found = nfsd_cache_search(rqstp, csum);
 	if (found) {
 		if (likely(rp))
@@ -439,15 +420,6 @@ search_cache:
 		goto out;
 	}
 
-	/*
-	 * We're keeping the one we just allocated. Are we now over the
-	 * limit? Prune one off the tip of the LRU in trade for the one we
-	 * just allocated if so.
-	 */
-	if (num_drc_entries >= max_drc_entries)
-		nfsd_reply_cache_free_locked(list_first_entry(&lru_head,
-				struct svc_cacherep, c_lru));
-
 	nfsdstats.rcmisses++;
 	rqstp->rq_cacherep = rp;
 	rp->c_state = RC_INPROG;