From patchwork Wed Jun 25 13:48:34 2014
From: Jeff Layton <jlayton@primarydata.com>
To: stable@vger.kernel.org
Cc: kasparek@fit.vutbr.cz, linux-nfs@vger.kernel.org, bfields@fieldses.org
Subject: [stable PATCH] nfsd: don't halt scanning the DRC LRU list when
 there's an RC_INPROG entry
Date: Wed, 25 Jun 2014 09:48:34 -0400
Message-Id: <1403704114-3536-1-git-send-email-jlayton@primarydata.com>
X-Patchwork-Id: 4420931

The patch below has already made it into mainline as commit
1b19453d1c6abcfa7c312ba6c9f11a277568fc94, and I think we need it in
stable as well.

In addition to making the cache pruner more efficient, it also fixes a
logic bug in prune_cache_entries: if num_drc_entries is greater than
max_drc_entries, we can end up freeing an entry even while it is still
RC_INPROG. This patch ensures that RC_INPROG entries are always
skipped, which prevents a use-after-free.

Please apply this to any stable kernel, v3.9 or later.
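To make the logic bug concrete, here is a minimal userspace sketch of
the old break condition (not kernel code; entry_expired() and the
timed_out flag are hypothetical stand-ins for nfsd_cache_entry_expired()
and the jiffies-based expiry check):

#include <stdbool.h>
#include <stdio.h>

enum { RC_UNUSED, RC_INPROG, RC_DONE };

/* stand-in for the old nfsd_cache_entry_expired() */
static bool entry_expired(int c_state, bool timed_out)
{
	return c_state != RC_INPROG && timed_out;
}

int main(void)
{
	unsigned long num_drc_entries = 2048, max_drc_entries = 1024;
	int c_state = RC_INPROG;	/* call still in progress */
	bool timed_out = false;		/* not yet past RC_EXPIRE */

	/* old loop body: stop scanning unless we should free this entry */
	if (!entry_expired(c_state, timed_out) &&
	    num_drc_entries <= max_drc_entries)
		printf("break: entry left alone\n");
	else	/* cache is over its limit, so neither clause saves us */
		printf("freed an entry that is still RC_INPROG\n");
	return 0;
}

Because the cache is over its limit, the second clause is false and the
whole condition fails, so the entry is freed regardless of its state.
That is the use-after-free the patch closes.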
Original patch description follows.

See: https://bugzilla.kernel.org/show_bug.cgi?id=77031

Thanks to Tomas Kasparek for the bug report.

--------------------------[snip]----------------------------

Currently, the DRC cache pruner will stop scanning the list when it
hits an entry that is RC_INPROG. It's possible however for a call to
take a *very* long time. In that case, we don't want it to block other
entries from being pruned if they are expired or we need to trim the
cache to get back under the limit.

Fix the DRC cache pruner to just ignore RC_INPROG entries.

Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Signed-off-by: J. Bruce Fields <bfields@fieldses.org>
---
 fs/nfsd/nfscache.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index f8f060ffbf4f..6040da8830ff 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -224,13 +224,6 @@ hash_refile(struct svc_cacherep *rp)
 	hlist_add_head(&rp->c_hash, cache_hash + hash_32(rp->c_xid, maskbits));
 }
 
-static inline bool
-nfsd_cache_entry_expired(struct svc_cacherep *rp)
-{
-	return rp->c_state != RC_INPROG &&
-	       time_after(jiffies, rp->c_timestamp + RC_EXPIRE);
-}
-
 /*
  * Walk the LRU list and prune off entries that are older than RC_EXPIRE.
  * Also prune the oldest ones when the total exceeds the max number of entries.
@@ -242,8 +235,14 @@ prune_cache_entries(void)
 	long freed = 0;
 
 	list_for_each_entry_safe(rp, tmp, &lru_head, c_lru) {
-		if (!nfsd_cache_entry_expired(rp) &&
-		    num_drc_entries <= max_drc_entries)
+		/*
+		 * Don't free entries attached to calls that are still
+		 * in-progress, but do keep scanning the list.
+		 */
+		if (rp->c_state == RC_INPROG)
+			continue;
+		if (num_drc_entries <= max_drc_entries &&
+		    time_before(jiffies, rp->c_timestamp + RC_EXPIRE))
 			break;
 		nfsd_reply_cache_free_locked(rp);
 		freed++;
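For reference, here is a toy simulation of the fixed scan (again not
kernel code; an array and an integer "age" are hypothetical stand-ins
for the LRU list and the jiffies arithmetic):

#include <stdio.h>

#define RC_EXPIRE 10	/* toy expiry threshold */

enum state { RC_DONE, RC_INPROG };

struct entry {
	enum state c_state;
	int age;	/* stand-in for jiffies - c_timestamp */
};

int main(void)
{
	struct entry lru[] = {
		{ RC_DONE,   42 },	/* expired: pruned */
		{ RC_INPROG, 41 },	/* long-running call: skipped */
		{ RC_DONE,   40 },	/* expired: old code never got here */
		{ RC_DONE,    1 },	/* fresh: the scan stops here */
	};
	size_t num = sizeof(lru) / sizeof(lru[0]), max = 100, i;

	for (i = 0; i < num; i++) {
		if (lru[i].c_state == RC_INPROG)
			continue;	/* never free, but keep scanning */
		if (num <= max && lru[i].age < RC_EXPIRE)
			break;		/* everything past here is newer */
		printf("pruned entry %zu (age %d)\n", i, lru[i].age);
	}
	return 0;
}

With the pre-patch logic the walk would have halted at the RC_INPROG
entry, pinning the expired entry behind it in the cache; with the fix,
that entry is pruned and only the genuinely fresh tail of the list is
left alone.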