From patchwork Fri Jul 25 11:34:26 2014
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 4622531
From: Jeff Layton
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org, hch@infradead.org, Neil Brown
Subject: [PATCH v6 8/9] nfsd: give block_delegation and delegation_blocked its own spinlock
Date: Fri, 25 Jul 2014 07:34:26 -0400
Message-Id: <1406288067-20663-9-git-send-email-jlayton@primarydata.com>
X-Mailer: git-send-email 1.9.3
In-Reply-To: <1406288067-20663-1-git-send-email-jlayton@primarydata.com>
References: <1406288067-20663-1-git-send-email-jlayton@primarydata.com>

The state lock can be fairly heavily contended, and there's no reason
that nfs4_file lookups and delegation_blocked should be mutually
exclusive. Let's give the new block_delegation code its own spinlock.
It does mean that we'll need to take a different lock in the delegation
break code, but that's not generally as critical to performance.

Cc: Neil Brown
Signed-off-by: Jeff Layton
Reviewed-by: Christoph Hellwig
---
 fs/nfsd/nfs4state.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 85d7ac664691..ecfddca9b841 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -517,10 +517,11 @@ static struct nfs4_ol_stateid * nfs4_alloc_stateid(struct nfs4_client *clp)
  * Each filter is 256 bits.  We hash the filehandle to 32bit and use the
  * low 3 bytes as hash-table indices.
  *
- * 'state_lock', which is always held when block_delegations() is called,
+ * 'blocked_delegations_lock', which is always taken in block_delegations(),
  * is used to manage concurrent access.  Testing does not need the lock
  * except when swapping the two filters.
  */
+static DEFINE_SPINLOCK(blocked_delegations_lock);
 static struct bloom_pair {
 	int	entries, old_entries;
 	time_t	swap_time;
@@ -536,7 +537,7 @@ static int delegation_blocked(struct knfsd_fh *fh)
 	if (bd->entries == 0)
 		return 0;
 	if (seconds_since_boot() - bd->swap_time > 30) {
-		spin_lock(&state_lock);
+		spin_lock(&blocked_delegations_lock);
 		if (seconds_since_boot() - bd->swap_time > 30) {
 			bd->entries -= bd->old_entries;
 			bd->old_entries = bd->entries;
@@ -545,7 +546,7 @@ static int delegation_blocked(struct knfsd_fh *fh)
 			bd->new = 1-bd->new;
 			bd->swap_time = seconds_since_boot();
 		}
-		spin_unlock(&state_lock);
+		spin_unlock(&blocked_delegations_lock);
 	}
 	hash = arch_fast_hash(&fh->fh_base, fh->fh_size, 0);
 	if (test_bit(hash&255, bd->set[0]) &&
@@ -566,16 +567,16 @@ static void block_delegations(struct knfsd_fh *fh)
 	u32 hash;
 	struct bloom_pair *bd = &blocked_delegations;
 
-	lockdep_assert_held(&state_lock);
-
 	hash = arch_fast_hash(&fh->fh_base, fh->fh_size, 0);
 
+	spin_lock(&blocked_delegations_lock);
 	__set_bit(hash&255, bd->set[bd->new]);
 	__set_bit((hash>>8)&255, bd->set[bd->new]);
 	__set_bit((hash>>16)&255, bd->set[bd->new]);
 	if (bd->entries == 0)
 		bd->swap_time = seconds_since_boot();
 	bd->entries += 1;
+	spin_unlock(&blocked_delegations_lock);
 }
 
 static struct nfs4_delegation *
@@ -3096,16 +3097,16 @@ void nfsd4_prepare_cb_recall(struct nfs4_delegation *dp)
 	struct nfs4_client *clp = dp->dl_stid.sc_client;
 	struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
 
-	/*
-	 * We can't do this in nfsd_break_deleg_cb because it is
-	 * already holding inode->i_lock
-	 */
-	spin_lock(&state_lock);
 	block_delegations(&dp->dl_fh);
+	/*
+	 * We can't do this in nfsd_break_deleg_cb because it is
+	 * already holding inode->i_lock.
+	 *
 	 * If the dl_time != 0, then we know that it has already been
 	 * queued for a lease break. Don't queue it again.
 	 */
+	spin_lock(&state_lock);
 	if (dp->dl_time == 0) {
 		dp->dl_time = get_seconds();
 		list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru);
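For readers outside the kernel tree, the two-generation Bloom filter and its new dedicated lock can be sketched in user space. This is an illustrative approximation, not the kernel code: a pthread mutex stands in for the blocked_delegations_lock spinlock, an FNV-1a hash stands in for arch_fast_hash(), time(NULL) stands in for seconds_since_boot(), and the filehandle is just an opaque byte buffer; the control flow mirrors the patched delegation_blocked()/block_delegations().

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Stand-in for the kernel's blocked_delegations_lock spinlock. */
static pthread_mutex_t blocked_delegations_lock = PTHREAD_MUTEX_INITIALIZER;

/* Two 256-bit filters; "new" entries go in set[new], old ones age out. */
static struct bloom_pair {
	int entries, old_entries;
	time_t swap_time;
	int new;	/* index of the filter currently receiving entries */
	unsigned long set[2][256 / (8 * sizeof(unsigned long))];
} blocked_delegations;

static time_t seconds_since_boot(void) { return time(NULL); }

/* Toy FNV-1a hash standing in for arch_fast_hash(). */
static uint32_t hash_fh(const void *data, size_t len)
{
	const unsigned char *p = data;
	uint32_t h = 2166136261u;
	while (len--)
		h = (h ^ *p++) * 16777619u;
	return h;
}

static void set_bit256(uint32_t bit, unsigned long *map)
{
	map[bit / (8 * sizeof(long))] |= 1UL << (bit % (8 * sizeof(long)));
}

static bool test_bit256(uint32_t bit, const unsigned long *map)
{
	return map[bit / (8 * sizeof(long))] & (1UL << (bit % (8 * sizeof(long))));
}

static bool delegation_blocked(const void *fh, size_t len)
{
	struct bloom_pair *bd = &blocked_delegations;
	uint32_t hash;

	if (bd->entries == 0)
		return false;
	if (seconds_since_boot() - bd->swap_time > 30) {
		pthread_mutex_lock(&blocked_delegations_lock);
		/* Re-check under the lock: another thread may have swapped. */
		if (seconds_since_boot() - bd->swap_time > 30) {
			bd->entries -= bd->old_entries;
			bd->old_entries = bd->entries;
			memset(bd->set[bd->new], 0, sizeof(bd->set[0]));
			bd->new = 1 - bd->new;
			bd->swap_time = seconds_since_boot();
		}
		pthread_mutex_unlock(&blocked_delegations_lock);
	}
	/* Lock-free test: three bytes of the hash index three bits. */
	hash = hash_fh(fh, len);
	if (test_bit256(hash & 255, bd->set[0]) &&
	    test_bit256((hash >> 8) & 255, bd->set[0]) &&
	    test_bit256((hash >> 16) & 255, bd->set[0]))
		return true;
	if (test_bit256(hash & 255, bd->set[1]) &&
	    test_bit256((hash >> 8) & 255, bd->set[1]) &&
	    test_bit256((hash >> 16) & 255, bd->set[1]))
		return true;
	return false;
}

static void block_delegations(const void *fh, size_t len)
{
	struct bloom_pair *bd = &blocked_delegations;
	uint32_t hash = hash_fh(fh, len);

	/* Updates take only the dedicated lock, never the state lock. */
	pthread_mutex_lock(&blocked_delegations_lock);
	set_bit256(hash & 255, bd->set[bd->new]);
	set_bit256((hash >> 8) & 255, bd->set[bd->new]);
	set_bit256((hash >> 16) & 255, bd->set[bd->new]);
	if (bd->entries == 0)
		bd->swap_time = seconds_since_boot();
	bd->entries += 1;
	pthread_mutex_unlock(&blocked_delegations_lock);
}
```

The sketch shows why the dedicated lock is cheap: lookups stay lock-free except when a filter swap is due, and the double-check of swap_time after taking the lock keeps two racing threads from swapping the filters twice.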