From patchwork Mon Jul 2 05:52:01 2018
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 10500475
From: Waiman Long
To: Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Linus Torvalds, Jan Kara, "Paul E. McKenney", Andrew Morton,
    Ingo Molnar, Miklos Szeredi, Matthew Wilcox, Larry Woodman,
    James Bottomley, "Wangkai (Kevin C)", Waiman Long
Subject: [PATCH v5 4/6] fs/dcache: Spread negative dentry pruning across multiple CPUs
Date: Mon, 2 Jul 2018 13:52:01 +0800
Message-Id: <1530510723-24814-5-git-send-email-longman@redhat.com>
In-Reply-To: <1530510723-24814-1-git-send-email-longman@redhat.com>
References: <1530510723-24814-1-git-send-email-longman@redhat.com>

Pruning negative dentries with schedule_delayed_work() will typically
concentrate the pruning effort on one particular CPU, which is not fair
to the tasks running on that CPU. In addition, one CPU can end up with
all of its negative dentries pruned away while other CPUs still hold
more negative dentries than the percpu limit. To be fair, negative
dentry pruning is now spread across all the online CPUs when they all
have close to the percpu limit of negative dentries.
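For illustration only (this snippet is not part of the patch): a minimal
user-space model of the CPU-selection policy described above. NR_CPUS,
NEG_DENTRY_PERCPU_LIMIT, the nr_dentry_neg[] array and next_pruning_cpu()
below are made-up stand-ins for the kernel's per-cpu counters; the actual
patch walks cpu_online_mask with cpumask_next()/cpumask_first() and
requeues the work item with schedule_delayed_work_on(), as the diff below
shows.

/*
 * Stand-alone user-space model (not kernel code) of the CPU-selection
 * policy: stay on the current CPU while its negative dentry count is
 * at or above the per-cpu limit, otherwise hand the next pruning pass
 * to the next "online" CPU if that CPU holds more negative dentries.
 * NR_CPUS, the limit and the sample counts are made up for this model.
 */
#include <stdio.h>

#define NR_CPUS				4
#define NEG_DENTRY_PERCPU_LIMIT		100

static long nr_dentry_neg[NR_CPUS] = { 40, 90, 120, 30 };	/* sample counts */

/* Pick the CPU that should run the next pruning pass. */
static int next_pruning_cpu(int cpu)
{
	long excess = NEG_DENTRY_PERCPU_LIMIT - nr_dentry_neg[cpu];

	if (excess > 0) {
		/* Wrap around the online CPUs, like cpumask_next()/cpumask_first(). */
		int next_cpu = (cpu + 1) % NR_CPUS;

		/* Only move if the next CPU has more negative dentries to prune. */
		if (nr_dentry_neg[next_cpu] > nr_dentry_neg[cpu])
			return next_cpu;
	}
	return cpu;
}

int main(void)
{
	int cpu = 0;

	for (int pass = 0; pass < 6; pass++) {
		printf("pass %d runs on cpu %d\n", pass, cpu);
		nr_dentry_neg[cpu] -= 20;		/* pretend one pass pruned 20 dentries */
		if (nr_dentry_neg[cpu] < 0)
			nr_dentry_neg[cpu] = 0;
		cpu = next_pruning_cpu(cpu);
	}
	return 0;
}

In this model, as in the patch, the work moves to the next online CPU
only when the current CPU has dropped below its limit and the next CPU
still has more negative dentries to prune, so the pruning cost is spread
across CPUs instead of being pinned to one of them.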
Signed-off-by: Waiman Long
---
 fs/dcache.c | 43 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 6d00f52..4f34f53 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -360,7 +360,8 @@ static void __neg_dentry_inc(struct dentry *dentry)
 			WRITE_ONCE(ndblk.prune_sb, NULL);
 		} else {
 			atomic_inc(&ndblk.prune_sb->s_active);
-			schedule_delayed_work(&prune_neg_dentry_work, 1);
+			schedule_delayed_work_on(smp_processor_id(),
+						 &prune_neg_dentry_work, 1);
 		}
 	}
 }
@@ -1467,8 +1468,9 @@ static enum lru_status dentry_negative_lru_isolate(struct list_head *item,
  */
 static void prune_negative_dentry(struct work_struct *work)
 {
+	int cpu = smp_processor_id();
 	int freed, last_n_neg;
-	long nfree;
+	long nfree, excess;
 	struct super_block *sb = READ_ONCE(ndblk.prune_sb);
 	LIST_HEAD(dispose);
 
@@ -1502,9 +1504,40 @@ static void prune_negative_dentry(struct work_struct *work)
 	    (nfree >= neg_dentry_nfree_init/2) || NEG_IS_SB_UMOUNTING(sb))
 		goto stop_pruning;
 
-	schedule_delayed_work(&prune_neg_dentry_work,
-			      (nfree < neg_dentry_nfree_init/8)
-			      ? NEG_PRUNING_FAST_RATE : NEG_PRUNING_SLOW_RATE);
+	/*
+	 * If the negative dentry count in the current cpu is less than the
+	 * per_cpu limit, schedule the pruning in the next cpu if it has
+	 * more negative dentries. This will make the negative dentry count
+	 * reduction spread more evenly across multiple per-cpu counters.
+	 */
+	excess = neg_dentry_percpu_limit - __this_cpu_read(nr_dentry_neg);
+	if (excess > 0) {
+		int next_cpu = cpumask_next(cpu, cpu_online_mask);
+
+		if (next_cpu >= nr_cpu_ids)
+			next_cpu = cpumask_first(cpu_online_mask);
+		if (per_cpu(nr_dentry_neg, next_cpu) >
+		    __this_cpu_read(nr_dentry_neg)) {
+			cpu = next_cpu;
+
+			/*
+			 * Transfer some of the excess negative dentry count
+			 * to the free pool if the current percpu pool is less
+			 * than 3/4 of the limit.
+			 */
+			if ((excess > neg_dentry_percpu_limit/4) &&
+			    raw_spin_trylock(&ndblk.nfree_lock)) {
+				WRITE_ONCE(ndblk.nfree,
+					   ndblk.nfree + NEG_DENTRY_BATCH);
+				__this_cpu_add(nr_dentry_neg, NEG_DENTRY_BATCH);
+				raw_spin_unlock(&ndblk.nfree_lock);
+			}
+		}
+	}
+
+	schedule_delayed_work_on(cpu, &prune_neg_dentry_work,
+				 (nfree < neg_dentry_nfree_init/8)
+				 ? NEG_PRUNING_FAST_RATE : NEG_PRUNING_SLOW_RATE);
 	return;
 
 stop_pruning: