From patchwork Fri Jul 10 14:01:47 2020
X-Patchwork-Submitter: Feng Tang <feng.tang@intel.com>
X-Patchwork-Id: 11656727
From: Feng Tang <feng.tang@intel.com>
To: Andrew Morton, Michal Hocko, Johannes Weiner, Matthew Wilcox,
    Mel Gorman, Kees Cook, Qian Cai, Dennis Zhou,
    andi.kleen@intel.com, tim.c.chen@intel.com, dave.hansen@intel.com,
    ying.huang@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Feng Tang <feng.tang@intel.com>, Tejun Heo, Christoph Lameter
Subject: [PATCH v6 3/4] percpu_counter: add percpu_counter_sync()
Date: Fri, 10 Jul 2020 22:01:47 +0800
Message-Id: <1594389708-60781-4-git-send-email-feng.tang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1594389708-60781-1-git-send-email-feng.tang@intel.com>
References: <1594389708-60781-1-git-send-email-feng.tang@intel.com>

A percpu_counter's accuracy is related to its batch size: each CPU may
hold a partial count of up to the batch size before it is folded into
the global count.  For a percpu_counter with a big batch, the deviation
can therefore be big, so when the counter's batch is changed at runtime
to a smaller value for better accuracy, there can also be a requirement
to flush the deviation accumulated under the old, bigger batch.  So add
a percpu_counter sync function meant to be run on each CPU.

Reported-by: kernel test robot
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: Michal Hocko
Cc: Qian Cai
Cc: Andi Kleen
Cc: Huang Ying
---
 include/linux/percpu_counter.h |  4 ++++
 lib/percpu_counter.c           | 19 +++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 0a4f54d..01861ee 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -44,6 +44,7 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
 			      s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
 int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+void percpu_counter_sync(struct percpu_counter *fbc);
 
 static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
 {
@@ -172,6 +173,9 @@ static inline bool percpu_counter_initialized(struct percpu_counter *fbc)
 	return true;
 }
 
+static inline void percpu_counter_sync(struct percpu_counter *fbc)
+{
+}
 #endif	/* CONFIG_SMP */
 
 static inline void percpu_counter_inc(struct percpu_counter *fbc)
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index a66595b..a2345de 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -99,6 +99,25 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 EXPORT_SYMBOL(percpu_counter_add_batch);
 
 /*
+ * For a percpu_counter with a big batch, the deviation of its count
+ * could be big, and there is a requirement to reduce it, e.g. when
+ * the counter's batch is decreased at runtime for better accuracy.
+ * That can be achieved by running this sync function on each CPU.
+ */
+void percpu_counter_sync(struct percpu_counter *fbc)
+{
+	unsigned long flags;
+	s64 count;
+
+	raw_spin_lock_irqsave(&fbc->lock, flags);
+	count = __this_cpu_read(*fbc->counters);
+	fbc->count += count;
+	__this_cpu_sub(*fbc->counters, count);
+	raw_spin_unlock_irqrestore(&fbc->lock, flags);
+}
+EXPORT_SYMBOL(percpu_counter_sync);
+
+/*
  * Add up all the per-cpu counts, return the result.  This is a more accurate
  * but much slower version of percpu_counter_read_positive()
  */
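
Not part of the patch: a minimal usage sketch, assuming a caller that
shrinks a counter's batch at runtime.  percpu_counter_sync() only folds
the local CPU's partial count into fbc->count, so it has to be invoked
on every CPU, e.g. via schedule_on_each_cpu().  All demo_* names below
are hypothetical, not from this series.

#include <linux/percpu_counter.h>
#include <linux/workqueue.h>

static struct percpu_counter demo_counter;	/* hypothetical counter */
static int demo_batch = 64;			/* batch used by updaters */

/* Updaters fold into the global count only when the local partial
 * count reaches the batch, so each CPU may carry a deviation of up
 * to the batch size. */
static void demo_add(s64 amount)
{
	percpu_counter_add_batch(&demo_counter, amount, READ_ONCE(demo_batch));
}

/* Work callback: flush the calling CPU's partial count. */
static void demo_sync_fn(struct work_struct *work)
{
	percpu_counter_sync(&demo_counter);
}

/* Shrink the batch for better accuracy, then flush the deviation that
 * accumulated under the old, larger batch on every CPU.  Must run in
 * process context: schedule_on_each_cpu() may sleep. */
static int demo_set_batch(int new_batch)
{
	WRITE_ONCE(demo_batch, new_batch);
	return schedule_on_each_cpu(demo_sync_fn);
}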