From patchwork Mon Dec 2 19:26:43 2019
X-Patchwork-Submitter: Dennis Zhou
X-Patchwork-Id: 11269505
Date: Mon, 2 Dec 2019 14:26:43 -0500
From: Dennis Zhou <dennisszhou@gmail.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [GIT PULL] percpu changes for v5.5-rc1
Message-ID: <20191202192643.GA19946@dennisz-mbp>

Hi Linus,

This pull request has a change to fix percpu-refcount for RT kernels,
because rcu-sched disables preemption and the refcount release callback
might acquire a spinlock.

In the works is memcg accounting for percpu memory by Roman Gushchin.
That may land in either for-5.6 or for-5.7. There are also some sparse
warnings that we're sorting out now.
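For context, a hedged sketch of the RT problem (illustrative only, not
code from this series; my_release() and my_lock are hypothetical names):
on PREEMPT_RT, spinlock_t becomes a sleeping lock, while
rcu_read_lock_sched() implies preempt_disable(), so a release callback
that takes a spinlock could sleep inside a non-preemptible region:

```c
/* Hypothetical sketch; my_release() and my_lock are made-up names. */
static void my_release(struct percpu_ref *ref)
{
	spin_lock(&my_lock);	/* a sleeping lock on PREEMPT_RT */
	/* ... cleanup ... */
	spin_unlock(&my_lock);
}

static void buggy_put(struct percpu_ref *ref, unsigned long nr)
{
	rcu_read_lock_sched();		/* implies preempt_disable() */
	if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
		ref->release(ref);	/* may take my_lock and sleep:
					 * invalid while preemption is
					 * disabled on RT */
	rcu_read_unlock_sched();
}
```

Switching to plain rcu_read_lock()/rcu_read_unlock(), whose read-side
critical sections remain preemptible, makes the sleeping lock in the
callback safe; that is what the diff below does.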
Thanks,
Dennis

The following changes since commit 4f5cafb5cb8471e54afdc9054d973535614f7675:

  Linux 5.4-rc3 (2019-10-13 16:37:36 -0700)

are available in the Git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu.git for-5.5

for you to fetch changes up to ba30e27405afa0b13b79532a345977b3e58ad501:

  Revert "percpu: add __percpu to SHIFT_PERCPU_PTR" (2019-11-25 14:28:04 -0800)

----------------------------------------------------------------
Ben Dooks (1):
      percpu: add __percpu to SHIFT_PERCPU_PTR

Dennis Zhou (1):
      Revert "percpu: add __percpu to SHIFT_PERCPU_PTR"

Sebastian Andrzej Siewior (1):
      percpu-refcount: Use normal instead of RCU-sched

 include/linux/percpu-refcount.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 7aef0abc194a..390031e816dc 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -186,14 +186,14 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
 {
 	unsigned long __percpu *percpu_count;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_add(*percpu_count, nr);
 	else
 		atomic_long_add(nr, &ref->count);
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 }
 
 /**
@@ -223,7 +223,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
 	unsigned long __percpu *percpu_count;
 	bool ret;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count)) {
 		this_cpu_inc(*percpu_count);
@@ -232,7 +232,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
 		ret = atomic_long_inc_not_zero(&ref->count);
 	}
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 
 	return ret;
 }
@@ -257,7 +257,7 @@ static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
 	unsigned long __percpu *percpu_count;
 	bool ret = false;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count)) {
 		this_cpu_inc(*percpu_count);
@@ -266,7 +266,7 @@ static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
 		ret = atomic_long_inc_not_zero(&ref->count);
 	}
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 
 	return ret;
 }
@@ -285,14 +285,14 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
 {
 	unsigned long __percpu *percpu_count;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
 	else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
 		ref->release(ref);
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 }
 
 /**