From patchwork Mon Jun 12 09:07:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276162 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4CD61C87FDE for ; Mon, 12 Jun 2023 09:57:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232302AbjFLJ5g (ORCPT ); Mon, 12 Jun 2023 05:57:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33450 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229692AbjFLJyR (ORCPT ); Mon, 12 Jun 2023 05:54:17 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6347649F6; Mon, 12 Jun 2023 02:38:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=VyS6fqYhD/n91U5f9GQ+a8R8fakG8EouUVHj8V9PE00=; b=QruMXFdvJIpT6C4xhfbtXQc4F6 4BWAt2iZpXnuR4PZL/DqfGiytuUGSiJSOq/8VdBO2EL0m6XzvNtDiKZ1oLWkBo4Fjyp1SCW0HtowW khBwKdFSkOaLE0NRU0h6wppJ/BJQbx6t+fi0YMOq8E+vtMHdfs8eGLnlXdcKkdtjJW1TZjuaLEaOi xRrHYrmW0sm1wBcbePBmYSzXufFebgerY8jxwNA5ArsNL9JCWj7MuCccd5YfoqES1HrUtarXWvNv9 rTToGRe669jagMzznnXleX7c296r5JI6WREQfZY9f/DB2/IM2B7t9lrQeaDhqxwYfXOZ/h3jaM8KO MM3mvBbQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0f-008kOm-0u; Mon, 12 Jun 2023 09:38:49 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 29BC6300E86; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id ED20B240FDF93; Mon, 12 Jun 2023 11:38:47 +0200 (CEST) Message-ID: <20230612093537.467120754@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:14 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 
42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 01/57] dmaengine: ioat: Free up __cleanup() name References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org In order to use __cleanup for __attribute__((__cleanup__(func))) the name must not be used for anything else. Avoid the conflict. Signed-off-by: Peter Zijlstra (Intel) Acked-by: Dave Jiang --- drivers/dma/ioat/dma.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) --- a/drivers/dma/ioat/dma.c +++ b/drivers/dma/ioat/dma.c @@ -584,11 +584,11 @@ desc_get_errstat(struct ioatdma_chan *io } /** - * __cleanup - reclaim used descriptors + * __ioat_cleanup - reclaim used descriptors * @ioat_chan: channel (ring) to clean * @phys_complete: zeroed (or not) completion address (from status) */ -static void __cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete) +static void __ioat_cleanup(struct ioatdma_chan *ioat_chan, dma_addr_t phys_complete) { struct ioatdma_device *ioat_dma = ioat_chan->ioat_dma; struct ioat_ring_ent *desc; @@ -675,7 +675,7 @@ static void ioat_cleanup(struct ioatdma_ spin_lock_bh(&ioat_chan->cleanup_lock); if (ioat_cleanup_preamble(ioat_chan, &phys_complete)) - __cleanup(ioat_chan, phys_complete); + __ioat_cleanup(ioat_chan, phys_complete); if (is_ioat_halted(*ioat_chan->completion)) { u32 chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); @@ -712,7 +712,7 @@ static void ioat_restart_channel(struct ioat_quiesce(ioat_chan, 0); if (ioat_cleanup_preamble(ioat_chan, &phys_complete)) - __cleanup(ioat_chan, phys_complete); + __ioat_cleanup(ioat_chan, phys_complete); __ioat_restart_chan(ioat_chan); } @@ -786,7 +786,7 @@ static void ioat_eh(struct ioatdma_chan /* cleanup so tail points to descriptor that caused the error */ if (ioat_cleanup_preamble(ioat_chan, &phys_complete)) - __cleanup(ioat_chan, phys_complete); + __ioat_cleanup(ioat_chan, phys_complete); chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET); pci_read_config_dword(pdev, IOAT_PCI_CHANERR_INT_OFFSET, &chanerr_int); @@ -943,7 +943,7 @@ void ioat_timer_event(struct timer_list /* timer restarted in ioat_cleanup_preamble * and IOAT_COMPLETION_ACK cleared */ - __cleanup(ioat_chan, phys_complete); + __ioat_cleanup(ioat_chan, phys_complete); goto unlock_out; } From patchwork Mon Jun 12 09:07:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276152 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BACB1C7EE25 for ; Mon, 12 Jun 2023 09:57:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234031AbjFLJ5E (ORCPT ); Mon, 12 Jun 2023 05:57:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33886 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S231173AbjFLJyX (ORCPT ); Mon, 12 Jun 2023 05:54:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F3E55FF0; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=mq5JO2tPl8C7SILOfz/KwJIcnobnm9UFdJaARgXH410=; b=AMcWXrFAvQk7IooWq5vsQKOUpX c+43fYmu8uTkx0XjpUhBb5rLhhZTpeo7fFtrFQ6gX1Ie/h4dDKHWEhenT13VsLCQtYF4J8qBvnqTN Rb7fXamtrAdhJIYp7XX1thb9d5mecYu5DsgQSqMp7US340Et8Gvt1QM3WZU+8S6FEN+rhyMOUXAIr zrgte1I7kA8l7tRuTCoT+JXSeh785ZZ203dX7QWjb/gjzrBXnKJDRygq4P+8awKyHb5qZx9laV/87 GV7BCj0gIl7rPXAFdcBqCQXcR9zo9IGdzN5vsPxOEsYOaEA/IqiNceS6vjnqf815nn9bTlK1cd3oY AFlkOulA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0f-002N8u-IH; Mon, 12 Jun 2023 09:38:49 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6C280302680; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id F111730A210B0; Mon, 12 Jun 2023 11:38:47 +0200 (CEST) Message-ID: <20230612093537.536441207@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:15 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 02/57] apparmor: Free up __cleanup() name References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org In order to use __cleanup for __attribute__((__cleanup__(func))) the name must not be used for anything else. 
Avoid the conflict. Signed-off-by: Peter Zijlstra (Intel) Acked-by: John Johansen Acked-by: John Johansen --- security/apparmor/include/lib.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) --- a/security/apparmor/include/lib.h +++ b/security/apparmor/include/lib.h @@ -232,7 +232,7 @@ void aa_policy_destroy(struct aa_policy */ #define fn_label_build(L, P, GFP, FN) \ ({ \ - __label__ __cleanup, __done; \ + __label__ __do_cleanup, __done; \ struct aa_label *__new_; \ \ if ((L)->size > 1) { \ @@ -250,7 +250,7 @@ void aa_policy_destroy(struct aa_policy __new_ = (FN); \ AA_BUG(!__new_); \ if (IS_ERR(__new_)) \ - goto __cleanup; \ + goto __do_cleanup; \ __lvec[__j++] = __new_; \ } \ for (__j = __count = 0; __j < (L)->size; __j++) \ @@ -272,7 +272,7 @@ void aa_policy_destroy(struct aa_policy vec_cleanup(profile, __pvec, __count); \ } else \ __new_ = NULL; \ -__cleanup: \ +__do_cleanup: \ vec_cleanup(label, __lvec, (L)->size); \ } else { \ (P) = labels_profile(L); \ From patchwork Mon Jun 12 09:07:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276160 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AB5CDC7EE45 for ; Mon, 12 Jun 2023 09:57:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235364AbjFLJ5b (ORCPT ); Mon, 12 Jun 2023 05:57:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33032 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232302AbjFLJyS (ORCPT ); Mon, 12 Jun 2023 05:54:18 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7787F449F; Mon, 12 Jun 2023 02:38:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=hIHpoo/SiH1NQnU6QQRKGeTYqK+wj0fI/nl3of6Av20=; b=ejJTRNFGexBmt/1NUimW26TJT6 F67CVgqx32D2WfmsdyOL3kvMSEQ6jEjC+Rz9fdnFMk0OL+VHZEf8OuD5vTZWeWhhdOR1YqxA77J8a knxJHyATFin3liOJeAHFZpSoEQ/VMG3QCilELoKyXY64vk+LQaUtLCMFf3XaNWIp6v7KVUq53uWbO RvGT5MCxIuHOa+uB+TgFu+hPLn7LbrEiALvN9LFonMkOXvQ8TAwWe679s7mtOtOkbGdj5oTdTQM96 CEuqOJOA0CiOaxFP8V0n5V8yxTnuzFzjevl1AXYvjpJra9VynF9/Racc8TOLMS2RpJiTH2ZWHF0hU budY24mg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0f-008kOn-0u; Mon, 12 Jun 2023 09:38:49 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 728A1302DA8; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 00F6130A37E79; Mon, 12 Jun 2023 11:38:47 +0200 (CEST) Message-ID: <20230612093537.614161713@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:16 +0200 From: Peter Zijlstra To: 
torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 03/57] locking: Introduce __cleanup() based infrastructure References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use __attribute__((__cleanup__(func))) to build: - simple auto-release pointers using __free() - 'classes' with constructor and destructor semantics for scope-based resource management. - lock guards based on the above classes. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/cleanup.h | 167 ++++++++++++++++++++++++++++++++++++ include/linux/compiler-clang.h | 9 + include/linux/compiler_attributes.h | 6 + include/linux/device.h | 7 + include/linux/file.h | 6 + include/linux/irqflags.h | 7 + include/linux/mutex.h | 4 include/linux/percpu.h | 4 include/linux/preempt.h | 5 + include/linux/rcupdate.h | 3 include/linux/rwsem.h | 8 + include/linux/sched/task.h | 2 include/linux/slab.h | 3 include/linux/spinlock.h | 31 ++++++ include/linux/srcu.h | 5 + scripts/checkpatch.pl | 2 16 files changed, 268 insertions(+), 1 deletion(-) --- /dev/null +++ b/include/linux/cleanup.h @@ -0,0 +1,167 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_GUARDS_H +#define __LINUX_GUARDS_H + +#include + +/* + * DEFINE_FREE(name, type, free): + * simple helper macro that defines the required wrapper for a __free() + * based cleanup function. @free is an expression using '_T' to access + * the variable. + * + * __free(name): + * variable attribute to add a scoped based cleanup to the variable. + * + * return_ptr(p): + * returns p while inhibiting the __free(). + * + * Ex. 
+ * + * DEFINE_FREE(kfree, void *, if (_T) kfree(_T)) + * + * struct obj *p = kmalloc(...); + * if (!p) + * return NULL; + * + * if (!init_obj(p)) + * return NULL; + * + * return_ptr(p); + */ + +#define DEFINE_FREE(name, type, free) \ + static inline void __free_##name(void *p) { type _T = *(type *)p; free; } + +#define __free(name) __cleanup(__free_##name) + +#define no_free_ptr(p) \ + ({ __auto_type __ptr = (p); (p) = NULL; __ptr; }) + +#define return_ptr(p) return no_free_ptr(p) + + +/* + * DEFINE_CLASS(name, type, exit, init, init_args...): + * helper to define the destructor and constructor for a type. + * @exit is an expression using '_T' -- similar to FREE above. + * @init is an expression in @init_args resulting in @type + * + * EXTEND_CLASS(name, ext, init, init_args...): + * extends class @name to @name@ext with the new constructor + * + * CLASS(name, var)(args...): + * declare the variable @var as an instance of the named class + * + * Ex. + * + * DEFINE_CLASS(fdget, struct fd, fdput(_T), fdget(fd), int fd) + * + * CLASS(fdget, f)(fd); + * if (!f.file) + * return -EBADF; + * + * // use 'f' without concern + */ + +#define DEFINE_CLASS(name, type, exit, init, init_args...) \ +typedef type class_##name##_t; \ +static inline void class_##name##_destructor(type *p) \ +{ type _T = *p; exit; } \ +static inline type class_##name##_constructor(init_args) \ +{ type t = init; return t; } + +#define EXTEND_CLASS(name, ext, init, init_args...) \ +typedef class_##name##_t class_##name##ext##_t; \ +static inline void class_##name##ext##_destructor(class_##name##_t *p) \ +{ class_##name##_destructor(p); } \ +static inline class_##name##_t class_##name##ext##_constructor(init_args) \ +{ class_##name##_t t = init; return t; } + +#define CLASS(name, var) \ + class_##name##_t var __cleanup(class_##name##_destructor) = \ + class_##name##_constructor + + +/* + * DEFINE_GUARD(name, type, lock, unlock): + * trivial wrapper around DEFINE_CLASS() above specifically + * for locks. + * + * guard(name): + * an anonymous instance of the (guard) class + * + * scoped_guard (name, args...) { }: + * similar to CLASS(name, scope)(args), except the variable (with the + * explicit name 'scope') is declard in a for-loop such that its scope is + * bound to the next (compound) statement. + * + */ + +#define DEFINE_GUARD(name, type, lock, unlock) \ + DEFINE_CLASS(name, type, unlock, ({ lock; _T; }), type _T) + +#define guard(name) \ + CLASS(name, __UNIQUE_ID(guard)) + +#define scoped_guard(name, args...) \ + for (CLASS(name, scope)(args), \ + *done = NULL; !done; done = (void *)1) + +/* + * Additional helper macros for generating lock guards with types, either for + * locks that don't have a native type (eg. RCU, preempt) or those that need a + * 'fat' pointer (eg. spin_lock_irqsave). + * + * DEFINE_LOCK_GUARD_0(name, _lock, _unlock, ...) + * DEFINE_LOCK_GUARD_1(name, type, _lock, _unlock, ...) + * + * will result in the following type: + * + * typedef struct { + * type *lock; // 'type := void' for the _0 variant + * __VA_ARGS__; + * } class_##name##_t; + * + * As above, both _lock and _unlock are statements, except this time '_T' will + * be a pointer to the above struct. + */ + +#define __DEFINE_UNLOCK_GUARD(name, type, _unlock, ...) 
\ +typedef struct { \ + type *lock; \ + __VA_ARGS__; \ +} class_##name##_t; \ + \ +static inline void class_##name##_destructor(class_##name##_t *_T) \ +{ \ + if (_T->lock) { _unlock; } \ +} + + +#define __DEFINE_LOCK_GUARD_1(name, type, _lock) \ +static inline class_##name##_t class_##name##_constructor(type *l) \ +{ \ + class_##name##_t _t = { .lock = l }, *_T = &_t; \ + _lock; \ + return _t; \ +} + +#define __DEFINE_LOCK_GUARD_0(name, _lock) \ +static inline class_##name##_t class_##name##_constructor(void) \ +{ \ + class_##name##_t _t = { .lock = (void*)1 }, \ + *_T __maybe_unused = &_t; \ + _lock; \ + return _t; \ +} + +#define DEFINE_LOCK_GUARD_1(name, type, _lock, _unlock, ...) \ +__DEFINE_UNLOCK_GUARD(name, type, _unlock, __VA_ARGS__) \ +__DEFINE_LOCK_GUARD_1(name, type, _lock) + +#define DEFINE_LOCK_GUARD_0(name, _lock, _unlock, ...) \ +__DEFINE_UNLOCK_GUARD(name, void, _unlock, __VA_ARGS__) \ +__DEFINE_LOCK_GUARD_0(name, _lock) + +#endif /* __LINUX_GUARDS_H */ --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -5,6 +5,15 @@ /* Compiler specific definitions for Clang compiler */ +/* + * Clang prior to 17 is being silly and considers many __cleanup() variables + * as unused (because they are, their sole purpose is to go out of scope). + * + * https://reviews.llvm.org/D152180 + */ +#undef __cleanup +#define __cleanup(func) __maybe_unused __attribute__((__cleanup__(func))) + /* same as gcc, this was present in clang-2.6 so we can assume it works * with any version that can compile the kernel */ --- a/include/linux/compiler_attributes.h +++ b/include/linux/compiler_attributes.h @@ -77,6 +77,12 @@ #define __attribute_const__ __attribute__((__const__)) /* + * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-cleanup-variable-attribute + * clang: https://clang.llvm.org/docs/AttributeReference.html#cleanup + */ +#define __cleanup(func) __attribute__((__cleanup__(func))) + +/* * Optional: only supported since gcc >= 9 * Optional: not supported by clang * --- a/include/linux/device.h +++ b/include/linux/device.h @@ -30,6 +30,7 @@ #include #include #include +#include #include struct device; @@ -899,6 +900,9 @@ void device_unregister(struct device *de void device_initialize(struct device *dev); int __must_check device_add(struct device *dev); void device_del(struct device *dev); + +DEFINE_FREE(device_del, struct device *, if (_T) device_del(_T)) + int device_for_each_child(struct device *dev, void *data, int (*fn)(struct device *dev, void *data)); int device_for_each_child_reverse(struct device *dev, void *data, @@ -1066,6 +1070,9 @@ extern int (*platform_notify_remove)(str */ struct device *get_device(struct device *dev); void put_device(struct device *dev); + +DEFINE_FREE(put_device, struct device *, if (_T) put_device(_T)) + bool kill_device(struct device *dev); #ifdef CONFIG_DEVTMPFS --- a/include/linux/file.h +++ b/include/linux/file.h @@ -10,6 +10,7 @@ #include #include #include +#include struct file; @@ -80,6 +81,8 @@ static inline void fdput_pos(struct fd f fdput(f); } +DEFINE_CLASS(fd, struct fd, fdput(_T), fdget(fd), int fd) + extern int f_dupfd(unsigned int from, struct file *file, unsigned flags); extern int replace_fd(unsigned fd, struct file *file, unsigned flags); extern void set_close_on_exec(unsigned int fd, int flag); @@ -88,6 +91,9 @@ extern int __get_unused_fd_flags(unsigne extern int get_unused_fd_flags(unsigned flags); extern void put_unused_fd(unsigned int fd); +DEFINE_CLASS(get_unused_fd, int, if (_T >= 0) 
put_unused_fd(_T), + get_unused_fd_flags(flags), unsigned flags) + extern void fd_install(unsigned int fd, struct file *file); extern int __receive_fd(struct file *file, int __user *ufd, --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -13,6 +13,7 @@ #define _LINUX_TRACE_IRQFLAGS_H #include +#include #include #include @@ -267,4 +268,10 @@ extern void warn_bogus_irq_restore(void) #define irqs_disabled_flags(flags) raw_irqs_disabled_flags(flags) +DEFINE_LOCK_GUARD_0(irq, local_irq_disable(), local_irq_enable()) +DEFINE_LOCK_GUARD_0(irqsave, + local_irq_save(_T->flags), + local_irq_restore(_T->flags), + unsigned long flags) + #endif --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -19,6 +19,7 @@ #include #include #include +#include #ifdef CONFIG_DEBUG_LOCK_ALLOC # define __DEP_MAP_MUTEX_INITIALIZER(lockname) \ @@ -219,4 +220,7 @@ extern void mutex_unlock(struct mutex *l extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); +DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T)) +DEFINE_FREE(mutex, struct mutex *, if (_T) mutex_unlock(_T)) + #endif /* __LINUX_MUTEX_H */ --- a/include/linux/percpu.h +++ b/include/linux/percpu.h @@ -8,6 +8,7 @@ #include #include #include +#include #include @@ -127,6 +128,9 @@ extern void __init setup_per_cpu_areas(v extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1); extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1); extern void free_percpu(void __percpu *__pdata); + +DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T)) + extern phys_addr_t per_cpu_ptr_to_phys(void *addr); #define alloc_percpu_gfp(type, gfp) \ --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -8,6 +8,7 @@ */ #include +#include #include /* @@ -463,4 +464,8 @@ static __always_inline void preempt_enab preempt_enable(); } +DEFINE_LOCK_GUARD_0(preempt, preempt_disable(), preempt_enable()) +DEFINE_LOCK_GUARD_0(preempt_notrace, preempt_disable_notrace(), preempt_enable_notrace()) +DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable()) + #endif /* __LINUX_PREEMPT_H */ --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -1095,4 +1096,6 @@ rcu_head_after_call_rcu(struct rcu_head extern int rcu_expedited; extern int rcu_normal; +DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock()) + #endif /* __LINUX_RCUPDATE_H */ --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -15,6 +15,7 @@ #include #include #include +#include #ifdef CONFIG_DEBUG_LOCK_ALLOC # define __RWSEM_DEP_MAP_INIT(lockname) \ @@ -201,6 +202,13 @@ extern void up_read(struct rw_semaphore */ extern void up_write(struct rw_semaphore *sem); +DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T)) +DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T)) + +DEFINE_FREE(up_read, struct rw_semaphore *, if (_T) up_read(_T)) +DEFINE_FREE(up_write, struct rw_semaphore *, if (_T) up_write(_T)) + + /* * downgrade write lock to read lock */ --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -126,6 +126,8 @@ static inline void put_task_struct(struc __put_task_struct(t); } +DEFINE_FREE(put_task, struct task_struct *, if (_T) put_task_struct(_T)) + static inline void put_task_struct_many(struct task_struct *t, int nr) { if (refcount_sub_and_test(nr, &t->usage)) --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -17,6 +17,7 @@ 
#include #include #include +#include /* @@ -211,6 +212,8 @@ void kfree(const void *objp); void kfree_sensitive(const void *objp); size_t __ksize(const void *objp); +DEFINE_FREE(kfree, void *, if (_T) kfree(_T)) + /** * ksize - Report actual allocation size of associated object * --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -61,6 +61,7 @@ #include #include #include +#include #include #include @@ -502,5 +503,35 @@ int __alloc_bucket_spinlocks(spinlock_t void free_bucket_spinlocks(spinlock_t *locks); +DEFINE_LOCK_GUARD_1(raw_spinlock, raw_spinlock_t, + raw_spin_lock(_T->lock), + raw_spin_unlock(_T->lock)) + +DEFINE_LOCK_GUARD_1(raw_spinlock_nested, raw_spinlock_t, + raw_spin_lock_nested(_T->lock, SINGLE_DEPTH_NESTING), + raw_spin_unlock(_T->lock)) + +DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t, + raw_spin_lock_irq(_T->lock), + raw_spin_unlock_irq(_T->lock)) + +DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t, + raw_spin_lock_irqsave(_T->lock, _T->flags), + raw_spin_unlock_irqrestore(_T->lock, _T->flags), + unsigned long flags) + +DEFINE_LOCK_GUARD_1(spinlock, spinlock_t, + spin_lock(_T->lock), + spin_unlock(_T->lock)) + +DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t, + spin_lock_irq(_T->lock), + spin_unlock_irq(_T->lock)) + +DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t, + spin_lock_irqsave(_T->lock, _T->flags), + spin_unlock_irqrestore(_T->lock, _T->flags), + unsigned long flags) + #undef __LINUX_INSIDE_SPINLOCK_H #endif /* __LINUX_SPINLOCK_H */ --- a/include/linux/srcu.h +++ b/include/linux/srcu.h @@ -343,4 +343,9 @@ static inline void smp_mb__after_srcu_re /* __srcu_read_unlock has smp_mb() internally so nothing to do here. */ } +DEFINE_LOCK_GUARD_1(srcu, struct srcu_struct, + _T->idx = srcu_read_lock(_T->lock), + srcu_read_unlock(_T->lock, _T->idx), + int idx) + #endif --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -5046,7 +5046,7 @@ sub process { if|for|while|switch|return|case| volatile|__volatile__| __attribute__|format|__extension__| - asm|__asm__)$/x) + asm|__asm__|scoped_guard)$/x) { # cpp #define statements have non-optional spaces, ie # if there is a space between the name and the open From patchwork Mon Jun 12 09:07:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276176 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 387C5C87FDD for ; Mon, 12 Jun 2023 09:58:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236042AbjFLJ6N (ORCPT ); Mon, 12 Jun 2023 05:58:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230186AbjFLJyT (ORCPT ); Mon, 12 Jun 2023 05:54:19 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C79749FA; Mon, 12 Jun 2023 02:38:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=iuKjwptMM283OkeCg3rHRYfDPMncNnTiTfZUlZvzjjw=; 
b=eQ6OsD4kYVKLafcm5zMaXykUrJ t3Yx8a+byrZdB944/0aNdLOqAmoLeWSa1sowLg19pvZhRfEGH12EfjQizZN73l+Q2lmgWVTy/jEN2 66w/tSoR035EtMK1QJrtp16Z85OKpvfSl1I8OLy7JSDS0vEBXw8vGKRffU4u96Gz85h50NrSym32v 8aadmJp7+8xctTmftjzi5KRquTBd3yr6fYIYX7v7Rb7TZGAQFPvFwW5JY6GaXXShnDrDMfhQWRx/H Hx6pFtrNHcKkz/xQw6bxea1nVgSbzWPnczzWWwxrD46aGrv+zXbaZt8xVprl8zZZmhqQuJnaVP8+3 u9+4lFQg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0f-008kOo-0K; Mon, 12 Jun 2023 09:38:49 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 7FC1C302E28; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 0C29230A58077; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093537.693926033@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:17 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 04/57] kbuild: Drop -Wdeclaration-after-statement References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org With the advent of scope-based resource management it becomes really tedious to abide by the constraints of -Wdeclaration-after-statement. It will still be recommended to place declarations at the start of a scope where possible, but it will no longer be enforced.
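To make the interaction concrete, here is a hedged sketch (struct foo and alloc_foo() are invented for illustration; guard(), __free() and return_ptr() are the helpers introduced earlier in this series) of the style the cleanup infrastructure encourages, with declarations at the point the resource is acquired:

  /* hypothetical example -- not part of this patch */
  struct foo { struct device *dev; };

  static struct foo *alloc_foo(struct device *dev, struct mutex *lock)
  {
          if (!dev)
                  return NULL;            /* statement first ... */

          guard(mutex)(lock);             /* ... declaration after it */

          struct foo *f __free(kfree) = kzalloc(sizeof(*f), GFP_KERNEL);
          if (!f)
                  return NULL;            /* lock dropped automatically */

          f->dev = dev;
          return_ptr(f);                  /* keep f, still drop the lock */
  }

Both the guard() and the __free() declaration above would trip -Wdeclaration-after-statement, while hoisting them to the top of the function would detach the cleanup from the code it belongs to.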
Suggested-by: Linus Torvalds Signed-off-by: Peter Zijlstra (Intel) --- Makefile | 6 +----- arch/arm64/kernel/vdso32/Makefile | 2 -- 2 files changed, 1 insertion(+), 7 deletions(-) --- a/Makefile +++ b/Makefile @@ -447,8 +447,7 @@ HOSTRUSTC = rustc HOSTPKG_CONFIG = pkg-config KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \ - -O2 -fomit-frame-pointer -std=gnu11 \ - -Wdeclaration-after-statement + -O2 -fomit-frame-pointer -std=gnu11 KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS) KBUILD_USERLDFLAGS := $(USERLDFLAGS) @@ -1012,9 +1011,6 @@ endif # arch Makefile may override CC so keep this after arch Makefile is included NOSTDINC_FLAGS += -nostdinc -# warn about C99 declaration after statement -KBUILD_CFLAGS += -Wdeclaration-after-statement - # Variable Length Arrays (VLAs) should not be used anywhere in the kernel KBUILD_CFLAGS += -Wvla --- a/arch/arm64/kernel/vdso32/Makefile +++ b/arch/arm64/kernel/vdso32/Makefile @@ -65,11 +65,9 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-pr -fno-strict-aliasing -fno-common \ -Werror-implicit-function-declaration \ -Wno-format-security \ - -Wdeclaration-after-statement \ -std=gnu11 VDSO_CFLAGS += -O2 # Some useful compiler-dependent flags from top-level Makefile -VDSO_CFLAGS += $(call cc32-option,-Wdeclaration-after-statement,) VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign) VDSO_CFLAGS += -fno-strict-overflow VDSO_CFLAGS += $(call cc32-option,-Werror=strict-prototypes) From patchwork Mon Jun 12 09:07:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276143 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10B7BC7EE43 for ; Mon, 12 Jun 2023 09:56:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235872AbjFLJ4q (ORCPT ); Mon, 12 Jun 2023 05:56:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230361AbjFLJyW (ORCPT ); Mon, 12 Jun 2023 05:54:22 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F34E4C15; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=2Fw/6pc13hFz4431UNCf/XS3M9fAIWcYfqPnXE09Ku4=; b=b6HhHqyRgVNIvJxJg3qz34xg3z MclAequn3uyYDgQ5Z5i8iDgOLuz/qXGgm1oMBVoGifA2g41/aPpi+zdp37cuqoJp+mxC5cFvegIAx PrGLhpLp6tRFNJ+xGJdm2Vl8NMuY2mfkHRUQqumE0N6uhf8Hwi+0jsjngOlSw1NjDNooe7ANlbed6 InipGNmagWluFfr2AZow7mI1sOKeySpJUvHBQ/S5Qm+hiYjBJavBgxNx9Q3pT2HaDMGuK1RCXjfIv izn1MU80zDpXeVWyzMuTE3zNEGhTt+PqqObFqGdyck+4419X8FjSEtsb3NkqK5BrIhAd8Futl2USY ItEsQUOQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N91-0z; Mon, 12 Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 
server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 86856302EA7; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 0FD3C30A6FEEC; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093537.762718530@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:18 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 05/57] sched: Simplify get_nohz_timer_target() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
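The shape of this and the following conversions is the same throughout; a minimal before/after sketch (lookup_thing() and its helpers are invented stand-ins, guard(rcu) is the helper added by the cleanup patch earlier in this series):

  /* invented helpers, for illustration only */
  static bool thing_present(int key);
  static int use_thing(int key);

  /* before: a common exit label keeps the single unlock in one place */
  static int lookup_thing(int key)
  {
          int ret = -ENOENT;

          rcu_read_lock();
          if (!thing_present(key))
                  goto unlock;
          ret = use_thing(key);
  unlock:
          rcu_read_unlock();
          return ret;
  }

  /* after: guard(rcu)() unlocks on every return path */
  static int lookup_thing(int key)
  {
          guard(rcu)();

          if (!thing_present(key))
                  return -ENOENT;

          return use_thing(key);
  }

Early returns become safe because the guard's destructor runs whenever the scope is left; that is what lets get_nohz_timer_target() below drop its unlock label.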
Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1097,25 +1097,22 @@ int get_nohz_timer_target(void) hk_mask = housekeeping_cpumask(HK_TYPE_TIMER); - rcu_read_lock(); + guard(rcu)(); + for_each_domain(cpu, sd) { for_each_cpu_and(i, sched_domain_span(sd), hk_mask) { if (cpu == i) continue; - if (!idle_cpu(i)) { - cpu = i; - goto unlock; - } + if (!idle_cpu(i)) + return i; } } if (default_cpu == -1) default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER); - cpu = default_cpu; -unlock: - rcu_read_unlock(); - return cpu; + + return default_cpu; } /* From patchwork Mon Jun 12 09:07:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276139 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19A87C7EE2E for ; Mon, 12 Jun 2023 09:56:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234786AbjFLJ4m (ORCPT ); Mon, 12 Jun 2023 05:56:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231137AbjFLJyW (ORCPT ); Mon, 12 Jun 2023 05:54:22 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F24C4C13; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=QOVoxBXTqeJRoOWdQft4bGcKCTL4UyZrQG32knsR3k0=; b=c+D11F7FSqbBnyBLSSfE5UYd1l IvFPRzJeGLbLE977YxYQAS4g0jVtxP4+qS7e9sAEyGnpnM0/g6folbG9eSZv5iyY/d24GmbN9NGRN /+RkwB8o1SLLroMJqf+HzAXEkGvwurDfZKEjFEq3/dV71E5Xp3YhCsfoKdyKyuyYpRdWWU4rMR6bN 7KySp2Drh9CXlO7audT1YOfwwh4FrL2zBrKIN+tMW03xlMnBm4G7PO6JWp8epOgTbND+XuyDvWtfY wxdQXZn9Web1FeOuzgWYGEmPyXwuR9f6T2BwQc4dFBBrjDcoPkjo6/3va7mxH8xuCy9vdUYE8+v0P IpzOAsYg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N92-18; Mon, 12 Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 964E7302F75; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 14D9A30A70220; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093537.833273038@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:19 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, 
longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 06/57] sched: Simplify sysctl_sched_uclamp_handler() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 11 ++++------- 1 file changed, 4 insertions(+), 7 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1801,7 +1801,8 @@ static int sysctl_sched_uclamp_handler(s int old_min, old_max, old_min_rt; int result; - mutex_lock(&uclamp_mutex); + guard(mutex)(&uclamp_mutex); + old_min = sysctl_sched_uclamp_util_min; old_max = sysctl_sched_uclamp_util_max; old_min_rt = sysctl_sched_uclamp_util_min_rt_default; @@ -1810,7 +1811,7 @@ static int sysctl_sched_uclamp_handler(s if (result) goto undo; if (!write) - goto done; + return result; if (sysctl_sched_uclamp_util_min > sysctl_sched_uclamp_util_max || sysctl_sched_uclamp_util_max > SCHED_CAPACITY_SCALE || @@ -1846,16 +1847,12 @@ static int sysctl_sched_uclamp_handler(s * Otherwise, keep it simple and do just a lazy update at each next * task enqueue time. 
*/ - - goto done; + return result; undo: sysctl_sched_uclamp_util_min = old_min; sysctl_sched_uclamp_util_max = old_max; sysctl_sched_uclamp_util_min_rt_default = old_min_rt; -done: - mutex_unlock(&uclamp_mutex); - return result; } #endif From patchwork Mon Jun 12 09:07:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276166 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CA081C7EE2E for ; Mon, 12 Jun 2023 09:57:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235943AbjFLJ5o (ORCPT ); Mon, 12 Jun 2023 05:57:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33874 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230396AbjFLJyW (ORCPT ); Mon, 12 Jun 2023 05:54:22 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6DB644C12; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=Z/QyOBDSEB+zoR9GPvKsSeTCgad9VvZXAWajjLt+jzo=; b=YbdZ7d0Wkk7V4xEA2JeTFjo7y9 Js3k1bAq/AvskMJrYL88GyKWi4bdT9g7xpk2tbcZi3VNr/dAEWOx9CHcGNSfQ6f74cyKb81/LczUz 0uwzpsUGL2jwlCDcudwo4mohnyMk7Ya5OLuZ63mjaIPHz5sza9/KwfqaEzgTpSP7uHZ/0S9G8MxCg W0NoHXpfe1FwA0SonnMbv2vJpnZqDBl6dUVcTBPu6b5/vrwFfvY5LCkIXSN1b6WwSJWTwsIq0TbSN 2EmHnrFbXaeXExBxGGLXhk/uU/VAP0weaXan6woQ24npm1rDwnPNgthAwCgWJr34wxBs5xdz3ymN8 ZU+JNpWQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N93-6k; Mon, 12 Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 9F184302F7E; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 1A5FD30A70240; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093537.905243325@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:20 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, 
frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 07/57] sched: Simplify: migrate_swap_stop() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 23 +++++++---------------- kernel/sched/sched.h | 20 ++++++++++++++++++++ 2 files changed, 27 insertions(+), 16 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3258,7 +3258,6 @@ static int migrate_swap_stop(void *data) { struct migration_swap_arg *arg = data; struct rq *src_rq, *dst_rq; - int ret = -EAGAIN; if (!cpu_active(arg->src_cpu) || !cpu_active(arg->dst_cpu)) return -EAGAIN; @@ -3266,33 +3265,25 @@ static int migrate_swap_stop(void *data) src_rq = cpu_rq(arg->src_cpu); dst_rq = cpu_rq(arg->dst_cpu); - double_raw_lock(&arg->src_task->pi_lock, - &arg->dst_task->pi_lock); - double_rq_lock(src_rq, dst_rq); + guard(double_raw_spinlock)(&arg->src_task->pi_lock, &arg->dst_task->pi_lock); + guard(double_rq_lock)(src_rq, dst_rq); if (task_cpu(arg->dst_task) != arg->dst_cpu) - goto unlock; + return -EAGAIN; if (task_cpu(arg->src_task) != arg->src_cpu) - goto unlock; + return -EAGAIN; if (!cpumask_test_cpu(arg->dst_cpu, arg->src_task->cpus_ptr)) - goto unlock; + return -EAGAIN; if (!cpumask_test_cpu(arg->src_cpu, arg->dst_task->cpus_ptr)) - goto unlock; + return -EAGAIN; __migrate_swap_task(arg->src_task, arg->dst_cpu); __migrate_swap_task(arg->dst_task, arg->src_cpu); - ret = 0; - -unlock: - double_rq_unlock(src_rq, dst_rq); - raw_spin_unlock(&arg->dst_task->pi_lock); - raw_spin_unlock(&arg->src_task->pi_lock); - - return ret; + return 0; } /* --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2572,6 +2572,12 @@ static inline void double_rq_clock_clear static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2) {} #endif +#define DEFINE_LOCK_GUARD_2(name, type, _lock, _unlock, ...) 
\ +__DEFINE_UNLOCK_GUARD(name, type, _unlock, type *lock2; __VA_ARGS__) \ +static inline class_##name##_t class_##name##_constructor(type *lock, type *lock2) \ +{ class_##name##_t _t = { .lock = lock, .lock2 = lock2 }, *_T = &_t; \ + _lock; return _t; } + #ifdef CONFIG_SMP static inline bool rq_order_less(struct rq *rq1, struct rq *rq2) @@ -2701,6 +2707,16 @@ static inline void double_raw_lock(raw_s raw_spin_lock_nested(l2, SINGLE_DEPTH_NESTING); } +static inline void double_raw_unlock(raw_spinlock_t *l1, raw_spinlock_t *l2) +{ + raw_spin_unlock(l1); + raw_spin_unlock(l2); +} + +DEFINE_LOCK_GUARD_2(double_raw_spinlock, raw_spinlock_t, + double_raw_lock(_T->lock, _T->lock2), + double_raw_unlock(_T->lock, _T->lock2)) + /* * double_rq_unlock - safely unlock two runqueues * @@ -2758,6 +2774,10 @@ static inline void double_rq_unlock(stru #endif +DEFINE_LOCK_GUARD_2(double_rq_lock, struct rq, + double_rq_lock(_T->lock, _T->lock2), + double_rq_unlock(_T->lock, _T->lock2)) + extern struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq); extern struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq); From patchwork Mon Jun 12 09:07:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276171 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 71EB2C7EE43 for ; Mon, 12 Jun 2023 09:58:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235987AbjFLJ6E (ORCPT ); Mon, 12 Jun 2023 05:58:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33902 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232091AbjFLJyX (ORCPT ); Mon, 12 Jun 2023 05:54:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B5204C10; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=i7e4Ed2VApizqqMeLDQrTIcscOdlMDKi2d1+AvdLSfc=; b=wMzEZ40Omey0hsxhU/f3YtsHn5 zztcNlN85LbTkIssyfkfrL48NmxnO6Xui7xALjewqMlyvd6RZfe6D4fBUEzU8ivSmkwH/rB+mgo8T DhIJ64bXK4MQAZHj4XNZbgTg99syxZL71a1nSxvAdXCC6Uq6WFU3Xo8Qau8qLevXW3vCcNOlO2hGd kLRb3QAfPguHrywS9Q4UgqR5mxOfj+Fk/M4IRihmVpaRZyf1EnKiw6p0zeUoZx/4h4dCUGPxRMpDH XqqMxcqWisvIc3MeLjMSkQ+D+XHT/6tgpmqY6cFvXD2CLwtL8aPDEHRB9Vg3FRPnxzDUV9pZfUiAM GAqscdag==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N94-7B; Mon, 12 Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id A9B5B302FB8; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 22F9530A70248; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: 
<20230612093537.977924652@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:21 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 08/57] sched: Simplify wake_up_if_idle() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
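The rq_lock guards added to sched.h below carry their struct rq_flags inside the guard object, the same way the irqsave spinlock guards from the cleanup patch carry the saved interrupt flags; a small usage sketch of that pattern (stats_lock, nr_events and both functions are invented, the guard macros are the ones from this series):

  /* hypothetical example -- not part of this patch */
  static DEFINE_SPINLOCK(stats_lock);
  static unsigned long nr_events;

  static void count_event(void)
  {
          /*
           * The saved flags live in the guard object (_T->flags) and are
           * restored when the guard goes out of scope.
           */
          guard(spinlock_irqsave)(&stats_lock);
          nr_events++;
  }

  static unsigned long read_events(void)
  {
          unsigned long val = 0;

          /* scoped_guard() bounds the critical section to the statement */
          scoped_guard (spinlock_irqsave, &stats_lock)
                  val = nr_events;

          return val;
  }

Guards also nest naturally, which is what the new wake_up_if_idle() relies on: the inner rq_lock guard is released before the outer rcu guard when the block is left.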
Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 20 ++++++-------------- kernel/sched/sched.h | 15 +++++++++++++++ 2 files changed, 21 insertions(+), 14 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3872,21 +3872,13 @@ static void __ttwu_queue_wakelist(struct void wake_up_if_idle(int cpu) { struct rq *rq = cpu_rq(cpu); - struct rq_flags rf; - rcu_read_lock(); - - if (!is_idle_task(rcu_dereference(rq->curr))) - goto out; - - rq_lock_irqsave(rq, &rf); - if (is_idle_task(rq->curr)) - resched_curr(rq); - /* Else CPU is not idle, do nothing here: */ - rq_unlock_irqrestore(rq, &rf); - -out: - rcu_read_unlock(); + guard(rcu)(); + if (is_idle_task(rcu_dereference(rq->curr))) { + guard(rq_lock)(rq); + if (is_idle_task(rq->curr)) + resched_curr(rq); + } } bool cpus_share_cache(int this_cpu, int that_cpu) --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1678,6 +1678,21 @@ rq_unlock(struct rq *rq, struct rq_flags raw_spin_rq_unlock(rq); } +DEFINE_LOCK_GUARD_1(rq_lock, struct rq, + rq_lock(_T->lock, &_T->rf), + rq_unlock(_T->lock, &_T->rf), + struct rq_flags rf) + +DEFINE_LOCK_GUARD_1(rq_lock_irq, struct rq, + rq_lock_irq(_T->lock, &_T->rf), + rq_unlock_irq(_T->lock, &_T->rf), + struct rq_flags rf) + +DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq, + rq_lock_irqsave(_T->lock, &_T->rf), + rq_unlock_irqrestore(_T->lock, &_T->rf), + struct rq_flags rf) + static inline struct rq * this_rq_lock_irq(struct rq_flags *rf) __acquires(rq->lock) From patchwork Mon Jun 12 09:07:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276145 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36DA6C7EE2E for ; Mon, 12 Jun 2023 09:56:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232194AbjFLJ4t (ORCPT ); Mon, 12 Jun 2023 05:56:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33052 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229929AbjFLJyU (ORCPT ); Mon, 12 Jun 2023 05:54:20 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5CD304C01; Mon, 12 Jun 2023 02:38:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=Ef1zuHRwXVIMGYBHxAKm7sDagWjauE1Zj2vzzzzoRbg=; b=QOBUTnrhk6MwC7czUrltnKrx5V g6P+nWRqgZa26lQjfIlMx00YyxNZpVuUIRfCdNMnxi5kbC2Lazl72dGaS/cbGsN8H7agp6QXR51ue hxU2UMxJ+c6yH91d2NwY0g330xmnbwgq+NDQSDRt4u0qZv3lz7j+YExaA9KAT0nyJHSBto2GsUwqf qERLsCohhiIBf1MtfNb5AwMZx6S/dp+YnCxlUA3bR658utElgiTYdcjXGUBZ587KNs/GEzxh+wsH4 i+TwfhCIU9Plk6t9x1Na78FV5bhvsYQK7MInNlO7IJId8DdC5JTCWhXjwdArNg/rTt+C657d/fWjs sz6d+CUA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kOy-0w; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 
(256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id B47DE302FF9; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 2D3DE30A70AC7; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.076428270@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:22 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 09/57] sched: Simplify ttwu() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 220 +++++++++++++++++++++++++--------------------------- 1 file changed, 108 insertions(+), 112 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3664,16 +3664,15 @@ ttwu_stat(struct task_struct *p, int cpu __schedstat_inc(p->stats.nr_wakeups_local); } else { struct sched_domain *sd; + guard(rcu)(); __schedstat_inc(p->stats.nr_wakeups_remote); - rcu_read_lock(); for_each_domain(rq->cpu, sd) { if (cpumask_test_cpu(cpu, sched_domain_span(sd))) { __schedstat_inc(sd->ttwu_wake_remote); break; } } - rcu_read_unlock(); } if (wake_flags & WF_MIGRATED) @@ -4135,10 +4134,9 @@ bool ttwu_state_match(struct task_struct static int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) { - unsigned long flags; + guard(preempt)(); int cpu, success = 0; - preempt_disable(); if (p == current) { /* * We're waking current, this means 'p->on_rq' and 'task_cpu(p) @@ -4165,129 +4163,127 @@ try_to_wake_up(struct task_struct *p, un * reordered with p->state check below. This pairs with smp_store_mb() * in set_current_state() that the waiting thread does. 
*/ - raw_spin_lock_irqsave(&p->pi_lock, flags); - smp_mb__after_spinlock(); - if (!ttwu_state_match(p, state, &success)) - goto unlock; + scoped_guard (raw_spinlock_irqsave, &p->pi_lock) { + smp_mb__after_spinlock(); + if (!ttwu_state_match(p, state, &success)) + break; - trace_sched_waking(p); + trace_sched_waking(p); - /* - * Ensure we load p->on_rq _after_ p->state, otherwise it would - * be possible to, falsely, observe p->on_rq == 0 and get stuck - * in smp_cond_load_acquire() below. - * - * sched_ttwu_pending() try_to_wake_up() - * STORE p->on_rq = 1 LOAD p->state - * UNLOCK rq->lock - * - * __schedule() (switch to task 'p') - * LOCK rq->lock smp_rmb(); - * smp_mb__after_spinlock(); - * UNLOCK rq->lock - * - * [task p] - * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq - * - * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in - * __schedule(). See the comment for smp_mb__after_spinlock(). - * - * A similar smb_rmb() lives in try_invoke_on_locked_down_task(). - */ - smp_rmb(); - if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags)) - goto unlock; + /* + * Ensure we load p->on_rq _after_ p->state, otherwise it would + * be possible to, falsely, observe p->on_rq == 0 and get stuck + * in smp_cond_load_acquire() below. + * + * sched_ttwu_pending() try_to_wake_up() + * STORE p->on_rq = 1 LOAD p->state + * UNLOCK rq->lock + * + * __schedule() (switch to task 'p') + * LOCK rq->lock smp_rmb(); + * smp_mb__after_spinlock(); + * UNLOCK rq->lock + * + * [task p] + * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq + * + * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in + * __schedule(). See the comment for smp_mb__after_spinlock(). + * + * A similar smb_rmb() lives in try_invoke_on_locked_down_task(). + */ + smp_rmb(); + if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags)) + break; #ifdef CONFIG_SMP - /* - * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be - * possible to, falsely, observe p->on_cpu == 0. - * - * One must be running (->on_cpu == 1) in order to remove oneself - * from the runqueue. - * - * __schedule() (switch to task 'p') try_to_wake_up() - * STORE p->on_cpu = 1 LOAD p->on_rq - * UNLOCK rq->lock - * - * __schedule() (put 'p' to sleep) - * LOCK rq->lock smp_rmb(); - * smp_mb__after_spinlock(); - * STORE p->on_rq = 0 LOAD p->on_cpu - * - * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in - * __schedule(). See the comment for smp_mb__after_spinlock(). - * - * Form a control-dep-acquire with p->on_rq == 0 above, to ensure - * schedule()'s deactivate_task() has 'happened' and p will no longer - * care about it's own p->state. See the comment in __schedule(). - */ - smp_acquire__after_ctrl_dep(); + /* + * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be + * possible to, falsely, observe p->on_cpu == 0. + * + * One must be running (->on_cpu == 1) in order to remove oneself + * from the runqueue. + * + * __schedule() (switch to task 'p') try_to_wake_up() + * STORE p->on_cpu = 1 LOAD p->on_rq + * UNLOCK rq->lock + * + * __schedule() (put 'p' to sleep) + * LOCK rq->lock smp_rmb(); + * smp_mb__after_spinlock(); + * STORE p->on_rq = 0 LOAD p->on_cpu + * + * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in + * __schedule(). See the comment for smp_mb__after_spinlock(). + * + * Form a control-dep-acquire with p->on_rq == 0 above, to ensure + * schedule()'s deactivate_task() has 'happened' and p will no longer + * care about it's own p->state. See the comment in __schedule(). 
+ */ + smp_acquire__after_ctrl_dep(); - /* - * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq - * == 0), which means we need to do an enqueue, change p->state to - * TASK_WAKING such that we can unlock p->pi_lock before doing the - * enqueue, such as ttwu_queue_wakelist(). - */ - WRITE_ONCE(p->__state, TASK_WAKING); + /* + * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq + * == 0), which means we need to do an enqueue, change p->state to + * TASK_WAKING such that we can unlock p->pi_lock before doing the + * enqueue, such as ttwu_queue_wakelist(). + */ + WRITE_ONCE(p->__state, TASK_WAKING); - /* - * If the owning (remote) CPU is still in the middle of schedule() with - * this task as prev, considering queueing p on the remote CPUs wake_list - * which potentially sends an IPI instead of spinning on p->on_cpu to - * let the waker make forward progress. This is safe because IRQs are - * disabled and the IPI will deliver after on_cpu is cleared. - * - * Ensure we load task_cpu(p) after p->on_cpu: - * - * set_task_cpu(p, cpu); - * STORE p->cpu = @cpu - * __schedule() (switch to task 'p') - * LOCK rq->lock - * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu) - * STORE p->on_cpu = 1 LOAD p->cpu - * - * to ensure we observe the correct CPU on which the task is currently - * scheduling. - */ - if (smp_load_acquire(&p->on_cpu) && - ttwu_queue_wakelist(p, task_cpu(p), wake_flags)) - goto unlock; + /* + * If the owning (remote) CPU is still in the middle of schedule() with + * this task as prev, considering queueing p on the remote CPUs wake_list + * which potentially sends an IPI instead of spinning on p->on_cpu to + * let the waker make forward progress. This is safe because IRQs are + * disabled and the IPI will deliver after on_cpu is cleared. + * + * Ensure we load task_cpu(p) after p->on_cpu: + * + * set_task_cpu(p, cpu); + * STORE p->cpu = @cpu + * __schedule() (switch to task 'p') + * LOCK rq->lock + * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu) + * STORE p->on_cpu = 1 LOAD p->cpu + * + * to ensure we observe the correct CPU on which the task is currently + * scheduling. + */ + if (smp_load_acquire(&p->on_cpu) && + ttwu_queue_wakelist(p, task_cpu(p), wake_flags)) + break; - /* - * If the owning (remote) CPU is still in the middle of schedule() with - * this task as prev, wait until it's done referencing the task. - * - * Pairs with the smp_store_release() in finish_task(). - * - * This ensures that tasks getting woken will be fully ordered against - * their previous state and preserve Program Order. - */ - smp_cond_load_acquire(&p->on_cpu, !VAL); + /* + * If the owning (remote) CPU is still in the middle of schedule() with + * this task as prev, wait until it's done referencing the task. + * + * Pairs with the smp_store_release() in finish_task(). + * + * This ensures that tasks getting woken will be fully ordered against + * their previous state and preserve Program Order. 
+ */ + smp_cond_load_acquire(&p->on_cpu, !VAL); - cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU); - if (task_cpu(p) != cpu) { - if (p->in_iowait) { - delayacct_blkio_end(p); - atomic_dec(&task_rq(p)->nr_iowait); - } + cpu = select_task_rq(p, p->wake_cpu, wake_flags | WF_TTWU); + if (task_cpu(p) != cpu) { + if (p->in_iowait) { + delayacct_blkio_end(p); + atomic_dec(&task_rq(p)->nr_iowait); + } - wake_flags |= WF_MIGRATED; - psi_ttwu_dequeue(p); - set_task_cpu(p, cpu); - } + wake_flags |= WF_MIGRATED; + psi_ttwu_dequeue(p); + set_task_cpu(p, cpu); + } #else - cpu = task_cpu(p); + cpu = task_cpu(p); #endif /* CONFIG_SMP */ - ttwu_queue(p, cpu, wake_flags); -unlock: - raw_spin_unlock_irqrestore(&p->pi_lock, flags); + ttwu_queue(p, cpu, wake_flags); + } out: if (success) ttwu_stat(p, task_cpu(p), wake_flags); - preempt_enable(); return success; } From patchwork Mon Jun 12 09:07:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276173 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42616C7EE43 for ; Mon, 12 Jun 2023 09:58:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230383AbjFLJ6I (ORCPT ); Mon, 12 Jun 2023 05:58:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33462 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232312AbjFLJyT (ORCPT ); Mon, 12 Jun 2023 05:54:19 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 476B449FD; Mon, 12 Jun 2023 02:38:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=drbNuL55DYuL5wiscrlTnGAb/cO6SeGIhgtGTiA6PhQ=; b=VSEjB2saA1r44d+t/vL6tc5EoG xB4CG9CBZHIyB4zdQuRxzQ1Q7TJzwADvt7PtjKTLVENXxebGGeYnzeA/aNRsjwtBQcBhdw4FFc4IK 0+1KXg+b3RLYPmQQvXAuMTENNKt/wXQirviZsGNYMJsc8yMv6AgbvSWbtKsnbfSzEoEJrRcd+lkte 4b7/355f2S3PVCwS3IueHM4tNu6LtFu8YsWTsDpFO14hwHxbY8on3IDwGHEQAM0/61U1d4CsGZOmx CEkY9pA/PvCBwARY6IQ+Op5jTjaOmySf1HLi3LEoDC/B3NJHx/W2kOF67517FH7LjugtEYLajl9zJ Q4ROTiZg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kP0-13; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id BE76330313F; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 3207B30A70AC5; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.154498590@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:23 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, 
nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 10/57] sched: Simplify sched_exec() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 21 +++++++++------------ 1 file changed, 9 insertions(+), 12 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -5431,23 +5431,20 @@ unsigned int nr_iowait(void) void sched_exec(void) { struct task_struct *p = current; - unsigned long flags; + struct migration_arg arg; int dest_cpu; - raw_spin_lock_irqsave(&p->pi_lock, flags); - dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), WF_EXEC); - if (dest_cpu == smp_processor_id()) - goto unlock; + scoped_guard (raw_spinlock_irqsave, &p->pi_lock) { + dest_cpu = p->sched_class->select_task_rq(p, task_cpu(p), WF_EXEC); + if (dest_cpu == smp_processor_id()) + return; - if (likely(cpu_active(dest_cpu))) { - struct migration_arg arg = { p, dest_cpu }; + if (unlikely(!cpu_active(dest_cpu))) + return; - raw_spin_unlock_irqrestore(&p->pi_lock, flags); - stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); - return; + arg = (struct migration_arg){ p, dest_cpu }; } -unlock: - raw_spin_unlock_irqrestore(&p->pi_lock, flags); + stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); } #endif From patchwork Mon Jun 12 09:07:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9950BC7EE43 for ; Mon, 12 Jun 2023 09:57:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235711AbjFLJ5z (ORCPT ); Mon, 12 Jun 2023 05:57:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33472 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230383AbjFLJyT (ORCPT ); Mon, 12 Jun 2023 05:54:19 -0400 Received: from 
desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5C7C24C00; Mon, 12 Jun 2023 02:38:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=q92jsVFPZSP8G7FOeDp8TiMSNcZFRY9w1smIqAIutpc=; b=ZjjVcZol4/9Mg/CL1z/yvfA0AG OAFbqGJmR4WaLGPuac3DsxuC/cSLfbEfZYKTwqp9zVOHAsflVFZRmumwh6cw0dmuh8tYBucZ3u0Bv rYCwv7X1JMsEe/Bto2w5Ft+VMSeXNZoL9d7mW5B4ILoyaUUzjXUpOLsiTzmFWQE55vsKoxPa9Ol60 eK29omO/sJmvbBhucM9rXlm3uLaAGBN0Ne0S14xbL5cU904oFMiq6FXiZdX6eIaV+eBG+pu5krCbP WhCEIB+k7OA7yII2HdbfhSoriI3sHSADL+6mMPmTzKFHg7w0eK75g37vf6kFTHN40d5lPG/ZFftsp MVAqneTQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kOz-11; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id C5F08303164; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 3619930A70ADD; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.225353309@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:24 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 11/57] sched: Simplify sched_tick_remote() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
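The shape of this conversion is worth spelling out: the tick-stopped check is inverted and the locked work is nested under it, so the scope of guard(rq_lock_irq)(rq) bounds the critical section, and the requeue tail at the end of the function runs after the unlock, without the old out_unlock and out_requeue labels. A rough plain-C sketch of that shape follows; the rq_guard struct and its constructor/destructor are made up here to stand in for the guard object that DEFINE_LOCK_GUARD_1() generates (the real one also carries the struct rq_flags).

/*
 * Sketch of the shape used in sched_tick_remote(): the locked region
 * becomes an inner block guarded by a cleanup variable, and the
 * common tail runs after the lock has been dropped at the closing
 * brace.  Userspace approximation, illustrative names only.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct rq_guard {
        pthread_mutex_t *lock;  /* the kernel guard also carries rq_flags */
};

static struct rq_guard rq_guard_ctor(pthread_mutex_t *lock)
{
        pthread_mutex_lock(lock);
        return (struct rq_guard){ .lock = lock };
}

static void rq_guard_dtor(struct rq_guard *g)
{
        pthread_mutex_unlock(g->lock);
}

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

static void tick_remote(bool tick_stopped, bool cpu_online)
{
        if (tick_stopped) {
                struct rq_guard g __attribute__((cleanup(rq_guard_dtor))) =
                        rq_guard_ctor(&rq_lock);

                if (cpu_online)
                        printf("remote tick work, lock held\n");
        }       /* rq_guard_dtor() runs here, before the common tail */

        printf("requeue the tick work, lock released\n");
}

int main(void)
{
        tick_remote(true, true);
        tick_remote(true, false);
        tick_remote(false, false);
        return 0;
}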
Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 43 ++++++++++++++++++------------------------- 1 file changed, 18 insertions(+), 25 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -5651,9 +5651,6 @@ static void sched_tick_remote(struct wor struct tick_work *twork = container_of(dwork, struct tick_work, work); int cpu = twork->cpu; struct rq *rq = cpu_rq(cpu); - struct task_struct *curr; - struct rq_flags rf; - u64 delta; int os; /* @@ -5663,30 +5660,26 @@ static void sched_tick_remote(struct wor * statistics and checks timeslices in a time-independent way, regardless * of when exactly it is running. */ - if (!tick_nohz_tick_stopped_cpu(cpu)) - goto out_requeue; + if (tick_nohz_tick_stopped_cpu(cpu)) { + guard(rq_lock_irq)(rq); + struct task_struct *curr = rq->curr; + + if (cpu_online(cpu)) { + update_rq_clock(rq); + + if (!is_idle_task(curr)) { + /* + * Make sure the next tick runs within a + * reasonable amount of time. + */ + u64 delta = rq_clock_task(rq) - curr->se.exec_start; + WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3); + } + curr->sched_class->task_tick(rq, curr, 0); - rq_lock_irq(rq, &rf); - curr = rq->curr; - if (cpu_is_offline(cpu)) - goto out_unlock; - - update_rq_clock(rq); - - if (!is_idle_task(curr)) { - /* - * Make sure the next tick runs within a reasonable - * amount of time. - */ - delta = rq_clock_task(rq) - curr->se.exec_start; - WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3); + calc_load_nohz_remote(rq); + } } - curr->sched_class->task_tick(rq, curr, 0); - - calc_load_nohz_remote(rq); -out_unlock: - rq_unlock_irq(rq, &rf); -out_requeue: /* * Run the remote tick once per second (1Hz). This arbitrary From patchwork Mon Jun 12 09:07:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276159 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BD8DC83005 for ; Mon, 12 Jun 2023 09:57:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231501AbjFLJ51 (ORCPT ); Mon, 12 Jun 2023 05:57:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231321AbjFLJyX (ORCPT ); Mon, 12 Jun 2023 05:54:23 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B6DF4C11; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=4+/Q1nIn7pjeaoOkqiN8ZoN9dOzRXj0Qx8pmucBG9KU=; b=JTIbW/gvOSQ58b4TCHUoHGEDh4 Y1YJaLE27tmAx/JmH64kRErvzD0KUqyqfrX71MGF7wv0MltPgLgnjyg/kORWC199iFcMaqxABBB85 2ngIl0UzK18xlGkdJHKfnwYwA/Wl2lR2NRNxaCAnNGGdy6tjZchz97RFC/huuNDIRiljM+iyvvmex VegKabbNI/sdQ6KSFyu7uT5aaMn9NnyH70zAM87Em+UPxNfTHrP0e+GNEWh2sXqUMKqamS/UzmCip zhr1rYAeBzRzlh1FU7evBmwXBmRVHZ/P3LFM8ztGfR2bODV96a59vURa3b146za2hED5IY+f1Izzy M5i/AT6A==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N97-A5; Mon, 12 
Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D205C303189; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 3C3B930A70ADE; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.307089780@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:25 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 12/57] sched: Simplify try_steal_cookie() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 21 +++++++++------------ 1 file changed, 9 insertions(+), 12 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6229,19 +6229,19 @@ static bool try_steal_cookie(int this, i unsigned long cookie; bool success = false; - local_irq_disable(); - double_rq_lock(dst, src); + guard(irq)(); + guard(double_rq_lock)(dst, src); cookie = dst->core->core_cookie; if (!cookie) - goto unlock; + return false; if (dst->curr != dst->idle) - goto unlock; + return false; p = sched_core_find(src, cookie); if (!p) - goto unlock; + return false; do { if (p == src->core_pick || p == src->curr) @@ -6253,9 +6253,10 @@ static bool try_steal_cookie(int this, i if (p->core_occupation > dst->idle->core_occupation) goto next; /* - * sched_core_find() and sched_core_next() will ensure that task @p - * is not throttled now, we also need to check whether the runqueue - * of the destination CPU is being throttled. + * sched_core_find() and sched_core_next() will ensure + * that task @p is not throttled now, we also need to + * check whether the runqueue of the destination CPU is + * being throttled. 
*/ if (sched_task_is_throttled(p, this)) goto next; @@ -6273,10 +6274,6 @@ static bool try_steal_cookie(int this, i p = sched_core_next(p, cookie); } while (p); -unlock: - double_rq_unlock(dst, src); - local_irq_enable(); - return success; } From patchwork Mon Jun 12 09:07:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276188 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DF9EC7EE25 for ; Mon, 12 Jun 2023 09:58:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236135AbjFLJ6d (ORCPT ); Mon, 12 Jun 2023 05:58:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230408AbjFLJyY (ORCPT ); Mon, 12 Jun 2023 05:54:24 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7806B5FF2; Mon, 12 Jun 2023 02:38:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=tN5fF7Wyh35qBJFCdjEuKMaMwIiqau8XmW0j2x/SMjk=; b=pTQEAHGpF5fz0pLIBta9z8QWvF V3VTKbdm5MrZ6Uq6sIyVOPIlJRamTVCnfvzmEg5CiuDQWZwIOhMu3l9PZQuNwkMb8MTxXwLb05k1F pXKfEwe9Z0i663rYdw9pUPCOE+2AHi/lm0RzSNXMm4U2w1iNa1ZIxBs9afcsQW/V20UMXzYer3WCT zF9DC46u7wLZd3iCRc+AmRVnu9fCua5kGt94sVVOX6RESLLLndvS1OIwmKToKe+pfW/q3/OlKUMUJ NzNjcl62XQoQkQaJg1jmU538/Cwb5EJWwMJr4XxONBZWuSlObMm3Fh7GXzLLBUAOiqFAhv2ZaNNZD zVv+iV3g==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N98-AY; Mon, 12 Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id D8E9630318E; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 416A530A70ADF; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.393498853@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:26 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, 
frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 13/57] sched: Simplify sched_core_cpu_{starting,deactivate}() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 27 ++++++++++++--------------- 1 file changed, 12 insertions(+), 15 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6331,20 +6331,24 @@ static void queue_core_balance(struct rq queue_balance_callback(rq, &per_cpu(core_balance_head, rq->cpu), sched_core_balance); } +DEFINE_LOCK_GUARD_1(core_lock, int, + sched_core_lock(*_T->lock, &_T->flags), + sched_core_unlock(*_T->lock, &_T->flags), + unsigned long flags) + static void sched_core_cpu_starting(unsigned int cpu) { const struct cpumask *smt_mask = cpu_smt_mask(cpu); struct rq *rq = cpu_rq(cpu), *core_rq = NULL; - unsigned long flags; int t; - sched_core_lock(cpu, &flags); + guard(core_lock)(&cpu); WARN_ON_ONCE(rq->core != rq); /* if we're the first, we'll be our own leader */ if (cpumask_weight(smt_mask) == 1) - goto unlock; + return; /* find the leader */ for_each_cpu(t, smt_mask) { @@ -6358,7 +6362,7 @@ static void sched_core_cpu_starting(unsi } if (WARN_ON_ONCE(!core_rq)) /* whoopsie */ - goto unlock; + return; /* install and validate core_rq */ for_each_cpu(t, smt_mask) { @@ -6369,29 +6373,25 @@ static void sched_core_cpu_starting(unsi WARN_ON_ONCE(rq->core != core_rq); } - -unlock: - sched_core_unlock(cpu, &flags); } static void sched_core_cpu_deactivate(unsigned int cpu) { const struct cpumask *smt_mask = cpu_smt_mask(cpu); struct rq *rq = cpu_rq(cpu), *core_rq = NULL; - unsigned long flags; int t; - sched_core_lock(cpu, &flags); + guard(core_lock)(&cpu); /* if we're the last man standing, nothing to do */ if (cpumask_weight(smt_mask) == 1) { WARN_ON_ONCE(rq->core != rq); - goto unlock; + return; } /* if we're not the leader, nothing to do */ if (rq->core != rq) - goto unlock; + return; /* find a new leader */ for_each_cpu(t, smt_mask) { @@ -6402,7 +6402,7 @@ static void sched_core_cpu_deactivate(un } if (WARN_ON_ONCE(!core_rq)) /* impossible */ - goto unlock; + return; /* copy the shared state to the new leader */ core_rq->core_task_seq = rq->core_task_seq; @@ -6424,9 +6424,6 @@ static void sched_core_cpu_deactivate(un rq = cpu_rq(t); rq->core = core_rq; } - -unlock: - sched_core_unlock(cpu, &flags); } static inline void sched_core_cpu_dying(unsigned int cpu) From patchwork Mon Jun 12 09:07:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276179 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DE814C87FE1 for ; Mon, 12 Jun 2023 09:58:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229729AbjFLJ6P (ORCPT ); Mon, 12 Jun 2023 05:58:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33826 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229697AbjFLJyV (ORCPT ); Mon, 12 Jun 2023 05:54:21 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6A47B4C0E; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=MG7QpnmCwLygWJ81Mn2iabAPA0zngwNBxpS2UJOPCco=; b=PgmXbkmwG9TYpDGdocx+LtR6tM l370b2N9nNxi1RmPWxCNObTJ4Voj6TnwUaEMamYzTJY6EkmNWG1KMSqPUKs7IW+4Maipk3vvMjEFR ZaMxjhNAcQQyGWTARi2LdpPcjAIDpMadcwYBjkd27vA7PC1Va2aKU7sfH8xVuq7SvVINulgGTJ51x XjR8xr7yUJ3LF2wwQJd4xe6tOSQnyO6M7lQz1xek40kDnSUJiKz9biq82Fj0QVkT0KEsORIAJNpYr dGHAHUGeUs4DbodpgoR1h3ajXFzvYjku3P4L5rQcre7lw9MZ79S1OiErMltwKmghwzvlIgQByVkHx pDKs2gjQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0g-002N99-Aw; Mon, 12 Jun 2023 09:38:50 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id E71BC303196; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 46BE630A77B54; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.465891562@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:27 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, 
linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 14/57] sched: Simplify set_user_nice() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 13 ++++++------- kernel/sched/sched.h | 5 +++++ 2 files changed, 11 insertions(+), 7 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7119,9 +7119,8 @@ static inline int rt_effective_prio(stru void set_user_nice(struct task_struct *p, long nice) { bool queued, running; - int old_prio; - struct rq_flags rf; struct rq *rq; + int old_prio; if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE) return; @@ -7129,7 +7128,9 @@ void set_user_nice(struct task_struct *p * We have to be careful, if called from sys_setpriority(), * the task might be in the middle of scheduling on another CPU. */ - rq = task_rq_lock(p, &rf); + CLASS(task_rq_lock, rq_guard)(p); + rq = rq_guard.rq; + update_rq_clock(rq); /* @@ -7140,8 +7141,9 @@ void set_user_nice(struct task_struct *p */ if (task_has_dl_policy(p) || task_has_rt_policy(p)) { p->static_prio = NICE_TO_PRIO(nice); - goto out_unlock; + return; } + queued = task_on_rq_queued(p); running = task_current(rq, p); if (queued) @@ -7164,9 +7166,6 @@ void set_user_nice(struct task_struct *p * lowered its priority, then reschedule its CPU: */ p->sched_class->prio_changed(rq, p, old_prio); - -out_unlock: - task_rq_unlock(rq, p, &rf); } EXPORT_SYMBOL(set_user_nice); --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1630,6 +1630,11 @@ task_rq_unlock(struct rq *rq, struct tas raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags); } +DEFINE_LOCK_GUARD_1(task_rq_lock, struct task_struct, + _T->rq = task_rq_lock(_T->lock, &_T->rf), + task_rq_unlock(_T->rq, _T->lock, &_T->rf), + struct rq *rq; struct rq_flags rf) + static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf) __acquires(rq->lock) From patchwork Mon Jun 12 09:07:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276172 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84F11C7EE25 for ; Mon, 12 Jun 2023 09:58:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235991AbjFLJ6F (ORCPT ); Mon, 12 Jun 2023 05:58:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230025AbjFLJyV (ORCPT ); Mon, 12 Jun 2023 05:54:21 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 059F24C0D; Mon, 12 Jun 2023 02:38:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=by7PPFcazB/pK+2FaRs+OdI6JGfC9WsBX82VOJ7mM6o=; b=Y9O0dNQIJUKlbjZtt4rxwr8Rv6 
BevFAXrP1152xD7o/GddMsCg4tPJSIDp0cRHFpQZjBJvwythjmmoKl8IQGNgf1x1/9e+43fYWMJqz sBca6vwafwE3K9qpMLJDwrRFibEGyiTjmyFg9Q/GYkE3Zg1mWJ11eKDdJ/+LPCwS+GUqSqRrWiYkG 0QVjxQGVtDJq1lpG2kA59EZhMgxt2zHPN6qItB5KHnuG5+qudT11EyJ6SbRHs4K8aDlVGtBnaU2GF XuieTeZwR3eToElJB0vqlwQ9UdcuvS6R9ZY/l/O9ZJhrDgojoRHi5Uh+8R/BJOkRPZkJa+DWzn+eL 9gEWkzJw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kP9-1f; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id ECD763031B0; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 4C66430A77B55; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.546520916@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:28 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 15/57] sched: Simplify syscalls References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 154 ++++++++++++++++++++++------------------------------ 1 file changed, 68 insertions(+), 86 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7425,6 +7425,21 @@ static struct task_struct *find_process_ return pid ? 
find_task_by_vpid(pid) : current; } +static struct task_struct *find_get_task(pid_t pid) +{ + struct task_struct *p; + guard(rcu)(); + + p = find_process_by_pid(pid); + if (likely(p)) + get_task_struct(p); + + return p; +} + +DEFINE_CLASS(find_get_task, struct task_struct *, if (_T) put_task_struct(_T), + find_get_task(pid), pid_t pid) + /* * sched_setparam() passes in -1 for its policy, to let the functions * it calls know not to change it. @@ -7462,14 +7477,11 @@ static void __setscheduler_params(struct static bool check_same_owner(struct task_struct *p) { const struct cred *cred = current_cred(), *pcred; - bool match; + guard(rcu)(); - rcu_read_lock(); pcred = __task_cred(p); - match = (uid_eq(cred->euid, pcred->euid) || - uid_eq(cred->euid, pcred->uid)); - rcu_read_unlock(); - return match; + return (uid_eq(cred->euid, pcred->euid) || + uid_eq(cred->euid, pcred->uid)); } /* @@ -7873,27 +7885,17 @@ static int do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param) { struct sched_param lparam; - struct task_struct *p; - int retval; if (!param || pid < 0) return -EINVAL; if (copy_from_user(&lparam, param, sizeof(struct sched_param))) return -EFAULT; - rcu_read_lock(); - retval = -ESRCH; - p = find_process_by_pid(pid); - if (likely(p)) - get_task_struct(p); - rcu_read_unlock(); - - if (likely(p)) { - retval = sched_setscheduler(p, policy, &lparam); - put_task_struct(p); - } + CLASS(find_get_task, p)(pid); + if (!p) + return -ESRCH; - return retval; + return sched_setscheduler(p, policy, &lparam); } /* @@ -7989,7 +7991,6 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pi unsigned int, flags) { struct sched_attr attr; - struct task_struct *p; int retval; if (!uattr || pid < 0 || flags) @@ -8004,21 +8005,14 @@ SYSCALL_DEFINE3(sched_setattr, pid_t, pi if (attr.sched_flags & SCHED_FLAG_KEEP_POLICY) attr.sched_policy = SETPARAM_POLICY; - rcu_read_lock(); - retval = -ESRCH; - p = find_process_by_pid(pid); - if (likely(p)) - get_task_struct(p); - rcu_read_unlock(); + CLASS(find_get_task, p)(pid); + if (!p) + return -ESRCH; - if (likely(p)) { - if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS) - get_params(p, &attr); - retval = sched_setattr(p, &attr); - put_task_struct(p); - } + if (attr.sched_flags & SCHED_FLAG_KEEP_PARAMS) + get_params(p, &attr); - return retval; + return sched_setattr(p, &attr); } /** @@ -8036,16 +8030,17 @@ SYSCALL_DEFINE1(sched_getscheduler, pid_ if (pid < 0) return -EINVAL; - retval = -ESRCH; - rcu_read_lock(); + guard(rcu)(); p = find_process_by_pid(pid); - if (p) { - retval = security_task_getscheduler(p); - if (!retval) - retval = p->policy - | (p->sched_reset_on_fork ? SCHED_RESET_ON_FORK : 0); + if (!p) + return -ESRCH; + + retval = security_task_getscheduler(p); + if (!retval) { + retval = p->policy; + if (p->sched_reset_on_fork) + retval |= SCHED_RESET_ON_FORK; } - rcu_read_unlock(); return retval; } @@ -8066,30 +8061,23 @@ SYSCALL_DEFINE2(sched_getparam, pid_t, p if (!param || pid < 0) return -EINVAL; - rcu_read_lock(); - p = find_process_by_pid(pid); - retval = -ESRCH; - if (!p) - goto out_unlock; + scoped_guard (rcu) { + p = find_process_by_pid(pid); + if (!p) + return -ESRCH; - retval = security_task_getscheduler(p); - if (retval) - goto out_unlock; + retval = security_task_getscheduler(p); + if (retval) + return retval; - if (task_has_rt_policy(p)) - lp.sched_priority = p->rt_priority; - rcu_read_unlock(); + if (task_has_rt_policy(p)) + lp.sched_priority = p->rt_priority; + } /* * This one might sleep, we cannot do it with a spinlock held ... 
*/ - retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0; - - return retval; - -out_unlock: - rcu_read_unlock(); - return retval; + return copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0; } /* @@ -8149,39 +8137,33 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pi usize < SCHED_ATTR_SIZE_VER0 || flags) return -EINVAL; - rcu_read_lock(); - p = find_process_by_pid(pid); - retval = -ESRCH; - if (!p) - goto out_unlock; + scoped_guard (rcu) { + p = find_process_by_pid(pid); + if (!p) + return -ESRCH; - retval = security_task_getscheduler(p); - if (retval) - goto out_unlock; + retval = security_task_getscheduler(p); + if (retval) + return retval; - kattr.sched_policy = p->policy; - if (p->sched_reset_on_fork) - kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK; - get_params(p, &kattr); - kattr.sched_flags &= SCHED_FLAG_ALL; + kattr.sched_policy = p->policy; + if (p->sched_reset_on_fork) + kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK; + get_params(p, &kattr); + kattr.sched_flags &= SCHED_FLAG_ALL; #ifdef CONFIG_UCLAMP_TASK - /* - * This could race with another potential updater, but this is fine - * because it'll correctly read the old or the new value. We don't need - * to guarantee who wins the race as long as it doesn't return garbage. - */ - kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value; - kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value; + /* + * This could race with another potential updater, but this is fine + * because it'll correctly read the old or the new value. We don't need + * to guarantee who wins the race as long as it doesn't return garbage. + */ + kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value; + kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value; #endif - - rcu_read_unlock(); + } return sched_attr_copy_to_user(uattr, &kattr, usize); - -out_unlock: - rcu_read_unlock(); - return retval; } #ifdef CONFIG_SMP From patchwork Mon Jun 12 09:07:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276156 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 464BAC7EE25 for ; Mon, 12 Jun 2023 09:57:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234987AbjFLJ5T (ORCPT ); Mon, 12 Jun 2023 05:57:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33802 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232392AbjFLJyV (ORCPT ); Mon, 12 Jun 2023 05:54:21 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E2D994C09; Mon, 12 Jun 2023 02:38:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=vDHS8Plr2cencdpiH/auo6E776oNoOhgYdcpKlgKc4Q=; b=XfspiS8Usbqb2dpT01GShTpXYF 0wwyX4AcSTYBMwe4XCE0JXv0GEXukM5Z+wJrQ5ydemtoQKvufj+e0HtUrkEv50Xk/7NS5f+kXoBH9 NRHSIXz+gP5a9fY839hRRMg3XMU8GiI1b3Pt7sMPV4OqaS+t6EGwmATXLiqnN97MCbKsORuHO2Pr8 vz3ITtR3OFV8f2JSdzAkJYccKWoz7g4QT5P8RYF55XvZaHjj/8iBTBPAQu9E+hOhqdBtj1hgmbPlJ 
c78PyIoTct3THwyJGFh1bmBuUYmoopUkjAyRnngrF2sMB5pJoA+wZWUNb+w3pp3J3RKYPSrqc3iKz xqluhn6g==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kPA-1f; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 04CC13031B9; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 53B9E30A77B56; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.640502622@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:29 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 16/57] sched: Simplify sched_{set,get}affinity() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
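This conversion leans on the find_get_task class introduced in the previous patch: the constructor looks the task up under RCU and takes a reference, and the destructor drops that reference at scope exit, so the explicit get_task_struct()/put_task_struct() pairing and the out_put_task label disappear. The sketch below approximates that constructor/destructor pairing in userspace C; the task table, refcount field and FIND_GET_TASK macro are invented for illustration and are not the DEFINE_CLASS() output.

/*
 * Sketch of the CLASS(find_get_task, p)(pid) pattern: the constructor
 * performs the lookup and takes a reference, the destructor drops it
 * at scope exit, so every early return is covered.  Made-up task and
 * refcount types, illustrative only.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct task {
        int pid;
        int refcount;
        bool no_setaffinity;
};

static struct task table[] = {
        { .pid = 1, .refcount = 1 },
        { .pid = 2, .refcount = 1, .no_setaffinity = true },
};

static struct task *find_get_task(int pid)
{
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
                if (table[i].pid == pid) {
                        table[i].refcount++;    /* "get_task_struct()" */
                        return &table[i];
                }
        }
        return NULL;
}

static void put_task(struct task **p)
{
        if (*p)
                (*p)->refcount--;               /* "put_task_struct()" */
}

#define FIND_GET_TASK(name, pid) \
        struct task *name __attribute__((cleanup(put_task))) = find_get_task(pid)

static int set_affinity(int pid)
{
        FIND_GET_TASK(p, pid);
        if (!p)
                return -1;      /* -ESRCH, nothing to drop */

        if (p->no_setaffinity)
                return -2;      /* -EINVAL, reference still dropped */

        printf("set affinity of pid %d\n", p->pid);
        return 0;
}

int main(void)
{
        printf("%d\n", set_affinity(1));
        printf("%d\n", set_affinity(2));
        printf("%d\n", set_affinity(3));
        return 0;
}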
Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 53 +++++++++++++--------------------------------------- 1 file changed, 14 insertions(+), 39 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8258,39 +8258,24 @@ long sched_setaffinity(pid_t pid, const { struct affinity_context ac; struct cpumask *user_mask; - struct task_struct *p; int retval; - rcu_read_lock(); - - p = find_process_by_pid(pid); - if (!p) { - rcu_read_unlock(); + CLASS(find_get_task, p)(pid); + if (!p) return -ESRCH; - } - /* Prevent p going away */ - get_task_struct(p); - rcu_read_unlock(); - - if (p->flags & PF_NO_SETAFFINITY) { - retval = -EINVAL; - goto out_put_task; - } + if (p->flags & PF_NO_SETAFFINITY) + return -EINVAL; if (!check_same_owner(p)) { - rcu_read_lock(); - if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) { - rcu_read_unlock(); - retval = -EPERM; - goto out_put_task; - } - rcu_read_unlock(); + guard(rcu)(); + if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) + return -EPERM; } retval = security_task_setscheduler(p); if (retval) - goto out_put_task; + return retval; /* * With non-SMP configs, user_cpus_ptr/user_mask isn't used and @@ -8300,8 +8285,7 @@ long sched_setaffinity(pid_t pid, const if (user_mask) { cpumask_copy(user_mask, in_mask); } else if (IS_ENABLED(CONFIG_SMP)) { - retval = -ENOMEM; - goto out_put_task; + return -ENOMEM; } ac = (struct affinity_context){ @@ -8313,8 +8297,6 @@ long sched_setaffinity(pid_t pid, const retval = __sched_setaffinity(p, &ac); kfree(ac.user_mask); -out_put_task: - put_task_struct(p); return retval; } @@ -8356,28 +8338,21 @@ SYSCALL_DEFINE3(sched_setaffinity, pid_t long sched_getaffinity(pid_t pid, struct cpumask *mask) { struct task_struct *p; - unsigned long flags; int retval; - rcu_read_lock(); - - retval = -ESRCH; + guard(rcu)(); p = find_process_by_pid(pid); if (!p) - goto out_unlock; + return -ESRCH; retval = security_task_getscheduler(p); if (retval) - goto out_unlock; + return retval; - raw_spin_lock_irqsave(&p->pi_lock, flags); + guard(raw_spinlock_irqsave)(&p->pi_lock); cpumask_and(mask, &p->cpus_mask, cpu_active_mask); - raw_spin_unlock_irqrestore(&p->pi_lock, flags); -out_unlock: - rcu_read_unlock(); - - return retval; + return 0; } /** From patchwork Mon Jun 12 09:07:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276190 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DE07C7EE45 for ; Mon, 12 Jun 2023 09:58:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236137AbjFLJ6f (ORCPT ); Mon, 12 Jun 2023 05:58:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231855AbjFLJyZ (ORCPT ); Mon, 12 Jun 2023 05:54:25 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A25155FF3; Mon, 12 Jun 2023 02:38:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; 
bh=9sqy4wj1SuZG+E1ZOkNDiw8zR5F8T4bHRpYkByM8S6o=; b=HNjtc1kou6jUKrPa93YlSTKipB VhOpXUPSBCgjriMwtuZYiFbSO92NjXbfCsFmRFtzO7yfFSXIVuY/yN+Yj7pGBcnBI5S0hv8fNdtcN //wmbKzY575t4i5oKfCiWpecYl0rAG6BFfdMSjKIXuCRhpuWcyhO2gVcrMTipwAY6sjO79StTEAIQ VC8/ybwDlXuWf1vL/cHd1fVTSgl2eqKrBrD54rFBuftQGd/gu653fosXknQhYF8Op6cNrlThojVpp 8VS39gvDd4GbsgBVynRhv46UkSxPgULiSAQIabpdcmA7w8/j+yhjfaJwXT+KXZjarghiosRq198gW o3zyz1kQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kPB-1k; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0BFFE3031BE; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 5832930A77B57; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.712217968@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:30 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 17/57] sched: Simplify yield_to() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
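The interesting part is scoped_guard(): the guard is tied to an explicit block rather than to the remainder of the function, and a return from inside that block still runs the releases (here the IRQ restore and the double rq unlock) on the way out. Below is a rough user-space imitation of that shape, built from a for statement plus the cleanup attribute; my_scoped_guard, lock_scope and try_consume are invented for the sketch and are not the kernel macros:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

struct lock_scope {
        pthread_mutex_t *lock;
};

static struct lock_scope lock_scope_enter(pthread_mutex_t *lock)
{
        pthread_mutex_lock(lock);
        printf("  lock taken\n");
        return (struct lock_scope){ .lock = lock };
}

static void lock_scope_exit(struct lock_scope *s)
{
        pthread_mutex_unlock(s->lock);
        printf("  lock dropped\n");
}

/*
 * The guard lives in the for statement, so its cleanup runs when the
 * attached block is left, whether by falling out of it or by a return
 * from inside it.
 */
#define my_scoped_guard(lockp)                                          \
        for (struct lock_scope __scope                                  \
                __attribute__((cleanup(lock_scope_exit))) =             \
                        lock_scope_enter(lockp),                        \
                *__once = &__scope;                                     \
             __once; __once = NULL)

static int try_consume(int *budget, int cost)
{
        my_scoped_guard(&demo_lock) {
                if (*budget < cost)
                        return -1;      /* lock dropped on the way out */
                *budget -= cost;
        }
        /* lock is already released here */
        printf("remaining budget: %d\n", *budget);
        return 0;
}

int main(void)
{
        int budget = 10;

        try_consume(&budget, 4);
        try_consume(&budget, 100);
        return 0;
}

The for statement only exists to give the guard a scope that ends exactly at the closing brace of the attached block, so falling out of the block and returning from inside it behave the same.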
Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 73 ++++++++++++++++++++++------------------------------ 1 file changed, 32 insertions(+), 41 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8799,55 +8799,46 @@ int __sched yield_to(struct task_struct { struct task_struct *curr = current; struct rq *rq, *p_rq; - unsigned long flags; int yielded = 0; - local_irq_save(flags); - rq = this_rq(); + scoped_guard (irqsave) { + rq = this_rq(); again: - p_rq = task_rq(p); - /* - * If we're the only runnable task on the rq and target rq also - * has only one task, there's absolutely no point in yielding. - */ - if (rq->nr_running == 1 && p_rq->nr_running == 1) { - yielded = -ESRCH; - goto out_irq; - } - - double_rq_lock(rq, p_rq); - if (task_rq(p) != p_rq) { - double_rq_unlock(rq, p_rq); - goto again; - } - - if (!curr->sched_class->yield_to_task) - goto out_unlock; - - if (curr->sched_class != p->sched_class) - goto out_unlock; - - if (task_on_cpu(p_rq, p) || !task_is_running(p)) - goto out_unlock; - - yielded = curr->sched_class->yield_to_task(rq, p); - if (yielded) { - schedstat_inc(rq->yld_count); + p_rq = task_rq(p); /* - * Make p's CPU reschedule; pick_next_entity takes care of - * fairness. + * If we're the only runnable task on the rq and target rq also + * has only one task, there's absolutely no point in yielding. */ - if (preempt && rq != p_rq) - resched_curr(p_rq); - } + if (rq->nr_running == 1 && p_rq->nr_running == 1) + return -ESRCH; -out_unlock: - double_rq_unlock(rq, p_rq); -out_irq: - local_irq_restore(flags); + guard(double_rq_lock)(rq, p_rq); + if (task_rq(p) != p_rq) + goto again; + + if (!curr->sched_class->yield_to_task) + return 0; + + if (curr->sched_class != p->sched_class) + return 0; + + if (task_on_cpu(p_rq, p) || !task_is_running(p)) + return 0; + + yielded = curr->sched_class->yield_to_task(rq, p); + if (yielded) { + schedstat_inc(rq->yld_count); + /* + * Make p's CPU reschedule; pick_next_entity + * takes care of fairness. 
+ */ + if (preempt && rq != p_rq) + resched_curr(p_rq); + } + } - if (yielded > 0) + if (yielded) schedule(); return yielded; From patchwork Mon Jun 12 09:07:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276144 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB703C8300C for ; Mon, 12 Jun 2023 09:56:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234565AbjFLJ4v (ORCPT ); Mon, 12 Jun 2023 05:56:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33806 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232365AbjFLJyV (ORCPT ); Mon, 12 Jun 2023 05:54:21 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B634D4C05; Mon, 12 Jun 2023 02:38:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=8IwM37e642EshGkaM591nsICKeU24b96IU4dYucK72I=; b=dXBEmpz80b7hoderM45846w/8O pNO8hpipMKRJ27niZQqDP/oZ7/BF1rbJvgcP4PNZOjA0/uy51im9efmFnxUQeqkaz8ERUeIMTVgbA 4VU+/qmkIVh9mruK1Kemd08wt+2YvNqj7iMBDIb55yjZBxdNbyYHd4vmkglhIRzn3VhbsO1g6kMNX DPownbe5F7lKMEY9oMSiQbAsQ6Sq/LDRo6BH7dsjs/eLhKCTZkfWMRYDFaavzf9aNxmS0+JA4YiOd d6r7PI/eGL4W69GpFhaxW3FPhqLQIpOokWmypJbsZsB0KhQzPHHjPZBSmrvaJDXXl2NqdZOvB1M89 9hb3DItQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0g-008kPD-1r; Mon, 12 Jun 2023 09:38:51 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 226E43031CC; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 5D3FD30A77B5B; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.792690687@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:31 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, 
mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 18/57] sched: Simplify sched_rr_get_interval() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 40 ++++++++++++++++------------------------ 1 file changed, 16 insertions(+), 24 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -8941,38 +8941,30 @@ SYSCALL_DEFINE1(sched_get_priority_min, static int sched_rr_get_interval(pid_t pid, struct timespec64 *t) { - struct task_struct *p; - unsigned int time_slice; - struct rq_flags rf; - struct rq *rq; + unsigned int time_slice = 0; int retval; if (pid < 0) return -EINVAL; - retval = -ESRCH; - rcu_read_lock(); - p = find_process_by_pid(pid); - if (!p) - goto out_unlock; - - retval = security_task_getscheduler(p); - if (retval) - goto out_unlock; - - rq = task_rq_lock(p, &rf); - time_slice = 0; - if (p->sched_class->get_rr_interval) - time_slice = p->sched_class->get_rr_interval(rq, p); - task_rq_unlock(rq, p, &rf); + scoped_guard (rcu) { + struct task_struct *p = find_process_by_pid(pid); + if (!p) + return -ESRCH; + + retval = security_task_getscheduler(p); + if (retval) + return retval; + + scoped_guard (task_rq_lock, p) { + struct rq *rq = scope.rq; + if (p->sched_class->get_rr_interval) + time_slice = p->sched_class->get_rr_interval(rq, p); + } + } - rcu_read_unlock(); jiffies_to_timespec64(time_slice, t); return 0; - -out_unlock: - rcu_read_unlock(); - return retval; } /** From patchwork Mon Jun 12 09:07:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276187 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D148FC7EE43 for ; Mon, 12 Jun 2023 09:58:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236122AbjFLJ6a (ORCPT ); Mon, 12 Jun 2023 05:58:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34030 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231547AbjFLJy1 (ORCPT ); Mon, 12 Jun 2023 05:54:27 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F13BD5FF7; Mon, 12 Jun 2023 02:39:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=y0wgwpukS8T7O340+NKvU9chnTVlAmLcht1jdurq5Ss=; b=XJLeq7I1drWQYbixkl89Se55Lm 
ueDLMCBUmrK1ud0Ym2MEKFBBcNCFsHuofYli/d22KQa8rLnB7TqYweSgfEswFS0Rngj94PaRCGISU BjJqQVzROtr4nm6YuEhME1yjkhuAy4siEWBxnmmKn84Ma8GXkQN0TO78ajWRKONL+bjwVXBj1OHlc wd5XSXX9DMA+GZC8viUDd1En+ijoZsCoNPdUmeqGf2hUg2zXdMg7bMFdw2n7RijPcP1yaXBwIJspG zSqEDja4Bsa9uiVuz1JOYDJaWqEk5fEBX3CCfz04e9HuFekeyep/OIowD9BCc1AFfcfJdAMXRdKLx jIf4FRgA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NB4-5z; Mon, 12 Jun 2023 09:38:55 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 03D383002A9; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 6353930A77B5A; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.871533689@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:32 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 19/57] sched: Simplify sched_move_task() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -10361,17 +10361,18 @@ void sched_move_task(struct task_struct int queued, running, queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK; struct task_group *group; - struct rq_flags rf; struct rq *rq; - rq = task_rq_lock(tsk, &rf); + CLASS(task_rq_lock, rq_guard)(tsk); + rq = rq_guard.rq; + /* * Esp. with SCHED_AUTOGROUP enabled it is possible to get superfluous * group changes. 
*/ group = sched_get_task_group(tsk); if (group == tsk->sched_task_group) - goto unlock; + return; update_rq_clock(rq); @@ -10396,9 +10397,6 @@ void sched_move_task(struct task_struct */ resched_curr(rq); } - -unlock: - task_rq_unlock(rq, tsk, &rf); } static inline struct task_group *css_tg(struct cgroup_subsys_state *css) From patchwork Mon Jun 12 09:07:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276170 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FE7CC7EE2E for ; Mon, 12 Jun 2023 09:58:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235459AbjFLJ57 (ORCPT ); Mon, 12 Jun 2023 05:57:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32844 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233352AbjFLJyo (ORCPT ); Mon, 12 Jun 2023 05:54:44 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEA0D4C1F; Mon, 12 Jun 2023 02:39:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=oJyDWnF7S8lcaog4Xmhyr6vhDojPiq7m8dz3r+ZsZG4=; b=rdH9P5/MC3KcY+M1BubJnq2SDA VDGeL/wvRJ516ylJciShUfk/lLKhhrHMf84ST4NUiKTy2ExpNVzaIsxOt0cQrm7+eX/8udNe5etuq a8fdjA1aakVHh17wbvSw2yH8JlQoRaJkmSyuc03oqmoc7UOXpLHURSRxkh0dVmI/veHKlufAu0gpY Ns3St7yJnjxp7C9LWCYQAjUcurSze2tx9FwQXkuQqZEP3Medv20JQgxJrWc9w8JIbLUqKZH2b4jfd jiWzkPTbr4fSMmiKp/dav0iBFlH+nbdDr15ksYyL2JPyPxZmPbWGcP2XPV4+cHOFrrcX0V4f4DCw+ S6QpV56A==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0l-008kQN-0e; Mon, 12 Jun 2023 09:38:55 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0E854302EA7; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 67EF730A77B58; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093538.942686455@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:33 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, 
bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 20/57] sched: Simplify tg_set_cfs_bandwidth() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/cpu.h | 2 ++ kernel/sched/core.c | 42 +++++++++++++++++++++--------------------- 2 files changed, 23 insertions(+), 21 deletions(-) --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -148,6 +148,8 @@ static inline int remove_cpu(unsigned in static inline void smp_shutdown_nonboot_cpus(unsigned int primary_cpu) { } #endif /* !CONFIG_HOTPLUG_CPU */ +DEFINE_LOCK_GUARD_0(cpus_read_lock, cpus_read_lock(), cpus_read_unlock()) + #ifdef CONFIG_PM_SLEEP_SMP extern int freeze_secondary_cpus(int primary); extern void thaw_secondary_cpus(void); --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -10726,11 +10726,12 @@ static int tg_set_cfs_bandwidth(struct t * Prevent race between setting of cfs_rq->runtime_enabled and * unthrottle_offline_cfs_rqs(). 
*/ - cpus_read_lock(); - mutex_lock(&cfs_constraints_mutex); + guard(cpus_read_lock)(); + guard(mutex)(&cfs_constraints_mutex); + ret = __cfs_schedulable(tg, period, quota); if (ret) - goto out_unlock; + return ret; runtime_enabled = quota != RUNTIME_INF; runtime_was_enabled = cfs_b->quota != RUNTIME_INF; @@ -10740,39 +10741,38 @@ static int tg_set_cfs_bandwidth(struct t */ if (runtime_enabled && !runtime_was_enabled) cfs_bandwidth_usage_inc(); - raw_spin_lock_irq(&cfs_b->lock); - cfs_b->period = ns_to_ktime(period); - cfs_b->quota = quota; - cfs_b->burst = burst; - - __refill_cfs_bandwidth_runtime(cfs_b); - - /* Restart the period timer (if active) to handle new period expiry: */ - if (runtime_enabled) - start_cfs_bandwidth(cfs_b); - raw_spin_unlock_irq(&cfs_b->lock); + scoped_guard (raw_spinlock_irq, &cfs_b->lock) { + cfs_b->period = ns_to_ktime(period); + cfs_b->quota = quota; + cfs_b->burst = burst; + + __refill_cfs_bandwidth_runtime(cfs_b); + + /* + * Restart the period timer (if active) to handle new + * period expiry: + */ + if (runtime_enabled) + start_cfs_bandwidth(cfs_b); + } for_each_online_cpu(i) { struct cfs_rq *cfs_rq = tg->cfs_rq[i]; struct rq *rq = cfs_rq->rq; - struct rq_flags rf; - rq_lock_irq(rq, &rf); + guard(rq_lock_irq)(rq); cfs_rq->runtime_enabled = runtime_enabled; cfs_rq->runtime_remaining = 0; if (cfs_rq->throttled) unthrottle_cfs_rq(cfs_rq); - rq_unlock_irq(rq, &rf); } + if (runtime_was_enabled && !runtime_enabled) cfs_bandwidth_usage_dec(); -out_unlock: - mutex_unlock(&cfs_constraints_mutex); - cpus_read_unlock(); - return ret; + return 0; } static int tg_set_cfs_quota(struct task_group *tg, long cfs_quota_us) From patchwork Mon Jun 12 09:07:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276189 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9401C7EE2E for ; Mon, 12 Jun 2023 09:58:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230408AbjFLJ6e (ORCPT ); Mon, 12 Jun 2023 05:58:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33560 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233425AbjFLJyp (ORCPT ); Mon, 12 Jun 2023 05:54:45 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A5F334C1D; Mon, 12 Jun 2023 02:39:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=szAn79F4mqfMUFH3NFua/nTnj8aGUQ8HGVR6NlJOzM4=; b=dmt6RRN5/6iW0nXv8WCs7Zhoq7 h66/OFpGSYq3NopaotDNajuFo8i5Abvywbk25Mkw64F7BKRPBDAMFErpkMVn5jc1JDWWixleXtBPh TLPcdD6pTKzi2NGzuw4ny2PmQy/30yIqkBkkgI2XhvaBJgwGgdhaI4B8DWmnMj2mqvkcRzcAY6HRy jk7gpJD/ftYk8Hikh4m2P22vJKm9NoD09zIFAZMUbUi4MpR8Yhp4gzJ6D7tuejbq/RRvcSqn4CrlB sEMwyssBIPsqrQMHjdIJ8/AZRnWyKMbTE7D2j2333TyWpvcfxe9q1zhpgbPY6ZV8UERchXfhjI06e QRlB8Skw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0l-008kQP-0j; Mon, 12 
Jun 2023 09:38:56 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0E96F302F7E; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 6C74530A77B5C; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.014199820@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:34 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 21/57] sched: Misc cleanups References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Random remaining guard use... 
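Most of the conversions below have the same shape: an enable/disable or lock/unlock pair with early returns in between becomes guard(preempt)(), guard(rcu)(), guard(mutex)(&...) or a scoped_guard block, so the release half can no longer be missed on an error path. For the argument-less pairs the guard just wraps a global enter/exit pair; a stand-alone sketch of that zero-argument flavour, with section_enter, section_guard and do_work invented for the illustration:

#include <stdio.h>

static int depth;                       /* stand-in for a preempt/rcu nesting count */

static void section_enter(void)
{
        depth++;
        printf("enter, depth %d\n", depth);
}

static void section_exit(void)
{
        printf("exit,  depth %d\n", depth);
        depth--;
}

struct section_guard { int unused; };

static struct section_guard section_guard_enter(void)
{
        section_enter();
        return (struct section_guard){ 0 };
}

static void section_guard_exit(struct section_guard *g)
{
        (void)g;
        section_exit();
}

/*
 * Zero-argument guard: the enter half runs in the initializer, the
 * exit half in the cleanup handler, so every return below leaves the
 * pair balanced.
 */
#define section_guard()                                                 \
        struct section_guard __guard                                    \
                __attribute__((cleanup(section_guard_exit))) =          \
                        section_guard_enter()

static int do_work(int arg)
{
        section_guard();

        if (arg < 0)
                return -1;              /* section_exit() still runs */

        printf("working on %d at depth %d\n", arg, depth);
        return 0;
}

int main(void)
{
        do_work(1);
        do_work(-1);
        printf("final depth %d\n", depth);
        return 0;
}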
Signed-off-by: Peter Zijlstra (Intel) --- kernel/sched/core.c | 163 ++++++++++++++++++++-------------------------------- 1 file changed, 63 insertions(+), 100 deletions(-) --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1454,16 +1454,12 @@ static void __uclamp_update_util_min_rt_ static void uclamp_update_util_min_rt_default(struct task_struct *p) { - struct rq_flags rf; - struct rq *rq; - if (!rt_task(p)) return; /* Protect updates to p->uclamp_* */ - rq = task_rq_lock(p, &rf); + guard(task_rq_lock)(p); __uclamp_update_util_min_rt_default(p); - task_rq_unlock(rq, p, &rf); } static inline struct uclamp_se @@ -1759,9 +1755,8 @@ static void uclamp_update_root_tg(void) uclamp_se_set(&tg->uclamp_req[UCLAMP_MAX], sysctl_sched_uclamp_util_max, false); - rcu_read_lock(); + guard(rcu)(); cpu_util_update_eff(&root_task_group.css); - rcu_read_unlock(); } #else static void uclamp_update_root_tg(void) { } @@ -1788,10 +1783,9 @@ static void uclamp_sync_util_min_rt_defa smp_mb__after_spinlock(); read_unlock(&tasklist_lock); - rcu_read_lock(); + guard(rcu)(); for_each_process_thread(g, p) uclamp_update_util_min_rt_default(p); - rcu_read_unlock(); } static int sysctl_sched_uclamp_handler(struct ctl_table *table, int write, @@ -2243,10 +2237,9 @@ void migrate_disable(void) return; } - preempt_disable(); + guard(preempt)(); this_rq()->nr_pinned++; p->migration_disabled = 1; - preempt_enable(); } EXPORT_SYMBOL_GPL(migrate_disable); @@ -2270,7 +2263,7 @@ void migrate_enable(void) * Ensure stop_task runs either before or after this, and that * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule(). */ - preempt_disable(); + guard(preempt)(); if (p->cpus_ptr != &p->cpus_mask) __set_cpus_allowed_ptr(p, &ac); /* @@ -2281,7 +2274,6 @@ void migrate_enable(void) barrier(); p->migration_disabled = 0; this_rq()->nr_pinned--; - preempt_enable(); } EXPORT_SYMBOL_GPL(migrate_enable); @@ -3449,13 +3441,11 @@ unsigned long wait_task_inactive(struct */ void kick_process(struct task_struct *p) { - int cpu; + guard(preempt)(); + int cpu = task_cpu(p); - preempt_disable(); - cpu = task_cpu(p); if ((cpu != smp_processor_id()) && task_curr(p)) smp_send_reschedule(cpu); - preempt_enable(); } EXPORT_SYMBOL_GPL(kick_process); @@ -6300,8 +6290,9 @@ static void sched_core_balance(struct rq struct sched_domain *sd; int cpu = cpu_of(rq); - preempt_disable(); - rcu_read_lock(); + guard(preempt)(); + guard(rcu)(); + raw_spin_rq_unlock_irq(rq); for_each_domain(cpu, sd) { if (need_resched()) @@ -6311,8 +6302,6 @@ static void sched_core_balance(struct rq break; } raw_spin_rq_lock_irq(rq); - rcu_read_unlock(); - preempt_enable(); } static DEFINE_PER_CPU(struct balance_callback, core_balance_head); @@ -8169,8 +8158,6 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pi #ifdef CONFIG_SMP int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask) { - int ret = 0; - /* * If the task isn't a deadline task or admission control is * disabled then we don't care about affinity changes. @@ -8184,11 +8171,11 @@ int dl_task_check_affinity(struct task_s * tasks allowed to run on all the CPUs in the task's * root_domain. */ - rcu_read_lock(); + guard(rcu)(); if (!cpumask_subset(task_rq(p)->rd->span, mask)) - ret = -EBUSY; - rcu_read_unlock(); - return ret; + return -EBUSY; + + return 0; } #endif @@ -9197,10 +9184,8 @@ int task_can_attach(struct task_struct * * success of set_cpus_allowed_ptr() on all attached tasks * before cpus_mask may be changed. 
*/ - if (p->flags & PF_NO_SETAFFINITY) { - ret = -EINVAL; - goto out; - } + if (p->flags & PF_NO_SETAFFINITY) + return -EINVAL; if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span, cs_effective_cpus)) { @@ -9211,7 +9196,6 @@ int task_can_attach(struct task_struct * ret = dl_cpu_busy(cpu, p); } -out: return ret; } @@ -10433,11 +10417,9 @@ static int cpu_cgroup_css_online(struct #ifdef CONFIG_UCLAMP_TASK_GROUP /* Propagate the effective uclamp value for the new group */ - mutex_lock(&uclamp_mutex); - rcu_read_lock(); + guard(mutex)(&uclamp_mutex); + guard(rcu)(); cpu_util_update_eff(css); - rcu_read_unlock(); - mutex_unlock(&uclamp_mutex); #endif return 0; @@ -10588,8 +10570,8 @@ static ssize_t cpu_uclamp_write(struct k static_branch_enable(&sched_uclamp_used); - mutex_lock(&uclamp_mutex); - rcu_read_lock(); + guard(mutex)(&uclamp_mutex); + guard(rcu)(); tg = css_tg(of_css(of)); if (tg->uclamp_req[clamp_id].value != req.util) @@ -10604,9 +10586,6 @@ static ssize_t cpu_uclamp_write(struct k /* Update effective clamps to track the most restrictive value */ cpu_util_update_eff(of_css(of)); - rcu_read_unlock(); - mutex_unlock(&uclamp_mutex); - return nbytes; } @@ -10632,10 +10611,10 @@ static inline void cpu_uclamp_print(stru u64 percent; u32 rem; - rcu_read_lock(); - tg = css_tg(seq_css(sf)); - util_clamp = tg->uclamp_req[clamp_id].value; - rcu_read_unlock(); + scoped_guard (rcu) { + tg = css_tg(seq_css(sf)); + util_clamp = tg->uclamp_req[clamp_id].value; + } if (util_clamp == SCHED_CAPACITY_SCALE) { seq_puts(sf, "max\n"); @@ -10952,7 +10931,6 @@ static int tg_cfs_schedulable_down(struc static int __cfs_schedulable(struct task_group *tg, u64 period, u64 quota) { - int ret; struct cfs_schedulable_data data = { .tg = tg, .period = period, @@ -10964,11 +10942,8 @@ static int __cfs_schedulable(struct task do_div(data.quota, NSEC_PER_USEC); } - rcu_read_lock(); - ret = walk_tg_tree(tg_cfs_schedulable_down, tg_nop, &data); - rcu_read_unlock(); - - return ret; + guard(rcu)(); + return walk_tg_tree(tg_cfs_schedulable_down, tg_nop, &data); } static int cpu_cfs_stat_show(struct seq_file *sf, void *v) @@ -11529,14 +11504,12 @@ int __sched_mm_cid_migrate_from_fetch_ci * are not the last task to be migrated from this cpu for this mm, so * there is no need to move src_cid to the destination cpu. */ - rcu_read_lock(); + guard(rcu)(); src_task = rcu_dereference(src_rq->curr); if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) { - rcu_read_unlock(); t->last_mm_cid = -1; return -1; } - rcu_read_unlock(); return src_cid; } @@ -11580,18 +11553,17 @@ int __sched_mm_cid_migrate_from_try_stea * the lazy-put flag, this task will be responsible for transitioning * from lazy-put flag set to MM_CID_UNSET. */ - rcu_read_lock(); - src_task = rcu_dereference(src_rq->curr); - if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) { - rcu_read_unlock(); - /* - * We observed an active task for this mm, there is therefore - * no point in moving this cid to the destination cpu. - */ - t->last_mm_cid = -1; - return -1; + scoped_guard (rcu) { + src_task = rcu_dereference(src_rq->curr); + if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) { + /* + * We observed an active task for this mm, there is therefore + * no point in moving this cid to the destination cpu. + */ + t->last_mm_cid = -1; + return -1; + } } - rcu_read_unlock(); /* * The src_cid is unused, so it can be unset. 
@@ -11664,7 +11636,6 @@ static void sched_mm_cid_remote_clear(st { struct rq *rq = cpu_rq(cpu); struct task_struct *t; - unsigned long flags; int cid, lazy_cid; cid = READ_ONCE(pcpu_cid->cid); @@ -11699,23 +11670,21 @@ static void sched_mm_cid_remote_clear(st * the lazy-put flag, that task will be responsible for transitioning * from lazy-put flag set to MM_CID_UNSET. */ - rcu_read_lock(); - t = rcu_dereference(rq->curr); - if (READ_ONCE(t->mm_cid_active) && t->mm == mm) { - rcu_read_unlock(); - return; + scoped_guard (rcu) { + t = rcu_dereference(rq->curr); + if (READ_ONCE(t->mm_cid_active) && t->mm == mm) + return; } - rcu_read_unlock(); /* * The cid is unused, so it can be unset. * Disable interrupts to keep the window of cid ownership without rq * lock small. */ - local_irq_save(flags); - if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET)) - __mm_cid_put(mm, cid); - local_irq_restore(flags); + scoped_guard (irqsave) { + if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET)) + __mm_cid_put(mm, cid); + } } static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu) @@ -11737,14 +11706,13 @@ static void sched_mm_cid_remote_clear_ol * snapshot associated with this cid if an active task using the mm is * observed on this rq. */ - rcu_read_lock(); - curr = rcu_dereference(rq->curr); - if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) { - WRITE_ONCE(pcpu_cid->time, rq_clock); - rcu_read_unlock(); - return; + scoped_guard (rcu) { + curr = rcu_dereference(rq->curr); + if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) { + WRITE_ONCE(pcpu_cid->time, rq_clock); + return; + } } - rcu_read_unlock(); if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS) return; @@ -11838,7 +11806,6 @@ void task_tick_mm_cid(struct rq *rq, str void sched_mm_cid_exit_signals(struct task_struct *t) { struct mm_struct *mm = t->mm; - struct rq_flags rf; struct rq *rq; if (!mm) @@ -11846,7 +11813,7 @@ void sched_mm_cid_exit_signals(struct ta preempt_disable(); rq = this_rq(); - rq_lock_irqsave(rq, &rf); + guard(rq_lock_irqsave)(rq); preempt_enable_no_resched(); /* holding spinlock */ WRITE_ONCE(t->mm_cid_active, 0); /* @@ -11856,13 +11823,11 @@ void sched_mm_cid_exit_signals(struct ta smp_mb(); mm_cid_put(mm); t->last_mm_cid = t->mm_cid = -1; - rq_unlock_irqrestore(rq, &rf); } void sched_mm_cid_before_execve(struct task_struct *t) { struct mm_struct *mm = t->mm; - struct rq_flags rf; struct rq *rq; if (!mm) @@ -11870,7 +11835,7 @@ void sched_mm_cid_before_execve(struct t preempt_disable(); rq = this_rq(); - rq_lock_irqsave(rq, &rf); + guard(rq_lock_irqsave)(rq); preempt_enable_no_resched(); /* holding spinlock */ WRITE_ONCE(t->mm_cid_active, 0); /* @@ -11880,13 +11845,11 @@ void sched_mm_cid_before_execve(struct t smp_mb(); mm_cid_put(mm); t->last_mm_cid = t->mm_cid = -1; - rq_unlock_irqrestore(rq, &rf); } void sched_mm_cid_after_execve(struct task_struct *t) { struct mm_struct *mm = t->mm; - struct rq_flags rf; struct rq *rq; if (!mm) @@ -11894,16 +11857,16 @@ void sched_mm_cid_after_execve(struct ta preempt_disable(); rq = this_rq(); - rq_lock_irqsave(rq, &rf); - preempt_enable_no_resched(); /* holding spinlock */ - WRITE_ONCE(t->mm_cid_active, 1); - /* - * Store t->mm_cid_active before loading per-mm/cpu cid. - * Matches barrier in sched_mm_cid_remote_clear_old(). 
- */ - smp_mb(); - t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm); - rq_unlock_irqrestore(rq, &rf); + scoped_guard (rq_lock_irqsave, rq) { + preempt_enable_no_resched(); /* holding spinlock */ + WRITE_ONCE(t->mm_cid_active, 1); + /* + * Store t->mm_cid_active before loading per-mm/cpu cid. + * Matches barrier in sched_mm_cid_remote_clear_old(). + */ + smp_mb(); + t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm); + } rseq_set_notify_resume(t); } From patchwork Mon Jun 12 09:07:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276151 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51377C7EE25 for ; Mon, 12 Jun 2023 09:57:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233861AbjFLJ5C (ORCPT ); Mon, 12 Jun 2023 05:57:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229729AbjFLJyZ (ORCPT ); Mon, 12 Jun 2023 05:54:25 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5A495FF4; Mon, 12 Jun 2023 02:38:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=A6Nf2exxWEURXKIPC4PisBIYNsQ8WOBsy7I8/r8MB2I=; b=nlawFV4XolW6/iduWLySiShjYw cln2JRO/L02dwlVuY9LVrR5dx3MlqGFfL3bXKejT47tvNXSnylMRfCYpkyOmvJTJzxX3Dkuu3+U9V GON9orGticGtLHDO7XjCtQWBLjBGPxQx9TE4ZfwC71+0DHPT+U64tMKz4/QcVn7XKWdGmg/vEjHlw a6B5ZSoGuDL/jumilYTnXf3s+pZg++UlLBOa2U4Jw+Tlp8Uwlu4DsCd/18pgblJOjSG6PcbHyR8eK n4RIv+2T9+h3z8iLYj3qab8IwhKRiOAXKJ0t/9uLH9RGFqp6GhdoTGkppi9FY/jUA+IXOCEJtextC T/wQMAMA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NB5-60; Mon, 12 Jun 2023 09:38:55 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 0E881302F75; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 725F830A77B5F; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.085862001@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:35 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, 
adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 22/57] perf: Fix cpuctx refcounting References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Fixes: bd2756811766 ("perf: Rewrite core context handling") Signed-off-by: Peter Zijlstra (Intel) --- include/linux/perf_event.h | 13 ++++++++----- kernel/events/core.c | 16 ++++++++++++++++ 2 files changed, 24 insertions(+), 5 deletions(-) --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -841,11 +841,11 @@ struct perf_event { }; /* - * ,-----------------------[1:n]----------------------. - * V V - * perf_event_context <-[1:n]-> perf_event_pmu_context <--- perf_event - * ^ ^ | | - * `--------[1:n]---------' `-[n:1]-> pmu <-[1:n]-' + * ,-----------------------[1:n]------------------------. + * V V + * perf_event_context <-[1:n]-> perf_event_pmu_context <-[1:n]- perf_event + * | | + * `--[n:1]-> pmu <-[1:n]--' * * * struct perf_event_pmu_context lifetime is refcount based and RCU freed @@ -863,6 +863,9 @@ struct perf_event { * ctx->mutex pinning the configuration. Since we hold a reference on * group_leader (through the filedesc) it can't go away, therefore it's * associated pmu_ctx must exist and cannot change due to ctx->mutex. + * + * perf_event holds a refcount on perf_event_context + * perf_event holds a refcount on perf_event_pmu_context */ struct perf_event_pmu_context { struct pmu *pmu; --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -4809,6 +4809,11 @@ find_get_pmu_context(struct pmu *pmu, st void *task_ctx_data = NULL; if (!ctx->task) { + /* + * perf_pmu_migrate_context() / __perf_pmu_install_event() + * relies on the fact that find_get_pmu_context() cannot fail + * for CPU contexts. + */ struct perf_cpu_pmu_context *cpc; cpc = per_cpu_ptr(pmu->cpu_pmu_context, event->cpu); @@ -12832,6 +12837,13 @@ static void __perf_pmu_install_event(str { struct perf_event_pmu_context *epc; + /* + * Now that the events are unused, put their old ctx and grab a + * reference on the new context. + */ + put_ctx(event->ctx); + get_ctx(ctx); + event->cpu = cpu; epc = find_get_pmu_context(pmu, ctx, event); event->pmu_ctx = epc; @@ -12877,6 +12889,10 @@ void perf_pmu_migrate_context(struct pmu struct perf_event_context *src_ctx, *dst_ctx; LIST_HEAD(events); + /* + * Since per-cpu context is persistent, no need to grab an extra + * reference. 
+ */ src_ctx = &per_cpu_ptr(&perf_cpu_context, src_cpu)->ctx; dst_ctx = &per_cpu_ptr(&perf_cpu_context, dst_cpu)->ctx; From patchwork Mon Jun 12 09:07:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276177 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6158AC7EE43 for ; Mon, 12 Jun 2023 09:58:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235814AbjFLJ6O (ORCPT ); Mon, 12 Jun 2023 05:58:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34024 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231254AbjFLJy1 (ORCPT ); Mon, 12 Jun 2023 05:54:27 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 16F1A5FF5; Mon, 12 Jun 2023 02:39:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=zYnYxbpqthdvjZXSbX3mfTIRB+RNguvN2goMXYp9Xns=; b=C5N48YEmWa+xMIpR8Tid5fFCz1 4ts/IhHAlVB3Y7klukQjxw44QSTOVczdIgHCgTTbeHSL7aNXDjlvrZtOElLa8bHls8SaEi11a1yjW m8kixqnVYt13JTML9cc4mFMTjjaeZEqje23QKPWH88Nmb5TXkjBZw9Qbm/tLMrNNaaBI01UM3PP4o mnSnzt8+wc/j6RBeyZpxQSflkzM8u9BQdxPW12PhV7VkCm5m4D2Z6wf2b/Pbn8epVWSKozGvq5kEJ r34sE5/kpDYtj9XvM/rxugO23nxtz5nDzFbm0JOBNbjS5cquGVt7sQ2Oz3swWIcXkshOgt0riH9zO O7emOYuA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NB7-86; Mon, 12 Jun 2023 09:38:55 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 16399302FB8; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 75EFF30A77B5E; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.157685883@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:36 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, 
jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 23/57] perf: Simplify perf_event_alloc() error path References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The error cleanup sequence in perf_event_alloc() is a subset of the existing _free_event() function (it must of course be). Split this out into __free_event() and simplify the error path. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/perf_event.h | 1 kernel/events/core.c | 129 ++++++++++++++++++++++----------------------- 2 files changed, 66 insertions(+), 64 deletions(-) --- a/include/linux/perf_event.h +++ b/include/linux/perf_event.h @@ -634,6 +634,7 @@ struct swevent_hlist { #define PERF_ATTACH_ITRACE 0x10 #define PERF_ATTACH_SCHED_CB 0x20 #define PERF_ATTACH_CHILD 0x40 +#define PERF_ATTACH_EXCLUSIVE 0x80 struct bpf_prog; struct perf_cgroup; --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -5094,6 +5094,8 @@ static int exclusive_event_init(struct p return -EBUSY; } + event->attach_state |= PERF_ATTACH_EXCLUSIVE; + return 0; } @@ -5101,14 +5103,13 @@ static void exclusive_event_destroy(stru { struct pmu *pmu = event->pmu; - if (!is_exclusive_pmu(pmu)) - return; - /* see comment in exclusive_event_init() */ if (event->attach_state & PERF_ATTACH_TASK) atomic_dec(&pmu->exclusive_cnt); else atomic_inc(&pmu->exclusive_cnt); + + event->attach_state &= ~PERF_ATTACH_EXCLUSIVE; } static bool exclusive_event_match(struct perf_event *e1, struct perf_event *e2) @@ -5143,38 +5144,22 @@ static bool exclusive_event_installable( static void perf_addr_filters_splice(struct perf_event *event, struct list_head *head); -static void _free_event(struct perf_event *event) +/* vs perf_event_alloc() error */ +static void __free_event(struct perf_event *event) { - irq_work_sync(&event->pending_irq); - - unaccount_event(event); - - security_perf_event_free(event); - - if (event->rb) { - /* - * Can happen when we close an event with re-directed output. - * - * Since we have a 0 refcount, perf_mmap_close() will skip - * over us; possibly making our ring_buffer_put() the last. - */ - mutex_lock(&event->mmap_mutex); - ring_buffer_attach(event, NULL); - mutex_unlock(&event->mmap_mutex); - } - - if (is_cgroup_event(event)) - perf_detach_cgroup(event); - if (!event->parent) { if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) put_callchain_buffers(); } - perf_event_free_bpf_prog(event); - perf_addr_filters_splice(event, NULL); kfree(event->addr_filter_ranges); + if (event->attach_state & PERF_ATTACH_EXCLUSIVE) + exclusive_event_destroy(event); + + if (is_cgroup_event(event)) + perf_detach_cgroup(event); + if (event->destroy) event->destroy(event); @@ -5185,22 +5170,56 @@ static void _free_event(struct perf_even if (event->hw.target) put_task_struct(event->hw.target); - if (event->pmu_ctx) + if (event->pmu_ctx) { + /* + * put_pmu_ctx() needs an event->ctx reference, because of + * epc->ctx. 
+ */ + WARN_ON_ONCE(!event->ctx); + WARN_ON_ONCE(event->pmu_ctx->ctx != event->ctx); put_pmu_ctx(event->pmu_ctx); + } /* - * perf_event_free_task() relies on put_ctx() being 'last', in particular - * all task references must be cleaned up. + * perf_event_free_task() relies on put_ctx() being 'last', in + * particular all task references must be cleaned up. */ if (event->ctx) put_ctx(event->ctx); - exclusive_event_destroy(event); - module_put(event->pmu->module); + if (event->pmu) + module_put(event->pmu->module); call_rcu(&event->rcu_head, free_event_rcu); } +/* vs perf_event_alloc() success */ +static void _free_event(struct perf_event *event) +{ + irq_work_sync(&event->pending_irq); + + unaccount_event(event); + + security_perf_event_free(event); + + if (event->rb) { + /* + * Can happen when we close an event with re-directed output. + * + * Since we have a 0 refcount, perf_mmap_close() will skip + * over us; possibly making our ring_buffer_put() the last. + */ + mutex_lock(&event->mmap_mutex); + ring_buffer_attach(event, NULL); + mutex_unlock(&event->mmap_mutex); + } + + perf_event_free_bpf_prog(event); + perf_addr_filters_splice(event, NULL); + + __free_event(event); +} + /* * Used to free events which have a known refcount of 1, such as in error paths * where the event isn't exposed yet and inherited events. @@ -11591,8 +11610,10 @@ static int perf_try_init_event(struct pm event->destroy(event); } - if (ret) + if (ret) { + event->pmu = NULL; module_put(pmu->module); + } return ret; } @@ -11918,7 +11939,7 @@ perf_event_alloc(struct perf_event_attr * See perf_output_read(). */ if (attr->inherit && (attr->sample_type & PERF_SAMPLE_READ)) - goto err_ns; + goto err; if (!has_branch_stack(event)) event->attr.branch_sample_type = 0; @@ -11926,7 +11947,7 @@ perf_event_alloc(struct perf_event_attr pmu = perf_init_event(event); if (IS_ERR(pmu)) { err = PTR_ERR(pmu); - goto err_ns; + goto err; } /* @@ -11936,24 +11957,24 @@ perf_event_alloc(struct perf_event_attr */ if (pmu->task_ctx_nr == perf_invalid_context && (task || cgroup_fd != -1)) { err = -EINVAL; - goto err_pmu; + goto err; } if (event->attr.aux_output && !(pmu->capabilities & PERF_PMU_CAP_AUX_OUTPUT)) { err = -EOPNOTSUPP; - goto err_pmu; + goto err; } if (cgroup_fd != -1) { err = perf_cgroup_connect(cgroup_fd, event, attr, group_leader); if (err) - goto err_pmu; + goto err; } err = exclusive_event_init(event); if (err) - goto err_pmu; + goto err; if (has_addr_filter(event)) { event->addr_filter_ranges = kcalloc(pmu->nr_addr_filters, @@ -11961,7 +11982,7 @@ perf_event_alloc(struct perf_event_attr GFP_KERNEL); if (!event->addr_filter_ranges) { err = -ENOMEM; - goto err_per_task; + goto err; } /* @@ -11986,41 +12007,21 @@ perf_event_alloc(struct perf_event_attr if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) { err = get_callchain_buffers(attr->sample_max_stack); if (err) - goto err_addr_filters; + goto err; } } err = security_perf_event_alloc(event); if (err) - goto err_callchain_buffer; + goto err; /* symmetric to unaccount_event() in _free_event() */ account_event(event); return event; -err_callchain_buffer: - if (!event->parent) { - if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) - put_callchain_buffers(); - } -err_addr_filters: - kfree(event->addr_filter_ranges); - -err_per_task: - exclusive_event_destroy(event); - -err_pmu: - if (is_cgroup_event(event)) - perf_detach_cgroup(event); - if (event->destroy) - event->destroy(event); - module_put(pmu->module); -err_ns: - if (event->hw.target) - put_task_struct(event->hw.target); 
- call_rcu(&event->rcu_head, free_event_rcu); - +err: + __free_event(event); return ERR_PTR(err); } From patchwork Mon Jun 12 09:07:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276155 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38DF7C7EE25 for ; Mon, 12 Jun 2023 09:57:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234982AbjFLJ5S (ORCPT ); Mon, 12 Jun 2023 05:57:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234420AbjFLJy4 (ORCPT ); Mon, 12 Jun 2023 05:54:56 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 449644C27; Mon, 12 Jun 2023 02:39:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=+L1iRMB4nmmwqwHgVqKFu6YfCZFh/nRXnBwlrRSlD8U=; b=JxJ2AX424BD0wYIDaTb07Zd1dX ILrq87wGBByjSu3iDKJjPs1TH6v5DnRqZ5AOtEvCl5/g626ZrMYJuQVjY9GYkEGSY6omnB4h1bZMR tJeEd7dgIp8TKelyqMf5KMY6/ew3++kJQZkpak1m/ZwWDVpa07br6snHiOf1xM+LtOE6LZ1/DhA0o AamiBNIMlQHH49v6n5/vYwxBwNpekxUvbvZkLLkqnCiDJ6Dt2+rpH96ZhD3ahn52IFxSwf3iRVdMa eq2djU89YFJzIvM5/OBSmpsn3SlJs3L1W/dd14kJO0RJA2huiLE1xRdP5ianT5BfUiPG4v6jloCCw Ut5TH9wQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0l-008kQQ-0j; Mon, 12 Jun 2023 09:39:24 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 172D9302FF9; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 7AB6530A77B60; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.228708854@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:37 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, 
jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 24/57] perf: Simplify perf_pmu_register() error path References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org The error path of perf_pmu_register() is of course very similar to a subset of perf_pmu_unregister(). Extract this common part in __perf_pmu_unregister() and simplify things. Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 51 ++++++++++++++++++++++++--------------------------- 1 file changed, 24 insertions(+), 27 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -11426,20 +11426,35 @@ static int pmu_dev_alloc(struct pmu *pmu static struct lock_class_key cpuctx_mutex; static struct lock_class_key cpuctx_lock; +static void __perf_pmu_unregister(struct pmu *pmu) +{ + free_percpu(pmu->pmu_disable_count); + if (pmu->type >= 0) + idr_remove(&pmu_idr, pmu->type); + if (pmu_bus_running && pmu->dev && pmu->dev != PMU_NULL_DEV) { + if (pmu->nr_addr_filters) + device_remove_file(pmu->dev, &dev_attr_nr_addr_filters); + device_del(pmu->dev); + put_device(pmu->dev); + } + free_pmu_context(pmu); +} + int perf_pmu_register(struct pmu *pmu, const char *name, int type) { int cpu, ret, max = PERF_TYPE_MAX; + pmu->type = -1; + mutex_lock(&pmus_lock); ret = -ENOMEM; pmu->pmu_disable_count = alloc_percpu(int); if (!pmu->pmu_disable_count) goto unlock; - pmu->type = -1; if (WARN_ONCE(!name, "Can not register anonymous pmu.\n")) { ret = -EINVAL; - goto free_pdc; + goto free; } pmu->name = name; @@ -11449,23 +11464,22 @@ int perf_pmu_register(struct pmu *pmu, c ret = idr_alloc(&pmu_idr, pmu, max, 0, GFP_KERNEL); if (ret < 0) - goto free_pdc; + goto free; WARN_ON(type >= 0 && ret != type); - type = ret; - pmu->type = type; + pmu->type = ret; if (pmu_bus_running && !pmu->dev) { ret = pmu_dev_alloc(pmu); if (ret) - goto free_idr; + goto free; } ret = -ENOMEM; pmu->cpu_pmu_context = alloc_percpu(struct perf_cpu_pmu_context); if (!pmu->cpu_pmu_context) - goto free_dev; + goto free; for_each_possible_cpu(cpu) { struct perf_cpu_pmu_context *cpc; @@ -11511,17 +11525,8 @@ int perf_pmu_register(struct pmu *pmu, c return ret; -free_dev: - if (pmu->dev && pmu->dev != PMU_NULL_DEV) { - device_del(pmu->dev); - put_device(pmu->dev); - } - -free_idr: - idr_remove(&pmu_idr, pmu->type); - -free_pdc: - free_percpu(pmu->pmu_disable_count); +free: + __perf_pmu_unregister(pmu); goto unlock; } EXPORT_SYMBOL_GPL(perf_pmu_register); @@ -11538,15 +11543,7 @@ void perf_pmu_unregister(struct pmu *pmu synchronize_srcu(&pmus_srcu); synchronize_rcu(); - free_percpu(pmu->pmu_disable_count); - idr_remove(&pmu_idr, pmu->type); - if (pmu_bus_running && pmu->dev && pmu->dev != PMU_NULL_DEV) { - if (pmu->nr_addr_filters) - device_remove_file(pmu->dev, &dev_attr_nr_addr_filters); - device_del(pmu->dev); - put_device(pmu->dev); - } - free_pmu_context(pmu); + __perf_pmu_unregister(pmu); mutex_unlock(&pmus_lock); } EXPORT_SYMBOL_GPL(perf_pmu_unregister); 
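The refactor above hinges on one detail: __perf_pmu_unregister() must cope with a pmu that is only partially set up, which is why pmu->type is initialised to -1 before the first failure point and why the idr and device steps are guarded by checks. A minimal userspace sketch of that pattern follows; it is not kernel code and every name in it is invented for illustration.

/*
 * Illustrative sketch only: a register function and its unregister
 * counterpart share one teardown helper, and the helper tolerates
 * partially initialised state because sentinels are set before the
 * first point that can fail.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_pmu {
        int type;       /* -1: no id allocated yet   */
        int *counters;  /* NULL: not allocated yet   */
};

static void fake_pmu_teardown(struct fake_pmu *p)
{
        free(p->counters);              /* free(NULL) is a no-op */
        if (p->type >= 0)
                printf("releasing id %d\n", p->type);
}

static int fake_pmu_register(struct fake_pmu *p, int want_id)
{
        p->type = -1;                   /* sentinels first, as in the patch */
        p->counters = NULL;

        p->counters = calloc(64, sizeof(*p->counters));
        if (!p->counters)
                goto fail;

        if (want_id < 0)                /* pretend id allocation failed */
                goto fail;
        p->type = want_id;

        return 0;
fail:
        fake_pmu_teardown(p);           /* single, consolidated error path */
        return -1;
}

static void fake_pmu_unregister(struct fake_pmu *p)
{
        fake_pmu_teardown(p);           /* same helper on the success path */
}

int main(void)
{
        struct fake_pmu p;

        if (fake_pmu_register(&p, 7) == 0)
                fake_pmu_unregister(&p);

        fake_pmu_register(&p, -1);      /* exercises the error path */
        return 0;
}

The same helper then serves both the consolidated free: label in perf_pmu_register() and perf_pmu_unregister() itself, which is what lets the three separate unwind labels go away.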
From patchwork Mon Jun 12 09:07:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276186 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2861C7EE2E for ; Mon, 12 Jun 2023 09:58:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236120AbjFLJ63 (ORCPT ); Mon, 12 Jun 2023 05:58:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33518 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231877AbjFLJya (ORCPT ); Mon, 12 Jun 2023 05:54:30 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A300A5FFF; Mon, 12 Jun 2023 02:39:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=ldnPYKUyHMdijInwVZoC1NuQUIqnzmW6aookZYRZh1U=; b=bJBHJIBXVRdeQP4T7XcXUY8iMH di/jzbHG8QdRKgt7X5fFBZSQuExIhm4OtWcsQgbp82u82KzPGnsNgHCPG1r7wDa+Zf3Yybo+MMKKD 6kzpW7tb12oZwh2LWxC5mYsLBh3EeFF5Mx+gfLH4YWuoPB20Wmu38WpRv0/JFEDPN3I3WnPAdrUZG N9RCoJPL7qvE55pcs319+8ceIKd9bh09dAM074UmAS+nhI32VGb65JKPNV7DU57u9vWRkuYFwBSRv PUJltPdiKo1thqnI10zGjkLcJXnNDVTEEIJgdNDcoNHm4ClkGvI5OiDwY24bhEmzI8eSOZmwAqbCa RgBj18Gg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NB9-De; Mon, 12 Jun 2023 09:38:55 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 18413303164; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 7F8B230A77B61; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.300603001@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:38 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, 
apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 25/57] perf: Simplify perf_fget_light() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Introduce fdnull and use it to simplify perf_fget_light() to either return a valid struct fd or not -- much like fdget() itself. Signed-off-by: Peter Zijlstra (Intel) --- include/linux/file.h | 7 ++++++- kernel/events/core.c | 22 +++++++++++----------- 2 files changed, 17 insertions(+), 12 deletions(-) --- a/include/linux/file.h +++ b/include/linux/file.h @@ -59,6 +59,8 @@ static inline struct fd __to_fd(unsigned return (struct fd){(struct file *)(v & ~3),v & 3}; } +#define fdnull __to_fd(0) + static inline struct fd fdget(unsigned int fd) { return __to_fd(__fdget(fd)); --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -5802,18 +5802,17 @@ EXPORT_SYMBOL_GPL(perf_event_period); static const struct file_operations perf_fops; -static inline int perf_fget_light(int fd, struct fd *p) +static inline struct fd perf_fdget(int fd) { struct fd f = fdget(fd); if (!f.file) - return -EBADF; + return fdnull; if (f.file->f_op != &perf_fops) { fdput(f); - return -EBADF; + return fdnull; } - *p = f; - return 0; + return f; } static int perf_event_set_output(struct perf_event *event, @@ -5864,10 +5863,9 @@ static long _perf_ioctl(struct perf_even int ret; if (arg != -1) { struct perf_event *output_event; - struct fd output; - ret = perf_fget_light(arg, &output); - if (ret) - return ret; + struct fd output = perf_fdget(arg); + if (!output.file) + return -EBADF; output_event = output.file->private_data; ret = perf_event_set_output(event, output_event); fdput(output); @@ -12401,9 +12399,11 @@ SYSCALL_DEFINE5(perf_event_open, return event_fd; if (group_fd != -1) { - err = perf_fget_light(group_fd, &group); - if (err) + group = perf_fdget(group_fd); + if (!group.file) { + err = -EBADF; goto err_fd; + } group_leader = group.file->private_data; if (flags & PERF_FLAG_FD_OUTPUT) output_event = group_leader; From patchwork Mon Jun 12 09:07:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76AD3C7EE43 for ; Mon, 12 Jun 2023 09:57:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235069AbjFLJ5l (ORCPT ); Mon, 12 Jun 2023 05:57:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229454AbjFLJyb (ORCPT ); Mon, 12 Jun 2023 05:54:31 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55F1F6185; Mon, 12 Jun 2023 02:39:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; 
c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=IwZQFShSV/PIl8fTfPc9l4QCHQp32ssy80H3iz5Si2g=; b=n2QBRERGYOTGa4H4UfPiQMxml4 DlIa7Bh+E0wy752HUyfCxpdFro63TOC2IKp9jPE5GAfa1mJYpUhJg7mXGYIHTeDn9BySpGXGbyVd3 IwJesqbM7Wb3W3h6YpQgJKi2jBSkPrZDWblalKMV6HQ84Jsy2sFXbKyI8rcW+nAOW3dJ/b920m92P rah0RrnGqtwQsyYb1dXy9/e8D1odgIYC+fkV2EbyFFqvSqo9mCNLgutnfVUAg637Ma+mnkR77qzmE PZQFREWg0jB+r3RJ1GGStrlIluYtwark33oMb5t5WyJikLV3GNXnRWJf6oZ44tLapABjEhnCdDKM9 42/wgNSw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NBI-II; Mon, 12 Jun 2023 09:38:55 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 23BBA303196; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 8C04B30A77B63; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.371360635@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:39 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 26/57] perf: Simplify event_function*() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
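The guards referred to here are scope-based: a constructor/destructor pair is attached to a local variable, so the destructor (the unlock) runs on every path out of the block, including the early returns that replace the goto unlock labels in the diff below. A minimal userspace sketch of that mechanism, using GCC/Clang's __attribute__((cleanup)); the names are invented and this is not the kernel's implementation.

/*
 * Illustrative sketch only: a scope-based lock guard built on
 * __attribute__((cleanup)), the compiler feature behind guard().
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

struct lock_guard { pthread_mutex_t *m; };

static inline struct lock_guard lock_guard_ctor(pthread_mutex_t *m)
{
        pthread_mutex_lock(m);
        return (struct lock_guard){ m };
}

static inline void lock_guard_dtor(struct lock_guard *g)
{
        pthread_mutex_unlock(g->m);     /* runs on every exit from the scope */
}

#define LOCK_GUARD(name, m) \
        struct lock_guard name __attribute__((cleanup(lock_guard_dtor))) = \
                lock_guard_ctor(m)

static int critical(int v)
{
        LOCK_GUARD(g, &demo_lock);

        if (v < 0)
                return -1;              /* early return: unlock still happens */

        printf("value %d\n", v);
        return 0;                       /* normal return: unlock happens too */
}

int main(void)
{
        critical(1);
        critical(-1);
        return 0;
}

The class_perf_ctx_lock constructor/destructor pair added in the diff follows the same shape: the constructor takes the ctx locks and the destructor releases them when the guard variable goes out of scope.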
Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 39 ++++++++++++++++++++++++++------------- 1 file changed, 26 insertions(+), 13 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -214,6 +214,25 @@ struct event_function_struct { void *data; }; +typedef struct { + struct perf_cpu_context *cpuctx; + struct perf_event_context *ctx; +} class_perf_ctx_lock_t; + +static inline void class_perf_ctx_lock_destructor(class_perf_ctx_lock_t *_T) +{ + if (_T->cpuctx) + perf_ctx_unlock(_T->cpuctx, _T->ctx); +} + +static inline class_perf_ctx_lock_t +class_perf_ctx_lock_constructor(struct perf_cpu_context *cpuctx, + struct perf_event_context *ctx) +{ + perf_ctx_lock(cpuctx, ctx); + return (class_perf_ctx_lock_t){ cpuctx, ctx }; +} + static int event_function(void *info) { struct event_function_struct *efs = info; @@ -224,17 +243,15 @@ static int event_function(void *info) int ret = 0; lockdep_assert_irqs_disabled(); + guard(perf_ctx_lock)(cpuctx, task_ctx); - perf_ctx_lock(cpuctx, task_ctx); /* * Since we do the IPI call without holding ctx->lock things can have * changed, double check we hit the task we set out to hit. */ if (ctx->task) { - if (ctx->task != current) { - ret = -ESRCH; - goto unlock; - } + if (ctx->task != current) + return -ESRCH; /* * We only use event_function_call() on established contexts, @@ -254,8 +271,6 @@ static int event_function(void *info) } efs->func(event, cpuctx, ctx, efs->data); -unlock: - perf_ctx_unlock(cpuctx, task_ctx); return ret; } @@ -329,11 +344,11 @@ static void event_function_local(struct task_ctx = ctx; } - perf_ctx_lock(cpuctx, task_ctx); + guard(perf_ctx_lock)(cpuctx, task_ctx); task = ctx->task; if (task == TASK_TOMBSTONE) - goto unlock; + return; if (task) { /* @@ -343,18 +358,16 @@ static void event_function_local(struct */ if (ctx->is_active) { if (WARN_ON_ONCE(task != current)) - goto unlock; + return; if (WARN_ON_ONCE(cpuctx->task_ctx != ctx)) - goto unlock; + return; } } else { WARN_ON_ONCE(&cpuctx->ctx != ctx); } func(event, cpuctx, ctx, data); -unlock: - perf_ctx_unlock(cpuctx, task_ctx); } #define PERF_FLAG_ALL (PERF_FLAG_FD_NO_GROUP |\ From patchwork Mon Jun 12 09:07:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276141 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B933FC8300C for ; Mon, 12 Jun 2023 09:56:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233308AbjFLJ4o (ORCPT ); Mon, 12 Jun 2023 05:56:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33176 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232321AbjFLJy0 (ORCPT ); Mon, 12 Jun 2023 05:54:26 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEE925FF6; Mon, 12 Jun 2023 02:39:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=quWvb6uoZHCgYMmf8H+8cRm/zhKpzM+NkhuuDLi/jU0=; b=JgwzlZkLk5FJYWO7LoFzK1cisC 
aBQ5LhcXjSO7nFEY2UqQ25IYFqShzwF1MEvo2tYaU3xImhwE97+SRgARWwFOM90DmkkOcTM9vvRiN K0D/nxB7wgTutGs/BJUirFozMwU4pHGGZoMJI6+adCj6IubDSLGtZCjbBSTXj036wzszfXPaQClUq oDqcKFVTI1+O0Qci768RPoGOOBWrHX6+9VlTGBbGcmYgpqiU9DPthnR9bD67ex3yjQ70UNt/X6uBI 49tCO/xKdnsDsuMyH017UA+KPvphoPEfQ2aAhY1SshrRO7CJ3qDBk6tC9l/vfG5WFBXRBMcA2cQPG Y3EYDKXw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NBZ-TV; Mon, 12 Jun 2023 09:38:56 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2823C30326D; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 93BC430A77B64; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.452507393@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:40 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 27/57] perf: Simplify perf_cgroup_connect() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. 
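Same idea here, but for a reference-counted resource rather than a lock: CLASS(fd, f)(fd) in the diff below pairs fdget() with an automatic fdput() when f leaves scope, so the early returns need no out: label and no explicit fdput(). A rough userspace sketch of that resource-class shape, applied to an ordinary FILE handle; the names and the example path are invented.

/*
 * Illustrative sketch only: acquiring the resource and registering its
 * release happen in one declaration, so every return path drops it.
 */
#include <stdio.h>

static void auto_fclose(FILE **f)
{
        if (*f)
                fclose(*f);     /* runs whenever the variable leaves scope */
}

#define AUTO_FILE(name, path, mode) \
        FILE *name __attribute__((cleanup(auto_fclose))) = fopen(path, mode)

static int first_byte(const char *path)
{
        AUTO_FILE(f, path, "r");        /* acquire + register release */

        if (!f)
                return -1;              /* nothing to close, cleanup sees NULL */

        return fgetc(f);                /* fclose() runs automatically here */
}

int main(void)
{
        printf("%d\n", first_byte("/etc/hostname"));
        return 0;
}

As with the fd class, no return path can leak the handle, which is exactly what removes the out: label from perf_cgroup_connect().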
Signed-off-by: Peter Zijlstra (Intel) --- include/linux/file.h | 2 +- kernel/events/core.c | 19 ++++++++----------- 2 files changed, 9 insertions(+), 12 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -936,22 +936,20 @@ static inline int perf_cgroup_connect(in { struct perf_cgroup *cgrp; struct cgroup_subsys_state *css; - struct fd f = fdget(fd); - int ret = 0; + int ret; + CLASS(fd, f)(fd); if (!f.file) return -EBADF; css = css_tryget_online_from_dir(f.file->f_path.dentry, &perf_event_cgrp_subsys); - if (IS_ERR(css)) { - ret = PTR_ERR(css); - goto out; - } + if (IS_ERR(css)) + return PTR_ERR(css); ret = perf_cgroup_ensure_storage(event, css); if (ret) - goto out; + return ret; cgrp = container_of(css, struct perf_cgroup, css); event->cgrp = cgrp; @@ -963,11 +961,10 @@ static inline int perf_cgroup_connect(in */ if (group_leader && group_leader->cgrp != cgrp) { perf_detach_cgroup(event); - ret = -EINVAL; + return -EINVAL; } -out: - fdput(f); - return ret; + + return 0; } static inline void From patchwork Mon Jun 12 09:07:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276180 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7B8AC7EE43 for ; Mon, 12 Jun 2023 09:58:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236060AbjFLJ6S (ORCPT ); Mon, 12 Jun 2023 05:58:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33802 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234600AbjFLJy5 (ORCPT ); Mon, 12 Jun 2023 05:54:57 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E20E619A; Mon, 12 Jun 2023 02:39:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=TIflEc6HK/22xB9+LAc8fdYfMJHSxjY/dB735CDFq7w=; b=OzxWN5L/KzC4VUo/iTYaWameCX hrbwrk3ixeRH23F57jlIRFGWnlTcHuMcV7kLy7osS01tgjphDYs7mWn2JZfDkZ1+rR6EcEvrBxrNq zuW6xChuJcvp8qwBU8di5iOmYh8pWWaTGqRRp0lXV86E7IzjUkmvM0B0kyib/7YaiFdsFaLUU6kzN jRUGUTzf6TM4YMKKaPBC+AkGNgv4QE6GBnrVTUB+3uvTZYeky0QKBbK68+lbLkHSUur0WJ5xngoPZ gJIK2g+KtjpXKF0kOxEZS0dUHbQ0CQVAE0El1MhQ7ke9TlaJD5eefsKkFJQO3sIV0cBEXmZEPvZrb 0kCDB3Ig==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0l-008kQb-39; Mon, 12 Jun 2023 09:39:30 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2813A30325F; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 9735A30A77B62; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.537454913@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:41 +0200 
From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 28/57] perf; Simplify event_sched_in() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Use guards to reduce gotos and simplify control flow. Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1153,6 +1153,8 @@ void perf_pmu_enable(struct pmu *pmu) pmu->pmu_enable(pmu); } +DEFINE_GUARD(perf_pmu_disable, struct pmu *, perf_pmu_disable(_T), perf_pmu_enable(_T)) + static void perf_assert_pmu_disabled(struct pmu *pmu) { WARN_ON_ONCE(*this_cpu_ptr(pmu->pmu_disable_count) == 0); @@ -2489,7 +2491,6 @@ event_sched_in(struct perf_event *event, { struct perf_event_pmu_context *epc = event->pmu_ctx; struct perf_cpu_pmu_context *cpc = this_cpu_ptr(epc->pmu->cpu_pmu_context); - int ret = 0; WARN_ON_ONCE(event->ctx != ctx); @@ -2517,15 +2518,14 @@ event_sched_in(struct perf_event *event, event->hw.interrupts = 0; } - perf_pmu_disable(event->pmu); + guard(perf_pmu_disable)(event->pmu); perf_log_itrace_start(event); if (event->pmu->add(event, PERF_EF_START)) { perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE); event->oncpu = -1; - ret = -EAGAIN; - goto out; + return -EAGAIN; } if (!is_software_event(event)) @@ -2536,10 +2536,7 @@ event_sched_in(struct perf_event *event, if (event->attr.exclusive) cpc->exclusive = 1; -out: - perf_pmu_enable(event->pmu); - - return ret; + return 0; } static int From patchwork Mon Jun 12 09:07:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276194 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E046BC87FDD for ; Mon, 12 Jun 2023 09:58:43 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236160AbjFLJ6m (ORCPT ); Mon, 12 Jun 2023 05:58:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33806 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234649AbjFLJy6 (ORCPT ); Mon, 12 Jun 2023 05:54:58 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F1058619C; Mon, 12 Jun 2023 02:39:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=vBUL+P10jAOvg5j6kSlF2qo+oprvGK1yGYCCYSiAqlM=; b=dYxSMOmpH89QyKJycVjUFdz5f7 QtXIOBYtxj6iJTSn6IBQq1fqtULVL0EJ2lrui1BxP7PB9iMiZ4lEyl7wCumL8s9vGhOjw20sOJbra 725z2gvGDW4YGdUTOG3DhJos2Uqwqm3u0uM/Xz51OqqLHCKlVBRCImRY2ej8VsP680y/ClU1892d2 pR89p9K/dz055hwh/JicJK35+3pN11I9hXKXko13DmKjgOwOqJ7hje5V4nHvIH4N6rhh3dBrKyYGk 0PrV7Q++6iFwi0JLKHg0aSLxWz9+wRVSkaNTiQJTAygTQs7lE13Eu3Wip57JG6CG1kAY+37v5RmRu sk2gQJ9g==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0l-008kQZ-1p; Mon, 12 Jun 2023 09:39:32 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 27A9A3031B9; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 9C85030A77B66; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.611540686@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:42 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 
29/57] perf: Simplify: __perf_install_in_context() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2732,13 +2732,13 @@ static int __perf_install_in_context(vo struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); struct perf_event_context *task_ctx = cpuctx->task_ctx; bool reprogram = true; - int ret = 0; - raw_spin_lock(&cpuctx->ctx.lock); - if (ctx->task) { - raw_spin_lock(&ctx->lock); + if (ctx->task) task_ctx = ctx; + guard(perf_ctx_lock)(cpuctx, task_ctx); + + if (ctx->task) { reprogram = (ctx->task == current); /* @@ -2748,14 +2748,10 @@ static int __perf_install_in_context(vo * If its not running, we don't care, ctx->lock will * serialize against it becoming runnable. */ - if (task_curr(ctx->task) && !reprogram) { - ret = -ESRCH; - goto unlock; - } + if (task_curr(ctx->task) && !reprogram) + return -ESRCH; WARN_ON_ONCE(reprogram && cpuctx->task_ctx && cpuctx->task_ctx != ctx); - } else if (task_ctx) { - raw_spin_lock(&task_ctx->lock); } #ifdef CONFIG_CGROUP_PERF @@ -2778,10 +2774,7 @@ static int __perf_install_in_context(vo add_event_to_ctx(event, ctx); } -unlock: - perf_ctx_unlock(cpuctx, task_ctx); - - return ret; + return 0; } static bool exclusive_event_installable(struct perf_event *event, From patchwork Mon Jun 12 09:07:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276167 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC365C83005 for ; Mon, 12 Jun 2023 09:57:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230396AbjFLJ5s (ORCPT ); Mon, 12 Jun 2023 05:57:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231684AbjFLJy2 (ORCPT ); Mon, 12 Jun 2023 05:54:28 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A2E735FFA; Mon, 12 Jun 2023 02:39:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=RF7AlZRmRPruwiE6WSJ8fgsVWKVvpbmYdppCd/KBEkQ=; b=gufB5guI2wjztq+YIlaWdjMhyv 3LYfsbSy4unkOyd2ugwudXuVbPkgYZFtkZVII3Xqay+tH7nkG1yTdrFKTcjUHvpAa8Cf37YLpt3E8 pDLxbYYLfAa1grWsAGwn5LQLL4FRD+9IBx4VEM4Gmq/oyW3fFf8dLVvIRA7oBGmGxTO7R049FIU1t r4XaxNuX7StfRoeJQ7EQPUjJFlffCIlkZhBv+EjHy6R48CogNzg2OSLrwlw3rCqTLDkgHr+w7wF/Q ZOTFC2+xYVUW9aU01AvEkNjTt06XQEZg/5knH9qw46D4tqiWNFnLETrbsKjwwTivhd6RHKFM2hAvf JA52UjPw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0l-002NBd-VJ; Mon, 12 Jun 2023 09:38:56 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher 
TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 27CCF3031BE; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id A1B8930A77B67; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.682563843@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:43 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 30/57] perf: Simplify: *perf_event_{dis,en}able*() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 51 ++++++++++++++++++++++----------------------------- 1 file changed, 22 insertions(+), 29 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2415,7 +2415,7 @@ static void __perf_event_disable(struct update_cgrp_time_from_event(event); } - perf_pmu_disable(event->pmu_ctx->pmu); + guard(perf_pmu_disable)(event->pmu_ctx->pmu); if (event == event->group_leader) group_sched_out(event, ctx); @@ -2424,8 +2424,6 @@ static void __perf_event_disable(struct perf_event_set_state(event, PERF_EVENT_STATE_OFF); perf_cgroup_event_disable(event, ctx); - - perf_pmu_enable(event->pmu_ctx->pmu); } /* @@ -2446,12 +2444,10 @@ static void _perf_event_disable(struct p { struct perf_event_context *ctx = event->ctx; - raw_spin_lock_irq(&ctx->lock); - if (event->state <= PERF_EVENT_STATE_OFF) { - raw_spin_unlock_irq(&ctx->lock); - return; + scoped_guard (raw_spinlock_irq, &ctx->lock) { + if (event->state <= PERF_EVENT_STATE_OFF) + return; } - raw_spin_unlock_irq(&ctx->lock); event_function_call(event, __perf_event_disable, NULL); } @@ -2955,32 +2951,29 @@ static void _perf_event_enable(struct pe { struct perf_event_context *ctx = event->ctx; - raw_spin_lock_irq(&ctx->lock); - if (event->state >= PERF_EVENT_STATE_INACTIVE || - event->state < PERF_EVENT_STATE_ERROR) { -out: - 
raw_spin_unlock_irq(&ctx->lock); - return; - } + scoped_guard (raw_spinlock_irq, &ctx->lock) { + if (event->state >= PERF_EVENT_STATE_INACTIVE || + event->state < PERF_EVENT_STATE_ERROR) + return; - /* - * If the event is in error state, clear that first. - * - * That way, if we see the event in error state below, we know that it - * has gone back into error state, as distinct from the task having - * been scheduled away before the cross-call arrived. - */ - if (event->state == PERF_EVENT_STATE_ERROR) { /* - * Detached SIBLING events cannot leave ERROR state. + * If the event is in error state, clear that first. + * + * That way, if we see the event in error state below, we know that it + * has gone back into error state, as distinct from the task having + * been scheduled away before the cross-call arrived. */ - if (event->event_caps & PERF_EV_CAP_SIBLING && - event->group_leader == event) - goto out; + if (event->state == PERF_EVENT_STATE_ERROR) { + /* + * Detached SIBLING events cannot leave ERROR state. + */ + if (event->event_caps & PERF_EV_CAP_SIBLING && + event->group_leader == event) + return; - event->state = PERF_EVENT_STATE_OFF; + event->state = PERF_EVENT_STATE_OFF; + } } - raw_spin_unlock_irq(&ctx->lock); event_function_call(event, __perf_event_enable, NULL); } From patchwork Mon Jun 12 09:07:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276175 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4BD19C87FDE for ; Mon, 12 Jun 2023 09:58:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235783AbjFLJ6M (ORCPT ); Mon, 12 Jun 2023 05:58:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233724AbjFLJyr (ORCPT ); Mon, 12 Jun 2023 05:54:47 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D5F854C20; Mon, 12 Jun 2023 02:39:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=R8zokz8HpLYR/tS6Df4K9QtMMliKooDPknNNGx1Aq0A=; b=gDtWsQkYmCDnkEwKNGgB3R3yg5 AQY1YY9Pt79S6dVL6CWepHSUTi7Jk2ap0PirgoxJz1fjs1z3NKbdLaY23YcYonOyGm9p5ZcrqukWc yahOpmdpqbTYWS9bKccGyNolDypM6QDGY9gdEy70lbk3rS4+jMW+Sv98MjpmckatRRRmbimMkq23O vXAs7ycWwEeJNAkE/ixzJXGvLIQcTSvvlAur3TsCI9Ilrhz2nzXNDbdx1TsVCzn07sqZDE31KbMn4 EtMMfJoiXbcTAoBrqdc9gvRSE1k4X7as2PcuNqT2QlErwrhHL5omOT8gwrGh0frufhfqd9bor9oz2 0FAle/iA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0m-008kQd-03; Mon, 12 Jun 2023 09:38:59 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 29CF5303287; Mon, 12 Jun 
2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id A6ED730A77B69; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.753013700@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:44 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 31/57] perf: Simplify perf_event_modify_attr() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -3172,7 +3172,7 @@ static int perf_event_modify_attr(struct WARN_ON_ONCE(event->ctx->parent_ctx); - mutex_lock(&event->child_mutex); + guard(mutex)(&event->child_mutex); /* * Event-type-independent attributes must be copied before event-type * modification, which will validate that final attributes match the @@ -3181,16 +3181,16 @@ static int perf_event_modify_attr(struct perf_event_modify_copy_attr(&event->attr, attr); err = func(event, attr); if (err) - goto out; + return err; + list_for_each_entry(child, &event->child_list, child_list) { perf_event_modify_copy_attr(&child->attr, attr); err = func(child, attr); if (err) - goto out; + return err; } -out: - mutex_unlock(&event->child_mutex); - return err; + + return 0; } static void __pmu_ctx_sched_out(struct perf_event_pmu_context *pmu_ctx, From patchwork Mon Jun 12 09:07:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276195 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE48CC7EE43 for ; Mon, 12 Jun 2023 09:58:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229877AbjFLJ6n (ORCPT ); Mon, 12 Jun 2023 
05:58:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234688AbjFLJy6 (ORCPT ); Mon, 12 Jun 2023 05:54:58 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 94603619E; Mon, 12 Jun 2023 02:39:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=Hyp83c4zEQiDnre/2FuPX5V+t4FAU+QqZYq6E3vAduE=; b=Gtb5Ev6740h/Uu//z0ccX1hScc Ck98D6/rxacYaUVGPfXtg073SExo3RFg4HAVAwuEx1OEQLLt9RbdVLZs4jgS8VYz19wXPm8F7X/I+ 7WeQiyTsi1jlEeyPs1F0ihul+YLu1RJS1s+FeAP9xTHYjDaUYE3upOVxx5r6t3B62YsJAqx9lyVbj UC381d09c/rXGi/P9xexX5myrbqbF5MzgVQvqLhDW1yv4cmbx/NmkwAE761i6+nygyHErTIh+dwz4 BeyuYoQfUEGmVnwjGaWuO5jR6RO3Wq+ZoiGVXZUL1/3FmffvSgadr0ZATXm7zlY8LN0s087WDHlEu u2+sLK9g==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0m-008kQo-13; Mon, 12 Jun 2023 09:39:33 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2E4A03032AF; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id AE93C30A77B6A; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.823493926@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:45 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 32/57] perf: Simplify perf_event_context_sched_in() References: <20230612090713.652690195@infradead.org> 
MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 38 +++++++++++++++----------------------- 1 file changed, 15 insertions(+), 23 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -713,6 +713,9 @@ static void perf_ctx_enable(struct perf_ perf_pmu_enable(pmu_ctx->pmu); } +DEFINE_GUARD(perf_ctx_disable, struct perf_event_context *, + perf_ctx_disable(_T), perf_ctx_enable(_T)) + static void ctx_sched_out(struct perf_event_context *ctx, enum event_type_t event_type); static void ctx_sched_in(struct perf_event_context *ctx, enum event_type_t event_type); @@ -3906,31 +3909,27 @@ static void perf_event_context_sched_in( struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); struct perf_event_context *ctx; - rcu_read_lock(); + guard(rcu)(); + ctx = rcu_dereference(task->perf_event_ctxp); if (!ctx) - goto rcu_unlock; - - if (cpuctx->task_ctx == ctx) { - perf_ctx_lock(cpuctx, ctx); - perf_ctx_disable(ctx); - - perf_ctx_sched_task_cb(ctx, true); - - perf_ctx_enable(ctx); - perf_ctx_unlock(cpuctx, ctx); - goto rcu_unlock; - } + return; - perf_ctx_lock(cpuctx, ctx); + guard(perf_ctx_lock)(cpuctx, ctx); /* * We must check ctx->nr_events while holding ctx->lock, such * that we serialize against perf_install_in_context(). */ if (!ctx->nr_events) - goto unlock; + return; + + guard(perf_ctx_disable)(ctx); + + if (cpuctx->task_ctx == ctx) { + perf_ctx_sched_task_cb(ctx, true); + return; + } - perf_ctx_disable(ctx); /* * We want to keep the following priority order: * cpu pinned (that don't need to move), task pinned, @@ -3950,13 +3949,6 @@ static void perf_event_context_sched_in( if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree)) perf_ctx_enable(&cpuctx->ctx); - - perf_ctx_enable(ctx); - -unlock: - perf_ctx_unlock(cpuctx, ctx); -rcu_unlock: - rcu_read_unlock(); } /* From patchwork Mon Jun 12 09:07:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276163 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84DE7C7EE25 for ; Mon, 12 Jun 2023 09:57:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235201AbjFLJ5h (ORCPT ); Mon, 12 Jun 2023 05:57:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33444 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234230AbjFLJyw (ORCPT ); Mon, 12 Jun 2023 05:54:52 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 442CF135; Mon, 12 Jun 2023 02:39:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=Xqf5W2yYJrVz/DQlZr8jQFSxQY35vT93yf3RiRPsU+E=; b=SEfMUFGLcxrsWLLqD4rqRZBdOB DMO5LkBgWnb7vye/D7ZkfEdHNA4LQN6b5bn5RL58pOT0LP8Ukx+j7rtkbDL66kKKuWwzQP+0IzjuJ AdEfjN1fLmHqDivsqL1q0OX0YaAfdn6s/zz1LJAh4NMZPXKCPKTrw61PW7pZt7bEWpZBjhknZF1Pu rfeTs7IyGKdFRPRXNbUm16+QK2kGeQ4AWFVZtGzK5KzTe3Y1gWTmt95W3FGRtzT13/BNtcln9du7r 
HSmORkIVWTlySqAFUGzgBWHME1IieCrv8NsH6CweYmH/tOmp7EBCw7d+LXsHQCoDcEPhMni22BNIN vlnEfTKA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0m-008kQh-0J; Mon, 12 Jun 2023 09:39:23 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 2EC853032B4; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id B3A0A30A77B6B; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093539.895253662@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:46 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 33/57] perf: Simplify perf_adjust_freq_unthr_context() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -4100,7 +4100,7 @@ perf_adjust_freq_unthr_context(struct pe if (!(ctx->nr_freq || unthrottle)) return; - raw_spin_lock(&ctx->lock); + guard(raw_spinlock)(&ctx->lock); list_for_each_entry_rcu(event, &ctx->event_list, event_entry) { if (event->state != PERF_EVENT_STATE_ACTIVE) @@ -4110,7 +4110,7 @@ perf_adjust_freq_unthr_context(struct pe if (!event_filter_match(event)) continue; - perf_pmu_disable(event->pmu); + guard(perf_pmu_disable)(event->pmu); hwc = &event->hw; @@ -4121,7 +4121,7 @@ perf_adjust_freq_unthr_context(struct pe } if (!event->attr.freq || !event->attr.sample_freq) - goto next; + continue; /* * stop the event and update event->count @@ -4143,11 +4143,7 @@ perf_adjust_freq_unthr_context(struct pe perf_adjust_period(event, 
From patchwork Mon Jun 12 09:07:47 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276192
Message-ID: <20230612093539.966607037@infradead.org>
Date: Mon, 12 Jun 2023 11:07:47 +0200
From: Peter Zijlstra
Subject: [PATCH v3 34/57] perf: Simplify perf_event_*_on_exec()
References: <20230612090713.652690195@infradead.org>
Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   88 +++++++++++++++++++++++----------------------------
 1 file changed, 40 insertions(+), 48 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4318,39 +4318,36 @@ static void perf_event_enable_on_exec(st
 	enum event_type_t event_type = 0;
 	struct perf_cpu_context *cpuctx;
 	struct perf_event *event;
-	unsigned long flags;
 	int enabled = 0;
 
-	local_irq_save(flags);
-	if (WARN_ON_ONCE(current->perf_event_ctxp != ctx))
-		goto out;
-
-	if (!ctx->nr_events)
-		goto out;
-
-	cpuctx = this_cpu_ptr(&perf_cpu_context);
-	perf_ctx_lock(cpuctx, ctx);
-	ctx_sched_out(ctx, EVENT_TIME);
-
-	list_for_each_entry(event, &ctx->event_list, event_entry) {
-		enabled |= event_enable_on_exec(event, ctx);
-		event_type |= get_event_type(event);
+	scoped_guard (irqsave) {
+		if (WARN_ON_ONCE(current->perf_event_ctxp != ctx))
+			return;
+
+		if (!ctx->nr_events)
+			return;
+
+		cpuctx = this_cpu_ptr(&perf_cpu_context);
+		guard(perf_ctx_lock)(cpuctx, ctx);
+
+		ctx_sched_out(ctx, EVENT_TIME);
+
+		list_for_each_entry(event, &ctx->event_list, event_entry) {
+			enabled |= event_enable_on_exec(event, ctx);
+			event_type |= get_event_type(event);
+		}
+
+		/*
+		 * Unclone and reschedule this context if we enabled any event.
+		 */
+		if (enabled) {
+			clone_ctx = unclone_ctx(ctx);
+			ctx_resched(cpuctx, ctx, event_type);
+		} else {
+			ctx_sched_in(ctx, EVENT_TIME);
+		}
 	}
 
-	/*
-	 * Unclone and reschedule this context if we enabled any event.
-	 */
-	if (enabled) {
-		clone_ctx = unclone_ctx(ctx);
-		ctx_resched(cpuctx, ctx, event_type);
-	} else {
-		ctx_sched_in(ctx, EVENT_TIME);
-	}
-	perf_ctx_unlock(cpuctx, ctx);
-
-out:
-	local_irq_restore(flags);
-
 	if (clone_ctx)
 		put_ctx(clone_ctx);
 }
@@ -4367,34 +4364,29 @@ static void perf_event_remove_on_exec(st
 {
 	struct perf_event_context *clone_ctx = NULL;
 	struct perf_event *event, *next;
-	unsigned long flags;
 	bool modified = false;
 
-	mutex_lock(&ctx->mutex);
+	scoped_guard (mutex, &ctx->mutex) {
+		if (WARN_ON_ONCE(ctx->task != current))
+			return;
 
-	if (WARN_ON_ONCE(ctx->task != current))
-		goto unlock;
+		list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
+			if (!event->attr.remove_on_exec)
+				continue;
 
-	list_for_each_entry_safe(event, next, &ctx->event_list, event_entry) {
-		if (!event->attr.remove_on_exec)
-			continue;
+			if (!is_kernel_event(event))
+				perf_remove_from_owner(event);
 
-		if (!is_kernel_event(event))
-			perf_remove_from_owner(event);
+			modified = true;
 
-		modified = true;
+			perf_event_exit_event(event, ctx);
+		}
 
-		perf_event_exit_event(event, ctx);
+		guard(raw_spinlock_irqsave)(&ctx->lock);
+		if (modified)
+			clone_ctx = unclone_ctx(ctx);
 	}
 
-	raw_spin_lock_irqsave(&ctx->lock, flags);
-	if (modified)
-		clone_ctx = unclone_ctx(ctx);
-	raw_spin_unlock_irqrestore(&ctx->lock, flags);
-
-unlock:
-	mutex_unlock(&ctx->mutex);
-
 	if (clone_ctx)
 		put_ctx(clone_ctx);
 }
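[Illustration, not part of the patch: scoped_guard (irqsave) { ... } and scoped_guard (mutex, &ctx->mutex) { ... } bind the protected region to an explicit block, so a `return` from inside the block still restores interrupts or drops the mutex, while the final put_ctx(clone_ctx) deliberately runs after the block, outside the critical section. A compact userspace sketch of the same structure using a pthread mutex and a hand-rolled cleanup block; it is only an analogue of the kernel helper, not its implementation:]

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

static int bump(int fail)
{
	int snapshot = -1;

	{	/* analogue of scoped_guard (mutex, &ctx->mutex) { ... } */
		pthread_mutex_t *held __attribute__((cleanup(unlock_cleanup))) =
			(pthread_mutex_lock(&lock), &lock);

		if (fail)
			return -1;	/* the unlock still runs on this early return */

		counter++;
		snapshot = counter;
	}	/* mutex dropped here, at the end of the block */

	/* runs outside the critical section, like put_ctx(clone_ctx) above */
	printf("counter is now %d\n", snapshot);
	return 0;
}

int main(void)
{
	bump(0);
	bump(1);
	return bump(0);
}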
From patchwork Mon Jun 12 09:07:48 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276196
Message-ID: <20230612093540.037803940@infradead.org>
Date: Mon, 12 Jun 2023 11:07:48 +0200
From: Peter Zijlstra
Subject: [PATCH v3 35/57] perf: Simplify *perf_event_read*()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   54 ++++++++++++++++----------------------------------
 1 file changed, 17 insertions(+), 37 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4435,7 +4435,8 @@ static void __perf_event_read(void *info
 	if (ctx->task && cpuctx->task_ctx != ctx)
 		return;
 
-	raw_spin_lock(&ctx->lock);
+	guard(raw_spinlock)(&ctx->lock);
+
 	if (ctx->is_active & EVENT_TIME) {
 		update_context_time(ctx);
 		update_cgrp_time_from_event(event);
@@ -4446,12 +4447,12 @@ static void __perf_event_read(void *info
 		perf_event_update_sibling_time(event);
 
 	if (event->state != PERF_EVENT_STATE_ACTIVE)
-		goto unlock;
+		return;
 
 	if (!data->group) {
 		pmu->read(event);
 		data->ret = 0;
-		goto unlock;
+		return;
 	}
 
 	pmu->start_txn(pmu, PERF_PMU_TXN_READ);
@@ -4469,9 +4470,6 @@ static void __perf_event_read(void *info
 	}
 
 	data->ret = pmu->commit_txn(pmu);
-
-unlock:
-	raw_spin_unlock(&ctx->lock);
 }
 
 static inline u64 perf_event_count(struct perf_event *event)
@@ -4502,43 +4500,32 @@ static void calc_timer_values(struct per
 int perf_event_read_local(struct perf_event *event, u64 *value,
 			  u64 *enabled, u64 *running)
 {
-	unsigned long flags;
-	int ret = 0;
-
 	/*
 	 * Disabling interrupts avoids all counter scheduling (context
 	 * switches, timer based rotation and IPIs).
 	 */
-	local_irq_save(flags);
+	guard(irqsave)();
 
 	/*
 	 * It must not be an event with inherit set, we cannot read
 	 * all child counters from atomic context.
 	 */
-	if (event->attr.inherit) {
-		ret = -EOPNOTSUPP;
-		goto out;
-	}
+	if (event->attr.inherit)
+		return -EOPNOTSUPP;
 
 	/* If this is a per-task event, it must be for current */
 	if ((event->attach_state & PERF_ATTACH_TASK) &&
-	    event->hw.target != current) {
-		ret = -EINVAL;
-		goto out;
-	}
+	    event->hw.target != current)
+		return -EINVAL;
 
 	/* If this is a per-CPU event, it must be for this CPU */
 	if (!(event->attach_state & PERF_ATTACH_TASK) &&
-	    event->cpu != smp_processor_id()) {
-		ret = -EINVAL;
-		goto out;
-	}
+	    event->cpu != smp_processor_id())
+		return -EINVAL;
 
 	/* If this is a pinned event it must be running on this CPU */
-	if (event->attr.pinned && event->oncpu != smp_processor_id()) {
-		ret = -EBUSY;
-		goto out;
-	}
+	if (event->attr.pinned && event->oncpu != smp_processor_id())
+		return -EBUSY;
 
 	/*
 	 * If the event is currently on this CPU, its either a per-task event,
@@ -4558,10 +4545,8 @@ int perf_event_read_local(struct perf_ev
 		if (running)
 			*running = __running;
 	}
-out:
-	local_irq_restore(flags);
 
-	return ret;
+	return 0;
 }
 
 static int perf_event_read(struct perf_event *event, bool group)
@@ -4595,7 +4580,7 @@ static int perf_event_read(struct perf_e
 		.ret = 0,
 	};
 
-	preempt_disable();
+	guard(preempt)();
 	event_cpu = __perf_event_read_cpu(event, event_cpu);
 
 	/*
@@ -4609,19 +4594,15 @@ static int perf_event_read(struct perf_e
 		 * after this.
 		 */
 		(void)smp_call_function_single(event_cpu, __perf_event_read, &data, 1);
-		preempt_enable();
 		ret = data.ret;
 	} else if (state == PERF_EVENT_STATE_INACTIVE) {
 		struct perf_event_context *ctx = event->ctx;
-		unsigned long flags;
 
-		raw_spin_lock_irqsave(&ctx->lock, flags);
+		guard(raw_spinlock_irqsave)(&ctx->lock);
 		state = event->state;
-		if (state != PERF_EVENT_STATE_INACTIVE) {
-			raw_spin_unlock_irqrestore(&ctx->lock, flags);
+		if (state != PERF_EVENT_STATE_INACTIVE)
 			goto again;
-		}
 
 		/*
 		 * May read while context is not active (e.g., thread is
@@ -4635,7 +4616,6 @@ static int perf_event_read(struct perf_e
 		perf_event_update_time(event);
 		if (group)
 			perf_event_update_sibling_time(event);
-		raw_spin_unlock_irqrestore(&ctx->lock, flags);
 	}
 
 	return ret;
From patchwork Mon Jun 12 09:07:49 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276191
Message-ID: <20230612093540.108251860@infradead.org>
Date: Mon, 12 Jun 2023 11:07:49 +0200
From: Peter Zijlstra
Subject: [PATCH v3 36/57] perf: Simplify find_get_pmu_context()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4757,11 +4757,14 @@ find_get_context(struct task_struct *tas
 	return ERR_PTR(err);
 }
 
+/*
+ * Returns a matching perf_event_pmu_context with elevated refcount or NULL.
+ */
 static struct perf_event_pmu_context *
 find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
 		     struct perf_event *event)
 {
-	struct perf_event_pmu_context *new = NULL, *epc;
+	struct perf_event_pmu_context *epc;
 	void *task_ctx_data = NULL;
 
 	if (!ctx->task) {
@@ -4788,16 +4791,14 @@ find_get_pmu_context(struct pmu *pmu, st
 		return epc;
 	}
 
-	new = kzalloc(sizeof(*epc), GFP_KERNEL);
+	void *new __free(kfree) = kzalloc(sizeof(*epc), GFP_KERNEL);
 	if (!new)
-		return ERR_PTR(-ENOMEM);
+		return NULL;
 
 	if (event->attach_state & PERF_ATTACH_TASK_DATA) {
 		task_ctx_data = alloc_task_ctx_data(pmu);
-		if (!task_ctx_data) {
-			kfree(new);
-			return ERR_PTR(-ENOMEM);
-		}
+		if (!task_ctx_data)
+			return NULL;
 	}
 
 	__perf_init_event_pmu_context(new, pmu);
@@ -4820,8 +4821,7 @@ find_get_pmu_context(struct pmu *pmu, st
 		}
 	}
 
-	epc = new;
-	new = NULL;
+	epc = no_free_ptr(new);
 
 	list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list);
 	epc->ctx = ctx;
@@ -4835,7 +4835,6 @@ find_get_pmu_context(struct pmu *pmu, st
 	raw_spin_unlock_irq(&ctx->lock);
 
 	free_task_ctx_data(pmu, task_ctx_data);
-	kfree(new);
 
 	return epc;
 }
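[Illustration, not part of the patch: the `__free(kfree)` annotation above arranges for the allocation to be freed automatically on every return path, and no_free_ptr() transfers ownership out of the cleanup variable on the success path so the object survives. A userspace sketch of the same ownership pattern with malloc/free; auto_free() and take_ptr() are invented stand-ins for the kernel helpers:]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void auto_free(char **p)
{
	free(*p);	/* free(NULL) is a no-op */
}

/* Rough analogues of __free(kfree) and no_free_ptr(). */
#define __auto_free __attribute__((cleanup(auto_free)))

static char *take_ptr(char **p)
{
	char *ret = *p;
	*p = NULL;	/* disarm the automatic free */
	return ret;
}

static char *make_greeting(const char *name)
{
	char *buf __auto_free = malloc(64);

	if (!buf)
		return NULL;

	if (strlen(name) >= 32)
		return NULL;		/* buf is freed automatically here */

	snprintf(buf, 64, "hello, %s", name);
	return take_ptr(&buf);		/* ownership moves to the caller */
}

int main(void)
{
	char *g = make_greeting("peterz");

	if (g) {
		puts(g);
		free(g);
	}
	return 0;
}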
From patchwork Mon Jun 12 09:07:50 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276164
Message-ID: <20230612093540.181605463@infradead.org>
Date: Mon, 12 Jun 2023 11:07:50 +0200
From: Peter Zijlstra
Subject: [PATCH v3 37/57] perf: Simplify perf_read_group()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   30 +++++++++++-------------------
 1 file changed, 11 insertions(+), 19 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5472,11 +5472,10 @@ static int perf_read_group(struct perf_e
 	struct perf_event *leader = event->group_leader, *child;
 	struct perf_event_context *ctx = leader->ctx;
 	int ret;
-	u64 *values;
 
 	lockdep_assert_held(&ctx->mutex);
 
-	values = kzalloc(event->read_size, GFP_KERNEL);
+	u64 *values __free(kfree) = kzalloc(event->read_size, GFP_KERNEL);
 	if (!values)
 		return -ENOMEM;
 
@@ -5486,29 +5485,22 @@ static int perf_read_group(struct perf_e
 	 * By locking the child_mutex of the leader we effectively
 	 * lock the child list of all siblings.. XXX explain how.
 	 */
-	mutex_lock(&leader->child_mutex);
-
-	ret = __perf_read_group_add(leader, read_format, values);
-	if (ret)
-		goto unlock;
-
-	list_for_each_entry(child, &leader->child_list, child_list) {
-		ret = __perf_read_group_add(child, read_format, values);
+	scoped_guard (mutex, &leader->child_mutex) {
+		ret = __perf_read_group_add(leader, read_format, values);
 		if (ret)
-			goto unlock;
-	}
+			return ret;
 
-	mutex_unlock(&leader->child_mutex);
+		list_for_each_entry(child, &leader->child_list, child_list) {
+			ret = __perf_read_group_add(child, read_format, values);
+			if (ret)
+				return ret;
+		}
+	}
 
 	ret = event->read_size;
 	if (copy_to_user(buf, values, event->read_size))
-		ret = -EFAULT;
-	goto out;
+		return -EFAULT;
 
-unlock:
-	mutex_unlock(&leader->child_mutex);
-out:
-	kfree(values);
 	return ret;
 }
From patchwork Mon Jun 12 09:07:51 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276174
Message-ID: <20230612093540.253514702@infradead.org>
Date: Mon, 12 Jun 2023 11:07:51 +0200
From: Peter Zijlstra
Subject: [PATCH v3 38/57] perf: Simplify IOC_SET_OUTPUT
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5762,6 +5762,11 @@ static inline struct fd perf_fdget(int f
 	return f;
 }
 
+static inline bool is_perf_fd(struct fd fd)
+{
+	return fd.file && fd.file->f_op == &perf_fops;
+}
+
 static int perf_event_set_output(struct perf_event *event,
 				 struct perf_event *output_event);
 static int perf_event_set_filter(struct perf_event *event, void __user *arg);
@@ -5807,19 +5812,15 @@ static long _perf_ioctl(struct perf_even
 
 	case PERF_EVENT_IOC_SET_OUTPUT:
 	{
-		int ret;
 		if (arg != -1) {
 			struct perf_event *output_event;
-			struct fd output = perf_fdget(arg);
-			if (!output.file)
+			CLASS(fd, output)(arg);
+			if (!is_perf_fd(output))
 				return -EBADF;
 			output_event = output.file->private_data;
-			ret = perf_event_set_output(event, output_event);
-			fdput(output);
-		} else {
-			ret = perf_event_set_output(event, NULL);
+			return perf_event_set_output(event, output_event);
 		}
-		return ret;
+		return perf_event_set_output(event, NULL);
 	}
 
 	case PERF_EVENT_IOC_SET_FILTER:
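[Illustration, not part of the patch: CLASS(fd, output)(arg) above declares an instance of the `fd` cleanup class, so the matching fdput() runs automatically when `output` goes out of scope, and the new is_perf_fd() helper folds the "valid file with the perf f_op" test into one predicate. A userspace sketch of the DEFINE_CLASS idea, wrapping open()/close() with invented names; it only mimics the shape of the kernel's class machinery:]

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* What a DEFINE_CLASS(fd, ...) roughly provides: a constructor, a
 * destructor, and a declaration macro that ties them to a scope. */
struct fd_class { int fd; };

static struct fd_class fd_class_ctor(const char *path)
{
	return (struct fd_class){ .fd = open(path, O_RDONLY) };
}

static void fd_class_dtor(struct fd_class *f)
{
	if (f->fd >= 0)
		close(f->fd);
}

#define CLASS_fd(name, path) \
	struct fd_class name __attribute__((cleanup(fd_class_dtor))) = fd_class_ctor(path)

static int is_valid_fd(struct fd_class f)
{
	return f.fd >= 0;	/* plays the role of is_perf_fd() */
}

static long count_bytes(const char *path)
{
	CLASS_fd(input, path);		/* like CLASS(fd, output)(arg) */
	char buf[256];
	long total = 0;
	ssize_t n;

	if (!is_valid_fd(input))
		return -1;		/* destructor still runs; close() is skipped for -1 */

	while ((n = read(input.fd, buf, sizeof(buf))) > 0)
		total += n;
	return total;			/* descriptor closed automatically */
}

int main(void)
{
	printf("%ld\n", count_bytes("/etc/hostname"));
	return 0;
}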
From patchwork Mon Jun 12 09:07:52 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276183
Message-ID: <20230612093540.324593804@infradead.org>
Date: Mon, 12 Jun 2023 11:07:52 +0200
From: Peter Zijlstra
Subject: [PATCH v3 39/57] perf: Simplify perf_event_*_userpage()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5971,10 +5971,10 @@ static void perf_event_init_userpage(str
 	struct perf_event_mmap_page *userpg;
 	struct perf_buffer *rb;
 
-	rcu_read_lock();
+	guard(rcu)();
 	rb = rcu_dereference(event->rb);
 	if (!rb)
-		goto unlock;
+		return;
 
 	userpg = rb->user_page;
 
@@ -5983,9 +5983,6 @@ static void perf_event_init_userpage(str
 	userpg->size = offsetof(struct perf_event_mmap_page, __reserved);
 	userpg->data_offset = PAGE_SIZE;
 	userpg->data_size = perf_data_size(rb);
-
-unlock:
-	rcu_read_unlock();
 }
 
 void __weak arch_perf_update_userpage(
@@ -6004,10 +6001,10 @@ void perf_event_update_userpage(struct p
 	struct perf_buffer *rb;
 	u64 enabled, running, now;
 
-	rcu_read_lock();
+	guard(rcu)();
 	rb = rcu_dereference(event->rb);
 	if (!rb)
-		goto unlock;
+		return;
 
 	/*
 	 * compute total_time_enabled, total_time_running
@@ -6025,7 +6022,7 @@ void perf_event_update_userpage(struct p
 	 * Disable preemption to guarantee consistent time stamps are stored to
 	 * the user page.
 	 */
-	preempt_disable();
+	guard(preempt)();
 	++userpg->lock;
 	barrier();
 	userpg->index = perf_event_index(event);
@@ -6043,9 +6040,6 @@ void perf_event_update_userpage(struct p
 	barrier();
 	++userpg->lock;
-	preempt_enable();
-unlock:
-	rcu_read_unlock();
 }
 EXPORT_SYMBOL_GPL(perf_event_update_userpage);
 
@@ -6061,27 +6055,23 @@ static vm_fault_t perf_mmap_fault(struct
 		return ret;
 	}
 
-	rcu_read_lock();
+	guard(rcu)();
 	rb = rcu_dereference(event->rb);
 	if (!rb)
-		goto unlock;
+		return ret;
 
 	if (vmf->pgoff && (vmf->flags & FAULT_FLAG_WRITE))
-		goto unlock;
+		return ret;
 
 	vmf->page = perf_mmap_to_page(rb, vmf->pgoff);
 	if (!vmf->page)
-		goto unlock;
+		return ret;
 
 	get_page(vmf->page);
 	vmf->page->mapping = vmf->vma->vm_file->f_mapping;
 	vmf->page->index   = vmf->pgoff;
 
-	ret = 0;
-unlock:
-	rcu_read_unlock();
-
-	return ret;
+	return 0;
 }
 
 static void ring_buffer_attach(struct perf_event *event,
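[Illustration, not part of the patch: perf_event_update_userpage() now stacks guard(rcu)() and guard(preempt)(); cleanup-based guards are torn down in reverse declaration order on function exit, which preserves the old preempt_enable()-before-rcu_read_unlock() nesting without spelling it out. A standalone sketch of the LIFO ordering, with printf stand-ins for the real primitives:]

#include <stdio.h>

static void leave_a(int *x) { (void)x; printf("leave A\n"); }
static void leave_b(int *x) { (void)x; printf("leave B\n"); }

static void nested(void)
{
	int a __attribute__((cleanup(leave_a))) = 0;	/* like guard(rcu)() */
	printf("enter A\n");

	int b __attribute__((cleanup(leave_b))) = 0;	/* like guard(preempt)() */
	printf("enter B\n");

	/* On return, B is released first, then A: last in, first out,
	 * matching the unlock order the old code wrote out by hand. */
	(void)a;
	(void)b;
}

int main(void)
{
	nested();
	return 0;
}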
From patchwork Mon Jun 12 09:07:53 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276148
Message-ID: <20230612093540.407316252@infradead.org>
Date: Mon, 12 Jun 2023 11:07:53 +0200
From: Peter Zijlstra
Subject: [PATCH v3 40/57] perf: Simplify perf_mmap_close()/perf_aux_sample_output()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6179,6 +6179,9 @@ void ring_buffer_put(struct perf_buffer
 	call_rcu(&rb->rcu_head, rb_free_rcu);
 }
 
+DEFINE_CLASS(ring_buffer_get, struct perf_buffer *, ring_buffer_put(_T),
+	     ring_buffer_get(event), struct perf_event *event)
+
 static void perf_mmap_open(struct vm_area_struct *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
@@ -6206,7 +6209,7 @@ static void perf_pmu_output_stop(struct
 static void perf_mmap_close(struct vm_area_struct *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
-	struct perf_buffer *rb = ring_buffer_get(event);
+	CLASS(ring_buffer_get, rb)(event);
 	struct user_struct *mmap_user = rb->mmap_user;
 	int mmap_locked = rb->mmap_locked;
 	unsigned long size = perf_data_size(rb);
@@ -6245,14 +6248,14 @@ static void perf_mmap_close(struct vm_ar
 		detach_rest = true;
 
 	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
-		goto out_put;
+		return;
 
 	ring_buffer_attach(event, NULL);
 	mutex_unlock(&event->mmap_mutex);
 
 	/* If there's still other mmap()s of this buffer, we're done. */
 	if (!detach_rest)
-		goto out_put;
+		return;
 
 	/*
 	 * No other mmap()s, detach from all other events that might redirect
@@ -6309,9 +6312,6 @@ static void perf_mmap_close(struct vm_ar
 			&mmap_user->locked_vm);
 	atomic64_sub(mmap_locked, &vma->vm_mm->pinned_vm);
 	free_uid(mmap_user);
-
-out_put:
-	ring_buffer_put(rb); /* could be last */
 }
 
 static const struct vm_operations_struct perf_mmap_vmops = {
@@ -6962,14 +6962,13 @@ static void perf_aux_sample_output(struc
 				   struct perf_sample_data *data)
 {
 	struct perf_event *sampler = event->aux_event;
-	struct perf_buffer *rb;
 	unsigned long pad;
 	long size;
 
 	if (WARN_ON_ONCE(!sampler || !data->aux_size))
 		return;
 
-	rb = ring_buffer_get(sampler);
+	CLASS(ring_buffer_get, rb)(sampler);
 	if (!rb)
 		return;
 
@@ -6982,7 +6981,7 @@ static void perf_aux_sample_output(struc
 	 * like to know.
 	 */
 	if (WARN_ON_ONCE(size < 0))
-		goto out_put;
+		return;
 
 	/*
 	 * The pad comes from ALIGN()ing data->aux_size up to u64 in
@@ -6996,9 +6995,6 @@ static void perf_aux_sample_output(struc
 		u64 zero = 0;
 		perf_output_copy(handle, &zero, pad);
 	}
-
-out_put:
-	ring_buffer_put(rb);
 }
 
 /*
From patchwork Mon Jun 12 09:07:54 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276154
Message-ID: <20230612093540.493651920@infradead.org>
Date: Mon, 12 Jun 2023 11:07:54 +0200
From: Peter Zijlstra
Subject: [PATCH v3 41/57] perf: Simplify __perf_event_output()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |   11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7739,22 +7739,17 @@ __perf_event_output(struct perf_event *e
 	int err;
 
 	/* protect the callchain buffers */
-	rcu_read_lock();
+	guard(rcu)();
 
 	perf_prepare_sample(data, event, regs);
 	perf_prepare_header(&header, data, event, regs);
-
 	err = output_begin(&handle, data, event, header.size);
 	if (err)
-		goto exit;
-
+		return err;
 	perf_output_sample(&handle, &header, data, event);
-
 	perf_output_end(&handle);
 
-exit:
-	rcu_read_unlock();
-	return err;
+	return 0;
 }
 
 void
From patchwork Mon Jun 12 09:07:55 2023
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 13276147
Message-ID: <20230612093540.564584285@infradead.org>
Date: Mon, 12 Jun 2023 11:07:55 +0200
From: Peter Zijlstra
Subject: [PATCH v3 42/57] perf: Simplify perf_iterate_sb()
References: <20230612090713.652690195@infradead.org>

Signed-off-by: Peter Zijlstra (Intel)
---
 kernel/events/core.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7871,8 +7871,8 @@ perf_iterate_sb(perf_iterate_f output, v
 {
 	struct perf_event_context *ctx;
 
-	rcu_read_lock();
-	preempt_disable();
+	guard(rcu)();
+	guard(preempt)();
 
 	/*
 	 * If we have task_ctx != NULL we only notify the task context itself.
@@ -7881,7 +7881,7 @@ perf_iterate_sb(perf_iterate_f output, v
 	 */
 	if (task_ctx) {
 		perf_iterate_ctx(task_ctx, output, data, false);
-		goto done;
+		return;
 	}
 
 	perf_iterate_sb_cpu(output, data);
@@ -7889,9 +7889,6 @@ perf_iterate_sb(perf_iterate_f output, v
 	ctx = rcu_dereference(current->perf_event_ctxp);
 	if (ctx)
 		perf_iterate_ctx(ctx, output, data, false);
-done:
-	preempt_enable();
-	rcu_read_unlock();
 }
 
 /*
alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 43/57] perf: Simplify perf_sw_event() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 37 ++++++++++++------------------------- 1 file changed, 12 insertions(+), 25 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -9701,17 +9701,15 @@ static void do_perf_sw_event(enum perf_t struct perf_event *event; struct hlist_head *head; - rcu_read_lock(); + guard(rcu)(); head = find_swevent_head_rcu(swhash, type, event_id); if (!head) - goto end; + return; hlist_for_each_entry_rcu(event, head, hlist_entry) { if (perf_swevent_match(event, type, event_id, data, regs)) perf_swevent_event(event, nr, data, regs); } -end: - rcu_read_unlock(); } DEFINE_PER_CPU(struct pt_regs, __perf_regs[4]); @@ -9746,16 +9744,13 @@ void __perf_sw_event(u32 event_id, u64 n { int rctx; - preempt_disable_notrace(); + guard(preempt_notrace)(); rctx = perf_swevent_get_recursion_context(); if (unlikely(rctx < 0)) - goto fail; + return; ___perf_sw_event(event_id, nr, regs, addr); - perf_swevent_put_recursion_context(rctx); -fail: - preempt_enable_notrace(); } static void perf_swevent_read(struct perf_event *event) @@ -9844,21 +9839,17 @@ static int swevent_hlist_get_cpu(int cpu struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu); int err = 0; - mutex_lock(&swhash->hlist_mutex); + guard(mutex)(&swhash->hlist_mutex); if (!swevent_hlist_deref(swhash) && cpumask_test_cpu(cpu, perf_online_mask)) { struct swevent_hlist *hlist; hlist = kzalloc(sizeof(*hlist), GFP_KERNEL); - if (!hlist) { - err = -ENOMEM; - goto exit; - } + if (!hlist) + return -ENOMEM; rcu_assign_pointer(swhash->swevent_hlist, hlist); } swhash->hlist_refcount++; -exit: - mutex_unlock(&swhash->hlist_mutex); return err; } @@ -10115,16 +10106,12 @@ void perf_tp_event(u16 event_type, u64 c if (task && task != current) { struct perf_event_context *ctx; - rcu_read_lock(); + guard(rcu)(); ctx = rcu_dereference(task->perf_event_ctxp); - if (!ctx) - goto unlock; - - raw_spin_lock(&ctx->lock); - perf_tp_event_target_task(count, record, regs, &data, ctx); - raw_spin_unlock(&ctx->lock); -unlock: - rcu_read_unlock(); + if (ctx) { + guard(raw_spinlock)(&ctx->lock); + perf_tp_event_target_task(count, record, regs, &data, ctx); + } } perf_swevent_put_recursion_context(rctx); From patchwork Mon Jun 12 09:07:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276157 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B737C7EE2E for ; Mon, 12 Jun 2023 09:57:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235129AbjFLJ5W (ORCPT ); Mon, 12 Jun 2023 05:57:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33436 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234058AbjFLJyv (ORCPT ); Mon, 12 Jun 2023 05:54:51 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C93D5122; Mon, 12 Jun 2023 02:39:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=uQ/8+qtEwxL57I4UlVvTVvhZ1JibWBOVMl2SE33h5lQ=; b=IvCNxBtf78fZAAYfNQN42P59zb LdOmJBgTi7n7fQzel+Cn1NVmH1TX1zqCDnktJMVQ1Td6+CcK2aY6xx9EO+zogbP4FSM4L94bJ4gmh DhBgKzkwStMx8Ujume0wLEdxIhpnqbgOANapLVEvMWG4NgRrJCaSyBEPs1JqOuYsLYBjUfF1q6FAG p6W0Rynx9NQjxRNIQ3FoBhVoElxoRsrJDSiWVadf5YVa4dhK8ZLINVvDZxauq4dcOYnBmLNL4NW/q gG8MUILIZQQSiIlHQWW9V2FLh4CWhT11lX4H5UpL8b3H8I73N3+/eZrfsj0V2ARE+G7HWkR36jCyC spMcbf0w==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0q-008kRa-2x; Mon, 12 Jun 2023 09:39:12 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 55E2930340A; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id F2AAD30A77B78; Mon, 12 Jun 2023 11:38:48 +0200 (CEST) Message-ID: <20230612093540.708955479@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:57 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, 
john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 44/57] perf: Simplify bpf_overflow_handler() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -10288,16 +10288,14 @@ static void bpf_overflow_handler(struct int ret = 0; ctx.regs = perf_arch_bpf_user_pt_regs(regs); - if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) - goto out; - rcu_read_lock(); - prog = READ_ONCE(event->prog); - if (prog) { - perf_prepare_sample(data, event, regs); - ret = bpf_prog_run(prog, &ctx); + if (likely(__this_cpu_inc_return(bpf_prog_active) == 1)) { + guard(rcu)(); + prog = READ_ONCE(event->prog); + if (prog) { + perf_prepare_sample(data, event, regs); + ret = bpf_prog_run(prog, &ctx); + } } - rcu_read_unlock(); -out: __this_cpu_dec(bpf_prog_active); if (!ret) return; From patchwork Mon Jun 12 09:07:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276153 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05643C7EE45 for ; Mon, 12 Jun 2023 09:57:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234342AbjFLJ5H (ORCPT ); Mon, 12 Jun 2023 05:57:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233658AbjFLJyq (ORCPT ); Mon, 12 Jun 2023 05:54:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0A2624C1C; Mon, 12 Jun 2023 02:39:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=EsmTLBQmhx/5yfk2nLo68eqoCP+8wKcpRacytoeqYb0=; b=X3ADuQS8goezGkMNMjigZlzpSL 9BE3pznji3+z534OS7SCF+8WM3OVawaER0mluA6toWtSKmGr9OkNt6TrHh1RJ/0f4dm+lr2Mf5B/S /3JsSlVexJtTenPgLTxgl7OtS2wBCdLUZZslGHs0LnouhMoVSNg5EB+NeLeCso9sW7q97yznHM7ah o4Ct9DyDQgiFbgtYKRjAt8b7QprbRIVvzOfFbTpljdDShy2RouyJIW7GmwwP8+M+oOh6V9e+Bi9JD deaMDXMxMZHh+4v1+iWZUWGpEjVtujoC9MJiz7W5KwZalJEMDtXxCCbcKkQ7BU8mDLjYEo1AjNISv G6mcfmnA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0u-002NHq-89; Mon, 12 Jun 2023 09:39:04 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by 
noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 5D259305ECF; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 03E4530A77B79; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093540.779825032@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:58 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 45/57] perf: Simplify perf_event_parse_addr_filter() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org XXX this code needs a cleanup Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 56 ++++++++++++++++++++------------------------------- 1 file changed, 22 insertions(+), 34 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -10495,6 +10495,8 @@ static void free_filters_list(struct lis } } +DEFINE_FREE(filter_list, struct list_head *, if (_T) free_filters_list(_T)) + /* * Free existing address filters and optionally install new ones */ @@ -10658,13 +10660,15 @@ perf_event_parse_addr_filter(struct perf struct list_head *filters) { struct perf_addr_filter *filter = NULL; - char *start, *orig, *filename = NULL; substring_t args[MAX_OPT_ARGS]; int state = IF_STATE_ACTION, token; unsigned int kernel = 0; - int ret = -EINVAL; + char *start; + int ret; - orig = fstr = kstrdup(fstr, GFP_KERNEL); + struct list_head *fguard __free(filter_list) = filters; + char *filename __free(kfree) = NULL; + char *orig __free(kfree) = fstr = kstrdup(fstr, GFP_KERNEL); if (!fstr) return -ENOMEM; @@ -10674,7 +10678,6 @@ perf_event_parse_addr_filter(struct perf [IF_ACT_START] = PERF_ADDR_FILTER_ACTION_START, [IF_ACT_STOP] = PERF_ADDR_FILTER_ACTION_STOP, }; - ret = -EINVAL; if (!*start) continue; @@ -10683,7 +10686,7 @@ perf_event_parse_addr_filter(struct perf if (state == IF_STATE_ACTION) { filter = perf_addr_filter_new(event, filters); if (!filter) - goto fail; + return -EINVAL; } token = match_token(start, if_tokens, args); @@ -10692,7 +10695,7 @@ 
perf_event_parse_addr_filter(struct perf case IF_ACT_START: case IF_ACT_STOP: if (state != IF_STATE_ACTION) - goto fail; + return -EINVAL; filter->action = actions[token]; state = IF_STATE_SOURCE; @@ -10706,18 +10709,18 @@ perf_event_parse_addr_filter(struct perf case IF_SRC_FILEADDR: case IF_SRC_FILE: if (state != IF_STATE_SOURCE) - goto fail; + return -EINVAL; *args[0].to = 0; ret = kstrtoul(args[0].from, 0, &filter->offset); if (ret) - goto fail; + return ret; if (token == IF_SRC_KERNEL || token == IF_SRC_FILE) { *args[1].to = 0; ret = kstrtoul(args[1].from, 0, &filter->size); if (ret) - goto fail; + return ret; } if (token == IF_SRC_FILE || token == IF_SRC_FILEADDR) { @@ -10725,17 +10728,15 @@ perf_event_parse_addr_filter(struct perf kfree(filename); filename = match_strdup(&args[fpos]); - if (!filename) { - ret = -ENOMEM; - goto fail; - } + if (!filename) + return -ENOMEM; } state = IF_STATE_END; break; default: - goto fail; + return -EINVAL; } /* @@ -10744,19 +10745,17 @@ perf_event_parse_addr_filter(struct perf * attribute. */ if (state == IF_STATE_END) { - ret = -EINVAL; - /* * ACTION "filter" must have a non-zero length region * specified. */ if (filter->action == PERF_ADDR_FILTER_ACTION_FILTER && !filter->size) - goto fail; + return -EINVAL; if (!kernel) { if (!filename) - goto fail; + return -EINVAL; /* * For now, we only support file-based filters @@ -10766,21 +10765,19 @@ perf_event_parse_addr_filter(struct perf * mapped at different virtual addresses in * different processes. */ - ret = -EOPNOTSUPP; if (!event->ctx->task) - goto fail; + return -EOPNOTSUPP; /* look up the path and grab its inode */ ret = kern_path(filename, LOOKUP_FOLLOW, &filter->path); if (ret) - goto fail; + return ret; - ret = -EINVAL; if (!filter->path.dentry || !S_ISREG(d_inode(filter->path.dentry) ->i_mode)) - goto fail; + return -EINVAL; event->addr_filters.nr_file_filters++; } @@ -10795,19 +10792,10 @@ perf_event_parse_addr_filter(struct perf } if (state != IF_STATE_ACTION) - goto fail; - - kfree(filename); - kfree(orig); + return -EINVAL; + no_free_ptr(fguard); // allow filters to escape to the caller return 0; - -fail: - kfree(filename); - free_filters_list(filters); - kfree(orig); - - return ret; } static int From patchwork Mon Jun 12 09:07:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276149 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E33E2C83005 for ; Mon, 12 Jun 2023 09:57:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232225AbjFLJ45 (ORCPT ); Mon, 12 Jun 2023 05:56:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33562 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233646AbjFLJyq (ORCPT ); Mon, 12 Jun 2023 05:54:46 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0A56E6191; Mon, 12 Jun 2023 02:39:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; 
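In the perf_event_parse_addr_filter() conversion above (patch 45/57), DEFINE_FREE(filter_list, ...) plus the __free(kfree) and __free(filter_list) annotations make every error return free 'orig', 'filename' and the partially built filter list automatically, while the single success path detaches the list with no_free_ptr() so it outlives the function. The sketch below shows that shape in condensed form; struct widget, widget_destroy() and parse_widget() are made-up names, only the cleanup.h helpers are taken from the series.

#include <linux/cleanup.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>

struct widget { int val; };

static void widget_destroy(struct widget *w)
{
	kfree(w);
}

DEFINE_FREE(widget, struct widget *, if (_T) widget_destroy(_T))

static int parse_widget(const char *str, struct widget **out)
{
	char *copy __free(kfree) = kstrdup(str, GFP_KERNEL);
	struct widget *w __free(widget) = NULL;

	if (!copy)
		return -ENOMEM;

	w = kzalloc(sizeof(*w), GFP_KERNEL);
	if (!w)
		return -ENOMEM;		/* 'copy' freed automatically */

	if (kstrtoint(copy, 0, &w->val))
		return -EINVAL;		/* 'w' and 'copy' freed automatically */

	*out = no_free_ptr(w);		/* success: ownership moves to the caller */
	return 0;
}

Compared with the original goto chain, the error paths stop depending on keeping labels and kfree() calls in sync; the compiler emits the cleanup on every exit from the scope.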
bh=UxgZ8gBb/X/LH3e6EzhOyo9roeaslQ6np5gS0C0ZzJo=; b=HM14QRNLfsj2jCFI7z2tW4MkY6 GFgNx3SbksNmnhAVq0AjonGzUDtBI/Mo6iGgqC4Z3Yw0yJRHs5bUbLaUeIW6IvnXfKgBRJJjn7uNQ 8kP6DACk9mXjCQZNEWLx2zj6NQ4ximS2Z4sK1z90jL4Lq9e0i0OeQ8TlW7W1cqhcWYNvSCLkdgZmy SqKNr9CxrQ7FW/sh5UaiKJ6haS90E/EgjU7MLEJJu+O0hcnzxUFea/kex+iIicCt101qYrkq6dqe6 AVToOOiTV/KbOAryV79BDSwPFxBb+4i3U0+K1JR+w5uDsypkLpq0y4RhhxnrjLRTPqtV8YNct2odK /9QseCKw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0t-002NHI-V0; Mon, 12 Jun 2023 09:39:04 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 5D877305ED3; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 08B3F30A77B7C; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093540.850386350@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:07:59 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 46/57] perf: Simplify pmu_dev_alloc() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Greg Kroah-Hartman Reviewed-by: Greg Kroah-Hartman --- kernel/events/core.c | 65 ++++++++++++++++++++++++--------------------------- 1 file changed, 31 insertions(+), 34 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -11285,49 +11285,46 @@ static void pmu_dev_release(struct devic static int pmu_dev_alloc(struct pmu *pmu) { - int ret = -ENOMEM; + int ret; - pmu->dev = kzalloc(sizeof(struct device), GFP_KERNEL); - if (!pmu->dev) - goto out; + struct device *dev __free(put_device) = + kzalloc(sizeof(struct device), GFP_KERNEL); + if (!dev) + return -ENOMEM; 
- pmu->dev->groups = pmu->attr_groups; - device_initialize(pmu->dev); + dev->groups = pmu->attr_groups; + device_initialize(dev); - dev_set_drvdata(pmu->dev, pmu); - pmu->dev->bus = &pmu_bus; - pmu->dev->release = pmu_dev_release; + dev_set_drvdata(dev, pmu); + dev->bus = &pmu_bus; + dev->release = pmu_dev_release; - ret = dev_set_name(pmu->dev, "%s", pmu->name); + ret = dev_set_name(dev, "%s", pmu->name); if (ret) - goto free_dev; + return ret; - ret = device_add(pmu->dev); + ret = device_add(dev); if (ret) - goto free_dev; + return ret; - /* For PMUs with address filters, throw in an extra attribute: */ - if (pmu->nr_addr_filters) - ret = device_create_file(pmu->dev, &dev_attr_nr_addr_filters); - - if (ret) - goto del_dev; + struct device *del __free(device_del) = dev; - if (pmu->attr_update) - ret = sysfs_update_groups(&pmu->dev->kobj, pmu->attr_update); - - if (ret) - goto del_dev; - -out: - return ret; - -del_dev: - device_del(pmu->dev); - -free_dev: - put_device(pmu->dev); - goto out; + /* For PMUs with address filters, throw in an extra attribute: */ + if (pmu->nr_addr_filters) { + ret = device_create_file(dev, &dev_attr_nr_addr_filters); + if (ret) + return ret; + } + + if (pmu->attr_update) { + ret = sysfs_update_groups(&dev->kobj, pmu->attr_update); + if (ret) + return ret; + } + + no_free_ptr(del); + pmu->dev = no_free_ptr(dev); + return 0; } static struct lock_class_key cpuctx_mutex; From patchwork Mon Jun 12 09:08:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276146 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57941C87FE2 for ; Mon, 12 Jun 2023 09:56:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229720AbjFLJ4w (ORCPT ); Mon, 12 Jun 2023 05:56:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32864 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233509AbjFLJyp (ORCPT ); Mon, 12 Jun 2023 05:54:45 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C7F24C21; Mon, 12 Jun 2023 02:39:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=i6isNClAHvIMw2+OzbyxD8vRwSg4Nx0nqjzFU0CtSfM=; b=MxrQC5up24n4fUnoCZs9xhGydj WPiaUY1fVHgP0zt4SgHjFl9bU1FTswRc5S3GTdIwagtnGxmM8g/mVm8h6tFslA0BkAIHCS0lsmsxf mXYApR3w+00dThAGwRw5SSsaVtCEesX57GFzuJg1kHrKYomu8aDdoUGY1JDUE8JBHipCfa92Ifv/r zd+VuHJHkrbYFig5uYcq1Hizv1HY2WFQ43Hgl/mswdSuUFAi9hjrs2LMVRGruHQWcuZ+8W1Fu1Q+7 OvClSB+9TLTaaO7D8ryJgXgnOZRmNV9glpRBLCdH+94Z9Be2B+yVwhI1KhGtwMx58ouJ9xNIuEfjs v+1zP2IQ==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0u-002NHx-FF; Mon, 12 Jun 2023 09:39:04 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not 
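pmu_dev_alloc() above (patch 46/57) stages its cleanups: the device is covered by __free(put_device) from the moment it is allocated, and a second variable armed with __free(device_del) only exists once device_add() has succeeded, so a later sysfs failure undoes exactly as much as was done; on full success both are disarmed through no_free_ptr(). The outline below shows that arm-as-you-go idiom in the abstract; step1()/undo1() and step2()/undo2() are placeholders, not kernel interfaces.

#include <linux/cleanup.h>

struct obj;

int  step1(struct obj *o);	/* paired with undo1() */
void undo1(struct obj *o);
int  step2(struct obj *o);	/* paired with undo2() */
void undo2(struct obj *o);

DEFINE_FREE(undo1, struct obj *, if (_T) undo1(_T))
DEFINE_FREE(undo2, struct obj *, if (_T) undo2(_T))

static int obj_setup(struct obj *o)
{
	int ret;

	ret = step1(o);
	if (ret)
		return ret;
	struct obj *u1 __free(undo1) = o;	/* from here on, errors undo step1 */

	ret = step2(o);
	if (ret)
		return ret;			/* undo1() runs */
	struct obj *u2 __free(undo2) = o;	/* from here on, errors also undo step2 */

	/* ... further steps that may fail ... */

	no_free_ptr(u2);
	no_free_ptr(u1);			/* success: keep everything */
	return 0;
}

Cleanups run in reverse order of declaration, so a failure after both are armed performs undo2() before undo1(), mirroring device_del() before put_device() in the patch.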
present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 5D94C30611B; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 0DD7E30A79080; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093540.931189374@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:00 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 47/57] perf: Simplify perf_pmu_register() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 40 ++++++++++++++++------------------------ 1 file changed, 16 insertions(+), 24 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -11344,22 +11344,23 @@ void __perf_pmu_unregister(struct pmu *p free_pmu_context(pmu); } -int perf_pmu_register(struct pmu *pmu, const char *name, int type) +DEFINE_FREE(pmu_unregister, struct pmu *, if (_T) __perf_pmu_unregister(_T)) + +int perf_pmu_register(struct pmu *_pmu, const char *name, int type) { int cpu, ret, max = PERF_TYPE_MAX; - pmu->type = -1; + _pmu->type = -1; + + guard(mutex)(&pmus_lock); + struct pmu *pmu __free(pmu_unregister) = _pmu; - mutex_lock(&pmus_lock); - ret = -ENOMEM; pmu->pmu_disable_count = alloc_percpu(int); if (!pmu->pmu_disable_count) - goto unlock; + return -ENOMEM; - if (WARN_ONCE(!name, "Can not register anonymous pmu.\n")) { - ret = -EINVAL; - goto free; - } + if (WARN_ONCE(!name, "Can not register anonymous pmu.\n")) + return -EINVAL; pmu->name = name; @@ -11368,7 +11369,7 @@ int perf_pmu_register(struct pmu *pmu, c ret = idr_alloc(&pmu_idr, pmu, max, 0, GFP_KERNEL); if (ret < 0) - goto free; + return ret; WARN_ON(type >= 0 && ret != type); @@ -11377,13 +11378,12 @@ int perf_pmu_register(struct pmu *pmu, c if (pmu_bus_running && !pmu->dev) { ret = pmu_dev_alloc(pmu); if (ret) - goto free; + return ret; } - ret = -ENOMEM; pmu->cpu_pmu_context = alloc_percpu(struct perf_cpu_pmu_context); if (!pmu->cpu_pmu_context) - 
goto free; + return -ENOMEM; for_each_possible_cpu(cpu) { struct perf_cpu_pmu_context *cpc; @@ -11423,21 +11423,14 @@ int perf_pmu_register(struct pmu *pmu, c list_add_rcu(&pmu->entry, &pmus); atomic_set(&pmu->exclusive_cnt, 0); - ret = 0; -unlock: - mutex_unlock(&pmus_lock); - - return ret; - -free: - __perf_pmu_unregister(pmu); - goto unlock; + no_free_ptr(pmu); // let it rip + return 0; } EXPORT_SYMBOL_GPL(perf_pmu_register); void perf_pmu_unregister(struct pmu *pmu) { - mutex_lock(&pmus_lock); + guard(mutex)(&pmus_lock); list_del_rcu(&pmu->entry); /* @@ -11448,7 +11441,6 @@ void perf_pmu_unregister(struct pmu *pmu synchronize_rcu(); __perf_pmu_unregister(pmu); - mutex_unlock(&pmus_lock); } EXPORT_SYMBOL_GPL(perf_pmu_unregister); From patchwork Mon Jun 12 09:08:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276150 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 113CFC87FDC for ; Mon, 12 Jun 2023 09:57:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233651AbjFLJ5B (ORCPT ); Mon, 12 Jun 2023 05:57:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32912 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233927AbjFLJyt (ORCPT ); Mon, 12 Jun 2023 05:54:49 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D76FD6196; Mon, 12 Jun 2023 02:39:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=+ABEd9mMVv/0QFeK6grpTerCWBoJWrrnVkIx9/E66Rs=; b=UTo4GrteI/bFbo9KHpO0OPwMGI 3uaOsWbK8QQKJpqqXdvWKe/1DuroVfEQe6X20N2c/d4VACYYWQBssFwlcwWzekErpAM0jQu1tZ9/e +YjTJLibkb7RS+vl5QNs6CnVHMIIsZx26ITReWfpe+4LcyRYAR3GYYJnxqTgCp03WtkHryIoBvyNs TXVQ6nIN0Wo13A75EEBRaNQgHJG89a01PYjHojcWPFAz9Lso8WCkbgk970fpgIc60tblrKHCu8q3T ZNjg78iPZJ+jSlMlf+tB1JB3hzy3afqZWV9UZL/bFcehkWkvUp1ecRWM7hjJ4i7++5u7/hW81IfNT RM5pjjLw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0u-002NHJ-0b; Mon, 12 Jun 2023 09:39:04 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 5EDDB306129; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 12B3030A79081; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.025480679@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:01 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, 
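perf_pmu_register() above (patch 47/57) leans on that same reverse-declaration-order rule: guard(mutex)(&pmus_lock) is declared before the __free(pmu_unregister) rollback, so every error return first runs __perf_pmu_unregister() and only then drops pmus_lock, just like the old 'free: ... goto unlock' chain, while no_free_ptr(pmu) on success disarms the rollback and leaves only the unlock. A stand-alone demonstration of the ordering rule, using the plain GCC/Clang cleanup attribute rather than the kernel macros:

#include <stdio.h>

static void say(const char **msg)
{
	puts(*msg);
}

int main(void)
{
	const char *lock __attribute__((cleanup(say))) =
		"3: unlock    (declared first, runs last)";
	const char *undo __attribute__((cleanup(say))) =
		"2: roll back (declared last, runs first)";

	puts("1: function body");
	return 0;
}

Running it prints the three lines in numeric order, confirming that the destructors fire in reverse order of declaration, which is what keeps the unregister step inside the locked region.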
ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 48/57] perf: Simplify perf_init_event() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 31 ++++++++++++------------------- 1 file changed, 12 insertions(+), 19 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -11504,10 +11504,10 @@ static int perf_try_init_event(struct pm static struct pmu *perf_init_event(struct perf_event *event) { bool extended_type = false; - int idx, type, ret; struct pmu *pmu; + int type, ret; - idx = srcu_read_lock(&pmus_srcu); + guard(srcu)(&pmus_srcu); /* * Save original type before calling pmu->event_init() since certain @@ -11520,7 +11520,7 @@ static struct pmu *perf_init_event(struc pmu = event->parent->pmu; ret = perf_try_init_event(pmu, event); if (!ret) - goto unlock; + return pmu; } /* @@ -11539,13 +11539,12 @@ static struct pmu *perf_init_event(struc } again: - rcu_read_lock(); - pmu = idr_find(&pmu_idr, type); - rcu_read_unlock(); + scoped_guard (rcu) + pmu = idr_find(&pmu_idr, type); if (pmu) { if (event->attr.type != type && type != PERF_TYPE_RAW && !(pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)) - goto fail; + return ERR_PTR(-ENOENT); ret = perf_try_init_event(pmu, event); if (ret == -ENOENT && event->attr.type != type && !extended_type) { @@ -11554,27 +11553,21 @@ static struct pmu *perf_init_event(struc } if (ret) - pmu = ERR_PTR(ret); + return ERR_PTR(ret); - goto unlock; + return pmu; } list_for_each_entry_rcu(pmu, &pmus, entry, lockdep_is_held(&pmus_srcu)) { ret = perf_try_init_event(pmu, event); if (!ret) - goto unlock; + return pmu; - if (ret != -ENOENT) { - pmu = ERR_PTR(ret); - goto unlock; - } + if (ret != -ENOENT) + return ERR_PTR(ret); } -fail: - pmu = ERR_PTR(-ENOENT); -unlock: - srcu_read_unlock(&pmus_srcu, idx); - return pmu; + return ERR_PTR(-ENOENT); } static void attach_sb_event(struct perf_event *event) From patchwork Mon Jun 12 09:08:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276182 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
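perf_init_event() above (patch 48/57) shows the two granularities side by side: guard(srcu)(&pmus_srcu) covers the whole function, with the index that srcu_read_lock() returns kept inside the guard object instead of an 'idx' local, while the idr_find() call is narrowed to a one-statement scoped_guard (rcu). scoped_guard() is simply a guard whose lifetime is the attached statement or block. Below is a hedged sketch of the common lookup-and-reference shape using that form; struct thing, its refcount and thing_find_get() are invented for the example and assume the object is freed via an RCU grace period.

#include <linux/cleanup.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/xarray.h>

struct thing {
	refcount_t ref;
	/* ... payload ... */
};

/* Look up a thing by id and return it with a reference held, or NULL. */
static struct thing *thing_find_get(struct xarray *things, unsigned long id)
{
	struct thing *t;

	scoped_guard (rcu) {
		t = xa_load(things, id);
		if (t && !refcount_inc_not_zero(&t->ref))
			t = NULL;	/* found, but already on its way out */
	}
	/* the RCU read-side section has ended here; 't' is pinned by its refcount */

	return t;
}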
[23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97F0FC83005 for ; Mon, 12 Jun 2023 09:58:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236093AbjFLJ6W (ORCPT ); Mon, 12 Jun 2023 05:58:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32878 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233523AbjFLJyp (ORCPT ); Mon, 12 Jun 2023 05:54:45 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B34D0618F; Mon, 12 Jun 2023 02:39:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=VxLoQWqPLfFOizpz6LclwAvcc6NhnspZtOnksJEP8nY=; b=p6PhqIfSxM8DiwUuU4pSepp9+t sQDAs+RLgzLsKfJ0Kcen7QCEGUEiIeSA/rwAsAR7b6MfNGO9pcntCkHSiswQZFuF2xM2R+zf8J3ow wX35cUTV1JxcFVTwq7nPW7+2yeRbGO9zPkqabUzEI167PNV6S41nRrSC4/lAzjfi/vOJ3w15MxDOX ZX/q+B8JUmaFcJIBrT88ZNo/cz0VSFtkEr9L9mebjyXbMWd2ohnjBecsQJ/J3K5VJxidYIQPFKy+9 4HVU7cOA9JH1oDaOoJo11k/QJT8iAkLcVqjNx5HmWfEws0lEKwGfEU1AD+SSkaq9GPZUqJdegwQDg EEzgg1Jw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0u-002NIB-Nl; Mon, 12 Jun 2023 09:39:04 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6627730612F; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 1B34930A79083; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.097332151@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:02 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, 
tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 49/57] perf: Simplify perf_event_alloc() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 47 ++++++++++++++++++----------------------------- 1 file changed, 18 insertions(+), 29 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -5148,6 +5148,8 @@ static void __free_event(struct perf_eve call_rcu(&event->rcu_head, free_event_rcu); } +DEFINE_FREE(__free_event, struct perf_event *, if (_T) __free_event(_T)) + /* vs perf_event_alloc() success */ static void _free_event(struct perf_event *event) { @@ -11694,7 +11696,6 @@ perf_event_alloc(struct perf_event_attr void *context, int cgroup_fd) { struct pmu *pmu; - struct perf_event *event; struct hw_perf_event *hwc; long err = -EINVAL; int node; @@ -11709,8 +11710,8 @@ perf_event_alloc(struct perf_event_attr } node = (cpu >= 0) ? cpu_to_node(cpu) : -1; - event = kmem_cache_alloc_node(perf_event_cache, GFP_KERNEL | __GFP_ZERO, - node); + struct perf_event *event __free(__free_event) = + kmem_cache_alloc_node(perf_event_cache, GFP_KERNEL | __GFP_ZERO, node); if (!event) return ERR_PTR(-ENOMEM); @@ -11815,51 +11816,43 @@ perf_event_alloc(struct perf_event_attr * See perf_output_read(). */ if (attr->inherit && (attr->sample_type & PERF_SAMPLE_READ)) - goto err; + return ERR_PTR(-EINVAL); if (!has_branch_stack(event)) event->attr.branch_sample_type = 0; pmu = perf_init_event(event); - if (IS_ERR(pmu)) { - err = PTR_ERR(pmu); - goto err; - } + if (IS_ERR(pmu)) + return (void*)pmu; /* * Disallow uncore-task events. Similarly, disallow uncore-cgroup * events (they don't make sense as the cgroup will be different * on other CPUs in the uncore mask). 
*/ - if (pmu->task_ctx_nr == perf_invalid_context && (task || cgroup_fd != -1)) { - err = -EINVAL; - goto err; - } + if (pmu->task_ctx_nr == perf_invalid_context && (task || cgroup_fd != -1)) + return ERR_PTR(-EINVAL); if (event->attr.aux_output && - !(pmu->capabilities & PERF_PMU_CAP_AUX_OUTPUT)) { - err = -EOPNOTSUPP; - goto err; - } + !(pmu->capabilities & PERF_PMU_CAP_AUX_OUTPUT)) + return ERR_PTR(-EOPNOTSUPP); if (cgroup_fd != -1) { err = perf_cgroup_connect(cgroup_fd, event, attr, group_leader); if (err) - goto err; + return ERR_PTR(err); } err = exclusive_event_init(event); if (err) - goto err; + return ERR_PTR(err); if (has_addr_filter(event)) { event->addr_filter_ranges = kcalloc(pmu->nr_addr_filters, sizeof(struct perf_addr_filter_range), GFP_KERNEL); - if (!event->addr_filter_ranges) { - err = -ENOMEM; - goto err; - } + if (!event->addr_filter_ranges) + return ERR_PTR(-ENOMEM); /* * Clone the parent's vma offsets: they are valid until exec() @@ -11883,22 +11876,18 @@ perf_event_alloc(struct perf_event_attr if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) { err = get_callchain_buffers(attr->sample_max_stack); if (err) - goto err; + return ERR_PTR(err); } } err = security_perf_event_alloc(event); if (err) - goto err; + return ERR_PTR(err); /* symmetric to unaccount_event() in _free_event() */ account_event(event); - return event; - -err: - __free_event(event); - return ERR_PTR(err); + return_ptr(event); } static int perf_copy_attr(struct perf_event_attr __user *uattr, From patchwork Mon Jun 12 09:08:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276142 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10F36C7EE25 for ; Mon, 12 Jun 2023 09:56:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235107AbjFLJ4p (ORCPT ); Mon, 12 Jun 2023 05:56:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33430 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229519AbjFLJyu (ORCPT ); Mon, 12 Jun 2023 05:54:50 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 95C22110; Mon, 12 Jun 2023 02:39:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=zIJhFm+v7umyR/hjF6scVII/tRkyIDOCoJmZ83M51xg=; b=bwoTU3HDHIUp9ajMy0QFcq4D+D EXDKbnFuPa6qUqmoVDvtErkhWO00LMiPcJzpdvPsQMJLq9wpMasMY6nUzcVeGL9ALBqrIhtyV8EfA XKKGvO2PN5okZcYxdhdtY3o4iZ/qDy6c+5VPyKqrPh2PgoSSFDW8jI7e1PpaMe5VUMWVCAorJndtN AJ5GDlw6VsgSOVpk0HVetMuky+tVFOVVxZX11VNX4IhJ4CBNVPbArKYPU5OAaWFomWTACc/ooAiYV zzoGivkT77zIhCwiENxw5L2hhL0o+nX8tsqRKPT4FCVJkM3V5nu3PsWwe5GueAr3wFeoRM2LesPEy 6QSZaq5A==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0v-008kS7-13; Mon, 12 Jun 2023 09:39:12 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher 
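perf_event_alloc() above (patch 49/57) pairs DEFINE_FREE(__free_event, ...) with return_ptr(): every ERR_PTR() return tears the half-built event down automatically, and the fully constructed one is handed out with return_ptr(event), which disarms the cleanup as it returns. Patch 50/57 that follows extends the scheme from plain pointers to acquire/release pairs via CLASS(get_unused_fd, ...), __free(fdput) and DEFINE_CLASS(find_get_ctx, ...). A compact sketch of the allocator shape; struct foo, foo_destroy() and foo_alloc() are placeholder names.

#include <linux/cleanup.h>
#include <linux/err.h>
#include <linux/slab.h>

struct foo {
	void *buf;
};

static void foo_destroy(struct foo *f)
{
	kfree(f->buf);
	kfree(f);
}

DEFINE_FREE(foo_destroy, struct foo *, if (_T) foo_destroy(_T))

static struct foo *foo_alloc(size_t bufsz)
{
	struct foo *f __free(foo_destroy) = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return ERR_PTR(-ENOMEM);

	f->buf = kzalloc(bufsz, GFP_KERNEL);
	if (!f->buf)
		return ERR_PTR(-ENOMEM);	/* 'f' freed automatically */

	return_ptr(f);				/* success: disarm and return */
}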
TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6B4BC306137; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 1F44130A79082; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.169256651@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:03 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 50/57] perf: Simplify sys_perf_event_open() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- include/linux/file.h | 3 kernel/events/core.c | 483 +++++++++++++++++++++++---------------------------- 2 files changed, 222 insertions(+), 264 deletions(-) --- a/include/linux/file.h +++ b/include/linux/file.h @@ -84,6 +84,7 @@ static inline void fdput_pos(struct fd f } DEFINE_CLASS(fd, struct fd, fdput(_T), fdget(fd), int fd) +DEFINE_FREE(fdput, struct fd, fdput(_T)) extern int f_dupfd(unsigned int from, struct file *file, unsigned flags); extern int replace_fd(unsigned fd, struct file *file, unsigned flags); @@ -96,6 +97,8 @@ extern void put_unused_fd(unsigned int f DEFINE_CLASS(get_unused_fd, int, if (_T >= 0) put_unused_fd(_T), get_unused_fd_flags(flags), unsigned flags) +#define no_free_fd(fd) ({ int __fd = (fd); (fd) = -1; __fd; }) + extern void fd_install(unsigned int fd, struct file *file); extern int __receive_fd(struct file *file, int __user *ufd, --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1163,9 +1163,10 @@ static void perf_assert_pmu_disabled(str WARN_ON_ONCE(*this_cpu_ptr(pmu->pmu_disable_count) == 0); } -static void get_ctx(struct perf_event_context *ctx) +static struct perf_event_context *get_ctx(struct perf_event_context *ctx) { refcount_inc(&ctx->refcount); + return ctx; } static void *alloc_task_ctx_data(struct pmu *pmu) @@ -4672,9 +4673,6 @@ find_lively_task_by_vpid(pid_t 
vpid) get_task_struct(task); rcu_read_unlock(); - if (!task) - return ERR_PTR(-ESRCH); - return task; } @@ -4754,6 +4752,11 @@ find_get_context(struct task_struct *tas return ERR_PTR(err); } +DEFINE_CLASS(find_get_ctx, struct perf_event_context *, + if (!IS_ERR_OR_NULL(_T)) { perf_unpin_context(_T); put_ctx(_T); }, + find_get_context(task, event), + struct task_struct *task, struct perf_event *event) + /* * Returns a matching perf_event_pmu_context with elevated refcount or NULL. */ @@ -4836,9 +4839,10 @@ find_get_pmu_context(struct pmu *pmu, st return epc; } -static void get_pmu_ctx(struct perf_event_pmu_context *epc) +static struct perf_event_pmu_context *get_pmu_ctx(struct perf_event_pmu_context *epc) { WARN_ON_ONCE(!atomic_inc_not_zero(&epc->refcount)); + return epc; } static void free_epc_rcu(struct rcu_head *head) @@ -4881,6 +4885,8 @@ static void put_pmu_ctx(struct perf_even call_rcu(&epc->rcu_head, free_epc_rcu); } +DEFINE_FREE(put_pmu_ctx, struct perf_event_pmu_context *, if (_T) put_pmu_ctx(_T)) + static void perf_event_free_filter(struct perf_event *event); static void free_event_rcu(struct rcu_head *head) @@ -5190,6 +5196,8 @@ static void free_event(struct perf_event _free_event(event); } +DEFINE_FREE(free_event, struct perf_event *, if (!IS_ERR_OR_NULL(_T)) free_event(_T)) + /* * Remove user event from the owner task. */ @@ -5748,19 +5756,6 @@ EXPORT_SYMBOL_GPL(perf_event_period); static const struct file_operations perf_fops; -static inline struct fd perf_fdget(int fd) -{ - struct fd f = fdget(fd); - if (!f.file) - return fdnull; - - if (f.file->f_op != &perf_fops) { - fdput(f); - return fdnull; - } - return f; -} - static inline bool is_perf_fd(struct fd fd) { return fd.file && fd.file->f_op == &perf_fops; @@ -12189,19 +12184,16 @@ SYSCALL_DEFINE5(perf_event_open, pid_t, pid, int, cpu, int, group_fd, unsigned long, flags) { struct perf_event *group_leader = NULL, *output_event = NULL; - struct perf_event_pmu_context *pmu_ctx; - struct perf_event *event, *sibling; + struct perf_event *sibling; struct perf_event_attr attr; - struct perf_event_context *ctx; struct file *event_file = NULL; - struct fd group = {NULL, 0}; - struct task_struct *task = NULL; + struct task_struct *task __free(put_task) = NULL; + struct fd group __free(fdput) = fdnull; struct pmu *pmu; - int event_fd; int move_group = 0; - int err; int f_flags = O_RDWR; int cgroup_fd = -1; + int err; /* for future expandability... 
*/ if (flags & ~PERF_FLAG_ALL) @@ -12261,16 +12253,14 @@ SYSCALL_DEFINE5(perf_event_open, if (flags & PERF_FLAG_FD_CLOEXEC) f_flags |= O_CLOEXEC; - event_fd = get_unused_fd_flags(f_flags); - if (event_fd < 0) - return event_fd; + CLASS(get_unused_fd, fd)(f_flags); + if (fd < 0) + return fd; if (group_fd != -1) { - group = perf_fdget(group_fd); - if (!group.file) { - err = -EBADF; - goto err_fd; - } + group = fdget(group_fd); + if (!is_perf_fd(group)) + return -EBADF; group_leader = group.file->private_data; if (flags & PERF_FLAG_FD_OUTPUT) output_event = group_leader; @@ -12280,33 +12270,26 @@ SYSCALL_DEFINE5(perf_event_open, if (pid != -1 && !(flags & PERF_FLAG_PID_CGROUP)) { task = find_lively_task_by_vpid(pid); - if (IS_ERR(task)) { - err = PTR_ERR(task); - goto err_group_fd; - } + if (!task) + return -ESRCH; } if (task && group_leader && - group_leader->attr.inherit != attr.inherit) { - err = -EINVAL; - goto err_task; - } + group_leader->attr.inherit != attr.inherit) + return -EINVAL; if (flags & PERF_FLAG_PID_CGROUP) cgroup_fd = pid; - event = perf_event_alloc(&attr, cpu, task, group_leader, NULL, + struct perf_event *event __free(free_event) = + perf_event_alloc(&attr, cpu, task, group_leader, NULL, NULL, NULL, cgroup_fd); - if (IS_ERR(event)) { - err = PTR_ERR(event); - goto err_task; - } + if (IS_ERR(event)) + return PTR_ERR(event); if (is_sampling_event(event)) { - if (event->pmu->capabilities & PERF_PMU_CAP_NO_INTERRUPT) { - err = -EOPNOTSUPP; - goto err_alloc; - } + if (event->pmu->capabilities & PERF_PMU_CAP_NO_INTERRUPT) + return -EOPNOTSUPP; } /* @@ -12318,266 +12301,238 @@ SYSCALL_DEFINE5(perf_event_open, if (attr.use_clockid) { err = perf_event_set_clock(event, attr.clockid); if (err) - goto err_alloc; + return err; } if (pmu->task_ctx_nr == perf_sw_context) event->event_caps |= PERF_EV_CAP_SOFTWARE; - if (task) { - err = down_read_interruptible(&task->signal->exec_update_lock); - if (err) - goto err_alloc; + do { + struct rw_semaphore *exec_update_lock __free(up_read) = NULL; + if (task) { + err = down_read_interruptible(&task->signal->exec_update_lock); + if (err) + return err; + + exec_update_lock = &task->signal->exec_update_lock; + + /* + * We must hold exec_update_lock across this and any potential + * perf_install_in_context() call for this new event to + * serialize against exec() altering our credentials (and the + * perf_event_exit_task() that could imply). + */ + if (!perf_check_permission(&attr, task)) + return -EACCES; + } /* - * We must hold exec_update_lock across this and any potential - * perf_install_in_context() call for this new event to - * serialize against exec() altering our credentials (and the - * perf_event_exit_task() that could imply). + * Get the target context (task or percpu): */ - err = -EACCES; - if (!perf_check_permission(&attr, task)) - goto err_cred; - } + CLASS(find_get_ctx, ctx)(task, event); + if (IS_ERR(ctx)) + return PTR_ERR(ctx); - /* - * Get the target context (task or percpu): - */ - ctx = find_get_context(task, event); - if (IS_ERR(ctx)) { - err = PTR_ERR(ctx); - goto err_cred; - } - - mutex_lock(&ctx->mutex); + guard(mutex)(&ctx->mutex); - if (ctx->task == TASK_TOMBSTONE) { - err = -ESRCH; - goto err_locked; - } + if (ctx->task == TASK_TOMBSTONE) + return -ESRCH; - if (!task) { - /* - * Check if the @cpu we're creating an event for is online. - * - * We use the perf_cpu_context::ctx::mutex to serialize against - * the hotplug notifiers. See perf_event_{init,exit}_cpu(). 
- */ - struct perf_cpu_context *cpuctx = per_cpu_ptr(&perf_cpu_context, event->cpu); + if (!task) { + /* + * Check if the @cpu we're creating an event for is + * online. + * + * We use the perf_cpu_context::ctx::mutex to serialize + * against the hotplug notifiers. See + * perf_event_{init,exit}_cpu(). + */ + struct perf_cpu_context *cpuctx = + per_cpu_ptr(&perf_cpu_context, event->cpu); - if (!cpuctx->online) { - err = -ENODEV; - goto err_locked; + if (!cpuctx->online) + return -ENODEV; } - } - if (group_leader) { - err = -EINVAL; + if (group_leader) { + err = -EINVAL; - /* - * Do not allow a recursive hierarchy (this new sibling - * becoming part of another group-sibling): - */ - if (group_leader->group_leader != group_leader) - goto err_locked; - - /* All events in a group should have the same clock */ - if (group_leader->clock != event->clock) - goto err_locked; + /* + * Do not allow a recursive hierarchy (this new sibling + * becoming part of another group-sibling) + */ + if (group_leader->group_leader != group_leader) + return -EINVAL; - /* - * Make sure we're both events for the same CPU; - * grouping events for different CPUs is broken; since - * you can never concurrently schedule them anyhow. - */ - if (group_leader->cpu != event->cpu) - goto err_locked; + /* All events in a group should have the same clock */ + if (group_leader->clock != event->clock) + return -EINVAL; - /* - * Make sure we're both on the same context; either task or cpu. - */ - if (group_leader->ctx != ctx) - goto err_locked; + /* + * Make sure we're both events for the same CPU; + * grouping events for different CPUs is broken; since + * you can never concurrently schedule them anyhow. + */ + if (group_leader->cpu != event->cpu) + return -EINVAL; - /* - * Only a group leader can be exclusive or pinned - */ - if (attr.exclusive || attr.pinned) - goto err_locked; + /* + * Make sure we're both on the same context; either + * task or cpu. + */ + if (group_leader->ctx != ctx) + return -EINVAL; - if (is_software_event(event) && - !in_software_context(group_leader)) { /* - * If the event is a sw event, but the group_leader - * is on hw context. - * - * Allow the addition of software events to hw - * groups, this is safe because software events - * never fail to schedule. - * - * Note the comment that goes with struct - * perf_event_pmu_context. + * Only a group leader can be exclusive or pinned */ - pmu = group_leader->pmu_ctx->pmu; - } else if (!is_software_event(event)) { - if (is_software_event(group_leader) && - (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) { + if (attr.exclusive || attr.pinned) + return -EINVAL; + + if (is_software_event(event) && + !in_software_context(group_leader)) { + /* + * If the event is a sw event, but the + * group_leader is on hw context. + * + * Allow the addition of software events to hw + * groups, this is safe because software events + * never fail to schedule. + * + * Note the comment that goes with struct + * perf_event_pmu_context. + */ + pmu = group_leader->pmu_ctx->pmu; + } else if (!is_software_event(event)) { + if (is_software_event(group_leader) && + (group_leader->group_caps & PERF_EV_CAP_SOFTWARE)) { + /* + * In case the group is a pure software + * group, and we try to add a hardware + * event, move the whole group to the + * hardware context. + */ + move_group = 1; + } + /* - * In case the group is a pure software group, and we - * try to add a hardware event, move the whole group to - * the hardware context. 
+ * Don't allow group of multiple hw events from + * different pmus */ - move_group = 1; + if (!in_software_context(group_leader) && + group_leader->pmu_ctx->pmu != pmu) + return -EINVAL; } + } + + /* + * Now that we're certain of the pmu; find the pmu_ctx. + */ + struct perf_event_pmu_context *pmu_ctx __free(put_pmu_ctx) = + find_get_pmu_context(pmu, ctx, event); + if (!pmu_ctx) + return -ENOMEM; - /* Don't allow group of multiple hw events from different pmus */ - if (!in_software_context(group_leader) && - group_leader->pmu_ctx->pmu != pmu) - goto err_locked; + if (output_event) { + err = perf_event_set_output(event, output_event); + if (err) + return err; } - } - /* - * Now that we're certain of the pmu; find the pmu_ctx. - */ - pmu_ctx = find_get_pmu_context(pmu, ctx, event); - if (IS_ERR(pmu_ctx)) { - err = PTR_ERR(pmu_ctx); - goto err_locked; - } - event->pmu_ctx = pmu_ctx; + if (!perf_event_validate_size(event)) + return -E2BIG; - if (output_event) { - err = perf_event_set_output(event, output_event); - if (err) - goto err_context; - } + if (perf_need_aux_event(event) && + !perf_get_aux_event(event, group_leader)) + return -EINVAL; - if (!perf_event_validate_size(event)) { - err = -E2BIG; - goto err_context; - } + /* + * Must be under the same ctx::mutex as perf_install_in_context(), + * because we need to serialize with concurrent event creation. + */ + if (!exclusive_event_installable(event, ctx)) + return -EBUSY; - if (perf_need_aux_event(event) && !perf_get_aux_event(event, group_leader)) { - err = -EINVAL; - goto err_context; - } + WARN_ON_ONCE(ctx->parent_ctx); - /* - * Must be under the same ctx::mutex as perf_install_in_context(), - * because we need to serialize with concurrent event creation. - */ - if (!exclusive_event_installable(event, ctx)) { - err = -EBUSY; - goto err_context; - } + event_file = anon_inode_getfile("[perf_event]", &perf_fops, + event, f_flags); + if (IS_ERR(event_file)) + return PTR_ERR(event_file); - WARN_ON_ONCE(ctx->parent_ctx); + /* + * The event is now owned by event_file and will be cleaned up + * through perf_fops::release(). Similarly the fd will be linked + * to event_file and should not be put_unused_fd(). + */ - event_file = anon_inode_getfile("[perf_event]", &perf_fops, event, f_flags); - if (IS_ERR(event_file)) { - err = PTR_ERR(event_file); - event_file = NULL; - goto err_context; - } + /* + * This is the point on no return; we cannot fail hereafter. This is + * where we start modifying current state. + */ - /* - * This is the point on no return; we cannot fail hereafter. This is - * where we start modifying current state. - */ + if (move_group) { + /* + * Moves the events from one pmu to another, hence we need + * to update the pmu_ctx, but through all this the ctx + * stays the same. + */ + perf_remove_from_context(group_leader, 0); + put_pmu_ctx(group_leader->pmu_ctx); - if (move_group) { - perf_remove_from_context(group_leader, 0); - put_pmu_ctx(group_leader->pmu_ctx); + for_each_sibling_event(sibling, group_leader) { + perf_remove_from_context(sibling, 0); + put_pmu_ctx(sibling->pmu_ctx); + } - for_each_sibling_event(sibling, group_leader) { - perf_remove_from_context(sibling, 0); - put_pmu_ctx(sibling->pmu_ctx); - } + /* + * Install the group siblings before the group leader. + * + * Because a group leader will try and install the entire group + * (through the sibling list, which is still in-tact), we can + * end up with siblings installed in the wrong context. 
+ * + * By installing siblings first we NO-OP because they're not + * reachable through the group lists. + */ + for_each_sibling_event(sibling, group_leader) { + sibling->pmu_ctx = get_pmu_ctx(pmu_ctx); + perf_event__state_init(sibling); + perf_install_in_context(ctx, sibling, sibling->cpu); + } - /* - * Install the group siblings before the group leader. - * - * Because a group leader will try and install the entire group - * (through the sibling list, which is still in-tact), we can - * end up with siblings installed in the wrong context. - * - * By installing siblings first we NO-OP because they're not - * reachable through the group lists. - */ - for_each_sibling_event(sibling, group_leader) { - sibling->pmu_ctx = pmu_ctx; - get_pmu_ctx(pmu_ctx); - perf_event__state_init(sibling); - perf_install_in_context(ctx, sibling, sibling->cpu); + /* + * Removing from the context ends up with disabled + * event. What we want here is event in the initial + * startup state, ready to be add into new context. + */ + group_leader->pmu_ctx = get_pmu_ctx(pmu_ctx); + perf_event__state_init(group_leader); + perf_install_in_context(ctx, group_leader, group_leader->cpu); } /* - * Removing from the context ends up with disabled - * event. What we want here is event in the initial - * startup state, ready to be add into new context. + * Precalculate sample_data sizes; do while holding ctx::mutex such + * that we're serialized against further additions and before + * perf_install_in_context() which is the point the event is active and + * can use these values. */ - group_leader->pmu_ctx = pmu_ctx; - get_pmu_ctx(pmu_ctx); - perf_event__state_init(group_leader); - perf_install_in_context(ctx, group_leader, group_leader->cpu); - } + perf_event__header_size(event); + perf_event__id_header_size(event); - /* - * Precalculate sample_data sizes; do while holding ctx::mutex such - * that we're serialized against further additions and before - * perf_install_in_context() which is the point the event is active and - * can use these values. - */ - perf_event__header_size(event); - perf_event__id_header_size(event); + event->owner = current; - event->owner = current; + event->pmu_ctx = no_free_ptr(pmu_ctx); + perf_install_in_context(get_ctx(ctx), event, event->cpu); + } while (0); - perf_install_in_context(ctx, event, event->cpu); - perf_unpin_context(ctx); + scoped_guard (mutex, ¤t->perf_event_mutex) + list_add_tail(&event->owner_entry, ¤t->perf_event_list); - mutex_unlock(&ctx->mutex); + fd_install(fd, event_file); - if (task) { - up_read(&task->signal->exec_update_lock); - put_task_struct(task); - } - - mutex_lock(¤t->perf_event_mutex); - list_add_tail(&event->owner_entry, ¤t->perf_event_list); - mutex_unlock(¤t->perf_event_mutex); - - /* - * Drop the reference on the group_event after placing the - * new event on the sibling_list. This ensures destruction - * of the group leader will find the pointer to itself in - * perf_group_detach(). 
- */ - fdput(group); - fd_install(event_fd, event_file); - return event_fd; - -err_context: - put_pmu_ctx(event->pmu_ctx); - event->pmu_ctx = NULL; /* _free_event() */ -err_locked: - mutex_unlock(&ctx->mutex); - perf_unpin_context(ctx); - put_ctx(ctx); -err_cred: - if (task) - up_read(&task->signal->exec_update_lock); -err_alloc: - free_event(event); -err_task: - if (task) - put_task_struct(task); -err_group_fd: - fdput(group); -err_fd: - put_unused_fd(event_fd); - return err; + no_free_ptr(event); + return no_free_fd(fd); } /** From patchwork Mon Jun 12 09:08:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276158 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41356C7EE43 for ; Mon, 12 Jun 2023 09:57:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235263AbjFLJ5Z (ORCPT ); Mon, 12 Jun 2023 05:57:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33432 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234050AbjFLJyu (ORCPT ); Mon, 12 Jun 2023 05:54:50 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BEE1DF9; Mon, 12 Jun 2023 02:39:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=/A/ePFAjeMC2GglceFED5TIbtUbkLmsSQSkhfQHRtFw=; b=kFbg2Jf4qYcn0XtHtKqR2yCbMd H8amaE7vnYEWxcLgDO4zyoMl9/c1f9cvqTyiAU7MH8KX+IvKKJAqW9Sf8EIweVxdv6ScrGmGJyi0P u8+kOhhR6xdwShh7mjg3MQDWp6nr/4UiMAukJyxwqHSwNzQ+PASFrxXd7QOcVveGNYd0JZPrR4N+g EYGrvk4J11hBkeUPkxIvX/hhxL1jhGJmW4HO/81uJGpYfivRaZmHYneQuLRhD2D530TvYDj3B4wW5 K+uiqOt+PVGGlvAY8RwSAa0jFsZPi8DoURNPx2MXWp0DIkZGwYBfg4WMqhJzNkupmdNH+rBSU3Yul 4CLnKaUw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0u-008kS3-2y; Mon, 12 Jun 2023 09:39:11 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6B9DF30613A; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 234A130A79084; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.240573885@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:04 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, 
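The perf_event_open() conversion above is the canonical example of the scope-based cleanup helpers (include/linux/cleanup.h) that earlier patches in this series introduce: resources are declared with __free()/CLASS(), error paths simply return, and only the success path hands ownership out via no_free_ptr()/return_ptr(). A minimal sketch of that allocate/bail-out/hand-off shape, using an invented demo_res type rather than perf's own objects:

	#include <linux/cleanup.h>
	#include <linux/err.h>
	#include <linux/slab.h>

	struct demo_res { int id; };	/* hypothetical resource, not a kernel type */

	DEFINE_FREE(demo_free, struct demo_res *, if (_T) kfree(_T))

	static struct demo_res *demo_create(int id)
	{
		struct demo_res *r __free(demo_free) = kzalloc(sizeof(*r), GFP_KERNEL);

		if (!r)
			return ERR_PTR(-ENOMEM);
		if (id < 0)		/* any early failure: kfree() runs automatically */
			return ERR_PTR(-EINVAL);

		r->id = id;
		return_ptr(r);		/* success: disarm the cleanup, hand the pointer out */
	}

The same disarm-on-success step is what event->pmu_ctx = no_free_ptr(pmu_ctx) and no_free_ptr(event) do in the hunk above.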
alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 51/57] perf: Simplify perf_event_create_kernel_counter() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 79 ++++++++++++++++----------------------------------- 1 file changed, 26 insertions(+), 53 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -12569,12 +12569,6 @@ perf_event_create_kernel_counter(struct perf_overflow_handler_t overflow_handler, void *context) { - struct perf_event_pmu_context *pmu_ctx; - struct perf_event_context *ctx; - struct perf_event *event; - struct pmu *pmu; - int err; - /* * Grouping is not supported for kernel events, neither is 'AUX', * make sure the caller's intentions are adjusted. @@ -12582,16 +12576,16 @@ perf_event_create_kernel_counter(struct if (attr->aux_output) return ERR_PTR(-EINVAL); - event = perf_event_alloc(attr, cpu, task, NULL, NULL, + + struct perf_event *event __free(free_event) = + perf_event_alloc(attr, cpu, task, NULL, NULL, overflow_handler, context, -1); - if (IS_ERR(event)) { - err = PTR_ERR(event); - goto err; - } + if (IS_ERR(event)) + return event; /* Mark owner so we could distinguish it from user events. 
*/ event->owner = TASK_TOMBSTONE; - pmu = event->pmu; + struct pmu *pmu = event->pmu; if (pmu->task_ctx_nr == perf_sw_context) event->event_caps |= PERF_EV_CAP_SOFTWARE; @@ -12599,25 +12593,21 @@ perf_event_create_kernel_counter(struct /* * Get the target context (task or percpu): */ - ctx = find_get_context(task, event); - if (IS_ERR(ctx)) { - err = PTR_ERR(ctx); - goto err_alloc; - } + CLASS(find_get_ctx, ctx)(task, event); + if (IS_ERR(ctx)) + return (void *)ctx; WARN_ON_ONCE(ctx->parent_ctx); - mutex_lock(&ctx->mutex); - if (ctx->task == TASK_TOMBSTONE) { - err = -ESRCH; - goto err_unlock; - } + guard(mutex)(&ctx->mutex); - pmu_ctx = find_get_pmu_context(pmu, ctx, event); - if (IS_ERR(pmu_ctx)) { - err = PTR_ERR(pmu_ctx); - goto err_unlock; - } - event->pmu_ctx = pmu_ctx; + if (ctx->task == TASK_TOMBSTONE) + return ERR_PTR(-ESRCH); + + + struct perf_event_pmu_context *pmu_ctx __free(put_pmu_ctx) = + find_get_pmu_context(pmu, ctx, event); + if (!pmu_ctx) + return ERR_PTR(-ENOMEM); if (!task) { /* @@ -12628,34 +12618,17 @@ perf_event_create_kernel_counter(struct */ struct perf_cpu_context *cpuctx = container_of(ctx, struct perf_cpu_context, ctx); - if (!cpuctx->online) { - err = -ENODEV; - goto err_pmu_ctx; - } + if (!cpuctx->online) + return ERR_PTR(-ENODEV); } - if (!exclusive_event_installable(event, ctx)) { - err = -EBUSY; - goto err_pmu_ctx; - } + if (!exclusive_event_installable(event, ctx)) + return ERR_PTR(-EBUSY); + + event->pmu_ctx = no_free_ptr(pmu_ctx); + perf_install_in_context(get_ctx(ctx), event, event->cpu); - perf_install_in_context(ctx, event, event->cpu); - perf_unpin_context(ctx); - mutex_unlock(&ctx->mutex); - - return event; - -err_pmu_ctx: - put_pmu_ctx(pmu_ctx); - event->pmu_ctx = NULL; /* _free_event() */ -err_unlock: - mutex_unlock(&ctx->mutex); - perf_unpin_context(ctx); - put_ctx(ctx); -err_alloc: - free_event(event); -err: - return ERR_PTR(err); + return_ptr(event); } EXPORT_SYMBOL_GPL(perf_event_create_kernel_counter); From patchwork Mon Jun 12 09:08:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276185 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFDDBC7EE43 for ; Mon, 12 Jun 2023 09:58:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236117AbjFLJ61 (ORCPT ); Mon, 12 Jun 2023 05:58:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33410 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233781AbjFLJyr (ORCPT ); Mon, 12 Jun 2023 05:54:47 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B7EB618E; Mon, 12 Jun 2023 02:39:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=DAYbQ04kMNbr7rbW92MpBohW/rYLgWGXJr8J1HFf6A8=; b=Eql5NktjOLj2JVeZy3dcrxBiKN oWALo/DK/LD+f6T9ZB3Dgywb7GPNlY8H/tRWl4kVmO9CaHN201KZm8mVNoWCxScenFixVED6f65dd iJvjjgm/3/g6SDb8g+O4H0vFDof5sX8qGgqc87R3eMRZAzpydHhdZbVck82t+acSqNl7nYRXGl5Ya 
9Ur/TUPNUv4D5nUorGgGshJzNa7cbJ7ZIN5oDo8G7gx8X+tLK1erIaYpMHuGy/OZS4keZgugjEIeH bI+dznInCYdmNjALn5gUQRRF1Z4aRfWMRQS531g9eeeBR+lMwpK4uCY/gCSh3DlrxbhtjarAQFF9J /w8uXayg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0v-002NJJ-NW; Mon, 12 Jun 2023 09:39:05 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 6D67130613D; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 2806930A79085; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.311174114@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:05 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 52/57] perf: Simplify perf_event_init_context() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 29 +++++++++++------------------ 1 file changed, 11 insertions(+), 18 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1450,6 +1450,10 @@ static void perf_unpin_context(struct pe raw_spin_unlock_irqrestore(&ctx->lock, flags); } +DEFINE_CLASS(pin_task_ctx, struct perf_event_context *, + if (_T) { perf_unpin_context(_T); put_ctx(_T); }, + perf_pin_task_context(task), struct task_struct *task) + /* * Update the record of the current time in a context. 
*/ @@ -7939,18 +7943,13 @@ static void perf_event_addr_filters_exec void perf_event_exec(void) { - struct perf_event_context *ctx; - - ctx = perf_pin_task_context(current); + CLASS(pin_task_ctx, ctx)(current); if (!ctx) return; perf_event_enable_on_exec(ctx); perf_event_remove_on_exec(ctx); perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true); - - perf_unpin_context(ctx); - put_ctx(ctx); } struct remote_output { @@ -13226,8 +13225,7 @@ inherit_task_group(struct perf_event *ev */ static int perf_event_init_context(struct task_struct *child, u64 clone_flags) { - struct perf_event_context *child_ctx, *parent_ctx; - struct perf_event_context *cloned_ctx; + struct perf_event_context *child_ctx, *cloned_ctx; struct perf_event *event; struct task_struct *parent = current; int inherited_all = 1; @@ -13241,7 +13239,7 @@ static int perf_event_init_context(struc * If the parent's context is a clone, pin it so it won't get * swapped under us. */ - parent_ctx = perf_pin_task_context(parent); + CLASS(pin_task_ctx, parent_ctx)(parent); if (!parent_ctx) return 0; @@ -13256,7 +13254,7 @@ static int perf_event_init_context(struc * Lock the parent list. No need to lock the child - not PID * hashed yet and not running, so nobody can access it. */ - mutex_lock(&parent_ctx->mutex); + guard(mutex)(&parent_ctx->mutex); /* * We dont have to disable NMIs - we are only looking at @@ -13266,7 +13264,7 @@ static int perf_event_init_context(struc ret = inherit_task_group(event, parent, parent_ctx, child, clone_flags, &inherited_all); if (ret) - goto out_unlock; + return ret; } /* @@ -13282,7 +13280,7 @@ static int perf_event_init_context(struc ret = inherit_task_group(event, parent, parent_ctx, child, clone_flags, &inherited_all); if (ret) - goto out_unlock; + return ret; } raw_spin_lock_irqsave(&parent_ctx->lock, flags); @@ -13310,13 +13308,8 @@ static int perf_event_init_context(struc } raw_spin_unlock_irqrestore(&parent_ctx->lock, flags); -out_unlock: - mutex_unlock(&parent_ctx->mutex); - - perf_unpin_context(parent_ctx); - put_ctx(parent_ctx); - return ret; + return 0; } /* From patchwork Mon Jun 12 09:08:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9000C7EE25 for ; Mon, 12 Jun 2023 09:57:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235956AbjFLJ5w (ORCPT ); Mon, 12 Jun 2023 05:57:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234126AbjFLJyv (ORCPT ); Mon, 12 Jun 2023 05:54:51 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D262127; Mon, 12 Jun 2023 02:39:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=15bpyN9dSHomHwXMCYT6MSRj+ih+/8LUKVobKyJuC4s=; b=WpdHcwE/Bn7t5/ICIYZ5akHT1T AGnD4HMO1wGigj+LruzShUv7NXjelPX5LpFyujoZtH2Z1iHiUobvXlccWasHUjJRSMKUN6pJuLa7W 
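Both CLASS(find_get_ctx, ctx)(...) in the previous patch and the pin_task_ctx class added here come from DEFINE_CLASS(): the constructor and destructor are attached to the variable, so perf_unpin_context()/put_ctx() run on every return without an out_unlock-style label. A rough sketch of the mechanism with an invented demo_buf class (not a perf or core API):

	#include <linux/cleanup.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	DEFINE_CLASS(demo_buf, void *,
		     if (_T) kfree(_T),		/* destructor: runs when the variable leaves scope */
		     kmalloc(size, GFP_KERNEL),	/* constructor */
		     size_t size)

	static int demo_fill(size_t size)
	{
		CLASS(demo_buf, buf)(size);	/* kfree(buf) is implicit on every return below */

		if (!buf)
			return -ENOMEM;

		memset(buf, 0xff, size);
		return 0;
	}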
98n2w9oxMkYOcx1J8YVwkxgQGUICtraIL8UFVycOONamctZs/wRIQ1zZ6mvGc7in/JNVKCV8BmPaW Mp3T/auA+btE20jDoL8+8957zF8v5BL3My7WeeP51fRCJQ1lYjUk0+2L2CSuy3SJjhyoEh3qy72ef deBz9GXTdmcbd5jIugv2bSCZ+E1YG5lJOeOqkG5rApXiQCRpQ7k7XQTrt/MbairLj7a37ML4ZUZtX JHMNouzg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0x-008kTk-2m; Mon, 12 Jun 2023 09:39:15 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 7705F306158; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 2E28C30A79086; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.382527855@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:06 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 53/57] perf: Simplify perf_event_sysfs_init() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -13503,11 +13503,11 @@ static int __init perf_event_sysfs_init( struct pmu *pmu; int ret; - mutex_lock(&pmus_lock); + guard(mutex)(&pmus_lock); ret = bus_register(&pmu_bus); if (ret) - goto unlock; + return ret; list_for_each_entry(pmu, &pmus, entry) { if (pmu->dev) @@ -13517,12 +13517,8 @@ static int __init perf_event_sysfs_init( WARN(ret, "Failed to register pmu: %s, reason %d\n", pmu->name, ret); } pmu_bus_running = 1; - ret = 0; -unlock: - mutex_unlock(&pmus_lock); - - return ret; + return 0; } device_initcall(perf_event_sysfs_init); From patchwork Mon Jun 12 09:08:07 2023 Content-Type: 
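The pmus_lock hunk above shows the plain mutex guard: guard(mutex)(&lock) releases the mutex on every exit path, which is why the unlock label and the ret bookkeeping can go. A small self-contained sketch with a hypothetical lock and list (none of these names exist in perf):

	#include <linux/cleanup.h>
	#include <linux/list.h>
	#include <linux/mutex.h>

	static DEFINE_MUTEX(demo_lock);		/* hypothetical */
	static LIST_HEAD(demo_items);		/* hypothetical */

	static int demo_register(struct list_head *item)
	{
		guard(mutex)(&demo_lock);	/* mutex_unlock() on every return */

		if (!list_empty(item))		/* already linked somewhere */
			return -EBUSY;

		list_add_tail(item, &demo_items);
		return 0;
	}

Where the lock has to drop before the function ends, the series uses the block-scoped form instead, scoped_guard (mutex, &demo_lock) { ... }.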
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276181 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7733C83005 for ; Mon, 12 Jun 2023 09:58:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236071AbjFLJ6T (ORCPT ); Mon, 12 Jun 2023 05:58:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32904 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233879AbjFLJyt (ORCPT ); Mon, 12 Jun 2023 05:54:49 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7813C6194; Mon, 12 Jun 2023 02:39:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=RhYZOzjhfuM83rgB0LmGBa7KCuiBOYTp/1b5fR4CJwQ=; b=pSR3Z8fO6gr0uz5UWx2Wyw0kGx c1o1XlzkEwbDa5cj/bws9XTOvobvWPaOGxfhF5QetTedfpp/QOmEWHq0NTU6ex0rTYovD347ZGewn fH4Tw61mxcPaHR+PDhLO/3iaGemM4DeeOBHNSzaohc6qdEuO0r92dZ/ENI/jQYdUQ0UrlMQsOPZmv WIJJ61PjIIyC4LP20xi9YYGkJNkPfRYnFsCYTW+EIw29iW6OJMrGRT6MX3Vn6PkUFIPpQNLGGzPdJ gt47YzrV/4mHp6S7t5DoD0bmp1j9D59G3YN/7mt7PrQ1SjRQhxKa4QFmFcncoQdj3C2n3dN3xqTMW znFwO2KA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0x-002NLY-Ri; Mon, 12 Jun 2023 09:39:07 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 761CF30614E; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 322B430A79087; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.454144142@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:07 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, 
lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 54/57] perf: Misc cleanups References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 64 +++++++++++++++++++-------------------------------- 1 file changed, 25 insertions(+), 39 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1274,13 +1274,11 @@ perf_event_ctx_lock_nested(struct perf_e struct perf_event_context *ctx; again: - rcu_read_lock(); - ctx = READ_ONCE(event->ctx); - if (!refcount_inc_not_zero(&ctx->refcount)) { - rcu_read_unlock(); - goto again; + scoped_guard (rcu) { + ctx = READ_ONCE(event->ctx); + if (!refcount_inc_not_zero(&ctx->refcount)) + goto again; } - rcu_read_unlock(); mutex_lock_nested(&ctx->mutex, nesting); if (event->ctx != ctx) { @@ -2254,7 +2252,7 @@ event_sched_out(struct perf_event *event */ list_del_init(&event->active_list); - perf_pmu_disable(event->pmu); + guard(perf_pmu_disable)(event->pmu); event->pmu->del(event, 0); event->oncpu = -1; @@ -2288,8 +2286,6 @@ event_sched_out(struct perf_event *event ctx->nr_freq--; if (event->attr.exclusive || !cpc->active_oncpu) cpc->exclusive = 0; - - perf_pmu_enable(event->pmu); } static void @@ -3219,7 +3215,8 @@ static void __pmu_ctx_sched_out(struct p if (!event_type) return; - perf_pmu_disable(pmu); + guard(perf_pmu_disable)(pmu); + if (event_type & EVENT_PINNED) { list_for_each_entry_safe(event, tmp, &pmu_ctx->pinned_active, @@ -3239,7 +3236,6 @@ static void __pmu_ctx_sched_out(struct p */ pmu_ctx->rotate_necessary = 0; } - perf_pmu_enable(pmu); } static void @@ -3586,13 +3582,10 @@ static void __perf_pmu_sched_task(struct if (WARN_ON_ONCE(!pmu->sched_task)) return; - perf_ctx_lock(cpuctx, cpuctx->task_ctx); - perf_pmu_disable(pmu); + guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx); + guard(perf_pmu_disable)(pmu); pmu->sched_task(cpc->task_epc, sched_in); - - perf_pmu_enable(pmu); - perf_ctx_unlock(cpuctx, cpuctx->task_ctx); } static void perf_pmu_sched_task(struct task_struct *prev, @@ -12655,8 +12648,6 @@ static void __perf_pmu_install_event(str struct perf_event_context *ctx, int cpu, struct perf_event *event) { - struct perf_event_pmu_context *epc; - /* * Now that the events are unused, put their old ctx and grab a * reference on the new context. 
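guard(perf_pmu_disable)(...) in this patch is a perf-local guard; the one-pointer acquire/release case is what DEFINE_GUARD() is for (multi-argument guards such as perf_ctx_lock need a bit more plumbing). A sketch of wrapping an arbitrary disable/enable pair, with an invented device type:

	#include <linux/cleanup.h>

	struct demo_dev { int disable_depth; };	/* hypothetical device state */

	static inline void demo_disable(struct demo_dev *d) { d->disable_depth++; }
	static inline void demo_enable(struct demo_dev *d)  { d->disable_depth--; }

	DEFINE_GUARD(demo_disable, struct demo_dev *,
		     demo_disable(_T), demo_enable(_T))

	static void demo_reprogram(struct demo_dev *d)
	{
		guard(demo_disable)(d);		/* demo_enable() runs when the scope ends */

		/* ... poke the (imaginary) hardware while it is quiesced ... */
	}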
@@ -12665,8 +12656,7 @@ static void __perf_pmu_install_event(str get_ctx(ctx); event->cpu = cpu; - epc = find_get_pmu_context(pmu, ctx, event); - event->pmu_ctx = epc; + event->pmu_ctx = find_get_pmu_context(pmu, ctx, event); if (event->state >= PERF_EVENT_STATE_OFF) event->state = PERF_EVENT_STATE_INACTIVE; @@ -12815,12 +12805,12 @@ perf_event_exit_event(struct perf_event static void perf_event_exit_task_context(struct task_struct *child) { - struct perf_event_context *child_ctx, *clone_ctx = NULL; + struct perf_event_context *clone_ctx = NULL; struct perf_event *child_event, *next; WARN_ON_ONCE(child != current); - child_ctx = perf_pin_task_context(child); + CLASS(pin_task_ctx, child_ctx)(child); if (!child_ctx) return; @@ -12834,27 +12824,27 @@ static void perf_event_exit_task_context * without ctx::mutex (it cannot because of the move_group double mutex * lock thing). See the comments in perf_install_in_context(). */ - mutex_lock(&child_ctx->mutex); + guard(mutex)(&child_ctx->mutex); /* * In a single ctx::lock section, de-schedule the events and detach the * context from the task such that we cannot ever get it scheduled back * in. */ - raw_spin_lock_irq(&child_ctx->lock); - task_ctx_sched_out(child_ctx, EVENT_ALL); + scoped_guard (raw_spinlock_irq, &child_ctx->lock) { + task_ctx_sched_out(child_ctx, EVENT_ALL); - /* - * Now that the context is inactive, destroy the task <-> ctx relation - * and mark the context dead. - */ - RCU_INIT_POINTER(child->perf_event_ctxp, NULL); - put_ctx(child_ctx); /* cannot be last */ - WRITE_ONCE(child_ctx->task, TASK_TOMBSTONE); - put_task_struct(current); /* cannot be last */ + /* + * Now that the context is inactive, destroy the task <-> ctx + * relation and mark the context dead. + */ + RCU_INIT_POINTER(child->perf_event_ctxp, NULL); + put_ctx(child_ctx); /* cannot be last */ + WRITE_ONCE(child_ctx->task, TASK_TOMBSTONE); + put_task_struct(current); /* cannot be last */ - clone_ctx = unclone_ctx(child_ctx); - raw_spin_unlock_irq(&child_ctx->lock); + clone_ctx = unclone_ctx(child_ctx); + } if (clone_ctx) put_ctx(clone_ctx); @@ -12868,10 +12858,6 @@ static void perf_event_exit_task_context list_for_each_entry_safe(child_event, next, &child_ctx->event_list, event_entry) perf_event_exit_event(child_event, child_ctx); - - mutex_unlock(&child_ctx->mutex); - - put_ctx(child_ctx); } /* From patchwork Mon Jun 12 09:08:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276161 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9A3DC7EE25 for ; Mon, 12 Jun 2023 09:57:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230302AbjFLJ5f (ORCPT ); Mon, 12 Jun 2023 05:57:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33442 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234182AbjFLJyw (ORCPT ); Mon, 12 Jun 2023 05:54:52 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3A8864C19; Mon, 12 Jun 2023 02:39:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: 
Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=9OWN91/cXOLRfmVSj/c39yenMABydmgJP4BxXsEr7m4=; b=ODMSw25KlwnbIZwtM8Ooplwz3S avsg1C00hw0ZNaZy9SLLjMZe48szqJxUiobcK4beCgvel6ihAYx2CB/a5nHYQfhh+vAHV4C8Cmzs9 sHrFLrQ8JgaSA/kQmUkLW4qOpTYgvv7KZlHNVaG4mFStN9gg4I9vNmU5SCLRgi1KP1mNG2IqtUsMx Bn2lgUNR5mJw4d6qus/8zp5DlFSOnjyVpqcuU6rRQsEHrG6c+y/Y3B/BEKX+tLP83TTc2c2wVlwgW ZQhT0qliAPMSaNLZeLP4zzJpsXK4sXqa3/eS25o+5DQB5pDBPNuzZ9bbmC1N5UaAjMUFBVYOytSF3 R2+1UEzw==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0x-008kTl-30; Mon, 12 Jun 2023 09:39:15 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 783DC30615A; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 3757230A79088; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.524967360@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:08 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 55/57] perf: Simplify find_get_context() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 46 ++++++++++++++++++---------------------------- 1 file changed, 18 insertions(+), 28 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -1202,6 +1202,8 @@ static void put_ctx(struct perf_event_co } } +DEFINE_FREE(put_ctx, struct perf_event_context *, if (_T) put_ctx(_T)) + /* * Because of perf_event::ctx migration in sys_perf_event_open::move_group and * perf_pmu_migrate_context() we need some magic. 
@@ -4718,41 +4720,29 @@ find_get_context(struct task_struct *tas if (clone_ctx) put_ctx(clone_ctx); } else { - ctx = alloc_perf_context(task); - err = -ENOMEM; - if (!ctx) - goto errout; - - err = 0; - mutex_lock(&task->perf_event_mutex); - /* - * If it has already passed perf_event_exit_task(). - * we must see PF_EXITING, it takes this mutex too. - */ - if (task->flags & PF_EXITING) - err = -ESRCH; - else if (task->perf_event_ctxp) - err = -EAGAIN; - else { - get_ctx(ctx); - ++ctx->pin_count; - rcu_assign_pointer(task->perf_event_ctxp, ctx); - } - mutex_unlock(&task->perf_event_mutex); + struct perf_event_context *new __free(put_ctx) = + alloc_perf_context(task); + if (!new) + return ERR_PTR(-ENOMEM); - if (unlikely(err)) { - put_ctx(ctx); + scoped_guard (mutex, &task->perf_event_mutex) { + /* + * If it has already passed perf_event_exit_task(). + * we must see PF_EXITING, it takes this mutex too. + */ + if (task->flags & PF_EXITING) + return ERR_PTR(-ESRCH); - if (err == -EAGAIN) + if (task->perf_event_ctxp) goto retry; - goto errout; + + ctx = get_ctx(no_free_ptr(new)); + ++ctx->pin_count; + rcu_assign_pointer(task->perf_event_ctxp, ctx); } } return ctx; - -errout: - return ERR_PTR(err); } DEFINE_CLASS(find_get_ctx, struct perf_event_context *, From patchwork Mon Jun 12 09:08:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276184 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6558CC8300C for ; Mon, 12 Jun 2023 09:58:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229697AbjFLJ6Y (ORCPT ); Mon, 12 Jun 2023 05:58:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33438 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234161AbjFLJyv (ORCPT ); Mon, 12 Jun 2023 05:54:51 -0400 Received: from desiato.infradead.org (desiato.infradead.org [IPv6:2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E857E114; Mon, 12 Jun 2023 02:39:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=LlQbmNANbazrx11NDWgZGA+KsXoINGT+PoaKYdOA2yY=; b=PVxnCtl+cg0H5werzmZktd2ptY izw9imbUtFY6WKlX5m+pXGI8jB2Vpexpx25d6HydeIPlUHTQH8vTmyDyJuvg9H7K/9mL+zKKa527w ZBj+oG2yYdaX7auomnWIzG6FKkjmmb6TDINrgGuaLPq5rr4E6e0kwgIvRJMOs0JzXBrxHXfD7oZ93 6voMlbWkgwNXry+eXUwyLqaXUDmJRoEaYdt0xbRcQWgi6eS/kSaMf2kt3cephuX+5Gvj7f+reovi2 GiDQhAzvnZWli9/XSIkjtId4Vy90oRY37gyAqxmCDQfV+zfvW9AmSLNfYexJVAAjoVuDoRQBHyiyi l5rs8tjA==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by desiato.infradead.org with esmtpsa (Exim 4.96 #2 (Red Hat Linux)) id 1q8e0z-008kUm-1x; Mon, 12 Jun 2023 09:39:17 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 7B04A30615D; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) 
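The find_get_context() rewrite above allocates the new context speculatively, publishes it under task->perf_event_mutex, and relies on __free(put_ctx) to drop it whenever the function bails out or loses the race; no_free_ptr() is the point where ownership moves from the cleanup to the task. The same publish-or-free shape with invented names (demo_* is not kernel code):

	#include <linux/cleanup.h>
	#include <linux/err.h>
	#include <linux/mutex.h>
	#include <linux/slab.h>

	struct demo_ctx { int users; };			/* hypothetical */

	DEFINE_FREE(demo_ctx_free, struct demo_ctx *, if (_T) kfree(_T))

	static struct demo_ctx *demo_published;		/* hypothetical singleton */
	static DEFINE_MUTEX(demo_publish_lock);

	static struct demo_ctx *demo_get(void)
	{
		struct demo_ctx *new __free(demo_ctx_free) =
			kzalloc(sizeof(*new), GFP_KERNEL);

		if (!new)
			return ERR_PTR(-ENOMEM);

		guard(mutex)(&demo_publish_lock);
		if (!demo_published)
			demo_published = no_free_ptr(new);	/* won the race: keep the allocation */
		return demo_published;		/* lost the race: 'new' is freed on return */
	}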
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 3CAAD30A7A29D; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.598260416@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:09 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 56/57] perf: Simplify perf_pmu_output_stop() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7977,7 +7977,8 @@ static void perf_pmu_output_stop(struct int err, cpu; restart: - rcu_read_lock(); + /* cannot have a label in front of a decl */; + guard(rcu)(); list_for_each_entry_rcu(iter, &event->rb->event_list, rb_entry) { /* * For per-CPU events, we need to make sure that neither they @@ -7993,12 +7994,9 @@ static void perf_pmu_output_stop(struct continue; err = cpu_function_call(cpu, __perf_pmu_output_stop, event); - if (err == -EAGAIN) { - rcu_read_unlock(); + if (err == -EAGAIN) goto restart; - } } - rcu_read_unlock(); } /* From patchwork Mon Jun 12 09:08:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Zijlstra X-Patchwork-Id: 13276178 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03EBAC87FE4 for ; Mon, 12 Jun 2023 09:58:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236055AbjFLJ6Q (ORCPT ); Mon, 12 Jun 2023 05:58:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233889AbjFLJyt (ORCPT ); Mon, 12 Jun 2023 05:54:49 -0400 Received: from casper.infradead.org (casper.infradead.org 
[IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5A4BC4C24; Mon, 12 Jun 2023 02:39:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:MIME-Version:References: Subject:Cc:To:From:Date:Message-ID:Sender:Reply-To:Content-Transfer-Encoding: Content-ID:Content-Description:In-Reply-To; bh=3znloSDHViiBmrAdv7OWZVEw7vUxAyx0B3A3/+Qs8bI=; b=IUO9OLx1TrVgcptK2MZb4c+Sn4 EVzHTvVu+xtHeRn8FXs+4TG6lnj2b3k/me7UG7IIb1KpO4T5GPw2+qWG9oATedUzq+NM70mcT4bti HLzEMtejImfTANsC8sPs1gswHo/77zVVk6+JE8KGZDN2idZrQ91P2npzPnTaxcVqhdp7GM9v7xzqO 83Un7eDMkpuqONSKTCyszSRzP/K6jTfTnP438X5enxh5e8KELYoQkGq5KDulr1XMZqUpPzpwdVlsN B70mhv8SwI8fCgJro8rqHKdeL6YR9k4RzfmXBv/eguvp03Eee6R4yJax4MoBvm6+4AfTeHzlm/+Ou b0LlqYtg==; Received: from j130084.upc-j.chello.nl ([24.132.130.84] helo=noisy.programming.kicks-ass.net) by casper.infradead.org with esmtpsa (Exim 4.94.2 #2 (Red Hat Linux)) id 1q8e0z-002NO7-BS; Mon, 12 Jun 2023 09:39:09 +0000 Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (Client did not present a certificate) by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 822BF306161; Mon, 12 Jun 2023 11:38:53 +0200 (CEST) Received: by hirez.programming.kicks-ass.net (Postfix, from userid 0) id 4309930A7A2A1; Mon, 12 Jun 2023 11:38:49 +0200 (CEST) Message-ID: <20230612093541.669724890@infradead.org> User-Agent: quilt/0.66 Date: Mon, 12 Jun 2023 11:08:10 +0200 From: Peter Zijlstra To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com Cc: masahiroy@kernel.org, nathan@kernel.org, ndesaulniers@google.com, nicolas@fjasle.eu, catalin.marinas@arm.com, will@kernel.org, vkoul@kernel.org, trix@redhat.com, ojeda@kernel.org, peterz@infradead.org, mingo@redhat.com, longman@redhat.com, boqun.feng@gmail.com, dennis@kernel.org, tj@kernel.org, cl@linux.com, acme@kernel.org, mark.rutland@arm.com, alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org, irogers@google.com, adrian.hunter@intel.com, juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com, paulmck@kernel.org, frederic@kernel.org, quic_neeraju@quicinc.com, joel@joelfernandes.org, josh@joshtriplett.org, mathieu.desnoyers@efficios.com, jiangshanlai@gmail.com, rientjes@google.com, vbabka@suse.cz, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com, apw@canonical.com, joe@perches.com, dwaipayanray1@gmail.com, lukas.bulwahn@gmail.com, john.johansen@canonical.com, paul@paul-moore.com, jmorris@namei.org, serge@hallyn.com, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org, tglx@linutronix.de, ravi.bangoria@amd.com, error27@gmail.com, luc.vanoostenryck@gmail.com Subject: [PATCH v3 57/57] perf: Simplify perf_install_in_context() References: <20230612090713.652690195@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org Signed-off-by: Peter Zijlstra (Intel) --- kernel/events/core.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -2876,7 +2876,7 @@ perf_install_in_context(struct 
perf_even if (!task_function_call(task, __perf_install_in_context, event)) return; - raw_spin_lock_irq(&ctx->lock); + guard(raw_spinlock_irq)(&ctx->lock); task = ctx->task; if (WARN_ON_ONCE(task == TASK_TOMBSTONE)) { /* @@ -2884,19 +2884,15 @@ perf_install_in_context(struct perf_even * cannot happen), and we hold ctx->mutex, which serializes us * against perf_event_exit_task_context(). */ - raw_spin_unlock_irq(&ctx->lock); return; } /* * If the task is not running, ctx->lock will avoid it becoming so, * thus we can safely install the event. */ - if (task_curr(task)) { - raw_spin_unlock_irq(&ctx->lock); + if (task_curr(task)) goto again; - } add_event_to_ctx(event, ctx); - raw_spin_unlock_irq(&ctx->lock); } /*
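The perf_install_in_context() hunk that closes the series is representative of the whole conversion: once the lock is taken through a guard, early returns and the goto-again retry can leave the scope freely and the unlock is emitted for them. A standalone sketch of guard(raw_spinlock_irq) with made-up state:

	#include <linux/cleanup.h>
	#include <linux/spinlock.h>

	static DEFINE_RAW_SPINLOCK(demo_lock);		/* hypothetical */
	static int demo_value;

	static int demo_set(int v)
	{
		guard(raw_spinlock_irq)(&demo_lock);	/* raw_spin_unlock_irq() on every return */

		if (v < 0)
			return -EINVAL;

		demo_value = v;
		return 0;
	}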