From patchwork Thu Jan 19 14:36:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108194 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 430E2C00A5A for ; Thu, 19 Jan 2023 14:39:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231400AbjASOjQ (ORCPT ); Thu, 19 Jan 2023 09:39:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58078 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231307AbjASOjO (ORCPT ); Thu, 19 Jan 2023 09:39:14 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85DB356EC9 for ; Thu, 19 Jan 2023 06:37:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139029; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=sqdAC23AFCpafEftyGbYSrYoVeo5kk0ftc/PmTPLnCU=; b=AYYoB+EjdzjhZdTGYJ0JjP+rmib1g7oYJDTDAfww2g6yoDpUeVvJs+IB2Y13AMzYvfFna7 6V1M4jegMN5UDu0JBavkeetf2wKMwUg5kd+YFpeAtePTSEghMQQdEMQjAymuiDbuio63AA lLx/JNMEZDzvqxGDK8mkBp6C3oLxgL8= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-457-hPiD0RM2Oa6zOe2Q691O6g-1; Thu, 19 Jan 2023 09:37:04 -0500 X-MC-Unique: hPiD0RM2Oa6zOe2Q691O6g-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4585B85CCE1; Thu, 19 Jan 2023 14:37:00 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 2EE5A2026D68; Thu, 19 Jan 2023 14:36:54 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: Steven Rostedt , "Paul E. McKenney" , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , Guo Ren , "David S. 
Miller" Subject: [PATCH v4 1/7] trace: Add trace_ipi_send_cpumask() Date: Thu, 19 Jan 2023 14:36:13 +0000 Message-Id: <20230119143619.2733236-2-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org trace_ipi_raise() is unsuitable for generically tracing IPI sources due to its "reason" argument being an uninformative string (on arm64 all you get is "Function call interrupts" for SMP calls). Add a variant of it that exports a target cpumask, a callsite and a callback. Signed-off-by: Valentin Schneider Reviewed-by: Steven Rostedt (Google) --- include/trace/events/ipi.h | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/include/trace/events/ipi.h b/include/trace/events/ipi.h index 0be71dad6ec03..b1125dc27682c 100644 --- a/include/trace/events/ipi.h +++ b/include/trace/events/ipi.h @@ -35,6 +35,28 @@ TRACE_EVENT(ipi_raise, TP_printk("target_mask=%s (%s)", __get_bitmask(target_cpus), __entry->reason) ); +TRACE_EVENT(ipi_send_cpumask, + + TP_PROTO(const struct cpumask *cpumask, unsigned long callsite, void *callback), + + TP_ARGS(cpumask, callsite, callback), + + TP_STRUCT__entry( + __cpumask(cpumask) + __field(void *, callsite) + __field(void *, callback) + ), + + TP_fast_assign( + __assign_cpumask(cpumask, cpumask_bits(cpumask)); + __entry->callsite = (void *)callsite; + __entry->callback = callback; + ), + + TP_printk("cpumask=%s callsite=%pS callback=%pS", + __get_cpumask(cpumask), __entry->callsite, __entry->callback) +); + DECLARE_EVENT_CLASS(ipi_handler, TP_PROTO(const char *reason), From patchwork Thu Jan 19 14:36:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108196 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6E45BC6379F for ; Thu, 19 Jan 2023 14:39:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231538AbjASOjm (ORCPT ); Thu, 19 Jan 2023 09:39:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231359AbjASOjX (ORCPT ); Thu, 19 Jan 2023 09:39:23 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 779FB5FD4D for ; Thu, 19 Jan 2023 06:37:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139030; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=oCuy74aeG96dFLNHaRy6nwMZ69wGMBkic+YQ3uKUql4=; b=Q3cs5i3g0ih48bdHK86+Jq4q8B1LEtiKC2bP6Ut4I1DRFsHkeniMmV3X8Kqh7HMKcIqpnJ eioSatFBq+ISJlH4XHLI99sUD4MOnsKsvGwxVZ1v0OnGvdy4+1oeeDw09EujLWJl90axow I0gwtvmIedO9qxMzXR1+JabpbNUTEQQ= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 
us-mta-580-kIncDfkdPyCPUZHWbJ4FXw-1; Thu, 19 Jan 2023 09:37:07 -0500 X-MC-Unique: kIncDfkdPyCPUZHWbJ4FXw-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D14801C0A597; Thu, 19 Jan 2023 14:37:05 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 7418B2026D76; Thu, 19 Jan 2023 14:37:00 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: Steven Rostedt , Ingo Molnar , "Paul E. McKenney" , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , Guo Ren , "David S. Miller" Subject: [PATCH v4 2/7] sched, smp: Trace IPIs sent via send_call_function_single_ipi() Date: Thu, 19 Jan 2023 14:36:14 +0000 Message-Id: <20230119143619.2733236-3-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org send_call_function_single_ipi() is the thing that sends IPIs at the bottom of smp_call_function*() via either generic_exec_single() or smp_call_function_many_cond(). Give it an IPI-related tracepoint. Note that this ends up tracing any IPI sent via __smp_call_single_queue(), which covers __ttwu_queue_wakelist() and irq_work_queue_on() "for free". 
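As a quick illustration of how the three arguments exported by the new event (target mask, callsite, callback) can be consumed from kernel code, below is a minimal, hypothetical probe module. It is not part of this series, the names are made up, and it assumes the tracepoint is reachable from the probe (e.g. the probe is built in, or the event is exported with EXPORT_TRACEPOINT_SYMBOL_GPL); it relies only on the standard register_trace_<event>() helpers generated by TRACE_EVENT().

/* Hypothetical sketch: attach a probe to the ipi_send_cpumask event. */
#include <linux/module.h>
#include <linux/cpumask.h>
#include <trace/events/ipi.h>

static void probe_ipi_send(void *data, const struct cpumask *mask,
                           unsigned long callsite, void *callback)
{
        /* Runs each time the event fires; arguments mirror TP_PROTO above. */
        pr_info("IPI to CPUs %*pbl from %pS (callback=%pS)\n",
                cpumask_pr_args(mask), (void *)callsite, callback);
}

static int __init ipi_probe_init(void)
{
        return register_trace_ipi_send_cpumask(probe_ipi_send, NULL);
}

static void __exit ipi_probe_exit(void)
{
        unregister_trace_ipi_send_cpumask(probe_ipi_send, NULL);
        tracepoint_synchronize_unregister();
}

module_init(ipi_probe_init);
module_exit(ipi_probe_exit);
MODULE_LICENSE("GPL");
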
Signed-off-by: Valentin Schneider Reviewed-by: Steven Rostedt (Google) Acked-by: Ingo Molnar --- arch/arm/kernel/smp.c | 3 --- arch/arm64/kernel/smp.c | 1 - kernel/sched/core.c | 7 +++++-- kernel/smp.c | 4 ++++ 4 files changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c index 36e6efad89f30..45b8ca2ce521f 100644 --- a/arch/arm/kernel/smp.c +++ b/arch/arm/kernel/smp.c @@ -48,9 +48,6 @@ #include #include -#define CREATE_TRACE_POINTS -#include - /* * as from 2.5, kernels no longer have an init_tasks structure * so we need some other way of telling a new secondary core diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index ffc5d76cf6955..937d2623e06ba 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -51,7 +51,6 @@ #include #include -#define CREATE_TRACE_POINTS #include DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index bb1ee6d7bddea..c0d09eb249603 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -81,6 +81,7 @@ #include #include #undef CREATE_TRACE_POINTS +#include #include "sched.h" #include "stats.h" @@ -3819,10 +3820,12 @@ void send_call_function_single_ipi(int cpu) { struct rq *rq = cpu_rq(cpu); - if (!set_nr_if_polling(rq->idle)) + if (!set_nr_if_polling(rq->idle)) { + trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL); arch_send_call_function_single_ipi(cpu); - else + } else { trace_sched_wake_idle_without_ipi(cpu); + } } /* diff --git a/kernel/smp.c b/kernel/smp.c index 06a413987a14a..e2ca1e2f31274 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -26,6 +26,10 @@ #include #include +#define CREATE_TRACE_POINTS +#include +#undef CREATE_TRACE_POINTS + #include "smpboot.h" #include "sched/smp.h" From patchwork Thu Jan 19 14:36:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108195 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B48AC004D4 for ; Thu, 19 Jan 2023 14:39:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231473AbjASOjl (ORCPT ); Thu, 19 Jan 2023 09:39:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58174 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231330AbjASOjW (ORCPT ); Thu, 19 Jan 2023 09:39:22 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2C0C46469D for ; Thu, 19 Jan 2023 06:37:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139033; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=EP4LqydUSiQVulPj8M4fIZioGJfqcQn9EHXEOk9M9cI=; b=FMNTM71RjRKCdtntYFAsAvPp9MT3UGH7TYowE2Nk2W/yA9sjPh+hT95jaV30bdqtWKS6AK Ijg6IET1Pmqa9uvlG+9DoDWnZOa61xTYO6WSz9cgTZcvRt+niHtHGz01Y2szE93iN/Kzkz EIAmvIkI3RkAIJt4Vlkd3O/0Sf8hjro= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 
us-mta-38-CTfrtppuN2mnyWpz4_YWkA-1; Thu, 19 Jan 2023 09:37:12 -0500 X-MC-Unique: CTfrtppuN2mnyWpz4_YWkA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 9FA9F811E9C; Thu, 19 Jan 2023 14:37:10 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 465E82026D68; Thu, 19 Jan 2023 14:37:06 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: Steven Rostedt , "Paul E. McKenney" , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , Guo Ren , "David S. Miller" Subject: [PATCH v4 3/7] smp: Trace IPIs sent via arch_send_call_function_ipi_mask() Date: Thu, 19 Jan 2023 14:36:15 +0000 Message-Id: <20230119143619.2733236-4-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org This simply wraps around the arch function and prepends it with a tracepoint, similar to send_call_function_single_ipi(). 
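For orientation, the arch hook being wrapped is typically a one-line shim onto the platform's cross-call machinery; a hedged, abridged sketch of the arm flavour (modeled on arch/arm/kernel/smp.c, shown only to make the split between generic tracing and arch delivery clear):

void arch_send_call_function_ipi_mask(const struct cpumask *mask)
{
        /*
         * Hand the mask to the platform's IPI mechanism; IPI_CALL_FUNC is
         * the arch's "function call" message type.
         */
        smp_cross_call(mask, IPI_CALL_FUNC);
}

With the wrapper added below, every mask-based function-call IPI is traced in one place, regardless of which architecture backend ends up raising it.
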
Signed-off-by: Valentin Schneider Reviewed-by: Steven Rostedt (Google) --- kernel/smp.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/kernel/smp.c b/kernel/smp.c index e2ca1e2f31274..93b4386cd3096 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -160,6 +160,13 @@ void __init call_function_init(void) smpcfd_prepare_cpu(smp_processor_id()); } +static __always_inline void +send_call_function_ipi_mask(const struct cpumask *mask) +{ + trace_ipi_send_cpumask(mask, _RET_IP_, NULL); + arch_send_call_function_ipi_mask(mask); +} + #ifdef CONFIG_CSD_LOCK_WAIT_DEBUG static DEFINE_STATIC_KEY_FALSE(csdlock_debug_enabled); @@ -970,7 +977,7 @@ static void smp_call_function_many_cond(const struct cpumask *mask, if (nr_cpus == 1) send_call_function_single_ipi(last_cpu); else if (likely(nr_cpus > 1)) - arch_send_call_function_ipi_mask(cfd->cpumask_ipi); + send_call_function_ipi_mask(cfd->cpumask_ipi); cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->pinged, this_cpu, CFD_SEQ_NOCPU, CFD_SEQ_PINGED); } From patchwork Thu Jan 19 14:36:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108197 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BEF6C00A5A for ; Thu, 19 Jan 2023 14:40:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231271AbjASOkY (ORCPT ); Thu, 19 Jan 2023 09:40:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58712 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231727AbjASOkC (ORCPT ); Thu, 19 Jan 2023 09:40:02 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 99E7978559 for ; Thu, 19 Jan 2023 06:37:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139039; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6o4aDoMnOyw6oIKqxXD/63VXzreRgu7lohjEhFx0NNg=; b=NgYiJsJa4O/Cp2vszJgCZkvBNeul18VTAADyQ5o2sd3VExtVfGPjddZiXUyC+O1IR5m0rN FmQxthC34OCC32AXXrish/3SraJ7m5VNAA7bLzIggndoFHENvUbsD45I+/8tCVqaIWfA73 vFNy3AbLMmnOrIo4gThyzn837zUqTJI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-116-G32uewm1PsutaaFx4gNlJA-1; Thu, 19 Jan 2023 09:37:18 -0500 X-MC-Unique: G32uewm1PsutaaFx4gNlJA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 93863101A521; Thu, 19 Jan 2023 14:37:16 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id E80B32026D68; Thu, 19 Jan 2023 14:37:10 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, 
linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: Steven Rostedt , "Paul E. McKenney" , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , Guo Ren , "David S. Miller" Subject: [PATCH v4 4/7] irq_work: Trace self-IPIs sent via arch_irq_work_raise() Date: Thu, 19 Jan 2023 14:36:16 +0000 Message-Id: <20230119143619.2733236-5-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org IPIs sent to remote CPUs via irq_work_queue_on() are now covered by trace_ipi_send_cpumask(), add another instance of the tracepoint to cover self-IPIs. Signed-off-by: Valentin Schneider Reviewed-by: Steven Rostedt (Google) --- kernel/irq_work.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/kernel/irq_work.c b/kernel/irq_work.c index 7afa40fe5cc43..c33e88e32a67a 100644 --- a/kernel/irq_work.c +++ b/kernel/irq_work.c @@ -22,6 +22,8 @@ #include #include +#include + static DEFINE_PER_CPU(struct llist_head, raised_list); static DEFINE_PER_CPU(struct llist_head, lazy_list); static DEFINE_PER_CPU(struct task_struct *, irq_workd); @@ -74,6 +76,16 @@ void __weak arch_irq_work_raise(void) */ } +static __always_inline void irq_work_raise(struct irq_work *work) +{ + if (trace_ipi_send_cpumask_enabled() && arch_irq_work_has_interrupt()) + trace_ipi_send_cpumask(cpumask_of(smp_processor_id()), + _RET_IP_, + work->func); + + arch_irq_work_raise(); +} + /* Enqueue on current CPU, work must already be claimed and preempt disabled */ static void __irq_work_queue_local(struct irq_work *work) { @@ -99,7 +111,7 @@ static void __irq_work_queue_local(struct irq_work *work) /* If the work is "lazy", handle it from next tick if any */ if (!lazy_work || tick_nohz_tick_stopped()) - arch_irq_work_raise(); + irq_work_raise(work); } /* Enqueue the irq work @work on the current CPU */ From patchwork Thu Jan 19 14:36:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108198 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6392FC004D4 for ; Thu, 19 Jan 2023 14:40:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231259AbjASOkp (ORCPT ); Thu, 19 Jan 2023 09:40:45 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231738AbjASOkE (ORCPT ); Thu, 19 Jan 2023 09:40:04 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E9CD7E6BD for ; 
Thu, 19 Jan 2023 06:37:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139047; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=D1KhGYl2ByNOvmuI98l8c+cnTMqpI1cWqc4va8SaCb4=; b=eQQWJkC5oviy3Fz4rLk4hbT6fm3QxHHqW1XZeJQJuH+2rOtvRt+je/OON0pwNnzX4oIY79 62NlY2o3cGXQufpIDkFbcTOdUKpSd4v52Z2q3MC2x8XDvN4ZED6dJkLwHU1yAOcja42+IY Q/NkpOeBmtgsyHMjliKHbLJrpRhOLg8= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-17-qh-e_qHRORejt2SS9j4vpg-1; Thu, 19 Jan 2023 09:37:24 -0500 X-MC-Unique: qh-e_qHRORejt2SS9j4vpg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 55C3B87A9E9; Thu, 19 Jan 2023 14:37:22 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id CA29E2026D76; Thu, 19 Jan 2023 14:37:16 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: Guo Ren , Palmer Dabbelt , "Paul E. McKenney" , Steven Rostedt , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , "David S. Miller" Subject: [PATCH v4 5/7] treewide: Trace IPIs sent via smp_send_reschedule() Date: Thu, 19 Jan 2023 14:36:17 +0000 Message-Id: <20230119143619.2733236-6-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org To be able to trace invocations of smp_send_reschedule(), rename the arch-specific definitions of it to arch_smp_send_reschedule() and wrap it into an smp_send_reschedule() that contains a tracepoint. 
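To make the mechanics concrete, here is a hedged sketch of what a call to smp_send_reschedule() effectively compiles to once the new macro in include/linux/smp.h (see the hunk further down) is expanded; the enclosing function is purely illustrative:

static void example_kick_cpu(int cpu)          /* hypothetical caller */
{
        /* smp_send_reschedule(cpu) now expands to ... */
        trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);
        arch_smp_send_reschedule(cpu);
}
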
Changes to include the declaration of the tracepoint were driven by the following coccinelle script: @func_use@ @@ smp_send_reschedule(...); @include@ @@ #include @no_include depends on func_use && !include@ @@ #include <...> + + #include Signed-off-by: Valentin Schneider [csky bits] Acked-by: Guo Ren [riscv bits] Acked-by: Palmer Dabbelt Acked-by: Huacai Chen --- arch/alpha/kernel/smp.c | 2 +- arch/arc/kernel/smp.c | 2 +- arch/arm/kernel/smp.c | 2 +- arch/arm/mach-actions/platsmp.c | 2 ++ arch/arm64/kernel/smp.c | 2 +- arch/csky/kernel/smp.c | 2 +- arch/hexagon/kernel/smp.c | 2 +- arch/ia64/kernel/smp.c | 4 ++-- arch/loongarch/kernel/smp.c | 4 ++-- arch/mips/include/asm/smp.h | 2 +- arch/mips/kernel/rtlx-cmp.c | 2 ++ arch/openrisc/kernel/smp.c | 2 +- arch/parisc/kernel/smp.c | 4 ++-- arch/powerpc/kernel/smp.c | 6 ++++-- arch/powerpc/kvm/book3s_hv.c | 3 +++ arch/powerpc/platforms/powernv/subcore.c | 2 ++ arch/riscv/kernel/smp.c | 4 ++-- arch/s390/kernel/smp.c | 2 +- arch/sh/kernel/smp.c | 2 +- arch/sparc/kernel/smp_32.c | 2 +- arch/sparc/kernel/smp_64.c | 2 +- arch/x86/include/asm/smp.h | 2 +- arch/x86/kvm/svm/svm.c | 4 ++++ arch/x86/kvm/x86.c | 2 ++ arch/xtensa/kernel/smp.c | 2 +- include/linux/smp.h | 11 +++++++++-- virt/kvm/kvm_main.c | 2 ++ 27 files changed, 52 insertions(+), 26 deletions(-) diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c index f4e20f75438f8..38637eb9eebd5 100644 --- a/arch/alpha/kernel/smp.c +++ b/arch/alpha/kernel/smp.c @@ -562,7 +562,7 @@ handle_ipi(struct pt_regs *regs) } void -smp_send_reschedule(int cpu) +arch_smp_send_reschedule(int cpu) { #ifdef DEBUG_IPI_MSG if (cpu == hard_smp_processor_id()) diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c index ad93fe6e4b77d..409cfa4675b40 100644 --- a/arch/arc/kernel/smp.c +++ b/arch/arc/kernel/smp.c @@ -292,7 +292,7 @@ static void ipi_send_msg(const struct cpumask *callmap, enum ipi_msg_type msg) ipi_send_msg_one(cpu, msg); } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { ipi_send_msg_one(cpu, IPI_RESCHEDULE); } diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c index 45b8ca2ce521f..dea24a6e0ed6f 100644 --- a/arch/arm/kernel/smp.c +++ b/arch/arm/kernel/smp.c @@ -744,7 +744,7 @@ void __init set_smp_ipi_range(int ipi_base, int n) ipi_setup(smp_processor_id()); } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); } diff --git a/arch/arm/mach-actions/platsmp.c b/arch/arm/mach-actions/platsmp.c index f26618b435145..7b208e96fbb67 100644 --- a/arch/arm/mach-actions/platsmp.c +++ b/arch/arm/mach-actions/platsmp.c @@ -20,6 +20,8 @@ #include #include +#include + #define OWL_CPU1_ADDR 0x50 #define OWL_CPU1_FLAG 0x5c diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 937d2623e06ba..8d108edc4a89f 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -976,7 +976,7 @@ void __init set_smp_ipi_range(int ipi_base, int n) ipi_setup(smp_processor_id()); } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); } diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c index 4b605aa2e1d65..fd7f81be16dd6 100644 --- a/arch/csky/kernel/smp.c +++ b/arch/csky/kernel/smp.c @@ -140,7 +140,7 @@ void smp_send_stop(void) on_each_cpu(ipi_stop, NULL, 1); } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE); } diff --git 
a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c index 4ba93e59370c4..4e8bee25b8c68 100644 --- a/arch/hexagon/kernel/smp.c +++ b/arch/hexagon/kernel/smp.c @@ -217,7 +217,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) } } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { send_ipi(cpumask_of(cpu), IPI_RESCHEDULE); } diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c index e2cc59db86bc2..ea4f009a232b4 100644 --- a/arch/ia64/kernel/smp.c +++ b/arch/ia64/kernel/smp.c @@ -220,11 +220,11 @@ kdump_smp_send_init(void) * Called with preemption disabled. */ void -smp_send_reschedule (int cpu) +arch_smp_send_reschedule (int cpu) { ia64_send_ipi(cpu, IA64_IPI_RESCHEDULE, IA64_IPI_DM_INT, 0); } -EXPORT_SYMBOL_GPL(smp_send_reschedule); +EXPORT_SYMBOL_GPL(arch_smp_send_reschedule); /* * Called with preemption disabled. diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c index 8c6e227cb29df..83225610a1480 100644 --- a/arch/loongarch/kernel/smp.c +++ b/arch/loongarch/kernel/smp.c @@ -155,11 +155,11 @@ void loongson_send_ipi_mask(const struct cpumask *mask, unsigned int action) * it goes straight through and wastes no time serializing * anything. Worst case is that we lose a reschedule ... */ -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { loongson_send_ipi_single(cpu, SMP_RESCHEDULE); } -EXPORT_SYMBOL_GPL(smp_send_reschedule); +EXPORT_SYMBOL_GPL(arch_smp_send_reschedule); irqreturn_t loongson_ipi_interrupt(int irq, void *dev) { diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h index 5d9ff61004ca7..9806e79895d99 100644 --- a/arch/mips/include/asm/smp.h +++ b/arch/mips/include/asm/smp.h @@ -66,7 +66,7 @@ extern void calculate_cpu_foreign_map(void); * it goes straight through and wastes no time serializing * anything. Worst case is that we lose a reschedule ... */ -static inline void smp_send_reschedule(int cpu) +static inline void arch_smp_send_reschedule(int cpu) { extern const struct plat_smp_ops *mp_ops; /* private */ diff --git a/arch/mips/kernel/rtlx-cmp.c b/arch/mips/kernel/rtlx-cmp.c index d26dcc4b46e74..e991cc936c1cd 100644 --- a/arch/mips/kernel/rtlx-cmp.c +++ b/arch/mips/kernel/rtlx-cmp.c @@ -17,6 +17,8 @@ #include #include +#include + static int major; static void rtlx_interrupt(void) diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c index e1419095a6f0a..0a7a059e2dff4 100644 --- a/arch/openrisc/kernel/smp.c +++ b/arch/openrisc/kernel/smp.c @@ -173,7 +173,7 @@ void handle_IPI(unsigned int ipi_msg) } } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); } diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c index 7dbd92cafae38..b7fc859fa87db 100644 --- a/arch/parisc/kernel/smp.c +++ b/arch/parisc/kernel/smp.c @@ -246,8 +246,8 @@ void kgdb_roundup_cpus(void) inline void smp_send_stop(void) { send_IPI_allbutself(IPI_CPU_STOP); } -void -smp_send_reschedule(int cpu) { send_IPI_single(cpu, IPI_RESCHEDULE); } +void +arch_smp_send_reschedule(int cpu) { send_IPI_single(cpu, IPI_RESCHEDULE); } void smp_send_all_nop(void) diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c index 6b90f10a6c819..35f101ccb540d 100644 --- a/arch/powerpc/kernel/smp.c +++ b/arch/powerpc/kernel/smp.c @@ -61,6 +61,8 @@ #include #include +#include + #ifdef DEBUG #include #define DBG(fmt...) 
udbg_printf(fmt) @@ -364,12 +366,12 @@ static inline void do_message_pass(int cpu, int msg) #endif } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { if (likely(smp_ops)) do_message_pass(cpu, PPC_MSG_RESCHEDULE); } -EXPORT_SYMBOL_GPL(smp_send_reschedule); +EXPORT_SYMBOL_GPL(arch_smp_send_reschedule); void arch_send_call_function_single_ipi(int cpu) { diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index 6ba68dd6190bd..3b70b5f80bd56 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -43,6 +43,7 @@ #include #include #include +#include #include #include @@ -80,6 +81,8 @@ #include #include +#include + #include "book3s.h" #include "book3s_hv.h" diff --git a/arch/powerpc/platforms/powernv/subcore.c b/arch/powerpc/platforms/powernv/subcore.c index 7e98b00ea2e84..c53c4c7977680 100644 --- a/arch/powerpc/platforms/powernv/subcore.c +++ b/arch/powerpc/platforms/powernv/subcore.c @@ -20,6 +20,8 @@ #include #include +#include + #include "subcore.h" #include "powernv.h" diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c index 8c3b59f1f9b80..42e9656a1db2e 100644 --- a/arch/riscv/kernel/smp.c +++ b/arch/riscv/kernel/smp.c @@ -328,8 +328,8 @@ bool smp_crash_stop_failed(void) } #endif -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { send_ipi_single(cpu, IPI_RESCHEDULE); } -EXPORT_SYMBOL_GPL(smp_send_reschedule); +EXPORT_SYMBOL_GPL(arch_smp_send_reschedule); diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c index 0031325ce4bc9..6c4da1e26e568 100644 --- a/arch/s390/kernel/smp.c +++ b/arch/s390/kernel/smp.c @@ -553,7 +553,7 @@ void arch_send_call_function_single_ipi(int cpu) * it goes straight through and wastes no time serializing * anything. Worst case is that we lose a reschedule ... 
*/ -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { pcpu_ec_call(pcpu_devices + cpu, ec_schedule); } diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c index 65924d9ec2459..5cf35a774dc70 100644 --- a/arch/sh/kernel/smp.c +++ b/arch/sh/kernel/smp.c @@ -256,7 +256,7 @@ void __init smp_cpus_done(unsigned int max_cpus) (bogosum / (5000/HZ)) % 100); } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { mp_ops->send_ipi(cpu, SMP_MSG_RESCHEDULE); } diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c index ad8094d955eba..87eaa7719fa27 100644 --- a/arch/sparc/kernel/smp_32.c +++ b/arch/sparc/kernel/smp_32.c @@ -120,7 +120,7 @@ void cpu_panic(void) struct linux_prom_registers smp_penguin_ctable = { 0 }; -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { /* * CPU model dependent way of implementing IPI generation targeting diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c index a55295d1b9244..e5964d1d8b37d 100644 --- a/arch/sparc/kernel/smp_64.c +++ b/arch/sparc/kernel/smp_64.c @@ -1430,7 +1430,7 @@ static unsigned long send_cpu_poke(int cpu) return hv_err; } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { if (cpu == smp_processor_id()) { WARN_ON_ONCE(preemptible()); diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h index b4dbb20dab1a1..f9757123d8fa1 100644 --- a/arch/x86/include/asm/smp.h +++ b/arch/x86/include/asm/smp.h @@ -98,7 +98,7 @@ static inline void play_dead(void) smp_ops.play_dead(); } -static inline void smp_send_reschedule(int cpu) +static inline void arch_smp_send_reschedule(int cpu) { smp_ops.smp_send_reschedule(cpu); } diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c index 9a194aa1a75a4..7114f62f4846b 100644 --- a/arch/x86/kvm/svm/svm.c +++ b/arch/x86/kvm/svm/svm.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -41,6 +42,9 @@ #include #include + +#include + #include "trace.h" #include "svm.h" diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index da4bbd043a7b6..730a493d4443e 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -59,7 +59,9 @@ #include #include #include +#include +#include #include #include diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c index 4dc109dd6214e..d95907b8e4d38 100644 --- a/arch/xtensa/kernel/smp.c +++ b/arch/xtensa/kernel/smp.c @@ -389,7 +389,7 @@ void arch_send_call_function_single_ipi(int cpu) send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC); } -void smp_send_reschedule(int cpu) +void arch_smp_send_reschedule(int cpu) { send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE); } diff --git a/include/linux/smp.h b/include/linux/smp.h index a80ab58ae3f1d..c036a2228d8d0 100644 --- a/include/linux/smp.h +++ b/include/linux/smp.h @@ -125,8 +125,15 @@ extern void smp_send_stop(void); /* * sends a 'reschedule' event to another CPU: */ -extern void smp_send_reschedule(int cpu); - +extern void arch_smp_send_reschedule(int cpu); +/* + * scheduler_ipi() is inline so can't be passed as callback reason, but the + * callsite IP should be sufficient for root-causing IPIs sent from here. + */ +#define smp_send_reschedule(cpu) ({ \ + trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL); \ + arch_smp_send_reschedule(cpu); \ +}) /* * Prepare machine for booting other CPUs. 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 9c60384b5ae0b..88620f27c4f94 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -67,6 +67,8 @@ #include +#include + /* Worst case buffer size needed for holding an integer. */ #define ITOA_MAX_LEN 12 From patchwork Thu Jan 19 14:36:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108200 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46BCFC678DE for ; Thu, 19 Jan 2023 14:41:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230284AbjASOlX (ORCPT ); Thu, 19 Jan 2023 09:41:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58082 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231336AbjASOk1 (ORCPT ); Thu, 19 Jan 2023 09:40:27 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 01FCC10DA for ; Thu, 19 Jan 2023 06:37:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139053; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=QbSxuBYwyBhZhp7hgOeOy88U7e5YyI7vjVO9jj/5+zE=; b=VxejWXolYSzIQbvZGB2XtvUw8c1r1kEFME4gp0tc/xjn/bbssEYxG6sukGUE/I2MJyudAI VM6QxGO3VjCWmQPs7KOI0HxWneTT/yhk6dS5UUY1xMo9h+Z6rwjGHAU8opOwcAOeD9oEp/ +3upvDcogbciLAwcU79udLcaLHvc5WE= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-567-l4TS9XtFOxaf2EwwHWgw-g-1; Thu, 19 Jan 2023 09:37:29 -0500 X-MC-Unique: l4TS9XtFOxaf2EwwHWgw-g-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 07306857A88; Thu, 19 Jan 2023 14:37:27 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 9C6872026D68; Thu, 19 Jan 2023 14:37:22 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: "Paul E. McKenney" , Steven Rostedt , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , Guo Ren , "David S. 
Miller" Subject: [PATCH v4 6/7] smp: reword smp call IPI comment Date: Thu, 19 Jan 2023 14:36:18 +0000 Message-Id: <20230119143619.2733236-7-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org Accessing the call_single_queue hasn't involved a spinlock since 2014: 6897fc22ea01 ("kernel: use lockless list for smp_call_function_single") The llist operations (namely cmpxchg() and xchg()) provide similar ordering guarantees, update the comment to lessen confusion. Signed-off-by: Valentin Schneider --- kernel/smp.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/kernel/smp.c b/kernel/smp.c index 93b4386cd3096..821b5986721ac 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -495,9 +495,10 @@ void __smp_call_single_queue(int cpu, struct llist_node *node) #endif /* - * The list addition should be visible before sending the IPI - * handler locks the list to pull the entry off it because of - * normal cache coherency rules implied by spinlocks. + * The list addition should be visible to the target CPU when it pops + * the head of the list to pull the entry off it in the IPI handler + * because of normal cache coherency rules implied by the underlying + * llist ops. * * If IPIs can go out of order to the cache coherency protocol * in an architecture, sufficient synchronisation should be added From patchwork Thu Jan 19 14:36:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Valentin Schneider X-Patchwork-Id: 13108199 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 441F2C6379F for ; Thu, 19 Jan 2023 14:41:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231418AbjASOlW (ORCPT ); Thu, 19 Jan 2023 09:41:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59252 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231342AbjASOk2 (ORCPT ); Thu, 19 Jan 2023 09:40:28 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C060974EBF for ; Thu, 19 Jan 2023 06:37:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1674139058; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=KQsQYSoiV7axOHu2SHPkYXMBMlM95JI1QYM4ca5S0lU=; b=T9GeddsDgtaFWV38M+cW2fN9dV9tL5y/Y7PYo41pvwLNBppO6lhVgA5hk9e2MhyuXGL9OC mUtzGda2HjXRQ48bMfiSo0wrvETX8zLSdfaaDELOdCYLGVIgQlGajCa3ZO2luUqZ7AF9Xh IC4dmPJIcN+1m0iqNKf0899zJMO5M3M= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-339-SfVp0ibUOxOmeBYwt3eAmQ-1; Thu, 19 Jan 2023 09:37:34 -0500 X-MC-Unique: SfVp0ibUOxOmeBYwt3eAmQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher 
AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 8CC143815F70; Thu, 19 Jan 2023 14:37:32 +0000 (UTC) Received: from vschneid.remote.csb (unknown [10.33.36.13]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 544032026D68; Thu, 19 Jan 2023 14:37:27 +0000 (UTC) From: Valentin Schneider To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org, x86@kernel.org Cc: "Paul E. McKenney" , Steven Rostedt , Peter Zijlstra , Thomas Gleixner , Sebastian Andrzej Siewior , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Frederic Weisbecker , Ingo Molnar , Borislav Petkov , Dave Hansen , "H. Peter Anvin" , Marc Zyngier , Mark Rutland , Russell King , Nicholas Piggin , Guo Ren , "David S. Miller" Subject: [PATCH v4 7/7] sched, smp: Trace smp callback causing an IPI Date: Thu, 19 Jan 2023 14:36:19 +0000 Message-Id: <20230119143619.2733236-8-vschneid@redhat.com> In-Reply-To: <20230119143619.2733236-1-vschneid@redhat.com> References: <20230119143619.2733236-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-parisc@vger.kernel.org Context ======= The newly-introduced ipi_send_cpumask tracepoint has a "callback" parameter which so far has only been fed with NULL. While CSD_TYPE_SYNC/ASYNC and CSD_TYPE_IRQ_WORK share a similar backing struct layout (meaning their callback func can be accessed without caring about the actual CSD type), CSD_TYPE_TTWU doesn't even have a function attached to its struct. This means we need to check the type of a CSD before eventually dereferencing its associated callback. This isn't as trivial as it sounds: the CSD type is stored in __call_single_node.u_flags, which get cleared right before the callback is executed via csd_unlock(). This implies checking the CSD type before it is enqueued on the call_single_queue, as the target CPU's queue can be flushed before we get to sending an IPI. Furthermore, send_call_function_single_ipi() only has a CPU parameter, and would need to have an additional argument to trickle down the invoked function. This is somewhat silly, as the extra argument will always be pushed down to the function even when nothing is being traced, which is unnecessary overhead. Changes ======= send_call_function_single_ipi() is only used by smp.c, and is defined in sched/core.c as it contains scheduler-specific ops (set_nr_if_polling() of a CPU's idle task). Split it into two parts: the scheduler bits remain in sched/core.c, and the actual IPI emission is moved into smp.c. This lets us define an __always_inline helper function that can take the related callback as parameter without creating useless register pressure in the non-traced path which only gains a (disabled) static branch. Do the same thing for the multi IPI case. 
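To see why the SYNC/ASYNC and IRQ_WORK cases can share the callback lookup while TTWU cannot, the relevant structures are sketched below, abridged (field lists trimmed; see include/linux/smp.h and include/linux/irq_work.h for the real definitions):

/* Both keep a function pointer right behind the __call_single_node. */
struct __call_single_data {
        struct __call_single_node       node;
        smp_call_func_t                 func;
        void                            *info;
};

struct irq_work {
        struct __call_single_node       node;
        void                            (*func)(struct irq_work *);
        struct rcuwait                  irqwait;
};

/*
 * CSD_TYPE_TTWU entries, by contrast, are task_struct::wake_entry nodes with
 * no function pointer behind them, hence the explicit CSD_TYPE() check before
 * queueing in the __smp_call_single_queue() hunk below.
 */
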
Signed-off-by: Valentin Schneider --- kernel/sched/core.c | 18 +++++++----- kernel/sched/smp.h | 2 +- kernel/smp.c | 72 +++++++++++++++++++++++++++++++++------------ 3 files changed, 66 insertions(+), 26 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index c0d09eb249603..9733c3ecdbf16 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -3816,16 +3816,20 @@ void sched_ttwu_pending(void *arg) rq_unlock_irqrestore(rq, &rf); } -void send_call_function_single_ipi(int cpu) +/* + * Prepare the scene for sending an IPI for a remote smp_call + * + * Returns true if the caller can proceed with sending the IPI. + * Returns false otherwise. + */ +bool call_function_single_prep_ipi(int cpu) { - struct rq *rq = cpu_rq(cpu); - - if (!set_nr_if_polling(rq->idle)) { - trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL); - arch_send_call_function_single_ipi(cpu); - } else { + if (set_nr_if_polling(cpu_rq(cpu)->idle)) { trace_sched_wake_idle_without_ipi(cpu); + return false; } + + return true; } /* diff --git a/kernel/sched/smp.h b/kernel/sched/smp.h index 2eb23dd0f2856..21ac44428bb02 100644 --- a/kernel/sched/smp.h +++ b/kernel/sched/smp.h @@ -6,7 +6,7 @@ extern void sched_ttwu_pending(void *arg); -extern void send_call_function_single_ipi(int cpu); +extern bool call_function_single_prep_ipi(int cpu); #ifdef CONFIG_SMP extern void flush_smp_call_function_queue(void); diff --git a/kernel/smp.c b/kernel/smp.c index 821b5986721ac..5cd680a7e78ef 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -161,9 +161,18 @@ void __init call_function_init(void) } static __always_inline void -send_call_function_ipi_mask(const struct cpumask *mask) +send_call_function_single_ipi(int cpu, smp_call_func_t func) { - trace_ipi_send_cpumask(mask, _RET_IP_, NULL); + if (call_function_single_prep_ipi(cpu)) { + trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func); + arch_send_call_function_single_ipi(cpu); + } +} + +static __always_inline void +send_call_function_ipi_mask(const struct cpumask *mask, smp_call_func_t func) +{ + trace_ipi_send_cpumask(mask, _RET_IP_, func); arch_send_call_function_ipi_mask(mask); } @@ -430,12 +439,16 @@ static void __smp_call_single_queue_debug(int cpu, struct llist_node *node) struct cfd_seq_local *seq = this_cpu_ptr(&cfd_seq_local); struct call_function_data *cfd = this_cpu_ptr(&cfd_data); struct cfd_percpu *pcpu = per_cpu_ptr(cfd->pcpu, cpu); + struct __call_single_data *csd; + + csd = container_of(node, call_single_data_t, node.llist); + WARN_ON_ONCE(!(CSD_TYPE(csd) & (CSD_TYPE_SYNC | CSD_TYPE_ASYNC))); cfd_seq_store(pcpu->seq_queue, this_cpu, cpu, CFD_SEQ_QUEUE); if (llist_add(node, &per_cpu(call_single_queue, cpu))) { cfd_seq_store(pcpu->seq_ipi, this_cpu, cpu, CFD_SEQ_IPI); cfd_seq_store(seq->ping, this_cpu, cpu, CFD_SEQ_PING); - send_call_function_single_ipi(cpu); + send_call_function_single_ipi(cpu, csd->func); cfd_seq_store(seq->pinged, this_cpu, cpu, CFD_SEQ_PINGED); } else { cfd_seq_store(pcpu->seq_noipi, this_cpu, cpu, CFD_SEQ_NOIPI); @@ -477,6 +490,25 @@ static __always_inline void csd_unlock(struct __call_single_data *csd) smp_store_release(&csd->node.u_flags, 0); } +static __always_inline void +raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func) +{ + /* + * The list addition should be visible to the target CPU when it pops + * the head of the list to pull the entry off it in the IPI handler + * because of normal cache coherency rules implied by the underlying + * llist ops. 
+ * + * If IPIs can go out of order to the cache coherency protocol + * in an architecture, sufficient synchronisation should be added + * to arch code to make it appear to obey cache coherency WRT + * locking and barrier primitives. Generic code isn't really + * equipped to do the right thing... + */ + if (llist_add(node, &per_cpu(call_single_queue, cpu))) + send_call_function_single_ipi(cpu, func); +} + static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data); void __smp_call_single_queue(int cpu, struct llist_node *node) @@ -493,21 +525,25 @@ void __smp_call_single_queue(int cpu, struct llist_node *node) } } #endif - /* - * The list addition should be visible to the target CPU when it pops - * the head of the list to pull the entry off it in the IPI handler - * because of normal cache coherency rules implied by the underlying - * llist ops. - * - * If IPIs can go out of order to the cache coherency protocol - * in an architecture, sufficient synchronisation should be added - * to arch code to make it appear to obey cache coherency WRT - * locking and barrier primitives. Generic code isn't really - * equipped to do the right thing... + * We have to check the type of the CSD before queueing it, because + * once queued it can have its flags cleared by + * flush_smp_call_function_queue() + * even if we haven't sent the smp_call IPI yet (e.g. the stopper + * executes migration_cpu_stop() on the remote CPU). */ - if (llist_add(node, &per_cpu(call_single_queue, cpu))) - send_call_function_single_ipi(cpu); + if (trace_ipi_send_cpumask_enabled()) { + call_single_data_t *csd; + smp_call_func_t func; + + csd = container_of(node, call_single_data_t, node.llist); + func = CSD_TYPE(csd) == CSD_TYPE_TTWU ? + sched_ttwu_pending : csd->func; + + raw_smp_call_single_queue(cpu, node, func); + } else { + raw_smp_call_single_queue(cpu, node, NULL); + } } /* @@ -976,9 +1012,9 @@ static void smp_call_function_many_cond(const struct cpumask *mask, * provided mask. */ if (nr_cpus == 1) - send_call_function_single_ipi(last_cpu); + send_call_function_single_ipi(last_cpu, func); else if (likely(nr_cpus > 1)) - send_call_function_ipi_mask(cfd->cpumask_ipi); + send_call_function_ipi_mask(cfd->cpumask_ipi, func); cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->pinged, this_cpu, CFD_SEQ_NOCPU, CFD_SEQ_PINGED); }
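A closing note on the trace_ipi_send_cpumask_enabled() checks used in this patch and in the irq_work one: this is the static-key guard that TRACE_EVENT() generates for every event, and it is what keeps the callback resolution off the untraced fast path mentioned in the changelog. A generic, hypothetical sketch of the idiom (the helper name is made up):

#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/smp.h>
#include <trace/events/ipi.h>

/*
 * While the event is disabled, trace_ipi_send_cpumask_enabled() is backed by
 * a static key and compiles to a no-op jump, so resolving @func and the trace
 * call cost nothing on the common path.
 */
static __always_inline void trace_single_cpu_ipi(int cpu, smp_call_func_t func)
{
        if (trace_ipi_send_cpumask_enabled())
                trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
}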