From patchwork Mon Nov 29 14:28:43 2021
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 12693913
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: aou@eecs.berkeley.edu, borntraeger@de.ibm.com, bp@alien8.de,
    broonie@kernel.org, catalin.marinas@arm.com,
    dave.hansen@linux.intel.com, gor@linux.ibm.com, hca@linux.ibm.com,
    madvenka@linux.microsoft.com, mark.rutland@arm.com,
    mhiramat@kernel.org, mingo@redhat.com, mpe@ellerman.id.au,
    palmer@dabbelt.com, paul.walmsley@sifive.com, peterz@infradead.org,
    rostedt@goodmis.org, tglx@linutronix.de, will@kernel.org
Subject: [PATCH v2 3/9] arm64: Mark __switch_to() as __sched
Date: Mon, 29 Nov 2021 14:28:43 +0000
Message-Id: <20211129142849.3056714-4-mark.rutland@arm.com>
In-Reply-To: <20211129142849.3056714-1-mark.rutland@arm.com>
References: <20211129142849.3056714-1-mark.rutland@arm.com>

Unlike most architectures (and only in keeping with powerpc), arm64 has
a non-__sched function on the path to our cpu_switch_to() assembly
function.

It is expected that for a blocked task, in_sched_functions() can be
used to skip all functions between the raw context switch assembly and
the scheduler functions that call into __switch_to(). This is the
behaviour expected by stack_trace_consume_entry_nosched(), and the
behaviour we'd like to have such that we can simplify arm64's
__get_wchan() implementation to use arch_stack_walk().
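For reference (not part of this patch), the mechanism relied on above
is small: __sched places a function's text in the .sched.text section,
in_sched_functions() tests an address against that section's
linker-provided bounds, and the "nosched" stacktrace consumer drops any
entry for which that test is true. Lightly paraphrased from
include/linux/sched/debug.h, kernel/sched/core.c and kernel/stacktrace.c:

/* include/linux/sched/debug.h */
/* Attach to any functions which should be ignored in wchan output. */
#define __sched		__section(".sched.text")

/* The linker provides the bounds of the .sched.text section: */
extern char __sched_text_start[], __sched_text_end[];

/* kernel/sched/core.c */
int in_sched_functions(unsigned long addr)
{
	return in_lock_functions(addr) ||
		(addr >= (unsigned long)__sched_text_start &&
		 addr < (unsigned long)__sched_text_end);
}

/* kernel/stacktrace.c */
static bool stack_trace_consume_entry_nosched(void *cookie,
					      unsigned long addr)
{
	/* Skip scheduler-internal frames of a blocked task... */
	if (in_sched_functions(addr))
		return true;
	/* ...and record everything else as usual. */
	return stack_trace_consume_entry(cookie, addr);
}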
This patch marks arm64's __switch_to() as __sched. This *will not*
change the behaviour of arm64's current __get_wchan() implementation,
which always performs an initial unwind step which skips __switch_to().
This *will* change the behaviour of stack_trace_consume_entry_nosched()
and stack_trace_save_tsk() to match their expected behaviour on blocked
tasks, skipping all scheduler-internal functions including
__switch_to().

Other than the above, there should be no functional change as a result
of this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Madhavan T. Venkataraman
Cc: Peter Zijlstra
Cc: Will Deacon
Reviewed-by: Mark Brown
---
 arch/arm64/kernel/process.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index aacf2f5559a8..980cad7292af 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -490,7 +490,8 @@ void update_sctlr_el1(u64 sctlr)
 /*
  * Thread switching.
  */
-__notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+__notrace_funcgraph __sched
+struct task_struct *__switch_to(struct task_struct *prev,
 				struct task_struct *next)
 {
 	struct task_struct *last;
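With __switch_to() in .sched.text, an arch_stack_walk() based
__get_wchan() can rely on in_sched_functions() alone to classify
frames. Below is a minimal sketch of the simplification the commit
message alludes to; the callback and struct names and the 16-frame
bail-out limit are illustrative, not taken from this patch:

struct wchan_info {
	unsigned long pc;	/* first non-scheduler return address */
	int count;		/* frames examined so far */
};

static bool get_wchan_cb(void *arg, unsigned long pc)
{
	struct wchan_info *wchan_info = arg;

	/* Report the first frame outside the scheduler... */
	if (!in_sched_functions(pc)) {
		wchan_info->pc = pc;
		return false;
	}

	/* ...but give up after a bounded number of frames. */
	return wchan_info->count++ < 16;
}

unsigned long __get_wchan(struct task_struct *p)
{
	struct wchan_info wchan_info = {
		.pc = 0,
		.count = 0,
	};

	/* Pin the task's stack so it cannot be freed while we walk it. */
	if (!try_get_task_stack(p))
		return 0;

	arch_stack_walk(get_wchan_cb, &wchan_info, p, NULL);

	put_task_stack(p);

	return wchan_info.pc;
}

Without this patch, such a walk would stop immediately: __switch_to()
itself is the first frame of a blocked task, and if it is not in
.sched.text, in_sched_functions() fails to recognise it, so every
blocked task's wchan would be reported as __switch_to().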