From patchwork Wed Nov 17 14:07:31 2021
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 12692900
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: aou@eecs.berkeley.edu, borntraeger@de.ibm.com, bp@alien8.de,
    broonie@kernel.org, catalin.marinas@arm.com,
    dave.hansen@linux.intel.com, gor@linux.ibm.com, hca@linux.ibm.com,
    linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com,
    mark.rutland@arm.com, mhiramat@kernel.org, mingo@redhat.com,
    mpe@ellerman.id.au, palmer@dabbelt.com, paul.walmsley@sifive.com,
    peterz@infradead.org, rostedt@goodmis.org, tglx@linutronix.de,
    will@kernel.org
Subject: [PATCH 3/9] arm64: Mark __switch_to() as __sched
Date: Wed, 17 Nov 2021 14:07:31 +0000
Message-Id: <20211117140737.44420-4-mark.rutland@arm.com>
In-Reply-To: <20211117140737.44420-1-mark.rutland@arm.com>
References: <20211117140737.44420-1-mark.rutland@arm.com>
Unlike most architectures (and only in keeping with powerpc), arm64 has a
function that is not marked __sched on the path to our cpu_switch_to()
assembly function.

It is expected that for a blocked task, in_sched_functions() can be used
to skip all functions between the raw context switch assembly and the
scheduler functions that call into __switch_to(). This is the behaviour
expected by stack_trace_consume_entry_nosched(), and the behaviour we'd
like to have so that we can simplify arm64's __get_wchan() implementation
to use arch_stack_walk().

This patch marks arm64's __switch_to() as __sched. This *will not* change
the behaviour of arm64's current __get_wchan() implementation, which
always performs an initial unwind step that skips __switch_to(). This
*will* change the behaviour of stack_trace_consume_entry_nosched() and
stack_trace_save_tsk() to match their expected behaviour on blocked
tasks, skipping all scheduler-internal functions including __switch_to().

Other than the above, there should be no functional change as a result of
this patch.

Signed-off-by: Mark Rutland
Cc: Catalin Marinas
Cc: Madhavan T. Venkataraman
Cc: Peter Zijlstra
Cc: Will Deacon
---
 arch/arm64/kernel/process.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index aacf2f5559a8..980cad7292af 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -490,7 +490,8 @@ void update_sctlr_el1(u64 sctlr)
 /*
  * Thread switching.
  */
-__notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+__notrace_funcgraph __sched
+struct task_struct *__switch_to(struct task_struct *prev,
 				struct task_struct *next)
 {
 	struct task_struct *last;
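
For reference, the mechanism this patch relies on is an address-range
check: the __sched annotation places a function's text in the
.sched.text section, and in_sched_functions() tests whether an address
falls inside that section. The sketch below is illustrative only, a
paraphrase of include/linux/sched/debug.h and kernel/sched/core.c at
the time of writing, and is not part of this patch:

/* Attach to any functions which should be ignored in wchan output. */
#define __sched __section(".sched.text")

/* Bounds of .sched.text, provided by the kernel linker script. */
extern char __sched_text_start[], __sched_text_end[];

/*
 * True if @addr lies within scheduler (or locking) text. Unwind
 * consumers such as stack_trace_consume_entry_nosched() use this to
 * drop scheduler-internal frames when walking a blocked task's stack,
 * which is why __switch_to() must live in .sched.text to be skipped.
 */
int in_sched_functions(unsigned long addr)
{
	return in_lock_functions(addr) ||
		(addr >= (unsigned long)__sched_text_start &&
		 addr < (unsigned long)__sched_text_end);
}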