From patchwork Sat Jun 29 08:55:59 2024
X-Patchwork-Submitter: Jinjie Ruan
X-Patchwork-Id: 13716855
From: Jinjie Ruan
Subject: [PATCH v3 1/3] entry: Add some arch funcs to support arm64 to use
 generic entry
Date: Sat, 29 Jun 2024 16:55:59 +0800
Message-ID: <20240629085601.470241-2-ruanjinjie@huawei.com>
In-Reply-To: <20240629085601.470241-1-ruanjinjie@huawei.com>
References: <20240629085601.470241-1-ruanjinjie@huawei.com>

Add some arch functions to support arm64's use of the generic entry code.
They do not affect existing architectures that already use generic entry:

- arch_prepare/post_report_syscall_entry/exit().
- arch_enter_from_kernel_mode(), arch_exit_to_kernel_mode_prepare().
- arch_irqentry_exit_need_resched() to support architecture-specific
  need_resched() check logic.

Also make syscall_exit_work() non-static and move report_single_step() to
thread_info.h, so that arm64 can use them later.

Build-tested OK on x86 and RISC-V after this patch.
Signed-off-by: Jinjie Ruan
Suggested-by: Thomas Gleixner
---
v3:
- Make the arch functions not use __weak, as Thomas suggested.
- Fold arch_forget_syscall() into arch_post_report_syscall_entry().
- __always_inline -> inline.
- Move report_single_step() to thread_info.h for arm64.
- Add Suggested-by.
- Update the commit message.
v2:
- Fix a bug where arch_post_report_syscall_entry() was not called in
  syscall_trace_enter() if ptrace_report_syscall_entry() returned nonzero.
- Update the commit message.
---
 include/linux/entry-common.h | 90 ++++++++++++++++++++++++++++++++++++
 include/linux/thread_info.h  | 13 ++++++
 kernel/entry/common.c        | 37 +++++++--------
 3 files changed, 122 insertions(+), 18 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index b0fb775a600d..2aea23ca9d66 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -290,6 +290,94 @@ static __always_inline void arch_exit_to_user_mode(void);
 static __always_inline void arch_exit_to_user_mode(void) { }
 #endif
 
+/**
+ * arch_enter_from_kernel_mode - Architecture specific check work.
+ */
+static inline void arch_enter_from_kernel_mode(struct pt_regs *regs);
+
+#ifndef arch_enter_from_kernel_mode
+static inline void arch_enter_from_kernel_mode(struct pt_regs *regs) { }
+#endif
+
+/**
+ * arch_exit_to_kernel_mode_prepare - Architecture specific final work before
+ *				      exit to kernel mode.
+ */
+static inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs);
+
+#ifndef arch_exit_to_kernel_mode_prepare
+static inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs) { }
+#endif
+
+/**
+ * arch_prepare_report_syscall_entry - Architecture specific work before
+ *				       report_syscall_entry().
+ */
+static inline unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs);
+
+#ifndef arch_prepare_report_syscall_entry
+static inline unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs)
+{
+	return 0;
+}
+#endif
+
+/**
+ * arch_post_report_syscall_entry - Architecture specific work after
+ *				    report_syscall_entry().
+ */
+static inline void arch_post_report_syscall_entry(struct pt_regs *regs,
+						  unsigned long saved_reg,
+						  long ret);
+
+#ifndef arch_post_report_syscall_entry
+static inline void arch_post_report_syscall_entry(struct pt_regs *regs,
+						  unsigned long saved_reg,
+						  long ret)
+{
+}
+#endif
+
+/**
+ * arch_prepare_report_syscall_exit - Architecture specific work before
+ *				      report_syscall_exit().
+ */
+static inline unsigned long arch_prepare_report_syscall_exit(struct pt_regs *regs,
+							     unsigned long work);
+
+#ifndef arch_prepare_report_syscall_exit
+static inline unsigned long arch_prepare_report_syscall_exit(struct pt_regs *regs,
+							     unsigned long work)
+{
+	return 0;
+}
+#endif
+
+/**
+ * arch_post_report_syscall_exit - Architecture specific work after
+ *				   report_syscall_exit().
+ */
+static inline void arch_post_report_syscall_exit(struct pt_regs *regs,
+						 unsigned long saved_reg,
+						 unsigned long work);
+
+#ifndef arch_post_report_syscall_exit
+static inline void arch_post_report_syscall_exit(struct pt_regs *regs,
+						 unsigned long saved_reg,
+						 unsigned long work)
+{
+}
+#endif
+
+/**
+ * arch_irqentry_exit_need_resched - Architecture specific need resched function
+ */
+static inline bool arch_irqentry_exit_need_resched(void);
+
+#ifndef arch_irqentry_exit_need_resched
+static inline bool arch_irqentry_exit_need_resched(void) { return true; }
+#endif
+
 /**
  * arch_do_signal_or_restart - Architecture specific signal delivery function
  * @regs: Pointer to currents pt_regs
@@ -552,4 +640,6 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs);
  */
 void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state);
 
+void syscall_exit_work(struct pt_regs *regs, unsigned long work);
+
 #endif
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 9ea0b28068f4..062de9666ef3 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -55,6 +55,19 @@ enum syscall_work_bit {
 #define SYSCALL_WORK_SYSCALL_AUDIT	BIT(SYSCALL_WORK_BIT_SYSCALL_AUDIT)
 #define SYSCALL_WORK_SYSCALL_USER_DISPATCH BIT(SYSCALL_WORK_BIT_SYSCALL_USER_DISPATCH)
 #define SYSCALL_WORK_SYSCALL_EXIT_TRAP	BIT(SYSCALL_WORK_BIT_SYSCALL_EXIT_TRAP)
+
+/*
+ * If SYSCALL_EMU is set, then the only reason to report is when
+ * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall
+ * instruction has been already reported in syscall_enter_from_user_mode().
+ */
+static inline bool report_single_step(unsigned long work)
+{
+	if (work & SYSCALL_WORK_SYSCALL_EMU)
+		return false;
+
+	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
+}
 #endif
 
 #include
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 90843cc38588..cd76391ffcb9 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -28,6 +28,7 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 long syscall_trace_enter(struct pt_regs *regs, long syscall,
 			 unsigned long work)
 {
+	unsigned long saved_reg;
 	long ret = 0;
 
 	/*
@@ -42,8 +43,10 @@ long syscall_trace_enter(struct pt_regs *regs, long syscall,
 
 	/* Handle ptrace */
 	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
+		saved_reg = arch_prepare_report_syscall_entry(regs);
 		ret = ptrace_report_syscall_entry(regs);
-		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
+		arch_post_report_syscall_entry(regs, saved_reg, ret);
+		if (ret || work & SYSCALL_WORK_SYSCALL_EMU)
 			return -1L;
 	}
 
@@ -133,21 +136,9 @@ __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 	return ti_work;
 }
 
-/*
- * If SYSCALL_EMU is set, then the only reason to report is when
- * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall
- * instruction has been already reported in syscall_enter_from_user_mode().
- */
-static inline bool report_single_step(unsigned long work)
-{
-	if (work & SYSCALL_WORK_SYSCALL_EMU)
-		return false;
-
-	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
-}
-
-static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 {
+	unsigned long saved_reg;
 	bool step;
 
 	/*
@@ -169,8 +160,11 @@ static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 		trace_sys_exit(regs, syscall_get_return_value(current, regs));
 
 	step = report_single_step(work);
-	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
+	if (step || work & SYSCALL_WORK_SYSCALL_TRACE) {
+		saved_reg = arch_prepare_report_syscall_exit(regs, work);
 		ptrace_report_syscall_exit(regs, step);
+		arch_post_report_syscall_exit(regs, saved_reg, work);
+	}
 }
 
 /*
@@ -244,6 +238,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 		return ret;
 	}
 
+	arch_enter_from_kernel_mode(regs);
+
 	/*
 	 * If this entry hit the idle task invoke ct_irq_enter() whether
 	 * RCU is watching or not.
@@ -307,7 +303,7 @@ void raw_irqentry_exit_cond_resched(void)
 		rcu_irq_exit_check_preempt();
 		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 			WARN_ON_ONCE(!on_thread_stack());
-		if (need_resched())
+		if (need_resched() && arch_irqentry_exit_need_resched())
 			preempt_schedule_irq();
 	}
 }
@@ -332,7 +328,12 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 	/* Check whether this returns to user mode */
 	if (user_mode(regs)) {
 		irqentry_exit_to_user_mode(regs);
-	} else if (!regs_irqs_disabled(regs)) {
+		return;
+	}
+
+	arch_exit_to_kernel_mode_prepare(regs);
+
+	if (!regs_irqs_disabled(regs)) {
 		/*
 		 * If RCU was not watching on entry this needs to be done
 		 * carefully and needs the same ordering of lockdep/tracing