From patchwork Tue Jun 25 09:27:57 2024
X-Patchwork-Submitter: Jinjie Ruan
X-Patchwork-Id: 13710801
From: Jinjie Ruan <ruanjinjie@huawei.com>
Subject: [PATCH 1/3] entry: Add some arch funcs to support arm64 to use
 generic entry
Date: Tue, 25 Jun 2024 17:27:57 +0800
Message-ID: <20240625092759.1533875-2-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240625092759.1533875-1-ruanjinjie@huawei.com>
References: <20240625092759.1533875-1-ruanjinjie@huawei.com>

Add some arch functions to support arm64 in using the generic entry code:

- Add the arch_prepare/post_report_syscall_entry/exit(),
  arch_enter_from_kernel_mode(), arch_exit_to_kernel_mode_prepare() and
  arch_irqentry_exit_need_resched() arch functions to support
  architecture-specific actions, which do not affect existing
  architectures that use the generic entry code.

- Make report_single_step() and syscall_exit_work() not static.

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 include/linux/entry-common.h | 51 ++++++++++++++++++++++++++++++++++++
 kernel/entry/common.c        | 49 +++++++++++++++++++++++++++++-----
 2 files changed, 94 insertions(+), 6 deletions(-)
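(Review context, not part of the patch: a possible arm64-side implementation
of the two syscall-entry hooks is sketched below. It mirrors the existing
report_syscall() dance in arch/arm64/kernel/ptrace.c, which clobbers x7
(ip/r12 for AArch32) to tell the tracer whether a stop is a syscall entry or
exit and restores the old value afterwards; PTRACE_SYSCALL_ENTER is assumed
from the ptrace_syscall_dir enum defined there.)

unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs)
{
	int regno = (is_compat_task() ? 12 : 7);
	unsigned long saved_reg = regs->regs[regno];

	/* Clobber the ABI register to flag a syscall-entry stop. */
	regs->regs[regno] = PTRACE_SYSCALL_ENTER;
	return saved_reg;
}

void arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg)
{
	int regno = (is_compat_task() ? 12 : 7);

	/* Restore the clobbered register once the tracer has run. */
	regs->regs[regno] = saved_reg;
}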
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index b0fb775a600d..1be4c3d91995 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -84,6 +84,18 @@ static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs);
 static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs) {}
 #endif
 
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs);
+
+#ifndef arch_enter_from_kernel_mode
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs) {}
+#endif
+
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs);
+
+#ifndef arch_exit_to_kernel_mode_prepare
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs) {}
+#endif
+
 /**
  * enter_from_user_mode - Establish state when coming from user mode
  *
@@ -298,6 +310,42 @@ static __always_inline void arch_exit_to_user_mode(void) { }
  */
 void arch_do_signal_or_restart(struct pt_regs *regs);
 
+/**
+ * arch_irqentry_exit_need_resched - Architecture specific need resched function
+ */
+bool arch_irqentry_exit_need_resched(void);
+
+/**
+ * arch_prepare_report_syscall_entry - Architecture specific report_syscall_entry()
+ *                                     prepare function
+ */
+unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs);
+
+/**
+ * arch_post_report_syscall_entry - Architecture specific report_syscall_entry()
+ *                                  post function
+ */
+void arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg);
+
+/**
+ * arch_prepare_report_syscall_exit - Architecture specific report_syscall_exit()
+ *                                    prepare function
+ */
+unsigned long arch_prepare_report_syscall_exit(struct pt_regs *regs, unsigned long work);
+
+/**
+ * arch_post_report_syscall_exit - Architecture specific report_syscall_exit()
+ *                                 post function
+ */
+void arch_post_report_syscall_exit(struct pt_regs *regs, unsigned long saved_reg,
+				   unsigned long work);
+
+/**
+ * arch_forget_syscall - Architecture specific function called if
+ *                       ptrace_report_syscall_entry() return nonzero
+ */
+void arch_forget_syscall(struct pt_regs *regs);
+
 /**
  * exit_to_user_mode_loop - do any pending work before leaving to user space
  */
@@ -552,4 +600,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs);
  */
 void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state);
 
+bool report_single_step(unsigned long work);
+void syscall_exit_work(struct pt_regs *regs, unsigned long work);
+
 #endif
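(Review context, not part of the patch: the two kernel-mode hooks above reuse
the override convention already visible in this header for
arch_enter_from_user_mode() -- the declaration is unconditional, and the empty
static-inline fallback is compiled unless the architecture shadows the name
with a #define in its asm/entry-common.h. A hypothetical opt-in would look
like the following sketch; the body is a placeholder.)

static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs)
{
	/* Architecture-specific kernel-entry bookkeeping would go here. */
}
#define arch_enter_from_kernel_mode arch_enter_from_kernel_mode

(Likewise for arch_irqentry_exit_need_resched(): the __weak default added in
kernel/entry/common.c below returns true, so existing generic-entry
architectures keep preempting on IRQ exit as before, while an architecture can
veto preemption. A sketch of a possible arm64-style override follows, with the
condition assumed from the pseudo-NMI check in today's
arm64_preempt_schedule_irq():)

bool arch_irqentry_exit_need_resched(void)
{
	/*
	 * With IRQ priority masking (pseudo-NMI), normal interrupts can
	 * remain masked via ICC_PMR_EL1 even when DAIF would allow them,
	 * in which case preempting now would be premature.
	 */
	if (system_uses_irq_prio_masking() &&
	    read_sysreg(icc_pmr_el1) != GIC_PRIO_IRQON)
		return false;

	return true;
}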
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 90843cc38588..c524cf7f86f8 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -25,9 +25,14 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 	}
 }
 
+unsigned long __weak arch_prepare_report_syscall_entry(struct pt_regs *regs) { return 0; }
+void __weak arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg) { }
+void __weak arch_forget_syscall(struct pt_regs *regs) { };
+
 long syscall_trace_enter(struct pt_regs *regs, long syscall,
 			 unsigned long work)
 {
+	unsigned long saved_reg;
 	long ret = 0;
 
 	/*
@@ -42,8 +47,14 @@ long syscall_trace_enter(struct pt_regs *regs, long syscall,
 
 	/* Handle ptrace */
 	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
+		saved_reg = arch_prepare_report_syscall_entry(regs);
 		ret = ptrace_report_syscall_entry(regs);
-		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
+		if (ret) {
+			arch_forget_syscall(regs);
+			return -1L;
+		}
+		arch_post_report_syscall_entry(regs, saved_reg);
+		if (work & SYSCALL_WORK_SYSCALL_EMU)
 			return -1L;
 	}
 
@@ -138,7 +149,7 @@ __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
  * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall
  * instruction has been already reported in syscall_enter_from_user_mode().
  */
-static inline bool report_single_step(unsigned long work)
+inline bool report_single_step(unsigned long work)
 {
 	if (work & SYSCALL_WORK_SYSCALL_EMU)
 		return false;
@@ -146,8 +157,22 @@ static inline bool report_single_step(unsigned long work)
 	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
 }
 
-static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+unsigned long __weak arch_prepare_report_syscall_exit(struct pt_regs *regs,
+						      unsigned long work)
 {
+	return 0;
+}
+
+void __weak arch_post_report_syscall_exit(struct pt_regs *regs,
+					  unsigned long saved_reg,
+					  unsigned long work)
+{
+
+}
+
+void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+{
+	unsigned long saved_reg;
 	bool step;
 
 	/*
@@ -169,8 +194,11 @@ static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 		trace_sys_exit(regs, syscall_get_return_value(current, regs));
 
 	step = report_single_step(work);
-	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
+	if (step || work & SYSCALL_WORK_SYSCALL_TRACE) {
+		saved_reg = arch_prepare_report_syscall_exit(regs, work);
 		ptrace_report_syscall_exit(regs, step);
+		arch_post_report_syscall_exit(regs, saved_reg, work);
+	}
 }
 
 /*
@@ -244,6 +272,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 		return ret;
 	}
 
+	arch_enter_from_kernel_mode(regs);
+
 	/*
	 * If this entry hit the idle task invoke ct_irq_enter() whether
 	 * RCU is watching or not.
@@ -300,6 +330,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
 	return ret;
 }
 
+bool __weak arch_irqentry_exit_need_resched(void) { return true; }
+
 void raw_irqentry_exit_cond_resched(void)
 {
 	if (!preempt_count()) {
@@ -307,7 +339,7 @@ void raw_irqentry_exit_cond_resched(void)
 		rcu_irq_exit_check_preempt();
 		if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 			WARN_ON_ONCE(!on_thread_stack());
-		if (need_resched())
+		if (need_resched() && arch_irqentry_exit_need_resched())
 			preempt_schedule_irq();
 	}
 }
@@ -332,7 +364,12 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
 	/* Check whether this returns to user mode */
 	if (user_mode(regs)) {
 		irqentry_exit_to_user_mode(regs);
-	} else if (!regs_irqs_disabled(regs)) {
+		return;
+	}
+
+	arch_exit_to_kernel_mode_prepare(regs);
+
+	if (!regs_irqs_disabled(regs)) {
 		/*
 		 * If RCU was not watching on entry this needs to be done
 		 * carefully and needs the same ordering of lockdep/tracing