From patchwork Wed Oct 27 23:34:08 2021
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12588985
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Mark Rutland, Quentin Perret, Catalin Marinas,
 James Morse, Will Deacon, Frederic Weisbecker, Peter Zijlstra, Kees Cook
Subject: [PATCH v5 1/2] static_call: force symbol references with external linkage for CFI/LTO
Date: Thu, 28 Oct 2021 01:34:08 +0200
Message-Id: <20211027233409.902331-2-ardb@kernel.org>
In-Reply-To: <20211027233409.902331-1-ardb@kernel.org>
References: <20211027233409.902331-1-ardb@kernel.org>

When building with Clang with CFI or LTO enabled, the linker may decide
not to emit function symbols with static linkage at all, or emit them
under a different symbol name. This breaks static calls, given that we
refer to such functions both from C code and from assembler, and we
expect the names to be the same.

So let's force the use of an alias with external linkage in a way that
is visible to the compiler. This ensures that the C name and the asm
name are identical.

Signed-off-by: Ard Biesheuvel
---
 include/linux/static_call.h | 21 ++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/static_call.h b/include/linux/static_call.h
index 3e56a9751c06..19dc210214c0 100644
--- a/include/linux/static_call.h
+++ b/include/linux/static_call.h
@@ -327,10 +327,27 @@ static inline int static_call_text_reserved(void *start, void *end)
 
 #endif /* CONFIG_HAVE_STATIC_CALL */
 
+#ifdef CONFIG_LTO
+/*
+ * DEFINE_STATIC_CALL() accepts any function symbol reference for its _func
+ * argument, but this may cause problems under Clang LTO/CFI if the function
+ * symbol has static linkage, because the symbol names exposed at the
+ * asm/object level may deviate from the C names. So let's force the reference
+ * to go via an alias with external linkage instead.
+ */
+#define _DEFINE_STATIC_CALL(name, _func, _init, _alias)		\
+	extern typeof(_func) _alias __alias(_init);			\
+	__DEFINE_STATIC_CALL(name, _func, _alias)
+#else
+#define _DEFINE_STATIC_CALL(name, _func, _init, _alias)		\
+	__DEFINE_STATIC_CALL(name, _func, _init)
+#endif
+
 #define DEFINE_STATIC_CALL(name, _func)					\
-	__DEFINE_STATIC_CALL(name, _func, _func)
+	_DEFINE_STATIC_CALL(name, _func, _func, __UNIQUE_ID(_func))
 
 #define DEFINE_STATIC_CALL_RET0(name, _func)				\
-	__DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+	_DEFINE_STATIC_CALL(name, _func, __static_call_return0,	\
+			    __UNIQUE_ID(_func))
 
 #endif /* _LINUX_STATIC_CALL_H */
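
To make the mechanism concrete, here is roughly what the CONFIG_LTO branch
expands to for a static function. This is an illustrative sketch, not part
of the patch: the function and key names are made up, and the alias is
shown as a fixed token even though __UNIQUE_ID() would actually generate a
name with a compiler-chosen counter (e.g. __UNIQUE_ID_my_func42).

	static int my_func(int x)
	{
		return x + 1;
	}

	/* DEFINE_STATIC_CALL(my_key, my_func) now expands to roughly: */
	extern typeof(my_func) __UNIQUE_ID_my_func42 __alias(my_func);
	__DEFINE_STATIC_CALL(my_key, my_func, __UNIQUE_ID_my_func42);

The __alias() attribute pins the new external-linkage symbol to the static
function's definition, and the static_call key is initialized through that
alias, so the name the asm-level plumbing references survives even if
LTO/CFI renames or drops the original my_func symbol.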

From patchwork Wed Oct 27 23:34:09 2021
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 12588987
From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Mark Rutland, Quentin Perret, Catalin Marinas,
 James Morse, Will Deacon, Frederic Weisbecker, Peter Zijlstra, Kees Cook
Subject: [PATCH v5 2/2] arm64: implement support for static call trampolines
Date: Thu, 28 Oct 2021 01:34:09 +0200
Message-Id: <20211027233409.902331-3-ardb@kernel.org>
In-Reply-To: <20211027233409.902331-1-ardb@kernel.org>
References: <20211027233409.902331-1-ardb@kernel.org>

Implement arm64 support for the 'unoptimized' static call variety, which
routes all calls through a single trampoline that is patched to perform a
tail call to the selected function.

It is expected that the direct branch instruction will be able to cover
the common case. However, given that static call targets may be located
in modules loaded out of direct branching range, we need a fallback path
that loads the address into R16 and uses a branch-to-register (BR)
instruction to perform an indirect call.

Unlike on x86, there is no pressing need on arm64 to avoid indirect calls
at all cost, but hiding the indirect call from the compiler as is done
here does have some benefits:

- the literal is located in .text, which gives us the same robustness
  advantage that code patching does;

- no performance hit on CFI-enabled Clang builds that decorate
  compiler-emitted indirect calls with branch target validity checks.
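
For orientation, this is how a static call declared through the existing
generic API ends up exercising the trampoline this patch adds; a minimal
sketch with hypothetical names (default_read, pmu_read, driver_read), the
static_call()/static_call_update() interface itself is the stock one:

	#include <linux/static_call.h>

	static int default_read(int counter)
	{
		return counter;
	}

	/* Emits the trampoline below, with "b default_read" as its body. */
	DEFINE_STATIC_CALL(pmu_read, default_read);

	int read_counter(int counter)
	{
		/* Compiles to a plain direct call to the trampoline. */
		return static_call(pmu_read)(counter);
	}

	/*
	 * Retargeting later, e.g. when a driver loads, rewrites the
	 * trampoline's tail call (or its literal, if the new target is
	 * out of direct branching range):
	 *
	 *	static_call_update(pmu_read, &driver_read);
	 */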

Acked-by: Peter Zijlstra
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/static_call.h | 40 +++++++++++
 arch/arm64/kernel/patching.c         | 72 +++++++++++++++++++-
 arch/arm64/kernel/vmlinux.lds.S      |  1 +
 4 files changed, 111 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 228f39a35908..d9caa83b0f9f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -193,6 +193,7 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_PREEMPT_DYNAMIC
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_STATIC_CALL
 	select HAVE_FUNCTION_ARG_ACCESS_API
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select MMU_GATHER_RCU_TABLE_FREE
diff --git a/arch/arm64/include/asm/static_call.h b/arch/arm64/include/asm/static_call.h
new file mode 100644
index 000000000000..b8b168174c52
--- /dev/null
+++ b/arch/arm64/include/asm/static_call.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_STATIC_CALL_H
+#define _ASM_STATIC_CALL_H
+
+/*
+ * The sequence below is laid out in a way that guarantees that the literal and
+ * the instruction are always covered by the same cacheline, and can be updated
+ * using a single store-pair instruction (if we rewrite the BTI C instruction
+ * as well). This means the literal and the instruction are always in sync when
+ * observed via the D-side.
+ *
+ * However, this does not guarantee that the I-side will catch up immediately
+ * as well: until the I-cache maintenance completes, CPUs may branch to the old
+ * target, or execute a stale NOP or RET. We deal with this by writing the
+ * literal unconditionally, even if it is 0x0 or the branch is in range. That
+ * way, a stale NOP will fall through and call the new target via an indirect
+ * call. Stale RETs or Bs will be taken as before, and branch to the old
+ * target until the I-side catches up.
+ */
+#define __ARCH_DEFINE_STATIC_CALL_TRAMP(name, insn)			    \
+	asm("	.pushsection	.static_call.text, \"ax\"		\n" \
+	    "	.align		4					\n" \
+	    "	.globl		" STATIC_CALL_TRAMP_STR(name) "		\n" \
+	    "0:	.quad		0x0					\n" \
+	    STATIC_CALL_TRAMP_STR(name) ":				\n" \
+	    "	hint	34	/* BTI C */				\n" \
+	    insn "							\n" \
+	    "	ldr	x16, 0b						\n" \
+	    "	cbz	x16, 1f						\n" \
+	    "	br	x16						\n" \
+	    "1:	ret							\n" \
+	    "	.popsection						\n")
+
+#define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func)			\
+	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "b " #func)
+
+#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)			\
+	__ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret")
+
+#endif /* _ASM_STATIC_CALL_H */
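
The same-cacheline/store-pair guarantee above is easiest to see as data.
The region rewritten by arch_static_call_transform() (next hunk) is the
8-byte literal plus the first two instructions: 16 bytes at 16-byte
alignment (".align 4" takes a power-of-two exponent), so the block can
never straddle a cacheline boundary. A view of that region as a struct;
the type name is invented for illustration, mirroring the anonymous
struct used in patching.c:

	struct tramp_header {		/* illustrative name only */
		u64	literal;	/* tramp - 0x8: branch target, or 0x0 */
		__le32	insn[2];	/* tramp + 0x0: BTI C                 */
					/* tramp + 0x4: B <func>, RET or NOP */
	};
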
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..646d1bd16482 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -66,7 +66,7 @@ int __kprobes aarch64_insn_read(void *addr, u32 *insnp)
 	return ret;
 }
 
-static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
+static int __kprobes __aarch64_insn_write(void *addr, void *insn, int size)
 {
 	void *waddr = addr;
 	unsigned long flags = 0;
@@ -75,7 +75,7 @@ static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
 	raw_spin_lock_irqsave(&patch_lock, flags);
 	waddr = patch_map(addr, FIX_TEXT_POKE0);
 
-	ret = copy_to_kernel_nofault(waddr, &insn, AARCH64_INSN_SIZE);
+	ret = copy_to_kernel_nofault(waddr, insn, size);
 
 	patch_unmap(FIX_TEXT_POKE0);
 	raw_spin_unlock_irqrestore(&patch_lock, flags);
@@ -85,7 +85,73 @@ static int __kprobes __aarch64_insn_write(void *addr, __le32 insn)
 
 int __kprobes aarch64_insn_write(void *addr, u32 insn)
 {
-	return __aarch64_insn_write(addr, cpu_to_le32(insn));
+	__le32 i = cpu_to_le32(insn);
+
+	return __aarch64_insn_write(addr, &i, AARCH64_INSN_SIZE);
+}
+
+static void *strip_cfi_jt(void *addr)
+{
+	if (IS_ENABLED(CONFIG_CFI_CLANG)) {
+		/*
+		 * Taking the address of a function produces the address of the
+		 * jump table entry when Clang CFI is enabled. Such entries are
+		 * ordinary jump instructions, so if we spot one of those, we
+		 * should decode it and use the address of the target instead.
+		 */
+		u32 br = le32_to_cpup(addr);
+
+		if (aarch64_insn_is_b(br))
+			return addr + aarch64_get_branch_offset(br);
+	}
+	return addr;
+}
+
+void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
+{
+	/*
+	 * -0x8	<literal>
+	 *  0x0	bti c		<--- trampoline entry point
+	 *  0x4	<branch or nop>
+	 *  0x8	ldr x16, <literal>
+	 *  0xc	cbz x16, 20
+	 * 0x10	br x16
+	 * 0x14	ret
+	 */
+	struct {
+		u64	literal;
+		__le32	insn[2];
+	} insns;
+	u32 insn;
+	int ret;
+
+	tramp = strip_cfi_jt(tramp);
+
+	insn = aarch64_insn_gen_hint(AARCH64_INSN_HINT_BTIC);
+	insns.literal = (u64)func;
+	insns.insn[0] = cpu_to_le32(insn);
+
+	if (!func) {
+		insn = aarch64_insn_gen_branch_reg(AARCH64_INSN_REG_LR,
+						   AARCH64_INSN_BRANCH_RETURN);
+	} else {
+		func = strip_cfi_jt(func);
+
+		insn = aarch64_insn_gen_branch_imm((u64)tramp + 4, (u64)func,
+						   AARCH64_INSN_BRANCH_NOLINK);
+
+		/*
+		 * Use a NOP if the branch target is out of range, and rely on
+		 * the indirect call instead.
+		 */
+		if (insn == AARCH64_BREAK_FAULT)
+			insn = aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP);
+	}
+	insns.insn[1] = cpu_to_le32(insn);
+
+	ret = __aarch64_insn_write(tramp - 8, &insns, sizeof(insns));
+	if (!WARN_ON(ret))
+		caches_clean_inval_pou((u64)tramp - 8, sizeof(insns));
 }
 
 int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
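
For context on the NOP fallback: aarch64_insn_gen_branch_imm() returns
AARCH64_BREAK_FAULT when the requested branch cannot be encoded, and an
A64 B instruction carries a signed 26-bit word offset, i.e. a reach of
+/-128 MiB. A standalone sketch of the equivalent range check, offered as
an assumption about what the helper rejects rather than code from this
patch:

	#include <linux/sizes.h>
	#include <linux/types.h>

	/* Could "b <target>" at pc reach target? (26-bit field, scaled by 4) */
	static bool b_reaches(u64 pc, u64 target)
	{
		s64 offset = (s64)(target - pc);

		return offset >= -(s64)SZ_128M && offset < (s64)SZ_128M;
	}
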
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index f6b1a88245db..ceb35c35192c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -161,6 +161,7 @@ SECTIONS
 			IDMAP_TEXT
 			HIBERNATE_TEXT
 			TRAMP_TEXT
+			STATIC_CALL_TEXT
 			*(.fixup)
 			*(.gnu.warning)
 		. = ALIGN(16);