From patchwork Fri Dec 3 10:47:20 2021
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 12694648
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: andre.przywara@arm.com, ardb@kernel.org, catalin.marinas@arm.com,
 james.morse@arm.com, joey.gouly@arm.com, mark.rutland@arm.com,
 suzuki.poulose@arm.com, will@kernel.org
Subject: [PATCH 1/4] arm64: alternative: wait for other CPUs before patching
Date: Fri, 3 Dec 2021 10:47:20 +0000
Message-Id: <20211203104723.3412383-2-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211203104723.3412383-1-mark.rutland@arm.com>
References: <20211203104723.3412383-1-mark.rutland@arm.com>

In __apply_alternatives_multi_stop() we have a "really simple polling
protocol" to avoid patching code that is concurrently executed on other
CPUs. Secondary CPUs wait for the boot CPU to signal that patching is
complete, but the boot CPU doesn't wait for secondaries to enter the
polling loop, and it's possible that patching starts while secondaries
are still within the stop_machine logic.

Let's fix this by adding a second, equally simple polling protocol where
the boot CPU waits for secondaries to signal that they have entered the
unpatchable stop function. We can use the arch_atomic_*() functions for
this, as they are not patched with alternatives.

At the same time, let's make `all_alternatives_applied` local to
__apply_alternatives_multi_stop(), since it is only used there, and this
makes the code a little clearer.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/alternative.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 3fb79b76e9d9..4f32d4425aac 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -21,9 +21,6 @@
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
 #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
 
-/* Volatile, as we may be patching the guts of READ_ONCE() */
-static volatile int all_alternatives_applied;
-
 static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
 
 struct alt_region {
@@ -193,11 +190,17 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
 }
 
 /*
- * We might be patching the stop_machine state machine, so implement a
- * really simple polling protocol here.
+ * Apply alternatives, ensuring that no CPUs are concurrently executing code
+ * being patched.
+ *
+ * We might be patching the stop_machine state machine or READ_ONCE(), so
+ * we implement a simple polling protocol.
  */
 static int __apply_alternatives_multi_stop(void *unused)
 {
+	/* Volatile, as we may be patching the guts of READ_ONCE() */
+	static volatile int all_alternatives_applied;
+	static atomic_t stopped_cpus = ATOMIC_INIT(0);
 	struct alt_region region = {
 		.begin	= (struct alt_instr *)__alt_instructions,
 		.end	= (struct alt_instr *)__alt_instructions_end,
@@ -205,12 +208,16 @@ static int __apply_alternatives_multi_stop(void *unused)
 
 	/* We always have a CPU 0 at this point (__init) */
 	if (smp_processor_id()) {
+		arch_atomic_inc(&stopped_cpus);
 		while (!all_alternatives_applied)
 			cpu_relax();
 		isb();
 	} else {
 		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
 
+		while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)
+			cpu_relax();
+
 		bitmap_complement(remaining_capabilities, boot_capabilities,
 				  ARM64_NPATCHABLE);
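The shape of the handshake above is easier to see in isolation. The
following user-space sketch models the same two-way protocol, with C11
atomics, pthreads, and a fixed NCPUS standing in for the kernel's
arch_atomic_*() helpers, the stop_machine() threads, and
num_online_cpus(); it is an illustration of the protocol, not the kernel
code itself.

/*
 * Two-way polling protocol: secondaries announce their arrival before
 * the "boot CPU" patches anything, and then spin until it signals
 * completion.
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define NCPUS	4

static atomic_int stopped_cpus;
static volatile int all_done;	/* models all_alternatives_applied */

static void *cpu_fn(void *arg)
{
	long cpu = (long)arg;

	if (cpu != 0) {
		/* Secondary: announce arrival, then wait for patching. */
		atomic_fetch_add(&stopped_cpus, 1);
		while (!all_done)
			;	/* cpu_relax() in the kernel */
	} else {
		/* "Boot CPU": wait for all secondaries before "patching". */
		while (atomic_load(&stopped_cpus) != NCPUS - 1)
			;
		printf("all secondaries quiescent; safe to patch\n");
		all_done = 1;	/* release the secondaries */
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NCPUS];

	for (long i = 0; i < NCPUS; i++)
		pthread_create(&threads[i], NULL, cpu_fn, (void *)i);
	for (int i = 0; i < NCPUS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}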
From patchwork Fri Dec 3 10:47:21 2021
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 12694649
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: andre.przywara@arm.com, ardb@kernel.org, catalin.marinas@arm.com,
 james.morse@arm.com, joey.gouly@arm.com, mark.rutland@arm.com,
 suzuki.poulose@arm.com, will@kernel.org
Subject: [PATCH 2/4] arm64: insn: wait for other CPUs before patching
Date: Fri, 3 Dec 2021 10:47:21 +0000
Message-Id: <20211203104723.3412383-3-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211203104723.3412383-1-mark.rutland@arm.com>
References: <20211203104723.3412383-1-mark.rutland@arm.com>

In aarch64_insn_patch_text_cb(), secondary CPUs wait for the master CPU
to signal that patching is complete, but the master CPU doesn't wait for
secondaries to enter the polling loop, and it's possible that patching
starts while secondaries are still within the stop_machine logic.

Let's fix this by adding an equally simple polling protocol where the
patching CPU waits for all other CPUs to signal that they have entered
the unpatchable stop function. We can use the arch_atomic_*() functions
for this, as these are inlined and not instrumented. These will not be
patched concurrently with this code executing.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/patching.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..c0d51340c913 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -116,16 +116,17 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
 {
 	int i, ret = 0;
 	struct aarch64_insn_patch *pp = arg;
+	int num_cpus = num_online_cpus();
 
-	/* The first CPU becomes master */
-	if (atomic_inc_return(&pp->cpu_count) == 1) {
+	/* The last CPU becomes master */
+	if (arch_atomic_inc_return(&pp->cpu_count) == num_cpus) {
 		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
 			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
 							     pp->new_insns[i]);
 		/* Notify other processors with an additional increment. */
 		atomic_inc(&pp->cpu_count);
 	} else {
-		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
+		while (arch_atomic_read(&pp->cpu_count) <= num_cpus)
 			cpu_relax();
 		isb();
 	}
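The off-by-one in the waiters' test is worth spelling out: each of the N
online CPUs increments cpu_count on entry, so the last arrival sees the
count reach N and becomes master; its additional increment after
patching takes the count to N+1, the first value that breaks the waiters
out of the `<= N` spin. A user-space model of this rendezvous, with C11
atomics and pthreads standing in for arch_atomic_*() and stop_machine(),
and NUM_CPUS and do_patch() as illustrative stand-ins:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define NUM_CPUS	4	/* stands in for num_online_cpus() */

static atomic_int cpu_count;

static void do_patch(void)
{
	printf("patching with all CPUs counted in\n");
}

static void *insn_patch_cb(void *unused)
{
	/* The last CPU to arrive sees the count hit NUM_CPUS: master. */
	if (atomic_fetch_add(&cpu_count, 1) + 1 == NUM_CPUS) {
		do_patch();
		/* N+1 is the first value failing the test below. */
		atomic_fetch_add(&cpu_count, 1);
	} else {
		while (atomic_load(&cpu_count) <= NUM_CPUS)
			;	/* cpu_relax(), then isb(), in the kernel */
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NUM_CPUS];

	for (int i = 0; i < NUM_CPUS; i++)
		pthread_create(&threads[i], NULL, insn_patch_cb, NULL);
	for (int i = 0; i < NUM_CPUS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}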
From patchwork Fri Dec 3 10:47:22 2021
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 12694650
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: andre.przywara@arm.com, ardb@kernel.org, catalin.marinas@arm.com,
 james.morse@arm.com, joey.gouly@arm.com, mark.rutland@arm.com,
 suzuki.poulose@arm.com, will@kernel.org
Subject: [PATCH 3/4] arm64: patching: unify stop_machine() patch synchronization
Date: Fri, 3 Dec 2021 10:47:22 +0000
Message-Id: <20211203104723.3412383-4-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211203104723.3412383-1-mark.rutland@arm.com>
References: <20211203104723.3412383-1-mark.rutland@arm.com>

Some instruction sequences cannot be safely modified while they may be
concurrently executed, and so it's necessary to temporarily stop all
CPUs while performing the modification. We have separate implementations
of this for alternatives and kprobes.

This patch unifies these with a common patch_machine() helper function
which handles the necessary synchronization to ensure that CPUs are
stopped during patching. This separates the patching logic from the
synchronization logic, making it easier to understand, and means that we
only have to maintain one synchronization algorithm.

The synchronization logic in do_patch_machine() only uses unpatchable
functions, and the function itself is marked `noinstr` to prevent
instrumentation. The patch_machine() helper is left instrumentable, as
stop_machine() is instrumentable, and there is therefore no benefit to
forbidding instrumentation.

As with the prior alternative patching sequence, the CPU to apply the
patch is chosen early so that this may be deterministic.

Since __apply_alternatives_stopped() is only ever called once under
apply_alternatives_all(), the `all_alternatives_applied` variable and
warning are redundant and therefore removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/patching.h |  4 ++
 arch/arm64/kernel/alternative.c   | 40 +++-----------
 arch/arm64/kernel/patching.c      | 91 +++++++++++++++++++++++++------
 3 files changed, 84 insertions(+), 51 deletions(-)

diff --git a/arch/arm64/include/asm/patching.h b/arch/arm64/include/asm/patching.h
index 6bf5adc56295..25c199bc55d2 100644
--- a/arch/arm64/include/asm/patching.h
+++ b/arch/arm64/include/asm/patching.h
@@ -10,4 +10,8 @@ int aarch64_insn_write(void *addr, u32 insn);
 int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
 int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);
 
+typedef int (*patch_machine_func_t)(void *);
+int patch_machine_cpuslocked(patch_machine_func_t func, void *arg);
+int patch_machine(patch_machine_func_t func, void *arg);
+
 #endif	/* __ASM_PATCHING_H */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 4f32d4425aac..d2b4b9e6a0e4 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -14,8 +14,8 @@
 #include <asm/cpufeature.h>
 #include <asm/insn.h>
 #include <asm/module.h>
+#include <asm/patching.h>
 #include <asm/sections.h>
-#include <linux/stop_machine.h>
 
 #define __ALT_PTR(a, f)		((void *)&(a)->f + (a)->f)
@@ -189,43 +189,17 @@ static void __nocfi __apply_alternatives(struct alt_region *region, bool is_modu
 	}
 }
 
-/*
- * Apply alternatives, ensuring that no CPUs are concurrently executing code
- * being patched.
- *
- * We might be patching the stop_machine state machine or READ_ONCE(), so
- * we implement a simple polling protocol.
- */
-static int __apply_alternatives_multi_stop(void *unused)
+static int __apply_alternatives_stopped(void *unused)
 {
-	/* Volatile, as we may be patching the guts of READ_ONCE() */
-	static volatile int all_alternatives_applied;
-	static atomic_t stopped_cpus = ATOMIC_INIT(0);
 	struct alt_region region = {
 		.begin	= (struct alt_instr *)__alt_instructions,
 		.end	= (struct alt_instr *)__alt_instructions_end,
 	};
+	DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
 
-	/* We always have a CPU 0 at this point (__init) */
-	if (smp_processor_id()) {
-		arch_atomic_inc(&stopped_cpus);
-		while (!all_alternatives_applied)
-			cpu_relax();
-		isb();
-	} else {
-		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
-
-		while (arch_atomic_read(&stopped_cpus) != num_online_cpus() - 1)
-			cpu_relax();
-
-		bitmap_complement(remaining_capabilities, boot_capabilities,
-				  ARM64_NPATCHABLE);
-
-		BUG_ON(all_alternatives_applied);
-		__apply_alternatives(&region, false, remaining_capabilities);
-		/* Barriers provided by the cache flushing */
-		all_alternatives_applied = 1;
-	}
+	bitmap_complement(remaining_capabilities, boot_capabilities,
+			  ARM64_NPATCHABLE);
+	__apply_alternatives(&region, false, remaining_capabilities);
 
 	return 0;
 }
@@ -233,7 +207,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 void __init apply_alternatives_all(void)
 {
 	/* better not try code patching on a live SMP system */
-	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
+	patch_machine(__apply_alternatives_stopped, NULL);
 }
 
 /*
diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index c0d51340c913..04497dbf14e2 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -105,31 +105,88 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
 	return ret;
 }
 
+struct patch_machine_info {
+	patch_machine_func_t func;
+	void *arg;
+	int cpu;
+	atomic_t active;
+	volatile int done;
+};
+
+/*
+ * Run a code patching function on a single CPU, ensuring that no CPUs are
+ * concurrently executing code being patched.
+ *
+ * We wait for other CPUs to become quiescent before starting patching, and
+ * wait until patching is completed before other CPUs are woken.
+ *
+ * The patching function is responsible for any barriers necessary to make new
+ * instructions visible to other CPUs. The other CPUs will issue an ISB upon
+ * being woken to ensure they use the new instructions.
+ */
+static int noinstr do_patch_machine(void *arg)
+{
+	struct patch_machine_info *pmi = arg;
+	int cpu = smp_processor_id();
+	int ret = 0;
+
+	if (pmi->cpu == cpu) {
+		while (arch_atomic_read(&pmi->active))
+			cpu_relax();
+		ret = pmi->func(pmi->arg);
+		pmi->done = 1;
+	} else {
+		arch_atomic_dec(&pmi->active);
+		while (!pmi->done)
+			cpu_relax();
+		isb();
+	}
+
+	return ret;
+}
+
+/*
+ * Run a code patching function on a single CPU, ensuring that no CPUs are
+ * concurrently executing code being patched.
+ */
+int patch_machine_cpuslocked(patch_machine_func_t func, void *arg)
+{
+	struct patch_machine_info pmi = {
+		.func = func,
+		.arg = arg,
+		.cpu = raw_smp_processor_id(),
+		.active = ATOMIC_INIT(num_online_cpus() - 1),
+		.done = 0,
+	};
+
+	return stop_machine_cpuslocked(do_patch_machine, &pmi, cpu_online_mask);
+}
+
+int patch_machine(patch_machine_func_t func, void *arg)
+{
+	int ret;
+
+	cpus_read_lock();
+	ret = patch_machine_cpuslocked(func, arg);
+	cpus_read_unlock();
+
+	return ret;
+}
+
 struct aarch64_insn_patch {
 	void	**text_addrs;
 	u32	*new_insns;
 	int	insn_cnt;
-	atomic_t	cpu_count;
 };
 
 static int __kprobes aarch64_insn_patch_text_cb(void *arg)
 {
 	int i, ret = 0;
 	struct aarch64_insn_patch *pp = arg;
-	int num_cpus = num_online_cpus();
-
-	/* The last CPU becomes master */
-	if (arch_atomic_inc_return(&pp->cpu_count) == num_cpus) {
-		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
-			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
-							     pp->new_insns[i]);
-		/* Notify other processors with an additional increment. */
-		atomic_inc(&pp->cpu_count);
-	} else {
-		while (arch_atomic_read(&pp->cpu_count) <= num_cpus)
-			cpu_relax();
-		isb();
-	}
+
+	for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
+		ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
+						     pp->new_insns[i]);
 
 	return ret;
 }
@@ -140,12 +197,10 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
 		.text_addrs = addrs,
 		.new_insns = insns,
 		.insn_cnt = cnt,
-		.cpu_count = ATOMIC_INIT(0),
 	};
 
 	if (cnt <= 0)
 		return -EINVAL;
 
-	return stop_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch,
-				       cpu_online_mask);
+	return patch_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch);
 }
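As an illustration of the new interface, a hypothetical caller could
look like the following. my_patch_nop() and patch_one_nop() are invented
for the example, while patch_machine() and
aarch64_insn_patch_text_nosync() are the functions declared above
(0xd503201f is the A64 NOP encoding):

#include <asm/patching.h>

/*
 * Hypothetical callback: runs on exactly one CPU while every other CPU
 * spins quiescently in do_patch_machine(); the necessary cache
 * maintenance and barriers are handled by
 * aarch64_insn_patch_text_nosync().
 */
static int my_patch_nop(void *arg)
{
	void *insn_addr = arg;

	return aarch64_insn_patch_text_nosync(insn_addr, 0xd503201f);
}

/* Hypothetical wrapper: patch a single instruction to NOP. */
static int patch_one_nop(void *addr)
{
	return patch_machine(my_patch_nop, addr);
}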
From patchwork Fri Dec 3 10:47:23 2021
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 12694651
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org, James Morse <james.morse@arm.com>
Cc: andre.przywara@arm.com, ardb@kernel.org, catalin.marinas@arm.com,
 joey.gouly@arm.com, mark.rutland@arm.com, suzuki.poulose@arm.com,
 will@kernel.org
Subject: [PATCH 4/4] arm64: patching: mask exceptions in patch_machine()
Date: Fri, 3 Dec 2021 10:47:23 +0000
Message-Id: <20211203104723.3412383-5-mark.rutland@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211203104723.3412383-1-mark.rutland@arm.com>
References: <20211203104723.3412383-1-mark.rutland@arm.com>

To ensure that CPUs remain quiescent during patching, they must not take
exceptions. Ensure this by masking DAIF during the core patch_machine()
logic.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/patching.c | 6 ++++++
 1 file changed, 6 insertions(+)

James, I think we need something similar for SDEI here, but I wasn't sure
what I should use for a save/restore sequence. Any thoughts?

Thanks,
Mark.

diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 04497dbf14e2..797bca33a13d 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -7,6 +7,7 @@
 #include <linux/uaccess.h>
 
 #include <asm/cacheflush.h>
+#include <asm/daifflags.h>
 #include <asm/fixmap.h>
 #include <asm/insn.h>
 #include <asm/kprobes.h>
@@ -127,9 +128,12 @@ struct patch_machine_info {
 static int noinstr do_patch_machine(void *arg)
 {
 	struct patch_machine_info *pmi = arg;
+	unsigned long flags;
 	int cpu = smp_processor_id();
 	int ret = 0;
 
+	flags = local_daif_save();
+
 	if (pmi->cpu == cpu) {
 		while (arch_atomic_read(&pmi->active))
 			cpu_relax();
@@ -142,6 +146,8 @@ static int noinstr do_patch_machine(void *arg)
 		isb();
 	}
 
+	local_daif_restore(flags);
+
 	return ret;
 }
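Stripped of the surrounding function, the pattern this patch adds is the
usual save/restore pairing sketched below (a fragment of the diff above,
shown only for the shape of the pairing); local_daif_save() masks all
four PSTATE.DAIF exception classes (Debug, SError, IRQ, FIQ) and returns
the previous mask:

	unsigned long flags;

	flags = local_daif_save();	/* no exceptions taken from here */
	/* ... quiescent spin and patching run with exceptions masked ... */
	local_daif_restore(flags);	/* prior DAIF mask restored */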