[2/4] arm64: insn: wait for other CPUs before patching

Message ID 20211203104723.3412383-3-mark.rutland@arm.com (mailing list archive)
State New, archived
Series arm64: ensure CPUs are quiescent before patching

Commit Message

Mark Rutland Dec. 3, 2021, 10:47 a.m. UTC
In aarch64_insn_patch_text_cb(), secondary CPUs wait for the master CPU
to signal that patching is complete, but the master CPU doesn't wait for
secondaries to enter the polling loop, and it's possible that patching
starts while secondaries are still within the stop_machine logic.
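
To make the race concrete, this is the pre-patch protocol, reconstructed
from the context and removed lines of the diff below. The first CPU to
increment cpu_count starts patching immediately, with no guarantee that
the remaining CPUs have reached the polling loop:

	/* The first CPU becomes master */
	if (atomic_inc_return(&pp->cpu_count) == 1) {
		/*
		 * Patching starts here, while other CPUs may still be
		 * anywhere in the stop_machine() entry path.
		 */
		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
							     pp->new_insns[i]);
		/* Notify other processors with an additional increment. */
		atomic_inc(&pp->cpu_count);
	} else {
		/* Secondaries only reach this loop some time after entry. */
		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
			cpu_relax();
		isb();
	}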

Let's fix this by adding a simple polling protocol, where the CPU doing
the patching only starts once all other CPUs have signalled that they
have entered the unpatchable stop function. We can use the
arch_atomic_*() functions for this, as these are inlined and not
instrumented, and so will not be patched concurrently with this code
executing.
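
For background, the regular atomic_*() helpers are instrumented wrappers
around the arch_atomic_*() primitives; roughly (paraphrasing the generated
include/linux/atomic/atomic-instrumented.h), atomic_read() looks like:

static __always_inline int
atomic_read(const atomic_t *v)
{
	/* May call out-of-line KASAN/KCSAN instrumentation. */
	instrument_atomic_read(v, sizeof(*v));
	/* The raw inline access used directly by this patch. */
	return arch_atomic_read(v);
}

Those out-of-line instrumentation hooks are exactly the kind of code that
could be being patched while the stop function runs, which is why the
arch_ variants are used in the paths that run while patching may be in
progress.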

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/patching.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

Patch

diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
index 771f543464e0..c0d51340c913 100644
--- a/arch/arm64/kernel/patching.c
+++ b/arch/arm64/kernel/patching.c
@@ -116,16 +116,17 @@  static int __kprobes aarch64_insn_patch_text_cb(void *arg)
 {
 	int i, ret = 0;
 	struct aarch64_insn_patch *pp = arg;
+	int num_cpus = num_online_cpus();
 
-	/* The first CPU becomes master */
-	if (atomic_inc_return(&pp->cpu_count) == 1) {
+	/* The last CPU becomes master */
+	if (arch_atomic_inc_return(&pp->cpu_count) == num_cpus) {
 		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
 			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
 							     pp->new_insns[i]);
 		/* Notify other processors with an additional increment. */
 		atomic_inc(&pp->cpu_count);
 	} else {
-		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
+		while (arch_atomic_read(&pp->cpu_count) <= num_cpus)
 			cpu_relax();
 		isb();
 	}
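
For reference, with this hunk applied the whole callback would read
roughly as follows (the trailing return is unchanged context from the
existing arch/arm64/kernel/patching.c and is not shown in the hunk above):

static int __kprobes aarch64_insn_patch_text_cb(void *arg)
{
	int i, ret = 0;
	struct aarch64_insn_patch *pp = arg;
	int num_cpus = num_online_cpus();

	/* The last CPU becomes master */
	if (arch_atomic_inc_return(&pp->cpu_count) == num_cpus) {
		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
							     pp->new_insns[i]);
		/* Notify other processors with an additional increment. */
		atomic_inc(&pp->cpu_count);
	} else {
		while (arch_atomic_read(&pp->cpu_count) <= num_cpus)
			cpu_relax();
		isb();
	}

	return ret;
}

Making the last CPU to arrive the master means the "== num_cpus" check
itself guarantees that every other CPU has already entered the stop
function, and the additional increment after patching is what releases
the secondaries from their polling loop.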