From patchwork Fri Dec 3 10:47:19 2021
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: andre.przywara@arm.com, ardb@kernel.org, catalin.marinas@arm.com,
 james.morse@arm.com, joey.gouly@arm.com, mark.rutland@arm.com,
 suzuki.poulose@arm.com, will@kernel.org
Subject: [PATCH 0/4] arm64: ensure CPUs are quiescent before patching
Date: Fri, 3 Dec 2021 10:47:19 +0000
Message-Id: <20211203104723.3412383-1-mark.rutland@arm.com>

On arm64, certain instructions cannot be patched while they are being
concurrently executed, and in these cases we use stop_machine() to
ensure that while one CPU is patching instructions all other CPUs are
in a quiescent state. We have two distinct sequences for this: one used
for boot-time patching of alternatives, and one used for runtime
patching (e.g. kprobes).
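In rough outline, both sequences have one CPU patch instructions while
all the other CPUs spin inside a stop_machine() callback. The sketch
below (hypothetical names throughout; patching_cpu and do_patch() are
placeholders, not code from this series) shows that shape, including
the wait-for-quiescence step that this series adds before patching
begins:

	#include <linux/atomic.h>
	#include <linux/cpumask.h>
	#include <linux/processor.h>
	#include <linux/smp.h>
	#include <asm/barrier.h>

	static atomic_t cpus_quiescent = ATOMIC_INIT(0);
	static atomic_t patching_done = ATOMIC_INIT(0);

	/* Runs on every online CPU under stop_machine(). */
	static int patch_cb(void *data)
	{
		if (smp_processor_id() == patching_cpu) {	/* hypothetical */
			/* Wait for all other CPUs to signal quiescence. */
			while (atomic_read(&cpus_quiescent) != num_online_cpus() - 1)
				cpu_relax();
			do_patch(data);		/* hypothetical patch routine */
			atomic_set(&patching_done, 1);
		} else {
			/* Signal quiescence, then spin until patching completes. */
			atomic_inc(&cpus_quiescent);
			while (!atomic_read(&patching_done))
				cpu_relax();
			isb();	/* resynchronize instruction fetch on this CPU */
		}
		return 0;
	}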
Both sequences wait for patching to be complete before CPUs exit the
quiescent state, but we don't wait for CPUs to be quiescent *before* we
start patching, and so we may patch code which is still being executed
(e.g. portions of stop_machine() itself). These patches fix this
problem by updating the sequences to wait for CPUs to become quiescent
before starting patches. The first two patches are potentially
backportable fixes for the individual sequences, and the third patch
unifies them behind an arm64-specific patch_machine() helper. The last
patch prevents taking asynchronous exceptions out of a quiescent state
(just DAIF for now; I'm not sure exactly how to handle SDEI).

The architecture documentation is a little vague on how to ensure
completion of prior execution (i.e. when patching from another CPU
cannot possibly affect this and cause UNPREDICTABLE behaviour). For the
moment I'm assuming that an atomic store cannot become visible until
all prior execution has completed, but I suspect that we *might* need
to add barriers into patch_machine() prior to signalling quiescence.

This series does not intend to address the more general problem that
our patching sequences may use directly-patchable or instrumentable
code, and I'm intending that we address those with subsequent patches.
Fixing that will require a more substantial rework (e.g. of the insn
code).

Thanks,
Mark.

Mark Rutland (4):
  arm64: alternative: wait for other CPUs before patching
  arm64: insn: wait for other CPUs before patching
  arm64: patching: unify stop_machine() patch synchronization
  arm64: patching: mask exceptions in patch_machine()

 arch/arm64/include/asm/patching.h |  4 ++
 arch/arm64/kernel/alternative.c   | 33 +++--------
 arch/arm64/kernel/patching.c      | 94 +++++++++++++++++++++++++------
 3 files changed, 89 insertions(+), 42 deletions(-)
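For reference, the last patch's idea in rough form; this is a sketch
rather than the patch itself, though local_daif_save() and
local_daif_restore() are the kernel's existing arm64 DAIF helpers:

	#include <asm/daifflags.h>

	/*
	 * On each CPU, mask all DAIF exceptions for the duration of the
	 * quiescent period, so that no asynchronous exception can pull a
	 * CPU out of quiescence while instructions are being patched.
	 */
	unsigned long flags = local_daif_save();

	/* ... signal quiescence; spin until patching is complete ... */

	local_daif_restore(flags);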