From patchwork Thu Feb 13 16:13:59 2025
X-Patchwork-Submitter: Steven Price <steven.price@arm.com>
X-Patchwork-Id: 13973728
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
    Oliver Upton, Suzuki K Poulose, Zenghui Yu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
    linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Gavin Shan,
    Shanker Donthineni, Alper Gun, Aneesh Kumar K.V
Subject: [PATCH v7 19/45] KVM: arm64: Handle realm MMIO emulation
Date: Thu, 13 Feb 2025 16:13:59 +0000
Message-ID: <20250213161426.102987-20-steven.price@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250213161426.102987-1-steven.price@arm.com>
References: <20250213161426.102987-1-steven.price@arm.com>

MMIO emulation for a realm cannot be done directly with the VM's
registers as they are protected from the host. However, for emulatable
data aborts, the RMM uses GPRS[0] to provide the read/written value.
We can transfer this from/to the equivalent VCPU register and then
rely on the generic MMIO handling code in KVM.

For an MMIO read, the value is placed in the shared REC entry
structure (enter.gprs[0]) during kvm_handle_mmio_return() rather than
in the VCPU's register.

Signed-off-by: Steven Price <steven.price@arm.com>
Reviewed-by: Gavin Shan
---
Changes since v5:
 * Inject SEA to the guest if an emulatable MMIO access triggers a
   data abort.
 * kvm_handle_mmio_return() - disable kvm_incr_pc() for a REC (as the
   PC isn't under the host's control) and move the
   REC_ENTER_FLAG_EMULATED_MMIO flag setting to this location (as that
   tells the RMM to skip the instruction).
---
 arch/arm64/kvm/inject_fault.c |  4 +++-
 arch/arm64/kvm/mmio.c         | 16 ++++++++++++----
 arch/arm64/kvm/rme-exit.c     |  6 ++++++
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index a640e839848e..2a9682b9834f 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -165,7 +165,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
  */
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
 {
-	if (vcpu_el1_is_32bit(vcpu))
+	if (unlikely(vcpu_is_rec(vcpu)))
+		vcpu->arch.rec.run->enter.flags |= REC_ENTER_FLAG_INJECT_SEA;
+	else if (vcpu_el1_is_32bit(vcpu))
 		inject_abt32(vcpu, false, addr);
 	else
 		inject_abt64(vcpu, false, addr);
diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
index ab365e839874..bff89d47a4d5 100644
--- a/arch/arm64/kvm/mmio.c
+++ b/arch/arm64/kvm/mmio.c
@@ -6,6 +6,7 @@
 
 #include <linux/kvm_host.h>
 #include <asm/kvm_emulate.h>
+#include <asm/rme.h>
 #include <trace/events/kvm.h>
 
 #include "trace.h"
@@ -136,14 +137,21 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
 		trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
 			       &data);
 		data = vcpu_data_host_to_guest(vcpu, data, len);
-		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
+
+		if (vcpu_is_rec(vcpu))
+			vcpu->arch.rec.run->enter.gprs[0] = data;
+		else
+			vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
 	}
 
 	/*
 	 * The MMIO instruction is emulated and should not be re-executed
 	 * in the guest.
 	 */
-	kvm_incr_pc(vcpu);
+	if (vcpu_is_rec(vcpu))
+		vcpu->arch.rec.run->enter.flags |= REC_ENTER_FLAG_EMULATED_MMIO;
+	else
+		kvm_incr_pc(vcpu);
 
 	return 1;
 }
@@ -162,14 +170,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 	 * No valid syndrome? Ask userspace for help if it has
 	 * volunteered to do so, and bail out otherwise.
	 *
-	 * In the protected VM case, there isn't much userspace can do
+	 * In the protected/realm VM case, there isn't much userspace can do
	 * though, so directly deliver an exception to the guest.
	 */
	if (!kvm_vcpu_dabt_isvalid(vcpu)) {
		trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
				    kvm_vcpu_get_hfar(vcpu), fault_ipa);

-		if (vcpu_is_protected(vcpu)) {
+		if (vcpu_is_protected(vcpu) || vcpu_is_rec(vcpu)) {
			kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
			return 1;
		}
diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
index aae1adefe1a3..c785005f821f 100644
--- a/arch/arm64/kvm/rme-exit.c
+++ b/arch/arm64/kvm/rme-exit.c
@@ -25,6 +25,12 @@ static int rec_exit_reason_notimpl(struct kvm_vcpu *vcpu)
 
 static int rec_exit_sync_dabt(struct kvm_vcpu *vcpu)
 {
+	struct realm_rec *rec = &vcpu->arch.rec;
+
+	if (kvm_vcpu_dabt_iswrite(vcpu) && kvm_vcpu_dabt_isvalid(vcpu))
+		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu),
+			     rec->run->exit.gprs[0]);
+
 	return kvm_handle_guest_abort(vcpu);
 }
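
For readers following along without the rest of the series to hand, the
GPRS[0] handover described in the commit message can be summarised as
below. This is an illustrative sketch only, not code from the patch: it
simply restates both directions of the value transfer using the helpers
and field names visible in the hunks above (vcpu_is_rec(), struct
realm_rec, REC_ENTER_FLAG_EMULATED_MMIO).

/*
 * Sketch of the realm MMIO round trip (illustrative, not part of the
 * patch).
 */

/* REC exit path: for an emulatable MMIO write, the RMM supplies the
 * value being written in exit.gprs[0].  Mirroring it into the VCPU's
 * shadow of the Rt register lets the generic abort/MMIO code pick it
 * up unchanged. */
static int sketch_rec_exit_dabt(struct kvm_vcpu *vcpu)
{
	struct realm_rec *rec = &vcpu->arch.rec;

	if (kvm_vcpu_dabt_iswrite(vcpu) && kvm_vcpu_dabt_isvalid(vcpu))
		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu),
			     rec->run->exit.gprs[0]);

	return kvm_handle_guest_abort(vcpu);
}

/* MMIO-return path: completing an emulated read. */
static void sketch_mmio_read_return(struct kvm_vcpu *vcpu, u64 data)
{
	if (vcpu_is_rec(vcpu)) {
		/* Realm GPRs are not host-accessible: hand the value to
		 * the RMM in GPRS[0] of the shared REC entry structure,
		 * and ask the RMM to skip the faulting instruction since
		 * the host cannot advance the realm's PC itself. */
		vcpu->arch.rec.run->enter.gprs[0] = data;
		vcpu->arch.rec.run->enter.flags |= REC_ENTER_FLAG_EMULATED_MMIO;
	} else {
		/* Normal VM: write Rt directly and step the PC. */
		vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
		kvm_incr_pc(vcpu);
	}
}

In both directions the point is the same: only GPRS[0] of the shared
REC enter/exit structure crosses the host/RMM boundary, so the patch
bridges that single slot to the VCPU register that KVM's generic MMIO
code already knows how to handle.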