From patchwork Fri Apr 19 10:29:21 2024
X-Patchwork-Id: 13636174
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 01/15] KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values
Date: Fri, 19 Apr 2024 11:29:21 +0100
Message-Id: <20240419102935.1935571-2-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>
The unsuspecting kernel tinkerer can be easily confused into writing
something that looks like this:

	ikey.lo = __vcpu_sys_reg(vcpu, SYS_APIAKEYLO_EL1);

which seems vaguely sensible, until you realise that the second
parameter is the encoding of a sysreg, and not the index into the vcpu
sysreg file... Debugging what happens in this case is an interesting
exercise in head<->wall interactions.

As they often say: "Any resemblance to actual persons, living or dead,
or actual events is purely coincidental".

In order to save people's time, add some compile-time hardening that
will at least weed out the "stupidly out of range" values. This will
*not* catch anything that isn't a compile-time constant.

Reviewed-by: Joey Gouly
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9e8a496fb284..e24bd876ec9a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -896,7 +896,7 @@ struct kvm_vcpu_arch {
  * Don't bother with VNCR-based accesses in the nVHE code, it has no
  * business dealing with NV.
  */
-static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
+static inline u64 *___ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
 {
 #if !defined (__KVM_NVHE_HYPERVISOR__)
 	if (unlikely(cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
@@ -906,6 +906,13 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
 	return (u64 *)&ctxt->sys_regs[r];
 }
 
+#define __ctxt_sys_reg(c,r)						\
+	({								\
+		BUILD_BUG_ON(__builtin_constant_p(r) &&			\
+			     (r) >= NR_SYS_REGS);			\
+		___ctxt_sys_reg(c, r);					\
+	})
+
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
 u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
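For readers unfamiliar with the trick, here is a standalone GNU C sketch of
the same idea (hypothetical names, not code from this series): a
statement-expression wrapper rejects out-of-range *constant* indices at
build time, while runtime-computed indices sail through unchecked, exactly
as the message above warns.

#include <stdio.h>

#define NR_REGS 16

/* Minimal stand-in for the kernel's BUILD_BUG_ON(): a compile-time true
 * condition yields a negative array size and breaks the build; a
 * non-constant condition is never checked. */
#define CHECK_CONSTANT(cond)	((void)sizeof(char[1 - 2 * !!(cond)]))

static unsigned long regs[NR_REGS];

static inline unsigned long *__reg(int r)
{
	return &regs[r];
}

#define reg(r)							\
	({							\
		CHECK_CONSTANT(__builtin_constant_p(r) &&	\
			       (r) >= NR_REGS);			\
		__reg(r);					\
	})

int main(void)
{
	int i = 3;

	*reg(5) = 42;		/* constant, in range: compiles */
	*reg(i) = 43;		/* not a constant: no compile-time check */
	/* *reg(100) = 44; */	/* constant, out of range: build error */
	printf("%lu %lu\n", *reg(5), *reg(i));
	return 0;
}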
From patchwork Fri Apr 19 10:29:22 2024
X-Patchwork-Id: 13636176
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 02/15] KVM: arm64: Add helpers for ESR_ELx_ERET_ISS_ERET*
Date: Fri, 19 Apr 2024 11:29:22 +0100
Message-Id: <20240419102935.1935571-3-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>

The ESR_ELx_ERET_ISS_ERET* macros are a bit confusing:

- ESR_ELx_ERET_ISS_ERET really indicates that we have trapped an
  ERETA* instruction, as opposed to an ERET

- ESR_ELx_ERET_ISS_ERETA really indicates that we have trapped an
  ERETAB instruction, as opposed to an ERETAA.

We could repaint those to make more sense, but these are the names
that are present in the ARM ARM, and we are sentimentally attached
to those.

Instead, add two new helpers:

- esr_iss_is_eretax() being true tells you that you need to
  authenticate the ERET

- esr_iss_is_eretab() tells you that you need to use the B key
  instead of the A key

Following patches will make use of these primitives.
Suggested-by: Joey Gouly
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/esr.h | 12 ++++++++++++
 arch/arm64/kvm/handle_exit.c |  2 +-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 81606bf7d5ac..7abf09df7033 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -404,6 +404,18 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
 	return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_ACCESS;
 }
 
+/* Indicate whether ESR.EC==0x1A is for an ERETAx instruction */
+static inline bool esr_iss_is_eretax(unsigned long esr)
+{
+	return esr & ESR_ELx_ERET_ISS_ERET;
+}
+
+/* Indicate which key is used for ERETAx (false: A-Key, true: B-Key) */
+static inline bool esr_iss_is_eretab(unsigned long esr)
+{
+	return esr & ESR_ELx_ERET_ISS_ERETA;
+}
+
 const char *esr_get_class_string(unsigned long esr);
 
 #endif /* __ASSEMBLY */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 617ae6dea5d5..15221e481ccd 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -219,7 +219,7 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 
 static int kvm_handle_eret(struct kvm_vcpu *vcpu)
 {
-	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_ERET_ISS_ERET)
+	if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)))
 		return kvm_handle_ptrauth(vcpu);
 
 	/*
From patchwork Fri Apr 19 10:29:23 2024
X-Patchwork-Id: 13636177
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 03/15] KVM: arm64: Constraint PAuth support to consistent implementations
Date: Fri, 19 Apr 2024 11:29:23 +0100
Message-Id: <20240419102935.1935571-4-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>

PAuth comes in two parts: address authentication, and generic
authentication. So far, KVM mandates that both are implemented.

PAuth also comes in three flavours: Q5, Q3, and IMPDEF. Only one can
be implemented for any of address and generic authentication.

Crucially, the architecture doesn't mandate that address and generic
authentication implement the *same* flavour. This would make
implementing ERETAx very difficult for NV, something we are not
terribly keen on.

So only allow PAuth support for KVM on systems that are not totally
insane. Which is so far 100% of the known HW.

Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/arm.c | 38 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3dee5490eea9..a7178af1ab0c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -218,6 +218,40 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_arm_teardown_hypercalls(kvm);
 }
 
+static bool kvm_has_full_ptr_auth(void)
+{
+	bool apa, gpa, api, gpi, apa3, gpa3;
+	u64 isar1, isar2, val;
+
+	/*
+	 * Check that:
+	 *
+	 * - both Address and Generic auth are implemented for a given
+	 *   algorithm (Q5, IMPDEF or Q3)
+	 * - only a single algorithm is implemented.
+	 */
+	if (!system_has_full_ptr_auth())
+		return false;
+
+	isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+	isar2 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1);
+
+	apa = !!FIELD_GET(ID_AA64ISAR1_EL1_APA_MASK, isar1);
+	val = FIELD_GET(ID_AA64ISAR1_EL1_GPA_MASK, isar1);
+	gpa = (val == ID_AA64ISAR1_EL1_GPA_IMP);
+
+	api = !!FIELD_GET(ID_AA64ISAR1_EL1_API_MASK, isar1);
+	val = FIELD_GET(ID_AA64ISAR1_EL1_GPI_MASK, isar1);
+	gpi = (val == ID_AA64ISAR1_EL1_GPI_IMP);
+
+	apa3 = !!FIELD_GET(ID_AA64ISAR2_EL1_APA3_MASK, isar2);
+	val = FIELD_GET(ID_AA64ISAR2_EL1_GPA3_MASK, isar2);
+	gpa3 = (val == ID_AA64ISAR2_EL1_GPA3_IMP);
+
+	return (apa == gpa && api == gpi && apa3 == gpa3 &&
+		(apa + api + apa3) == 1);
+}
+
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
 	int r;
@@ -311,7 +345,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		break;
 	case KVM_CAP_ARM_PTRAUTH_ADDRESS:
 	case KVM_CAP_ARM_PTRAUTH_GENERIC:
-		r = system_has_full_ptr_auth();
+		r = kvm_has_full_ptr_auth();
 		break;
 	case KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE:
 		if (kvm)
@@ -1270,7 +1304,7 @@ static unsigned long system_supported_vcpu_features(void)
 	if (!system_supports_sve())
 		clear_bit(KVM_ARM_VCPU_SVE, &features);
 
-	if (!system_has_full_ptr_auth()) {
+	if (!kvm_has_full_ptr_auth()) {
 		clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features);
 		clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features);
 	}
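Read in isolation, the predicate enforced by kvm_has_full_ptr_auth() boils
down to the following. This is a hypothetical, self-contained restatement
rather than the kernel code, with the six booleans standing in for the
sanitised ID register fields: each algorithm must provide both address and
generic authentication (or neither), and exactly one algorithm may be
implemented.

#include <stdbool.h>
#include <assert.h>

static bool ptrauth_is_consistent(bool apa, bool gpa,
				  bool api, bool gpi,
				  bool apa3, bool gpa3)
{
	/* matching address/generic support per algorithm, one algorithm total */
	return apa == gpa && api == gpi && apa3 == gpa3 &&
	       (apa + api + apa3) == 1;
}

int main(void)
{
	/* QARMA5 only, both halves implemented: consistent */
	assert(ptrauth_is_consistent(true, true, false, false, false, false));
	/* address auth from one algorithm, generic from another: rejected */
	assert(!ptrauth_is_consistent(true, false, false, true, false, false));
	/* two complete algorithms at once: also rejected */
	assert(!ptrauth_is_consistent(true, true, true, true, false, false));
	return 0;
}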
From patchwork Fri Apr 19 10:29:24 2024
X-Patchwork-Id: 13636179
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 04/15] KVM: arm64: nv: Drop VCPU_HYP_CONTEXT flag
Date: Fri, 19 Apr 2024 11:29:24 +0100
Message-Id: <20240419102935.1935571-5-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>

It has become obvious that HCR_EL2.NV serves the exact same use as
VCPU_HYP_CONTEXT, only in an architectural way. So just drop the flag
for good.

Reviewed-by: Joey Gouly
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_host.h | 2 --
 arch/arm64/kvm/hyp/vhe/switch.c   | 7 +------
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e24bd876ec9a..5566191d646a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -817,8 +817,6 @@ struct kvm_vcpu_arch {
 #define DEBUG_STATE_SAVE_SPE	__vcpu_single_flag(iflags, BIT(5))
 /* Save TRBE context if active  */
 #define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
-/* vcpu running in HYP context */
-#define VCPU_HYP_CONTEXT	__vcpu_single_flag(iflags, BIT(7))
 
 /* SVE enabled for host EL0 */
 #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..07fd9f70f870 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -197,7 +197,7 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * If we were in HYP context on entry, adjust the PSTATE view
 	 * so that the usual helpers work correctly.
 	 */
-	if (unlikely(vcpu_get_flag(vcpu, VCPU_HYP_CONTEXT))) {
+	if (vcpu_has_nv(vcpu) && (read_sysreg(hcr_el2) & HCR_NV)) {
 		u64 mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
 
 		switch (mode) {
@@ -240,11 +240,6 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
-	if (is_hyp_ctxt(vcpu))
-		vcpu_set_flag(vcpu, VCPU_HYP_CONTEXT);
-	else
-		vcpu_clear_flag(vcpu, VCPU_HYP_CONTEXT);
-
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);

From patchwork Fri Apr 19 10:29:25 2024
X-Patchwork-Id: 13636175
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 05/15] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2
Date: Fri, 19 Apr 2024 11:29:25 +0100
Message-Id: <20240419102935.1935571-6-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>
Add the HCR_EL2 configuration for FEAT_NV2, adding the required bits
for running a guest hypervisor, and overall merging the allowed bits
provided by the guest.

This heavily relies on unavailable features being sanitised when the
HCR_EL2 shadow register is accessed, and only a couple of bits must be
explicitly disabled.

Non-NV guests are completely unaffected by any of this.

Reviewed-by: Joey Gouly
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  4 +--
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         | 35 ++++++++++++++++++++++++-
 3 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e3fcf8c4d5b4..f5f701f309a9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -271,10 +271,8 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 	__deactivate_traps_hfgxtr(vcpu);
 }
 
-static inline void ___activate_traps(struct kvm_vcpu *vcpu)
+static inline void ___activate_traps(struct kvm_vcpu *vcpu, u64 hcr)
 {
-	u64 hcr = vcpu->arch.hcr_el2;
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
 		hcr |= HCR_TVM;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c50f8459e4fc..4103625e46c5 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -40,7 +40,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
-	___activate_traps(vcpu);
+	___activate_traps(vcpu, vcpu->arch.hcr_el2);
 	__activate_traps_common(vcpu);
 
 	val = vcpu->arch.cptr_el2;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 07fd9f70f870..6b82f0907882 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -33,11 +33,44 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
+/*
+ * HCR_EL2 bits that the NV guest can freely change (no RES0/RES1
+ * semantics, irrespective of the configuration), but that cannot be
+ * applied to the actual HW as things would otherwise break badly.
+ *
+ * - TGE: we want the guest to use EL1, which is incompatible with
+ *   this bit being set
+ *
+ * - API/APK: for hysterical raisins, we enable PAuth lazily, which
+ *   means that the guest's bits cannot be directly applied (we
+ *   really want to see the traps). Revisit this at some point.
+ */
+#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK)
+
+static u64 __compute_hcr(struct kvm_vcpu *vcpu)
+{
+	u64 hcr = vcpu->arch.hcr_el2;
+
+	if (!vcpu_has_nv(vcpu))
+		return hcr;
+
+	if (is_hyp_ctxt(vcpu)) {
+		hcr |= HCR_NV | HCR_NV2 | HCR_AT | HCR_TTLB;
+
+		if (!vcpu_el2_e2h_is_set(vcpu))
+			hcr |= HCR_NV1;
+
+		write_sysreg_s(vcpu->arch.ctxt.vncr_array, SYS_VNCR_EL2);
+	}
+
+	return hcr | (__vcpu_sys_reg(vcpu, HCR_EL2) & ~NV_HCR_GUEST_EXCLUDE);
+}
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
-	___activate_traps(vcpu);
+	___activate_traps(vcpu, __compute_hcr(vcpu));
 
 	if (has_cntpoff()) {
 		struct timer_map map;
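The final merging step of __compute_hcr() can be illustrated on its own.
The sketch below uses made-up bit positions (not the architectural HCR_EL2
layout) purely to show how the L1 guest's requested bits are folded into
the host-owned value while the excluded bits are masked off.

#include <stdint.h>
#include <stdio.h>

#define BIT(n)			(1ULL << (n))

/* Illustrative positions only, not the real register layout */
#define HCR_TGE			BIT(0)
#define HCR_API			BIT(1)
#define HCR_APK			BIT(2)
#define HCR_NV			BIT(3)
#define HCR_NV2			BIT(4)

#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK)

/* Host-owned configuration ORed with the guest's view, minus the bits
 * that cannot be applied to the real hardware. */
static uint64_t merge_hcr(uint64_t host_hcr, uint64_t guest_hcr)
{
	return host_hcr | (guest_hcr & ~NV_HCR_GUEST_EXCLUDE);
}

int main(void)
{
	uint64_t host = HCR_NV | HCR_NV2;
	uint64_t guest = HCR_TGE | HCR_API | HCR_NV2;

	/* TGE and API are dropped; NV2 survives the merge */
	printf("%#llx\n", (unsigned long long)merge_hcr(host, guest));
	return 0;
}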
From patchwork Fri Apr 19 10:29:26 2024
X-Patchwork-Id: 13636178
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas, Jintack Lim
Subject: [PATCH v4 06/15] KVM: arm64: nv: Add trap forwarding for ERET and SMC
Date: Fri, 19 Apr 2024 11:29:26 +0100
Message-Id: <20240419102935.1935571-7-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>

Honor the trap forwarding bits for both ERET and SMC, using a new
helper that checks for common conditions.

Reviewed-by: Joey Gouly
Co-developed-by: Jintack Lim
Signed-off-by: Jintack Lim
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_nested.h |  1 +
 arch/arm64/kvm/emulate-nested.c     | 27 +++++++++++++++++++++++++++
 arch/arm64/kvm/handle_exit.c        |  7 +++++++
 3 files changed, 35 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index c77d795556e1..dbc4e3a67356 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,6 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
 	return ttbr0 & ~GENMASK_ULL(63, 48);
 }
 
+extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
 int kvm_init_nv_sysregs(struct kvm *kvm);
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 4697ba41b3a9..2d80e81ae650 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2117,6 +2117,26 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
 	return true;
 }
 
+static bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
+{
+	bool control_bit_set;
+
+	if (!vcpu_has_nv(vcpu))
+		return false;
+
+	control_bit_set = __vcpu_sys_reg(vcpu, HCR_EL2) & control_bit;
+	if (!is_hyp_ctxt(vcpu) && control_bit_set) {
+		kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+		return true;
+	}
+	return false;
+}
+
+bool forward_smc_trap(struct kvm_vcpu *vcpu)
+{
+	return forward_traps(vcpu, HCR_TSC);
+}
+
 static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
 {
 	u64 mode = spsr & PSR_MODE_MASK;
@@ -2155,6 +2175,13 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 	u64 spsr, elr, mode;
 	bool direct_eret;
 
+	/*
+	 * Forward this trap to the virtual EL2 if the virtual
+	 * HCR_EL2.NV bit is set and this is coming from !EL2.
+	 */
+	if (forward_traps(vcpu, HCR_NV))
+		return;
+
 	/*
 	 * Going through the whole put/load motions is a waste of time
 	 * if this is a VHE guest hypervisor returning to its own
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 15221e481ccd..6a88ec024e2f 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -55,6 +55,13 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
 
 static int handle_smc(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * Forward this trapped smc instruction to the virtual EL2 if
+	 * the guest has asked for it.
+	 */
+	if (forward_smc_trap(vcpu))
+		return 1;
+
 	/*
 	 * "If an SMC instruction executed at Non-secure EL1 is
 	 * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a

From patchwork Fri Apr 19 10:29:27 2024
X-Patchwork-Id: 13636181
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 07/15] KVM: arm64: nv: Fast-track 'InHost' exception returns
Date: Fri, 19 Apr 2024 11:29:27 +0100
Message-Id: <20240419102935.1935571-8-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>
A significant part of the FEAT_NV extension is to trap ERET
instructions so that the hypervisor gets a chance to switch from a
vEL2 L1 guest to an EL1 L2 guest.

But this also has the unfortunate consequence of trapping ERET in
unsuspecting circumstances, such as staying at vEL2 (interrupt
handling while being in the guest hypervisor), or returning to host
userspace in the case of a VHE guest. Although we already make some
effort to handle these ERET quicker by not doing the put/load dance,
it is still way too far down the line for it to be efficient enough.

For these cases, it would be ideal to ERET directly, no question
asked. Of course, we can't do that. But the next best thing is to do
it as early as possible, in fixup_guest_exit(), much as we would
handle FPSIMD exceptions.

Reviewed-by: Joey Gouly
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/emulate-nested.c | 29 +++------------------
 arch/arm64/kvm/hyp/vhe/switch.c | 44 +++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 2d80e81ae650..63a74c0330f1 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2172,8 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
 
 void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 {
-	u64 spsr, elr, mode;
-	bool direct_eret;
+	u64 spsr, elr;
 
 	/*
 	 * Forward this trap to the virtual EL2 if the virtual
@@ -2182,33 +2181,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 	if (forward_traps(vcpu, HCR_NV))
 		return;
 
-	/*
-	 * Going through the whole put/load motions is a waste of time
-	 * if this is a VHE guest hypervisor returning to its own
-	 * userspace, or the hypervisor performing a local exception
-	 * return. No need to save/restore registers, no need to
-	 * switch S2 MMU. Just do the canonical ERET.
-	 */
-	spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
-	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
-
-	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
-
-	direct_eret = (mode == PSR_MODE_EL0t &&
-		       vcpu_el2_e2h_is_set(vcpu) &&
-		       vcpu_el2_tge_is_set(vcpu));
-	direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
-
-	if (direct_eret) {
-		*vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
-		*vcpu_cpsr(vcpu) = spsr;
-		trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
-		return;
-	}
-
 	preempt_disable();
 	kvm_arch_vcpu_put(vcpu);
 
+	spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
+	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
 	elr = __vcpu_sys_reg(vcpu, ELR_EL2);
 
 	trace_kvm_nested_eret(vcpu, elr, spsr);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 6b82f0907882..390c7d99f617 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -206,6 +206,49 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
 	__vcpu_put_switch_sysregs(vcpu);
 }
 
+static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	u64 spsr, mode;
+
+	/*
+	 * Going through the whole put/load motions is a waste of time
+	 * if this is a VHE guest hypervisor returning to its own
+	 * userspace, or the hypervisor performing a local exception
+	 * return. No need to save/restore registers, no need to
+	 * switch S2 MMU. Just do the canonical ERET.
+	 *
+	 * Unless the trap has to be forwarded further down the line,
+	 * of course...
+	 */
+	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
+		return false;
+
+	spsr = read_sysreg_el1(SYS_SPSR);
+	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+	switch (mode) {
+	case PSR_MODE_EL0t:
+		if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+			return false;
+		break;
+	case PSR_MODE_EL2t:
+		mode = PSR_MODE_EL1t;
+		break;
+	case PSR_MODE_EL2h:
+		mode = PSR_MODE_EL1h;
+		break;
+	default:
+		return false;
+	}
+
+	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
+
+	write_sysreg_el2(spsr, SYS_SPSR);
+	write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
+
+	return true;
+}
+
 static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
@@ -216,6 +259,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+	[ESR_ELx_EC_ERET]		= kvm_hyp_handle_eret,
 	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };
From patchwork Fri Apr 19 10:29:28 2024
X-Patchwork-Id: 13636180
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 08/15] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set
Date: Fri, 19 Apr 2024 11:29:28 +0100
Message-Id: <20240419102935.1935571-9-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>

If the L1 hypervisor decides to trap ERETs while running L2, make sure
we don't try to emulate it, just like we wouldn't if it had its NV bit
set.

The exception will be reinjected from the core handler.

Reviewed-by: Joey Gouly
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/vhe/switch.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 390c7d99f617..26395171621b 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -220,7 +220,8 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * Unless the trap has to be forwarded further down the line,
 	 * of course...
 	 */
-	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
+	if ((__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV) ||
+	    (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_ERET))
 		return false;
 
 	spsr = read_sysreg_el1(SYS_SPSR);

From patchwork Fri Apr 19 10:29:29 2024
X-Patchwork-Id: 13636184
From: Marc Zyngier
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu, Joey Gouly, Fuad Tabba, Mostafa Saleh, Will Deacon, Catalin Marinas
Subject: [PATCH v4 09/15] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently
Date: Fri, 19 Apr 2024 11:29:29 +0100
Message-Id: <20240419102935.1935571-10-maz@kernel.org>
In-Reply-To: <20240419102935.1935571-1-maz@kernel.org>
Although KVM couples API and APK for simplicity, the architecture
makes no such requirement, and the two can be independently set or
cleared.

Check for which of the two possible reasons we have trapped here, and
if the corresponding L1 control bit isn't set, delegate the handling
for forwarding. Otherwise, set this exact bit in HCR_EL2 and resume
the guest.

Of course, in the non-NV case, we keep setting both bits and be done
with it. Note that the entry core already saves/restores the keys
should any of the two control bits be set.

This results in a bit of rework, and the removal of the (trivial)
vcpu_ptrauth_enable() helper.

Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_emulate.h    |  5 ----
 arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++----
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 975af30af31f..87f2c31f3206 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -125,11 +125,6 @@ static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
 	vcpu->arch.hcr_el2 |= HCR_TWI;
 }
 
-static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
-}
-
 static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f5f701f309a9..a0908d7a8f56 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -480,11 +480,35 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	struct kvm_cpu_context *ctxt;
-	u64 val;
+	u64 enable = 0;
 
 	if (!vcpu_has_ptrauth(vcpu))
 		return false;
 
+	/*
+	 * NV requires us to handle API and APK independently, just in
+	 * case the hypervisor is totally nuts. Please barf >here<.
+	 */
+	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
+		switch (ESR_ELx_EC(kvm_vcpu_get_esr(vcpu))) {
+		case ESR_ELx_EC_PAC:
+			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_API))
+				return false;
+
+			enable |= HCR_API;
+			break;
+
+		case ESR_ELx_EC_SYS64:
+			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_APK))
+				return false;
+
+			enable |= HCR_APK;
+			break;
+		}
+	} else {
+		enable = HCR_API | HCR_APK;
+	}
+
 	ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
 	__ptrauth_save_key(ctxt, APIA);
 	__ptrauth_save_key(ctxt, APIB);
@@ -492,11 +516,9 @@ static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
 	__ptrauth_save_key(ctxt, APDB);
 	__ptrauth_save_key(ctxt, APGA);
 
-	vcpu_ptrauth_enable(vcpu);
-
-	val = read_sysreg(hcr_el2);
-	val |= (HCR_API | HCR_APK);
-	write_sysreg(val, hcr_el2);
+	vcpu->arch.hcr_el2 |= enable;
+	sysreg_clear_set(hcr_el2, 0, enable);
 
 	return true;
 }
Saleh , Will Deacon , Catalin Marinas Subject: [PATCH v4 10/15] KVM: arm64: nv: Reinject PAC exceptions caused by HCR_EL2.API==0 Date: Fri, 19 Apr 2024 11:29:30 +0100 Message-Id: <20240419102935.1935571-11-maz@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240419102935.1935571-1-maz@kernel.org> References: <20240419102935.1935571-1-maz@kernel.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, joey.gouly@arm.com, tabba@google.com, smostafa@google.com, will@kernel.org, catalin.marinas@arm.com X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false In order for an L1 hypervisor to correctly handle PAuth instructions, it must observe traps caused by an L1 PAuth instruction when HCR_EL2.API==0. Since we already handle the case for API==1 as a fixup, only the exception injection case needs to be handled. Rework the kvm_handle_ptrauth() callback to reinject the trap in this case. Note that APK==0 is already handled by the existing triage_sysreg_trap() helper. Signed-off-by: Marc Zyngier --- arch/arm64/kvm/handle_exit.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 6a88ec024e2f..1ba2f788b2c3 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -214,12 +214,34 @@ static int handle_sve(struct kvm_vcpu *vcpu) } /* - * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into - * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all - * that we can do is give the guest an UNDEF. + * Two possibilities to handle a trapping ptrauth instruction: + * + * - Guest usage of a ptrauth instruction (which the guest EL1 did not + * turn into a NOP). If we get here, it is that we didn't fixup + * ptrauth on exit, and all that we can do is give the guest an + * UNDEF (as the guest isn't supposed to use ptrauth without being + * told it could). + * + * - Running an L2 NV guest while L1 has left HCR_EL2.API==0, and for + * which we reinject the exception into L1. API==1 is handled as a + * fixup so the only way to get here is when API==0. + * + * Anything else is an emulation bug (hence the WARN_ON + UNDEF). */ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu) { + if (!vcpu_has_ptrauth(vcpu)) { + kvm_inject_undefined(vcpu); + return 1; + } + + if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) { + kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu)); + return 1; + } + + /* Really shouldn't be here!
*/ + WARN_ON_ONCE(1); kvm_inject_undefined(vcpu); return 1; } From patchwork Fri Apr 19 10:29:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 13636182 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7F5E57F7CE; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522593; cv=none; b=qvcuNXlTzohBtemT32w0db3ZXhyRjgxT8nHv8YvXLtq9thC58udJpjBnsKBvuK/YXFgp9epmXGgvaEM+o+3oF/bivxh/hKBasALAVyR4BhTPYC6cwn9X/Dw/T3V6LjiV4ztUs0DzDa/FV9RWOCfoo/YZpwk/BfN+FZbYJQ5/I4k= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522593; c=relaxed/simple; bh=Dg+Z7zG4VBF0sGpOvyZhMnvQ2Jf+rhquslT9NGMAOik=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=PoB773U9pevj+N5bC+b20LLppUjaJBKvXSZl6eB3LbzM9BGa/dLTvnefSYzD/8Lczvunc6rPxFRG3VppfMPMBwoBKTgVhOakGnyFw40EBviz9HIwEOta5j0trppM2rM1nDY1ejXdZbd+Py8dY05kA64UMNWN6Xmrj1XTLYjMC4s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=slKkS3pA; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="slKkS3pA" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1B53DC32783; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1713522593; bh=Dg+Z7zG4VBF0sGpOvyZhMnvQ2Jf+rhquslT9NGMAOik=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=slKkS3pAX2xzK1+VjPrqoDAeYLJp4NHm9HxcI7GC/8htf4d3sGcyAIJs9iXudt7gQ v5FvzU/ommGyYo5pFO6OEE2mEB4+BHrFRtKNs3k0gyg+zVCGsPO8y/uJ2xFTpSmMzF EQJEgtCfpzVJPXkVEr6Mcqy0e38gz+EMroHnkDsW9h3G2CLD47UzxQbt+U6YMnE6zM gjNj1bODCrQBpz/SUZeeXiFDhnSr7zZgESNBUHaWC59TZEthIgwuB1tEJ+UxkYFPOs ryKEktzAWMruRX7oDx8WY63FBqDEaO7TBQ2iM3W5EGJpfXuKTey9h7ArJVAczjuV9b gBj22R7G2zS9w== Received: from sofa.misterjones.org ([185.219.108.64] helo=valley-girl.lan) by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.95) (envelope-from ) id 1rxlV9-00636W-DP; Fri, 19 Apr 2024 11:29:51 +0100 From: Marc Zyngier To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: James Morse , Suzuki K Poulose , Oliver Upton , Zenghui Yu , Joey Gouly , Fuad Tabba , Mostafa Saleh , Will Deacon , Catalin Marinas Subject: [PATCH v4 11/15] KVM: arm64: nv: Add kvm_has_pauth() helper Date: Fri, 19 Apr 2024 11:29:31 +0100 Message-Id: <20240419102935.1935571-12-maz@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240419102935.1935571-1-maz@kernel.org> References: <20240419102935.1935571-1-maz@kernel.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, joey.gouly@arm.com, tabba@google.com, smostafa@google.com, will@kernel.org, catalin.marinas@arm.com 
X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false Pointer Authentication comes in many flavors, and a faithful emulation relies on correctly handling the flavour implemented by the HW. For this, provide a new kvm_has_pauth() that checks whether we expose to the guest a particular level of support. This checks across all 3 possible authentication algorithms (Q5, Q3 and IMPDEF). Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5566191d646a..ed4d82017b87 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1336,4 +1336,19 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu); (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \ get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max)) +/* Check for a given level of PAuth support */ +#define kvm_has_pauth(k, l) \ + ({ \ + bool pa, pi, pa3; \ + \ + pa = kvm_has_feat((k), ID_AA64ISAR1_EL1, APA, l); \ + pa &= kvm_has_feat((k), ID_AA64ISAR1_EL1, GPA, IMP); \ + pi = kvm_has_feat((k), ID_AA64ISAR1_EL1, API, l); \ + pi &= kvm_has_feat((k), ID_AA64ISAR1_EL1, GPI, IMP); \ + pa3 = kvm_has_feat((k), ID_AA64ISAR2_EL1, APA3, l); \ + pa3 &= kvm_has_feat((k), ID_AA64ISAR2_EL1, GPA3, IMP); \ + \ + (pa + pi + pa3) == 1; \ + }) + #endif /* __ARM64_KVM_HOST_H__ */ From patchwork Fri Apr 19 10:29:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 13636185 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 922AB7F7EE; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522593; cv=none; b=FplTel+ZOXkOSwL18PV7vYCLFATeHeRLLoTBE9u1+ZhTwUNwccm3LJvk2aGrljeSU1ak0vhEiJbepoqnE+PjNGbY3PEoXpR3qzvwK3FHiuQXI5zFt1fxQkJ6NQJiTQaVI1KTivjR2uISsEBV1KFaVtJkp2mXxFhGorzt/Lsv/HQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522593; c=relaxed/simple; bh=8aUw2zqhn6nZBR42Py4B/aHd4OCc85YK165SS/ImLHg=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=AL2Mvwz9PntpqHm00BtnuUxMek369YcB/z+26Vrjk+yhv4GV+aFs/2gLosacioDYoLjyn8I6mYlyrUyfrcqsnlPiZB3+MeuUK/aWnrzK88oMfS7rYm6MbGQZsV8glMyaaUQ36Aa7ces63KjsMcVBtZjQYeHV66vNRxHG5zViqkw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=sSg3lXcp; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="sSg3lXcp" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5891EC32782; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1713522593; bh=8aUw2zqhn6nZBR42Py4B/aHd4OCc85YK165SS/ImLHg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sSg3lXcp13rg6RAmf5bGdWsfu59Qs9j5fs4gLCsZXiccPd8audpXU6QstmJeSEwuI welKj+BZPUfNPZED5Ny/2hViHT5MfW3gB4GTqABFIPORBg3MPOk7mrgG2pMDrHyyox 
f/bjVzk28BLr5BVoYsA0CWZBlMXseC0V+sxxQx73pfuAOrmCaNVTOm3GT0/2dvY43A jHuOMrNS/5qZnJueITsr06BrkDN8YVvmcDgLs5fzKOkH7wAmFFCSz6eL/9wq+3paTX gdxr9XuD2XqNvwKZKVkSa8Db+TpSOwg6d+Q3FAmdzqxhM4vxxidooNh2uvf7WLiVv8 6Z0IfCWH50NDg== Received: from sofa.misterjones.org ([185.219.108.64] helo=valley-girl.lan) by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.95) (envelope-from ) id 1rxlV9-00636W-KC; Fri, 19 Apr 2024 11:29:51 +0100 From: Marc Zyngier To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: James Morse , Suzuki K Poulose , Oliver Upton , Zenghui Yu , Joey Gouly , Fuad Tabba , Mostafa Saleh , Will Deacon , Catalin Marinas Subject: [PATCH v4 12/15] KVM: arm64: nv: Add emulation for ERETAx instructions Date: Fri, 19 Apr 2024 11:29:32 +0100 Message-Id: <20240419102935.1935571-13-maz@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240419102935.1935571-1-maz@kernel.org> References: <20240419102935.1935571-1-maz@kernel.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, joey.gouly@arm.com, tabba@google.com, smostafa@google.com, will@kernel.org, catalin.marinas@arm.com X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false FEAT_NV has the interesting property of relying on ERET being trapped. An added complexity is that it also traps ERETAA and ERETAB, meaning that the Pointer Authentication aspect of these instruction must be emulated. Add an emulation of Pointer Authentication, limited to ERETAx (always using SP_EL2 as the modifier and ELR_EL2 as the pointer), using the Generic Authentication instructions. The emulation, however small, is placed in its own compilation unit so that it can be avoided if the configuration doesn't include it (or the toolchan in not up to the task). Reviewed-by: Joey Gouly Signed-off-by: Marc Zyngier Tested-by: Jon Hunter --- arch/arm64/include/asm/kvm_nested.h | 12 ++ arch/arm64/include/asm/pgtable-hwdef.h | 1 + arch/arm64/kvm/Makefile | 1 + arch/arm64/kvm/pauth.c | 196 +++++++++++++++++++++++++ 4 files changed, 210 insertions(+) create mode 100644 arch/arm64/kvm/pauth.c diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h index dbc4e3a67356..5e0ab0596246 100644 --- a/arch/arm64/include/asm/kvm_nested.h +++ b/arch/arm64/include/asm/kvm_nested.h @@ -64,4 +64,16 @@ extern bool forward_smc_trap(struct kvm_vcpu *vcpu); int kvm_init_nv_sysregs(struct kvm *kvm); +#ifdef CONFIG_ARM64_PTR_AUTH +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr); +#else +static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr) +{ + /* We really should never execute this... 
*/ + WARN_ON_ONCE(1); + *elr = 0xbad9acc0debadbad; + return false; +} +#endif + #endif /* __ARM64_KVM_NESTED_H */ diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index ef207a0d4f0d..9943ff0af4c9 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -297,6 +297,7 @@ #define TCR_TBI1 (UL(1) << 38) #define TCR_HA (UL(1) << 39) #define TCR_HD (UL(1) << 40) +#define TCR_TBID0 (UL(1) << 51) #define TCR_TBID1 (UL(1) << 52) #define TCR_NFD0 (UL(1) << 53) #define TCR_NFD1 (UL(1) << 54) diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index c0c050e53157..04882b577575 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \ vgic/vgic-its.o vgic/vgic-debug.o kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o +kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o always-y := hyp_constants.h hyp-constants.s diff --git a/arch/arm64/kvm/pauth.c b/arch/arm64/kvm/pauth.c new file mode 100644 index 000000000000..a3a5c404375b --- /dev/null +++ b/arch/arm64/kvm/pauth.c @@ -0,0 +1,196 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2024 - Google LLC + * Author: Marc Zyngier + * + * Primitive PAuth emulation for ERETAA/ERETAB. + * + * This code assumes that is is run from EL2, and that it is part of + * the emulation of ERETAx for a guest hypervisor. That's a lot of + * baked-in assumptions and shortcuts. + * + * Do no reuse for anything else! + */ + +#include + +#include +#include + +static u64 compute_pac(struct kvm_vcpu *vcpu, u64 ptr, + struct ptrauth_key ikey) +{ + struct ptrauth_key gkey; + u64 mod, pac = 0; + + preempt_disable(); + + if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU)) + mod = __vcpu_sys_reg(vcpu, SP_EL2); + else + mod = read_sysreg(sp_el1); + + gkey.lo = read_sysreg_s(SYS_APGAKEYLO_EL1); + gkey.hi = read_sysreg_s(SYS_APGAKEYHI_EL1); + + __ptrauth_key_install_nosync(APGA, ikey); + isb(); + + asm volatile(ARM64_ASM_PREAMBLE ".arch_extension pauth\n" + "pacga %0, %1, %2" : "=r" (pac) : "r" (ptr), "r" (mod)); + isb(); + + __ptrauth_key_install_nosync(APGA, gkey); + + preempt_enable(); + + /* PAC in the top 32bits */ + return pac; +} + +static bool effective_tbi(struct kvm_vcpu *vcpu, bool bit55) +{ + u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2); + bool tbi, tbid; + + /* + * Since we are authenticating an instruction address, we have + * to take TBID into account. If E2H==0, ignore VA[55], as + * TCR_EL2 only has a single TBI/TBID. If VA[55] was set in + * this case, this is likely a guest bug... 
+ */ + if (!vcpu_el2_e2h_is_set(vcpu)) { + tbi = tcr & BIT(20); + tbid = tcr & BIT(29); + } else if (bit55) { + tbi = tcr & TCR_TBI1; + tbid = tcr & TCR_TBID1; + } else { + tbi = tcr & TCR_TBI0; + tbid = tcr & TCR_TBID0; + } + + return tbi && !tbid; +} + +static int compute_bottom_pac(struct kvm_vcpu *vcpu, bool bit55) +{ + static const int maxtxsz = 39; // Revisit these two values once + static const int mintxsz = 16; // (if) we support TTST/LVA/LVA2 + u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2); + int txsz; + + if (!vcpu_el2_e2h_is_set(vcpu) || !bit55) + txsz = FIELD_GET(TCR_T0SZ_MASK, tcr); + else + txsz = FIELD_GET(TCR_T1SZ_MASK, tcr); + + return 64 - clamp(txsz, mintxsz, maxtxsz); +} + +static u64 compute_pac_mask(struct kvm_vcpu *vcpu, bool bit55) +{ + int bottom_pac; + u64 mask; + + bottom_pac = compute_bottom_pac(vcpu, bit55); + + mask = GENMASK(54, bottom_pac); + if (!effective_tbi(vcpu, bit55)) + mask |= GENMASK(63, 56); + + return mask; +} + +static u64 to_canonical_addr(struct kvm_vcpu *vcpu, u64 ptr, u64 mask) +{ + bool bit55 = !!(ptr & BIT(55)); + + if (bit55) + return ptr | mask; + + return ptr & ~mask; +} + +static u64 corrupt_addr(struct kvm_vcpu *vcpu, u64 ptr) +{ + bool bit55 = !!(ptr & BIT(55)); + u64 mask, error_code; + int shift; + + if (effective_tbi(vcpu, bit55)) { + mask = GENMASK(54, 53); + shift = 53; + } else { + mask = GENMASK(62, 61); + shift = 61; + } + + if (esr_iss_is_eretab(kvm_vcpu_get_esr(vcpu))) + error_code = 2 << shift; + else + error_code = 1 << shift; + + ptr &= ~mask; + ptr |= error_code; + + return ptr; +} + +/* + * Authenticate an ERETAA/ERETAB instruction, returning true if the + * authentication succeeded and false otherwise. In all cases, *elr + * contains the VA to ERET to. Potential exception injection is left + * to the caller. + */ +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr) +{ + u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL2); + u64 esr = kvm_vcpu_get_esr(vcpu); + u64 ptr, cptr, pac, mask; + struct ptrauth_key ikey; + + *elr = ptr = vcpu_read_sys_reg(vcpu, ELR_EL2); + + /* We assume we're already in the context of an ERETAx */ + if (esr_iss_is_eretab(esr)) { + if (!(sctlr & SCTLR_EL1_EnIB)) + return true; + + ikey.lo = __vcpu_sys_reg(vcpu, APIBKEYLO_EL1); + ikey.hi = __vcpu_sys_reg(vcpu, APIBKEYHI_EL1); + } else { + if (!(sctlr & SCTLR_EL1_EnIA)) + return true; + + ikey.lo = __vcpu_sys_reg(vcpu, APIAKEYLO_EL1); + ikey.hi = __vcpu_sys_reg(vcpu, APIAKEYHI_EL1); + } + + mask = compute_pac_mask(vcpu, !!(ptr & BIT(55))); + cptr = to_canonical_addr(vcpu, ptr, mask); + + pac = compute_pac(vcpu, cptr, ikey); + + /* + * Slightly deviate from the pseudocode: if we have a PAC + * match with the signed pointer, then it must be good. + * Anything after this point is pure error handling. + */ + if ((pac & mask) == (ptr & mask)) { + *elr = cptr; + return true; + } + + /* + * Authentication failed, corrupt the canonical address if + * PAuth2 isn't implemented, or some XORing if it is. 
+ */ + if (!kvm_has_pauth(vcpu->kvm, PAuth2)) + cptr = corrupt_addr(vcpu, cptr); + else + cptr = ptr ^ (pac & mask); + + *elr = cptr; + return false; +} From patchwork Fri Apr 19 10:29:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 13636186 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B10197FBAF; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522593; cv=none; b=kRLKATwP+WRcxipEpXxI9/sCf927Uim4PwhsmWZXpMtMikr3cJzIqkkvsSBf8fCsQXEmM2X22XMhjapNZahiCumlfjmV8cwXB/3ad0z8BScHb9U1ScjL+uuw+nCUQn1vkwTPOsznURnJ6ljzJ3Ko5psRVi6G/cR+GEa8uJJAT0c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522593; c=relaxed/simple; bh=3mtQOeqT5hkTI2xaw9Qk8k1KFnpFiefVFxYHDiC8HeM=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=E0mImwKGc3+pJLHm8yjQ+E6MMdAD0xvvZvIykwH8cjYTEKMI4JR6clasJ0il1FcQ6iLXDRKqzMi9/CrzWOaTIThp9UO2XPCi6o8bKqtPqUUYy2FWojQTnCmXBya+wByHp4zbKYt7xa5qkQ/I9RZH9vGrB6+IAFRJYU3nL8dJAqQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Uj1i38ds; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Uj1i38ds" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 90DE1C3277B; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1713522593; bh=3mtQOeqT5hkTI2xaw9Qk8k1KFnpFiefVFxYHDiC8HeM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Uj1i38ds2XG5KPJghO7zrdheHgTFxnNz+F9EZBEIwPRkiTi+ZklEUR71F5mjZeILt OY92HxZR3Ll4ETYYw9up1ydb/rBHgiC5yEE83BNXCaw9Bl2QCZJqGfScIvc2BHI3oH 5GGLYsm/CyBTZkDBPh1G/5c0FXx1jis7UWJWiNRCxhkMtOV9fdOwMLd+LXnlgAzJQD 4Wk3etYvgsoRUFVoQpyVPsYZRdsLu9JqX+Xlp2c114mnQpP/GaMT9WHjie6bTwej9l RGIaDb1MAEVkqyu3lhXxKCVnexgR1nI/Q/9dvn9W0HOT8YcsK3sVJ2XhD6OF8AxE8C NM7sqEaKbQSag== Received: from sofa.misterjones.org ([185.219.108.64] helo=valley-girl.lan) by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.95) (envelope-from ) id 1rxlV9-00636W-SZ; Fri, 19 Apr 2024 11:29:51 +0100 From: Marc Zyngier To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: James Morse , Suzuki K Poulose , Oliver Upton , Zenghui Yu , Joey Gouly , Fuad Tabba , Mostafa Saleh , Will Deacon , Catalin Marinas Subject: [PATCH v4 13/15] KVM: arm64: nv: Handle ERETA[AB] instructions Date: Fri, 19 Apr 2024 11:29:33 +0100 Message-Id: <20240419102935.1935571-14-maz@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240419102935.1935571-1-maz@kernel.org> References: <20240419102935.1935571-1-maz@kernel.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, joey.gouly@arm.com, 
tabba@google.com, smostafa@google.com, will@kernel.org, catalin.marinas@arm.com X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false Now that we have some emulation in place for ERETA[AB], we can plug it into the exception handling machinery. As for a bare ERET, an "easy" ERETAx instruction is processed as a fixup, while something that requires a translation regime transition or an exception delivery is left to the slow path. Reviewed-by: Joey Gouly Signed-off-by: Marc Zyngier --- arch/arm64/kvm/emulate-nested.c | 22 ++++++++++++++++++++-- arch/arm64/kvm/handle_exit.c | 3 ++- arch/arm64/kvm/hyp/vhe/switch.c | 13 +++++++++++-- 3 files changed, 33 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c index 63a74c0330f1..72d733c74a38 100644 --- a/arch/arm64/kvm/emulate-nested.c +++ b/arch/arm64/kvm/emulate-nested.c @@ -2172,7 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr) void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu) { - u64 spsr, elr; + u64 spsr, elr, esr; /* * Forward this trap to the virtual EL2 if the virtual @@ -2181,12 +2181,30 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu) if (forward_traps(vcpu, HCR_NV)) return; + /* Check for an ERETAx */ + esr = kvm_vcpu_get_esr(vcpu); + if (esr_iss_is_eretax(esr) && !kvm_auth_eretax(vcpu, &elr)) { + /* + * Oh no, ERETAx failed to authenticate. If we have + * FPACCOMBINE, deliver an exception right away. If we + * don't, then let the mangled ELR value trickle down the + * ERET handling, and the guest will have a little surprise. + */ + if (kvm_has_pauth(vcpu->kvm, FPACCOMBINE)) { + esr &= ESR_ELx_ERET_ISS_ERETA; + esr |= FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_FPAC); + kvm_inject_nested_sync(vcpu, esr); + return; + } + } + preempt_disable(); kvm_arch_vcpu_put(vcpu); spsr = __vcpu_sys_reg(vcpu, SPSR_EL2); spsr = kvm_check_illegal_exception_return(vcpu, spsr); - elr = __vcpu_sys_reg(vcpu, ELR_EL2); + if (!esr_iss_is_eretax(esr)) + elr = __vcpu_sys_reg(vcpu, ELR_EL2); trace_kvm_nested_eret(vcpu, elr, spsr); diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 1ba2f788b2c3..407bdfbb572b 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -248,7 +248,8 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu) static int kvm_handle_eret(struct kvm_vcpu *vcpu) { - if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu))) + if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)) && + !vcpu_has_ptrauth(vcpu)) return kvm_handle_ptrauth(vcpu); /* diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 26395171621b..8e1d98b691c1 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -208,7 +208,8 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu) static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code) { - u64 spsr, mode; + u64 esr = kvm_vcpu_get_esr(vcpu); + u64 spsr, elr, mode; /* * Going through the whole put/load motions is a waste of time @@ -242,10 +243,18 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code) return false; } + /* If ERETAx fails, take the slow path */ + if (esr_iss_is_eretax(esr)) { + if (!(vcpu_has_ptrauth(vcpu) && kvm_auth_eretax(vcpu, &elr))) + return false; + } else { + elr = read_sysreg_el1(SYS_ELR); + } + spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode; write_sysreg_el2(spsr, SYS_SPSR); - 
write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR); + write_sysreg_el2(elr, SYS_ELR); return true; } From patchwork Fri Apr 19 10:29:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 13636187 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F13C980028; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522594; cv=none; b=QBVguNdh1oyR2kDOhpR16m3GX1F3BCLKKM1+jx777Ugd38ZFJvp+hXkvrW9ZukCccUA8od9WmJvQ0PA2ip8UVjq1b4s92xw6rcoNx87DdJ/2XyuXYeXvE2pwv3SYiFuV5iAW9up84gURBd3s2+tQO6j4mvPvwwDsHs62MzxnMMg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522594; c=relaxed/simple; bh=fPLc3qnVR/fPZ7T4iKNF6Hx6H3w34KByeZbhDFUklq8=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=ogdV1Z+tgcAgBGd+Ij6Ari2KL4qtiYl6Q5JLxbhbZfA4wnZaM/+sqhkY+5aQTVIbxkOw42LeSDCB0Qw+vLznI6Mg8dhDNAijRjEXeigWRHTfJB4niPhHnbxcWJ6hv/BFSPLYLmhkkfsHCSWVxKyWMAULvCaZUjltc2YBym/F6yY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ebYp/sJG; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ebYp/sJG" Received: by smtp.kernel.org (Postfix) with ESMTPSA id D03F7C2BD10; Fri, 19 Apr 2024 10:29:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1713522593; bh=fPLc3qnVR/fPZ7T4iKNF6Hx6H3w34KByeZbhDFUklq8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ebYp/sJG/Z5LQzPUd0RTiBK4UBUNHzokxAQq+7BWqofpMylN+zKlnUoeOam8+OD/S RczkYsGsDiXx1xge6RVRPcTfjMFOn9hMl00sWXGrVDH12+UeGq0eVFqg32sC1WGrw0 oCpTT9MS8D9G8g21iUX6aBX5EoAJaqZl75zxWayNymV1mskS8IOT/94lkzdzwMRf98 xQDOvFq88NhOPI53Z0khU/sK/Q2GcZOy4fHjHmW363dIMvyTCfcpvpYeGBJB0usNkP JAWjh8Ymcwm0p4o0YympCI971kqc4qyBQWT4voSPwNUqeZ8gLQG5aTk35e21mcUwsH ej0+Y5D9IxcEw== Received: from sofa.misterjones.org ([185.219.108.64] helo=valley-girl.lan) by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.95) (envelope-from ) id 1rxlVA-00636W-3b; Fri, 19 Apr 2024 11:29:52 +0100 From: Marc Zyngier To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: James Morse , Suzuki K Poulose , Oliver Upton , Zenghui Yu , Joey Gouly , Fuad Tabba , Mostafa Saleh , Will Deacon , Catalin Marinas Subject: [PATCH v4 14/15] KVM: arm64: nv: Advertise support for PAuth Date: Fri, 19 Apr 2024 11:29:34 +0100 Message-Id: <20240419102935.1935571-15-maz@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240419102935.1935571-1-maz@kernel.org> References: <20240419102935.1935571-1-maz@kernel.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, joey.gouly@arm.com, tabba@google.com, smostafa@google.com, will@kernel.org, 
catalin.marinas@arm.com X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false Now that we (hopefully) correctly handle ERETAx, drop the masking of the PAuth feature (something that was not even complete, as APA3 and AGA3 were still exposed). Reviewed-by: Joey Gouly Signed-off-by: Marc Zyngier --- arch/arm64/kvm/nested.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c index ced30c90521a..6813c7c7f00a 100644 --- a/arch/arm64/kvm/nested.c +++ b/arch/arm64/kvm/nested.c @@ -35,13 +35,9 @@ static u64 limit_nv_id_reg(u32 id, u64 val) break; case SYS_ID_AA64ISAR1_EL1: - /* Support everything but PtrAuth and Spec Invalidation */ + /* Support everything but Spec Invalidation */ val &= ~(GENMASK_ULL(63, 56) | - NV_FTR(ISAR1, SPECRES) | - NV_FTR(ISAR1, GPI) | - NV_FTR(ISAR1, GPA) | - NV_FTR(ISAR1, API) | - NV_FTR(ISAR1, APA)); + NV_FTR(ISAR1, SPECRES)); break; case SYS_ID_AA64PFR0_EL1: From patchwork Fri Apr 19 10:29:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 13636188 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A74FA8173B; Fri, 19 Apr 2024 10:29:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522594; cv=none; b=tpQQ+ssEEb+Zi9jMV7GptYWsYP761+gIl4VDCze3NyqXpzadpyXEABjAjuLo9o7X40Tr+/y09OIllw0yxYJcddlePVLjxa9x2FRXQRapkUKJPqWmcPxIhr0bHdzYQ/EUXZdMRh7022gwNpo93nGa1yz6wyMIjl5TO1F8cjbE7mE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713522594; c=relaxed/simple; bh=Hh3LGeFs0C9lT4zyX7DVL1Zu42po1QdwGeQYaOoBnag=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=um4pC7ekZYoTZOExL1oRq+IIwyn0croCgW1gmnU4y4biXr7EO5DDj/ucETZZPcfxjdjLncWndtxnrTbXo+aDwSzk3JPM1Lu1KCyo+fOHdWAS/h8dU7h7gx5u+DucLjwiuHP/6Baq2yLiYsRTIxxgjfiiGtvO5nhFSMjcCfnYqno= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=JnFqITqc; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="JnFqITqc" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 1BA0CC32782; Fri, 19 Apr 2024 10:29:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1713522594; bh=Hh3LGeFs0C9lT4zyX7DVL1Zu42po1QdwGeQYaOoBnag=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JnFqITqccD2AJ3OwYxDy91yVUDodBAhWX+3+As2wPLzJdr4byMeKZeD52EsGKQwN6 /iJ/IXddu+dyCIIX+Aqw3rbdWHZkFkZn2OjUJbpHBfBRPFhEEFp14yPAnrTSUND2tO p5q90R/NFocSAX9b9cEgJJKQH0VRY2E7EnCLw5JxrojOblqVl/26GILam3gPn/Ws+k wYbLi6ENdnwXfDarmTcdDHesycfUc+ZV6z7AP2A9nUQvDB9zqT2YlNgpnMV6uW6Z/o dmLVDHBGq1jmrAwmtnAH6wnfq86uCa5mNYXDuoIOAw4GimZilIaMeMoIE1Mgftuzsv yV8LYa8NHUjSQ== Received: from sofa.misterjones.org ([185.219.108.64] helo=valley-girl.lan) by disco-boy.misterjones.org with esmtpsa (TLS1.3) tls TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.95) (envelope-from ) id 1rxlVA-00636W-B5; Fri, 19 Apr 2024 11:29:52 +0100 From: Marc Zyngier To: 
kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: James Morse , Suzuki K Poulose , Oliver Upton , Zenghui Yu , Joey Gouly , Fuad Tabba , Mostafa Saleh , Will Deacon , Catalin Marinas Subject: [PATCH v4 15/15] KVM: arm64: Drop trapping of PAuth instructions/keys Date: Fri, 19 Apr 2024 11:29:35 +0100 Message-Id: <20240419102935.1935571-16-maz@kernel.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240419102935.1935571-1-maz@kernel.org> References: <20240419102935.1935571-1-maz@kernel.org> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-SA-Exim-Connect-IP: 185.219.108.64 X-SA-Exim-Rcpt-To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, joey.gouly@arm.com, tabba@google.com, smostafa@google.com, will@kernel.org, catalin.marinas@arm.com X-SA-Exim-Mail-From: maz@kernel.org X-SA-Exim-Scanned: No (on disco-boy.misterjones.org); SAEximRunCond expanded to false We currently insist on disabling PAuth on vcpu_load(), and get to enable it on first guest use of an instruction or a key (ignoring the NV case for now). It isn't clear at all what this is trying to achieve: guests tend to use PAuth when available, and nothing forces you to expose it to the guest if you don't want to. This also isn't totally free: we take a full GPR save/restore between host and guest, only to write ten 64bit registers. The "value proposition" escapes me. So let's forget this stuff and enable PAuth eagerly if exposed to the guest. This results in much simpler code. Performance wise, that's not bad either (tested on M2 Pro running a fully automated Debian installer as the workload): - On a non-NV guest, I can see reduction of 0.24% in the number of cycles (measured with perf over 10 consecutive runs) - On a NV guest (L2), I see a 2% reduction in wall-clock time (measured with 'time', as M2 doesn't have a PMUv3 and NV doesn't support it either) So overall, a much reduced complexity and a (small) performance improvement. 
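For the impatient, the new behaviour boils down to the following condensed sketch. It is not the patch itself: it is distilled from the vcpu_set_pauth_traps() helper added in the diff below, with the host key saving and the protected-mode special case omitted.

/* Condensed illustration only -- see the real helper in the diff below */
static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
{
	if (!vcpu_has_ptrauth(vcpu))
		return;		/* PAuth not exposed: leave the API/APK traps in place */

	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
		/* Running an L2 guest: API/APK are whatever L1 wrote into its HCR_EL2 */
		u64 val = __vcpu_sys_reg(vcpu, HCR_EL2) & (HCR_API | HCR_APK);

		vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
		vcpu->arch.hcr_el2 |= val;
	} else {
		/* Otherwise PAuth is enabled eagerly: no trap, no lazy key switch */
		vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
	}
}
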
Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_emulate.h | 5 -- arch/arm64/include/asm/kvm_ptrauth.h | 21 +++++++ arch/arm64/kvm/arm.c | 45 +++++++++++++- arch/arm64/kvm/handle_exit.c | 10 ++-- arch/arm64/kvm/hyp/include/hyp/switch.h | 80 +------------------------ arch/arm64/kvm/hyp/nvhe/switch.c | 2 - arch/arm64/kvm/hyp/vhe/switch.c | 6 +- 7 files changed, 70 insertions(+), 99 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 87f2c31f3206..382164d791f4 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -125,11 +125,6 @@ static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu) vcpu->arch.hcr_el2 |= HCR_TWI; } -static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu) -{ - vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK); -} - static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu) { return vcpu->arch.vsesr_el2; diff --git a/arch/arm64/include/asm/kvm_ptrauth.h b/arch/arm64/include/asm/kvm_ptrauth.h index 0cd0965255d2..d81bac256abc 100644 --- a/arch/arm64/include/asm/kvm_ptrauth.h +++ b/arch/arm64/include/asm/kvm_ptrauth.h @@ -99,5 +99,26 @@ alternative_else_nop_endif .macro ptrauth_switch_to_hyp g_ctxt, h_ctxt, reg1, reg2, reg3 .endm #endif /* CONFIG_ARM64_PTR_AUTH */ + +#else /* !__ASSEMBLY */ + +#define __ptrauth_save_key(ctxt, key) \ + do { \ + u64 __val; \ + __val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ + ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val; \ + __val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ + ctxt_sys_reg(ctxt, key ## KEYHI_EL1) = __val; \ + } while(0) + +#define ptrauth_save_keys(ctxt) \ + do { \ + __ptrauth_save_key(ctxt, APIA); \ + __ptrauth_save_key(ctxt, APIB); \ + __ptrauth_save_key(ctxt, APDA); \ + __ptrauth_save_key(ctxt, APDB); \ + __ptrauth_save_key(ctxt, APGA); \ + } while(0) + #endif /* __ASSEMBLY__ */ #endif /* __ASM_KVM_PTRAUTH_H */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index a7178af1ab0c..c5850cb8b1fa 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -35,10 +35,11 @@ #include #include #include +#include #include #include #include -#include +#include #include #include @@ -462,6 +463,44 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) } +static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu) +{ + if (vcpu_has_ptrauth(vcpu)) { + /* + * Either we're running running an L2 guest, and the API/APK + * bits come from L1's HCR_EL2, or API/APK are both set. + */ + if (unlikely(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) { + u64 val; + + val = __vcpu_sys_reg(vcpu, HCR_EL2); + val &= (HCR_API | HCR_APK); + vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK); + vcpu->arch.hcr_el2 |= val; + } else { + vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK); + } + + /* + * Save the host keys if there is any chance for the guest + * to use pauth, as the entry code will reload the guest + * keys in that case. + * Protected mode is the exception to that rule, as the + * entry into the EL2 code eagerly switch back and forth + * between host and hyp keys (and kvm_hyp_ctxt is out of + * reach anyway). 
+ */ + if (is_protected_kvm_enabled()) + return; + + if (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)) { + struct kvm_cpu_context *ctxt; + ctxt = this_cpu_ptr_hyp_sym(kvm_hyp_ctxt); + ptrauth_save_keys(ctxt); + } + } +} + void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) { struct kvm_s2_mmu *mmu; @@ -500,8 +539,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) else vcpu_set_wfx_traps(vcpu); - if (vcpu_has_ptrauth(vcpu)) - vcpu_ptrauth_disable(vcpu); + vcpu_set_pauth_traps(vcpu); + kvm_arch_vcpu_load_debug_state_flags(vcpu); if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus)) diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 407bdfbb572b..b037f0a0e27e 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -217,14 +217,12 @@ static int handle_sve(struct kvm_vcpu *vcpu) * Two possibilities to handle a trapping ptrauth instruction: * * - Guest usage of a ptrauth instruction (which the guest EL1 did not - * turn into a NOP). If we get here, it is that we didn't fixup - * ptrauth on exit, and all that we can do is give the guest an - * UNDEF (as the guest isn't supposed to use ptrauth without being - * told it could). + * turn into a NOP). If we get here, it is because we didn't enable + * ptrauth for the guest. This results in an UNDEF, as it isn't + * supposed to use ptrauth without being told it could. * * - Running an L2 NV guest while L1 has left HCR_EL2.API==0, and for - * which we reinject the exception into L1. API==1 is handled as a - * fixup so the only way to get here is when API==0. + * which we reinject the exception into L1. * * Anything else is an emulation bug (hence the WARN_ON + UNDEF). */ diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index a0908d7a8f56..7c733decbe43 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -447,82 +448,6 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) return true; } -static inline bool esr_is_ptrauth_trap(u64 esr) -{ - switch (esr_sys64_to_sysreg(esr)) { - case SYS_APIAKEYLO_EL1: - case SYS_APIAKEYHI_EL1: - case SYS_APIBKEYLO_EL1: - case SYS_APIBKEYHI_EL1: - case SYS_APDAKEYLO_EL1: - case SYS_APDAKEYHI_EL1: - case SYS_APDBKEYLO_EL1: - case SYS_APDBKEYHI_EL1: - case SYS_APGAKEYLO_EL1: - case SYS_APGAKEYHI_EL1: - return true; - } - - return false; -} - -#define __ptrauth_save_key(ctxt, key) \ - do { \ - u64 __val; \ - __val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \ - ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val; \ - __val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \ - ctxt_sys_reg(ctxt, key ## KEYHI_EL1) = __val; \ -} while(0) - -DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt); - -static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code) -{ - struct kvm_cpu_context *ctxt; - u64 enable = 0; - - if (!vcpu_has_ptrauth(vcpu)) - return false; - - /* - * NV requires us to handle API and APK independently, just in - * case the hypervisor is totally nuts. Please barf >here<. 
- */ - if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) { - switch (ESR_ELx_EC(kvm_vcpu_get_esr(vcpu))) { - case ESR_ELx_EC_PAC: - if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_API)) - return false; - - enable |= HCR_API; - break; - - case ESR_ELx_EC_SYS64: - if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_APK)) - return false; - - enable |= HCR_APK; - break; - } - } else { - enable = HCR_API | HCR_APK; - } - - ctxt = this_cpu_ptr(&kvm_hyp_ctxt); - __ptrauth_save_key(ctxt, APIA); - __ptrauth_save_key(ctxt, APIB); - __ptrauth_save_key(ctxt, APDA); - __ptrauth_save_key(ctxt, APDB); - __ptrauth_save_key(ctxt, APGA); - - - vcpu->arch.hcr_el2 |= enable; - sysreg_clear_set(hcr_el2, 0, enable); - - return true; -} - static bool kvm_hyp_handle_cntpct(struct kvm_vcpu *vcpu) { struct arch_timer_context *ctxt; @@ -610,9 +535,6 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code) __vgic_v3_perform_cpuif_access(vcpu) == 1) return true; - if (esr_is_ptrauth_trap(kvm_vcpu_get_esr(vcpu))) - return kvm_hyp_handle_ptrauth(vcpu, exit_code); - if (kvm_hyp_handle_cntpct(vcpu)) return true; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 4103625e46c5..9dfe704bdb69 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -191,7 +191,6 @@ static const exit_handler_fn hyp_exit_handlers[] = { [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, - [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, }; @@ -203,7 +202,6 @@ static const exit_handler_fn pvm_exit_handlers[] = { [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, - [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, }; diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 8e1d98b691c1..f374bcdab4d4 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -41,9 +41,8 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector); * - TGE: we want the guest to use EL1, which is incompatible with * this bit being set * - * - API/APK: for hysterical raisins, we enable PAuth lazily, which - * means that the guest's bits cannot be directly applied (we really - * want to see the traps). Revisit this at some point. + * - API/APK: they are already accounted for by vcpu_load(), and can + * only take effect across a load/put cycle (such as ERET) */ #define NV_HCR_GUEST_EXCLUDE (HCR_TGE | HCR_API | HCR_APK) @@ -268,7 +267,6 @@ static const exit_handler_fn hyp_exit_handlers[] = { [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, - [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, [ESR_ELx_EC_ERET] = kvm_hyp_handle_eret, [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, };