From patchwork Thu Sep 26 03:22:39 2024
X-Patchwork-Submitter: Shaoqin Huang
X-Patchwork-Id: 13812774
From: Shaoqin Huang
To: Oliver Upton, Marc Zyngier, kvmarm@lists.linux.dev
Cc: Eric Auger, Sebastian Ott, Cornelia Huck, Shaoqin Huang,
    James Morse, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon, Fuad Tabba, Mark Brown, Joey Gouly,
    Kristina Martsenko, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 1/2] KVM: arm64: Use kvm_has_feat() to check if FEAT_RAS is advertised to the guest
Date: Wed, 25 Sep 2024 23:22:39 -0400
Message-Id: <20240926032244.3666579-2-shahuang@redhat.com>
In-Reply-To: <20240926032244.3666579-1-shahuang@redhat.com>
References: <20240926032244.3666579-1-shahuang@redhat.com>

Use kvm_has_feat() to check whether FEAT_RAS is advertised to the guest
rather than checking the host capability. This is useful when FEAT_RAS
is made writable, as the guest's view of the feature may then differ
from the host's.

Signed-off-by: Shaoqin Huang
---
 arch/arm64/kvm/guest.c                     | 4 ++--
 arch/arm64/kvm/handle_exit.c               | 2 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h    | 2 +-
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 7 +++++--
 arch/arm64/kvm/sys_regs.c                  | 2 +-
 5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 11098eb7eb44..938e3cd05d1e 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -819,7 +819,7 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events)
 {
 	events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE);
-	events->exception.serror_has_esr = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
+	events->exception.serror_has_esr = kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, RAS, IMP);
 
 	if (events->exception.serror_pending && events->exception.serror_has_esr)
 		events->exception.serror_esr = vcpu_get_vsesr(vcpu);
@@ -841,7 +841,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 	bool ext_dabt_pending = events->exception.ext_dabt_pending;
 
 	if (serror_pending && has_esr) {
-		if (!cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
+		if (!kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, RAS, IMP))
 			return -EINVAL;
 
 		if (!((events->exception.serror_esr) & ~ESR_ELx_ISS_MASK))
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index d7c2990e7c9e..99f256629ead 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -405,7 +405,7 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
 {
 	if (ARM_SERROR_PENDING(exception_index)) {
-		if (this_cpu_has_cap(ARM64_HAS_RAS_EXTN)) {
+		if (kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
 			u64 disr = kvm_vcpu_get_disr(vcpu);
 
 			kvm_handle_guest_serror(vcpu, disr_to_esr(disr));
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 37ff87d782b6..bf176a3cc594 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -272,7 +272,7 @@ static inline void ___activate_traps(struct kvm_vcpu *vcpu, u64 hcr)
 
 	write_sysreg(hcr, hcr_el2);
 
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE))
+	if (kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, RAS, IMP) && (hcr & HCR_VSE))
 		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
 }
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index 4c0fdabaf8ae..98526556d4e5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -105,6 +105,8 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
 
 static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
 {
+	struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt);
+
 	ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
 	/*
 	 * Guest PSTATE gets saved at guest fixup time in all
@@ -113,7 +115,7 @@ static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
 	if (!has_vhe() && ctxt->__hyp_running_vcpu)
 		ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
 
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
+	if (kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, RAS, IMP))
 		ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
 }
 
@@ -220,6 +222,7 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx
 {
 	u64 pstate = to_hw_pstate(ctxt);
 	u64 mode = pstate & PSR_AA32_MODE_MASK;
+	struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt);
 
 	/*
 	 * Safety check to ensure we're setting the CPU up to enter the guest
@@ -238,7 +241,7 @@ static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctx
 	write_sysreg_el2(ctxt->regs.pc, SYS_ELR);
 	write_sysreg_el2(pstate, SYS_SPSR);
 
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
+	if (kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, RAS, IMP))
 		write_sysreg_s(ctxt_sys_reg(ctxt, DISR_EL1), SYS_VDISR_EL2);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 31e49da867ff..b09f8ba3525b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4513,7 +4513,7 @@ static void vcpu_set_hcr(struct kvm_vcpu *vcpu)
 	if (has_vhe() || has_hvhe())
 		vcpu->arch.hcr_el2 |= HCR_E2H;
 
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
+	if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
 		/* route synchronous external abort exceptions to EL2 */
 		vcpu->arch.hcr_el2 |= HCR_TEA;
 		/* trap error record accesses */
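
A note for context (not part of the patch): the sketch below illustrates the
semantic shift this series relies on. cpus_have_final_cap() answers "does the
host CPU implement FEAT_RAS?", while kvm_has_feat() answers "does this VM's
ID_AA64PFR0_EL1 advertise FEAT_RAS to the guest?"; the two can disagree once
the RAS field is writable. This is a standalone, hypothetical userspace model,
not kernel code, and none of the helpers or types below are kernel APIs:

/*
 * Standalone illustration only: models a host that implements FEAT_RAS
 * while userspace has written the RAS field of the VM's ID_AA64PFR0_EL1
 * copy to 0, so the host-wide check and the per-VM check disagree.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* RAS occupies bits [31:28] of ID_AA64PFR0_EL1. */
#define ID_AA64PFR0_EL1_RAS_SHIFT	28
#define ID_AA64PFR0_EL1_RAS_MASK	0xfULL

/* Hypothetical stand-in for a VM's sanitised ID-register state. */
struct vm_id_regs {
	uint64_t id_aa64pfr0_el1;
};

/* Host-wide view: does the hardware implement FEAT_RAS at all? */
static bool host_has_ras(uint64_t host_pfr0)
{
	return ((host_pfr0 >> ID_AA64PFR0_EL1_RAS_SHIFT) &
		ID_AA64PFR0_EL1_RAS_MASK) >= 1;
}

/* Per-VM view: is FEAT_RAS advertised to this particular guest? */
static bool vm_has_ras(const struct vm_id_regs *regs)
{
	return ((regs->id_aa64pfr0_el1 >> ID_AA64PFR0_EL1_RAS_SHIFT) &
		ID_AA64PFR0_EL1_RAS_MASK) >= 1;
}

int main(void)
{
	/* Host implements RAS; the VM's copy has the RAS field cleared. */
	uint64_t host_pfr0 = 1ULL << ID_AA64PFR0_EL1_RAS_SHIFT;
	struct vm_id_regs vm = { .id_aa64pfr0_el1 = 0 };

	printf("host has RAS: %d, guest sees RAS: %d\n",
	       host_has_ras(host_pfr0), vm_has_ras(&vm));
	return 0;
}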