From patchwork Fri Apr 26 10:49:46 2024
X-Patchwork-Submitter: Sebastian Ott
X-Patchwork-Id: 13644620
From: Sebastian Ott
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
	Catalin Marinas, Will Deacon
Subject: [PATCH v2 2/6] KVM: arm64: unify trap setup code
Date: Fri, 26 Apr 2024 12:49:46 +0200
Message-ID: <20240426104950.7382-3-sebott@redhat.com>
In-Reply-To: <20240426104950.7382-1-sebott@redhat.com>
References: <20240426104950.7382-1-sebott@redhat.com>
MIME-Version: 1.0

There are 2 functions to set up traps via HCR_EL2:
* kvm_init_sysreg() called via KVM_RUN (before the 1st run or when
  the pid changes)
* vcpu_reset_hcr() called via KVM_ARM_VCPU_INIT

To unify these 2 and to support traps that are dependent on the
ID register configuration, move vcpu_reset_hcr() to sys_regs.c
and call it via
kvm_init_sysreg(). While at it rename kvm_init_sysreg() to
kvm_setup_traps() to better reflect what it's doing.

Signed-off-by: Sebastian Ott
---
 arch/arm64/include/asm/kvm_emulate.h | 37 -----------------------
 arch/arm64/include/asm/kvm_host.h    |  2 +-
 arch/arm64/kvm/arm.c                 |  3 +-
 arch/arm64/kvm/sys_regs.c            | 44 ++++++++++++++++++++++++++--
 4 files changed, 44 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 975af30af31f..9e71fcbb033d 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -67,43 +67,6 @@ static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 }
 #endif
 
-static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
-	if (has_vhe() || has_hvhe())
-		vcpu->arch.hcr_el2 |= HCR_E2H;
-	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
-		/* route synchronous external abort exceptions to EL2 */
-		vcpu->arch.hcr_el2 |= HCR_TEA;
-		/* trap error record accesses */
-		vcpu->arch.hcr_el2 |= HCR_TERR;
-	}
-
-	if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) {
-		vcpu->arch.hcr_el2 |= HCR_FWB;
-	} else {
-		/*
-		 * For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C
-		 * get set in SCTLR_EL1 such that we can detect when the guest
-		 * MMU gets turned on and do the necessary cache maintenance
-		 * then.
-		 */
-		vcpu->arch.hcr_el2 |= HCR_TVM;
-	}
-
-	if (cpus_have_final_cap(ARM64_HAS_EVT) &&
-	    !cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
-		vcpu->arch.hcr_el2 |= HCR_TID4;
-	else
-		vcpu->arch.hcr_el2 |= HCR_TID2;
-
-	if (vcpu_el1_is_32bit(vcpu))
-		vcpu->arch.hcr_el2 &= ~HCR_RW;
-
-	if (kvm_has_mte(vcpu->kvm))
-		vcpu->arch.hcr_el2 |= HCR_ATA;
-}
-
 static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
 {
 	return (unsigned long *)&vcpu->arch.hcr_el2;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9e8a496fb284..696acba883c1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1115,7 +1115,7 @@ int __init populate_nv_trap_config(void);
 bool lock_all_vcpus(struct kvm *kvm);
 void unlock_all_vcpus(struct kvm *kvm);
 
-void kvm_init_sysreg(struct kvm_vcpu *);
+void kvm_setup_traps(struct kvm_vcpu *);
 
 /* MMIO helpers */
 void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c4a0a35e02c7..d6c27d8a8f2f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -683,7 +683,7 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 	 * This needs to happen after NV has imposed its own restrictions on
 	 * the feature set
 	 */
-	kvm_init_sysreg(vcpu);
+	kvm_setup_traps(vcpu);
 
 	ret = kvm_timer_enable(vcpu);
 	if (ret)
@@ -1438,7 +1438,6 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 		icache_inval_all_pou();
 	}
 
-	vcpu_reset_hcr(vcpu);
 	vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
 
 	/*
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 131f5b0ca2b9..ac366d0b614a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4020,11 +4020,43 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *
 	return 0;
 }
 
-void kvm_init_sysreg(struct kvm_vcpu *vcpu)
+static void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
 
-	mutex_lock(&kvm->arch.config_lock);
+	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+	if (has_vhe() || has_hvhe())
+		vcpu->arch.hcr_el2 |= HCR_E2H;
+	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
+		/* route synchronous external abort exceptions to EL2 */
+		vcpu->arch.hcr_el2 |= HCR_TEA;
+		/* trap error record accesses */
+		vcpu->arch.hcr_el2 |= HCR_TERR;
+	}
+
+	if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) {
+		vcpu->arch.hcr_el2 |= HCR_FWB;
+	} else {
+		/*
+		 * For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C
+		 * get set in SCTLR_EL1 such that we can detect when the guest
+		 * MMU gets turned on and do the necessary cache maintenance
+		 * then.
+		 */
+		vcpu->arch.hcr_el2 |= HCR_TVM;
+	}
+
+	if (cpus_have_final_cap(ARM64_HAS_EVT) &&
+	    !cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
+		vcpu->arch.hcr_el2 |= HCR_TID4;
+	else
+		vcpu->arch.hcr_el2 |= HCR_TID2;
+
+	if (vcpu_el1_is_32bit(vcpu))
+		vcpu->arch.hcr_el2 &= ~HCR_RW;
+
+	if (kvm_has_mte(vcpu->kvm))
+		vcpu->arch.hcr_el2 |= HCR_ATA;
 
 	/*
 	 * In the absence of FGT, we cannot independently trap TLBI
@@ -4033,6 +4065,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
 	 */
 	if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
 		vcpu->arch.hcr_el2 |= HCR_TTLBOS;
+}
+
+void kvm_setup_traps(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = vcpu->kvm;
+
+	mutex_lock(&kvm->arch.config_lock);
+	vcpu_reset_hcr(vcpu);
 
 	if (cpus_have_final_cap(ARM64_HAS_HCX)) {
 		vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;