From patchwork Tue Apr 8 10:52:19 2025
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Joey Gouly <joey.gouly@arm.com>,
    Suzuki K Poulose <suzuki.poulose@arm.com>,
    Oliver Upton <oliver.upton@linux.dev>,
    Zenghui Yu <yuzenghui@huawei.com>,
    Eric Auger <eric.auger@redhat.com>
Subject: [PATCH v2 11/17] KVM: arm64: nv: Handle VNCR_EL2 invalidation from MMU notifiers
Date: Tue, 8 Apr 2025 11:52:19 +0100
Message-Id: <20250408105225.4002637-12-maz@kernel.org>
In-Reply-To: <20250408105225.4002637-1-maz@kernel.org>
References: <20250408105225.4002637-1-maz@kernel.org>

During an invalidation triggered by an MMU notifier, we need to
make sure we can
drop the *host* mapping that would have been translated by the
stage-2 mapping being invalidated.

For the moment, the invalidation is pretty brutal, as we nuke the
full IPA range, and therefore any VNCR_EL2 mapping.

At some point, we'll be more lightweight, as the code is already
able to deal with something more targeted.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 75 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 06db2097eae58..c1572fe60555f 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -433,6 +433,30 @@ static unsigned int ttl_to_size(u8 ttl)
 	return max_size;
 }
 
+static u8 pgshift_level_to_ttl(u16 shift, u8 level)
+{
+	u8 ttl;
+
+	switch(shift) {
+	case 12:
+		ttl = TLBI_TTL_TG_4K;
+		break;
+	case 14:
+		ttl = TLBI_TTL_TG_16K;
+		break;
+	case 16:
+		ttl = TLBI_TTL_TG_64K;
+		break;
+	default:
+		BUG();
+	}
+
+	ttl <<= 2;
+	ttl |= level & 3;
+
+	return ttl;
+}
+
 /*
  * Compute the equivalent of the TTL field by parsing the shadow PT. The
  * granule size is extracted from the cached VTCR_EL2.TG0 while the level is
@@ -783,6 +807,53 @@ int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2)
 	return kvm_inject_nested_sync(vcpu, esr_el2);
 }
 
+static void invalidate_vncr(struct vncr_tlb *vt)
+{
+	vt->valid = false;
+	if (vt->cpu != -1)
+		clear_fixmap(vncr_fixmap(vt->cpu));
+}
+
+static void kvm_invalidate_vncr_ipa(struct kvm *kvm, u64 start, u64 end)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY))
+		return;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		struct vncr_tlb *vt = vcpu->arch.vncr_tlb;
+		u64 ipa_start, ipa_end, ipa_size;
+
+		/*
+		 * Careful here: We end-up here from an MMU notifier,
+		 * and this can race against a vcpu not being onlined
+		 * yet, without the pseudo-TLB being allocated.
+		 *
+		 * Skip those, as they obviously don't participate in
+		 * the invalidation at this stage.
+		 */
+		if (!vt)
+			continue;
+
+		if (!vt->valid)
+			continue;
+
+		ipa_size = ttl_to_size(pgshift_level_to_ttl(vt->wi.pgshift,
+							    vt->wr.level));
+		ipa_start = vt->wr.pa & (ipa_size - 1);
+		ipa_end = ipa_start + ipa_size;
+
+		if (ipa_end <= start || ipa_start >= end)
+			continue;
+
+		invalidate_vncr(vt);
+	}
+}
+
 void kvm_nested_s2_wp(struct kvm *kvm)
 {
 	int i;
@@ -795,6 +866,8 @@ void kvm_nested_s2_wp(struct kvm *kvm)
 		if (kvm_s2_mmu_valid(mmu))
 			kvm_stage2_wp_range(mmu, 0, kvm_phys_size(mmu));
 	}
+
+	kvm_invalidate_vncr_ipa(kvm, 0, BIT(kvm->arch.mmu.pgt->ia_bits));
 }
 
 void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
@@ -809,6 +882,8 @@ void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
 		if (kvm_s2_mmu_valid(mmu))
 			kvm_stage2_unmap_range(mmu, 0, kvm_phys_size(mmu), may_block);
 	}
+
+	kvm_invalidate_vncr_ipa(kvm, 0, BIT(kvm->arch.mmu.pgt->ia_bits));
 }
 
 void kvm_nested_s2_flush(struct kvm *kvm)
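
For readers following along, here is a minimal, self-contained user-space
sketch (plain C, not kernel code) of the kind of check kvm_invalidate_vncr_ipa()
performs: derive the size covered by a cached VNCR_EL2 translation from its
granule shift and walk level, take the block containing the cached output
address, and test it for overlap with the invalidated [start, end) IPA range.
The helpers level_to_size() and vncr_overlaps(), the simplified size
arithmetic and the align-down step are illustrative assumptions, not the
kernel's TLBI_TTL_* encoding or the exact code in the patch.

/*
 * Illustrative sketch only. level_to_size() and vncr_overlaps() are
 * made-up helpers approximating the ttl_to_size()/overlap logic above
 * for 4kB/16kB/64kB granules.
 */
#include <stdint.h>
#include <stdio.h>

/* Bytes covered by one block/page at @level (0..3) for a 2^@pgshift granule */
static uint64_t level_to_size(unsigned int pgshift, unsigned int level)
{
	/* Each level above the last multiplies the reach by the PTEs per table */
	unsigned int levels_below = 3 - level;

	return (uint64_t)1 << (pgshift + levels_below * (pgshift - 3));
}

/* Does the block containing @pa intersect the invalidated [start, end) range? */
static int vncr_overlaps(uint64_t pa, unsigned int pgshift, unsigned int level,
			 uint64_t start, uint64_t end)
{
	uint64_t size = level_to_size(pgshift, level);
	uint64_t ipa_start = pa & ~(size - 1);	/* align down to the block base */
	uint64_t ipa_end = ipa_start + size;

	return !(ipa_end <= start || ipa_start >= end);
}

int main(void)
{
	/* 4kB granule, level-3 page at IPA 0x80001000 vs. an overlapping range */
	printf("%d\n", vncr_overlaps(0x80001000ULL, 12, 3,
				     0x80000000ULL, 0x80200000ULL));
	/* Same page vs. a disjoint range: no invalidation needed */
	printf("%d\n", vncr_overlaps(0x80001000ULL, 12, 3,
				     0x90000000ULL, 0x90001000ULL));
	return 0;
}

In the patch itself the same decision is made per vcpu pseudo-TLB entry, and
any entry whose block intersects the notified range simply has its mapping
dropped via invalidate_vncr().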