From patchwork Mon Nov 20 13:10:24 2023
X-Patchwork-Submitter: Marc Zyngier <maz@kernel.org>
X-Patchwork-Id: 13461283
From: Marc Zyngier <maz@kernel.org>
To: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Cc: Alexandru Elisei <alexandru.elisei@arm.com>,
	Andre Przywara <andre.przywara@arm.com>,
	Chase Conklin <chase.conklin@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
	Darren Hart <darren@os.amperecomputing.com>,
	Jintack Lim <jintack@cs.columbia.edu>,
	Russell King <rmk+kernel@armlinux.org.uk>,
	Miguel Luis <miguel.luis@oracle.com>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Zenghui Yu <yuzenghui@huawei.com>
Subject: [PATCH v11 40/43] KVM: arm64: nv: Fast-track EL1 TLBIs for VHE guests
Date: Mon, 20 Nov 2023 13:10:24 +0000
Message-Id: <20231120131027.854038-41-maz@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231120131027.854038-1-maz@kernel.org>
References: <20231120131027.854038-1-maz@kernel.org>

Due to the way ARMv8.4-NV suppresses traps when accessing EL2 system
registers, we can't track when the guest changes its HCR_EL2.TGE
setting. This means we always trap EL1 TLBIs, even if they don't
affect any guest.

This obviously has a huge impact on performance, as we handle TLBI
traps as a normal exit, and a normal VHE host issues thousands of
TLBIs when booting (and quite a few when running userspace).

A cheap way to reduce the overhead is to handle the limited case of
{E2H,TGE}=={1,1} as a guest fixup, as we already have the right mmu
configuration in place. Just execute the decoded instruction right
away and return to the guest.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/vhe/switch.c | 44 ++++++++++++++++++++++++++++++++-
 arch/arm64/kvm/hyp/vhe/tlb.c    |  6 +++--
 arch/arm64/kvm/sys_regs.c       | 12 ---------
 3 files changed, 47 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 85db519ea811..360328aaaf7c 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -224,6 +224,48 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
 	__vcpu_put_switch_sysregs(vcpu);
 }
 
+static bool kvm_hyp_handle_tlbi_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	u32 instr;
+	u64 val;
+
+	/*
+	 * Ideally, we would never trap on EL1 TLB invalidations when the
+	 * guest's HCR_EL2.{E2H,TGE} == {1,1}. But "thanks" to ARMv8.4, we
+	 * don't trap writes to HCR_EL2, meaning that we can't track
+	 * changes to the virtual TGE bit. So we leave HCR_EL2.TTLB set on
+	 * the host. Oopsie...
+	 *
+	 * In order to speed-up EL1 TLBIs from the vEL2 guest when TGE is
+	 * set, try and handle these invalidation as quickly as possible,
+	 * without fully exiting. Note that we don't need to consider
+	 * any forwarding here, as having E2H+TGE set is the very definition
+	 * of being InHost.
+	 */
+	if (!vcpu_has_nv(vcpu) || !vcpu_is_el2(vcpu) ||
+	    !(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+		return false;
+
+	instr = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
+	if (sys_reg_Op0(instr) != TLBI_Op0 ||
+	    sys_reg_Op1(instr) != TLBI_Op1_EL1)
+		return false;
+
+	val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
+	__kvm_tlb_el1_instr(NULL, val, instr);
+	__kvm_skip_instr(vcpu);
+
+	return true;
+}
+
+static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	if (kvm_hyp_handle_tlbi_el1(vcpu, exit_code))
+		return true;
+
+	return kvm_hyp_handle_sysreg(vcpu, exit_code);
+}
+
 static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	u64 spsr, mode;
@@ -270,7 +312,7 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
-	[ESR_ELx_EC_SYS64]		= kvm_hyp_handle_sysreg,
+	[ESR_ELx_EC_SYS64]		= kvm_hyp_handle_sysreg_vhe,
 	[ESR_ELx_EC_SVE]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_FP_ASIMD]		= kvm_hyp_handle_fpsimd,
 	[ESR_ELx_EC_IABT_LOW]		= kvm_hyp_handle_iabt_low,
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 737ea0591b54..bf7ab30522e9 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -271,7 +271,8 @@ void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding)
 	dsb(ishst);
 
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	if (mmu)
+		__tlb_switch_to_guest(mmu, &cxt);
 
 	/*
 	 * Execute the same instruction as the guest hypervisor did,
@@ -310,5 +311,6 @@ void __kvm_tlb_el1_instr(struct kvm_s2_mmu *mmu, u64 val, u64 sys_encoding)
 	dsb(ish);
 	isb();
 
-	__tlb_switch_to_host(&cxt);
+	if (mmu)
+		__tlb_switch_to_host(&cxt);
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9a82f42b45ed..e53bc33a23cc 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3283,18 +3283,6 @@ static bool handle_tlbi_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 
 	WARN_ON(!vcpu_is_el2(vcpu));
 
-	if ((__vcpu_sys_reg(vcpu, HCR_EL2) & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
-		mutex_lock(&vcpu->kvm->lock);
-		/*
-		 * ARMv8.4-NV allows the guest to change TGE behind
-		 * our back, so we always trap EL1 TLBIs from vEL2...
-		 */
-		__kvm_tlb_el1_instr(&vcpu->kvm->arch.mmu, p->regval, sys_encoding);
-		mutex_unlock(&vcpu->kvm->lock);
-
-		return true;
-	}
-
 	kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
 				   &(union tlbi_info) {
 					   .va = {
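
[Editorial sketch, not part of the patch] For readers tracing the fast path
above: the standalone C sketch below illustrates the Op0/Op1 filter that
kvm_hyp_handle_tlbi_el1() applies before handling the trapped TLBI in place.
The field shifts assume the ESR_ELx ISS layout for trapped MSR/MRS/SYS
accesses, and the SKETCH_TLBI_* constants are values chosen here purely for
illustration; the patch itself relies on the kernel's esr_sys64_to_sysreg(),
TLBI_Op0 and TLBI_Op1_EL1 definitions.

/*
 * Minimal, self-contained illustration of the "is this an EL1 TLBI?"
 * check used by the fast path. Assumptions: ESR_ELx ISS layout for
 * trapped sys instructions (Op0 in bits [21:20], Op1 in bits [16:14]),
 * and that EL1 TLB maintenance ops (e.g. TLBI VAE1IS) encode as
 * Op0 == 1, Op1 == 0.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ISS_OP0(esr)		(((esr) >> 20) & 0x3)
#define ISS_OP1(esr)		(((esr) >> 14) & 0x7)

#define SKETCH_TLBI_OP0		1	/* system instruction space */
#define SKETCH_TLBI_OP1_EL1	0	/* EL1 TLB maintenance ops */

static bool is_el1_tlbi(uint64_t esr)
{
	return ISS_OP0(esr) == SKETCH_TLBI_OP0 &&
	       ISS_OP1(esr) == SKETCH_TLBI_OP1_EL1;
}

int main(void)
{
	/* ISS for "TLBI VAE1IS, x0": Op0=1, Op2=1, Op1=0, CRn=8, CRm=3, Rt=0 */
	uint64_t iss = (1ULL << 20) | (1ULL << 17) | (0ULL << 14) |
		       (8ULL << 10) | (0ULL << 5) | (3ULL << 1);

	printf("fast-path eligible: %s\n", is_el1_tlbi(iss) ? "yes" : "no");
	return 0;
}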