From patchwork Tue Jun 6 19:28:53 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269675
Date: Tue, 6 Jun 2023 19:28:53 +0000
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
References: <20230606192858.3600174-1-rananta@google.com>
Message-ID: <20230606192858.3600174-3-rananta@google.com>
Subject: [PATCH v5 2/7] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a range
of stage-2 page-tables using IPA in one go. If the system supports
FEAT_TLBIRANGE, the following patches can conveniently replace global
TLBIs such as vmalls12e1is in the map, unmap, and dirty-logging paths
with ripas2e1is instead.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 28 ++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..60ed0880cc9d6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr,
@@ -225,6 +226,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+					phys_addr_t start, unsigned long pages);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..a19a9299c8362 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,16 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+	DECLARE_REG(unsigned long, pages, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, pages);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -316,6 +326,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
 	HANDLE_FUNC(__vgic_v3_read_vmcr),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 978179133f4b9..213b11952f641 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -130,6 +130,36 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, unsigned long pages)
+{
+	struct tlb_inv_context cxt;
+	unsigned long stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt, false);
+
+	__flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..3ca3d38b7eb23 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,34 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, unsigned long pages)
+{
+	struct tlb_inv_context cxt;
+	unsigned long stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
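
Not part of the patch above: a rough, hypothetical sketch of how a host-side
caller in a later patch of this series might drive the new hypercall, assuming
the usual kvm_call_hyp() dispatch from <asm/kvm_host.h> and the declaration
added to <asm/kvm_asm.h> here. The wrapper name example_flush_s2_range() and
the EXAMPLE_MAX_PAGES clamp are illustrative only and are not taken from this
series.

/*
 * Hypothetical caller sketch (not part of this patch). Splits a large
 * stage-2 range into bounded chunks so a single HVC never has to
 * invalidate an unbounded number of pages.
 */
#define EXAMPLE_MAX_PAGES	512UL	/* illustrative clamp, not from the series */

static void example_flush_s2_range(struct kvm_s2_mmu *mmu,
				   phys_addr_t addr, unsigned long pages)
{
	unsigned long chunk;

	while (pages) {
		chunk = min(pages, EXAMPLE_MAX_PAGES);

		/* Range-based stage-2 invalidation of 'chunk' pages at 'addr' */
		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, chunk);

		addr += chunk << PAGE_SHIFT;
		pages -= chunk;
	}
}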