From patchwork Tue Jun 6 19:28:52 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269663
Date: Tue, 6 Jun 2023 19:28:52 +0000
Message-ID: <20230606192858.3600174-2-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Catalin Marinas

Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the upcoming
patches, the KVM code reuses this core algorithm with ipas2e1is for
range-based TLB invalidations based on the IPA. Hence, extract the core
flush functionality of __flush_tlb_range() into its own macro that
accepts an 'op' argument to pass any TLBI operation, such that other
callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..4775378b6da1b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS    PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,          \
+                asid, tlb_level, tlbi_user) do {                \
+    int num = 0;                                                \
+    int scale = 0;                                              \
+    unsigned long addr;                                         \
+                                                                \
+    while (pages > 0) {                                         \
+        if (!system_supports_tlb_range() ||                     \
+            pages % 2 == 1) {                                   \
+            addr = __TLBI_VADDR(start, asid);                   \
+            __tlbi_level(op, addr, tlb_level);                  \
+            if (tlbi_user)                                      \
+                __tlbi_user_level(op, addr, tlb_level);         \
+            start += stride;                                    \
+            pages -= stride >> PAGE_SHIFT;                      \
+            continue;                                           \
+        }                                                       \
+                                                                \
+        num = __TLBI_RANGE_NUM(pages, scale);                   \
+        if (num >= 0) {                                         \
+            addr = __TLBI_VADDR_RANGE(start, asid, scale,       \
+                          num, tlb_level);                      \
+            __tlbi(r##op, addr);                                \
+            if (tlbi_user)                                      \
+                __tlbi_user(r##op, addr);                       \
+            start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+            pages -= __TLBI_RANGE_PAGES(num, scale);            \
+        }                                                       \
+        scale++;                                                \
+    }                                                           \
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
                      unsigned long start, unsigned long end,
                      unsigned long stride, bool last_level,
                      int tlb_level)
 {
-    int num = 0;
-    int scale = 0;
-    unsigned long asid, addr, pages;
+    unsigned long asid, pages;
 
     start = round_down(start, stride);
     end = round_up(end, stride);
@@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
     dsb(ishst);
     asid = ASID(vma->vm_mm);
 
-    /*
-     * When the CPU does not support TLB range operations, flush the TLB
-     * entries one by one at the granularity of 'stride'. If the TLB
-     * range ops are supported, then:
-     *
-     * 1. If 'pages' is odd, flush the first page through non-range
-     *    operations;
-     *
-     * 2. For remaining pages: the minimum range granularity is decided
-     *    by 'scale', so multiple range TLBI operations may be required.
-     *    Start from scale = 0, flush the corresponding number of pages
-     *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-     *    until no pages left.
-     *
-     * Note that certain ranges can be represented by either num = 31 and
-     * scale or num = 0 and scale + 1. The loop below favours the latter
-     * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-     */
-    while (pages > 0) {
-        if (!system_supports_tlb_range() ||
-            pages % 2 == 1) {
-            addr = __TLBI_VADDR(start, asid);
-            if (last_level) {
-                __tlbi_level(vale1is, addr, tlb_level);
-                __tlbi_user_level(vale1is, addr, tlb_level);
-            } else {
-                __tlbi_level(vae1is, addr, tlb_level);
-                __tlbi_user_level(vae1is, addr, tlb_level);
-            }
-            start += stride;
-            pages -= stride >> PAGE_SHIFT;
-            continue;
-        }
-
-        num = __TLBI_RANGE_NUM(pages, scale);
-        if (num >= 0) {
-            addr = __TLBI_VADDR_RANGE(start, asid, scale,
-                          num, tlb_level);
-            if (last_level) {
-                __tlbi(rvale1is, addr);
-                __tlbi_user(rvale1is, addr);
-            } else {
-                __tlbi(rvae1is, addr);
-                __tlbi_user(rvae1is, addr);
-            }
-            start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-            pages -= __TLBI_RANGE_PAGES(num, scale);
-        }
-        scale++;
-    }
+    if (last_level)
+        __flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+    else
+        __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
     dsb(ish);
 }
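To make the scale/num decomposition above concrete, here is a small
stand-alone sketch of the same arithmetic (illustrative only, not part of
the patch; the helpers mirror what __TLBI_RANGE_NUM() and
__TLBI_RANGE_PAGES() encode in tlbflush.h at the time of this series, and
a stride of one 4KB page is assumed):

/* tlbi_range_demo.c -- user-space illustration, not kernel code. */
#include <stdio.h>

/* A range TLBI covers (num + 1) << (5 * scale + 1) pages. */
static long tlbi_range_pages(long num, int scale)
{
    return (num + 1L) << (5 * scale + 1);
}

static long tlbi_range_num(unsigned long pages, int scale)
{
    return (long)((pages >> (5 * scale + 1)) & 0x1f) - 1;
}

int main(void)
{
    unsigned long pages = 513;    /* 2MB + 4KB worth of 4KB pages */
    int scale = 0;

    while (pages > 0) {
        if (pages % 2 == 1) {     /* odd leftover: single-page op */
            pages -= 1;
            printf("single-page TLBI, %lu pages left\n", pages);
            continue;
        }

        long num = tlbi_range_num(pages, scale);
        if (num >= 0) {
            long covered = tlbi_range_pages(num, scale);
            printf("range TLBI: scale=%d num=%ld covers %ld pages\n",
                   scale, num, covered);
            pages -= covered;
        }
        scale++;
    }
    return 0;
}

For pages = 513 this prints one single-page invalidation followed by a
single range operation (scale = 1, num = 7) covering the remaining 512
pages, which is exactly the behaviour the macro preserves for the
existing vae1is/vale1is users.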
From patchwork Tue Jun 6 19:28:53 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269664
Date: Tue, 6 Jun 2023 19:28:53 +0000
Message-ID: <20230606192858.3600174-3-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 2/7] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a range
of stage-2 page-tables using IPA in one go. If the system supports
FEAT_TLBIRANGE, the following patches can then conveniently replace
global TLBI operations such as vmalls12e1is in the map, unmap, and
dirty-logging paths with ripas2e1is.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 28 ++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..60ed0880cc9d6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
     __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
     __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
     __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
+    __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
     __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
     __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
     __KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr,
@@ -225,6 +226,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
                      int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                    phys_addr_t start, unsigned long pages);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..a19a9299c8362 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,16 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
     __kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+    DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+    DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+    DECLARE_REG(unsigned long, pages, host_ctxt, 3);
+
+    __kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, pages);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
     DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -316,6 +326,7 @@ static const hcall_t host_hcall[] = {
     HANDLE_FUNC(__kvm_flush_vm_context),
     HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
     HANDLE_FUNC(__kvm_tlb_flush_vmid),
+    HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
     HANDLE_FUNC(__kvm_flush_cpu_context),
     HANDLE_FUNC(__kvm_timer_set_cntvoff),
     HANDLE_FUNC(__vgic_v3_read_vmcr),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 978179133f4b9..213b11952f641 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -130,6 +130,36 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
     __tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                phys_addr_t start, unsigned long pages)
+{
+    struct tlb_inv_context cxt;
+    unsigned long stride;
+
+    /*
+     * Since the range of addresses may not be mapped at
+     * the same level, assume the worst case as PAGE_SIZE
+     */
+    stride = PAGE_SIZE;
+    start = round_down(start, stride);
+
+    /* Switch to requested VMID */
+    __tlb_switch_to_guest(mmu, &cxt, false);
+
+    __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+    dsb(ish);
+    __tlbi(vmalle1is);
+    dsb(ish);
+    isb();
+
+    /* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+    if (icache_is_vpipt())
+        icache_inval_all_pou();
+
+    __tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
     struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..3ca3d38b7eb23 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,34 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
     __tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                phys_addr_t start, unsigned long pages)
+{
+    struct tlb_inv_context cxt;
+    unsigned long stride;
+
+    /*
+     * Since the range of addresses may not be mapped at
+     * the same level, assume the worst case as PAGE_SIZE
+     */
+    stride = PAGE_SIZE;
+    start = round_down(start, stride);
+
+    dsb(ishst);
+
+    /* Switch to requested VMID */
+    __tlb_switch_to_guest(mmu, &cxt);
+
+    __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+    dsb(ish);
+    __tlbi(vmalle1is);
+    dsb(ish);
+    isb();
+
+    __tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
     struct tlb_inv_context cxt;
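For reference, a host-side caller is expected to drive the new hypercall
roughly as in the fragment below (a sketch; the 2MB figures are made up
for illustration, and the next patch wraps exactly this call in
kvm_tlb_flush_vmid_range()). On nVHE the call traps to the
handle___kvm_tlb_flush_vmid_range() handler added above, while on VHE it
is a direct function call:

    /* Invalidate the stage-2 mappings backing one 2MB block of IPA
     * space for this VMID (mmu is the VM's struct kvm_s2_mmu *). */
    phys_addr_t start = 0x80000000;               /* example IPA   */
    unsigned long pages = SZ_2M >> PAGE_SHIFT;    /* 512 4KB pages */

    kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, start, pages);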
From patchwork Tue Jun 6 19:28:54 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269666
Date: Tue, 6 Jun 2023 19:28:54 +0000
Message-ID: <20230606192858.3600174-4-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 3/7] KVM: arm64: Define kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement the helper kvm_tlb_flush_vmid_range() that acts as a wrapper
for range-based TLB invalidations. For the given VMID, use the
range-based TLBI instructions to do the job, or fall back to
invalidating all the TLB entries.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_pgtable.h | 10 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 20 ++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 4cd6762bda805..1b12295a83595 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -682,4 +682,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte);
  * kvm_pgtable_prot format.
  */
 enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
+
+/**
+ * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries
+ *
+ * @mmu:	Stage-2 KVM MMU struct
+ * @addr:	The base Intermediate physical address from which to invalidate
+ * @size:	Size of the range from the base to invalidate
+ */
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t addr, size_t size);
 #endif	/* __ARM64_KVM_PGTABLE_H__ */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d2..df8ac14d9d3d4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -631,6 +631,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
     return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
 }
 
+void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+                phys_addr_t addr, size_t size)
+{
+    unsigned long pages, inval_pages;
+
+    if (!system_supports_tlb_range()) {
+        kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+        return;
+    }
+
+    pages = size >> PAGE_SHIFT;
+    while (pages > 0) {
+        inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
+        kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
+
+        addr += inval_pages << PAGE_SHIFT;
+        pages -= inval_pages;
+    }
+}
+
 #define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
 
 static int stage2_set_prot_attr(struct kvm_pgtable *pgt, enum kvm_pgtable_prot prot,
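As a rough sense of scale for the MAX_TLBI_RANGE_PAGES clamp above:
assuming 4KB pages and the current __TLBI_RANGE_PAGES() encoding, one
hypercall covers at most (31 + 1) << (5 * 3 + 1) = 2097152 pages (8GB).
A quick, stand-alone back-of-the-envelope check (illustrative numbers
only):

#include <stdio.h>

int main(void)
{
    unsigned long max_pages = 32UL << 16;        /* MAX_TLBI_RANGE_PAGES (assumed) */
    unsigned long pages = (20UL << 30) >> 12;    /* a 20GB range of 4KB pages */
    unsigned long calls = (pages + max_pages - 1) / max_pages;

    /* 5242880 pages -> 3 __kvm_tlb_flush_vmid_range() hypercalls */
    printf("%lu pages -> %lu hypercalls\n", pages, calls);
    return 0;
}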
From patchwork Tue Jun 6 19:28:55 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269665
Date: Tue, 6 Jun 2023 19:28:55 +0000
Message-ID: <20230606192858.3600174-5-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate
the given range in the TLB.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 arch/arm64/kvm/mmu.c              | 7 +++++++
 2 files changed, 10 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 81ab41b84f436..343fb530eea9c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1081,6 +1081,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
     return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d0a0d3dca9316..c3ec2141c3284 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -92,6 +92,13 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
     return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+    kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
+                start_gfn << PAGE_SHIFT, pages << PAGE_SHIFT);
+    return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
     return !pfn_is_map_memory(pfn);
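The gfn/page-count interface of kvm_arch_flush_remote_tlbs_range() maps
onto the IPA/byte-size interface of kvm_tlb_flush_vmid_range() by a pair
of shifts. A tiny worked example, assuming 4KB pages (the gfn and page
count below are purely hypothetical):

#include <stdio.h>

int main(void)
{
    unsigned long long start_gfn = 0x80000;    /* hypothetical gfn */
    unsigned long long pages = 512;
    int page_shift = 12;                       /* 4KB pages        */

    printf("IPA base: 0x%llx\n", start_gfn << page_shift);  /* 0x80000000 */
    printf("size:     0x%llx\n", pages << page_shift);      /* 0x200000   */
    return 0;
}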
From patchwork Tue Jun 6 19:28:56 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269669
Date: Tue, 6 Jun 2023 19:28:56 +0000
Message-ID: <20230606192858.3600174-6-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 5/7] KVM: arm64: Flush only the memslot after write-protect
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

After write-protecting the region, KVM currently invalidates all the
TLB entries using kvm_flush_remote_tlbs(). Instead, scope the
invalidation only to the targeted memslot. If supported, the
architecture uses the range-based TLBI instructions to flush the
memslot; otherwise it falls back to flushing all of the TLBs.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c3ec2141c3284..94f10e670c100 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -992,7 +992,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
     write_lock(&kvm->mmu_lock);
     stage2_wp_range(&kvm->arch.mmu, start, end);
     write_unlock(&kvm->mmu_lock);
-    kvm_flush_remote_tlbs(kvm);
+    kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 /**
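In terms of the helpers added earlier in this series, the memslot-scoped
flush boils down to something like the fragment below. This is a sketch:
the exact generic plumbing behind kvm_flush_remote_tlbs_memslot() is
simplified here and is an assumption, but base_gfn/npages are the real
struct kvm_memory_slot fields:

    /* Flush only the IPA range backing the write-protected memslot. */
    gfn_t start_gfn = memslot->base_gfn;
    u64 pages = memslot->npages;

    kvm_arch_flush_remote_tlbs_range(kvm, start_gfn, pages);
    /* ...which calls kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
     * start_gfn << PAGE_SHIFT, pages << PAGE_SHIFT) from patch 4, and
     * that either issues ripas2e1is range ops or falls back to a full
     * VMID invalidation. */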
From patchwork Tue Jun 6 19:28:57 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269667
Date: Tue, 6 Jun 2023 19:28:57 +0000
Message-ID: <20230606192858.3600174-7-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 6/7] KVM: arm64: Invalidate the table entries upon a range
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Currently, during operations such as a hugepage collapse, KVM flushes
the entire VM's context using the 'vmalls12e1is' TLBI operation. If the
VM is faulting on many hugepages (say, after dirty-logging), this
penalizes guest pages that were already faulted in earlier, as their
TLB entries are thrown away and have to be refilled.

Instead, leverage kvm_tlb_flush_vmid_range() for table entries. If the
system supports it, only the required range will be flushed; otherwise
it falls back to the previous mechanism.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df8ac14d9d3d4..50ef7623c54db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -766,7 +766,8 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
      * value (if any).
      */
     if (kvm_pte_table(ctx->old, ctx->level))
-        kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+        kvm_tlb_flush_vmid_range(mmu, ctx->addr,
+                    kvm_granule_size(ctx->level));
     else if (kvm_pte_valid(ctx->old))
         kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr,
                  ctx->level);
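The range handed to kvm_tlb_flush_vmid_range() here is the granule
covered by the table entry being collapsed. A small stand-alone sketch
of the sizes involved, assuming 4KB pages (the real helper is
kvm_granule_size() in kvm_pgtable.h; the formula below is an assumption
of this sketch):

#include <stdio.h>

/* Bytes covered by a stage-2 entry at 'level' with 4KB pages. */
static unsigned long long granule_size(int level)
{
    return 1ULL << ((12 - 3) * (4 - level) + 3);
}

int main(void)
{
    /* Collapsing a level-2 table into a block now flushes only 2MB
     * (512 pages) instead of the whole VMID via vmalls12e1is. */
    printf("level 1: %llu bytes\n", granule_size(1));    /* 1GB */
    printf("level 2: %llu bytes\n", granule_size(2));    /* 2MB */
    printf("level 3: %llu bytes\n", granule_size(3));    /* 4KB */
    return 0;
}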
From patchwork Tue Jun 6 19:28:58 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13269668
Date: Tue, 6 Jun 2023 19:28:58 +0000
Message-ID: <20230606192858.3600174-8-rananta@google.com>
In-Reply-To: <20230606192858.3600174-1-rananta@google.com>
Subject: [PATCH v5 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The current implementation of the stage-2 unmap walker traverses the
given range and, as part of break-before-make, performs a TLB
invalidation with a DSB for every PTE. Repeating this combination for a
large number of PTEs can become a performance bottleneck on some
systems. Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
invalidations until the entire walk is finished, and then use
range-based instructions to invalidate the TLBs in one go.

Condition deferred TLB invalidation on the system supporting FWB, as
the optimization is entirely pointless when the unmap walker needs to
perform CMOs.

Rename stage2_put_pte() to stage2_unmap_put_pte(), as the function now
serves the stage-2 unmap walker specifically rather than being a
generic helper.
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 67 +++++++++++++++++++++++++++++++-----
 1 file changed, 58 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 50ef7623c54db..c6e080867919d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -789,16 +789,54 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
     smp_store_release(ctx->ptep, new);
 }
 
-static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
-               struct kvm_pgtable_mm_ops *mm_ops)
+struct stage2_unmap_data {
+    struct kvm_pgtable *pgt;
+    bool defer_tlb_flush_init;
+};
+
+static bool __stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
+{
+    /*
+     * If FEAT_TLBIRANGE is implemented, defer the individual
+     * TLB invalidations until the entire walk is finished, and
+     * then use the range-based TLBI instructions to do the
+     * invalidations. Condition deferred TLB invalidation on the
+     * system supporting FWB, as the optimization is entirely
+     * pointless when the unmap walker needs to perform CMOs.
+     */
+    return system_supports_tlb_range() && stage2_has_fwb(pgt);
+}
+
+static bool stage2_unmap_defer_tlb_flush(struct stage2_unmap_data *unmap_data)
+{
+    bool defer_tlb_flush = __stage2_unmap_defer_tlb_flush(unmap_data->pgt);
+
+    /*
+     * Since __stage2_unmap_defer_tlb_flush() is based on alternative
+     * patching and the TLBIs' operations behavior depend on this,
+     * track if there's any change in the state during the unmap sequence.
+     */
+    WARN_ON(unmap_data->defer_tlb_flush_init != defer_tlb_flush);
+    return defer_tlb_flush;
+}
+
+static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
+                struct kvm_s2_mmu *mmu,
+                struct kvm_pgtable_mm_ops *mm_ops)
 {
+    struct stage2_unmap_data *unmap_data = ctx->arg;
+
     /*
-     * Clear the existing PTE, and perform break-before-make with
-     * TLB maintenance if it was valid.
+     * Clear the existing PTE, and perform break-before-make if it was
+     * valid. Depending on the system support, the TLB maintenance for
+     * the same can be deferred until the entire unmap is completed.
      */
     if (kvm_pte_valid(ctx->old)) {
         kvm_clear_pte(ctx->ptep);
-        kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+
+        if (!stage2_unmap_defer_tlb_flush(unmap_data))
+            kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+                    ctx->addr, ctx->level);
     }
 
     mm_ops->put_page(ctx->ptep);
@@ -1005,7 +1043,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
                    enum kvm_pgtable_walk_flags visit)
 {
-    struct kvm_pgtable *pgt = ctx->arg;
+    struct stage2_unmap_data *unmap_data = ctx->arg;
+    struct kvm_pgtable *pgt = unmap_data->pgt;
     struct kvm_s2_mmu *mmu = pgt->mmu;
     struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
     kvm_pte_t *childp = NULL;
@@ -1033,7 +1072,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
     * block entry and rely on the remaining portions being faulted
     * back lazily.
     */
-    stage2_put_pte(ctx, mmu, mm_ops);
+    stage2_unmap_put_pte(ctx, mmu, mm_ops);
 
     if (need_flush && mm_ops->dcache_clean_inval_poc)
         mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1047,13 +1086,23 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+    int ret;
+    struct stage2_unmap_data unmap_data = {
+        .pgt = pgt,
+        .defer_tlb_flush_init = __stage2_unmap_defer_tlb_flush(pgt),
+    };
     struct kvm_pgtable_walker walker = {
         .cb    = stage2_unmap_walker,
-        .arg    = pgt,
+        .arg    = &unmap_data,
         .flags    = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
     };
 
-    return kvm_pgtable_walk(pgt, addr, size, &walker);
+    ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+    if (stage2_unmap_defer_tlb_flush(&unmap_data))
+        /* Perform the deferred TLB invalidations */
+        kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
+
+    return ret;
 }
 
 struct stage2_attr_data {
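To summarize when the deferral in this patch actually kicks in, the gate
combines two system capabilities. The sketch below mirrors
__stage2_unmap_defer_tlb_flush(); the table is an interpretation of the
patch, not text from it:

/*
 *   FEAT_TLBIRANGE | FEAT_S2FWB | unmap walker behaviour
 *   ---------------+------------+-----------------------------------------
 *   yes            | yes        | defer: one kvm_tlb_flush_vmid_range()
 *                  |            | after the walk
 *   yes            | no         | per-PTE TLBI (walker must do CMOs anyway)
 *   no             | yes or no  | per-PTE TLBI, as before this patch
 */
static bool defer_tlb_flush(bool has_tlbirange, bool has_fwb)
{
    return has_tlbirange && has_fwb;
}

For a 1GB unmap with 4KB pages, deferral replaces 262144 individual
TLBI+DSB pairs with a single range flush issued after the walk.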