From patchwork Fri Apr 14 17:29:16 2023
Subject: [PATCH v3 1/7] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:16 +0000
Message-ID: <20230414172922.812640-2-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Ricardo Koller, Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Currently, the core TLB flush functionality of __flush_tlb_range()
hardcodes vae1is (and variants) for the flush operation. In the
upcoming patches, the KVM code reuses this core algorithm with
ipas2e1is for range-based TLB invalidations based on the IPA. Hence,
extract the core flush functionality of __flush_tlb_range() into its
own macro that accepts an 'op' argument to pass any TLBI operation,
such that other callers (KVM) can benefit.

No functional changes intended.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++---------------
 1 file changed, 55 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 412a3b9a3c25d..4775378b6da1b 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
  */
 #define MAX_TLBI_OPS	PTRS_PER_PTE
 
+/* When the CPU does not support TLB range operations, flush the TLB
+ * entries one by one at the granularity of 'stride'. If the TLB
+ * range ops are supported, then:
+ *
+ * 1. If 'pages' is odd, flush the first page through non-range
+ *    operations;
+ *
+ * 2. For remaining pages: the minimum range granularity is decided
+ *    by 'scale', so multiple range TLBI operations may be required.
+ *    Start from scale = 0, flush the corresponding number of pages
+ *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
+ *    until no pages left.
+ *
+ * Note that certain ranges can be represented by either num = 31 and
+ * scale or num = 0 and scale + 1. The loop below favours the latter
+ * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
+ */
+#define __flush_tlb_range_op(op, start, pages, stride,			\
+				asid, tlb_level, tlbi_user) do {	\
+	int num = 0;							\
+	int scale = 0;							\
+	unsigned long addr;						\
+									\
+	while (pages > 0) {						\
+		if (!system_supports_tlb_range() ||			\
+		    pages % 2 == 1) {					\
+			addr = __TLBI_VADDR(start, asid);		\
+			__tlbi_level(op, addr, tlb_level);		\
+			if (tlbi_user)					\
+				__tlbi_user_level(op, addr, tlb_level);	\
+			start += stride;				\
+			pages -= stride >> PAGE_SHIFT;			\
+			continue;					\
+		}							\
+									\
+		num = __TLBI_RANGE_NUM(pages, scale);			\
+		if (num >= 0) {						\
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
+						  num, tlb_level);	\
+			__tlbi(r##op, addr);				\
+			if (tlbi_user)					\
+				__tlbi_user(r##op, addr);		\
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+			pages -= __TLBI_RANGE_PAGES(num, scale);	\
+		}							\
+		scale++;						\
+	}								\
+} while (0)
+
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
-	int num = 0;
-	int scale = 0;
-	unsigned long asid, addr, pages;
+	unsigned long asid, pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
@@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	asid = ASID(vma->vm_mm);
 
-	/*
-	 * When the CPU does not support TLB range operations, flush the TLB
-	 * entries one by one at the granularity of 'stride'. If the TLB
-	 * range ops are supported, then:
-	 *
-	 * 1. If 'pages' is odd, flush the first page through non-range
-	 *    operations;
-	 *
-	 * 2. For remaining pages: the minimum range granularity is decided
-	 *    by 'scale', so multiple range TLBI operations may be required.
-	 *    Start from scale = 0, flush the corresponding number of pages
-	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
-	 *    until no pages left.
-	 *
-	 * Note that certain ranges can be represented by either num = 31 and
-	 * scale or num = 0 and scale + 1. The loop below favours the latter
-	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
-	 */
-	while (pages > 0) {
-		if (!system_supports_tlb_range() ||
-		    pages % 2 == 1) {
-			addr = __TLBI_VADDR(start, asid);
-			if (last_level) {
-				__tlbi_level(vale1is, addr, tlb_level);
-				__tlbi_user_level(vale1is, addr, tlb_level);
-			} else {
-				__tlbi_level(vae1is, addr, tlb_level);
-				__tlbi_user_level(vae1is, addr, tlb_level);
-			}
-			start += stride;
-			pages -= stride >> PAGE_SHIFT;
-			continue;
-		}
-
-		num = __TLBI_RANGE_NUM(pages, scale);
-		if (num >= 0) {
-			addr = __TLBI_VADDR_RANGE(start, asid, scale,
-						  num, tlb_level);
-			if (last_level) {
-				__tlbi(rvale1is, addr);
-				__tlbi_user(rvale1is, addr);
-			} else {
-				__tlbi(rvae1is, addr);
-				__tlbi_user(rvae1is, addr);
-			}
-			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
-			pages -= __TLBI_RANGE_PAGES(num, scale);
-		}
-		scale++;
-	}
+	if (last_level)
+		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
+	else
+		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
+
 	dsb(ish);
 }
 
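[Editor's note on the patch above: a stand-alone model of the (num, scale)
walk that __flush_tlb_range_op() performs, for readers who want to trace
the decomposition. The two macros below mirror the kernel's
__TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES() as an assumption for illustration;
this is not kernel code, and the stride is assumed to be PAGE_SIZE.]

	#include <stdio.h>

	/* Assumed equivalents of __TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES() */
	#define RANGE_NUM(pages, scale) \
		((int)(((pages) >> (5 * (scale) + 1)) & 0x1f) - 1)
	#define RANGE_PAGES(num, scale) \
		((unsigned long)((num) + 1) << (5 * (scale) + 1))

	int main(void)
	{
		unsigned long pages = 512;	/* e.g. 2MiB of 4KiB pages */
		int scale = 0;

		while (pages > 0) {
			if (pages % 2 == 1) {	/* odd remainder: one single-page op */
				pages--;
				printf("single-page op\n");
				continue;
			}
			int num = RANGE_NUM(pages, scale);
			if (num >= 0) {	/* one range op covers (num+1)*2^(5*scale+1) pages */
				printf("range op: scale=%d num=%d (%lu pages)\n",
				       scale, num, RANGE_PAGES(num, scale));
				pages -= RANGE_PAGES(num, scale);
			}
			scale++;
		}
		return 0;
	}

For pages = 512, this emits a single range operation with scale = 1 and
num = 7, matching (num + 1) * 2^(5*scale+1) = 8 * 64 = 512.
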
From patchwork Fri Apr 14 17:29:17 2023
Subject: [PATCH v3 2/7] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:17 +0000
Message-ID: <20230414172922.812640-3-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a
range of stage-2 page-tables using IPA in one go. If the system
supports FEAT_TLBIRANGE, the following patches can conveniently
replace global TLBI operations such as vmalls12e1is in the map,
unmap, and dirty-logging paths with the range-based ripas2e1is.
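[Editor's note: a rough sketch of the intended host-side call pattern;
the actual call site is added in a later patch of this series, and the
variable names here are illustrative only.]

	/*
	 * 'start'/'end' bound the IPA range being invalidated;
	 * __kvm_tlb_flush_vmid_range() itself falls back to a full
	 * VMID flush when FEAT_TLBIRANGE is absent or the range spans
	 * MAX_TLBI_RANGE_PAGES or more.
	 */
	kvm_call_hyp(__kvm_tlb_flush_vmid_range, &kvm->arch.mmu, start, end);
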
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 39 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 35 +++++++++++++++++++++++++++
 4 files changed, 88 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544d..33352d9399e32 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -79,6 +79,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
@@ -225,6 +226,8 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+					phys_addr_t start, phys_addr_t end);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b0..81d30737dc7c9 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,16 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+	DECLARE_REG(phys_addr_t, end, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, end);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -315,6 +325,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f5896..d2504df9d38b6 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -109,6 +109,45 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, phys_addr_t end)
+{
+	struct tlb_inv_context cxt;
+	unsigned long pages, stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+	end = round_up(end, stride);
+	pages = (end - start) >> PAGE_SHIFT;
+
+	if (!system_supports_tlb_range() || pages >= MAX_TLBI_RANGE_PAGES) {
+		__kvm_tlb_flush_vmid(mmu);
+		return;
+	}
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/* See the comment below in __kvm_tlb_flush_vmid_ipa() */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e9..f34d6dd9e4674 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,41 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, phys_addr_t end)
+{
+	struct tlb_inv_context cxt;
+	unsigned long pages, stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+	end = round_up(end, stride);
+	pages = (end - start) >> PAGE_SHIFT;
+
+	if (!system_supports_tlb_range() || pages >= MAX_TLBI_RANGE_PAGES) {
+		__kvm_tlb_flush_vmid(mmu);
+		return;
+	}
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
 
From patchwork Fri Apr 14 17:29:18 2023
Subject: [PATCH v3 3/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:18 +0000
Message-ID: <20230414172922.812640-4-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>

Implement kvm_arch_flush_remote_tlbs_range() for arm64 to invalidate
the given range in the TLB.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/mmu.c              | 11 +++++++++++
 2 files changed, 14 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 17c215a2df7d7..075d3e6482e53 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1044,6 +1044,9 @@ struct kvm *kvm_arch_alloc_vm(void);
 #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
+
 static inline bool kvm_vm_is_protected(struct kvm *kvm)
 {
 	return false;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d0a0d3dca9316..e3673b4c10292 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -92,6 +92,17 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 	return 0;
 }
 
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
+{
+	phys_addr_t start, end;
+
+	start = start_gfn << PAGE_SHIFT;
+	end = (start_gfn + pages) << PAGE_SHIFT;
+
+	kvm_call_hyp(__kvm_tlb_flush_vmid_range, &kvm->arch.mmu, start, end);
+	return 0;
+}
+
 static bool kvm_is_device_pfn(unsigned long pfn)
 {
 	return !pfn_is_map_memory(pfn);
 
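[Editor's note on the patch above: a worked example of the gfn-to-IPA
conversion, with hypothetical values and 4KiB pages (PAGE_SHIFT == 12).]

	gfn_t start_gfn = 0x80000;	/* hypothetical guest frame number */
	u64 pages = 512;
	phys_addr_t start = start_gfn << PAGE_SHIFT;		/* 0x80000000 */
	phys_addr_t end = (start_gfn + pages) << PAGE_SHIFT;	/* 0x80200000, a 2MiB span */
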
From patchwork Fri Apr 14 17:29:19 2023
Subject: [PATCH v3 4/7] KVM: arm64: Flush only the memslot after write-protect
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:19 +0000
Message-ID: <20230414172922.812640-5-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>

After write-protecting the region, KVM currently invalidates all TLB
entries using kvm_flush_remote_tlbs(). Instead, scope the invalidation
to the targeted memslot. If supported, the architecture will use
range-based TLBI instructions to flush the memslot; otherwise, it
falls back to flushing all of the TLBs.
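[Editor's note: assuming the generic kvm_flush_remote_tlbs_memslot()
helper reduces to a range flush over the slot's gfn span, the effect on
arm64 is roughly the following sketch; this is an assumption for
illustration, not part of this patch.]

	/* memslot->base_gfn/npages describe the slot's gfn span */
	kvm_arch_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
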
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e3673b4c10292..2ea6eb4ea763e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -996,7 +996,7 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	write_lock(&kvm->mmu_lock);
 	stage2_wp_range(&kvm->arch.mmu, start, end);
 	write_unlock(&kvm->mmu_lock);
-	kvm_flush_remote_tlbs(kvm);
+	kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 /**
 
From patchwork Fri Apr 14 17:29:20 2023
Subject: [PATCH v3 5/7] KVM: arm64: Invalidate the table entries upon a range
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:20 +0000
Message-ID: <20230414172922.812640-6-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>

Currently, during operations such as a hugepage collapse, KVM flushes
the entire VM's context using the 'vmalls12e1is' TLBI operation. If
the VM is faulting on many hugepages (say, after dirty-logging), this
penalizes the guest: pages that were faulted in earlier lose their TLB
entries and must refill them.

Instead, call __kvm_tlb_flush_vmid_range() for table entries. If the
system supports it, only the required range will be flushed.
Otherwise, it falls back to the previous mechanism.

Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d2..b8f0dbd12f773 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -745,10 +745,13 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 	 * Perform the appropriate TLB invalidation based on the evicted pte
 	 * value (if any).
 	 */
-	if (kvm_pte_table(ctx->old, ctx->level))
-		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
-	else if (kvm_pte_valid(ctx->old))
+	if (kvm_pte_table(ctx->old, ctx->level)) {
+		u64 end = ctx->addr + kvm_granule_size(ctx->level);
+
+		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, ctx->addr, end);
+	} else if (kvm_pte_valid(ctx->old)) {
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+	}
 
 	if (stage2_pte_is_counted(ctx->old))
 		mm_ops->put_page(ctx->ptep);
 
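[Editor's note on the patch above: assuming a 4KiB translation granule,
a level-2 table entry covers 2MiB and a level-1 entry covers 1GiB, so
breaking a level-2 entry now flushes exactly that 2MiB window instead
of the whole VMID; illustrative values below.]

	/* e.g. ctx->level == 2 with a 4KiB granule: */
	u64 end = ctx->addr + kvm_granule_size(ctx->level);	/* ctx->addr + 2MiB */
	kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, ctx->addr, end);
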
From patchwork Fri Apr 14 17:29:21 2023
Subject: [PATCH v3 6/7] KVM: arm64: Add 'skip_flush' arg to stage2_put_pte()
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:21 +0000
Message-ID: <20230414172922.812640-7-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>

Add a 'skip_flush' argument to stage2_put_pte() to control the TLB
invalidations. This will be leveraged by the upcoming patch to defer
the individual PTE invalidations until the entire walk is finished.

No functional change intended.
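[Editor's note: the two call patterns this enables, taken from this
patch and the next one in the series.]

	/* Flush immediately, as every caller does after this patch: */
	stage2_put_pte(ctx, mmu, mm_ops, false);

	/* Defer: clear the PTE but skip the per-PTE TLBI; the caller
	 * issues one range-based invalidation after the walk instead
	 * (wired up in the next patch): */
	stage2_put_pte(ctx, mmu, mm_ops, true);
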
Signed-off-by: Raghavendra Rao Ananta
---
 arch/arm64/kvm/hyp/pgtable.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b8f0dbd12f773..3f136e35feb5e 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -772,7 +772,7 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
 }
 
 static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
-			   struct kvm_pgtable_mm_ops *mm_ops)
+			   struct kvm_pgtable_mm_ops *mm_ops, bool skip_flush)
 {
 	/*
 	 * Clear the existing PTE, and perform break-before-make with
@@ -780,7 +780,10 @@ static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s
 	 */
 	if (kvm_pte_valid(ctx->old)) {
 		kvm_clear_pte(ctx->ptep);
-		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+
+		if (!skip_flush)
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+				     ctx->addr, ctx->level);
 	}
 
 	mm_ops->put_page(ctx->ptep);
@@ -1015,7 +1018,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * block entry and rely on the remaining portions being faulted
 	 * back lazily.
 	 */
-	stage2_put_pte(ctx, mmu, mm_ops);
+	stage2_put_pte(ctx, mmu, mm_ops, false);
 
 	if (need_flush && mm_ops->dcache_clean_inval_poc)
 		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
 
From patchwork Fri Apr 14 17:29:22 2023
Subject: [PATCH v3 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
From: Raghavendra Rao Ananta
Date: Fri, 14 Apr 2023 17:29:22 +0000
Message-ID: <20230414172922.812640-8-rananta@google.com>
In-Reply-To: <20230414172922.812640-1-rananta@google.com>

The current implementation of the stage-2 unmap walker traverses the
given range and, as part of break-before-make, performs TLB
invalidations with a DSB for every PTE. Repeating this pattern over a
large range can become a performance bottleneck. Hence, if the system
supports FEAT_TLBIRANGE, defer the TLB invalidations until the entire
walk is finished, and then use range-based instructions to invalidate
the TLBs in one go. Condition this upon S2FWB in order to avoid
walking the page-table again to perform the CMOs after issuing the
TLBI.
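[Editor's note: the resulting unmap flow in outline; a sketch of the
change below, not a verbatim excerpt.]

	bool defer = system_supports_tlb_range() && stage2_has_fwb(pgt);

	/* The walk clears the PTEs; per-PTE TLBIs are skipped when deferring */
	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
	if (defer)
		kvm_call_hyp(__kvm_tlb_flush_vmid_range, pgt->mmu,
			     addr, addr + size);
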
Signed-off-by: Raghavendra Rao Ananta
Suggested-by: Oliver Upton
---
 arch/arm64/kvm/hyp/pgtable.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3f136e35feb5e..bcb748e3566c7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -987,10 +987,16 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return ret;
 }
 
+struct stage2_unmap_data {
+	struct kvm_pgtable *pgt;
+	bool skip_pte_tlbis;
+};
+
 static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 			       enum kvm_pgtable_walk_flags visit)
 {
-	struct kvm_pgtable *pgt = ctx->arg;
+	struct stage2_unmap_data *unmap_data = ctx->arg;
+	struct kvm_pgtable *pgt = unmap_data->pgt;
 	struct kvm_s2_mmu *mmu = pgt->mmu;
 	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
 	kvm_pte_t *childp = NULL;
@@ -1018,7 +1024,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	 * block entry and rely on the remaining portions being faulted
 	 * back lazily.
 	 */
-	stage2_put_pte(ctx, mmu, mm_ops, false);
+	stage2_put_pte(ctx, mmu, mm_ops, unmap_data->skip_pte_tlbis);
 
 	if (need_flush && mm_ops->dcache_clean_inval_poc)
 		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
@@ -1032,13 +1038,32 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
 
 int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+	int ret;
+	struct stage2_unmap_data unmap_data = {
+		.pgt = pgt,
+		/*
+		 * If FEAT_TLBIRANGE is implemented, defer the individual PTE
+		 * TLB invalidations until the entire walk is finished, and
+		 * then use the range-based TLBI instructions to do the
+		 * invalidations. Condition this upon S2FWB in order to avoid
+		 * a page-table walk again to perform the CMOs after TLBI.
+		 */
+		.skip_pte_tlbis = system_supports_tlb_range() &&
+					stage2_has_fwb(pgt),
+	};
 	struct kvm_pgtable_walker walker = {
 		.cb	= stage2_unmap_walker,
-		.arg	= pgt,
+		.arg	= &unmap_data,
 		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 	};
 
-	return kvm_pgtable_walk(pgt, addr, size, &walker);
+	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
+	if (unmap_data.skip_pte_tlbis)
+		/* Perform the deferred TLB invalidations */
+		kvm_call_hyp(__kvm_tlb_flush_vmid_range, pgt->mmu,
+				addr, addr + size);
+
+	return ret;
 }
 
 struct stage2_attr_data {