From patchwork Tue Aug 8 23:13:25 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13347207
Date: Tue, 8 Aug 2023 23:13:25 +0000
In-Reply-To: <20230808231330.3855936-1-rananta@google.com>
References: <20230808231330.3855936-1-rananta@google.com>
Message-ID: <20230808231330.3855936-10-rananta@google.com>
Subject: [PATCH v8 09/14] KVM: arm64: Implement __kvm_tlb_flush_vmid_range()
From: Raghavendra Rao Ananta
To: Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu,
    Anup Patel, Atish Patra, Jing Zhang, Reiji Watanabe, Colton Lewis,
    Raghavendra Rao Anata,
    David Matlack, Fuad Tabba,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, Gavin Shan
List-ID: kvm@vger.kernel.org

Define __kvm_tlb_flush_vmid_range() (for VHE and nVHE) to flush a
range of stage-2 page-tables using IPA in one go. If the system
supports FEAT_TLBIRANGE, the following patches will conveniently
replace global TLBIs such as vmalls12e1is with ripas2e1is in the
map, unmap, and dirty-logging paths.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Gavin Shan
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_asm.h   |  3 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 30 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/tlb.c       | 28 ++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7d170aaa2db41..2c27cb8cf442d 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa_nsh,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
 	__KVM_HOST_SMCCC_FUNC___vgic_v3_read_vmcr,
@@ -229,6 +230,8 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
 extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
 					 phys_addr_t ipa, int level);
+extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+					phys_addr_t start, unsigned long pages);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index a169c619db60b..857d9bc04fd48 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -135,6 +135,16 @@ static void handle___kvm_tlb_flush_vmid_ipa_nsh(struct kvm_cpu_context *host_ctx
 	__kvm_tlb_flush_vmid_ipa_nsh(kern_hyp_va(mmu), ipa, level);
 }
 
+static void
+handle___kvm_tlb_flush_vmid_range(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, start, host_ctxt, 2);
+	DECLARE_REG(unsigned long, pages, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_range(kern_hyp_va(mmu), start, pages);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -327,6 +337,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa_nsh),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
 	HANDLE_FUNC(__vgic_v3_read_vmcr),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index b9991bbd8e3fd..1b265713d6bed 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -182,6 +182,36 @@ void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, unsigned long pages)
+{
+	struct tlb_inv_context cxt;
+	unsigned long stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt, false);
+
+	__flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	/* See the comment in __kvm_tlb_flush_vmid_ipa() */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index e69da550cdc5b..46bd43f61d76f 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -143,6 +143,34 @@ void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
+				phys_addr_t start, unsigned long pages)
+{
+	struct tlb_inv_context cxt;
+	unsigned long stride;
+
+	/*
+	 * Since the range of addresses may not be mapped at
+	 * the same level, assume the worst case as PAGE_SIZE
+	 */
+	stride = PAGE_SIZE;
+	start = round_down(start, stride);
+
+	dsb(ishst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	__flush_s2_tlb_range_op(ipas2e1is, start, pages, stride, 0);
+
+	dsb(ish);
+	__tlbi(vmalle1is);
+	dsb(ish);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
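
As a side note (not part of the patch): below is a minimal sketch of how a
host-side stage-2 path could drive the new range-based invalidation once this
lands. kvm_call_hyp() and struct kvm_s2_mmu are existing arm64 KVM symbols;
the helper name and the size-to-pages conversion are purely illustrative.

/*
 * Illustrative sketch only -- not taken from this series. On VHE,
 * kvm_call_hyp() resolves to a direct function call into the
 * __kvm_tlb_flush_vmid_range() defined in vhe/tlb.c; on nVHE it becomes
 * an SMCCC hypercall routed through handle___kvm_tlb_flush_vmid_range()
 * added above. The helper name below is hypothetical.
 */
static void stage2_flush_range_example(struct kvm_s2_mmu *mmu,
					phys_addr_t start, u64 size)
{
	unsigned long pages = size >> PAGE_SHIFT;

	kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, start, pages);
}

Whether the hypervisor side then issues a ripas2e1is range invalidation or
falls back to a per-stride loop is decided inside __flush_s2_tlb_range_op(),
introduced earlier in this series, based on FEAT_TLBIRANGE support.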