From patchwork Wed Mar 1 21:09:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156505 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5FB3C7EE30 for ; Wed, 1 Mar 2023 21:09:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229590AbjCAVJi (ORCPT ); Wed, 1 Mar 2023 16:09:38 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42932 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229568AbjCAVJe (ORCPT ); Wed, 1 Mar 2023 16:09:34 -0500 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B4313457C7 for ; Wed, 1 Mar 2023 13:09:33 -0800 (PST) Received: by mail-pj1-x104a.google.com with SMTP id fa3-20020a17090af0c300b002377eefb6acso5032995pjb.3 for ; Wed, 01 Mar 2023 13:09:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=pwnKFT79/G38czmqFkklntAsuiORkBHeRzzI9xeKUd0=; b=Y6KMs/J5dy6ymYWZOILIpgF4uO5DyW+W2L6h5SgiUgrKlxbLV0VfD0eB5T0SyWakz5 On/knjt5goPxrGOx+u/AS54/HTVc/ZzYJt01ipGAwPPC8Vf9LemxxKOL/HAnY81IgWLZ CqTxvAiKCm6F2IvqbpzH/4Rfc1lJqU5Tr/XEL9FddDj4PuxA5mw/wTh4mhI5gjLElCuu a5QeFHwA4n+6YLZoOK866YTQJHvi39nBEB05IriqEZQDfpmVKWdMbV/wc8z7YMXrHk7N 241L+jSqXN7TvHf7OS/H/Q7fRRN2r//7+X3mY5wTBmPGJWZFjIB0mnaYuWYD+du+fGO8 6xBA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=pwnKFT79/G38czmqFkklntAsuiORkBHeRzzI9xeKUd0=; b=WfSg/YhYSgDEOBpCAG+HlK3DloGOHgdH757oymDOafrRu2LWFUeTaVOrA/AqJsKDZJ 7efbTTLSec5iAsT5snUcThtxjZQcQjDsj6hMPpELSLQcKuKEEkNZOCaM0A3dLYqahXLU fYmRgKz784E6xr17oetq2HvxGrnc46VQE/EImt3HrLR/WBog5f6sR/2zCcKD4C5sPmAo Mqhtu7orH5TVW4p0Db4T6ddve3xy0mmKoPJl/bUU6OQQ6R0Wx2CPjvgxW3rFjc7YAyjZ eiCv/J3iZb28dtLVrdGMsGfxIZnrfbARZ28s6vkNMSmKF83Y65BPY++frYBuLohJJXU/ K0gw== X-Gm-Message-State: AO0yUKVrOBWdFmR+K4gYYV0Hq+JoF6XzFkvS7wqwdyuIuiVl5/3hwSVI iZoeuwwEgRGSjIT2Thxl7TpNv3Ph72EBrQ== X-Google-Smtp-Source: AK7set+fc1bMQ+o4i8otmXQs+s5skvluxb/TyWEthIPZ07xqUbCJm3klSDLcBdMEN23k6LUqdKJnKT+xrQXMyQ== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a17:902:f816:b0:19a:7c89:c63 with SMTP id ix22-20020a170902f81600b0019a7c890c63mr2835575plb.9.1677704972857; Wed, 01 Mar 2023 13:09:32 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:17 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-2-ricarkol@google.com> Subject: [PATCH v5 01/12] KVM: arm64: Add KVM_PGTABLE_WALK ctx->flags for skipping BBM and CMO From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, 
alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add two flags to kvm_pgtable_visit_ctx, KVM_PGTABLE_WALK_SKIP_BBM and KVM_PGTABLE_WALK_SKIP_CMO, to indicate that the walk should not perform break-before-make (BBM) nor cache maintenance operations (CMO). This will be used by a future commit to create unlinked tables not accessible to the HW page-table walker. This is safe as these removed tables are not visible to the HW page-table walker. Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_pgtable.h | 18 ++++++++++++++++++ arch/arm64/kvm/hyp/pgtable.c | 27 ++++++++++++++++----------- 2 files changed, 34 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 63f81b27a4e3..252b651f743d 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -188,12 +188,20 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end, * children. * @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared * with other software walkers. + * @KVM_PGTABLE_WALK_SKIP_BBM: Visit and update table entries + * without Break-before-make + * requirements. + * @KVM_PGTABLE_WALK_SKIP_CMO: Visit and update table entries + * without Cache maintenance + * operations required. */ enum kvm_pgtable_walk_flags { KVM_PGTABLE_WALK_LEAF = BIT(0), KVM_PGTABLE_WALK_TABLE_PRE = BIT(1), KVM_PGTABLE_WALK_TABLE_POST = BIT(2), KVM_PGTABLE_WALK_SHARED = BIT(3), + KVM_PGTABLE_WALK_SKIP_BBM = BIT(4), + KVM_PGTABLE_WALK_SKIP_CMO = BIT(5), }; struct kvm_pgtable_visit_ctx { @@ -215,6 +223,16 @@ static inline bool kvm_pgtable_walk_shared(const struct kvm_pgtable_visit_ctx *c return ctx->flags & KVM_PGTABLE_WALK_SHARED; } +static inline bool kvm_pgtable_walk_skip_bbm(const struct kvm_pgtable_visit_ctx *ctx) +{ + return ctx->flags & KVM_PGTABLE_WALK_SKIP_BBM; +} + +static inline bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx) +{ + return ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO; +} + /** * struct kvm_pgtable_walker - Hook into a page-table walk. * @cb: Callback function to invoke during the walk. diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index b11cf2c618a6..e093e222daf3 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -717,14 +717,17 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx, if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED)) return false; - /* - * Perform the appropriate TLB invalidation based on the evicted pte - * value (if any). - */ - if (kvm_pte_table(ctx->old, ctx->level)) - kvm_call_hyp(__kvm_tlb_flush_vmid, mmu); - else if (kvm_pte_valid(ctx->old)) - kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level); + if (!kvm_pgtable_walk_skip_bbm(ctx)) { + /* + * Perform the appropriate TLB invalidation based on the + * evicted pte value (if any).
+ */ + if (kvm_pte_table(ctx->old, ctx->level)) + kvm_call_hyp(__kvm_tlb_flush_vmid, mmu); + else if (kvm_pte_valid(ctx->old)) + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, + ctx->addr, ctx->level); + } if (stage2_pte_is_counted(ctx->old)) mm_ops->put_page(ctx->ptep); @@ -808,11 +811,13 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx, return -EAGAIN; /* Perform CMOs before installation of the guest stage-2 PTE */ - if (mm_ops->dcache_clean_inval_poc && stage2_pte_cacheable(pgt, new)) + if (!kvm_pgtable_walk_skip_cmo(ctx) && mm_ops->dcache_clean_inval_poc && + stage2_pte_cacheable(pgt, new)) mm_ops->dcache_clean_inval_poc(kvm_pte_follow(new, mm_ops), - granule); + granule); - if (mm_ops->icache_inval_pou && stage2_pte_executable(new)) + if (!kvm_pgtable_walk_skip_cmo(ctx) && mm_ops->icache_inval_pou && + stage2_pte_executable(new)) mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule); stage2_make_pte(ctx, new); From patchwork Wed Mar 1 21:09:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156507 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1384AC7EE23 for ; Wed, 1 Mar 2023 21:09:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229825AbjCAVJj (ORCPT ); Wed, 1 Mar 2023 16:09:39 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42902 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229589AbjCAVJg (ORCPT ); Wed, 1 Mar 2023 16:09:36 -0500 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6A3964AFF6 for ; Wed, 1 Mar 2023 13:09:35 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-536a4eba107so293241857b3.19 for ; Wed, 01 Mar 2023 13:09:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=VzIyQySx5sWgoLBKBraq0PBD+mNgK07WEx+j6MVzqPM=; b=fJSfK1i4WA7O+rhUUrcur7zN//1Q2TYfP++kVyMePa8X3MKpgHn9f2vhnTpPMjWRMF CLkyRV4kv0vWqNd6TGly5t6cluezCp5iqyy6kfoZ5MNhoqniGYTRLXwLcHB3+1bqGbg8 +T9MU3Vw0ogu30e7okI/c1GHhoitQxsgGTv/FGVDMpiDMcOxFDncIQuTNjzqk+3jXcqz mqkx5sxL8+M4wzlhLdzTDKte6+JW99WVunEJUUGDP4+Ocit73z4/mnInak6WumrY5NHV C2yFJ60kA1ZoD8r55B4XPtRbBijvK6jm4yVlTPdknH+MQB12tqeyCImLOAJ7ilv6Qe57 elCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=VzIyQySx5sWgoLBKBraq0PBD+mNgK07WEx+j6MVzqPM=; b=iMSCc4Cj8AE5ovLe+BkoGJ8M19pV+nHvk25O8MCjVvyszritqDjwmD0wB5GhVk07vi yWB7360VfTbAawIUIsP48IEeZgAZ6vriarMWX3RV+n691WC4HWCfxH1WZARe5Qo6ky5o Dwsz4sPrNlK0pArozDmwl7doKbrH16512O7uWU3ZoGi1xOeajD+Wj+3NFPhVhkoaocAA yeEKnee/oGIWkLalQPqME8hlJ8BYY7CsvPof89D4VVosIIp4IHecTF5v8Bs8sKJKgYrd z06uUZrE+TKY2jgFk5Xw7EEatZkWtYQhP2V+5g4A0BhzRFIlqR6bW/4hmGv6wY1HGqw1 OsJw== X-Gm-Message-State: AO0yUKWTsYCSlJKvLzF5zaME25Dzf9EFxY5vdcghUkOAvPFBy0F3p1A+ nszUw7jjwaMDTg1lnkcN4oJM6IZbOaw1ow== X-Google-Smtp-Source: 
AK7set+U/GHrK922XxR7lZ6/5QF7NSwHYfRHSugXKbr3gq2VSnqUgzH/QvSpXE8keLEtLhkmc3S+49W/5FJ8PQ== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a5b:8b:0:b0:8bb:dfe8:a33b with SMTP id b11-20020a5b008b000000b008bbdfe8a33bmr4245653ybp.9.1677704974687; Wed, 01 Mar 2023 13:09:34 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:18 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-3-ricarkol@google.com> Subject: [PATCH v5 02/12] KVM: arm64: Rename free_removed to free_unlinked From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Normalize on referring to tables outside of an active paging structure as 'unlinked'. A subsequent change to KVM will add support for building page tables that are not part of an active paging structure. The existing 'removed_table' terminology is quite clunky when applied in this context. No functional change intended. Signed-off-by: Ricardo Koller Reviewed-by: Oliver Upton Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_pgtable.h | 8 ++++---- arch/arm64/kvm/hyp/nvhe/mem_protect.c | 6 +++--- arch/arm64/kvm/hyp/pgtable.c | 6 +++--- arch/arm64/kvm/mmu.c | 10 +++++----- 4 files changed, 15 insertions(+), 15 deletions(-) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 252b651f743d..dcd3aafd3e6c 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -99,7 +99,7 @@ static inline bool kvm_level_supports_block_mapping(u32 level) * allocation is physically contiguous. * @free_pages_exact: Free an exact number of memory pages previously * allocated by zalloc_pages_exact. - * @free_removed_table: Free a removed paging structure by unlinking and + * @free_unlinked_table: Free an unlinked paging structure by unlinking and * dropping references. * @get_page: Increment the refcount on a page. * @put_page: Decrement the refcount on a page. When the @@ -119,7 +119,7 @@ struct kvm_pgtable_mm_ops { void* (*zalloc_page)(void *arg); void* (*zalloc_pages_exact)(size_t size); void (*free_pages_exact)(void *addr, size_t size); - void (*free_removed_table)(void *addr, u32 level); + void (*free_unlinked_table)(void *addr, u32 level); void (*get_page)(void *addr); void (*put_page)(void *addr); int (*page_count)(void *addr); @@ -450,7 +450,7 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt); /** - * kvm_pgtable_stage2_free_removed() - Free a removed stage-2 paging structure. + * kvm_pgtable_stage2_free_unlinked() - Free an unlinked stage-2 paging structure. * @mm_ops: Memory management callbacks. * @pgtable: Unlinked stage-2 paging structure to be freed. * @level: Level of the stage-2 paging structure to be freed.
@@ -458,7 +458,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt); * The page-table is assumed to be unreachable by any hardware walkers prior to * freeing and therefore no TLB invalidation is performed. */ -void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level); +void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level); /** * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table. diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 552653fa18be..b030170d803b 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -91,9 +91,9 @@ static void host_s2_put_page(void *addr) hyp_put_page(&host_s2_pool, addr); } -static void host_s2_free_removed_table(void *addr, u32 level) +static void host_s2_free_unlinked_table(void *addr, u32 level) { - kvm_pgtable_stage2_free_removed(&host_mmu.mm_ops, addr, level); + kvm_pgtable_stage2_free_unlinked(&host_mmu.mm_ops, addr, level); } static int prepare_s2_pool(void *pgt_pool_base) @@ -110,7 +110,7 @@ static int prepare_s2_pool(void *pgt_pool_base) host_mmu.mm_ops = (struct kvm_pgtable_mm_ops) { .zalloc_pages_exact = host_s2_zalloc_pages_exact, .zalloc_page = host_s2_zalloc_page, - .free_removed_table = host_s2_free_removed_table, + .free_unlinked_table = host_s2_free_unlinked_table, .phys_to_virt = hyp_phys_to_virt, .virt_to_phys = hyp_virt_to_phys, .page_count = hyp_page_count, diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index e093e222daf3..0a5ef9288371 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -841,7 +841,7 @@ static int stage2_map_walk_table_pre(const struct kvm_pgtable_visit_ctx *ctx, if (ret) return ret; - mm_ops->free_removed_table(childp, ctx->level); + mm_ops->free_unlinked_table(childp, ctx->level); return 0; } @@ -886,7 +886,7 @@ static int stage2_map_walk_leaf(const struct kvm_pgtable_visit_ctx *ctx, * The TABLE_PRE callback runs for table entries on the way down, looking * for table entries which we could conceivably replace with a block entry * for this mapping. If it finds one it replaces the entry and calls - * kvm_pgtable_mm_ops::free_removed_table() to tear down the detached table. + * kvm_pgtable_mm_ops::free_unlinked_table() to tear down the detached table. * * Otherwise, the LEAF callback performs the mapping at the existing leaves * instead. 
@@ -1250,7 +1250,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt) pgt->pgd = NULL; } -void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level) +void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level) { kvm_pteref_t ptep = (kvm_pteref_t)pgtable; struct kvm_pgtable_walker walker = { diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index a3ee3b605c9b..9bd3c2cfb476 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -130,21 +130,21 @@ static void kvm_s2_free_pages_exact(void *virt, size_t size) static struct kvm_pgtable_mm_ops kvm_s2_mm_ops; -static void stage2_free_removed_table_rcu_cb(struct rcu_head *head) +static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head) { struct page *page = container_of(head, struct page, rcu_head); void *pgtable = page_to_virt(page); u32 level = page_private(page); - kvm_pgtable_stage2_free_removed(&kvm_s2_mm_ops, pgtable, level); + kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level); } -static void stage2_free_removed_table(void *addr, u32 level) +static void stage2_free_unlinked_table(void *addr, u32 level) { struct page *page = virt_to_page(addr); set_page_private(page, (unsigned long)level); - call_rcu(&page->rcu_head, stage2_free_removed_table_rcu_cb); + call_rcu(&page->rcu_head, stage2_free_unlinked_table_rcu_cb); } static void kvm_host_get_page(void *addr) @@ -681,7 +681,7 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = { .zalloc_page = stage2_memcache_zalloc_page, .zalloc_pages_exact = kvm_s2_zalloc_pages_exact, .free_pages_exact = kvm_s2_free_pages_exact, - .free_removed_table = stage2_free_removed_table, + .free_unlinked_table = stage2_free_unlinked_table, .get_page = kvm_host_get_page, .put_page = kvm_s2_put_page, .page_count = kvm_host_page_count, From patchwork Wed Mar 1 21:09:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2EC50C7EE36 for ; Wed, 1 Mar 2023 21:09:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229826AbjCAVJl (ORCPT ); Wed, 1 Mar 2023 16:09:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42998 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229696AbjCAVJh (ORCPT ); Wed, 1 Mar 2023 16:09:37 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E78D6A271 for ; Wed, 1 Mar 2023 13:09:36 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id e1-20020a17090301c100b0019cd429f407so7553405plh.17 for ; Wed, 01 Mar 2023 13:09:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=UB+v1KXfYv/jq1MVixM+5dSTAYoM6sVXzmmlPxOdc9M=; b=tfaYY9Orf8PMpLOU5PYzEPoa4q54eqwuLJvLYe1xBdGlf0VmD6YZ4QCnm3REyv8UeZ JdlUGstzUqkoSXO+JDGyhlXZCYz5iEWW+hAEvWJfJ5ERMIKcZ/AHA+RevjY14CHtglLE ml41KPrpGEM/g4Nmz2X8iU2+iFNlK3xuc2EM9YygqOXZWQWD5SZHwlm0EVrbBSB8jU3R 
VW74ddI6S+AvCoPfCN6ezecAEZz790a+FeaZGCHWdV11canB8lwxdiJTkaIdddQR7y18 KpXDawM9KIMRbJS4TNqbNHQAILbB+rceVdAHmNQag6632CphAIQWNmwZueEW1frI1MUa BaaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=UB+v1KXfYv/jq1MVixM+5dSTAYoM6sVXzmmlPxOdc9M=; b=mEJhy3lUxCGq7EBWSfcPcqVEDuTdWA2dr9dV1zuk7ox1JQWV2gaGCijDyAnQbHY8ZA PedeHPcXlrkSfQyq8l0iaLe0NBS7kKG9eC7+tA2fSrJZpkINnB0SsCdXStq/4/o4jptC 4Q9g627gcPoM+iKiFIUep8bwRwYyEJ3RNmKPnpSRfhdBP/OFyg/TjTZWu7ugntKGAsor MbsINsZQ+EvvZIwt1h/HkJD6O9xz2ukdmMNVlvHxCmCmz+128b0JiIZdC+G3AmZaPUYU OXyGrCdlTw8vY3pg7PmyHtJY8ahQOxzAUckIN+NjuKv8nrwqBzEjMnPvmprktGJ+5IJM VoNg== X-Gm-Message-State: AO0yUKVDMiG7YwqBPvZDqOGxZJtQTskyFO/aIGSkthNf4FFtGmjKXqka 5SqhJQMONTyFbLn1jZ2vWA6sSmPy3xd40w== X-Google-Smtp-Source: AK7set+fOUhI/s8CCfJxuDPTO2RqICMUFLBxG1ByShr+Fzf0Ju8g3a0Fo12lxZhawKm/jPXDNOpTuyq4bxOG6g== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a17:90b:274b:b0:233:bada:17b5 with SMTP id qi11-20020a17090b274b00b00233bada17b5mr4782888pjb.4.1677704976349; Wed, 01 Mar 2023 13:09:36 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:19 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-4-ricarkol@google.com> Subject: [PATCH v5 03/12] KVM: arm64: Add helper for creating unlinked stage2 subtrees From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a stage2 helper, kvm_pgtable_stage2_create_unlinked(), for creating unlinked tables (which is the opposite of kvm_pgtable_stage2_free_unlinked()). Creating an unlinked table is useful for splitting PMD and PUD blocks into subtrees of PAGE_SIZE PTEs. For example, a PUD can be split into PAGE_SIZE PTEs by first creating a fully populated tree, and then use it to replace the PUD in a single step. This will be used in a subsequent commit for eager huge-page splitting (a dirty-logging optimization). No functional change intended. This new function will be used in a subsequent commit. Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_pgtable.h | 28 +++++++++++++++++ arch/arm64/kvm/hyp/pgtable.c | 46 ++++++++++++++++++++++++++++ 2 files changed, 74 insertions(+) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index dcd3aafd3e6c..2b98357a5497 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -460,6 +460,34 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt); */ void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level); +/** + * kvm_pgtable_stage2_create_unlinked() - Create an unlinked stage-2 paging structure. + * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). 
+ * @phys: Physical address of the memory to map. + * @level: Starting level of the stage-2 paging structure to be created. + * @prot: Permissions and attributes for the mapping. + * @mc: Cache of pre-allocated and zeroed memory from which to allocate + * page-table pages. + * @force_pte: Force mappings to PAGE_SIZE granularity. + * + * Returns an unlinked page-table tree. If @force_pte is true or + * @level is 2 (the PMD level), then the tree is mapped up to the + * PAGE_SIZE leaf PTE; the tree is mapped up one level otherwise. + * This new page-table tree is not reachable (i.e., it is unlinked) + * from the root pgd and it's therefore unreachableby the hardware + * page-table walker. No TLB invalidation or CMOs are performed. + * + * If device attributes are not explicitly requested in @prot, then the + * mapping will be normal, cacheable. + * + * Return: The fully populated (unlinked) stage-2 paging structure, or + * an ERR_PTR(error) on failure. + */ +kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt, + u64 phys, u32 level, + enum kvm_pgtable_prot prot, + void *mc, bool force_pte); + /** * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table. * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 0a5ef9288371..3554b74e13c6 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -1181,6 +1181,52 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size) return kvm_pgtable_walk(pgt, addr, size, &walker); } +kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt, + u64 phys, u32 level, + enum kvm_pgtable_prot prot, + void *mc, bool force_pte) +{ + struct stage2_map_data map_data = { + .phys = phys, + .mmu = pgt->mmu, + .memcache = mc, + .force_pte = force_pte, + }; + struct kvm_pgtable_walker walker = { + .cb = stage2_map_walker, + .flags = KVM_PGTABLE_WALK_LEAF | + KVM_PGTABLE_WALK_SKIP_BBM | + KVM_PGTABLE_WALK_SKIP_CMO, + .arg = &map_data, + }; + /* .addr (the IPA) is irrelevant for an unlinked table */ + struct kvm_pgtable_walk_data data = { + .walker = &walker, + .addr = 0, + .end = kvm_granule_size(level), + }; + struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops; + kvm_pte_t *pgtable; + int ret; + + ret = stage2_set_prot_attr(pgt, prot, &map_data.attr); + if (ret) + return ERR_PTR(ret); + + pgtable = mm_ops->zalloc_page(mc); + if (!pgtable) + return ERR_PTR(-ENOMEM); + + ret = __kvm_pgtable_walk(&data, mm_ops, (kvm_pteref_t)pgtable, + level + 1); + if (ret) { + kvm_pgtable_stage2_free_unlinked(mm_ops, pgtable, level); + mm_ops->put_page(pgtable); + return ERR_PTR(ret); + } + + return pgtable; +} int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops, From patchwork Wed Mar 1 21:09:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BD4AFC7EE30 for ; Wed, 1 Mar 2023 21:09:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229835AbjCAVJm (ORCPT ); Wed, 1 Mar 2023 16:09:42 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43068 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229568AbjCAVJk (ORCPT ); Wed, 1 Mar 2023 16:09:40 -0500 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E3BDD474C2 for ; Wed, 1 Mar 2023 13:09:38 -0800 (PST) Received: by mail-pf1-x449.google.com with SMTP id i11-20020a056a00224b00b005d44149eb06so7543300pfu.10 for ; Wed, 01 Mar 2023 13:09:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=UXlqkXdQ57535COU40bhXXPdK69DMqOVmCbRf71v1wo=; b=okvialHfqoT3ej0yJmCEiD4loMwg5foTgzLUZaDEFtdjRh0CiJaJqjAtRsDTYrbZtn aBm2bcxiXStkha31Kch/Z3yTKx3ss0tZRiDT6aaM+4JQCGGCAjaJUxT1esO8B6nHpXgQ XoZrMGwwu2mQOacwkKAO88tArbrdO4/b9zhJyUHmHn/Au77aahZ08pUcfVy+0fHZGnyP Cp4wBTH5zW2YIhF+S81H+qs0iiM/9EjZd9iCD0QFIWlpt/dOvUW7NAsD1Q85bYttkgPH CYWSuwyM5LqXYBudX1kEw27LmNm+z8UZtpEY9GKdefz03STorQI/BRBIgqr2HLwhmsE6 YuwA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=UXlqkXdQ57535COU40bhXXPdK69DMqOVmCbRf71v1wo=; b=mF+5ug2oEDWS4ipqIzZDHl07WYcxPN+y4dqDzqOODp8WsAitCubGgJsepbS69kuOQt pfaxZ0yF4693XaCbxFgdIJhVMdD1EvKlvufqX0ZxnHEXFj5mIO6sJXPv0QB93agYM91G nrtc0DXgc2FZeRID1dvv9XPzN7ulAPFSyeOk6nGHQhsaOOOJicZpmopXSg6HaLeYdalD e6/GkxIftg0tlLiIGTZyYnNEKpopNO/o3/bCB9b5zWT9Rp3mH2+5IMi5gn9PqxGAbF8Z CyioXRr7T6DBLyEBSaL8jVmUii5CLX5zaH4L1pgOPEFNtZ80jafT7ZD89AqFjGDsBBNg Yazg== X-Gm-Message-State: AO0yUKUff0bdHWIhe36gTcmt5c4VBR1qsEa8AfILNvpGocwbbDQH5/p2 Plg7hhUiOV1gcNE+59EMNPaeq/uDPUfg1A== X-Google-Smtp-Source: AK7set9R9fCov/cUPwxBPh4IR7WQbWxZQlFt1DS4knCiN0CUXr55ieEgvqthD3+dvbvaKKYYX8d8XoICE7rjxg== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a17:90a:d3ca:b0:237:9cbe:22ad with SMTP id d10-20020a17090ad3ca00b002379cbe22admr3132878pjw.5.1677704978261; Wed, 01 Mar 2023 13:09:38 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:20 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-5-ricarkol@google.com> Subject: [PATCH v5 04/12] KVM: arm64: Add kvm_pgtable_stage2_split() From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a new stage2 function, kvm_pgtable_stage2_split(), for splitting a range of huge pages. This will be used for eager-splitting huge pages into PAGE_SIZE pages. The goal is to avoid having to split huge pages on write-protection faults, and instead use this function to do it ahead of time for large ranges (e.g., all guest memory in 1G chunks at a time). No functional change intended. This new function will be used in a subsequent commit. 
Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_pgtable.h | 30 +++++++ arch/arm64/kvm/hyp/pgtable.c | 113 +++++++++++++++++++++++++++ 2 files changed, 143 insertions(+) diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h index 2b98357a5497..ce0a8e17fb6d 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -657,6 +657,36 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr); */ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size); +/** + * kvm_pgtable_stage2_split() - Split a range of huge pages into leaf PTEs pointing + * to PAGE_SIZE guest pages. + * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init(). + * @addr: Intermediate physical address from which to split. + * @size: Size of the range. + * @mc: Cache of pre-allocated and zeroed memory from which to allocate + * page-table pages. + * @mc_capacity: Number of pages in @mc. + * + * @addr and the end (@addr + @size) are effectively aligned down and up to + * the top level huge-page block size. This is an example using 1GB + * huge-pages and 4KB granules. + * + * [---input range---] + * : : + * [--1G block pte--][--1G block pte--][--1G block pte--][--1G block pte--] + * : : + * [--2MB--][--2MB--][--2MB--][--2MB--] + * : : + * [ ][ ][:][ ][ ][ ][ ][ ][:][ ][ ][ ] + * : : + * + * Return: 0 on success, negative error code on failure. Note that + * kvm_pgtable_stage2_split() is best effort: it tries to break as many + * blocks in the input range as allowed by @mc_capacity. + */ +int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, + void *mc, u64 mc_capacity); + /** * kvm_pgtable_walk() - Walk a page-table. * @pgt: Page-table structure initialised by kvm_pgtable_*_init(). diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 3554b74e13c6..75726edba2f3 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -1228,6 +1228,119 @@ kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt, return pgtable; } +struct stage2_split_data { + struct kvm_s2_mmu *mmu; + void *memcache; + u64 mc_capacity; +}; + +/* + * Get the number of page-tables needed to replace a block with a + * fully populated tree, up to the PTE level, at particular level. + */ +static inline int stage2_block_get_nr_page_tables(u32 level) +{ + if (WARN_ON_ONCE(level < KVM_PGTABLE_MIN_BLOCK_LEVEL || + level >= KVM_PGTABLE_MAX_LEVELS)) + return -EINVAL; + + switch (level) { + case 1: + return PTRS_PER_PTE + 1; + case 2: + return 1; + case 3: + return 0; + default: + return -EINVAL; + }; +} + +static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx, + enum kvm_pgtable_walk_flags visit) +{ + struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops; + struct stage2_split_data *data = ctx->arg; + kvm_pte_t pte = ctx->old, new, *childp; + enum kvm_pgtable_prot prot; + void *mc = data->memcache; + u32 level = ctx->level; + bool force_pte; + int nr_pages; + u64 phys; + + /* No huge-pages exist at the last level */ + if (level == KVM_PGTABLE_MAX_LEVELS - 1) + return 0; + + /* We only split valid block mappings */ + if (!kvm_pte_valid(pte)) + return 0; + + nr_pages = stage2_block_get_nr_page_tables(level); + if (nr_pages < 0) + return nr_pages; + + if (data->mc_capacity >= nr_pages) { + /* Build a tree mapped down to the PTE granularity. */ + force_pte = true; + } else { + /* + * Don't force PTEs. 
This requires a single page of PMDs at the + * PUD level, or a single page of PTEs at the PMD level. If we + * are at the PUD level, the PTEs will be created recursively. + */ + force_pte = false; + nr_pages = 1; + } + + if (data->mc_capacity < nr_pages) + return -ENOMEM; + + phys = kvm_pte_to_phys(pte); + prot = kvm_pgtable_stage2_pte_prot(pte); + + childp = kvm_pgtable_stage2_create_unlinked(data->mmu->pgt, phys, + level, prot, mc, force_pte); + if (IS_ERR(childp)) + return PTR_ERR(childp); + + if (!stage2_try_break_pte(ctx, data->mmu)) { + kvm_pgtable_stage2_free_unlinked(mm_ops, childp, level); + mm_ops->put_page(childp); + return -EAGAIN; + } + + /* + * Note, the contents of the page table are guaranteed to be made + * visible before the new PTE is assigned because stage2_make_pte() + * writes the PTE using smp_store_release(). + */ + new = kvm_init_table_pte(childp, mm_ops); + stage2_make_pte(ctx, new); + dsb(ishst); + data->mc_capacity -= nr_pages; + return 0; +} + +int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, + void *mc, u64 mc_capacity) +{ + struct stage2_split_data split_data = { + .mmu = pgt->mmu, + .memcache = mc, + .mc_capacity = mc_capacity, + }; + + struct kvm_pgtable_walker walker = { + .cb = stage2_split_walker, + .flags = KVM_PGTABLE_WALK_LEAF, + .arg = &split_data, + }; + + return kvm_pgtable_walk(pgt, addr, size, &walker); +} + int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops, enum kvm_pgtable_stage2_flags flags, From patchwork Wed Mar 1 21:09:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156518 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68878C678D4 for ; Wed, 1 Mar 2023 21:17:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229520AbjCAVRi (ORCPT ); Wed, 1 Mar 2023 16:17:38 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50532 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229613AbjCAVRZ (ORCPT ); Wed, 1 Mar 2023 16:17:25 -0500 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6668D4D61C for ; Wed, 1 Mar 2023 13:17:24 -0800 (PST) Received: by mail-pl1-x64a.google.com with SMTP id c3-20020a170902724300b0019d1ffec36dso4871292pll.9 for ; Wed, 01 Mar 2023 13:17:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=AyQwFFR+sM3AjGhPSeNMTQXkHflwmeDdXwXEbmr5h+E=; b=pYMa2OtEY+vmyWWa9yw3R2ExSa6IY1i6E6t+6wzheNsXmU5EPR01Vot7JIJHT/qwak SkoNr2mRknbC84EJc5DKGfsdFapxqQpcB5SOG+6efwyftIotDOGzPLW1LXWiBwKHTcjg kovTTzH7nZIumcI/elhor2wVw/k0ndk1z55wrbw+WtxH5Lf9iTJ7Y07jAX/4Gb8WLclD MTWins+bfPXhOi0PfQUNOMXM1brjemy27YbFup/ZcKSlbrRpQUnQZXKyaTQlYGb8dA/v ue8HOohm7e0c8whS0Ge4nM62Sl8VIQ2egCyJdvQOWhZZYIO0DQb6ApOU5Wq3UTTCEXVF y5TQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
bh=AyQwFFR+sM3AjGhPSeNMTQXkHflwmeDdXwXEbmr5h+E=; b=ZE6kd4xyffKd+iTP/zvRXk+AjX3iRDFLu4hvQ5d69ynJ4VqQ9zDvbc6NioCHvlTH70 pKs0b0IT9cjq/vdt8nCnGC+XKLaOeYLRnWdudgCpcttmID4cUUFwYW9hCm/4+Fc/9XAC OVNXsf5r/romwOLHIwX0Uc/XtbH7u0V6TjJJdIMsO9xfrGmWEnsPPfYHA5uftcfH6FXO 2wbPCCqhlRRF7rEtVnP1HT/Y3rDnF39WFb5ISjhhW6mXT2tu0gDskFh0kvkyinG81t0I l4g/KVga+8H0oQRXb896qp5Lxum+/l1d3eXm8s+cXqEeYqjMwSLwTCeMTBnByahqo6Xg s2DQ== X-Gm-Message-State: AO0yUKUUyOHzFb3i0RG4miXHP3fawtByUB7EuL7gLpviCTecDJDml7sA cgozdcnczGFt8EbRukJlqYpF+h1R9QziIQ== X-Google-Smtp-Source: AK7set9TXehDrschYbjcA/9KfvI0u1IPieBacO26x8ZbIZC1/zC1kpDzsxUGBOvvlhI5NDOtZwBae8PR/0EA+A== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a05:6a00:2d8:b0:5a8:9872:2b9b with SMTP id b24-20020a056a0002d800b005a898722b9bmr3125821pft.1.1677704980039; Wed, 01 Mar 2023 13:09:40 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:21 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-6-ricarkol@google.com> Subject: [PATCH v5 05/12] KVM: arm64: Refactor kvm_arch_commit_memory_region() From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Refactor kvm_arch_commit_memory_region() as a preparation for a future commit to look cleaner and more understandable. Also, it looks more like its x86 counterpart (in kvm_mmu_slot_apply_flags()). No functional change intended. Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- arch/arm64/kvm/mmu.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 9bd3c2cfb476..d2c5e6992459 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1761,20 +1761,27 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, const struct kvm_memory_slot *new, enum kvm_mr_change change) { + bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES; + /* * At this point memslot has been committed and there is an * allocated dirty_bitmap[], dirty pages will be tracked while the * memory slot is write protected. */ - if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) { + if (log_dirty_pages) { + + if (change == KVM_MR_DELETE) + return; + /* * If we're with initial-all-set, we don't need to write * protect any pages because they're all reported as dirty. * Huge pages and normal pages will be write protect gradually. 
*/ - if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) { - kvm_mmu_wp_memory_region(kvm, new->id); - } + if (kvm_dirty_log_manual_protect_and_init_set(kvm)) + return; + + kvm_mmu_wp_memory_region(kvm, new->id); } } From patchwork Wed Mar 1 21:09:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99897C678D4 for ; Wed, 1 Mar 2023 21:09:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229651AbjCAVJo (ORCPT ); Wed, 1 Mar 2023 16:09:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43218 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229836AbjCAVJn (ORCPT ); Wed, 1 Mar 2023 16:09:43 -0500 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A7154C6C9 for ; Wed, 1 Mar 2023 13:09:42 -0800 (PST) Received: by mail-pf1-x44a.google.com with SMTP id a10-20020a056a000c8a00b005fc6b117942so4535771pfv.2 for ; Wed, 01 Mar 2023 13:09:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=FD7uNwpEYF9RsVYatisVMtB7W3/05QCRhc5pQD1utdg=; b=GKbeHEhd81x4RT0NRiaNegia/D4yGXeprV1zo45q3lUMhN8Jd0I5JPlSf0CNvA0/+e XBbcwBZiGpFyfRO88xezpgSDQzv1xFoV6nXnPYjkBOHhGox/VP2VPJkn1S9XYhf6d52P nz3VuQCoHgfWyD+YD8NffwGbJRhosrREg0eRLtc6DgoR3C+EWBi6LGrEsTgA11q0L/CG xRHJHbXfg67Sc9Sp0OFyUoKPaeqtXDtK8zQE2OVXSldXJnGaPNPXIzDm7BMv+9TKgg1d mcprn81G4KfeaX08Uthza5kJIwnmIUGcuozOci+gMx9fix/o98HhRJux7SLwShphnMSt hLMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=FD7uNwpEYF9RsVYatisVMtB7W3/05QCRhc5pQD1utdg=; b=4bl8q0XqZbXpvobGoJaU7aGW+pOMwtqSlxjbk29XAdrSN4kyxqxuhkPzkHaVGkZyeD pRC2+rDQgowVm9CG7Z2wVQ3G6tyxkFoc/pDYrXC8MiA6zSQ/tm88K2bYhkFcb/N08HSO B71P72yusrBYMtx8aEHqLaeweh8jQeqU5rS6Y/Exlyvmu88Ct+sAbC/lNv5AtD+EuUUJ 3zptN3po34RsbQ/zmLQa8KdkxKBBUnuVfsJvgk4qpgP2xVMvDe+TjVAn8rXa73IdvosY nPbvI8oO8dkjw47qyLSr6IevoKcxi7bbJz7hCb6AznBhx5JDgRaK2bcX3jjWmagg3ApD Lhjg== X-Gm-Message-State: AO0yUKXOaoDXuv+8Z1z89AjLfhYNfzMWgW1aQZcGcfmHnZdD/hBWZVXA dbIF9wGuO+/Wjbw8Ycr31ibhU68dCtqcYw== X-Google-Smtp-Source: AK7set8BFQJrdcP7LxWyeTBzLMdsUZBctkkgkT8+/5jkqM87wF7FFCh3sVs/KW+w4RyY6le5Sdo6CkLY3bMq2A== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a63:381d:0:b0:503:7c81:4599 with SMTP id f29-20020a63381d000000b005037c814599mr2229087pga.11.1677704981897; Wed, 01 Mar 2023 13:09:41 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:22 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-7-ricarkol@google.com> Subject: [PATCH v5 06/12] KVM: arm64: Add kvm_uninit_stage2_mmu() From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, 
yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add kvm_uninit_stage2_mmu() and move kvm_free_stage2_pgd() into it. A future commit will add some more things to do inside of kvm_uninit_stage2_mmu(). No functional change intended. Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- arch/arm64/include/asm/kvm_mmu.h | 1 + arch/arm64/kvm/mmu.c | 7 ++++++- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index e4a7e6369499..058f3ae5bc26 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -167,6 +167,7 @@ void free_hyp_pgds(void); void stage2_unmap_vm(struct kvm *kvm); int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type); +void kvm_uninit_stage2_mmu(struct kvm *kvm); void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu); int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, phys_addr_t pa, unsigned long size, bool writable); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index d2c5e6992459..812633a75e74 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -766,6 +766,11 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t return err; } +void kvm_uninit_stage2_mmu(struct kvm *kvm) +{ + kvm_free_stage2_pgd(&kvm->arch.mmu); +} + static void stage2_unmap_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot) { @@ -1855,7 +1860,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) void kvm_arch_flush_shadow_all(struct kvm *kvm) { - kvm_free_stage2_pgd(&kvm->arch.mmu); + kvm_uninit_stage2_mmu(kvm); } void kvm_arch_flush_shadow_memslot(struct kvm *kvm, From patchwork Wed Mar 1 21:09:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4360DC7EE23 for ; Wed, 1 Mar 2023 21:09:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229840AbjCAVJr (ORCPT ); Wed, 1 Mar 2023 16:09:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43208 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229842AbjCAVJp (ORCPT ); Wed, 1 Mar 2023 16:09:45 -0500 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8F9754BEA5 for ; Wed, 1 Mar 2023 13:09:44 -0800 (PST) Received: by mail-pj1-x1049.google.com with SMTP id v15-20020a17090a458f00b0023816b2f381so4347038pjg.2 for ; Wed, 01 Mar 2023 13:09:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=DqA9XPbdG72ij8bnt7P1FG5k/aZwFU14SdlEVjUmaeQ=; 
b=Di8RI9KA4HPs+lwFiOVEWU1f+4M8cDSY9XVJ/MlMifv+lFD1Kx+kz15kGAQ26n33Rw mLMjSPzYRJ/nA8sEBvn7Ge1ky3/HXLI5lzHK4ddEk0XvH3x3kphbhtm2NM7a3PFMtPmw dQu24XlzT/OTKwnEC/dX6oZ2++xnNEzNw/S7IoZ/bPVBy0rcJX8g0cGQuCqBcc1oczET V/5F98DdOOcXw+2cKe9nhAEnUrJiEpuL9jW+tdG8YqbPnTlKBoXpgbkwU9UMxbh05YRH sWwwgixDq//SOtNAEZI7n+RQGa2bY6wjv3xzSj98H8ReMzzb+wnY20jIzezLmA09eWVj IbKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=DqA9XPbdG72ij8bnt7P1FG5k/aZwFU14SdlEVjUmaeQ=; b=VohlXrtomed3yadSeCnDbORvyk8NxX199ocZkuyEjm4NFDQvrcaeH04ZGXX+ZyTzG8 pQM8zQTaybmPtXpfpRijZDHtAJPRsmV7et6/ODucy/9+ZL3ghPI/KkNkdstzaMNXZosg dnzXEYTmdwh8rqFA2K9BB7DoEamliQUIfuN8Q2ylpHZyr79lXhVdONUAqMEnhCnAcwg6 6Sjg0Qrttto73VnL/bodFm6Hy6s0yuXmzGCBnO3CYjSKWkMfdmEh7NKgGkkKxhrFN0WS hUF+fEUAJQlBqk6bhip/fzfq8kdyY+tMTccERk4qCRqfhQqEkDluTMzrB9OPOEJsBGPb 4WSw== X-Gm-Message-State: AO0yUKVWQMPC/JN4m48YD1TjwM0xea/e1YHCandeFJIDmcgRdomwZx6R jhC35gOHDJY4RBMFQymwY1BNxeCFMVyBjw== X-Google-Smtp-Source: AK7set9aWhP2GLViX6xifcLL83bOgHSFy+VYxOzjjIe65tdTxY8xZL1AY5Jg738RWC0IZjRmQr5q+r1hn95JbQ== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a17:903:187:b0:19b:36:30e0 with SMTP id z7-20020a170903018700b0019b003630e0mr3105569plg.5.1677704983960; Wed, 01 Mar 2023 13:09:43 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:23 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-8-ricarkol@google.com> Subject: [PATCH v5 07/12] KVM: arm64: Export kvm_are_all_memslots_empty() From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Export kvm_are_all_memslots_empty(). This will be used by a future commit when checking before setting a capability. No functional change intended. 
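For context, the intended call pattern looks roughly like the sketch below. This is illustrative only: the real capability handler is added by a later patch in this series, and the field written here (split_page_chunk_size) also comes from that later patch; only kvm_are_all_memslots_empty() and kvm->slots_lock are existing KVM symbols.

/*
 * Illustrative only: reject a VM-wide configuration change once any
 * memslot exists. Callers invoke kvm_are_all_memslots_empty() with
 * kvm->slots_lock held, which also serializes memslot creation.
 */
static int example_enable_vm_cap(struct kvm *kvm, u64 val)
{
	int ret = 0;

	mutex_lock(&kvm->slots_lock);
	if (!kvm_are_all_memslots_empty(kvm))
		ret = -EINVAL;
	else
		kvm->arch.mmu.split_page_chunk_size = val;
	mutex_unlock(&kvm->slots_lock);

	return ret;
}
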
Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- include/linux/kvm_host.h | 2 ++ virt/kvm/kvm_main.c | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4f26b244f6d0..8c5530e03a78 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -991,6 +991,8 @@ static inline bool kvm_memslots_empty(struct kvm_memslots *slots) return RB_EMPTY_ROOT(&slots->gfn_tree); } +bool kvm_are_all_memslots_empty(struct kvm *kvm); + #define kvm_for_each_memslot(memslot, bkt, slots) \ hash_for_each(slots->id_hash, bkt, memslot, id_node[slots->node_idx]) \ if (WARN_ON_ONCE(!memslot->npages)) { \ diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 9c60384b5ae0..3940d2467e1b 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -4604,7 +4604,7 @@ int __attribute__((weak)) kvm_vm_ioctl_enable_cap(struct kvm *kvm, return -EINVAL; } -static bool kvm_are_all_memslots_empty(struct kvm *kvm) +bool kvm_are_all_memslots_empty(struct kvm *kvm) { int i; From patchwork Wed Mar 1 21:09:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86BD0C7EE23 for ; Wed, 1 Mar 2023 21:09:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229847AbjCAVJt (ORCPT ); Wed, 1 Mar 2023 16:09:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229765AbjCAVJr (ORCPT ); Wed, 1 Mar 2023 16:09:47 -0500 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 668D64BEA5 for ; Wed, 1 Mar 2023 13:09:46 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-536bbaeceeaso290996677b3.11 for ; Wed, 01 Mar 2023 13:09:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jE1J4U6/atcmd20NL1Mp29Y9St+slS149D7mgDZE7fs=; b=g70xmngABYMketwfOJelnlfpyG6iTtR3NeNZKhS5EghXY/IMSnL+IyguR4MtiLARax W9c2o88G9Gd/K/VXBYKpK5UMawKjYXSox90Z6YBrsWeT8bVG8sQteMR9MA5lLp1OJlJw zSW3+bRwtJ9b6n9BO2CpGieXAKvUDAsu1+TiImEjtllltieXyS+IGqZEajhmRx9caiMc vRyCVxcl3XGelcdKDaGHBzAgadD51oIN/7oemh91rSoz6ZlHgvO6HZuHgfmExyovRzIO CTCwFsQj/TuLgs6grBj8nsAr4vZDsw0wbBCDJKBP0RlP08GCz2vgAWBeYsoIvEvbqx/d UomQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jE1J4U6/atcmd20NL1Mp29Y9St+slS149D7mgDZE7fs=; b=MkhzB6Pz09aWQIXPwgk633koHsGJbTFCyemdF/ZyxW9JRh4uY6R+ObawcgVuzxSEw+ l0G/z7tQGir6ZkOujMmByH1EnhcoqoXD0I6PJBgGRCzlRmB3hEMZ4oGVG9o/6T6R3LrV S97VIVCUqoKTk9R7MJ8f4Tt2zJ83C9xL1omoip/guFal5eGRN2BEURx0P9AtvNrX4A9o mRT1sSHgW19mc00HpMERtmb5V/DQkUEZXA8SgZMfUr4cZyTp+O30SIH65tB4CfIAp62h CmlbYsc+5U0Ewm3axs/CiWUW10fyGpLcitDWQ43Y5PYuqourUtMQ8RGpzMhN2NzsxqR0 37Hw== X-Gm-Message-State: AO0yUKW03VQQ+cTPqSzshdyudqOtMbyNUFUwk6MGgMmmAJ4P4AH1JXv1 
9DFgMewMt95n5LljVSF5uPARTDR350yVEw== X-Google-Smtp-Source: AK7set9kvNiIbs9Cxn/NNJL2YtCFCGTzOjDisfC42CMKc717U5WjOA54kOKjWNC+5uxK1Mh4rxIH73bmFDCuVw== X-Received: from ricarkol4.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1248]) (user=ricarkol job=sendgmr) by 2002:a05:6902:101:b0:a4e:4575:f3ec with SMTP id o1-20020a056902010100b00a4e4575f3ecmr3487680ybh.0.1677704985661; Wed, 01 Mar 2023 13:09:45 -0800 (PST) Date: Wed, 1 Mar 2023 21:09:24 +0000 In-Reply-To: <20230301210928.565562-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-9-ricarkol@google.com> Subject: [PATCH v5 08/12] KVM: arm64: Add KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller , Oliver Upton Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a capability for userspace to specify the eager split chunk size. The chunk size specifies how many pages to break at a time, using a single allocation. Bigger the chunk size, more pages need to be allocated ahead of time. Suggested-by: Oliver Upton Signed-off-by: Ricardo Koller --- Documentation/virt/kvm/api.rst | 26 ++++++++++++++++++++++++++ arch/arm64/include/asm/kvm_host.h | 19 +++++++++++++++++++ arch/arm64/kvm/arm.c | 22 ++++++++++++++++++++++ arch/arm64/kvm/mmu.c | 3 +++ include/uapi/linux/kvm.h | 1 + 5 files changed, 71 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 9807b05a1b57..a9332e331cce 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -8284,6 +8284,32 @@ structure. When getting the Modified Change Topology Report value, the attr->addr must point to a byte where the value will be stored or retrieved from. +8.40 KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE +--------------------------------------- + +:Capability: KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE +:Architectures: arm64 +:Type: vm +:Parameters: arg[0] is the new chunk size. +:Returns: 0 on success, -EINVAL if any memslot has been created. + +This capability sets the chunk size used in Eager Page Splitting. + +Eager Page Splitting improves the performance of dirty-logging (used +in live migrations) when guest memory is backed by huge-pages. This +optimization is enabled by default on arm64. It avoids splitting +huge-pages (into PAGE_SIZE pages) on fault, by doing it eagerly when +enabling dirty logging (with the KVM_MEM_LOG_DIRTY_PAGES flag for a +memory region), or when using KVM_CLEAR_DIRTY_LOG. + +The chunk size specifies how many pages to break at a time, using a +single allocation for each chunk. Bigger the chunk size, more pages +need to be allocated ahead of time. A good heuristic is to pick the +size of the huge-pages as the chunk size. + +If the chunk size (arg[0]) is zero, then no eager page splitting is +performed. The default value PMD size (e.g., 2M when PAGE_SIZE is 4K). + 9. 
Known KVM API problems ========================= diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 35a159d131b5..1445cbf6295e 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -153,6 +153,25 @@ struct kvm_s2_mmu { /* The last vcpu id that ran on each physical CPU */ int __percpu *last_vcpu_ran; +#define KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT PMD_SIZE + /* + * Memory cache used to split + * KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE worth of huge pages. It + * is used to allocate stage2 page tables while splitting huge + * pages. Note that the choice of EAGER_PAGE_SPLIT_CHUNK_SIZE + * influences both the capacity of the split page cache, and + * how often KVM reschedules. Be wary of raising CHUNK_SIZE + * too high. + * + * A good heuristic to pick CHUNK_SIZE is that it should be + * the size of the huge-pages backing guest memory. If not + * known, the PMD size (usually 2M) is a good guess. + * + * Protected by kvm->slots_lock. + */ + struct kvm_mmu_memory_cache split_page_cache; + uint64_t split_page_chunk_size; + struct kvm_arch *arch; }; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 9c5573bc4614..c80617ced599 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -101,6 +101,22 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, r = 0; set_bit(KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED, &kvm->arch.flags); break; + case KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE: + mutex_lock(&kvm->lock); + mutex_lock(&kvm->slots_lock); + /* + * To keep things simple, allow changing the chunk + * size only if there are no memslots created. + */ + if (!kvm_are_all_memslots_empty(kvm)) { + r = -EINVAL; + } else { + r = 0; + kvm->arch.mmu.split_page_chunk_size = cap->args[0]; + } + mutex_unlock(&kvm->slots_lock); + mutex_unlock(&kvm->lock); + break; default: r = -EINVAL; break; @@ -298,6 +314,12 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_PTRAUTH_GENERIC: r = system_has_full_ptr_auth(); break; + case KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE: + if (kvm) + r = kvm->arch.mmu.split_page_chunk_size; + else + r = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT; + break; default: r = 0; } diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 812633a75e74..e2ada6588017 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -755,6 +755,9 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t for_each_possible_cpu(cpu) *per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1; + mmu->split_page_cache.gfp_zero = __GFP_ZERO; + mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT; + mmu->pgt = pgt; mmu->pgd_phys = __pa(pgt->pgd); return 0; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 55155e262646..02e05f7918e2 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1175,6 +1175,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225 +#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 226 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Wed Mar 1 21:09:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ricardo Koller X-Patchwork-Id: 13156512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
DD051C678D4 for ; Wed, 1 Mar 2023 21:09:53 +0000 (UTC)
Date: Wed, 1 Mar 2023 21:09:25 +0000
In-Reply-To: <20230301210928.565562-1-ricarkol@google.com>
References: <20230301210928.565562-1-ricarkol@google.com>
Message-ID: <20230301210928.565562-10-ricarkol@google.com>
Subject: [PATCH v5 09/12] KVM: arm64: Split huge pages when dirty logging is enabled
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller
Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org

Split huge pages eagerly when enabling dirty logging. The goal is to avoid doing it while faulting on write-protected pages, which negatively impacts guest performance.
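For illustration only (this snippet is not part of the series), a VMM might wire the two pieces together roughly as follows. KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE is the capability added earlier in this series, the 2M chunk size is just an example value, and the helper names, vm_fd, and the memslot descriptor are made up for the sketch:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Pick the chunk size before any memslot is created; the series only
 * allows changing it while the VM has no memslots.
 */
static int enable_eager_split(int vm_fd, unsigned long chunk_size)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE,
		.args[0] = chunk_size,	/* e.g. 2M to match PMD-backed guest memory */
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/*
 * Re-registering an existing slot with KVM_MEM_LOG_DIRTY_PAGES set is
 * what reaches the eager-splitting path added by this patch.
 */
static int enable_dirty_logging(int vm_fd, struct kvm_userspace_memory_region *region)
{
	region->flags |= KVM_MEM_LOG_DIRTY_PAGES;
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, region);
}
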
A memslot marked for dirty logging is split in 1GB pieces at a time. This is in order to release the mmu_lock and give other kernel threads the opportunity to run, and also in order to allocate enough pages to split a 1GB range worth of huge pages (or a single 1GB huge page). Note that these page allocations can fail, so eager page splitting is best-effort. This is not a correctness issue though, as huge pages can still be split on write-faults. The benefits of eager page splitting are the same as in x86, added with commit a3fe5dbda0a4 ("KVM: x86/mmu: Split huge pages mapped by the TDP MMU when dirty logging is enabled"). For example, when running dirty_log_perf_test with 64 virtual CPUs (Ampere Altra), 1GB per vCPU, 50% reads, and 2MB HugeTLB memory, the time it takes vCPUs to access all of their memory after dirty logging is enabled decreased by 44% from 2.58s to 1.42s. Signed-off-by: Ricardo Koller Reviewed-by: Shaoqin Huang --- arch/arm64/kvm/mmu.c | 118 ++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 116 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index e2ada6588017..20458251c85e 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -31,14 +31,21 @@ static phys_addr_t hyp_idmap_vector; static unsigned long io_map_base; -static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end) +static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end, + phys_addr_t size) { - phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL); phys_addr_t boundary = ALIGN_DOWN(addr + size, size); return (boundary - 1 < end - 1) ? boundary : end; } +static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end) +{ + phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL); + + return __stage2_range_addr_end(addr, end, size); +} + /* * Release kvm_mmu_lock periodically if the memory region is large. Otherwise, * we may see kernel panics with CONFIG_DETECT_HUNG_TASK, @@ -71,6 +78,77 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr, return ret; } +static bool need_topup_split_page_cache_or_resched(struct kvm *kvm, uint64_t min) +{ + struct kvm_mmu_memory_cache *cache; + + if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) + return true; + + cache = &kvm->arch.mmu.split_page_cache; + return kvm_mmu_memory_cache_nr_free_objects(cache) < min; +} + +/* + * Get the maximum number of page-tables needed to split a range of + * blocks into PAGE_SIZE PTEs. It assumes the range is already mapped + * at the PMD level, or at the PUD level if allowed. + */ +static int kvm_mmu_split_nr_page_tables(u64 range) +{ + int n = 0; + + if (KVM_PGTABLE_MIN_BLOCK_LEVEL < 2) + n += DIV_ROUND_UP_ULL(range, PUD_SIZE); + n += DIV_ROUND_UP_ULL(range, PMD_SIZE); + return n; +} + +static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr, + phys_addr_t end) +{ + struct kvm_mmu_memory_cache *cache; + struct kvm_pgtable *pgt; + int ret; + u64 next; + u64 chunk_size = kvm->arch.mmu.split_page_chunk_size; + int cache_capacity = kvm_mmu_split_nr_page_tables(chunk_size); + + if (chunk_size == 0) + return 0; + + lockdep_assert_held_write(&kvm->mmu_lock); + + cache = &kvm->arch.mmu.split_page_cache; + + do { + if (need_topup_split_page_cache_or_resched(kvm, + cache_capacity)) { + write_unlock(&kvm->mmu_lock); + cond_resched(); + /* Eager page splitting is best-effort. 
*/ + ret = __kvm_mmu_topup_memory_cache(cache, + cache_capacity, + cache_capacity); + write_lock(&kvm->mmu_lock); + if (ret) + break; + } + + pgt = kvm->arch.mmu.pgt; + if (!pgt) + return -EINVAL; + + next = __stage2_range_addr_end(addr, end, chunk_size); + ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, + cache, cache_capacity); + if (ret) + break; + } while (addr = next, addr != end); + + return ret; +} + #define stage2_apply_range_resched(kvm, addr, end, fn) \ stage2_apply_range(kvm, addr, end, fn, true) @@ -772,6 +850,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t void kvm_uninit_stage2_mmu(struct kvm *kvm) { kvm_free_stage2_pgd(&kvm->arch.mmu); + kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache); } static void stage2_unmap_memslot(struct kvm *kvm, @@ -999,6 +1078,31 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, stage2_wp_range(&kvm->arch.mmu, start, end); } +/** + * kvm_mmu_split_memory_region() - split the stage 2 blocks into PAGE_SIZE + * pages for memory slot + * @kvm: The KVM pointer + * @slot: The memory slot to split + * + * Acquires kvm->mmu_lock. Called with kvm->slots_lock mutex acquired, + * serializing operations for VM memory regions. + */ +static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot) +{ + struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memory_slot *memslot = id_to_memslot(slots, slot); + phys_addr_t start, end; + + lockdep_assert_held(&kvm->slots_lock); + + start = memslot->base_gfn << PAGE_SHIFT; + end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; + + write_lock(&kvm->mmu_lock); + kvm_mmu_split_huge_pages(kvm, start, end); + write_unlock(&kvm->mmu_lock); +} + /* * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected * dirty pages. @@ -1790,6 +1894,16 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, return; kvm_mmu_wp_memory_region(kvm, new->id); + kvm_mmu_split_memory_region(kvm, new->id); + } else { + /* + * Free any leftovers from the eager page splitting cache. Do + * this when deleting, moving, disabling dirty logging, or + * creating the memslot (a nop). Doing it for deletes makes + * sure we don't leak memory, and there's no need to keep the + * cache around for any of the other cases. 
+ */ + kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache); } }

From patchwork Wed Mar 1 21:09:26 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13156513
Date: Wed, 1 Mar 2023 21:09:26 +0000
In-Reply-To: <20230301210928.565562-1-ricarkol@google.com>
References: <20230301210928.565562-1-ricarkol@google.com>
Message-ID: <20230301210928.565562-11-ricarkol@google.com>
Subject: [PATCH v5 10/12] KVM: arm64: Open-code kvm_mmu_write_protect_pt_masked()
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com,
andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the functionality of kvm_mmu_write_protect_pt_masked() into its caller, kvm_arch_mmu_enable_log_dirty_pt_masked(). This will be used in a subsequent commit in order to share some of the code in kvm_arch_mmu_enable_log_dirty_pt_masked(). No functional change intended. Signed-off-by: Ricardo Koller --- arch/arm64/kvm/mmu.c | 42 +++++++++++++++--------------------------- 1 file changed, 15 insertions(+), 27 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 20458251c85e..8e9d612dda00 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1056,28 +1056,6 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot) kvm_flush_remote_tlbs(kvm); } -/** - * kvm_mmu_write_protect_pt_masked() - write protect dirty pages - * @kvm: The KVM pointer - * @slot: The memory slot associated with mask - * @gfn_offset: The gfn offset in memory slot - * @mask: The mask of dirty pages at offset 'gfn_offset' in this memory - * slot to be write protected - * - * Walks bits set in mask write protects the associated pte's. Caller must - * acquire kvm_mmu_lock. - */ -static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, - struct kvm_memory_slot *slot, - gfn_t gfn_offset, unsigned long mask) -{ - phys_addr_t base_gfn = slot->base_gfn + gfn_offset; - phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT; - phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; - - stage2_wp_range(&kvm->arch.mmu, start, end); -} - /** * kvm_mmu_split_memory_region() - split the stage 2 blocks into PAGE_SIZE * pages for memory slot @@ -1104,17 +1082,27 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot) } /* - * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected - * dirty pages. + * kvm_arch_mmu_enable_log_dirty_pt_masked() - enable dirty logging for selected pages. + * @kvm: The KVM pointer + * @slot: The memory slot associated with mask + * @gfn_offset: The gfn offset in memory slot + * @mask: The mask of pages at offset 'gfn_offset' in this memory + * slot to enable dirty logging on * - * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to - * enable dirty logging for them. + * Writes protect selected pages to enable dirty logging for them. Caller must + * acquire kvm->mmu_lock. 
*/ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn_offset, unsigned long mask) { - kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask); + phys_addr_t base_gfn = slot->base_gfn + gfn_offset; + phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT; + phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; + + lockdep_assert_held_write(&kvm->mmu_lock); + + stage2_wp_range(&kvm->arch.mmu, start, end); } static void kvm_send_hwpoison_signal(unsigned long address, short lsb)

From patchwork Wed Mar 1 21:09:27 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13156514
Date: Wed, 1 Mar 2023 21:09:27 +0000
In-Reply-To: <20230301210928.565562-1-ricarkol@google.com>
References: <20230301210928.565562-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog Message-ID: <20230301210928.565562-12-ricarkol@google.com> Subject: [PATCH v5 11/12] KVM: arm64: Split huge pages during KVM_CLEAR_DIRTY_LOG From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This is the arm64 counterpart of commit cb00a70bd4b7 ("KVM: x86/mmu: Split huge pages mapped by the TDP MMU during KVM_CLEAR_DIRTY_LOG"), which has the benefit of splitting the cost of splitting a memslot across multiple ioctls. Split huge pages on the range specified using KVM_CLEAR_DIRTY_LOG. And do not split when enabling dirty logging if KVM_DIRTY_LOG_INITIALLY_SET is set. Signed-off-by: Ricardo Koller --- arch/arm64/kvm/mmu.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 8e9d612dda00..5dae0e6a697f 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1089,8 +1089,8 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot) * @mask: The mask of pages at offset 'gfn_offset' in this memory * slot to enable dirty logging on * - * Writes protect selected pages to enable dirty logging for them. Caller must - * acquire kvm->mmu_lock. + * Splits selected pages to PAGE_SIZE and then writes protect them to enable + * dirty logging for them. Caller must acquire kvm->mmu_lock. */ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, struct kvm_memory_slot *slot, @@ -1103,6 +1103,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, lockdep_assert_held_write(&kvm->mmu_lock); stage2_wp_range(&kvm->arch.mmu, start, end); + + /* + * If initially-all-set mode is not set, then huge-pages were already + * split when enabling dirty logging: no need to do it again. + */ + if (kvm_dirty_log_manual_protect_and_init_set(kvm)) + kvm_mmu_split_huge_pages(kvm, start, end); } static void kvm_send_hwpoison_signal(unsigned long address, short lsb) @@ -1889,7 +1896,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, * this when deleting, moving, disabling dirty logging, or * creating the memslot (a nop). Doing it for deletes makes * sure we don't leak memory, and there's no need to keep the - * cache around for any of the other cases. + * cache around for any of the other cases. Keeping the cache + * is useful for successive KVM_CLEAR_DIRTY_LOG calls, which is + * not handled in this function. 
+ */ kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache); }

From patchwork Wed Mar 1 21:09:28 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13156515
Date: Wed, 1 Mar 2023 21:09:28 +0000
In-Reply-To: <20230301210928.565562-1-ricarkol@google.com>
References: <20230301210928.565562-1-ricarkol@google.com>
Message-ID: <20230301210928.565562-13-ricarkol@google.com>
Subject: [PATCH v5 12/12] KVM: arm64: Use local TLBI on permission relaxation
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com,
andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Marc Zyngier Broadcasted TLB invalidations (TLBI) are usually less performant than their local variant. In particular, we observed some implementations that take millliseconds to complete parallel broadcasted TLBIs. It's safe to use local, non-shareable, TLBIs when relaxing permissions on a PTE in the KVM case for a couple of reasons. First, according to the ARM Arm (DDI 0487H.a D5-4913), permission relaxation does not need break-before-make. Second, KVM does not set the VTTBR_EL2.CnP bit, so each PE has its own TLB entry for the same page. KVM could tolerate that when doing permission relaxation (i.e., not having changes broadcasted to all PEs). Signed-off-by: Marc Zyngier Signed-off-by: Ricardo Koller --- arch/arm64/include/asm/kvm_asm.h | 4 +++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 10 ++++++ arch/arm64/kvm/hyp/nvhe/tlb.c | 54 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/pgtable.c | 2 +- arch/arm64/kvm/hyp/vhe/tlb.c | 32 ++++++++++++++++++ 5 files changed, 101 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 43c3bc0f9544..bb17b2ead4c7 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -68,6 +68,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa, + __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa_nsh, __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid, __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context, __KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff, @@ -225,6 +226,9 @@ extern void __kvm_flush_vm_context(void); extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu); extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa, int level); +extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, + phys_addr_t ipa, + int level); extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu); extern void __kvm_timer_set_cntvoff(u64 cntvoff); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 728e01d4536b..c6bf1e49ca93 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -125,6 +125,15 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt) __kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level); } +static void handle___kvm_tlb_flush_vmid_ipa_nsh(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); + DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2); + DECLARE_REG(int, level, host_ctxt, 3); + + __kvm_tlb_flush_vmid_ipa_nsh(kern_hyp_va(mmu), ipa, level); +} + static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); @@ -315,6 +324,7 @@ static const hcall_t host_hcall[] = { HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa), + HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa_nsh), HANDLE_FUNC(__kvm_tlb_flush_vmid), HANDLE_FUNC(__kvm_flush_cpu_context), HANDLE_FUNC(__kvm_timer_set_cntvoff), diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index d296d617f589..ef2b70587f93 100644 --- 
a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -109,6 +109,60 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, __tlb_switch_to_host(&cxt); } +void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, + phys_addr_t ipa, int level) +{ + struct tlb_inv_context cxt; + + dsb(nshst); + + /* Switch to requested VMID */ + __tlb_switch_to_guest(mmu, &cxt); + + /* + * We could do so much better if we had the VA as well. + * Instead, we invalidate Stage-2 for this IPA, and the + * whole of Stage-1. Weep... + */ + ipa >>= 12; + __tlbi_level(ipas2e1, ipa, level); + + /* + * We have to ensure completion of the invalidation at Stage-2, + * since a table walk on another CPU could refill a TLB with a + * complete (S1 + S2) walk based on the old Stage-2 mapping if + * the Stage-1 invalidation happened first. + */ + dsb(nsh); + __tlbi(vmalle1); + dsb(nsh); + isb(); + + /* + * If the host is running at EL1 and we have a VPIPT I-cache, + * then we must perform I-cache maintenance at EL2 in order for + * it to have an effect on the guest. Since the guest cannot hit + * I-cache lines allocated with a different VMID, we don't need + * to worry about junk out of guest reset (we nuke the I-cache on + * VMID rollover), but we do need to be careful when remapping + * executable pages for the same guest. This can happen when KSM + * takes a CoW fault on an executable page, copies the page into + * a page that was previously mapped in the guest and then needs + * to invalidate the guest view of the I-cache for that page + * from EL1. To solve this, we invalidate the entire I-cache when + * unmapping a page from a guest if we have a VPIPT I-cache but + * the host is running at EL1. As above, we could do better if + * we had the VA. + * + * The moral of this story is: if you have a VPIPT I-cache, then + * you should be running with VHE enabled. + */ + if (icache_is_vpipt()) + icache_inval_all_pou(); + + __tlb_switch_to_host(&cxt); +} + void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) { struct tlb_inv_context cxt; diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index 75726edba2f3..540f55a14b80 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -1148,7 +1148,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr, ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, KVM_PGTABLE_WALK_SHARED); if (!ret) - kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, pgt->mmu, addr, level); + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level); return ret; } diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c index 24cef9b87f9e..e69da550cdc5 100644 --- a/arch/arm64/kvm/hyp/vhe/tlb.c +++ b/arch/arm64/kvm/hyp/vhe/tlb.c @@ -111,6 +111,38 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, __tlb_switch_to_host(&cxt); } +void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu, + phys_addr_t ipa, int level) +{ + struct tlb_inv_context cxt; + + dsb(nshst); + + /* Switch to requested VMID */ + __tlb_switch_to_guest(mmu, &cxt); + + /* + * We could do so much better if we had the VA as well. + * Instead, we invalidate Stage-2 for this IPA, and the + * whole of Stage-1. Weep... + */ + ipa >>= 12; + __tlbi_level(ipas2e1, ipa, level); + + /* + * We have to ensure completion of the invalidation at Stage-2, + * since a table walk on another CPU could refill a TLB with a + * complete (S1 + S2) walk based on the old Stage-2 mapping if + * the Stage-1 invalidation happened first. 
+ */ + dsb(nsh); + __tlbi(vmalle1); + dsb(nsh); + isb(); + + __tlb_switch_to_host(&cxt); +} + void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu) { struct tlb_inv_context cxt;