From patchwork Tue Mar 7 03:45:44 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13162725
Date: Tue, 7 Mar 2023 03:45:44 +0000
In-Reply-To: <20230307034555.39733-1-ricarkol@google.com>
References: <20230307034555.39733-1-ricarkol@google.com>
Message-ID: <20230307034555.39733-2-ricarkol@google.com>
Subject: [PATCH v6 01/12] KVM: arm64: Rename free_removed to free_unlinked
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com,
    yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com,
    catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com,
    gshan@redhat.com, reijiw@google.com, rananta@google.com,
    bgardon@google.com, ricarkol@gmail.com, Ricardo Koller,
    Oliver Upton, Shaoqin Huang

Normalize on referring to tables outside of an active paging structure
as 'unlinked'.

A subsequent change to KVM will add support for building page tables
that are not part of an active paging structure. The existing
'removed_table' terminology is quite clunky when applied in this
context.

No functional change intended.

Signed-off-by: Ricardo Koller
Reviewed-by: Oliver Upton
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_pgtable.h  |  8 ++++----
 arch/arm64/kvm/hyp/nvhe/mem_protect.c |  6 +++---
 arch/arm64/kvm/hyp/pgtable.c          |  6 +++---
 arch/arm64/kvm/mmu.c                  | 10 +++++-----
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 4cd6762bda80..26a4293726c1 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -104,7 +104,7 @@ static inline bool kvm_level_supports_block_mapping(u32 level)
  *                      allocation is physically contiguous.
  * @free_pages_exact:   Free an exact number of memory pages previously
  *                      allocated by zalloc_pages_exact.
- * @free_removed_table:  Free a removed paging structure by unlinking and
+ * @free_unlinked_table: Free an unlinked paging structure by unlinking and
  *                      dropping references.
  * @get_page:           Increment the refcount on a page.
  * @put_page:           Decrement the refcount on a page. When the
@@ -124,7 +124,7 @@ struct kvm_pgtable_mm_ops {
        void*           (*zalloc_page)(void *arg);
        void*           (*zalloc_pages_exact)(size_t size);
        void            (*free_pages_exact)(void *addr, size_t size);
-       void            (*free_removed_table)(void *addr, u32 level);
+       void            (*free_unlinked_table)(void *addr, u32 level);
        void            (*get_page)(void *addr);
        void            (*put_page)(void *addr);
        int             (*page_count)(void *addr);
@@ -440,7 +440,7 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
 
 /**
- * kvm_pgtable_stage2_free_removed() - Free a removed stage-2 paging structure.
+ * kvm_pgtable_stage2_free_unlinked() - Free an unlinked stage-2 paging structure.
  * @mm_ops:    Memory management callbacks.
  * @pgtable:   Unlinked stage-2 paging structure to be freed.
  * @level:     Level of the stage-2 paging structure to be freed.
@@ -448,7 +448,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  * The page-table is assumed to be unreachable by any hardware walkers prior to
  * freeing and therefore no TLB invalidation is performed.
  */
-void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
+void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
 
 /**
  * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 552653fa18be..b030170d803b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -91,9 +91,9 @@ static void host_s2_put_page(void *addr)
        hyp_put_page(&host_s2_pool, addr);
 }
 
-static void host_s2_free_removed_table(void *addr, u32 level)
+static void host_s2_free_unlinked_table(void *addr, u32 level)
 {
-       kvm_pgtable_stage2_free_removed(&host_mmu.mm_ops, addr, level);
+       kvm_pgtable_stage2_free_unlinked(&host_mmu.mm_ops, addr, level);
 }
 
 static int prepare_s2_pool(void *pgt_pool_base)
@@ -110,7 +110,7 @@ static int prepare_s2_pool(void *pgt_pool_base)
        host_mmu.mm_ops = (struct kvm_pgtable_mm_ops) {
                .zalloc_pages_exact = host_s2_zalloc_pages_exact,
                .zalloc_page = host_s2_zalloc_page,
-               .free_removed_table = host_s2_free_removed_table,
+               .free_unlinked_table = host_s2_free_unlinked_table,
                .phys_to_virt = hyp_phys_to_virt,
                .virt_to_phys = hyp_virt_to_phys,
                .page_count = hyp_page_count,
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3d61bd3e591d..a3246d6cddec 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -860,7 +860,7 @@ static int stage2_map_walk_table_pre(const struct kvm_pgtable_visit_ctx *ctx,
        if (ret)
                return ret;
 
-       mm_ops->free_removed_table(childp, ctx->level);
+       mm_ops->free_unlinked_table(childp, ctx->level);
        return 0;
 }
 
@@ -905,7 +905,7 @@ static int stage2_map_walk_leaf(const struct kvm_pgtable_visit_ctx *ctx,
  * The TABLE_PRE callback runs for table entries on the way down, looking
  * for table entries which we could conceivably replace with a block entry
  * for this mapping. If it finds one it replaces the entry and calls
- * kvm_pgtable_mm_ops::free_removed_table() to tear down the detached table.
+ * kvm_pgtable_mm_ops::free_unlinked_table() to tear down the detached table.
  *
  * Otherwise, the LEAF callback performs the mapping at the existing leaves
  * instead.
@@ -1276,7 +1276,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
        pgt->pgd = NULL;
 }
 
-void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level)
+void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level)
 {
        kvm_pteref_t ptep = (kvm_pteref_t)pgtable;
        struct kvm_pgtable_walker walker = {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7113587222ff..efdaab3f154d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -131,21 +131,21 @@ static void kvm_s2_free_pages_exact(void *virt, size_t size)
 
 static struct kvm_pgtable_mm_ops kvm_s2_mm_ops;
 
-static void stage2_free_removed_table_rcu_cb(struct rcu_head *head)
+static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
 {
        struct page *page = container_of(head, struct page, rcu_head);
        void *pgtable = page_to_virt(page);
        u32 level = page_private(page);
 
-       kvm_pgtable_stage2_free_removed(&kvm_s2_mm_ops, pgtable, level);
+       kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
 }
 
-static void stage2_free_removed_table(void *addr, u32 level)
+static void stage2_free_unlinked_table(void *addr, u32 level)
 {
        struct page *page = virt_to_page(addr);
 
        set_page_private(page, (unsigned long)level);
-       call_rcu(&page->rcu_head, stage2_free_removed_table_rcu_cb);
+       call_rcu(&page->rcu_head, stage2_free_unlinked_table_rcu_cb);
 }
 
 static void kvm_host_get_page(void *addr)
@@ -682,7 +682,7 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
        .zalloc_page = stage2_memcache_zalloc_page,
        .zalloc_pages_exact = kvm_s2_zalloc_pages_exact,
        .free_pages_exact = kvm_s2_free_pages_exact,
-       .free_removed_table = stage2_free_removed_table,
+       .free_unlinked_table = stage2_free_unlinked_table,
        .get_page = kvm_host_get_page,
        .put_page = kvm_s2_put_page,
        .page_count = kvm_host_page_count,
From patchwork Tue Mar 7 03:45:45 2023
X-Patchwork-Id: 13162726
Date: Tue, 7 Mar 2023 03:45:45 +0000
Message-ID: <20230307034555.39733-3-ricarkol@google.com>
Subject: [PATCH v6 02/12] KVM: arm64: Add KVM_PGTABLE_WALK ctx->flags for skipping BBM and CMO
From: Ricardo Koller

Add two flags to kvm_pgtable_visit_ctx, KVM_PGTABLE_WALK_SKIP_BBM and
KVM_PGTABLE_WALK_SKIP_CMO, to indicate that the walk should not
perform break-before-make (BBM) nor cache maintenance operations
(CMO). This will be used by a future commit to create unlinked tables
that are not accessible to the HW page-table walker. This is safe, as
these unlinked tables are not visible to the HW page-table walker.

Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_pgtable.h | 18 ++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 27 ++++++++++++++++-----------
 2 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 26a4293726c1..c7a269cad053 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -195,6 +195,12 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
  *                                     with other software walkers.
  * @KVM_PGTABLE_WALK_HANDLE_FAULT:     Indicates the page-table walk was
  *                                     invoked from a fault handler.
+ * @KVM_PGTABLE_WALK_SKIP_BBM:         Visit and update table entries
+ *                                     without Break-before-make
+ *                                     requirements.
+ * @KVM_PGTABLE_WALK_SKIP_CMO:         Visit and update table entries
+ *                                     without Cache maintenance
+ *                                     operations required.
  */
 enum kvm_pgtable_walk_flags {
        KVM_PGTABLE_WALK_LEAF                   = BIT(0),
@@ -202,6 +208,8 @@ enum kvm_pgtable_walk_flags {
        KVM_PGTABLE_WALK_TABLE_POST             = BIT(2),
        KVM_PGTABLE_WALK_SHARED                 = BIT(3),
        KVM_PGTABLE_WALK_HANDLE_FAULT           = BIT(4),
+       KVM_PGTABLE_WALK_SKIP_BBM               = BIT(5),
+       KVM_PGTABLE_WALK_SKIP_CMO               = BIT(6),
 };
 
 struct kvm_pgtable_visit_ctx {
@@ -223,6 +231,16 @@ static inline bool kvm_pgtable_walk_shared(const struct kvm_pgtable_visit_ctx *c
        return ctx->flags & KVM_PGTABLE_WALK_SHARED;
 }
 
+static inline bool kvm_pgtable_walk_skip_bbm(const struct kvm_pgtable_visit_ctx *ctx)
+{
+       return ctx->flags & KVM_PGTABLE_WALK_SKIP_BBM;
+}
+
+static inline bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx)
+{
+       return ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO;
+}
+
 /**
  * struct kvm_pgtable_walker - Hook into a page-table walk.
  * @cb:        Callback function to invoke during the walk.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index a3246d6cddec..4f703cc4cb03 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -741,14 +741,17 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
        if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
                return false;
 
-       /*
-        * Perform the appropriate TLB invalidation based on the evicted pte
-        * value (if any).
-        */
-       if (kvm_pte_table(ctx->old, ctx->level))
-               kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
-       else if (kvm_pte_valid(ctx->old))
-               kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+       if (!kvm_pgtable_walk_skip_bbm(ctx)) {
+               /*
+                * Perform the appropriate TLB invalidation based on the
+                * evicted pte value (if any).
+                */
+               if (kvm_pte_table(ctx->old, ctx->level))
+                       kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+               else if (kvm_pte_valid(ctx->old))
+                       kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+                                    ctx->addr, ctx->level);
+       }
 
        if (stage2_pte_is_counted(ctx->old))
                mm_ops->put_page(ctx->ptep);
@@ -832,11 +835,13 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
                return -EAGAIN;
 
        /* Perform CMOs before installation of the guest stage-2 PTE */
-       if (mm_ops->dcache_clean_inval_poc && stage2_pte_cacheable(pgt, new))
+       if (!kvm_pgtable_walk_skip_cmo(ctx) && mm_ops->dcache_clean_inval_poc &&
+           stage2_pte_cacheable(pgt, new))
                mm_ops->dcache_clean_inval_poc(kvm_pte_follow(new, mm_ops),
-                                               granule);
+                                              granule);
 
-       if (mm_ops->icache_inval_pou && stage2_pte_executable(new))
+       if (!kvm_pgtable_walk_skip_cmo(ctx) && mm_ops->icache_inval_pou &&
+           stage2_pte_executable(new))
                mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
 
        stage2_make_pte(ctx, new);
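To illustrate how the two new flags are meant to be combined by a caller,
here is a minimal sketch (not part of this patch; the callback name
visit_unlinked is hypothetical) of a walker that opts out of BBM and CMOs
for a table that is not yet reachable by hardware:

/* Hypothetical walker over a not-yet-linked table: both skip flags set. */
static int visit_unlinked(const struct kvm_pgtable_visit_ctx *ctx,
                          enum kvm_pgtable_walk_flags visit)
{
        /* For this walker's context, both helpers added above return true. */
        WARN_ON(!kvm_pgtable_walk_skip_bbm(ctx));
        WARN_ON(!kvm_pgtable_walk_skip_cmo(ctx));
        return 0;
}

static struct kvm_pgtable_walker unlinked_walker = {
        .cb     = visit_unlinked,
        .flags  = KVM_PGTABLE_WALK_LEAF |
                  KVM_PGTABLE_WALK_SKIP_BBM |
                  KVM_PGTABLE_WALK_SKIP_CMO,
};

This is the shape the create-unlinked helper in the next patch takes:
because nothing can observe the tables mid-update, eliding TLB
invalidation and cache maintenance there is safe.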
From patchwork Tue Mar 7 03:45:46 2023
X-Patchwork-Id: 13162727
Date: Tue, 7 Mar 2023 03:45:46 +0000
Message-ID: <20230307034555.39733-4-ricarkol@google.com>
Subject: [PATCH v6 03/12] KVM: arm64: Add helper for creating unlinked stage2 subtrees
From: Ricardo Koller

Add a stage2 helper, kvm_pgtable_stage2_create_unlinked(), for
creating unlinked tables (the counterpart of
kvm_pgtable_stage2_free_unlinked()). Creating an unlinked table is
useful for splitting PMD and PUD blocks into subtrees of PAGE_SIZE
PTEs. For example, a PUD can be split into PAGE_SIZE PTEs by first
creating a fully populated tree, and then using it to replace the PUD
in a single step. This will be used in a subsequent commit for eager
huge-page splitting (a dirty-logging optimization).

No functional change intended.
Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_pgtable.h | 28 +++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 46 ++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c7a269cad053..b7b3fc0fa7a5 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -468,6 +468,34 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  */
 void kvm_pgtable_stage2_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
 
+/**
+ * kvm_pgtable_stage2_create_unlinked() - Create an unlinked stage-2 paging structure.
+ * @pgt:       Page-table structure initialised by kvm_pgtable_stage2_init*().
+ * @phys:      Physical address of the memory to map.
+ * @level:     Starting level of the stage-2 paging structure to be created.
+ * @prot:      Permissions and attributes for the mapping.
+ * @mc:        Cache of pre-allocated and zeroed memory from which to allocate
+ *             page-table pages.
+ * @force_pte: Force mappings to PAGE_SIZE granularity.
+ *
+ * Returns an unlinked page-table tree. If @force_pte is true or
+ * @level is 2 (the PMD level), then the tree is mapped up to the
+ * PAGE_SIZE leaf PTE; the tree is mapped up one level otherwise.
+ * This new page-table tree is not reachable (i.e., it is unlinked)
+ * from the root pgd and it's therefore unreachable by the hardware
+ * page-table walker. No TLB invalidation or CMOs are performed.
+ *
+ * If device attributes are not explicitly requested in @prot, then the
+ * mapping will be normal, cacheable.
+ *
+ * Return: The fully populated (unlinked) stage-2 paging structure, or
+ * an ERR_PTR(error) on failure.
+ */
+kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
+                                             u64 phys, u32 level,
+                                             enum kvm_pgtable_prot prot,
+                                             void *mc, bool force_pte);
+
 /**
  * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
  * @pgt:       Page-table structure initialised by kvm_pgtable_stage2_init*().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4f703cc4cb03..6bdfcb671b32 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1212,6 +1212,52 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
        return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
+kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
+                                             u64 phys, u32 level,
+                                             enum kvm_pgtable_prot prot,
+                                             void *mc, bool force_pte)
+{
+       struct stage2_map_data map_data = {
+               .phys           = phys,
+               .mmu            = pgt->mmu,
+               .memcache       = mc,
+               .force_pte      = force_pte,
+       };
+       struct kvm_pgtable_walker walker = {
+               .cb             = stage2_map_walker,
+               .flags          = KVM_PGTABLE_WALK_LEAF |
+                                 KVM_PGTABLE_WALK_SKIP_BBM |
+                                 KVM_PGTABLE_WALK_SKIP_CMO,
+               .arg            = &map_data,
+       };
+       /* .addr (the IPA) is irrelevant for an unlinked table */
+       struct kvm_pgtable_walk_data data = {
+               .walker = &walker,
+               .addr   = 0,
+               .end    = kvm_granule_size(level),
+       };
+       struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
+       kvm_pte_t *pgtable;
+       int ret;
+
+       ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
+       if (ret)
+               return ERR_PTR(ret);
+
+       pgtable = mm_ops->zalloc_page(mc);
+       if (!pgtable)
+               return ERR_PTR(-ENOMEM);
+
+       ret = __kvm_pgtable_walk(&data, mm_ops, (kvm_pteref_t)pgtable,
+                                level + 1);
+       if (ret) {
+               kvm_pgtable_stage2_free_unlinked(mm_ops, pgtable, level);
+               mm_ops->put_page(pgtable);
+               return ERR_PTR(ret);
+       }
+
+       return pgtable;
+}
 
 int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
                              struct kvm_pgtable_mm_ops *mm_ops,
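As a usage sketch (not part of the patch; split_block is a hypothetical
name, and a valid block PTE blk_pte plus a sufficiently stocked memcache
mc are assumed), the intended calling pattern is roughly what the
splitting code in the next patch does:

static kvm_pte_t *split_block(struct kvm_pgtable *pgt, kvm_pte_t blk_pte,
                              u32 level, void *mc)
{
        u64 phys = kvm_pte_to_phys(blk_pte);
        enum kvm_pgtable_prot prot = kvm_pgtable_stage2_pte_prot(blk_pte);
        kvm_pte_t *childp;

        /* Build the replacement tree off to the side: no BBM, no CMOs. */
        childp = kvm_pgtable_stage2_create_unlinked(pgt, phys, level, prot,
                                                    mc, /*force_pte=*/true);
        if (IS_ERR(childp))
                return childp;          /* e.g. ERR_PTR(-ENOMEM) */

        /* ...then break the block PTE and install childp in one step. */
        return childp;
}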
From patchwork Tue Mar 7 03:45:47 2023
X-Patchwork-Id: 13162728
Date: Tue, 7 Mar 2023 03:45:47 +0000
Message-ID: <20230307034555.39733-5-ricarkol@google.com>
Subject: [PATCH v6 04/12] KVM: arm64: Add kvm_pgtable_stage2_split()
From: Ricardo Koller

Add a new stage2 function, kvm_pgtable_stage2_split(), for splitting a
range of huge pages. This will be used for eager-splitting huge pages
into PAGE_SIZE pages. The goal is to avoid having to split huge pages
on write-protection faults, and instead use this function to do it
ahead of time for large ranges (e.g., all guest memory in 1G chunks at
a time).

No functional change intended. This new function will be used in a
subsequent commit.

Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_pgtable.h |  30 +++++++
 arch/arm64/kvm/hyp/pgtable.c         | 113 +++++++++++++++++++++++++++
 2 files changed, 143 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index b7b3fc0fa7a5..40e323a718fc 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -665,6 +665,36 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
  */
 int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
 
+/**
+ * kvm_pgtable_stage2_split() - Split a range of huge pages into leaf PTEs
+ *                             pointing to PAGE_SIZE guest pages.
+ * @pgt:         Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:        Intermediate physical address from which to split.
+ * @size:        Size of the range.
+ * @mc:          Cache of pre-allocated and zeroed memory from which to
+ *               allocate page-table pages.
+ * @mc_capacity: Number of pages in @mc.
+ *
+ * @addr and the end (@addr + @size) are effectively aligned down and up to
+ * the top level huge-page block size. This is an example using 1GB
+ * huge-pages and 4KB granules.
+ *
+ *                                [---input range---]
+ *                                :                 :
+ * [--1G block pte--][--1G block pte--][--1G block pte--][--1G block pte--]
+ *                                :                 :
+ *                  [--2MB--][--2MB--][--2MB--][--2MB--]
+ *                                :                 :
+ *                  [ ][ ][:][ ][ ][ ][ ][ ][:][ ][ ][ ]
+ *                                :                 :
+ *
+ * Return: 0 on success, negative error code on failure. Note that
+ * kvm_pgtable_stage2_split() is best effort: it tries to break as many
+ * blocks in the input range as allowed by @mc_capacity.
+ */
+int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            void *mc, u64 mc_capacity);
+
 /**
  * kvm_pgtable_walk() - Walk a page-table.
  * @pgt:       Page-table structure initialised by kvm_pgtable_*_init().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 6bdfcb671b32..3149b98d1701 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1259,6 +1259,119 @@ kvm_pte_t *kvm_pgtable_stage2_create_unlinked(struct kvm_pgtable *pgt,
        return pgtable;
 }
 
+struct stage2_split_data {
+       struct kvm_s2_mmu       *mmu;
+       void                    *memcache;
+       u64                     mc_capacity;
+};
+
+/*
+ * Get the number of page-tables needed to replace a block with a
+ * fully populated tree, up to the PTE level, at a particular level.
+ */
+static inline int stage2_block_get_nr_page_tables(u32 level)
+{
+       if (WARN_ON_ONCE(level < KVM_PGTABLE_MIN_BLOCK_LEVEL ||
+                        level >= KVM_PGTABLE_MAX_LEVELS))
+               return -EINVAL;
+
+       switch (level) {
+       case 1:
+               return PTRS_PER_PTE + 1;
+       case 2:
+               return 1;
+       case 3:
+               return 0;
+       default:
+               return -EINVAL;
+       };
+}
+
+static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
+                              enum kvm_pgtable_walk_flags visit)
+{
+       struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+       struct stage2_split_data *data = ctx->arg;
+       kvm_pte_t pte = ctx->old, new, *childp;
+       enum kvm_pgtable_prot prot;
+       void *mc = data->memcache;
+       u32 level = ctx->level;
+       bool force_pte;
+       int nr_pages;
+       u64 phys;
+
+       /* No huge-pages exist at the last level */
+       if (level == KVM_PGTABLE_MAX_LEVELS - 1)
+               return 0;
+
+       /* We only split valid block mappings */
+       if (!kvm_pte_valid(pte))
+               return 0;
+
+       nr_pages = stage2_block_get_nr_page_tables(level);
+       if (nr_pages < 0)
+               return nr_pages;
+
+       if (data->mc_capacity >= nr_pages) {
+               /* Build a tree mapped down to the PTE granularity. */
+               force_pte = true;
+       } else {
+               /*
+                * Don't force PTEs. This requires a single page of PMDs at the
+                * PUD level, or a single page of PTEs at the PMD level. If we
+                * are at the PUD level, the PTEs will be created recursively.
+                */
+               force_pte = false;
+               nr_pages = 1;
+       }
+
+       if (data->mc_capacity < nr_pages)
+               return -ENOMEM;
+
+       phys = kvm_pte_to_phys(pte);
+       prot = kvm_pgtable_stage2_pte_prot(pte);
+
+       childp = kvm_pgtable_stage2_create_unlinked(data->mmu->pgt, phys,
+                                                   level, prot, mc, force_pte);
+       if (IS_ERR(childp))
+               return PTR_ERR(childp);
+
+       if (!stage2_try_break_pte(ctx, data->mmu)) {
+               kvm_pgtable_stage2_free_unlinked(mm_ops, childp, level);
+               mm_ops->put_page(childp);
+               return -EAGAIN;
+       }
+
+       /*
+        * Note, the contents of the page table are guaranteed to be made
+        * visible before the new PTE is assigned because stage2_make_pte()
+        * writes the PTE using smp_store_release().
+        */
+       new = kvm_init_table_pte(childp, mm_ops);
+       stage2_make_pte(ctx, new);
+       dsb(ishst);
+       data->mc_capacity -= nr_pages;
+       return 0;
+}
+
+int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
+                            void *mc, u64 mc_capacity)
+{
+       struct stage2_split_data split_data = {
+               .mmu            = pgt->mmu,
+               .memcache       = mc,
+               .mc_capacity    = mc_capacity,
+       };
+
+       struct kvm_pgtable_walker walker = {
+               .cb     = stage2_split_walker,
+               .flags  = KVM_PGTABLE_WALK_LEAF,
+               .arg    = &split_data,
+       };
+
+       return kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
                              struct kvm_pgtable_mm_ops *mm_ops,
                              enum kvm_pgtable_stage2_flags flags,
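To make the accounting in stage2_block_get_nr_page_tables() concrete,
here is a worked example assuming 4K granules (so PTRS_PER_PTE == 512):

/*
 * Splitting one level-1 (1G) block down to PTEs needs:
 *   1 level-2 table (512 PMD entries)  +
 *   512 level-3 tables (one per PMD)   = PTRS_PER_PTE + 1 = 513 pages.
 * Splitting one level-2 (2M) block needs a single level-3 table: 1 page.
 * A level-3 entry is already a PTE, so nothing is needed: 0 pages.
 *
 * Hence a memcache capacity of at least 513 pages lets the walker
 * force_pte a 1G block in one shot; below that it falls back to
 * splitting the 1G block into 2M blocks (1 page), and the resulting
 * 2M entries are split further on later visits as capacity allows.
 */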
From patchwork Tue Mar 7 03:45:48 2023
X-Patchwork-Id: 13162729
Date: Tue, 7 Mar 2023 03:45:48 +0000
Message-ID: <20230307034555.39733-6-ricarkol@google.com>
Subject: [PATCH v6 05/12] KVM: arm64: Refactor kvm_arch_commit_memory_region()
From: Ricardo Koller

Refactor kvm_arch_commit_memory_region() in preparation for a future
commit, making it cleaner and easier to follow. It also ends up looking
more like its x86 counterpart, kvm_mmu_slot_apply_flags().

No functional change intended.

Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 arch/arm64/kvm/mmu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index efdaab3f154d..37d7d2aa472a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1761,20 +1761,27 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
                                   const struct kvm_memory_slot *new,
                                   enum kvm_mr_change change)
 {
+       bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES;
+
        /*
         * At this point memslot has been committed and there is an
         * allocated dirty_bitmap[], dirty pages will be tracked while the
         * memory slot is write protected.
         */
-       if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+       if (log_dirty_pages) {
+
+               if (change == KVM_MR_DELETE)
+                       return;
+
                /*
                 * If we're with initial-all-set, we don't need to write
                 * protect any pages because they're all reported as dirty.
                 * Huge pages and normal pages will be write protect gradually.
                 */
-               if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
-                       kvm_mmu_wp_memory_region(kvm, new->id);
-               }
+               if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+                       return;
+
+               kvm_mmu_wp_memory_region(kvm, new->id);
        }
 }

From patchwork Tue Mar 7 03:45:49 2023
X-Patchwork-Id: 13162730
Date: Tue, 7 Mar 2023 03:45:49 +0000
Message-ID: <20230307034555.39733-7-ricarkol@google.com>
Subject: [PATCH v6 06/12] KVM: arm64: Add kvm_uninit_stage2_mmu()
From: Ricardo Koller
Add kvm_uninit_stage2_mmu() and move kvm_free_stage2_pgd() into it. A
future commit will add more work to do inside of
kvm_uninit_stage2_mmu().

No functional change intended.

Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 arch/arm64/include/asm/kvm_mmu.h | 1 +
 arch/arm64/kvm/mmu.c             | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 083cc47dca08..7d173da5bd51 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -168,6 +168,7 @@ void __init free_hyp_pgds(void);
 void stage2_unmap_vm(struct kvm *kvm);
 int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
+void kvm_uninit_stage2_mmu(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
                          phys_addr_t pa, unsigned long size, bool writable);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 37d7d2aa472a..a2800e5c4271 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -767,6 +767,11 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
        return err;
 }
 
+void kvm_uninit_stage2_mmu(struct kvm *kvm)
+{
+       kvm_free_stage2_pgd(&kvm->arch.mmu);
+}
+
 static void stage2_unmap_memslot(struct kvm *kvm,
                                 struct kvm_memory_slot *memslot)
 {
@@ -1855,7 +1860,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
-       kvm_free_stage2_pgd(&kvm->arch.mmu);
+       kvm_uninit_stage2_mmu(kvm);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
From patchwork Tue Mar 7 03:45:50 2023
X-Patchwork-Id: 13162731
Date: Tue, 7 Mar 2023 03:45:50 +0000
Message-ID: <20230307034555.39733-8-ricarkol@google.com>
Subject: [PATCH v6 07/12] KVM: arm64: Export kvm_are_all_memslots_empty()
From: Ricardo Koller

Export kvm_are_all_memslots_empty(). This will be used by a future
commit when checking before setting a capability.

No functional change intended.
Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 include/linux/kvm_host.h | 2 ++
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8ada23756b0e..c6fa634f236d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -990,6 +990,8 @@ static inline bool kvm_memslots_empty(struct kvm_memslots *slots)
        return RB_EMPTY_ROOT(&slots->gfn_tree);
 }
 
+bool kvm_are_all_memslots_empty(struct kvm *kvm);
+
 #define kvm_for_each_memslot(memslot, bkt, slots)                            \
        hash_for_each(slots->id_hash, bkt, memslot, id_node[slots->node_idx]) \
                if (WARN_ON_ONCE(!memslot->npages)) {                         \
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d255964ec331..897b000787be 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4596,7 +4596,7 @@ int __attribute__((weak)) kvm_vm_ioctl_enable_cap(struct kvm *kvm,
        return -EINVAL;
 }
 
-static bool kvm_are_all_memslots_empty(struct kvm *kvm)
+bool kvm_are_all_memslots_empty(struct kvm *kvm)
 {
        int i;
From patchwork Tue Mar 7 03:45:51 2023
X-Patchwork-Id: 13162732
Date: Tue, 7 Mar 2023 03:45:51 +0000
Message-ID: <20230307034555.39733-9-ricarkol@google.com>
Subject: [PATCH v6 08/12] KVM: arm64: Add KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE
From: Ricardo Koller

Add a capability for userspace to specify the eager split chunk size.
The chunk size specifies how many pages to break at a time, using a
single allocation. The bigger the chunk size, the more pages that need
to be allocated ahead of time.

Suggested-by: Oliver Upton
Signed-off-by: Ricardo Koller
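For illustration, here is a userspace sketch of enabling and querying the
capability (not part of the patch; vm_fd is assumed to come from
KVM_CREATE_VM, and the value must be set before any memslot is created):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: set a 2M eager-split chunk size before creating memslots. */
static int set_eager_split_chunk(int vm_fd)
{
        struct kvm_enable_cap cap = {
                .cap = KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE,
                .args = { 2UL * 1024 * 1024 },  /* one PMD-sized block */
        };

        /* Per the patch below, fails with -EINVAL once a memslot exists. */
        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

/* KVM_CHECK_EXTENSION on the VM fd returns the current chunk size. */
static long get_eager_split_chunk(int vm_fd)
{
        return ioctl(vm_fd, KVM_CHECK_EXTENSION,
                     KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE);
}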
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a1892a8f6032..b7755d0cbd4d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -158,6 +158,25 @@ struct kvm_s2_mmu {
 	/* The last vcpu id that ran on each physical CPU */
 	int __percpu *last_vcpu_ran;
 
+#define KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT PMD_SIZE
+	/*
+	 * Memory cache used to split
+	 * KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE worth of huge pages. It
+	 * is used to allocate stage2 page tables while splitting huge
+	 * pages. Note that the choice of EAGER_PAGE_SPLIT_CHUNK_SIZE
+	 * influences both the capacity of the split page cache, and
+	 * how often KVM reschedules. Be wary of raising CHUNK_SIZE
+	 * too high.
+	 *
+	 * A good heuristic to pick CHUNK_SIZE is that it should be
+	 * the size of the huge-pages backing guest memory. If not
+	 * known, the PMD size (usually 2M) is a good guess.
+	 *
+	 * Protected by kvm->slots_lock.
+	 */
+	struct kvm_mmu_memory_cache split_page_cache;
+	uint64_t split_page_chunk_size;
+
 	struct kvm_arch *arch;
 };
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 3bd732eaf087..3468fee223ae 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -91,6 +91,22 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		r = 0;
 		set_bit(KVM_ARCH_FLAG_SYSTEM_SUSPEND_ENABLED, &kvm->arch.flags);
 		break;
+	case KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE:
+		mutex_lock(&kvm->lock);
+		mutex_lock(&kvm->slots_lock);
+		/*
+		 * To keep things simple, allow changing the chunk
+		 * size only if there are no memslots created.
+		 */
+		if (!kvm_are_all_memslots_empty(kvm)) {
+			r = -EINVAL;
+		} else {
+			r = 0;
+			kvm->arch.mmu.split_page_chunk_size = cap->args[0];
+		}
+		mutex_unlock(&kvm->slots_lock);
+		mutex_unlock(&kvm->lock);
+		break;
 	default:
 		r = -EINVAL;
 		break;
@@ -288,6 +304,12 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PTRAUTH_GENERIC:
 		r = system_has_full_ptr_auth();
 		break;
+	case KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE:
+		if (kvm)
+			r = kvm->arch.mmu.split_page_chunk_size;
+		else
+			r = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
+		break;
 	default:
 		r = 0;
 	}
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a2800e5c4271..898985b09321 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -756,6 +756,9 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
 
+	mmu->split_page_cache.gfp_zero = __GFP_ZERO;
+	mmu->split_page_chunk_size = KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT;
+
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
 	return 0;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d77aef872a0a..af43acdc7901 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1184,6 +1184,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224
 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
 #define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226
+#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 227
 
 #ifdef KVM_CAP_IRQ_ROUTING
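[Editorial aside: the following sketch shows roughly how a VMM would consume the new capability. It is not part of the series; it assumes uapi headers that define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE are installed, and it skips most error handling.]

  /* Set a 2M eager-split chunk size right after VM creation, before any
   * memslot exists; once a memslot is created the ioctl fails with -EINVAL. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
  	int kvm_fd = open("/dev/kvm", O_RDWR);
  	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
  	struct kvm_enable_cap cap;

  	memset(&cap, 0, sizeof(cap));
  	cap.cap = KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE;
  	cap.args[0] = 2UL * 1024 * 1024;	/* 2M: PMD_SIZE with 4K pages */

  	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
  		perror("KVM_ENABLE_CAP");
  	return 0;
  }

Per the check_extension hunk above, a later KVM_CHECK_EXTENSION on the VM fd reports back the configured chunk size.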
From patchwork Tue Mar 7 03:45:52 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13162733
Date: Tue, 7 Mar 2023 03:45:52 +0000
In-Reply-To: <20230307034555.39733-1-ricarkol@google.com>
References: <20230307034555.39733-1-ricarkol@google.com>
Message-ID: <20230307034555.39733-10-ricarkol@google.com>
Subject: [PATCH v6 09/12] KVM: arm64: Split huge pages when dirty logging is enabled
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller, Shaoqin Huang

Split huge pages eagerly when enabling dirty logging.
The goal is to avoid doing it while faulting on write-protected
pages, which negatively impacts guest performance.

A memslot marked for dirty logging is split in 1GB pieces at a time:
this both releases the mmu_lock periodically, giving other kernel
threads the opportunity to run, and bounds the up-front allocation to
the pages needed to split a 1GB range worth of huge pages (or a
single 1GB huge page). Note that these page allocations can fail, so
eager page splitting is best-effort. This is not a correctness issue
though, as huge pages can still be split on write-faults.

The benefits of eager page splitting are the same as in x86,
introduced by commit a3fe5dbda0a4 ("KVM: x86/mmu: Split huge pages
mapped by the TDP MMU when dirty logging is enabled"). For example,
when running dirty_log_perf_test with 64 virtual CPUs (Ampere Altra),
1GB per vCPU, 50% reads, and 2MB HugeTLB memory, the time it takes
vCPUs to access all of their memory after dirty logging is enabled
decreased by 44%, from 2.58s to 1.42s.

Signed-off-by: Ricardo Koller
Reviewed-by: Shaoqin Huang
---
 arch/arm64/kvm/mmu.c | 118 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 116 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 898985b09321..b1b8da5f8b6c 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -31,14 +31,21 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
-static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
+static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
+					   phys_addr_t size)
 {
-	phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
 	phys_addr_t boundary = ALIGN_DOWN(addr + size, size);
 
 	return (boundary - 1 < end - 1) ? boundary : end;
 }
 
+static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
+
+	return __stage2_range_addr_end(addr, end, size);
+}
+
 /*
  * Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
  * we may see kernel panics with CONFIG_DETECT_HUNG_TASK,
@@ -75,6 +82,77 @@ static int stage2_apply_range(struct kvm_s2_mmu *mmu, phys_addr_t addr,
 #define stage2_apply_range_resched(mmu, addr, end, fn)			\
 	stage2_apply_range(mmu, addr, end, fn, true)
 
+static bool need_topup_split_page_cache_or_resched(struct kvm *kvm, uint64_t min)
+{
+	struct kvm_mmu_memory_cache *cache;
+
+	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
+		return true;
+
+	cache = &kvm->arch.mmu.split_page_cache;
+	return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
+}
+
+/*
+ * Get the maximum number of page-tables needed to split a range of
+ * blocks into PAGE_SIZE PTEs. It assumes the range is already mapped
+ * at the PMD level, or at the PUD level if allowed.
+ */
+static int kvm_mmu_split_nr_page_tables(u64 range)
+{
+	int n = 0;
+
+	if (KVM_PGTABLE_MIN_BLOCK_LEVEL < 2)
+		n += DIV_ROUND_UP_ULL(range, PUD_SIZE);
+	n += DIV_ROUND_UP_ULL(range, PMD_SIZE);
+	return n;
+}
+
+static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
+				    phys_addr_t end)
+{
+	struct kvm_mmu_memory_cache *cache;
+	struct kvm_pgtable *pgt;
+	int ret;
+	u64 next;
+	u64 chunk_size = kvm->arch.mmu.split_page_chunk_size;
+	int cache_capacity = kvm_mmu_split_nr_page_tables(chunk_size);
+
+	if (chunk_size == 0)
+		return 0;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	cache = &kvm->arch.mmu.split_page_cache;
+
+	do {
+		if (need_topup_split_page_cache_or_resched(kvm,
+							   cache_capacity)) {
+			write_unlock(&kvm->mmu_lock);
+			cond_resched();
+			/* Eager page splitting is best-effort. */
+			ret = __kvm_mmu_topup_memory_cache(cache,
+							   cache_capacity,
+							   cache_capacity);
+			write_lock(&kvm->mmu_lock);
+			if (ret)
+				break;
+		}
+
+		pgt = kvm->arch.mmu.pgt;
+		if (!pgt)
+			return -EINVAL;
+
+		next = __stage2_range_addr_end(addr, end, chunk_size);
+		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr,
+					       cache, cache_capacity);
+		if (ret)
+			break;
+	} while (addr = next, addr != end);
+
+	return ret;
+}
+
 static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 {
 	return memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY);
@@ -773,6 +851,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 void kvm_uninit_stage2_mmu(struct kvm *kvm)
 {
 	kvm_free_stage2_pgd(&kvm->arch.mmu);
+	kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
 }
 
 static void stage2_unmap_memslot(struct kvm *kvm,
@@ -999,6 +1078,31 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	stage2_wp_range(&kvm->arch.mmu, start, end);
 }
 
+/**
+ * kvm_mmu_split_memory_region() - split the stage 2 blocks into PAGE_SIZE
+ *				   pages for memory slot
+ * @kvm: The KVM pointer
+ * @slot: The memory slot to split
+ *
+ * Acquires kvm->mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
+	phys_addr_t start, end;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	start = memslot->base_gfn << PAGE_SHIFT;
+	end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	write_lock(&kvm->mmu_lock);
+	kvm_mmu_split_huge_pages(kvm, start, end);
+	write_unlock(&kvm->mmu_lock);
+}
+
 /*
  * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
  * dirty pages.
@@ -1790,6 +1894,16 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 			return;
 
 		kvm_mmu_wp_memory_region(kvm, new->id);
+		kvm_mmu_split_memory_region(kvm, new->id);
+	} else {
+		/*
+		 * Free any leftovers from the eager page splitting cache. Do
+		 * this when deleting, moving, disabling dirty logging, or
+		 * creating the memslot (a nop). Doing it for deletes makes
+		 * sure we don't leak memory, and there's no need to keep the
+		 * cache around for any of the other cases.
+		 */
+		kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
 	}
 }
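[Editorial aside: to make the cache sizing above concrete, the arithmetic of kvm_mmu_split_nr_page_tables() can be restated as a standalone C sketch. This assumes 4K pages, so PUD_SIZE is 1G and PMD_SIZE is 2M, and that PUD-level block mappings are allowed.]

  /* Page tables needed to fully split 'range' bytes down to PTEs:
   * one PMD table per 1G block, plus one PTE table per 2M block. */
  #include <stdio.h>

  #define SZ_2M (2UL << 20)
  #define SZ_1G (1UL << 30)
  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  static unsigned long nr_split_tables(unsigned long range)
  {
  	return DIV_ROUND_UP(range, SZ_1G) + DIV_ROUND_UP(range, SZ_2M);
  }

  int main(void)
  {
  	/* Default 2M chunk: 1 + 1 = 2 pages cached ahead of time. */
  	printf("2M chunk -> %lu tables\n", nr_split_tables(SZ_2M));
  	/* A full 1G chunk: 1 + 512 = 513 pages per cache refill. */
  	printf("1G chunk -> %lu tables\n", nr_split_tables(SZ_1G));
  	return 0;
  }

This is why the kvm_host.h comment warns against raising the chunk size too high: the cache capacity, and therefore the up-front allocation, grows linearly with it.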
From patchwork Tue Mar 7 03:45:53 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13162734
Date: Tue, 7 Mar 2023 03:45:53 +0000
In-Reply-To: <20230307034555.39733-1-ricarkol@google.com>
References: <20230307034555.39733-1-ricarkol@google.com>
Message-ID: <20230307034555.39733-11-ricarkol@google.com>
Subject: [PATCH v6 10/12] KVM: arm64: Open-code kvm_mmu_write_protect_pt_masked()
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

Move the functionality of kvm_mmu_write_protect_pt_masked() into its
caller, kvm_arch_mmu_enable_log_dirty_pt_masked(). This will be used
in a subsequent commit in order to share some of the code in
kvm_arch_mmu_enable_log_dirty_pt_masked().

No functional change intended.

Signed-off-by: Ricardo Koller
---
 arch/arm64/kvm/mmu.c | 42 +++++++++++++++---------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b1b8da5f8b6c..910aea6bbd1e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1056,28 +1056,6 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 	kvm_flush_remote_tlbs(kvm);
 }
 
-/**
- * kvm_mmu_write_protect_pt_masked() - write protect dirty pages
- * @kvm: The KVM pointer
- * @slot: The memory slot associated with mask
- * @gfn_offset: The gfn offset in memory slot
- * @mask: The mask of dirty pages at offset 'gfn_offset' in this memory
- *	  slot to be write protected
- *
- * Walks bits set in mask write protects the associated pte's. Caller must
- * acquire kvm_mmu_lock.
- */
-static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
-					    struct kvm_memory_slot *slot,
-					    gfn_t gfn_offset, unsigned long mask)
-{
-	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
-	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
-	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
-
-	stage2_wp_range(&kvm->arch.mmu, start, end);
-}
-
 /**
  * kvm_mmu_split_memory_region() - split the stage 2 blocks into PAGE_SIZE
  *				   pages for memory slot
@@ -1104,17 +1082,27 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
 }
 
 /*
- * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected
- * dirty pages.
+ * kvm_arch_mmu_enable_log_dirty_pt_masked() - enable dirty logging for selected pages.
+ * @kvm: The KVM pointer
+ * @slot: The memory slot associated with mask
+ * @gfn_offset: The gfn offset in memory slot
+ * @mask: The mask of pages at offset 'gfn_offset' in this memory
+ *	  slot to enable dirty logging on
 *
- * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to
- * enable dirty logging for them.
+ * Writes protect selected pages to enable dirty logging for them. Caller must
+ * acquire kvm->mmu_lock.
 */
void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
		struct kvm_memory_slot *slot,
		gfn_t gfn_offset, unsigned long mask)
{
-	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
+	stage2_wp_range(&kvm->arch.mmu, start, end);
}
 
static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
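[Editorial aside: since the next patch builds on this exact arithmetic, it may help to see what the __ffs()/__fls() expressions compute. The sketch below uses GCC builtins in place of the kernel helpers; the input values are invented for the example.]

  /* With PAGE_SHIFT = 12, base_gfn = 0x1000 and mask = 0x30 (bits 4-5),
   * the write-protected span is [0x1004000, 0x1006000): the whole run
   * from the lowest to the highest set bit of the mask, inclusive. */
  #include <stdio.h>

  int main(void)
  {
  	unsigned long base_gfn = 0x1000, mask = 0x30, page_shift = 12;
  	unsigned long lo = __builtin_ctzl(mask);	/* __ffs(): 4 */
  	unsigned long hi = 63 - __builtin_clzl(mask);	/* __fls(): 5 */
  	unsigned long start = (base_gfn + lo) << page_shift;
  	unsigned long end = (base_gfn + hi + 1) << page_shift;

  	printf("wp range: [%#lx, %#lx)\n", start, end);
  	return 0;
  }

Note that any clear bits between the lowest and highest set bit are covered too; that matches the behavior of the function being open-coded.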
From patchwork Tue Mar 7 03:45:54 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13162735
Date: Tue, 7 Mar 2023 03:45:54 +0000
In-Reply-To: <20230307034555.39733-1-ricarkol@google.com>
References: <20230307034555.39733-1-ricarkol@google.com>
Message-ID: <20230307034555.39733-12-ricarkol@google.com>
Subject: [PATCH v6 11/12] KVM: arm64: Split huge pages during KVM_CLEAR_DIRTY_LOG
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

This is the arm64 counterpart of commit cb00a70bd4b7 ("KVM: x86/mmu:
Split huge pages mapped by the TDP MMU during KVM_CLEAR_DIRTY_LOG"),
which has the benefit of splitting the cost of splitting a memslot
across multiple ioctls.

Split huge pages on the range specified via KVM_CLEAR_DIRTY_LOG, and
do not split when enabling dirty logging if KVM_DIRTY_LOG_INITIALLY_SET
is set.

Signed-off-by: Ricardo Koller
---
 arch/arm64/kvm/mmu.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 910aea6bbd1e..d54223b5db97 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1089,8 +1089,8 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
  * @mask: The mask of pages at offset 'gfn_offset' in this memory
  *	  slot to enable dirty logging on
  *
- * Writes protect selected pages to enable dirty logging for them. Caller must
- * acquire kvm->mmu_lock.
+ * Splits selected pages to PAGE_SIZE and then writes protect them to enable
+ * dirty logging for them. Caller must acquire kvm->mmu_lock.
 */
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
		struct kvm_memory_slot *slot,
@@ -1103,6 +1103,13 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	stage2_wp_range(&kvm->arch.mmu, start, end);
+
+	/*
+	 * If initially-all-set mode is not set, then huge-pages were already
+	 * split when enabling dirty logging: no need to do it again.
+	 */
+	if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+		kvm_mmu_split_huge_pages(kvm, start, end);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
@@ -1889,7 +1896,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
		 * this when deleting, moving, disabling dirty logging, or
		 * creating the memslot (a nop). Doing it for deletes makes
		 * sure we don't leak memory, and there's no need to keep the
-		 * cache around for any of the other cases.
+		 * cache around for any of the other cases. Keeping the cache
+		 * is useful for successive KVM_CLEAR_DIRTY_LOG calls, which is
+		 * not handled in this function.
		 */
		kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
	}
}
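[Editorial aside: the VMM side of the picture may clarify why the cache is now worth keeping between ioctls. With manual dirty-log protection, a live-migration loop clears the dirty log in pieces, and each KVM_CLEAR_DIRTY_LOG call write-protects and eagerly splits only the range it clears. A hypothetical sketch, not from the series; it assumes KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 with KVM_DIRTY_LOG_INITIALLY_SET was enabled at VM creation, and that vm_fd, the memslot layout, and the bitmap handling exist elsewhere.]

  /* Clear 64K pages of dirty state for memslot 0 in 16K-page steps, so
   * the eager-split cost is spread across four ioctls. first_page and
   * num_pages must be multiples of 64 per the KVM_CLEAR_DIRTY_LOG ABI. */
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int clear_dirty_in_chunks(int vm_fd, __u64 *bitmap)
  {
  	for (__u64 first = 0; first < 65536; first += 16384) {
  		struct kvm_clear_dirty_log clr = {
  			.slot = 0,
  			.num_pages = 16384,
  			.first_page = first,
  			.dirty_bitmap = bitmap + first / 64, /* 64 pages per __u64 */
  		};

  		if (ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clr) < 0)
  			return -1;	/* splitting itself stays best-effort */
  	}
  	return 0;
  }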
From patchwork Tue Mar 7 03:45:55 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13162736
Date: Tue, 7 Mar 2023 03:45:55 +0000
In-Reply-To: <20230307034555.39733-1-ricarkol@google.com>
References: <20230307034555.39733-1-ricarkol@google.com>
Message-ID: <20230307034555.39733-13-ricarkol@google.com>
Subject: [PATCH v6 12/12] KVM: arm64: Use local TLBI on permission relaxation
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

From: Marc Zyngier

Broadcast TLB invalidations (TLBI) are usually less performant than
their local variant. In particular, we observed some implementations
that take milliseconds to complete parallel broadcast TLBIs.

It's safe to use local, non-shareable, TLBIs when relaxing permissions
on a PTE in the KVM case, for a couple of reasons. First, according to
the Arm ARM (DDI 0487H.a D5-4913), permission relaxation does not need
break-before-make. Second, the VTTBR_EL2.CnP==0 case, where each PE
has its own TLB entry for the same page, is tolerated correctly by KVM
when doing permission relaxation. Not having changes broadcast to all
PEs is correct for this case, as it's safe to have other PEs fault on
permission on the same page.

Signed-off-by: Marc Zyngier
Signed-off-by: Ricardo Koller
---
 arch/arm64/include/asm/kvm_asm.h | 4 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 10 ++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c | 54 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c | 2 +-
 arch/arm64/kvm/hyp/vhe/tlb.c | 32 ++++++++++++++++++
 5 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544..bb17b2ead4c7 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa_nsh,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
@@ -225,6 +226,9 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
				     int level);
+extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
+					 phys_addr_t ipa,
+					 int level);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b..c6bf1e49ca93 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,15 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void handle___kvm_tlb_flush_vmid_ipa_nsh(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
+	DECLARE_REG(int, level, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_ipa_nsh(kern_hyp_va(mmu), ipa, level);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -315,6 +324,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa_nsh),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f589..ef2b70587f93 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -109,6 +109,60 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
+				  phys_addr_t ipa, int level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(nshst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	/*
+	 * We could do so much better if we had the VA as well.
+	 * Instead, we invalidate Stage-2 for this IPA, and the
+	 * whole of Stage-1. Weep...
+	 */
+	ipa >>= 12;
+	__tlbi_level(ipas2e1, ipa, level);
+
+	/*
+	 * We have to ensure completion of the invalidation at Stage-2,
+	 * since a table walk on another CPU could refill a TLB with a
+	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
+	 * the Stage-1 invalidation happened first.
+	 */
+	dsb(nsh);
+	__tlbi(vmalle1);
+	dsb(nsh);
+	isb();
+
+	/*
+	 * If the host is running at EL1 and we have a VPIPT I-cache,
+	 * then we must perform I-cache maintenance at EL2 in order for
+	 * it to have an effect on the guest. Since the guest cannot hit
+	 * I-cache lines allocated with a different VMID, we don't need
+	 * to worry about junk out of guest reset (we nuke the I-cache on
+	 * VMID rollover), but we do need to be careful when remapping
+	 * executable pages for the same guest. This can happen when KSM
+	 * takes a CoW fault on an executable page, copies the page into
+	 * a page that was previously mapped in the guest and then needs
+	 * to invalidate the guest view of the I-cache for that page
+	 * from EL1. To solve this, we invalidate the entire I-cache when
+	 * unmapping a page from a guest if we have a VPIPT I-cache but
+	 * the host is running at EL1. As above, we could do better if
+	 * we had the VA.
+	 *
+	 * The moral of this story is: if you have a VPIPT I-cache, then
+	 * you should be running with VHE enabled.
+	 */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 3149b98d1701..dcf7ec1810c7 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1179,7 +1179,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
				       KVM_PGTABLE_WALK_HANDLE_FAULT |
				       KVM_PGTABLE_WALK_SHARED);
 	if (!ret)
-		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, pgt->mmu, addr, level);
+		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
 }
 
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e..e69da550cdc5 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,38 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
+				  phys_addr_t ipa, int level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(nshst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	/*
+	 * We could do so much better if we had the VA as well.
+	 * Instead, we invalidate Stage-2 for this IPA, and the
+	 * whole of Stage-1. Weep...
+	 */
+	ipa >>= 12;
+	__tlbi_level(ipas2e1, ipa, level);
+
+	/*
+	 * We have to ensure completion of the invalidation at Stage-2,
+	 * since a table walk on another CPU could refill a TLB with a
+	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
+	 * the Stage-1 invalidation happened first.
+	 */
+	dsb(nsh);
+	__tlbi(vmalle1);
+	dsb(nsh);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;