From patchwork Fri Jan 13 03:49:52 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099708
Date: Fri, 13 Jan 2023 03:49:52 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-2-ricarkol@google.com>
Subject: [PATCH 1/9] KVM: arm64: Add KVM_PGTABLE_WALK_REMOVED into ctx->flags
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com,
suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

Add a flag to kvm_pgtable_visit_ctx, KVM_PGTABLE_WALK_REMOVED, to indicate
that the walk is on a removed table not accessible to the HW page-table
walker. Then use it to avoid doing break-before-make or performing CMOs
(Cache Maintenance Operations) when mapping a removed table. This is safe
as these removed tables are not visible to the HW page-table walker. This
will be used in a subsequent commit for replacing huge-page block PTEs
with tables of 4K PTEs.

Signed-off-by: Ricardo Koller
---
 arch/arm64/include/asm/kvm_pgtable.h |  8 ++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 27 ++++++++++++++++-----------
 2 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 63f81b27a4e3..84a271647007 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -188,12 +188,15 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
 * children.
 * @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared
 * with other software walkers.
+ * @KVM_PGTABLE_WALK_REMOVED: Indicates the page-tables are
+ * removed: not visible to the HW walker.
 */
 enum kvm_pgtable_walk_flags {
 KVM_PGTABLE_WALK_LEAF = BIT(0),
 KVM_PGTABLE_WALK_TABLE_PRE = BIT(1),
 KVM_PGTABLE_WALK_TABLE_POST = BIT(2),
 KVM_PGTABLE_WALK_SHARED = BIT(3),
+ KVM_PGTABLE_WALK_REMOVED = BIT(4),
 };
 struct kvm_pgtable_visit_ctx {
@@ -215,6 +218,11 @@ static inline bool kvm_pgtable_walk_shared(const struct kvm_pgtable_visit_ctx *c
 return ctx->flags & KVM_PGTABLE_WALK_SHARED;
 }
+static inline bool kvm_pgtable_walk_removed(const struct kvm_pgtable_visit_ctx *ctx)
+{
+ return ctx->flags & KVM_PGTABLE_WALK_REMOVED;
+}
+
 /**
 * struct kvm_pgtable_walker - Hook into a page-table walk.
 * @cb: Callback function to invoke during the walk.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11cf2c618a6..87fd40d09056 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -717,14 +717,17 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
 if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
 return false;
- /*
- * Perform the appropriate TLB invalidation based on the evicted pte
- * value (if any).
- */
- if (kvm_pte_table(ctx->old, ctx->level))
- kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
- else if (kvm_pte_valid(ctx->old))
- kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
+ if (!kvm_pgtable_walk_removed(ctx)) {
+ /*
+ * Perform the appropriate TLB invalidation based on the
+ * evicted pte value (if any).
+ */
+ if (kvm_pte_table(ctx->old, ctx->level))
+ kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
+ else if (kvm_pte_valid(ctx->old))
+ kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
+ ctx->addr, ctx->level);
+ }
 if (stage2_pte_is_counted(ctx->old))
 mm_ops->put_page(ctx->ptep);
@@ -808,11 +811,13 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 return -EAGAIN;
 /* Perform CMOs before installation of the guest stage-2 PTE */
- if (mm_ops->dcache_clean_inval_poc && stage2_pte_cacheable(pgt, new))
+ if (!kvm_pgtable_walk_removed(ctx) && mm_ops->dcache_clean_inval_poc &&
+ stage2_pte_cacheable(pgt, new))
 mm_ops->dcache_clean_inval_poc(kvm_pte_follow(new, mm_ops),
- granule);
+ granule);
- if (mm_ops->icache_inval_pou && stage2_pte_executable(new))
+ if (!kvm_pgtable_walk_removed(ctx) && mm_ops->icache_inval_pou &&
+ stage2_pte_executable(new))
 mm_ops->icache_inval_pou(kvm_pte_follow(new, mm_ops), granule);
 stage2_make_pte(ctx, new);
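To make the new flag's effect concrete, here is a minimal sketch (not part of the patch; the visitor name and body are illustrative only) of how a stage-2 visitor can consult kvm_pgtable_walk_removed() to decide whether break-before-make style maintenance is required:

	static int example_stage2_visitor(const struct kvm_pgtable_visit_ctx *ctx,
					  enum kvm_pgtable_walk_flags visit)
	{
		if (!kvm_pgtable_walk_removed(ctx)) {
			/*
			 * Live table: the HW walker may hold TLB entries for
			 * the old PTE, so invalidate and perform CMOs as
			 * stage2_try_break_pte() and
			 * stage2_map_walker_try_leaf() do above.
			 */
		}

		/*
		 * Removed table: nothing is reachable by the HW walker yet,
		 * so the TLBI/CMO steps can be safely skipped.
		 */
		return 0;
	}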
From patchwork Fri Jan 13 03:49:53 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099709
Date: Fri, 13 Jan 2023 03:49:53 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-3-ricarkol@google.com>
Subject: [PATCH 2/9] KVM: arm64: Add helper for creating removed stage2 subtrees
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

Add a stage2 helper, kvm_pgtable_stage2_create_removed(), for creating
removed tables (the opposite of kvm_pgtable_stage2_free_removed()).

Creating a removed table is useful for splitting block PTEs into subtrees
of 4K PTEs. For example, a 1G block PTE can be split into 4K PTEs by first
creating a fully populated tree, and then using it to replace the 1G PTE
in a single step. This will be used in a subsequent commit for eager
huge-page splitting (a dirty-logging optimization).

No functional change intended.

Signed-off-by: Ricardo Koller
---
 arch/arm64/include/asm/kvm_pgtable.h | 25 +++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 47 ++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 84a271647007..8ad78d61af7f 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -450,6 +450,31 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
 */
 void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
+/**
+ * kvm_pgtable_stage2_create_removed() - Create a removed stage-2 paging structure.
+ * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*().
+ * @new: Unlinked stage-2 paging structure to be created.
+ * @phys: Physical address of the memory to map.
+ * @level: Level of the stage-2 paging structure to be created.
+ * @prot: Permissions and attributes for the mapping.
+ * @mc: Cache of pre-allocated and zeroed memory from which to allocate
+ * page-table pages.
+ *
+ * Create a removed page-table tree of PAGE_SIZE leaf PTEs under *new.
+ * This new page-table tree is not reachable (i.e., it is removed) from the
+ * root pgd and it's therefore unreachable by the hardware page-table
+ * walker. No TLB invalidation or CMOs are performed.
+ *
+ * If device attributes are not explicitly requested in @prot, then the
+ * mapping will be normal, cacheable.
+ *
+ * Return: 0 only if a fully populated tree was created, negative error
+ * code on failure. No partially-populated table can be returned.
+ */
+int kvm_pgtable_stage2_create_removed(struct kvm_pgtable *pgt,
+ kvm_pte_t *new, u64 phys, u32 level,
+ enum kvm_pgtable_prot prot, void *mc);
+
 /**
 * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
 * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 87fd40d09056..0dee13007776 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1181,6 +1181,53 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
+/*
+ * map_data->force_pte is true in order to force creating PAGE_SIZE PTEs.
+ * data->addr is 0 because the IPA is irrelevant for a removed table.
+ */
+int kvm_pgtable_stage2_create_removed(struct kvm_pgtable *pgt,
+ kvm_pte_t *new, u64 phys, u32 level,
+ enum kvm_pgtable_prot prot, void *mc)
+{
+ struct stage2_map_data map_data = {
+ .phys = phys,
+ .mmu = pgt->mmu,
+ .memcache = mc,
+ .force_pte = true,
+ };
+ struct kvm_pgtable_walker walker = {
+ .cb = stage2_map_walker,
+ .flags = KVM_PGTABLE_WALK_LEAF |
+ KVM_PGTABLE_WALK_REMOVED,
+ .arg = &map_data,
+ };
+ struct kvm_pgtable_walk_data data = {
+ .walker = &walker,
+ .addr = 0,
+ .end = kvm_granule_size(level),
+ };
+ struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
+ kvm_pte_t *pgtable;
+ int ret;
+
+ ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
+ if (ret)
+ return ret;
+
+ pgtable = mm_ops->zalloc_page(mc);
+ if (!pgtable)
+ return -ENOMEM;
+
+ ret = __kvm_pgtable_walk(&data, mm_ops, pgtable, level + 1);
+ if (ret) {
+ kvm_pgtable_stage2_free_removed(mm_ops, pgtable, level);
+ mm_ops->put_page(pgtable);
+ return ret;
+ }
+
+ *new = kvm_init_table_pte(pgtable, mm_ops);
+ return 0;
+}
 int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 struct kvm_pgtable_mm_ops *mm_ops,
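As a rough illustration of the intended calling convention (the variable names here are assumptions, not taken from the series), a caller that wants to split a live block first builds the whole subtree off to the side, and only installs it once the call has succeeded:

	kvm_pte_t new_table;
	int ret;

	/* Build a fully populated, not-yet-visible subtree covering the block. */
	ret = kvm_pgtable_stage2_create_removed(pgt, &new_table, phys, level,
						prot, memcache);
	if (ret)
		return ret;	/* nothing was installed; safe to retry later */

	/*
	 * ... break the live block PTE and install new_table in its place,
	 * which is what stage2_split_walker() does in the next patch ...
	 */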
From patchwork Fri Jan 13 03:49:54 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099710
Date: Fri, 13 Jan 2023 03:49:54 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-4-ricarkol@google.com>
Subject: [PATCH 3/9] KVM: arm64: Add kvm_pgtable_stage2_split()
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

Add a new stage2 function, kvm_pgtable_stage2_split(), for splitting a
range of huge pages. This will be used for eager-splitting huge pages
into PAGE_SIZE pages. The goal is to avoid having to split huge pages
on write-protection faults, and instead use this function to do it
ahead of time for large ranges (e.g., all guest memory in 1G chunks at
a time).

No functional change intended. This new function will be used in a
subsequent commit.

Signed-off-by: Ricardo Koller
---
 arch/arm64/include/asm/kvm_pgtable.h | 29 ++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 67 ++++++++++++++++++++++++++++
 2 files changed, 96 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 8ad78d61af7f..5fbdc1f259fd 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -644,6 +644,35 @@ bool kvm_pgtable_stage2_is_young(struct kvm_pgtable *pgt, u64 addr);
 */
 int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+/**
+ * kvm_pgtable_stage2_split() - Split a range of huge pages into leaf PTEs pointing
+ * to PAGE_SIZE guest pages.
+ * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*().
+ * @addr: Intermediate physical address from which to split.
+ * @size: Size of the range.
+ * @mc: Cache of pre-allocated and zeroed memory from which to allocate
+ * page-table pages.
+ *
+ * @addr and the end (@addr + @size) are effectively aligned down and up to
+ * the top level huge-page block size. This is an example using 1GB
+ * huge-pages and 4KB granules.
+ *
+ * [---input range---]
+ * : :
+ * [--1G block pte--][--1G block pte--][--1G block pte--][--1G block pte--]
+ * : :
+ * [--2MB--][--2MB--][--2MB--][--2MB--]
+ * : :
+ * [ ][ ][:][ ][ ][ ][ ][ ][:][ ][ ][ ]
+ * : :
+ *
+ * Return: 0 on success, negative error code on failure. Note that
+ * kvm_pgtable_stage2_split() is best effort: it tries to break as many
+ * blocks in the input range as allowed by the size of the memcache. It
+ * will fail if it wasn't able to break any block.
+ */
+int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, void *mc);
+
 /**
 * kvm_pgtable_walk() - Walk a page-table.
 * @pgt: Page-table structure initialised by kvm_pgtable_*_init().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0dee13007776..db9d1a28769b 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1229,6 +1229,73 @@ int kvm_pgtable_stage2_create_removed(struct kvm_pgtable *pgt,
 return 0;
 }
+struct stage2_split_data {
+ struct kvm_s2_mmu *mmu;
+ void *memcache;
+};
+
+static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
+ enum kvm_pgtable_walk_flags visit)
+{
+ struct stage2_split_data *data = ctx->arg;
+ struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+ kvm_pte_t pte = ctx->old, new, *childp;
+ enum kvm_pgtable_prot prot;
+ void *mc = data->memcache;
+ u32 level = ctx->level;
+ u64 phys;
+ int ret;
+
+ /* Nothing to split at the last level */
+ if (level == KVM_PGTABLE_MAX_LEVELS - 1)
+ return 0;
+
+ /* We only split valid block mappings */
+ if (!kvm_pte_valid(pte) || kvm_pte_table(pte, ctx->level))
+ return 0;
+
+ phys = kvm_pte_to_phys(pte);
+ prot = kvm_pgtable_stage2_pte_prot(pte);
+
+ ret = kvm_pgtable_stage2_create_removed(data->mmu->pgt, &new, phys,
+ level, prot, mc);
+ if (ret)
+ return ret;
+
+ if (!stage2_try_break_pte(ctx, data->mmu)) {
+ childp = kvm_pte_follow(new, mm_ops);
+ kvm_pgtable_stage2_free_removed(mm_ops, childp, level);
+ mm_ops->put_page(childp);
+ return -EAGAIN;
+ }
+
+ /*
+ * Note, the contents of the page table are guaranteed to be
+ * made visible before the new PTE is assigned because
+ * stage2_make_pte() writes the PTE using smp_store_release().
+ */
+ stage2_make_pte(ctx, new);
+ dsb(ishst);
+ return 0;
+}
+
+int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt,
+ u64 addr, u64 size, void *mc)
+{
+ struct stage2_split_data split_data = {
+ .mmu = pgt->mmu,
+ .memcache = mc,
+ };
+
+ struct kvm_pgtable_walker walker = {
+ .cb = stage2_split_walker,
+ .flags = KVM_PGTABLE_WALK_LEAF,
+ .arg = &split_data,
+ };
+
+ return kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 struct kvm_pgtable_mm_ops *mm_ops,
 enum kvm_pgtable_stage2_flags flags,

From patchwork Fri Jan 13 03:49:55 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099711
Date: Fri, 13 Jan 2023 03:49:55 +0000
In-Reply-To:
<20230113035000.480021-1-ricarkol@google.com> Mime-Version: 1.0 References: <20230113035000.480021-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog Message-ID: <20230113035000.480021-5-ricarkol@google.com> Subject: [PATCH 4/9] KVM: arm64: Refactor kvm_arch_commit_memory_region() From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Refactor kvm_arch_commit_memory_region() as a preparation for a future commit to look cleaner and more understandable. Also, it looks more like its x86 counterpart (in kvm_mmu_slot_apply_flags()). No functional change intended. Signed-off-by: Ricardo Koller --- arch/arm64/kvm/mmu.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 31d7fa4c7c14..dbcd5d9bc260 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1758,20 +1758,27 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, const struct kvm_memory_slot *new, enum kvm_mr_change change) { + bool log_dirty_pages = new && new->flags & KVM_MEM_LOG_DIRTY_PAGES; + /* * At this point memslot has been committed and there is an * allocated dirty_bitmap[], dirty pages will be tracked while the * memory slot is write protected. */ - if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) { + if (log_dirty_pages) { + + if (change == KVM_MR_DELETE) + return; + /* * If we're with initial-all-set, we don't need to write * protect any pages because they're all reported as dirty. * Huge pages and normal pages will be write protect gradually. 
 */
- if (!kvm_dirty_log_manual_protect_and_init_set(kvm)) {
- kvm_mmu_wp_memory_region(kvm, new->id);
- }
+ if (kvm_dirty_log_manual_protect_and_init_set(kvm))
+ return;
+
+ kvm_mmu_wp_memory_region(kvm, new->id);
 }
 }
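For context, the code path above runs when userspace toggles dirty logging on a memslot. A minimal userspace sketch (error handling omitted; the slot number and addresses are illustrative) looks roughly like:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	struct kvm_userspace_memory_region region = {
		.slot = 0,
		.flags = KVM_MEM_LOG_DIRTY_PAGES,	/* enable dirty logging */
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = (__u64)hva,
	};

	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

Re-registering the same slot without KVM_MEM_LOG_DIRTY_PAGES disables dirty logging again.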
From patchwork Fri Jan 13 03:49:56 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099712
Date: Fri, 13 Jan 2023 03:49:56 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-6-ricarkol@google.com>
Subject: [PATCH 5/9] KVM: arm64: Add kvm_uninit_stage2_mmu()
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

Add kvm_uninit_stage2_mmu() and move kvm_free_stage2_pgd() into it. A
future commit will add some more things to do inside of
kvm_uninit_stage2_mmu().

No functional change intended.

Signed-off-by: Ricardo Koller
---
 arch/arm64/include/asm/kvm_mmu.h | 1 +
 arch/arm64/kvm/mmu.c             | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index e4a7e6369499..058f3ae5bc26 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -167,6 +167,7 @@ void free_hyp_pgds(void);
 void stage2_unmap_vm(struct kvm *kvm);
 int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
+void kvm_uninit_stage2_mmu(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, phys_addr_t pa,
 unsigned long size, bool writable);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index dbcd5d9bc260..700c5774b50d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -766,6 +766,11 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 return err;
 }
+void kvm_uninit_stage2_mmu(struct kvm *kvm)
+{
+ kvm_free_stage2_pgd(&kvm->arch.mmu);
+}
+
 static void stage2_unmap_memslot(struct kvm *kvm,
 struct kvm_memory_slot *memslot)
 {
@@ -1852,7 +1857,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
- kvm_free_stage2_pgd(&kvm->arch.mmu);
+ kvm_uninit_stage2_mmu(kvm);
 }
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
From patchwork Fri Jan 13 03:49:57 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099713
Date: Fri, 13 Jan 2023 03:49:57 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-7-ricarkol@google.com>
Subject: [PATCH 6/9] KVM: arm64: Split huge pages when dirty logging is enabled
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

Split huge pages eagerly when enabling dirty logging. The goal is to
avoid doing it while faulting on write-protected pages, which
negatively impacts guest performance.

A memslot marked for dirty logging is split in 1GB pieces at a time.
This is in order to release the mmu_lock and give other kernel threads
the opportunity to run, and also in order to allocate enough pages to
split a 1GB range worth of huge pages (or a single 1GB huge page). Note
that these page allocations can fail, so eager page splitting is
best-effort. This is not a correctness issue though, as huge pages can
still be split on write-faults.

The benefits of eager page splitting are the same as in x86, added with
commit a3fe5dbda0a4 ("KVM: x86/mmu: Split huge pages mapped by the TDP
MMU when dirty logging is enabled"). For example, when running
dirty_log_perf_test with 64 virtual CPUs (Ampere Altra), 1GB per vCPU,
50% reads, and 2MB HugeTLB memory, the time it takes vCPUs to access
all of their memory after dirty logging is enabled decreased by 44%
from 2.58s to 1.42s.
Signed-off-by: Ricardo Koller --- arch/arm64/include/asm/kvm_host.h | 30 ++++++++ arch/arm64/kvm/mmu.c | 110 +++++++++++++++++++++++++++++- 2 files changed, 138 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 35a159d131b5..6ab37209b1d1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -153,6 +153,36 @@ struct kvm_s2_mmu { /* The last vcpu id that ran on each physical CPU */ int __percpu *last_vcpu_ran; + /* + * Memory cache used to split EAGER_PAGE_SPLIT_CHUNK_SIZE worth of huge + * pages. It is used to allocate stage2 page tables while splitting + * huge pages. Its capacity should be EAGER_PAGE_SPLIT_CACHE_CAPACITY. + * Note that the choice of EAGER_PAGE_SPLIT_CHUNK_SIZE influences both + * the capacity of the split page cache (CACHE_CAPACITY), and how often + * KVM reschedules. Be wary of raising CHUNK_SIZE too high. + * + * A good heuristic to pick CHUNK_SIZE is that it should be larger than + * all the available huge-page sizes, and be a multiple of all the + * other ones; for example, 1GB when all the available huge-page sizes + * are (1GB, 2MB, 32MB, 512MB). + * + * CACHE_CAPACITY should have enough pages to cover CHUNK_SIZE; for + * example, 1GB requires the following number of PAGE_SIZE-pages: + * - 512 when using 2MB hugepages with 4KB granules (1GB / 2MB). + * - 513 when using 1GB hugepages with 4KB granules (1 + (1GB / 2MB)). + * - 32 when using 32MB hugepages with 16KB granule (1GB / 32MB). + * - 2 when using 512MB hugepages with 64KB granules (1GB / 512MB). + * CACHE_CAPACITY below assumes the worst case: 1GB hugepages with 4KB + * granules. + * + * Protected by kvm->slots_lock. + */ +#define EAGER_PAGE_SPLIT_CHUNK_SIZE SZ_1G +#define EAGER_PAGE_SPLIT_CACHE_CAPACITY \ + (DIV_ROUND_UP_ULL(EAGER_PAGE_SPLIT_CHUNK_SIZE, SZ_1G) + \ + DIV_ROUND_UP_ULL(EAGER_PAGE_SPLIT_CHUNK_SIZE, SZ_2M)) + struct kvm_mmu_memory_cache split_page_cache; + struct kvm_arch *arch; }; diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 700c5774b50d..41ee330edae3 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -31,14 +31,24 @@ static phys_addr_t hyp_idmap_vector; static unsigned long io_map_base; -static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end) +bool __read_mostly eager_page_split = true; +module_param(eager_page_split, bool, 0644); + +static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end, + phys_addr_t size) { - phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL); phys_addr_t boundary = ALIGN_DOWN(addr + size, size); return (boundary - 1 < end - 1) ? boundary : end; } +static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end) +{ + phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL); + + return __stage2_range_addr_end(addr, end, size); +} + /* * Release kvm_mmu_lock periodically if the memory region is large. 
Otherwise, * we may see kernel panics with CONFIG_DETECT_HUNG_TASK, @@ -71,6 +81,64 @@ static int stage2_apply_range(struct kvm *kvm, phys_addr_t addr, return ret; } +static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min) +{ + return kvm_mmu_memory_cache_nr_free_objects(cache) < min; +} + +static bool need_topup_split_page_cache_or_resched(struct kvm *kvm) +{ + struct kvm_mmu_memory_cache *cache; + + if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) + return true; + + cache = &kvm->arch.mmu.split_page_cache; + return need_topup(cache, EAGER_PAGE_SPLIT_CACHE_CAPACITY); +} + +static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr, + phys_addr_t end) +{ + struct kvm_mmu_memory_cache *cache; + struct kvm_pgtable *pgt; + int ret; + u64 next; + int cache_capacity = EAGER_PAGE_SPLIT_CACHE_CAPACITY; + + lockdep_assert_held_write(&kvm->mmu_lock); + + lockdep_assert_held(&kvm->slots_lock); + + cache = &kvm->arch.mmu.split_page_cache; + + do { + if (need_topup_split_page_cache_or_resched(kvm)) { + write_unlock(&kvm->mmu_lock); + cond_resched(); + /* Eager page splitting is best-effort. */ + ret = __kvm_mmu_topup_memory_cache(cache, + cache_capacity, + cache_capacity); + write_lock(&kvm->mmu_lock); + if (ret) + break; + } + + pgt = kvm->arch.mmu.pgt; + if (!pgt) + return -EINVAL; + + next = __stage2_range_addr_end(addr, end, + EAGER_PAGE_SPLIT_CHUNK_SIZE); + ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache); + if (ret) + break; + } while (addr = next, addr != end); + + return ret; +} + #define stage2_apply_range_resched(kvm, addr, end, fn) \ stage2_apply_range(kvm, addr, end, fn, true) @@ -755,6 +823,8 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t for_each_possible_cpu(cpu) *per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1; + mmu->split_page_cache.gfp_zero = __GFP_ZERO; + mmu->pgt = pgt; mmu->pgd_phys = __pa(pgt->pgd); return 0; @@ -769,6 +839,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t void kvm_uninit_stage2_mmu(struct kvm *kvm) { kvm_free_stage2_pgd(&kvm->arch.mmu); + kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache); } static void stage2_unmap_memslot(struct kvm *kvm, @@ -996,6 +1067,29 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, stage2_wp_range(&kvm->arch.mmu, start, end); } +/** + * kvm_mmu_split_memory_region() - split the stage 2 blocks into PAGE_SIZE + * pages for memory slot + * @kvm: The KVM pointer + * @slot: The memory slot to split + * + * Acquires kvm->mmu_lock. Called with kvm->slots_lock mutex acquired, + * serializing operations for VM memory regions. + */ +static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot) +{ + struct kvm_memslots *slots = kvm_memslots(kvm); + struct kvm_memory_slot *memslot = id_to_memslot(slots, slot); + phys_addr_t start, end; + + start = memslot->base_gfn << PAGE_SHIFT; + end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; + + write_lock(&kvm->mmu_lock); + kvm_mmu_split_huge_pages(kvm, start, end); + write_unlock(&kvm->mmu_lock); +} + /* * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected * dirty pages. @@ -1783,7 +1877,19 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, if (kvm_dirty_log_manual_protect_and_init_set(kvm)) return; + if (READ_ONCE(eager_page_split)) + kvm_mmu_split_memory_region(kvm, new->id); + kvm_mmu_wp_memory_region(kvm, new->id); + } else { + /* + * Free any leftovers from the eager page splitting cache. 
Do
+ * this when deleting, moving, disabling dirty logging, or
+ * creating the memslot (a nop). Doing it for deletes makes
+ * sure we don't leak memory, and there's no need to keep the
+ * cache around for any of the other cases.
+ */
+ kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
 }
 }

From patchwork Fri Jan 13 03:49:58 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099714
Date: Fri, 13 Jan 2023 03:49:58 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-8-ricarkol@google.com>
Subject: [PATCH 7/9] KVM: arm64: Open-code kvm_mmu_write_protect_pt_masked()
From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the functionality of kvm_mmu_write_protect_pt_masked() into its caller, kvm_arch_mmu_enable_log_dirty_pt_masked(). This will be used in a subsequent commit in order to share some of the code in kvm_arch_mmu_enable_log_dirty_pt_masked(). No functional change intended. Signed-off-by: Ricardo Koller --- arch/arm64/kvm/mmu.c | 42 +++++++++++++++--------------------------- 1 file changed, 15 insertions(+), 27 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 41ee330edae3..009468822bca 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1045,28 +1045,6 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot) kvm_flush_remote_tlbs(kvm); } -/** - * kvm_mmu_write_protect_pt_masked() - write protect dirty pages - * @kvm: The KVM pointer - * @slot: The memory slot associated with mask - * @gfn_offset: The gfn offset in memory slot - * @mask: The mask of dirty pages at offset 'gfn_offset' in this memory - * slot to be write protected - * - * Walks bits set in mask write protects the associated pte's. Caller must - * acquire kvm_mmu_lock. - */ -static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, - struct kvm_memory_slot *slot, - gfn_t gfn_offset, unsigned long mask) -{ - phys_addr_t base_gfn = slot->base_gfn + gfn_offset; - phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT; - phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; - - stage2_wp_range(&kvm->arch.mmu, start, end); -} - /** * kvm_mmu_split_memory_region() - split the stage 2 blocks into PAGE_SIZE * pages for memory slot @@ -1091,17 +1069,27 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot) } /* - * kvm_arch_mmu_enable_log_dirty_pt_masked - enable dirty logging for selected - * dirty pages. + * kvm_arch_mmu_enable_log_dirty_pt_masked() - enable dirty logging for selected pages. + * @kvm: The KVM pointer + * @slot: The memory slot associated with mask + * @gfn_offset: The gfn offset in memory slot + * @mask: The mask of pages at offset 'gfn_offset' in this memory + * slot to enable dirty logging on * - * It calls kvm_mmu_write_protect_pt_masked to write protect selected pages to - * enable dirty logging for them. + * Writes protect selected pages to enable dirty logging for them. Caller must + * acquire kvm->mmu_lock. 
 */
 void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 struct kvm_memory_slot *slot,
 gfn_t gfn_offset, unsigned long mask)
 {
- kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+ phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+ phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
+ phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+
+ lockdep_assert_held_write(&kvm->mmu_lock);
+
+ stage2_wp_range(&kvm->arch.mmu, start, end);
 }
 static void kvm_send_hwpoison_signal(unsigned long address, short lsb)

From patchwork Fri Jan 13 03:49:59 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099715
Date: Fri, 13 Jan 2023 03:49:59 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
Mime-Version: 1.0 References: <20230113035000.480021-1-ricarkol@google.com> X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog Message-ID: <20230113035000.480021-9-ricarkol@google.com> Subject: [PATCH 8/9] KVM: arm64: Split huge pages during KVM_CLEAR_DIRTY_LOG From: Ricardo Koller To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com, yuzenghui@huawei.com, dmatlack@google.com Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com, catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com, alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com, gshan@redhat.com, reijiw@google.com, rananta@google.com, bgardon@google.com, ricarkol@gmail.com, Ricardo Koller Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This is the arm64 counterpart of commit cb00a70bd4b7 ("KVM: x86/mmu: Split huge pages mapped by the TDP MMU during KVM_CLEAR_DIRTY_LOG"), which has the benefit of splitting the cost of splitting a memslot across multiple ioctls. Split huge pages on the range specified using KVM_CLEAR_DIRTY_LOG. And do not split when enabling dirty logging if KVM_DIRTY_LOG_INITIALLY_SET is set. Signed-off-by: Ricardo Koller --- arch/arm64/kvm/mmu.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 009468822bca..08f28140c4a9 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1076,8 +1076,8 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot) * @mask: The mask of pages at offset 'gfn_offset' in this memory * slot to enable dirty logging on * - * Writes protect selected pages to enable dirty logging for them. Caller must - * acquire kvm->mmu_lock. + * Splits selected pages to PAGE_SIZE and then writes protect them to enable + * dirty logging for them. Caller must acquire kvm->mmu_lock. */ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, struct kvm_memory_slot *slot, @@ -1090,6 +1090,14 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, lockdep_assert_held_write(&kvm->mmu_lock); stage2_wp_range(&kvm->arch.mmu, start, end); + + /* + * If initially-all-set mode is not set, then huge-pages were already + * split when enabling dirty logging: no need to do it again. + */ + if (kvm_dirty_log_manual_protect_and_init_set(kvm) && + READ_ONCE(eager_page_split)) + kvm_mmu_split_huge_pages(kvm, start, end); } static void kvm_send_hwpoison_signal(unsigned long address, short lsb) @@ -1875,7 +1883,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, * this when deleting, moving, disabling dirty logging, or * creating the memslot (a nop). Doing it for deletes makes * sure we don't leak memory, and there's no need to keep the - * cache around for any of the other cases. + * cache around for any of the other cases. Keeping the cache + * is useful for succesive KVM_CLEAR_DIRTY_LOG calls, which is + * not handled in this function. 
@@ -1875,7 +1883,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * this when deleting, moving, disabling dirty logging, or
 	 * creating the memslot (a nop). Doing it for deletes makes
 	 * sure we don't leak memory, and there's no need to keep the
-	 * cache around for any of the other cases.
+	 * cache around for any of the other cases. Keeping the cache
+	 * is useful for successive KVM_CLEAR_DIRTY_LOG calls, which is
+	 * not handled in this function.
 	 */
 	kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
 }
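For reference, split_page_cache follows the generic kvm_mmu_memory_cache pattern:
top up before the walk, allocate table pages from the cache during the walk, and
free only when the memslot changes. A schematic sketch, not the series' exact
code (the function name and top-up count here are placeholders):

/* Schematic only: how a split over [start, end) would use the cache. */
static int example_split_one_range(struct kvm *kvm, phys_addr_t start,
				   phys_addr_t end)
{
	struct kvm_mmu_memory_cache *cache = &kvm->arch.mmu.split_page_cache;
	int r;

	/* Pre-allocate so the page-table walk never allocates under locks. */
	r = kvm_mmu_topup_memory_cache(cache, 1);
	if (r)
		return r;

	/*
	 * ... walk [start, end), replacing block PTEs with tables whose
	 * pages come from kvm_mmu_memory_cache_alloc(cache) ...
	 *
	 * The cache is intentionally not freed here: successive
	 * KVM_CLEAR_DIRTY_LOG calls reuse the leftover pages, and
	 * kvm_arch_commit_memory_region() above frees it when the memslot
	 * goes away or dirty logging is disabled.
	 */
	return 0;
}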
From patchwork Fri Jan 13 03:50:00 2023
X-Patchwork-Submitter: Ricardo Koller
X-Patchwork-Id: 13099716
Date: Fri, 13 Jan 2023 03:50:00 +0000
In-Reply-To: <20230113035000.480021-1-ricarkol@google.com>
References: <20230113035000.480021-1-ricarkol@google.com>
Message-ID: <20230113035000.480021-10-ricarkol@google.com>
Subject: [PATCH 9/9] KVM: arm64: Use local TLBI on permission relaxation
From: Ricardo Koller
To: pbonzini@redhat.com, maz@kernel.org, oupton@google.com,
    yuzenghui@huawei.com, dmatlack@google.com
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, qperret@google.com,
    catalin.marinas@arm.com, andrew.jones@linux.dev, seanjc@google.com,
    alexandru.elisei@arm.com, suzuki.poulose@arm.com, eric.auger@redhat.com,
    gshan@redhat.com, reijiw@google.com, rananta@google.com,
    bgardon@google.com, ricarkol@gmail.com, Ricardo Koller

From: Marc Zyngier

Broadcast TLB invalidations (TLBIs) are usually less performant than
their local variants. In particular, we observed some implementations
that take milliseconds to complete parallel broadcast TLBIs.

It's safe to use local, non-shareable, TLBIs when relaxing permissions
on a PTE in the KVM case, for a couple of reasons. First, according to
the Arm ARM (DDI 0487H.a D5-4913), permission relaxation does not need
break-before-make. Second, KVM does not set the VTTBR_EL2.CnP bit, so
each PE has its own TLB entry for the same page. KVM can tolerate that
when doing permission relaxation (i.e., not having the changes
broadcast to all PEs).

Signed-off-by: Marc Zyngier
Signed-off-by: Ricardo Koller
---
 arch/arm64/include/asm/kvm_asm.h   |  4 +++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 10 ++++++
 arch/arm64/kvm/hyp/nvhe/tlb.c      | 54 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c       |  2 +-
 arch/arm64/kvm/hyp/vhe/tlb.c       | 32 ++++++++++++++++++
 5 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 43c3bc0f9544..bb17b2ead4c7 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa,
+	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa_nsh,
 	__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
 	__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
@@ -225,6 +226,9 @@ extern void __kvm_flush_vm_context(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
+extern void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
+					 phys_addr_t ipa,
+					 int level);
 extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
 
 extern void __kvm_timer_set_cntvoff(u64 cntvoff);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 728e01d4536b..c6bf1e49ca93 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -125,6 +125,15 @@ static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid_ipa(kern_hyp_va(mmu), ipa, level);
 }
 
+static void handle___kvm_tlb_flush_vmid_ipa_nsh(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
+	DECLARE_REG(phys_addr_t, ipa, host_ctxt, 2);
+	DECLARE_REG(int, level, host_ctxt, 3);
+
+	__kvm_tlb_flush_vmid_ipa_nsh(kern_hyp_va(mmu), ipa, level);
+}
+
 static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -315,6 +324,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
+	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa_nsh),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
 	HANDLE_FUNC(__kvm_timer_set_cntvoff),
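To make the difference concrete, the broadcast sequence below is roughly what the
existing __kvm_tlb_flush_vmid_ipa() does today, while the local sequence is what the
new *_nsh helper (added next) uses; only the barrier domains and TLBI variants change.
This is an illustration only, with the VMID switch and context save/restore elided:

	/*
	 * Broadcast: the invalidation is propagated to every PE in the
	 * inner-shareable domain (existing __kvm_tlb_flush_vmid_ipa).
	 */
	dsb(ishst);
	__tlbi_level(ipas2e1is, ipa, level);	/* stage-2, inner-shareable */
	dsb(ish);
	__tlbi(vmalle1is);			/* stage-1, inner-shareable */
	dsb(ish);
	isb();

	/*
	 * Local: only the PE performing the permission relaxation
	 * (the new __kvm_tlb_flush_vmid_ipa_nsh).
	 */
	dsb(nshst);
	__tlbi_level(ipas2e1, ipa, level);	/* stage-2, this PE only */
	dsb(nsh);
	__tlbi(vmalle1);			/* stage-1, this PE only */
	dsb(nsh);
	isb();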
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index d296d617f589..ef2b70587f93 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -109,6 +109,60 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
+				  phys_addr_t ipa, int level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(nshst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	/*
+	 * We could do so much better if we had the VA as well.
+	 * Instead, we invalidate Stage-2 for this IPA, and the
+	 * whole of Stage-1. Weep...
+	 */
+	ipa >>= 12;
+	__tlbi_level(ipas2e1, ipa, level);
+
+	/*
+	 * We have to ensure completion of the invalidation at Stage-2,
+	 * since a table walk on another CPU could refill a TLB with a
+	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
+	 * the Stage-1 invalidation happened first.
+	 */
+	dsb(nsh);
+	__tlbi(vmalle1);
+	dsb(nsh);
+	isb();
+
+	/*
+	 * If the host is running at EL1 and we have a VPIPT I-cache,
+	 * then we must perform I-cache maintenance at EL2 in order for
+	 * it to have an effect on the guest. Since the guest cannot hit
+	 * I-cache lines allocated with a different VMID, we don't need
+	 * to worry about junk out of guest reset (we nuke the I-cache on
+	 * VMID rollover), but we do need to be careful when remapping
+	 * executable pages for the same guest. This can happen when KSM
+	 * takes a CoW fault on an executable page, copies the page into
+	 * a page that was previously mapped in the guest and then needs
+	 * to invalidate the guest view of the I-cache for that page
+	 * from EL1. To solve this, we invalidate the entire I-cache when
+	 * unmapping a page from a guest if we have a VPIPT I-cache but
+	 * the host is running at EL1. As above, we could do better if
+	 * we had the VA.
+	 *
+	 * The moral of this story is: if you have a VPIPT I-cache, then
+	 * you should be running with VHE enabled.
+	 */
+	if (icache_is_vpipt())
+		icache_inval_all_pou();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index db9d1a28769b..7d694d12b5c4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1148,7 +1148,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
 				       KVM_PGTABLE_WALK_SHARED);
 	if (!ret)
-		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, pgt->mmu, addr, level);
+		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
 }
 
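In practice, kvm_pgtable_stage2_relax_perms() is reached from the stage-2 fault
handler when the mapping already exists and only its permissions need to change,
which is why the local TLBI is sufficient there. A simplified sketch of that call
site (condensed from user_mem_abort() in arch/arm64/kvm/mmu.c; the 'flags' argument
and surrounding error handling are placeholders that vary by kernel version):

	/* Simplified, illustrative view of user_mem_abort(). */
	if (fault_status == FSC_PERM && vma_pagesize == fault_granule) {
		/*
		 * The PTE already exists; only permission bits are relaxed,
		 * so no break-before-make is needed and, with this patch,
		 * only a local, non-shareable TLBI is issued.
		 */
		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
	} else {
		/* New or changed mapping: full map path, broadcast invalidation. */
		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
					     __pfn_to_phys(pfn), prot,
					     memcache, flags);
	}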
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 24cef9b87f9e..e69da550cdc5 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -111,6 +111,38 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 	__tlb_switch_to_host(&cxt);
 }
 
+void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
+				  phys_addr_t ipa, int level)
+{
+	struct tlb_inv_context cxt;
+
+	dsb(nshst);
+
+	/* Switch to requested VMID */
+	__tlb_switch_to_guest(mmu, &cxt);
+
+	/*
+	 * We could do so much better if we had the VA as well.
+	 * Instead, we invalidate Stage-2 for this IPA, and the
+	 * whole of Stage-1. Weep...
+	 */
+	ipa >>= 12;
+	__tlbi_level(ipas2e1, ipa, level);
+
+	/*
+	 * We have to ensure completion of the invalidation at Stage-2,
+	 * since a table walk on another CPU could refill a TLB with a
+	 * complete (S1 + S2) walk based on the old Stage-2 mapping if
+	 * the Stage-1 invalidation happened first.
+	 */
+	dsb(nsh);
+	__tlbi(vmalle1);
+	dsb(nsh);
+	isb();
+
+	__tlb_switch_to_host(&cxt);
+}
+
 void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;