From patchwork Fri May 26 23:44:25 2023
From: Yu Zhao <yuzhao@google.com>
Date: Fri, 26 May 2023 17:44:25 -0600
Message-Id: <20230526234435.662652-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v2 00/10] mm/kvm: locklessly clear the accessed bit
To: Andrew Morton, Paolo Bonzini
Cc: Alistair Popple, Anup Patel, Ben Gardon, Borislav Petkov,
    Catalin Marinas, Chao Peng, Christophe Leroy, Dave Hansen,
    Fabiano Rosas, Gaosheng Cui, Gavin Shan, "H. Peter Anvin",
    Ingo Molnar, James Morse,
    "Jason A. Donenfeld", Jason Gunthorpe, Jonathan Corbet, Marc Zyngier,
    Masami Hiramatsu, Michael Ellerman, Michael Larabel, Mike Rapoport,
    Nicholas Piggin, Oliver Upton, Paul Mackerras, Peter Xu,
    Sean Christopherson, Steven Rostedt, Suzuki K Poulose,
    Thomas Gleixner, Thomas Huth, Will Deacon, Zenghui Yu,
    kvmarm@lists.linux.dev, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org,
    x86@kernel.org, linux-mm@google.com, Yu Zhao

TLDR
====
This patchset adds a fast path to clear the accessed bit without taking
kvm->mmu_lock. It can significantly improve the performance of guests
when the host is under heavy memory pressure.

ChromeOS has been using a similar approach [1] since mid 2021, and it
has proven successful on tens of millions of devices.

This v2 addresses previous requests [2] by refactoring code, removing
inaccurate/redundant text, etc.

[1] https://crrev.com/c/2987928
[2] https://lore.kernel.org/r/20230217041230.2417228-1-yuzhao@google.com/

Overview
========
The goal of this patchset is to optimize the performance of guests when
host memory is overcommitted. It focuses on a simple yet common case
where hardware sets the accessed bit in KVM PTEs and VMs are not
nested. Complex cases fall back to the existing slow path, where
kvm->mmu_lock is taken.

The fast path relies on two techniques to safely clear the accessed
bit: RCU and CAS. The former protects KVM page tables from being freed
while the latter clears the accessed bit atomically against both the
hardware and other software page table walkers.

A new mmu_notifier_ops member, test_clear_young(), supersedes the
existing clear_young() and test_young(). This extended callback can
operate on a range of KVM PTEs individually according to a bitmap, if
the caller provides one.

Evaluation
==========
An existing selftest can quickly demonstrate the effectiveness of this
patchset. On a generic workstation equipped with 128 CPUs and 256GB
DRAM:

  $ sudo max_guest_memory_test -c 64 -m 250 -s 250

  MGLRU         run2
  ------------------
  Before [1]    ~64s
  After         ~51s

  kswapd (MGLRU before)
    100.00%  balance_pgdat
      100.00%  shrink_node
        100.00%  shrink_one
          99.99%  try_to_shrink_lruvec
            99.71%  evict_folios
              97.29%  shrink_folio_list
  ==>>          13.05%  folio_referenced
                  12.83%  rmap_walk_file
                    12.31%  folio_referenced_one
                      7.90%  __mmu_notifier_clear_young
                        7.72%  kvm_mmu_notifier_clear_young
                          7.34%  _raw_write_lock

  kswapd (MGLRU after)
    100.00%  balance_pgdat
      100.00%  shrink_node
        100.00%  shrink_one
          99.99%  try_to_shrink_lruvec
            99.59%  evict_folios
              80.37%  shrink_folio_list
  ==>>           3.74%  folio_referenced
                   3.59%  rmap_walk_file
                     3.19%  folio_referenced_one
                       2.53%  lru_gen_look_around
                         1.06%  __mmu_notifier_test_clear_young

Comprehensive benchmarks are coming soon.

[1] "mm: rmap: Don't flush TLB after checking PTE young for page
    reference" was included so that the comparison is apples to apples.
    https://lore.kernel.org/r/20220706112041.3831-1-21cnbao@gmail.com/
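To make the fast path concrete for readers new to the technique, below
is a minimal userspace sketch of the CAS half. It is an illustration,
not the patchset's code: C11 atomics stand in for the kernel's cmpxchg,
the PTE layout and accessed-bit position are made up, and the RCU half
(which keeps the page tables from being freed under the walker) is
omitted.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_ACCESSED (1ULL << 8)	/* illustrative bit position */

/*
 * Atomically test and clear the accessed bit. The CAS loop retries
 * whenever the PTE changed between the load and the exchange, so a
 * concurrent update by hardware or another walker is never lost.
 */
static bool test_clear_accessed(_Atomic uint64_t *pte)
{
	uint64_t old = atomic_load(pte);

	do {
		if (!(old & PTE_ACCESSED))
			return false;	/* already clear */
	} while (!atomic_compare_exchange_weak(pte, &old,
					       old & ~PTE_ACCESSED));

	return true;	/* was set; now cleared */
}

int main(void)
{
	_Atomic uint64_t pte = PTE_ACCESSED | 0x1000;

	printf("young: %d\n", test_clear_accessed(&pte));	/* 1 */
	printf("young: %d\n", test_clear_accessed(&pte));	/* 0 */
	return 0;
}

In the kernel, the equivalent exchange runs under rcu_read_lock(),
which is what allows the walker to dereference page-table pages safely
without taking kvm->mmu_lock.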
Yu Zhao (10):
  mm/kvm: add mmu_notifier_ops->test_clear_young()
  mm/kvm: use mmu_notifier_ops->test_clear_young()
  kvm/arm64: export stage2_try_set_pte() and macros
  kvm/arm64: make stage2 page tables RCU safe
  kvm/arm64: add kvm_arch_test_clear_young()
  kvm/powerpc: make radix page tables RCU safe
  kvm/powerpc: add kvm_arch_test_clear_young()
  kvm/x86: move tdp_mmu_enabled and shadow_accessed_mask
  kvm/x86: add kvm_arch_test_clear_young()
  mm: multi-gen LRU: use mmu_notifier_test_clear_young()

 Documentation/admin-guide/mm/multigen_lru.rst |   6 +-
 arch/arm64/include/asm/kvm_host.h             |   6 +
 arch/arm64/include/asm/kvm_pgtable.h          |  55 +++++++
 arch/arm64/kvm/arm.c                          |   1 +
 arch/arm64/kvm/hyp/pgtable.c                  |  61 +-------
 arch/arm64/kvm/mmu.c                          |  53 ++++++-
 arch/powerpc/include/asm/kvm_host.h           |   8 +
 arch/powerpc/include/asm/kvm_ppc.h            |   1 +
 arch/powerpc/kvm/book3s.c                     |   6 +
 arch/powerpc/kvm/book3s.h                     |   1 +
 arch/powerpc/kvm/book3s_64_mmu_radix.c        |  65 +++++++-
 arch/powerpc/kvm/book3s_hv.c                  |   5 +
 arch/x86/include/asm/kvm_host.h               |  13 ++
 arch/x86/kvm/mmu.h                            |   6 -
 arch/x86/kvm/mmu/spte.h                       |   1 -
 arch/x86/kvm/mmu/tdp_mmu.c                    |  34 +++++
 include/linux/kvm_host.h                      |  22 +++
 include/linux/mmu_notifier.h                  |  79 ++++++----
 include/linux/mmzone.h                        |   6 +-
 include/trace/events/kvm.h                    |  15 --
 mm/mmu_notifier.c                             |  48 ++----
 mm/rmap.c                                     |   8 +-
 mm/vmscan.c                                   | 139 ++++++++++++++++--
 virt/kvm/kvm_main.c                           | 114 ++++++++------
 24 files changed, 546 insertions(+), 207 deletions(-)
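
Appendix: a sketch of the bitmap mode
=====================================
The overview notes that test_clear_young() can operate on a range of
KVM PTEs individually according to a bitmap the caller provides. The
sketch below is a hypothetical userspace analogue of that mode, not the
kernel API added by this series: the function name, signature, and
bitmap convention (on entry, bit i selects PTE i; on return, only the
bits of PTEs that were young remain set) are all illustrative.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_ACCESSED (1ULL << 8)	/* illustrative bit position */
#define NR_PTES      16

/* Same CAS helper as in the earlier sketch. */
static bool test_clear_accessed(_Atomic uint64_t *pte)
{
	uint64_t old = atomic_load(pte);

	do {
		if (!(old & PTE_ACCESSED))
			return false;
	} while (!atomic_compare_exchange_weak(pte, &old,
					       old & ~PTE_ACCESSED));

	return true;
}

/*
 * Hypothetical analogue of the bitmap mode: clear the accessed bit of
 * each selected PTE, and clear the bitmap bits of PTEs that turn out
 * not to be young, so the caller learns which pages were accessed.
 */
static void test_clear_young_bitmap(_Atomic uint64_t *ptes,
				    unsigned long nr, unsigned long *bitmap)
{
	for (unsigned long i = 0; i < nr; i++) {
		if (!(*bitmap & (1UL << i)))
			continue;		/* not selected by the caller */
		if (!test_clear_accessed(&ptes[i]))
			*bitmap &= ~(1UL << i);	/* selected but not young */
	}
}

int main(void)
{
	_Atomic uint64_t ptes[NR_PTES] = {0};
	unsigned long bitmap = 0xff;		/* test PTEs 0-7 */

	ptes[1] |= PTE_ACCESSED;		/* pretend PTEs 1 and 3 are young */
	ptes[3] |= PTE_ACCESSED;

	test_clear_young_bitmap(ptes, NR_PTES, &bitmap);
	printf("young bitmap: %#lx\n", bitmap);	/* 0xa: bits 1 and 3 */
	return 0;
}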