From patchwork Fri Aug 13 20:34:59 2021
From: David Matlack <dmatlack@google.com>
Date: Fri, 13 Aug 2021 20:34:59 +0000
Message-Id: <20210813203504.2742757-2-dmatlack@google.com>
In-Reply-To: <20210813203504.2742757-1-dmatlack@google.com>
Subject: [RFC PATCH 1/6] KVM: x86/mmu: Rename try_async_pf to kvm_faultin_pfn in comment
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
    Vitaly Kuznetsov, Sean Christopherson, David Matlack

try_async_pf was renamed just before this comment was added, and the
comment was never updated to match.

Fixes: 0ed0de3de2d9 ("KVM: MMU: change try_async_pf() arguments to kvm_page_fault")
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index ce369a533800..2c726b255fa8 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -158,7 +158,7 @@ struct kvm_page_fault {
 	/* Shifted addr, or result of guest page table walk if addr is a gva. */
 	gfn_t gfn;
 
-	/* Outputs of try_async_pf.  */
+	/* Outputs of kvm_faultin_pfn.  */
 	kvm_pfn_t pfn;
 	hva_t hva;
 	bool map_writable;
From patchwork Fri Aug 13 20:35:00 2021
From: David Matlack <dmatlack@google.com>
Date: Fri, 13 Aug 2021 20:35:00 +0000
Message-Id: <20210813203504.2742757-3-dmatlack@google.com>
In-Reply-To: <20210813203504.2742757-1-dmatlack@google.com>
Subject: [RFC PATCH 2/6] KVM: x86/mmu: Fold rmap_recycle into rmap_add
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
    Vitaly Kuznetsov, Sean Christopherson, David Matlack

Consolidate rmap_recycle and rmap_add into a single function since they
are only ever called together (and only from one place). This has a nice
side effect of eliminating an extra kvm_vcpu_gfn_to_memslot(). In
addition it makes mmu_set_spte(), which is a very long function, a
little shorter.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 40 ++++++++++++++--------------------------
 1 file changed, 14 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a272ccbddfa1..3352312ab1c9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1076,20 +1076,6 @@ static bool rmap_can_add(struct kvm_vcpu *vcpu)
 	return kvm_mmu_memory_cache_nr_free_objects(mc);
 }
 
-static int rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
-{
-	struct kvm_memory_slot *slot;
-	struct kvm_mmu_page *sp;
-	struct kvm_rmap_head *rmap_head;
-
-	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
-	return pte_list_add(vcpu, spte, rmap_head);
-}
-
-
 static void rmap_remove(struct kvm *kvm, u64 *spte)
 {
 	struct kvm_memslots *slots;
@@ -1102,9 +1088,9 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 	gfn = kvm_mmu_page_get_gfn(sp, spte - sp->spt);
 
 	/*
-	 * Unlike rmap_add and rmap_recycle, rmap_remove does not run in the
-	 * context of a vCPU so have to determine which memslots to use based
-	 * on context information in sp->role.
+	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
+	 * so we have to determine which memslots to use based on context
+	 * information in sp->role.
 	 */
 	slots = kvm_memslots_for_spte_role(kvm, sp->role);
 
@@ -1644,19 +1630,24 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
+static void rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 {
 	struct kvm_memory_slot *slot;
-	struct kvm_rmap_head *rmap_head;
 	struct kvm_mmu_page *sp;
+	struct kvm_rmap_head *rmap_head;
+	int rmap_count;
 
 	sp = sptep_to_sp(spte);
+	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
+	rmap_count = pte_list_add(vcpu, spte, rmap_head);
 
-	kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
-	kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
-					   KVM_PAGES_PER_HPAGE(sp->role.level));
+	if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
+		kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0));
+		kvm_flush_remote_tlbs_with_address(
+				vcpu->kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
+	}
 }
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -2694,7 +2685,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			bool host_writable)
 {
 	int was_rmapped = 0;
-	int rmap_count;
 	int set_spte_ret;
 	int ret = RET_PF_FIXED;
 	bool flush = false;
@@ -2754,9 +2744,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 
 	if (!was_rmapped) {
 		kvm_update_page_stats(vcpu->kvm, level, 1);
-		rmap_count = rmap_add(vcpu, sptep, gfn);
-		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
-			rmap_recycle(vcpu, sptep, gfn);
+		rmap_add(vcpu, sptep, gfn);
 	}
 
 	return ret;
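[Editor's note] The shape of this consolidation is easy to see outside the kernel. Below is a minimal, self-contained C model of the refactor (hypothetical names, not the KVM code): before, the add path returned a count and the caller applied the recycle threshold; after, the threshold check and the recycle step live inside the add path, so callers need neither the count nor a second call. In the kernel patch the "recycle" step is the kvm_unmap_rmapp() plus TLB-flush pair, and folding it in is what lets the combined rmap_add() reuse the single memslot lookup mentioned in the commit message.

```c
#include <stdio.h>

#define RECYCLE_THRESHOLD 1000

struct rmap_head { int count; };

/* Old split: add returns a count, the caller decides when to recycle. */
static int rmap_add_old(struct rmap_head *head)
{
	return ++head->count;
}

static void rmap_recycle_old(struct rmap_head *head)
{
	head->count = 0;	/* stands in for zapping the rmap and flushing TLBs */
}

/* Folded version: the threshold check happens inside the add path. */
static void rmap_add_new(struct rmap_head *head)
{
	if (++head->count > RECYCLE_THRESHOLD)
		head->count = 0;	/* recycle step, invisible to callers */
}

int main(void)
{
	struct rmap_head a = { 0 }, b = { 0 };

	for (int i = 0; i < 1500; i++) {
		int n = rmap_add_old(&a);

		if (n > RECYCLE_THRESHOLD)
			rmap_recycle_old(&a);

		rmap_add_new(&b);
	}

	/* Both variants end up in the same state. */
	printf("old=%d new=%d\n", a.count, b.count);
	return 0;
}
```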
From patchwork Fri Aug 13 20:35:01 2021
From: David Matlack <dmatlack@google.com>
Date: Fri, 13 Aug 2021 20:35:01 +0000
Message-Id: <20210813203504.2742757-4-dmatlack@google.com>
In-Reply-To: <20210813203504.2742757-1-dmatlack@google.com>
Subject: [RFC PATCH 3/6] KVM: x86/mmu: Pass the memslot around via struct kvm_page_fault
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
    Vitaly Kuznetsov, Sean Christopherson, David Matlack

The memslot for the faulting gfn is used throughout the page fault
handling code, so capture it in kvm_page_fault as soon as the gfn is
known and use it in the page fault handling code that has direct access
to the kvm_page_fault struct.

This, in combination with the subsequent patch, improves "Populate
memory time" in dirty_log_perf_test by 5% when using the legacy MMU.
There is no discernible improvement to the performance of the TDP MMU.

No functional change intended.

Suggested-by: Ben Gardon
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu.h             |  3 +++
 arch/x86/kvm/mmu/mmu.c         | 27 +++++++++------------------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c     | 10 +++++-----
 4 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 2c726b255fa8..8d13333f0345 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -158,6 +158,9 @@ struct kvm_page_fault {
 	/* Shifted addr, or result of guest page table walk if addr is a gva. */
 	gfn_t gfn;
 
+	/* The memslot containing gfn. May be NULL. */
+	struct kvm_memory_slot *slot;
+
 	/* Outputs of kvm_faultin_pfn.  */
 	kvm_pfn_t pfn;
 	hva_t hva;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3352312ab1c9..fb2c95e8df00 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2890,7 +2890,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
-	struct kvm_memory_slot *slot;
+	struct kvm_memory_slot *slot = fault->slot;
 	kvm_pfn_t mask;
 
 	fault->huge_page_disallowed = fault->exec && fault->nx_huge_page_workaround_enabled;
@@ -2901,8 +2901,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
 		return;
 
-	slot = gfn_to_memslot_dirty_bitmap(vcpu, fault->gfn, true);
-	if (!slot)
+	if (kvm_slot_dirty_track_enabled(slot))
 		return;
 
 	/*
@@ -3076,13 +3075,9 @@ static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
  * someone else modified the SPTE from its original value.
  */
 static bool
-fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
			u64 *sptep, u64 old_spte, u64 new_spte)
 {
-	gfn_t gfn;
-
-	WARN_ON(!sp->role.direct);
-
 	/*
 	 * Theoretically we could also set dirty bit (and flush TLB) here in
 	 * order to eliminate unnecessary PML logging. See comments in
@@ -3098,14 +3093,8 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
 		return false;
 
-	if (is_writable_pte(new_spte) && !is_writable_pte(old_spte)) {
-		/*
-		 * The gfn of direct spte is stable since it is
-		 * calculated by sp->gfn.
-		 */
-		gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
-		kvm_vcpu_mark_page_dirty(vcpu, gfn);
-	}
+	if (is_writable_pte(new_spte) && !is_writable_pte(old_spte))
+		mark_page_dirty_in_slot(vcpu->kvm, fault->slot, fault->gfn);
 
 	return true;
 }
@@ -3233,7 +3222,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		 * since the gfn is not stable for indirect shadow page. See
 		 * Documentation/virt/kvm/locking.rst to get more detail.
 		 */
-		if (fast_pf_fix_direct_spte(vcpu, sp, sptep, spte, new_spte)) {
+		if (fast_pf_fix_direct_spte(vcpu, fault, sptep, spte, new_spte)) {
 			ret = RET_PF_FIXED;
 			break;
 		}
@@ -3823,7 +3812,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, int *r)
 {
-	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
+	struct kvm_memory_slot *slot = fault->slot;
 	bool async;
 
 	/*
@@ -3888,6 +3877,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	int r;
 
 	fault->gfn = fault->addr >> PAGE_SHIFT;
+	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
+
 	if (page_fault_handle_page_track(vcpu, fault))
 		return RET_PF_EMULATE;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f70afecbf3a2..50ade6450ace 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -847,6 +847,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	}
 
 	fault->gfn = walker.gfn;
+	fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);
+
 	if (page_fault_handle_page_track(vcpu, fault)) {
 		shadow_page_table_clear_flood(vcpu, fault->addr);
 		return RET_PF_EMULATE;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 47ec9f968406..6f733a68d750 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -533,6 +533,7 @@ static inline bool tdp_mmu_set_spte_atomic_no_dirty_log(struct kvm *kvm,
  * TDP page fault.
  *
  * @vcpu: The vcpu instance that took the TDP page fault.
+ * @fault: The kvm_page_fault being resolved by this SPTE.
  * @iter: a tdp_iter instance currently on the SPTE that should be set
  * @new_spte: The value the SPTE should be set to
  *
@@ -540,6 +541,7 @@ static inline bool tdp_mmu_set_spte_atomic_no_dirty_log(struct kvm *kvm,
  * this function will have no side-effects.
  */
 static inline bool tdp_mmu_map_set_spte_atomic(struct kvm_vcpu *vcpu,
+					       struct kvm_page_fault *fault,
					       struct tdp_iter *iter,
					       u64 new_spte)
 {
@@ -553,12 +555,10 @@ static inline bool tdp_mmu_map_set_spte_atomic(struct kvm_vcpu *vcpu,
 	 * handle_changed_spte_dirty_log() to leverage vcpu->last_used_slot.
 	 */
 	if (is_writable_pte(new_spte)) {
-		struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, iter->gfn);
-
-		if (slot && kvm_slot_dirty_track_enabled(slot)) {
+		if (fault->slot && kvm_slot_dirty_track_enabled(fault->slot)) {
 			/* Enforced by kvm_mmu_hugepage_adjust. */
 			WARN_ON_ONCE(iter->level > PG_LEVEL_4K);
-			mark_page_dirty_in_slot(kvm, slot, iter->gfn);
+			mark_page_dirty_in_slot(kvm, fault->slot, iter->gfn);
 		}
 	}
 
@@ -934,7 +934,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
-	else if (!tdp_mmu_map_set_spte_atomic(vcpu, iter, new_spte))
+	else if (!tdp_mmu_map_set_spte_atomic(vcpu, fault, iter, new_spte))
 		return RET_PF_RETRY;
 
 	/*
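[Editor's note] The core idea here is caching an expensive lookup in the per-fault context object instead of redoing it at each stage. Below is a compilable, heavily simplified model of that pattern (toy types and names, not the KVM API): the handler resolves the slot once when it learns the gfn, and every later stage reads fault->slot rather than repeating the lookup.

```c
#include <stdio.h>

struct memslot { unsigned long base_gfn, npages; };

struct page_fault {
	unsigned long gfn;
	struct memslot *slot;	/* resolved once per fault, may be NULL */
};

static struct memslot slots[] = { { 0, 512 }, { 1024, 256 } };

/* Stand-in for kvm_vcpu_gfn_to_memslot(): the lookup worth caching. */
static struct memslot *gfn_to_memslot(unsigned long gfn)
{
	for (unsigned int i = 0; i < sizeof(slots) / sizeof(slots[0]); i++)
		if (gfn >= slots[i].base_gfn &&
		    gfn < slots[i].base_gfn + slots[i].npages)
			return &slots[i];
	return NULL;
}

/*
 * Stand-in for the later stages (page-track check, pfn lookup, rmap add)
 * that now consume fault->slot instead of repeating the lookup.
 */
static int resolve_pfn(const struct page_fault *fault)
{
	if (!fault->slot)
		return -1;	/* no slot backing this gfn */
	return (int)(fault->gfn - fault->slot->base_gfn);
}

static int handle_fault(unsigned long gfn)
{
	struct page_fault fault = { .gfn = gfn };

	fault.slot = gfn_to_memslot(gfn);	/* single lookup per fault */
	return resolve_pfn(&fault);
}

int main(void)
{
	printf("%d %d\n", handle_fault(10), handle_fault(4096));
	return 0;
}
```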
From patchwork Fri Aug 13 20:35:02 2021
From: David Matlack <dmatlack@google.com>
Date: Fri, 13 Aug 2021 20:35:02 +0000
Message-Id: <20210813203504.2742757-5-dmatlack@google.com>
In-Reply-To: <20210813203504.2742757-1-dmatlack@google.com>
Subject: [RFC PATCH 4/6] KVM: x86/mmu: Avoid memslot lookup in page_fault_handle_page_track
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
    Vitaly Kuznetsov, Sean Christopherson, David Matlack

Now that kvm_page_fault has a pointer to the memslot it can be passed
down to the page tracking code to avoid a redundant slot lookup.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/include/asm/kvm_page_track.h |  2 ++
 arch/x86/kvm/mmu/mmu.c                |  2 +-
 arch/x86/kvm/mmu/page_track.c         | 20 +++++++++++++-------
 3 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 87bd6025d91d..8766adb52a73 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -61,6 +61,8 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm,
				     enum kvm_page_track_mode mode);
 bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
			      enum kvm_page_track_mode mode);
+bool kvm_slot_page_track_is_active(struct kvm_memory_slot *slot, gfn_t gfn,
+				   enum kvm_page_track_mode mode);
 
 void kvm_page_track_register_notifier(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fb2c95e8df00..c148d481e9b5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3776,7 +3776,7 @@ static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu,
	 * guest is writing the page which is write tracked which can
	 * not be fixed by page fault handler.
	 */
-	if (kvm_page_track_is_active(vcpu, fault->gfn, KVM_PAGE_TRACK_WRITE))
+	if (kvm_slot_page_track_is_active(fault->slot, fault->gfn, KVM_PAGE_TRACK_WRITE))
		return true;
 
	return false;
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index 269f11f92fd0..a9e2e02f2f4f 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -136,19 +136,14 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm,
 }
 EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page);
 
-/*
- * check if the corresponding access on the specified guest page is tracked.
- */
-bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
-			      enum kvm_page_track_mode mode)
+bool kvm_slot_page_track_is_active(struct kvm_memory_slot *slot, gfn_t gfn,
+				   enum kvm_page_track_mode mode)
 {
-	struct kvm_memory_slot *slot;
	int index;
 
	if (WARN_ON(!page_track_mode_is_valid(mode)))
		return false;
 
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
	if (!slot)
		return false;
 
@@ -156,6 +151,17 @@ bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
	return !!READ_ONCE(slot->arch.gfn_track[mode][index]);
 }
 
+/*
+ * check if the corresponding access on the specified guest page is tracked.
+ */
+bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
+			      enum kvm_page_track_mode mode)
+{
+	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+
+	return kvm_slot_page_track_is_active(slot, gfn, mode);
+}
+
 void kvm_page_track_cleanup(struct kvm *kvm)
 {
	struct kvm_page_track_notifier_head *head;
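[Editor's note] The refactor in page_track.c follows a common split: move the logic into a slot-based helper and keep the old vcpu-based entry point as a thin wrapper that performs the lookup for callers that do not already hold a slot. A toy, compilable model of that split (made-up types, not the kernel API):

```c
#include <stdbool.h>
#include <stdio.h>

struct memslot { const unsigned short *gfn_track; unsigned long npages; };
struct vcpu { struct memslot *slot; };	/* toy source for the slot lookup */

/* Core helper: callers that already hold the slot pay no extra lookup. */
static bool slot_gfn_is_tracked(const struct memslot *slot, unsigned long idx)
{
	if (!slot || idx >= slot->npages)
		return false;
	return slot->gfn_track[idx] != 0;
}

/* Old-style entry point, kept as a wrapper that does the lookup itself. */
static bool vcpu_gfn_is_tracked(const struct vcpu *vcpu, unsigned long idx)
{
	return slot_gfn_is_tracked(vcpu->slot, idx);	/* "lookup" + core call */
}

int main(void)
{
	const unsigned short track[4] = { 0, 1, 0, 0 };
	struct memslot slot = { track, 4 };
	struct vcpu vcpu = { &slot };

	printf("%d %d\n", slot_gfn_is_tracked(&slot, 1), vcpu_gfn_is_tracked(&vcpu, 0));
	return 0;
}
```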
From patchwork Fri Aug 13 20:35:03 2021
From: David Matlack <dmatlack@google.com>
Date: Fri, 13 Aug 2021 20:35:03 +0000
Message-Id: <20210813203504.2742757-6-dmatlack@google.com>
In-Reply-To: <20210813203504.2742757-1-dmatlack@google.com>
Subject: [RFC PATCH 5/6] KVM: x86/mmu: Avoid memslot lookup in rmap_add
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
    Vitaly Kuznetsov, Sean Christopherson, David Matlack

Avoid the memslot lookup in rmap_add by passing it down from the fault
handling code to mmu_set_spte and then to rmap_add.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c         | 29 ++++++++---------------------
 arch/x86/kvm/mmu/paging_tmpl.h | 12 +++++++++---
 2 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c148d481e9b5..41e2ef8ad09b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1630,16 +1630,15 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
 #define RMAP_RECYCLE_THRESHOLD 1000
 
-static void rmap_add(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
+static void rmap_add(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+		     u64 *spte, gfn_t gfn)
 {
-	struct kvm_memory_slot *slot;
	struct kvm_mmu_page *sp;
	struct kvm_rmap_head *rmap_head;
	int rmap_count;
 
	sp = sptep_to_sp(spte);
	kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn);
-	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
	rmap_count = pte_list_add(vcpu, spte, rmap_head);
 
@@ -2679,9 +2678,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
	return ret;
 }
 
-static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-			unsigned int pte_access, bool write_fault, int level,
-			gfn_t gfn, kvm_pfn_t pfn, bool speculative,
+static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+			u64 *sptep, unsigned int pte_access, bool write_fault,
+			int level, gfn_t gfn, kvm_pfn_t pfn, bool speculative,
			bool host_writable)
 {
	int was_rmapped = 0;
@@ -2744,24 +2743,12 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 
	if (!was_rmapped) {
		kvm_update_page_stats(vcpu->kvm, level, 1);
-		rmap_add(vcpu, sptep, gfn);
+		rmap_add(vcpu, slot, sptep, gfn);
	}
 
	return ret;
 }
 
-static kvm_pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
-					 bool no_dirty_log)
-{
-	struct kvm_memory_slot *slot;
-
-	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, no_dirty_log);
-	if (!slot)
-		return KVM_PFN_ERR_FAULT;
-
-	return gfn_to_pfn_memslot_atomic(slot, gfn);
-}
-
 static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
				    struct kvm_mmu_page *sp,
				    u64 *start, u64 *end)
@@ -2782,7 +2769,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
		return -1;
 
	for (i = 0; i < ret; i++, gfn++, start++) {
-		mmu_set_spte(vcpu, start, access, false, sp->role.level, gfn,
+		mmu_set_spte(vcpu, slot, start, access, false, sp->role.level, gfn,
			     page_to_pfn(pages[i]), true, true);
		put_page(pages[i]);
	}
@@ -2979,7 +2966,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
			account_huge_nx_page(vcpu->kvm, sp);
	}
 
-	ret = mmu_set_spte(vcpu, it.sptep, ACC_ALL,
+	ret = mmu_set_spte(vcpu, fault->slot, it.sptep, ACC_ALL,
			   fault->write, fault->goal_level, base_gfn, fault->pfn,
			   fault->prefault, fault->map_writable);
	if (ret == RET_PF_SPURIOUS)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 50ade6450ace..653ca44afa58 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -561,6 +561,7 @@ static bool
 FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
 {
+	struct kvm_memory_slot *slot;
	unsigned pte_access;
	gfn_t gfn;
	kvm_pfn_t pfn;
@@ -573,8 +574,13 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
	gfn = gpte_to_gfn(gpte);
	pte_access = sp->role.access & FNAME(gpte_access)(gpte);
	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
-	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
+
+	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn,
			no_dirty_log && (pte_access & ACC_WRITE_MASK));
+	if (!slot)
+		return false;
+
+	pfn = gfn_to_pfn_memslot_atomic(slot, gfn);
	if (is_error_pfn(pfn))
		return false;
 
@@ -582,7 +588,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
	 * we call mmu_set_spte() with host_writable = true because
	 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
	 */
-	mmu_set_spte(vcpu, spte, pte_access, false, PG_LEVEL_4K, gfn, pfn,
+	mmu_set_spte(vcpu, slot, spte, pte_access, false, PG_LEVEL_4K, gfn, pfn,
		     true, true);
	kvm_release_pfn_clean(pfn);
 
@@ -749,7 +755,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
		}
	}
 
-	ret = mmu_set_spte(vcpu, it.sptep, gw->pte_access, fault->write,
+	ret = mmu_set_spte(vcpu, fault->slot, it.sptep, gw->pte_access, fault->write,
			   it.level, base_gfn, fault->pfn, fault->prefault,
			   fault->map_writable);
	if (ret == RET_PF_SPURIOUS)
From patchwork Fri Aug 13 20:35:04 2021
From: David Matlack <dmatlack@google.com>
Date: Fri, 13 Aug 2021 20:35:04 +0000
Message-Id: <20210813203504.2742757-7-dmatlack@google.com>
In-Reply-To: <20210813203504.2742757-1-dmatlack@google.com>
Subject: [RFC PATCH 6/6] KVM: x86/mmu: Avoid memslot lookup in mmu_try_to_unsync_pages
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, Ben Gardon, Joerg Roedel, Jim Mattson, Wanpeng Li,
    Vitaly Kuznetsov, Sean Christopherson, David Matlack

mmu_try_to_unsync_pages checks if page tracking is active for the given
gfn, which requires knowing the memslot. We can pass down the memslot
all the way from mmu_set_spte to avoid this lookup.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/include/asm/kvm_page_track.h |  2 --
 arch/x86/kvm/mmu/mmu.c                | 16 +++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h       |  3 ++-
 arch/x86/kvm/mmu/page_track.c         | 14 +++-----------
 arch/x86/kvm/mmu/paging_tmpl.h        |  4 +++-
 arch/x86/kvm/mmu/spte.c               | 11 ++++++-----
 arch/x86/kvm/mmu/spte.h               |  9 +++++----
 arch/x86/kvm/mmu/tdp_mmu.c            |  2 +-
 8 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
index 8766adb52a73..be76dda0952b 100644
--- a/arch/x86/include/asm/kvm_page_track.h
+++ b/arch/x86/include/asm/kvm_page_track.h
@@ -59,8 +59,6 @@ void kvm_slot_page_track_add_page(struct kvm *kvm,
 void kvm_slot_page_track_remove_page(struct kvm *kvm,
				     struct kvm_memory_slot *slot, gfn_t gfn,
				     enum kvm_page_track_mode mode);
-bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
-			      enum kvm_page_track_mode mode);
 bool kvm_slot_page_track_is_active(struct kvm_memory_slot *slot, gfn_t gfn,
				   enum kvm_page_track_mode mode);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 41e2ef8ad09b..136056b13e15 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2583,7 +2583,8 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
  * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
  * be write-protected.
  */
-int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+			    gfn_t gfn, bool can_unsync)
 {
	struct kvm_mmu_page *sp;
 
@@ -2592,7 +2593,7 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
	 * track machinery is used to write-protect upper-level shadow pages,
	 * i.e. this guards the role.level == 4K assertion below!
	 */
-	if (kvm_page_track_is_active(vcpu, gfn, KVM_PAGE_TRACK_WRITE))
+	if (kvm_slot_page_track_is_active(slot, gfn, KVM_PAGE_TRACK_WRITE))
		return -EPERM;
 
	/*
@@ -2654,8 +2655,8 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
	return 0;
 }
 
-static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-		    unsigned int pte_access, int level,
+static int set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+		    u64 *sptep, unsigned int pte_access, int level,
		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
		    bool can_unsync, bool host_writable)
 {
@@ -2665,8 +2666,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 
	sp = sptep_to_sp(sptep);
 
-	ret = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
-			can_unsync, host_writable, sp_ad_disabled(sp), &spte);
+	ret = make_spte(vcpu, slot, pte_access, level, gfn, pfn, *sptep,
+			speculative, can_unsync, host_writable,
+			sp_ad_disabled(sp), &spte);
 
	if (spte & PT_WRITABLE_MASK)
		kvm_vcpu_mark_page_dirty(vcpu, gfn);
@@ -2717,7 +2719,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
		was_rmapped = 1;
	}
 
-	set_spte_ret = set_spte(vcpu, sptep, pte_access, level, gfn, pfn,
+	set_spte_ret = set_spte(vcpu, slot, sptep, pte_access, level, gfn, pfn,
				speculative, true, host_writable);
	if (set_spte_ret & SET_SPTE_WRITE_PROTECTED_PT) {
		if (write_fault)
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 658d8d228d43..e0c9c68ff617 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -116,7 +116,8 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
	       kvm_x86_ops.cpu_dirty_log_size;
 }
 
-int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync);
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+			    gfn_t gfn, bool can_unsync);
 
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
index a9e2e02f2f4f..4179f4712152 100644
--- a/arch/x86/kvm/mmu/page_track.c
+++ b/arch/x86/kvm/mmu/page_track.c
@@ -136,6 +136,9 @@ void kvm_slot_page_track_remove_page(struct kvm *kvm,
 }
 EXPORT_SYMBOL_GPL(kvm_slot_page_track_remove_page);
 
+/*
+ * check if the corresponding access on the specified guest page is tracked.
+ */
 bool kvm_slot_page_track_is_active(struct kvm_memory_slot *slot, gfn_t gfn,
				   enum kvm_page_track_mode mode)
 {
@@ -151,17 +154,6 @@ bool kvm_slot_page_track_is_active(struct kvm_memory_slot *slot, gfn_t gfn,
	return !!READ_ONCE(slot->arch.gfn_track[mode][index]);
 }
 
-/*
- * check if the corresponding access on the specified guest page is tracked.
- */
-bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
-			      enum kvm_page_track_mode mode)
-{
-	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-
-	return kvm_slot_page_track_is_active(slot, gfn, mode);
-}
-
 void kvm_page_track_cleanup(struct kvm *kvm)
 {
	struct kvm_page_track_notifier_head *head;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 653ca44afa58..f85786534d86 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1086,6 +1086,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 
	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+		struct kvm_memory_slot *slot;
		unsigned pte_access;
		pt_element_t gpte;
		gpa_t pte_gpa;
@@ -1135,7 +1136,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
		host_writable = sp->spt[i] & shadow_host_writable_mask;
 
-		set_spte_ret |= set_spte(vcpu, &sp->spt[i],
+		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+		set_spte_ret |= set_spte(vcpu, slot, &sp->spt[i],
					 pte_access, PG_LEVEL_4K,
					 gfn, spte_to_pfn(sp->spt[i]),
					 true, false, host_writable);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 3e97cdb13eb7..5f0c99532a96 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -89,10 +89,11 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
				     E820_TYPE_RAM);
 }
 
-int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
-	      gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative,
-	      bool can_unsync, bool host_writable, bool ad_disabled,
-	      u64 *new_spte)
+int make_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+	      unsigned int pte_access, int level,
+	      gfn_t gfn, kvm_pfn_t pfn,
+	      u64 old_spte, bool speculative, bool can_unsync,
+	      bool host_writable, bool ad_disabled, u64 *new_spte)
 {
	u64 spte = SPTE_MMU_PRESENT_MASK;
	int ret = 0;
@@ -159,7 +160,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
	 * e.g. it's write-tracked (upper-level SPs) or has one or more
	 * shadow pages and unsync'ing pages is not allowed.
	 */
-	if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) {
+	if (mmu_try_to_unsync_pages(vcpu, slot, gfn, can_unsync)) {
		pgprintk("%s: found shadow page for %llx, marking ro\n",
			 __func__, gfn);
		ret |= SET_SPTE_WRITE_PROTECTED_PT;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index eb7b227fc6cf..6d2446f4c591 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -339,10 +339,11 @@ static inline u64 get_mmio_spte_generation(u64 spte)
 #define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
 #define SET_SPTE_SPURIOUS		BIT(2)
 
-int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
-	      gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative,
-	      bool can_unsync, bool host_writable, bool ad_disabled,
-	      u64 *new_spte);
+int make_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+	      unsigned int pte_access, int level,
+	      gfn_t gfn, kvm_pfn_t pfn,
+	      u64 old_spte, bool speculative, bool can_unsync,
+	      bool host_writable, bool ad_disabled, u64 *new_spte);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
 u64 mark_spte_for_access_track(u64 spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6f733a68d750..cd72184327a4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -927,7 +927,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
	if (unlikely(is_noslot_pfn(fault->pfn)))
		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
	else
-		make_spte_ret = make_spte(vcpu, ACC_ALL, iter->level, iter->gfn,
+		make_spte_ret = make_spte(vcpu, fault->slot, ACC_ALL, iter->level, iter->gfn,
					  fault->pfn, iter->old_spte, fault->prefault, true,
					  fault->map_writable, !shadow_accessed_mask,
					  &new_spte);
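[Editor's note] Taken together, patches 3-6 replace repeated per-stage memslot lookups with a value resolved once at the top of the fault path and threaded down as an explicit parameter. A compilable toy of that threading (illustrative names only, not the actual KVM call chain):

```c
#include <stdio.h>

struct memslot { int id; };

/* Innermost consumer: used to do its own lookup, now receives the slot. */
static int try_to_unsync(const struct memslot *slot, unsigned long gfn)
{
	(void)gfn;
	return slot ? 0 : -1;	/* toy rule: fail when there is no slot */
}

static int make_spte(const struct memslot *slot, unsigned long gfn)
{
	return try_to_unsync(slot, gfn);
}

static int set_spte(const struct memslot *slot, unsigned long gfn)
{
	return make_spte(slot, gfn);
}

/* Outermost caller resolves the slot once and passes it all the way down. */
static int mmu_set_spte(const struct memslot *slot, unsigned long gfn)
{
	return set_spte(slot, gfn);
}

int main(void)
{
	struct memslot slot = { 1 };

	printf("%d %d\n", mmu_set_spte(&slot, 42), mmu_set_spte(NULL, 42));
	return 0;
}
```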