From patchwork Fri Apr 1 17:55:32 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:32 +0000
Subject: [PATCH v3 01/23] KVM: x86/mmu: Optimize MMU page cache lookup for all direct SPs
Message-Id: <20220401175554.1931568-2-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
X-Mailing-List: kvm@vger.kernel.org

Commit fb58a9c345f6 ("KVM: x86/mmu: Optimize MMU page cache lookup for
fully direct MMUs") skipped the unsync checks and write flood clearing
for fully direct MMUs. We can extend this further to skip the checks
for all direct shadow pages. Direct shadow pages in indirect MMUs
(i.e. shadow paging) are used when shadowing a guest huge page with
smaller pages. Such direct shadow pages, like their counterparts in
fully direct MMUs, are never marked unsync and never have a non-zero
write-flooding count.

Checking sp->role.direct also generates better code than checking
direct_map because, due to register pressure, direct_map has to get
shoved onto the stack and then pulled back off.

No functional change intended.

Reviewed-by: Sean Christopherson
Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1361eb4599b4..dbfda133adbe 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2034,7 +2034,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     int direct,
 					     unsigned int access)
 {
-	bool direct_mmu = vcpu->arch.mmu->direct_map;
 	union kvm_mmu_page_role role;
 	struct hlist_head *sp_list;
 	unsigned quadrant;
@@ -2075,7 +2074,8 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			continue;
 		}
 
-		if (direct_mmu)
+		/* unsync and write-flooding only apply to indirect SPs. */
+		if (sp->role.direct)
 			goto trace_get_page;
 
 		if (sp->unsync) {
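
The check this patch switches to is per shadow page rather than per MMU:
only indirect (shadowing) pages ever need unsync handling or write-flood
tracking. The snippet below is a minimal user-space model with invented
toy types (union page_role, struct shadow_page, needs_unsync_checks); it
is not the kernel's kvm_mmu_page or role layout, only a sketch of the
distinction the patch keys on.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for union kvm_mmu_page_role / struct kvm_mmu_page. */
union page_role {
	struct {
		unsigned level  : 4;
		unsigned direct : 1;	/* page maps guest memory directly */
	};
	unsigned word;
};

struct shadow_page {
	union page_role role;
	bool unsync;
};

/* Direct SPs are never unsync and never track write flooding. */
static bool needs_unsync_checks(const struct shadow_page *sp)
{
	return !sp->role.direct;
}

int main(void)
{
	struct shadow_page direct_sp = { 0 }, indirect_sp = { 0 };

	direct_sp.role.direct = 1;
	indirect_sp.role.direct = 0;

	printf("direct SP needs unsync checks:   %d\n", needs_unsync_checks(&direct_sp));
	printf("indirect SP needs unsync checks: %d\n", needs_unsync_checks(&indirect_sp));
	return 0;
}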

From patchwork Fri Apr 1 17:55:33 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:33 +0000
Subject: [PATCH v3 02/23] KVM: x86/mmu: Use a bool for direct
Message-Id: <20220401175554.1931568-3-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

The parameter "direct" can either be true or false, and all of the
callers pass in a bool variable or a true/false literal, so just use
the type bool.

No functional change intended.
Signed-off-by: David Matlack
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dbfda133adbe..1c8d157c097b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1706,7 +1706,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
 {
 	struct kvm_mmu_page *sp;
 
@@ -2031,7 +2031,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     gfn_t gfn,
 					     gva_t gaddr,
 					     unsigned level,
-					     int direct,
+					     bool direct,
 					     unsigned int access)
 {
 	union kvm_mmu_page_role role;

From patchwork Fri Apr 1 17:55:34 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:34 +0000
Subject: [PATCH v3 03/23] KVM: x86/mmu: Derive shadow MMU page role from parent
Message-Id: <20220401175554.1931568-4-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

Instead of computing the shadow page role from scratch for every new
page, we can derive most of the information from the parent shadow
page. This avoids redundant calculations and reduces the number of
parameters to kvm_mmu_get_page().

Preemptively split out the role calculation to a separate function for
use in a following commit.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 96 +++++++++++++++++++++++-----------
 arch/x86/kvm/mmu/paging_tmpl.h |  9 ++--
 2 files changed, 71 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1c8d157c097b..8253d68cc30b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,30 +2027,14 @@ static void clear_sp_write_flooding_count(u64 *spte)
 	__clear_sp_write_flooding_count(sptep_to_sp(spte));
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
-					     gfn_t gfn,
-					     gva_t gaddr,
-					     unsigned level,
-					     bool direct,
-					     unsigned int access)
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+					     union kvm_mmu_page_role role)
 {
-	union kvm_mmu_page_role role;
 	struct hlist_head *sp_list;
-	unsigned quadrant;
 	struct kvm_mmu_page *sp;
 	int collisions = 0;
 	LIST_HEAD(invalid_list);
 
-	role = vcpu->arch.mmu->mmu_role.base;
-	role.level = level;
-	role.direct = direct;
-	role.access = access;
-	if (role.has_4_byte_gpte) {
-		quadrant = gaddr >> (PAGE_SHIFT + (PT64_PT_BITS * level));
-		quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
-		role.quadrant = quadrant;
-	}
-
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
 		if (sp->gfn != gfn) {
@@ -2068,7 +2052,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			 * Unsync pages must not be left as is, because the new
 			 * upper-level page will be write-protected.
 			 */
-			if (level > PG_LEVEL_4K && sp->unsync)
+			if (role.level > PG_LEVEL_4K && sp->unsync)
 				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
 							 &invalid_list);
 			continue;
@@ -2107,14 +2091,14 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, direct);
+	sp = kvm_mmu_alloc_page(vcpu, role.direct);
 
 	sp->gfn = gfn;
 	sp->role = role;
 	hlist_add_head(&sp->hash_link, sp_list);
-	if (!direct) {
+	if (!role.direct) {
 		account_shadowed(vcpu->kvm, sp);
-		if (level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
+		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 	}
 	trace_kvm_mmu_get_page(sp, true);
@@ -2126,6 +2110,51 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, u32 access)
+{
+	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
+	union kvm_mmu_page_role role;
+
+	role = parent_sp->role;
+	role.level--;
+	role.access = access;
+	role.direct = direct;
+
+	/*
+	 * If the guest has 4-byte PTEs then that means it's using 32-bit,
+	 * 2-level, non-PAE paging. KVM shadows such guests using 4 PAE page
+	 * directories, each mapping 1/4 of the guest's linear address space
+	 * (1GiB). The shadow pages for those 4 page directories are
+	 * pre-allocated and assigned a separate quadrant in their role.
+	 *
+	 * Since we are allocating a child shadow page and there are only 2
+	 * levels, this must be a PG_LEVEL_4K shadow page. Here the quadrant
+	 * will either be 0 or 1 because it maps 1/2 of the address space mapped
+	 * by the guest's PG_LEVEL_4K page table (or 4MiB huge page) that it
+	 * is shadowing. In this case, the quadrant can be derived by the index
+	 * of the SPTE that points to the new child shadow page in the page
+	 * directory (parent_sp). Specifically, every 2 SPTEs in parent_sp
+	 * shadow one half of a guest's page table (or 4MiB huge page) so the
+	 * quadrant is just the parity of the index of the SPTE.
+	 */
+	if (role.has_4_byte_gpte) {
+		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
+		role.quadrant = (sptep - parent_sp->spt) % 2;
+	}
+
+	return role;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
+						 u64 *sptep, gfn_t gfn,
+						 bool direct, u32 access)
+{
+	union kvm_mmu_page_role role;
+
+	role = kvm_mmu_child_role(sptep, direct, access);
+	return kvm_mmu_get_page(vcpu, gfn, role);
+}
+
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
 					struct kvm_vcpu *vcpu, hpa_t root,
 					u64 addr)
@@ -2930,8 +2959,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		if (is_shadow_present_pte(*it.sptep))
 			continue;
 
-		sp = kvm_mmu_get_page(vcpu, base_gfn, it.addr,
-				      it.level - 1, true, ACC_ALL);
+		sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn, true, ACC_ALL);
 
 		link_shadow_page(vcpu, it.sptep, sp);
 		if (fault->is_tdp && fault->huge_page_disallowed &&
@@ -3313,12 +3341,21 @@ static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
 	return ret;
 }
 
-static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, gva_t gva,
+static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
 			    u8 level, bool direct)
 {
+	union kvm_mmu_page_role role;
 	struct kvm_mmu_page *sp;
 
-	sp = kvm_mmu_get_page(vcpu, gfn, gva, level, direct, ACC_ALL);
+	role = vcpu->arch.mmu->mmu_role.base;
+	role.level = level;
+	role.direct = direct;
+	role.access = ACC_ALL;
+
+	if (role.has_4_byte_gpte)
+		role.quadrant = quadrant;
+
+	sp = kvm_mmu_get_page(vcpu, gfn, role);
 	++sp->root_count;
 
 	return __pa(sp->spt);
@@ -3352,8 +3389,8 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 		for (i = 0; i < 4; ++i) {
 			WARN_ON_ONCE(IS_VALID_PAE_ROOT(mmu->pae_root[i]));
 
-			root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT),
-					      i << 30, PT32_ROOT_LEVEL, true);
+			root = mmu_alloc_root(vcpu, i << (30 - PAGE_SHIFT), i,
+					      PT32_ROOT_LEVEL, true);
 			mmu->pae_root[i] = root | PT_PRESENT_MASK |
 					   shadow_me_mask;
 		}
@@ -3522,8 +3559,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 			root_gfn = pdptrs[i] >> PAGE_SHIFT;
 		}
 
-		root = mmu_alloc_root(vcpu, root_gfn, i << 30,
-				      PT32_ROOT_LEVEL, false);
+		root = mmu_alloc_root(vcpu, root_gfn, i, PT32_ROOT_LEVEL, false);
 		mmu->pae_root[i] = root | pm_mask;
 	}
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 8621188b46df..729394de2658 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -683,8 +683,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		if (!is_shadow_present_pte(*it.sptep)) {
 			table_gfn = gw->table_gfn[it.level - 2];
 			access = gw->pt_access[it.level - 2];
-			sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
-					      it.level-1, false, access);
+			sp = kvm_mmu_get_child_sp(vcpu, it.sptep, table_gfn,
+						  false, access);
+
 			/*
 			 * We must synchronize the pagetable before linking it
 			 * because the guest doesn't need to flush tlb when
@@ -740,8 +741,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		drop_large_spte(vcpu, it.sptep);
 
 		if (!is_shadow_present_pte(*it.sptep)) {
-			sp = kvm_mmu_get_page(vcpu, base_gfn, fault->addr,
-					      it.level - 1, true, direct_access);
+			sp = kvm_mmu_get_child_sp(vcpu, it.sptep, base_gfn,
						  true, direct_access);
 			link_shadow_page(vcpu, it.sptep, sp);
 			if (fault->huge_page_disallowed &&
 			    fault->req_level >= it.level)
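
A worked illustration of the quadrant derivation above: with 4-byte
guest PTEs, a guest page table has 1024 entries and maps 4MiB, while a
shadow page holds 512 8-byte SPTEs and maps 2MiB, so two consecutive
SPTEs in the parent shadow page cover the two halves (quadrants) of one
guest table. The snippet below is a standalone sketch with an invented
helper name (child_quadrant), not kernel code; it only mirrors the
(sptep - parent_sp->spt) % 2 computation in kvm_mmu_child_role().

#include <stdio.h>

/*
 * Toy model: every two SPTEs in a PAE shadow page directory point at
 * the two halves (quadrant 0 and quadrant 1) of a single guest page
 * table, so the child's quadrant is the parity of the parent SPTE index.
 */
static unsigned int child_quadrant(unsigned int parent_spte_index)
{
	return parent_spte_index % 2;
}

int main(void)
{
	for (unsigned int i = 0; i < 6; i++)
		printf("parent SPTE index %u -> child quadrant %u\n",
		       i, child_quadrant(i));
	return 0;
}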

From patchwork Fri Apr 1 17:55:35 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:35 +0000
Subject: [PATCH v3 04/23] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions
Message-Id: <20220401175554.1931568-5-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

Decompose kvm_mmu_get_page() into separate helper functions to increase
readability and prepare for allocating shadow pages without a vcpu
pointer.

Specifically, pull the guts of kvm_mmu_get_page() into 3 helper
functions:

__kvm_mmu_find_shadow_page() -
  Walks the page hash checking for any existing mmu pages that match
  the given gfn and role. Does not attempt to synchronize the page if
  it is unsync.

kvm_mmu_find_shadow_page() -
  Wraps __kvm_mmu_find_shadow_page() and handles syncing if necessary.

kvm_mmu_new_shadow_page() -
  Allocates and initializes an entirely new kvm_mmu_page. This
  currently requires a vcpu pointer for allocation and looking up the
  memslot, but that will be removed in a future commit.

Note, kvm_mmu_new_shadow_page() is temporary and will be removed in a
subsequent commit. The name uses "new" rather than the more typical
"alloc" to avoid clashing with the existing kvm_mmu_alloc_page().

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c         | 124 +++++++++++++++++++++++----------
 arch/x86/kvm/mmu/paging_tmpl.h |   5 +-
 arch/x86/kvm/mmu/spte.c        |   5 +-
 3 files changed, 94 insertions(+), 40 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8253d68cc30b..8fdddd25029d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,16 +2027,25 @@ static void clear_sp_write_flooding_count(u64 *spte)
 	__clear_sp_write_flooding_count(sptep_to_sp(spte));
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-					     union kvm_mmu_page_role role)
+/*
+ * Searches for an existing SP for the given gfn and role. Makes no attempt to
+ * sync the SP if it is marked unsync.
+ *
+ * If creating an upper-level page table, zaps unsynced pages for the same
+ * gfn and adds them to the invalid_list. It's the callers responsibility
+ * to call kvm_mmu_commit_zap_page() on invalid_list.
+ */
+static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
+						       gfn_t gfn,
+						       union kvm_mmu_page_role role,
+						       struct list_head *invalid_list)
 {
 	struct hlist_head *sp_list;
 	struct kvm_mmu_page *sp;
 	int collisions = 0;
-	LIST_HEAD(invalid_list);
 
-	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
-	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
+	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+	for_each_valid_sp(kvm, sp, sp_list) {
 		if (sp->gfn != gfn) {
 			collisions++;
 			continue;
@@ -2053,60 +2062,103 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 			 * Unsync pages must not be left as is, because the new
 			 * upper-level page will be write-protected.
 			 */
 			if (role.level > PG_LEVEL_4K && sp->unsync)
-				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
-							 &invalid_list);
+				kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
 			continue;
 		}
 
-		/* unsync and write-flooding only apply to indirect SPs. */
-		if (sp->role.direct)
-			goto trace_get_page;
+		/* Write-flooding is only tracked for indirect SPs. */
+		if (!sp->role.direct)
+			__clear_sp_write_flooding_count(sp);
 
-		if (sp->unsync) {
-			/*
-			 * The page is good, but is stale. kvm_sync_page does
-			 * get the latest guest state, but (unlike mmu_unsync_children)
-			 * it doesn't write-protect the page or mark it synchronized!
-			 * This way the validity of the mapping is ensured, but the
-			 * overhead of write protection is not incurred until the
-			 * guest invalidates the TLB mapping. This allows multiple
-			 * SPs for a single gfn to be unsync.
-			 *
-			 * If the sync fails, the page is zapped. If so, break
-			 * in order to rebuild it.
-			 */
-			if (!kvm_sync_page(vcpu, sp, &invalid_list))
-				break;
+		goto out;
+	}
+
+	sp = NULL;
+
+out:
+	if (collisions > kvm->stat.max_mmu_page_hash_collisions)
+		kvm->stat.max_mmu_page_hash_collisions = collisions;
+
+	return sp;
+}
+
+/*
+ * Looks up an existing SP for the given gfn and role if one exists. The
+ * return SP is guaranteed to be synced.
+ */
+static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
+						     gfn_t gfn,
+						     union kvm_mmu_page_role role)
+{
+	struct kvm_mmu_page *sp;
+	LIST_HEAD(invalid_list);
+
+	sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
+
+	if (sp && sp->unsync) {
+		/*
+		 * The page is good, but is stale. kvm_sync_page does
+		 * get the latest guest state, but (unlike mmu_unsync_children)
+		 * it doesn't write-protect the page or mark it synchronized!
+		 * This way the validity of the mapping is ensured, but the
+		 * overhead of write protection is not incurred until the
+		 * guest invalidates the TLB mapping. This allows multiple
+		 * SPs for a single gfn to be unsync.
+		 *
+		 * If the sync fails, the page is zapped and added to the
+		 * invalid_list.
+		 */
+		if (kvm_sync_page(vcpu, sp, &invalid_list)) {
 			WARN_ON(!list_empty(&invalid_list));
 			kvm_flush_remote_tlbs(vcpu->kvm);
+		} else {
+			sp = NULL;
 		}
+	}
 
-		__clear_sp_write_flooding_count(sp);
+	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	return sp;
+}
 
-trace_get_page:
-		trace_kvm_mmu_get_page(sp, false);
-		goto out;
-	}
+static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
+{
+	struct kvm_mmu_page *sp;
+	struct hlist_head *sp_list;
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
 	sp = kvm_mmu_alloc_page(vcpu, role.direct);
-
 	sp->gfn = gfn;
 	sp->role = role;
+
+	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
+
 	if (!role.direct) {
 		account_shadowed(vcpu->kvm, sp);
 		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
 			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 	}
-	trace_kvm_mmu_get_page(sp, true);
 
-out:
-	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	return sp;
+}
 
-	if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
-		vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+					     union kvm_mmu_page_role role)
+{
+	struct kvm_mmu_page *sp;
+	bool created = false;
+
+	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
+	if (!sp) {
+		created = true;
+		sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+	}
+
+	trace_kvm_mmu_get_page(sp, created);
 	return sp;
 }
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 729394de2658..db63b5377465 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -692,8 +692,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 			 * the gpte is changed from non-present to present.
 			 * Otherwise, the guest may use the wrong mapping.
 			 *
-			 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
-			 * synchronized it transiently via kvm_sync_page().
+			 * For PG_LEVEL_4K, kvm_mmu_get_existing_sp() has
+			 * already synchronized it transiently via
+			 * kvm_sync_page().
 			 *
 			 * For higher level pagetable, we synchronize it via
 			 * the slower mmu_sync_children().  If it needs to
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..d10189d9c877 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -150,8 +150,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
 		 * lookup is unnecessary (and expensive). Write protection
-		 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-		 * Same reasoning can be applied to dirty page accounting.
+		 * is responsibility of kvm_mmu_create_sp() and
+		 * kvm_mmu_sync_roots(). Same reasoning can be applied to dirty
+		 * page accounting.
 		 */
 		if (is_writable_pte(old_spte))
 			goto out;
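
The decomposition above boils down to a find-or-create pattern in which
only the outer wrapper knows whether a page was created, which is what
the trace point consumes. A rough standalone sketch of that shape
follows; the types and helpers (struct sp, find_sp, new_sp, get_sp) are
invented for illustration and stand in for the hash-table walk and
vCPU-cache allocation in the real code.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct sp {
	unsigned long gfn;
	struct sp *next;
};

static struct sp *table;	/* toy stand-in for kvm->arch.mmu_page_hash */

static struct sp *find_sp(unsigned long gfn)
{
	for (struct sp *s = table; s; s = s->next)
		if (s->gfn == gfn)
			return s;
	return NULL;		/* the lookup helper never allocates */
}

static struct sp *new_sp(unsigned long gfn)
{
	struct sp *s = calloc(1, sizeof(*s));

	if (!s)
		abort();
	s->gfn = gfn;
	s->next = table;
	table = s;
	return s;
}

static struct sp *get_sp(unsigned long gfn, bool *created)
{
	struct sp *s = find_sp(gfn);

	*created = !s;
	if (!s)
		s = new_sp(gfn);
	return s;		/* caller traces with *created, like the patch */
}

int main(void)
{
	bool created;

	get_sp(42, &created);
	printf("first lookup:  created=%d\n", created);
	get_sp(42, &created);
	printf("second lookup: created=%d\n", created);
	return 0;
}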

From patchwork Fri Apr 1 17:55:36 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:36 +0000
Subject: [PATCH v3 05/23] KVM: x86/mmu: Rename shadow MMU functions that deal with shadow pages
Message-Id: <20220401175554.1931568-6-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

Rename 3 functions:

  kvm_mmu_get_page()   -> kvm_mmu_get_shadow_page()
  kvm_mmu_alloc_page() -> kvm_mmu_alloc_shadow_page()
  kvm_mmu_free_page()  -> kvm_mmu_free_shadow_page()

This change makes it clear that these functions deal with shadow pages
rather than struct pages. Prefer "shadow_page" over the shorter "sp"
since these are core routines.

Acked-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8fdddd25029d..dc1825de0752 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1668,7 +1668,7 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
 {
 	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
@@ -1706,7 +1706,8 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
 	mmu_spte_clear_no_track(parent_pte);
 }
 
-static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, bool direct)
+static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
+						      bool direct)
 {
 	struct kvm_mmu_page *sp;
 
@@ -2130,7 +2131,7 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_page(vcpu, role.direct);
+	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
 
 	sp->gfn = gfn;
 	sp->role = role;
@@ -2146,8 +2147,9 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
-					     union kvm_mmu_page_role role)
+static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
 {
 	struct kvm_mmu_page *sp;
 	bool created = false;
@@ -2204,7 +2206,7 @@ static struct kvm_mmu_page *kvm_mmu_get_child_sp(struct kvm_vcpu *vcpu,
 	union kvm_mmu_page_role role;
 
 	role = kvm_mmu_child_role(sptep, direct, access);
-	return kvm_mmu_get_page(vcpu, gfn, role);
+	return kvm_mmu_get_shadow_page(vcpu, gfn, role);
 }
 
 static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterator,
@@ -2480,7 +2482,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
-		kvm_mmu_free_page(sp);
+		kvm_mmu_free_shadow_page(sp);
 	}
 }
 
@@ -3407,7 +3409,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant,
 	if (role.has_4_byte_gpte)
 		role.quadrant = quadrant;
 
-	sp = kvm_mmu_get_page(vcpu, gfn, role);
+	sp = kvm_mmu_get_shadow_page(vcpu, gfn, role);
 	++sp->root_count;
 
 	return __pa(sp->spt);

From patchwork Fri Apr 1 17:55:37 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:37 +0000
Subject: [PATCH v3 06/23] KVM: x86/mmu: Pass memslot to kvm_mmu_new_shadow_page()
Message-Id: <20220401175554.1931568-7-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

Passing the memslot to kvm_mmu_new_shadow_page() avoids the need for
the vCPU pointer when write-protecting indirect 4k shadow pages. This
moves us closer to being able to create new shadow pages during VM
ioctls for eager page splitting, where there is no vCPU pointer.

This change does not negatively impact "Populate memory time" for ept=Y
or ept=N configurations since kvm_vcpu_gfn_to_memslot() caches the last
used slot. So even though we now look up the slot more often, it is a
very cheap check.

Opportunistically move the code to write-protect GFNs shadowed by
PG_LEVEL_4K shadow pages into account_shadowed() to reduce indentation
and consolidate the code. This also eliminates a memslot lookup.

No functional change intended.

Reviewed-by: Peter Xu
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index dc1825de0752..abfb3e5d1372 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -793,16 +793,14 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn)
 	update_gfn_disallow_lpage_count(slot, gfn, -1);
 }
 
-static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+static void account_shadowed(struct kvm *kvm,
+			     struct kvm_memory_slot *slot,
+			     struct kvm_mmu_page *sp)
 {
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *slot;
 	gfn_t gfn;
 
 	kvm->arch.indirect_shadow_pages++;
 	gfn = sp->gfn;
-	slots = kvm_memslots_for_spte_role(kvm, sp->role);
-	slot = __gfn_to_memslot(slots, gfn);
 
 	/* the non-leaf shadow pages are keeping readonly. */
 	if (sp->role.level > PG_LEVEL_4K)
@@ -810,6 +808,9 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 						    KVM_PAGE_TRACK_WRITE);
 
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
+
+	if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
+		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
 }
 
 void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -2123,6 +2124,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 }
 
 static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+						    struct kvm_memory_slot *slot,
 						    gfn_t gfn,
 						    union kvm_mmu_page_role role)
 {
@@ -2138,11 +2140,8 @@ static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
 	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
 
-	if (!role.direct) {
-		account_shadowed(vcpu->kvm, sp);
-		if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
-			kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
-	}
+	if (!role.direct)
+		account_shadowed(vcpu->kvm, slot, sp);
 
 	return sp;
 }
@@ -2151,13 +2150,15 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 						    gfn_t gfn,
 						    union kvm_mmu_page_role role)
 {
+	struct kvm_memory_slot *slot;
 	struct kvm_mmu_page *sp;
 	bool created = false;
 
 	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
 	if (!sp) {
 		created = true;
-		sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+		sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role);
 	}
 
 	trace_kvm_mmu_get_page(sp, created);
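
The performance claim above rests on the last-used-slot cache in
kvm_vcpu_gfn_to_memslot(): repeated lookups for nearby gfns hit the
cached slot and never walk the slot list. Below is a standalone toy
model of that caching idea; struct memslot, gfn_to_memslot and the slot
layout are invented for the example and are not KVM's real structures.

#include <stddef.h>
#include <stdio.h>

struct memslot {
	unsigned long base_gfn;
	unsigned long npages;
};

static struct memslot slots[] = { { 0, 1024 }, { 4096, 1024 } };
static struct memslot *last_used = &slots[0];

static struct memslot *gfn_to_memslot(unsigned long gfn)
{
	/* Fast path: most faults land in the same slot as the previous one. */
	if (gfn - last_used->base_gfn < last_used->npages)
		return last_used;

	for (size_t i = 0; i < sizeof(slots) / sizeof(slots[0]); i++) {
		if (gfn - slots[i].base_gfn < slots[i].npages) {
			last_used = &slots[i];	/* remember for next time */
			return last_used;
		}
	}
	return NULL;
}

int main(void)
{
	printf("gfn 10   -> slot base %lu\n", gfn_to_memslot(10)->base_gfn);
	printf("gfn 11   -> slot base %lu (cached)\n", gfn_to_memslot(11)->base_gfn);
	printf("gfn 4100 -> slot base %lu\n", gfn_to_memslot(4100)->base_gfn);
	return 0;
}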

From patchwork Fri Apr 1 17:55:38 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:38 +0000
Subject: [PATCH v3 07/23] KVM: x86/mmu: Separate shadow MMU sp allocation from initialization
Message-Id: <20220401175554.1931568-8-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

Separate the code that allocates a new shadow page from the vCPU caches
from the code that initializes it. This is in preparation for creating
new shadow pages from VM ioctls for eager page splitting, where we do
not have access to the vCPU caches.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 38 ++++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index abfb3e5d1372..421fcbc97f9e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1716,16 +1716,9 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 	if (!direct)
 		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
-	/*
-	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
-	 * depends on valid pages being added to the head of the list. See
-	 * comments in kvm_zap_obsolete_pages().
-	 */
-	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
-	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
 	return sp;
 }
 
@@ -2123,27 +2116,31 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
-						    struct kvm_memory_slot *slot,
-						    gfn_t gfn,
-						    union kvm_mmu_page_role role)
+static void init_shadow_page(struct kvm *kvm, struct kvm_mmu_page *sp,
+			     struct kvm_memory_slot *slot, gfn_t gfn,
+			     union kvm_mmu_page_role role)
 {
-	struct kvm_mmu_page *sp;
 	struct hlist_head *sp_list;
 
-	++vcpu->kvm->stat.mmu_cache_miss;
+	++kvm->stat.mmu_cache_miss;
 
-	sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
 	sp->gfn = gfn;
 	sp->role = role;
+	sp->mmu_valid_gen = kvm->arch.mmu_valid_gen;
 
-	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+	/*
+	 * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages()
+	 * depends on valid pages being added to the head of the list. See
+	 * comments in kvm_zap_obsolete_pages().
+	 */
+	list_add(&sp->link, &kvm->arch.active_mmu_pages);
+	kvm_mod_used_mmu_pages(kvm, 1);
+
+	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 	hlist_add_head(&sp->hash_link, sp_list);
 
 	if (!role.direct)
-		account_shadowed(vcpu->kvm, slot, sp);
-
-	return sp;
+		account_shadowed(kvm, slot, sp);
 }
 
 static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
@@ -2158,7 +2155,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	if (!sp) {
 		created = true;
 		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
-		sp = kvm_mmu_new_shadow_page(vcpu, slot, gfn, role);
+		sp = kvm_mmu_alloc_shadow_page(vcpu, role.direct);
+		init_shadow_page(vcpu->kvm, sp, slot, gfn, role);
 	}
 
 	trace_kvm_mmu_get_page(sp, created);
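
The split above is easiest to see stripped of the KVM details:
allocation only needs a memory source, while initialization is the part
that touches VM-wide state, so a later caller that has no vCPU (e.g. a
VM ioctl doing eager splitting) can bring its own allocator and share
the init path. The toy types below (struct toy_kvm, struct toy_sp,
alloc_sp, init_sp) are invented for this sketch and are not the kernel's.

#include <stdio.h>
#include <stdlib.h>

struct toy_kvm { unsigned long nr_shadow_pages; };
struct toy_sp  { unsigned long gfn; unsigned long *spt; };

/* Allocation: no VM-wide state is touched here. */
static struct toy_sp *alloc_sp(void)
{
	struct toy_sp *sp = calloc(1, sizeof(*sp));

	if (sp)
		sp->spt = calloc(512, sizeof(*sp->spt));
	return sp;
}

/* Initialization: all the VM-wide bookkeeping lives here. */
static void init_sp(struct toy_kvm *kvm, struct toy_sp *sp, unsigned long gfn)
{
	sp->gfn = gfn;
	kvm->nr_shadow_pages++;
}

int main(void)
{
	struct toy_kvm kvm = { 0 };
	struct toy_sp *sp = alloc_sp();

	if (!sp || !sp->spt)
		return 1;
	init_sp(&kvm, sp, 0x1000);
	printf("shadow pages accounted: %lu\n", kvm.nr_shadow_pages);
	free(sp->spt);
	free(sp);
	return 0;
}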

From patchwork Fri Apr 1 17:55:39 2022
From: David Matlack
Date: Fri, 1 Apr 2022 17:55:39 +0000
Subject: [PATCH v3 08/23] KVM: x86/mmu: Link spt to sp during allocation
Message-Id: <20220401175554.1931568-9-dmatlack@google.com>
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>

Link the shadow page table to the sp (via set_page_private()) during
allocation rather than initialization. This is a more logical place to
do it because allocation time is also where we do the reverse link
(setting sp->spt).

This creates one extra call to set_page_private(), but having multiple
calls to set_page_private() is unavoidable anyway. We either do
set_page_private() during allocation, which requires 1 per allocation
function, or we do it during initialization, which requires 1 per
initialization function.

No functional change intended.
Suggested-by: Ben Gardon Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index b3b6426725d4..17354e55735f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -274,6 +274,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + set_page_private(virt_to_page(sp->spt), (unsigned long)sp); return sp; } @@ -281,8 +282,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn, union kvm_mmu_page_role role) { - set_page_private(virt_to_page(sp->spt), (unsigned long)sp); - sp->role = role; sp->gfn = gfn; sp->ptep = sptep; @@ -1435,6 +1434,8 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp) return NULL; } + set_page_private(virt_to_page(sp->spt), (unsigned long)sp); + return sp; } From patchwork Fri Apr 1 17:55:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 596D5C4332F for ; Fri, 1 Apr 2022 17:56:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350669AbiDAR6I (ORCPT ); Fri, 1 Apr 2022 13:58:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350671AbiDAR6G (ORCPT ); Fri, 1 Apr 2022 13:58:06 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0258215469 for ; Fri, 1 Apr 2022 10:56:16 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id d6-20020a655886000000b00398b858cdd3so2016774pgu.7 for ; Fri, 01 Apr 2022 10:56:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=nTmuYtTREbDuE8X5WKQrckgnKI3yoAyUOjS6IPz+d8s=; b=r1irV2cXU5balcgpJ+77g5pazDurBLKGlZ2Dfmc5U8VLb3YkAgu0DV8Hm8rTD/uVNR 4BFbWqVJzVzfgcSg/Qy3n1yW1bItPLIGG6Y2QN/OffvUeg0i997msfBloThaBEeVaIdW ivSbp9ZOUCJQdZ7sfRmYuChNnZ/RoMCSqHoW+ZboctWTFgAJ6mMmwDVoWQQjekBtaC6s /TbZn/Gm9YkE2eUSDw5p4bluh48WV3voiXn/aHalocyfRAsJ1Rc9LArz6w0hkjdKMpj1 smEHv0Tc0QDl3l0mZQTpGNMG4BoOnlIVQ1nSOcnOV8uHNA3V4tyFMQFIBqgqnIHvnqzG TU8Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=nTmuYtTREbDuE8X5WKQrckgnKI3yoAyUOjS6IPz+d8s=; b=cMvX1BsTSnDA7xFCK7tGAmoR23xboT5/TOpoo5HFJQAW7NfYPaymQbsd1+W5ZF+J50 mtImlAK3IVaTZ8D5VtXI7BJEhL4Vx6SYmxlM/qjKDTQgAFQjqByYVebi0Bfp+7xBMAgV tOkDrmjttSmNDSrZPbhdWNQJWu559hWqNPyN8p5hinzwg2HU5plaweoaJY+6jZuLh8tw ZO5K5X100FydtTjSR7X2xNkIjsrtEOmou6h+G7HPlBeEfe1nBLiQLICWDKHQtN2SawQl q61eM/aKtN2MHlrNQrzJLkMxSERxfSVv96Id2HFie2pqY1HNg/mpOcZCx1RlN2RMNn2p 1+8Q== X-Gm-Message-State: AOAM532T0Sy/2pbvQGMkOEDwRv7Ur9Si7wCjeMJUnd8WiSLc+2Ztqf8d mfBFzd7oMQzCq2KX2Aw3UPAxexqGXhIZKg== 
X-Google-Smtp-Source: ABdhPJwn8ro+5iboEj4iBbUxK1MDSaRBAe0CkNK+ZysloOmbIZqhBQ1y1ArIceThcmiHYQpxZuwA5eCXoSY58Q== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a05:6a00:1acb:b0:4fa:de8e:da9d with SMTP id f11-20020a056a001acb00b004fade8eda9dmr12200024pfv.42.1648835776102; Fri, 01 Apr 2022 10:56:16 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:40 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-10-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 09/23] KVM: x86/mmu: Move huge page split sp allocation code to mmu.c From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Move the code that allocates a new shadow page for splitting huge pages into mmu.c. Currently this code is only used by the TDP MMU but it will be reused in subsequent commits to also split huge pages mapped by the shadow MMU. Move the GFP flags calculation down into the allocation code so that it does not have to be duplicated when the shadow MMU needs to start allocating SPs for splitting. Preemptively split out the gfp flags calculation to a separate helpers for use in a subsequent commit that adds support for eager page splitting to the shadow MMU. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 37 +++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/mmu_internal.h | 2 ++ arch/x86/kvm/mmu/tdp_mmu.c | 34 ++---------------------------- 3 files changed, 41 insertions(+), 32 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 421fcbc97f9e..657c2a906c12 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1722,6 +1722,43 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, return sp; } +static inline gfp_t gfp_flags_for_split(bool locked) +{ + /* + * If under the MMU lock, use GFP_NOWAIT to avoid direct reclaim (which + * is slow) and to avoid making any filesystem callbacks (which can end + * up invoking KVM MMU notifiers, resulting in a deadlock). + */ + return (locked ? GFP_NOWAIT : GFP_KERNEL) | __GFP_ACCOUNT; +} + +/* + * Allocate a new shadow page, potentially while holding the MMU lock. + * + * Huge page splitting always uses direct shadow pages since the huge page is + * being mapped directly with a lower level page table. Thus there's no need to + * allocate the gfns array. 
+ */ +struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked) +{ + gfp_t gfp = gfp_flags_for_split(locked) | __GFP_ZERO; + struct kvm_mmu_page *sp; + + sp = kmem_cache_alloc(mmu_page_header_cache, gfp); + if (!sp) + return NULL; + + sp->spt = (void *)__get_free_page(gfp); + if (!sp->spt) { + kmem_cache_free(mmu_page_header_cache, sp); + return NULL; + } + + set_page_private(virt_to_page(sp->spt), (unsigned long)sp); + + return sp; +} + static void mark_unsync(u64 *spte); static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp) { diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 1bff453f7cbe..a0648e7ddd33 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -171,4 +171,6 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 17354e55735f..34e581bcaaf6 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1418,43 +1418,13 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, return spte_set; } -static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp) -{ - struct kvm_mmu_page *sp; - - gfp |= __GFP_ZERO; - - sp = kmem_cache_alloc(mmu_page_header_cache, gfp); - if (!sp) - return NULL; - - sp->spt = (void *)__get_free_page(gfp); - if (!sp->spt) { - kmem_cache_free(mmu_page_header_cache, sp); - return NULL; - } - - set_page_private(virt_to_page(sp->spt), (unsigned long)sp); - - return sp; -} - static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm, struct tdp_iter *iter, bool shared) { struct kvm_mmu_page *sp; - /* - * Since we are allocating while under the MMU lock we have to be - * careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct - * reclaim and to avoid making any filesystem callbacks (which can end - * up invoking KVM MMU notifiers, resulting in a deadlock). - * - * If this allocation fails we drop the lock and retry with reclaim - * allowed. 
- */ - sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT); + sp = kvm_mmu_alloc_direct_sp_for_split(true); if (sp) return sp; @@ -1466,7 +1436,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm, write_unlock(&kvm->mmu_lock); iter->yielded = true; - sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT); + sp = kvm_mmu_alloc_direct_sp_for_split(false); if (shared) read_lock(&kvm->mmu_lock); From patchwork Fri Apr 1 17:55:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59BC5C433EF for ; Fri, 1 Apr 2022 17:56:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350703AbiDAR6L (ORCPT ); Fri, 1 Apr 2022 13:58:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350684AbiDAR6I (ORCPT ); Fri, 1 Apr 2022 13:58:08 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4CC6124B5F2 for ; Fri, 1 Apr 2022 10:56:18 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id e15-20020a17090ab38f00b001c9989ae56cso1919682pjr.9 for ; Fri, 01 Apr 2022 10:56:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9irR3brB5K4k6kskWzWAbqG40+EEbVRaPIvCDPrQqZc=; b=TTHccN07Kc2+dKX/CQM4O229WOCMUEIEZjrDqAkCum2pWRxXyep/RtD21aVHTzWJ0u 5Ur/7qOrscd0hKtzG7s1EiErU9Vn5N7nHtLcCGMtzTi3YMoFLmHW94dw6Gq8G0YshxYY MXs/f+M/+N5rlpf0p71rU9X8qWgGkVAjsH0i1IRyHKq8aV01lhAROqIUxZdIBEdQ6Fn7 o/5H/YoPwviW5zcmK5mwsLjoCqpLLJwdhb4vCxGtiGEedukX7RGb51wslmYlEJfKZvrb xua3zBzP8zwd6jU0yexNhL/1HTixfROGNf25yDDwoHh2EgdS6uv8nRoFcKEn2F6VBDv0 gMBw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=9irR3brB5K4k6kskWzWAbqG40+EEbVRaPIvCDPrQqZc=; b=xt109vQTSKJiUcKdydDlEQjyqfSlAv7m5cRyo3BLKu6sao9OWf4Fs/mRsx+3zvNjjY 0aA2efupZarPbE7cXnDVeJk6CU6rNuUgrYNVkj0ZsdW5DxBxJSTGL0kUkjlv5Z7EkgeR z4UsZ/a1dr8xFr9McMdaeEHY7vr20/Zj8mz+6WBhun6yIdXtlIAvN6+o58SiqMjzlG/e +gRJKMdmc79L1cj9GI0kPYB1bN3UOGQeuGlz65trF8VXYU5Nuxmr/3coliijq/miWhbZ uNIezRnCqlaiZ+PZoIADb5waYEtHcWxNirfShV+XX+SYGQPMhVuZbcKgvWsEWtAJIBqb 7M8g== X-Gm-Message-State: AOAM531GHL/NWmnGkbMRKeLwq/u7gZwmC+RpTo2QnQrRHFS+qYksx+Lb /v+iay0A750CMIXANv4E+XpoLawQYCU82w== X-Google-Smtp-Source: ABdhPJxg+/Q/17NTgwHdrbJN0RELEFI4TA91YpU0RNqlmFBBNYRSy1Zgxz7Lu9EEyB4Z0QtTYObcl0Q0cVZDDA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90b:118c:b0:1ca:307:9b50 with SMTP id gk12-20020a17090b118c00b001ca03079b50mr13195653pjb.26.1648835777771; Fri, 01 Apr 2022 10:56:17 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:41 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-11-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 
10/23] KVM: x86/mmu: Use common code to free kvm_mmu_page structs From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use a common function to free kvm_mmu_page structs in the TDP MMU and the shadow MMU. This reduces the amount of duplicate code and is needed in subsequent commits that allocate and free kvm_mmu_pages for eager page splitting. Keep tdp_mmu_free_sp() as a wrapper to mirror tdp_mmu_alloc_sp(). No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 8 ++++---- arch/x86/kvm/mmu/mmu_internal.h | 2 ++ arch/x86/kvm/mmu/tdp_mmu.c | 3 +-- 3 files changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 657c2a906c12..27996fdb0e7e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1669,11 +1669,8 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr) percpu_counter_add(&kvm_total_used_mmu_pages, nr); } -static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) +void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) { - MMU_WARN_ON(!is_empty_shadow_page(sp->spt)); - hlist_del(&sp->hash_link); - list_del(&sp->link); free_page((unsigned long)sp->spt); if (!sp->role.direct) free_page((unsigned long)sp->gfns); @@ -2518,6 +2515,9 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, list_for_each_entry_safe(sp, nsp, invalid_list, link) { WARN_ON(!sp->role.invalid || sp->root_count); + MMU_WARN_ON(!is_empty_shadow_page(sp->spt)); + hlist_del(&sp->hash_link); + list_del(&sp->link); kvm_mmu_free_shadow_page(sp); } } diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index a0648e7ddd33..5f91e4d07a95 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -173,4 +173,6 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked); +void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 34e581bcaaf6..8b00c868405b 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -64,8 +64,7 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) static void tdp_mmu_free_sp(struct kvm_mmu_page *sp) { - free_page((unsigned long)sp->spt); - kmem_cache_free(mmu_page_header_cache, sp); + kvm_mmu_free_shadow_page(sp); } /* From patchwork Fri Apr 1 17:55:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798564 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63A8BC433F5 for ; Fri, 1 Apr 2022 17:56:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350718AbiDAR6M (ORCPT ); Fri, 1 Apr 
2022 13:58:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350694AbiDAR6J (ORCPT ); Fri, 1 Apr 2022 13:58:09 -0400 Received: from mail-pg1-x54a.google.com (mail-pg1-x54a.google.com [IPv6:2607:f8b0:4864:20::54a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3A351AF501 for ; Fri, 1 Apr 2022 10:56:19 -0700 (PDT) Received: by mail-pg1-x54a.google.com with SMTP id p21-20020a631e55000000b00372d919267cso2025190pgm.1 for ; Fri, 01 Apr 2022 10:56:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=W+wK5vxPR/QvP8f06+LL6qG83douEveyqaG4cfdgLaY=; b=UlIX32ER5ckYEDHWAIcjYr1jnfgt+txi+GTB7aQ575YuSQEhVBgxyCUID6hNAgokFE i76MKrGhcCti1IAqqtc8kmHq8LqNd7RHGUdhQj5N55jui7o41Er8H+t8k78BKld1GGlF 1VOlV5WFmwGA6KgQUp2Zp1tMkKdYXozagaTpFsbpYSaFotNIcx7eaEFtAXC+ZnpCROYq ABsmzYy5wYQds//B5QF75M7rFYH+fy/3noPiqdzNNeN7P/vT017vKaxWsaHrKZktu0mv qnRJyh8yDxMicnr56dYAjntxpQ2hWjBPYb0U8saUvcnyua7EEZ2oOdLzLWIpXm5jfysP Is+w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=W+wK5vxPR/QvP8f06+LL6qG83douEveyqaG4cfdgLaY=; b=fkIkfK6SkIK8eO1nBwp4Ylp1MATNIyJB0b8dOW+jYqsCIV25mSIpqRL2DVyA4rxYp8 9SQkQvkmGOHJZXJ6rz9NEz9YLTNHzvUl2Gy9T+JEtlRD/3c/VejZvxEBNEAfwrbGj6YT H6lL+Lf8eTVGvPpyc9CPZWXZl4Aer+mp0/eavwekG0TCt4L5E10QOTQ1LTpZTsmA4lTQ 6x6qjXipP14E65arK95dC7p5meRnx0FHReDjTMKyc2L5K20H0qQWLaiSG8YHOFUqoSWm xVeuGJOIkE8nu9+bm2+RUbkTP4hE3CE0vLt0XiggwgsY+/R+4gZZKaBUWvhtTBJR0mPm 3NEQ== X-Gm-Message-State: AOAM533es+tQv4TbkgPY9nbgJC0QEljL2XUMr0ArQeiDJRB6NqAYssYl ueXaxTp3Bf38NC0dTpgBAfwJsATTyKMRAw== X-Google-Smtp-Source: ABdhPJxYuSmgflitVWDuHEX61iCJV7QQcSTVJ1SS0S3w86E/w3yONDjqYny6QBsz2LrR6va1k+Ws4RIi9FnL+w== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90b:4c84:b0:1c7:7769:3cc7 with SMTP id my4-20020a17090b4c8400b001c777693cc7mr13054532pjb.73.1648835779185; Fri, 01 Apr 2022 10:56:19 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:42 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-12-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 11/23] KVM: x86/mmu: Use common code to allocate shadow pages from vCPU caches From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Now that allocating shadow pages is isolated to a helper function, use it in the TDP MMU as well. Keep tdp_mmu_alloc_sp() to avoid hard-coding direct=true in multiple places. No functional change intended. 
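For reference, the shared helper both MMUs now call has roughly the following shape at this point in the series (a sketch reconstructed from the surrounding hunks, not a verbatim copy of mmu.c): both paths draw from the same per-vCPU memory caches, and only the direct flag differs, which is why keeping tdp_mmu_alloc_sp() as a thin wrapper is sufficient.

struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct)
{
	struct kvm_mmu_page *sp;

	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
	if (!direct)
		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);

	/* Back-link installed at allocation time (see patch 08 above). */
	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);

	return sp;
}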
Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 3 +-- arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/tdp_mmu.c | 8 +------- 3 files changed, 3 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 27996fdb0e7e..37385835c399 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1704,8 +1704,7 @@ static void drop_parent_pte(struct kvm_mmu_page *sp, mmu_spte_clear_no_track(parent_pte); } -static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, - bool direct) +struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct) { struct kvm_mmu_page *sp; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 5f91e4d07a95..d4e2de5f2a6d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -173,6 +173,7 @@ void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked); +struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direct); void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp); #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 8b00c868405b..f6201b89045b 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -269,13 +269,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) { - struct kvm_mmu_page *sp; - - sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); - set_page_private(virt_to_page(sp->spt), (unsigned long)sp); - - return sp; + return kvm_mmu_alloc_shadow_page(vcpu, true); } static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, From patchwork Fri Apr 1 17:55:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7EFBBC433F5 for ; Fri, 1 Apr 2022 17:56:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350707AbiDAR6O (ORCPT ); Fri, 1 Apr 2022 13:58:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35310 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350685AbiDAR6K (ORCPT ); Fri, 1 Apr 2022 13:58:10 -0400 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3BF7B20A941 for ; Fri, 1 Apr 2022 10:56:21 -0700 (PDT) Received: by mail-pf1-x44a.google.com with SMTP id x186-20020a627cc3000000b004fa939658c5so2016222pfc.4 for ; Fri, 01 Apr 2022 10:56:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=zKDQj9FrafQ+86iVPP3K51OIfRdmfs50RkYPmE+oxOQ=; b=jwJrV9wgSOZhOdo64taCxPoODAC6XGsgJuKkwc48uKLDwCehUF0oYRs0fx/6iUYJsN bDU8lVrfYvdUg3f5cyhCck37cCyLL5/oocbLVeX/2RMocHB7whP9YgJV1BPU0FInpT8M Jey3QTtqfjXizgqaTWKrOuc0XTGX8k9DlHlIvbHGsjO2yN1YnnybWB4lTqrMuaaxskHT 
tzBNngTy7KVEqbn10arUq+QvHMRdcdxlBk6ThObz7bZ8fYOmHleUEQKDX3znsHn2k++f k/IMtVSXnTPiSX4RDt6Ql8Je3nitAyb3Jw6XHuF5ueww4Kk/F0qDai4j5R/xhVPgfIGK wEzg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=zKDQj9FrafQ+86iVPP3K51OIfRdmfs50RkYPmE+oxOQ=; b=czOW/9Xao+tlUQHFri6g/P18ziZp7cYRWm2ld26u7V0FMB843gncgWf2NNrcqQiHgm umvngh7hhdbFGbhKkw4wIDjBJwFQZW72V7qWJ3tj4gP3kr9rp98wfqSjTmiwyM4TkhSU RqUSwkK0dViweNa8Q9hSNzK53/04C2t4Is6t8sQJ3aWoncndCl8mRPqXPZSRXd0q0St9 yBEnDFdbHQ8WNjPB3Gd/+w0iNuiGwMXnjZ2deL5hpimCWRI1ODZi7SVU1o80mtRpadfg Jbs+X6m5L9/NdGwBeBI/7SeT3PwLtMKCMswYY5HWsWHt7LfOgRkBiIjfIao/FU+ZVw4P MN7g== X-Gm-Message-State: AOAM5332FZCqclNOh88D7iYCdzXU/gHJtzy5diMGLRMSHBp9WW+ltvzu 5wUm8FX+IuD2zDqaH5QUbtXQhmUSV1YlqQ== X-Google-Smtp-Source: ABdhPJxYcEKA7pbWp7jjIlDB7Ac6r83BwprafTTSyvULHdi4YCW6thoQ1uhQkF3OnBLrsgJpdsF5d76dQHZQHA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90a:7304:b0:1c6:aadc:90e5 with SMTP id m4-20020a17090a730400b001c6aadc90e5mr13237879pjk.164.1648835780708; Fri, 01 Apr 2022 10:56:20 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:43 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-13-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 12/23] KVM: x86/mmu: Pass const memslot to rmap_add() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org rmap_add() only uses the slot to call gfn_to_rmap() which takes a const memslot. No functional change intended. 
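A small illustration of why the const qualifier matters (the caller below is hypothetical, purely to show the point): code that only holds a const slot pointer can now call rmap_add() without a cast.

/* Hypothetical caller, for illustration only. */
static void add_one_rmap(struct kvm_vcpu *vcpu,
			 const struct kvm_memory_slot *slot,
			 u64 *sptep, gfn_t gfn)
{
	rmap_add(vcpu, slot, sptep, gfn);	/* OK now that slot is const */
}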
Reviewed-by: Ben Gardon Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 37385835c399..1efe161f9c02 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1596,7 +1596,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head, #define RMAP_RECYCLE_THRESHOLD 1000 -static void rmap_add(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, +static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot, u64 *spte, gfn_t gfn) { struct kvm_mmu_page *sp; From patchwork Fri Apr 1 17:55:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798566 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46071C433FE for ; Fri, 1 Apr 2022 17:56:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350712AbiDAR6T (ORCPT ); Fri, 1 Apr 2022 13:58:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35310 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350714AbiDAR6M (ORCPT ); Fri, 1 Apr 2022 13:58:12 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A22CD28D536 for ; Fri, 1 Apr 2022 10:56:22 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id fh22-20020a17090b035600b001c6a163499cso1647745pjb.2 for ; Fri, 01 Apr 2022 10:56:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=98QuPuIamNZ4VXhVyk60fd3jtdaNhZOSrUDoLns0RNg=; b=aUeZagy6oSm/W0IESWczDjmWRTSG4Npvh1AoFSuKwIbezAKth9hVIv88sZ1PVSs54P QuQzvJ5uvlUy4BZeBvhbx8V7VTbmtyOFBlVK6TO1Y5OqSJrdAWIzlesY9Y7Ef0+ulNiw etoPUBWitef9pf0CbOSZCO7DfbbKKlk5tLH4vpNPAoXH3otLyid9PS7TlLRXA2o/s6Zx 14w18b1jE0xK/QCtfi7eO05rYdM47uPTnLEBVIcuO8jOAe+Yt04t1s8B7z6YuNT78vLP UhF/qof2Hy5JcUFWeN8ucBOI/LTmnlgblwP0QMCgT702dp28RE7ENQrHUu79Ry4+dba8 IV1g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=98QuPuIamNZ4VXhVyk60fd3jtdaNhZOSrUDoLns0RNg=; b=47lf5J+RIs316zqzODshOE+nysRU9j7ciO4HS5bRPKmrJm/UenalwrNAULLp1SW4t6 5/cPCQw3eTggA6ZmKt/VAlRSUPQlJLhpiv3nE185eyo9j1PGt4S2fsZAk6gcYeKF906w LLkC0mFyiqFI/4EsUsuZQFzcUWYsJjHbnPheFsAywk1VsoVLtK1+ftSH1XMXuslu4oHA ouE4j9NZxL5XDhfhD1eWoKrR0CwNyzeRgGjasx0L25bmP7i7p3PWlfgMtt2jRLih3qav Qj4Xf1pxIWHwMok8Fw2grvGQDfqGEJx9C9TM/9XOLFJj2BCVGWc/b+zSHgiKGdUCCOB5 2oRg== X-Gm-Message-State: AOAM532FraTZFfPO+HBSnwRptUlmRI+jHzL4brU+Z8v7iznlQtoG7lD3 awiavXOcheFU8UUFmYY7MYeH6hBuYd4M7A== X-Google-Smtp-Source: ABdhPJz698nbKbYgL2dF565iPEoR7WzCc+Wp0S/qIabLKC8pciUWRSwlWFExH/GdxxtVuQGJTVkUd+56ESvveA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a05:6a00:1acb:b0:4fb:358f:fe87 with SMTP id f11-20020a056a001acb00b004fb358ffe87mr12183610pfv.75.1648835782098; Fri, 01 Apr 2022 10:56:22 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:44 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> 
Message-Id: <20220401175554.1931568-14-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 13/23] KVM: x86/mmu: Pass const memslot to init_shadow_page() and descendants From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Use a const pointer so that init_shadow_page() can be called from contexts where we have a const pointer. No functional change intended. Reviewed-by: Ben Gardon Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_page_track.h | 2 +- arch/x86/kvm/mmu/mmu.c | 6 +++--- arch/x86/kvm/mmu/mmu_internal.h | 2 +- arch/x86/kvm/mmu/page_track.c | 4 ++-- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- arch/x86/kvm/mmu/tdp_mmu.h | 2 +- 6 files changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h index eb186bc57f6a..3a2dc183ae9a 100644 --- a/arch/x86/include/asm/kvm_page_track.h +++ b/arch/x86/include/asm/kvm_page_track.h @@ -58,7 +58,7 @@ int kvm_page_track_create_memslot(struct kvm *kvm, unsigned long npages); void kvm_slot_page_track_add_page(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, + const struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode); void kvm_slot_page_track_remove_page(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 1efe161f9c02..39d9cccbdc7e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -794,7 +794,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn) } static void account_shadowed(struct kvm *kvm, - struct kvm_memory_slot *slot, + const struct kvm_memory_slot *slot, struct kvm_mmu_page *sp) { gfn_t gfn; @@ -1373,7 +1373,7 @@ int kvm_cpu_dirty_log_size(void) } bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, - struct kvm_memory_slot *slot, u64 gfn, + const struct kvm_memory_slot *slot, u64 gfn, int min_level) { struct kvm_rmap_head *rmap_head; @@ -2150,7 +2150,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu, } static void init_shadow_page(struct kvm *kvm, struct kvm_mmu_page *sp, - struct kvm_memory_slot *slot, gfn_t gfn, + const struct kvm_memory_slot *slot, gfn_t gfn, union kvm_mmu_page_role role) { struct hlist_head *sp_list; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index d4e2de5f2a6d..b6e22ba9c654 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -134,7 +134,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, - struct kvm_memory_slot *slot, u64 gfn, + const struct kvm_memory_slot *slot, u64 gfn, int min_level); void kvm_flush_remote_tlbs_with_address(struct kvm 
*kvm, u64 start_gfn, u64 pages); diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c index 2e09d1b6249f..3e7901294573 100644 --- a/arch/x86/kvm/mmu/page_track.c +++ b/arch/x86/kvm/mmu/page_track.c @@ -84,7 +84,7 @@ int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot) return 0; } -static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn, +static void update_gfn_track(const struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode, short count) { int index, val; @@ -112,7 +112,7 @@ static void update_gfn_track(struct kvm_memory_slot *slot, gfn_t gfn, * @mode: tracking mode, currently only write track is supported. */ void kvm_slot_page_track_add_page(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, + const struct kvm_memory_slot *slot, gfn_t gfn, enum kvm_page_track_mode mode) { diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index f6201b89045b..a04262bc34e2 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1793,7 +1793,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root, * Returns true if an SPTE was set and a TLB flush is needed. */ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, + const struct kvm_memory_slot *slot, gfn_t gfn, int min_level) { struct kvm_mmu_page *root; diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 5e5ef2576c81..c139635d4209 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -48,7 +48,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, const struct kvm_memory_slot *slot); bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, - struct kvm_memory_slot *slot, gfn_t gfn, + const struct kvm_memory_slot *slot, gfn_t gfn, int min_level); void kvm_tdp_mmu_try_split_huge_pages(struct kvm *kvm, From patchwork Fri Apr 1 17:55:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 83662C433FE for ; Fri, 1 Apr 2022 17:56:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350700AbiDAR6P (ORCPT ); Fri, 1 Apr 2022 13:58:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35542 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350753AbiDAR6O (ORCPT ); Fri, 1 Apr 2022 13:58:14 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4BD5628F81A for ; Fri, 1 Apr 2022 10:56:24 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id gk23-20020a17090b119700b001c65a1baa01so1637788pjb.5 for ; Fri, 01 Apr 2022 10:56:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=6nXa09mmt/6fKuSSqvfCsyYP8ACA5E/75PQkFkVUSuM=; b=EjkKGkzp3sFVKutQ7HKMWIknnxfZtmcm1Uv7rWc8W3rzaUUCvWJ/WDcOR+lAkqZDBN 8si82TCM2H9vua5OYF3I9On/pnmiMmZjbHw7FjG8TJgHo0DU9fAMb4jRpwje9wriV6Gc Xxy2A9lqTCeYgoOvDvLH9nxii+AKEutfjVGG+FEkzE0CqPLVWfBDdeTsQqtjxHQYXxQQ RAvTE1OzMWxA71F8Zf6SKr8BUsFVr2tcj0btjFJfDEergnYTo3L1ot7Ye37Yn2UMMWeP 
gjthXPgtBWHa7s1v+UFaMNe7hCjtbhb5SIcmbO/Vbko9K+g5ICEgluBdl3/o4WoqMJei t0vQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=6nXa09mmt/6fKuSSqvfCsyYP8ACA5E/75PQkFkVUSuM=; b=luOJH7EtIvwmIe0JpBquEXA7VcqL6h67Rb9QaV2DeXEwVwbCOOSgJU1fDfo3bXmDJL qi6YIVL3bhRkOFc8JhRU5lipybYopn9SZz/tC4Q9q00TXTMrkgz5ahkTzjp7zUYWuCI5 KaFGS+XrYtXSkg+EvAjqrlIrp6WGGemxrvyqYpfGA6H4oOL3cHjifvbwAsR3ZHT9+akv +IzdTltu6Sv6K4FlYvV4KZ8nVn14fGNWKKOA7BuHy2Q9Je9Q7ng7sDxyhpjaHyHQAuJb fY9FTl3jJK1co2FJY2nzX6Jq/PuVj+wiPAYt8jbwx6ffckZguI5u4gbRUDK4fLQEWRaQ Wzhg== X-Gm-Message-State: AOAM531P5Gqj1Q/pFA5srlDja0yBvcofl4T5z96MH5ejqiUwMzlVNsfU ZcR1G5Fgtc+cy4V1+Yv8GdcUldLiyA63Cw== X-Google-Smtp-Source: ABdhPJxTUcosla4+mIbtqxvUm22jFAX4QiyWBPyvxtfD4VUdc7Cj5IRmQn5lIj5TzvvvJYTw/mjAo71ZpLvBXA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90b:4b44:b0:1c7:41d:9428 with SMTP id mi4-20020a17090b4b4400b001c7041d9428mr13003517pjb.85.1648835783718; Fri, 01 Apr 2022 10:56:23 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:45 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-15-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 14/23] KVM: x86/mmu: Decouple rmap_add() and link_shadow_page() from kvm_vcpu From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Allow adding new entries to the rmap and linking shadow pages without a struct kvm_vcpu pointer by moving the implementation of rmap_add() and link_shadow_page() into inner helper functions. No functional change intended. Reviewed-by: Ben Gardon Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 44 +++++++++++++++++++++++++----------------- 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 39d9cccbdc7e..7305a8c625c0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -725,11 +725,6 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); } -static struct pte_list_desc *mmu_alloc_pte_list_desc(struct kvm_vcpu *vcpu) -{ - return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_pte_list_desc_cache); -} - static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc) { kmem_cache_free(pte_list_desc_cache, pte_list_desc); @@ -874,7 +869,7 @@ gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn, /* * Returns the number of pointers in the rmap chain, not counting the new one. 
*/ -static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte, +static int pte_list_add(struct kvm_mmu_memory_cache *cache, u64 *spte, struct kvm_rmap_head *rmap_head) { struct pte_list_desc *desc; @@ -885,7 +880,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte, rmap_head->val = (unsigned long)spte; } else if (!(rmap_head->val & 1)) { rmap_printk("%p %llx 1->many\n", spte, *spte); - desc = mmu_alloc_pte_list_desc(vcpu); + desc = kvm_mmu_memory_cache_alloc(cache); desc->sptes[0] = (u64 *)rmap_head->val; desc->sptes[1] = spte; desc->spte_count = 2; @@ -897,7 +892,7 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte, while (desc->spte_count == PTE_LIST_EXT) { count += PTE_LIST_EXT; if (!desc->more) { - desc->more = mmu_alloc_pte_list_desc(vcpu); + desc->more = kvm_mmu_memory_cache_alloc(cache); desc = desc->more; desc->spte_count = 0; break; @@ -1596,8 +1591,10 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head, #define RMAP_RECYCLE_THRESHOLD 1000 -static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot, - u64 *spte, gfn_t gfn) +static void __rmap_add(struct kvm *kvm, + struct kvm_mmu_memory_cache *cache, + const struct kvm_memory_slot *slot, + u64 *spte, gfn_t gfn) { struct kvm_mmu_page *sp; struct kvm_rmap_head *rmap_head; @@ -1606,15 +1603,21 @@ static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot, sp = sptep_to_sp(spte); kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn); rmap_head = gfn_to_rmap(gfn, sp->role.level, slot); - rmap_count = pte_list_add(vcpu, spte, rmap_head); + rmap_count = pte_list_add(cache, spte, rmap_head); if (rmap_count > RMAP_RECYCLE_THRESHOLD) { - kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0)); + kvm_unmap_rmapp(kvm, rmap_head, NULL, gfn, sp->role.level, __pte(0)); kvm_flush_remote_tlbs_with_address( - vcpu->kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); + kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); } } +static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot, + u64 *spte, gfn_t gfn) +{ + __rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn); +} + bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) { bool young = false; @@ -1682,13 +1685,13 @@ static unsigned kvm_page_table_hashfn(gfn_t gfn) return hash_64(gfn, KVM_MMU_HASH_SHIFT); } -static void mmu_page_add_parent_pte(struct kvm_vcpu *vcpu, +static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache, struct kvm_mmu_page *sp, u64 *parent_pte) { if (!parent_pte) return; - pte_list_add(vcpu, parent_pte, &sp->parent_ptes); + pte_list_add(cache, parent_pte, &sp->parent_ptes); } static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp, @@ -2304,8 +2307,8 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator) __shadow_walk_next(iterator, *iterator->sptep); } -static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, - struct kvm_mmu_page *sp) +static void __link_shadow_page(struct kvm_mmu_memory_cache *cache, u64 *sptep, + struct kvm_mmu_page *sp) { u64 spte; @@ -2315,12 +2318,17 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, mmu_spte_set(sptep, spte); - mmu_page_add_parent_pte(vcpu, sp, sptep); + mmu_page_add_parent_pte(cache, sp, sptep); if (sp->unsync_children || sp->unsync) mark_unsync(sptep); } +static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, struct kvm_mmu_page *sp) +{ + __link_shadow_page(&vcpu->arch.mmu_pte_list_desc_cache, sptep, sp); +} + 
static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned direct_access) { From patchwork Fri Apr 1 17:55:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35CCBC433EF for ; Fri, 1 Apr 2022 17:56:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350697AbiDAR6S (ORCPT ); Fri, 1 Apr 2022 13:58:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35540 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350712AbiDAR6P (ORCPT ); Fri, 1 Apr 2022 13:58:15 -0400 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A69121AF501 for ; Fri, 1 Apr 2022 10:56:25 -0700 (PDT) Received: by mail-pf1-x44a.google.com with SMTP id t66-20020a625f45000000b004fabd8f5cc1so1999646pfb.11 for ; Fri, 01 Apr 2022 10:56:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=M8diawJ0a5+6KeqCTlFavoigvh5ZH93bmgpmzfeJdKU=; b=RKIqIHqTnVlf04If3sQSSAHbqv3QjMDqucAB4bBcWHE84GJh7XwJyqak7+P6HFLJ6n +L1EoCVaxTwBwzeLwEZ0Mnk42VtiBqYSiLFEinADKf1v/Yjbv09HeUnW2aHu/sANey8q cjo663yO9FAQ7xYig94gQ5SsFEJK87TpzlrwfQpq1FmDouuBdRw3HLf/GzCksxq6h6Qd dHPc1eGJ/weS0TRiejDzlpOCxn9sAZnBRZi0jKPd4b8sUEQnMpL4KP+7rN83HYMRmSNw x95OtGAk1v69jDYVtvnNR9Y1eLvJJQfdxffJ5OeETA4K9WvDstqnYfAG9CmLiSo2fVV2 anFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=M8diawJ0a5+6KeqCTlFavoigvh5ZH93bmgpmzfeJdKU=; b=34sHCez34r9znyt0bjDclQezw2sW5FCAd6Fzw3E1tAs0cV3W2OaTx/vjWZkkKHm33S 7A8sFBn6ce/1j2F8JMNNnMHJAbyxHI7FwYkbnXb1rDRCnjp7r2TaLHOWC/VkFZuo3pYE aDRZJmo02hecXx02UGwe0DHSIdNkcFLyFmFeL5l5VUK+/88RmFYvdwjfQuoXdSHX6zsS +1AfKYB8aGpd6g4K2EIvdCT8v9jVCB+tFaz5rDSvM7q6UDmWqwh8rqT6bawnc88oVHMp s7bLwzqUn8Va8NuO3xw6s2rLtRiLnweTDy83e0BkMmy4awyLh7xV/l9zJToeb5QjmHAF p2Bg== X-Gm-Message-State: AOAM533P1ab/5sbskmcEfksiQPHn2aauMvVCzhJCVnblRBpOcM0xIs0n HmFgGGcpr9Nl14AmO0UWXM+QWcAcQMCS0w== X-Google-Smtp-Source: ABdhPJzPFkqyWE5YerNB8OHSTmxQmbipoI1rO0qgKI7jWDlomyQ1zN1wSQvVwslRKXespEzPjv+VbOwMvNn3Mg== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:ec8c:b0:154:7cee:774e with SMTP id x12-20020a170902ec8c00b001547cee774emr11782718plg.61.1648835785152; Fri, 01 Apr 2022 10:56:25 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:46 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-16-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 15/23] KVM: x86/mmu: Update page stats in __rmap_add() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR 
ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Update the page stats in __rmap_add() rather than at the call site. This will avoid having to manually update page stats when splitting huge pages in a subsequent commit. No functional change intended. Reviewed-by: Ben Gardon Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 7305a8c625c0..5e1002d57689 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1602,6 +1602,8 @@ static void __rmap_add(struct kvm *kvm, sp = sptep_to_sp(spte); kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn); + kvm_update_page_stats(kvm, sp->role.level, 1); + rmap_head = gfn_to_rmap(gfn, sp->role.level, slot); rmap_count = pte_list_add(cache, spte, rmap_head); @@ -2839,7 +2841,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, if (!was_rmapped) { WARN_ON_ONCE(ret == RET_PF_SPURIOUS); - kvm_update_page_stats(vcpu->kvm, level, 1); rmap_add(vcpu, slot, sptep, gfn); } From patchwork Fri Apr 1 17:55:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798567 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A168C433EF for ; Fri, 1 Apr 2022 17:56:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350773AbiDAR6V (ORCPT ); Fri, 1 Apr 2022 13:58:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350758AbiDAR6S (ORCPT ); Fri, 1 Apr 2022 13:58:18 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3CD981AF501 for ; Fri, 1 Apr 2022 10:56:27 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id bv2-20020a17090af18200b001c63c69a774so1662226pjb.0 for ; Fri, 01 Apr 2022 10:56:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ZZYIO/jnGHmZ4rH+JBLCnJUupr7XJywfFhnFxY1c2JA=; b=NyoJT0MjlL1XQEPPb1nlvLG7YLfoBZnsrKii89D0PDAULAX/Nnp+BmbYVneYVm7HID 2k3LGwu0EimGxpzxMMLgLenU4gJ0JRiWPIbIoZfMcIiXs9TbUAws/A0cdsH2+iRm0w/k hMSsl5Cbi9ejl3Uu9ksAzZhnc+on97UnujdE/a80/r/wyhkUQUcpSmNN8vnh8YfL0aqX EtUOdH3Lq4dPt4LfWkbYTrPaezu1yKO7xpvpJFP/pSk6/nGGPGYpbxPBqCdphCro//4W sAcXABrzIYiCnu6GCQw5jWZbmtz2UP/DLewJphPuOZXKcxoDSR3/3RTuS0DjaYSqyQY2 FZTw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ZZYIO/jnGHmZ4rH+JBLCnJUupr7XJywfFhnFxY1c2JA=; b=FjDG3C/F+/PY5aZN+hPsrfR5ZzSstlUsRt1Ib8BpV2xv0QXMrNH7L+L4PeVopPozDa dIvXVH2++7EGtV7mDTM2FuDC9jaxskRFbnCXdxWu/DmUu/NExYr18gCg6sTdkok5tLMh DPjYkuRsmUhih8Nq4MJiU127d7IzYxBC3JmNjq81fbJPPyowtKIOFo/GxrIdDT6No1SA 
uX9dHscF2avpRPuON5/Njmmj1ilQZ1NBpqV0Z4I5k/iP3Jl/95WB9n9eIaJeQYELYgHM 5etFbORSXMXAH5nc6b+z51d6OT+Vg7kLxXmMHoS4aCk3PW9gRMFGugjDk3w2SMQKW7Qe m18g== X-Gm-Message-State: AOAM533auSVy8P6HWOm8QMjvNAyHf6O7wZXK0eAt4+EuQ2EQjHNwRZLI /ZQ8nfwBdIjN0KHZFUU+vexItAydq+4jfw== X-Google-Smtp-Source: ABdhPJz2hCYi6E81nF9CFy0u7qVJk/su8KMqwFLzykWKFkpDTKc8tN/+7J8eexUEDHOBAfrJM0iixp6dQ5bHWw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:bcca:b0:153:88c7:a02 with SMTP id o10-20020a170902bcca00b0015388c70a02mr11424516pls.112.1648835786683; Fri, 01 Apr 2022 10:56:26 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:47 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-17-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 16/23] KVM: x86/mmu: Cache the access bits of shadowed translations From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org In order to split a huge page we need to know what access bits to assign to the role of the new child page table. This can't be easily derived from the huge page SPTE itself since KVM applies its own access policies on top, such as for HugePage NX. We could walk the guest page tables to determine the correct access bits, but that is difficult to plumb outside of a vCPU fault context. Instead, we can store the original access bits for each leaf SPTE alongside the GFN in the gfns array. The access bits only take up 3 bits, which leaves 61 bits left over for gfns, which is more than enough. So this change does not require any additional memory. In order to keep the access bit cache in sync with the guest, we have to extend FNAME(sync_page) to also update the access bits. Now that the gfns array caches more information than just GFNs, rename it to shadowed_translation. 
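Since the packing argument is central here, a short sketch may help (it assumes the shadowed_translation_entry bitfield introduced in the diff below and KVM's ACC_* masks): a 52-bit gfn plus a 3-bit access mask fit in a single u64, so the per-shadow-page array stays exactly as large as the old gfns array.

/*
 * Illustrative only: pack one shadowed translation. The BUILD_BUG_ON
 * mirrors the size check added in kvm_mmu_alloc_shadow_page() below.
 */
static void pack_shadowed_translation(struct shadowed_translation_entry *e,
				      gfn_t gfn, u32 access)
{
	BUILD_BUG_ON(sizeof(*e) != sizeof(u64));

	e->gfn = gfn;			/* a gfn needs at most 52 bits */
	e->access = access & ACC_ALL;	/* exec|write|user == 3 bits */
}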
Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/mmu/mmu.c | 71 ++++++++++++++++++++++++++++----- arch/x86/kvm/mmu/mmu_internal.h | 20 +++++++++- arch/x86/kvm/mmu/paging_tmpl.h | 8 +++- 4 files changed, 85 insertions(+), 16 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 9694dd5e6ccc..be4349c9ffea 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -696,7 +696,7 @@ struct kvm_vcpu_arch { struct kvm_mmu_memory_cache mmu_pte_list_desc_cache; struct kvm_mmu_memory_cache mmu_shadow_page_cache; - struct kvm_mmu_memory_cache mmu_gfn_array_cache; + struct kvm_mmu_memory_cache mmu_shadowed_info_cache; struct kvm_mmu_memory_cache mmu_page_header_cache; /* diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 5e1002d57689..3a425ed80e23 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -708,7 +708,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) if (r) return r; if (maybe_indirect) { - r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_gfn_array_cache, + r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadowed_info_cache, PT64_ROOT_MAX_LEVEL); if (r) return r; @@ -721,7 +721,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) { kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_gfn_array_cache); + kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); } @@ -733,7 +733,7 @@ static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc) static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) { if (!sp->role.direct) - return sp->gfns[index]; + return sp->shadowed_translation[index].gfn; return sp->gfn + (index << ((sp->role.level - 1) * PT64_LEVEL_BITS)); } @@ -741,7 +741,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn) { if (!sp->role.direct) { - sp->gfns[index] = gfn; + sp->shadowed_translation[index].gfn = gfn; return; } @@ -752,6 +752,47 @@ static void kvm_mmu_page_set_gfn(struct kvm_mmu_page *sp, int index, gfn_t gfn) kvm_mmu_page_get_gfn(sp, index), gfn); } +static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index, u32 access) +{ + if (!sp->role.direct) { + sp->shadowed_translation[index].access = access; + return; + } + + if (WARN_ON(access != sp->role.access)) + pr_err_ratelimited("access mismatch under direct page %llx " + "(expected %llx, got %llx)\n", + kvm_mmu_page_get_gfn(sp, index), + sp->role.access, access); +} + +/* + * For leaf SPTEs, fetch the *guest* access permissions being shadowed. Note + * that the SPTE itself may have a more constrained access permissions that + * what the guest enforces. For example, a guest may create an executable + * huge PTE but KVM may disallow execution to mitigate iTLB multihit. + */ +static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index) +{ + if (!sp->role.direct) + return sp->shadowed_translation[index].access; + + /* + * For direct MMUs (e.g. TDP or non-paging guests) there are no *guest* + * access permissions being shadowed. So we can just return ACC_ALL + * here. 
+ * + * For indirect MMUs (shadow paging), direct shadow pages exist when KVM + * is shadowing a guest huge page with smaller pages, since the guest + * huge page is being directly mapped. In this case the guest access + * permissions being shadowed are the access permissions of the huge + * page. + * + * In both cases, sp->role.access contains exactly what we want. + */ + return sp->role.access; +} + /* * Return the pointer to the large page information for a given gfn, * handling slots that are not large page aligned. @@ -1594,7 +1635,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head, static void __rmap_add(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, const struct kvm_memory_slot *slot, - u64 *spte, gfn_t gfn) + u64 *spte, gfn_t gfn, u32 access) { struct kvm_mmu_page *sp; struct kvm_rmap_head *rmap_head; @@ -1602,6 +1643,7 @@ static void __rmap_add(struct kvm *kvm, sp = sptep_to_sp(spte); kvm_mmu_page_set_gfn(sp, spte - sp->spt, gfn); + kvm_mmu_page_set_access(sp, spte - sp->spt, access); kvm_update_page_stats(kvm, sp->role.level, 1); rmap_head = gfn_to_rmap(gfn, sp->role.level, slot); @@ -1615,9 +1657,9 @@ static void __rmap_add(struct kvm *kvm, } static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot, - u64 *spte, gfn_t gfn) + u64 *spte, gfn_t gfn, u32 access) { - __rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn); + __rmap_add(vcpu->kvm, &vcpu->arch.mmu_pte_list_desc_cache, slot, spte, gfn, access); } bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) @@ -1678,7 +1720,7 @@ void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) { free_page((unsigned long)sp->spt); if (!sp->role.direct) - free_page((unsigned long)sp->gfns); + free_page((unsigned long)sp->shadowed_translation); kmem_cache_free(mmu_page_header_cache, sp); } @@ -1715,8 +1757,12 @@ struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu, bool direc sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + + BUILD_BUG_ON(sizeof(sp->shadowed_translation[0]) != sizeof(u64)); + if (!direct) - sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache); + sp->shadowed_translation = + kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadowed_info_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -1738,7 +1784,7 @@ static inline gfp_t gfp_flags_for_split(bool locked) * * Huge page splitting always uses direct shadow pages since the huge page is * being mapped directly with a lower level page table. Thus there's no need to - * allocate the gfns array. + * allocate the shadowed_translation array. */ struct kvm_mmu_page *kvm_mmu_alloc_direct_sp_for_split(bool locked) { @@ -2841,7 +2887,10 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, if (!was_rmapped) { WARN_ON_ONCE(ret == RET_PF_SPURIOUS); - rmap_add(vcpu, slot, sptep, gfn); + rmap_add(vcpu, slot, sptep, gfn, pte_access); + } else { + /* Already rmapped but the pte_access bits may have changed. */ + kvm_mmu_page_set_access(sp, sptep - sp->spt, pte_access); } return ret; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index b6e22ba9c654..3f76f4c1ae59 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -32,6 +32,18 @@ extern bool dbg; typedef u64 __rcu *tdp_ptep_t; +/* + * Stores the result of the guest translation being shadowed by an SPTE. 
KVM + * shadows two types of guest translations: nGPA -> GPA (shadow EPT/NPT) and + * GVA -> GPA (traditional shadow paging). In both cases the result of the + * translation is a GPA and a set of access constraints. + */ +struct shadowed_translation_entry { + /* Note, GFNs can have at most 64 - PAGE_SHIFT = 52 bits. */ + u64 gfn:52; + u64 access:3; +}; + struct kvm_mmu_page { /* * Note, "link" through "spt" fit in a single 64 byte cache line on @@ -53,8 +65,12 @@ struct kvm_mmu_page { gfn_t gfn; u64 *spt; - /* hold the gfn of each spte inside spt */ - gfn_t *gfns; + /* + * Caches the result of the intermediate guest translation being + * shadowed by each SPTE. NULL for direct shadow pages. + */ + struct shadowed_translation_entry *shadowed_translation; + /* Currently serving as active root */ union { int root_count; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index db63b5377465..91c2088464ce 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -1014,7 +1014,8 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, } /* - * Using the cached information from sp->gfns is safe because: + * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn() + * and kvm_mmu_page_get_access()) is safe because: * - The spte has a reference to the struct page, so the pfn for a given gfn * can't change unless all sptes pointing to it are nuked first. * @@ -1088,12 +1089,15 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access)) continue; - if (gfn != sp->gfns[i]) { + if (gfn != kvm_mmu_page_get_gfn(sp, i)) { drop_spte(vcpu->kvm, &sp->spt[i]); flush = true; continue; } + if (pte_access != kvm_mmu_page_get_access(sp, i)) + kvm_mmu_page_set_access(sp, i, pte_access); + sptep = &sp->spt[i]; spte = *sptep; host_writable = spte & shadow_host_writable_mask; From patchwork Fri Apr 1 17:55:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798568 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7FAC5C433F5 for ; Fri, 1 Apr 2022 17:56:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350727AbiDAR6W (ORCPT ); Fri, 1 Apr 2022 13:58:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350736AbiDAR6T (ORCPT ); Fri, 1 Apr 2022 13:58:19 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C128328F81E for ; Fri, 1 Apr 2022 10:56:28 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id 77-20020a621450000000b004fa8868a49eso2020151pfu.3 for ; Fri, 01 Apr 2022 10:56:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=N+Yt2vLgKrZHcuNVUsOj9eJGBLy1X2SsKXkQhye+WRI=; b=Qr6kriferOcm7AIY4k/KBQ72DaTvRRltjRQh/5ECnhmB7SaEkrHaUEXakckg6PNkRI UqnATSklWc+8zALzY6JKcjXGFe1SfBq6T8EoKDzLpY6jLE2QLs0t53yQ/skVLxXtxGPc SbCNotOgDY1raopFMdahKZ5R9WaFB1/m5rObgP9wTILsP2yEOkos0mtBOC4YzivxawyR 
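A quick aside on struct shadowed_translation_entry added above: the 52-bit gfn and 3-bit access fields pack into a single 64-bit word, which is exactly what the BUILD_BUG_ON(sizeof(sp->shadowed_translation[0]) != sizeof(u64)) in kvm_mmu_alloc_shadow_page() depends on. The sketch below is standalone userspace C, not kernel code; the field widths come from the patch, everything else (names, values, main) is purely illustrative.

/*
 * Minimal sketch: a 52-bit gfn plus a 3-bit access mask still fit in one
 * 64-bit word. 64-bit bit-fields are a GCC/Clang extension, the same one
 * the kernel itself relies on.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct shadowed_translation_entry {
        uint64_t gfn:52;        /* GFNs need at most 64 - PAGE_SHIFT = 52 bits */
        uint64_t access:3;      /* rwx-style access mask */
};

int main(void)
{
        struct shadowed_translation_entry e = { 0 };

        static_assert(sizeof(e) == sizeof(uint64_t),
                      "entry must remain a single u64");

        e.gfn = (1ULL << 52) - 1;       /* largest representable GFN */
        e.access = 0x7;                 /* illustrative ACC_ALL-like value */

        printf("sizeof = %zu, gfn = 0x%llx, access = 0x%llx\n", sizeof(e),
               (unsigned long long)e.gfn, (unsigned long long)e.access);
        return 0;
}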
A6HVPHEPfUothLkegc9HrO+kBmoz4HMGCFgxA1bBAmyGBat3PADxzlV+wjdNE3FggTSW FImW+j6GQKwF5ZJexZpan7VjGim/N71Uhn5jXmS5VsRXU7NBV1T63G8bmnjgkl+Uvk8D elaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=N+Yt2vLgKrZHcuNVUsOj9eJGBLy1X2SsKXkQhye+WRI=; b=B3iba0YDiT3m1QTBEGkhDXbizuoMdcW5xmYCJxZMOxohs1joAmpwPzN+8trFdnhu9H HwVmlcyK9y3B7VgmRDH76I1ViPi0sIW22FjHgoto4NaD0/LTaFdknqnJGM68Gt52bBJC VKfT5zz2I9gCmFwPs8r0apL2lsFRN+veMl7Y36HpVzhniVFuv1cHzAbLwVNFjN1332Ru Y8ucFgJAeqVhlH+kPR67Fsodo+IbHLBM6KdnW6Wy8xIFSi57VUPT7IKxA8ziXJUV8uMx rY+XBrwj/DL67IDWUj90l+GIDADmGXBaem1OhOhvspY1YibIArVXcU6xbVaDt7HO+jK3 E/Xw== X-Gm-Message-State: AOAM532s3kbdy1UqEPP04YI/oEQ1B0RmjAODV7TKKkBgRDq/pl9oNpZZ O5GnxcQjQfUp+dNEHtjD4R44ByXj12r4zA== X-Google-Smtp-Source: ABdhPJwplQwfsWEVcDA4UHiGwuSL/A4oELiqXDjC0LjFD0RE2g1bBGBlZDUBl9Lc3mLD3IwFfsHZo9b9soqNMw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:90a:3e44:b0:1c9:8365:5753 with SMTP id t4-20020a17090a3e4400b001c983655753mr13236316pjm.60.1648835788080; Fri, 01 Apr 2022 10:56:28 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:48 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-18-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 17/23] KVM: x86/mmu: Extend make_huge_page_split_spte() for the shadow MMU From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Currently make_huge_page_split_spte() assumes execute permissions can be granted to any 4K SPTE when splitting huge pages. This is true for the TDP MMU but is not necessarily true for the shadow MMU, since we may be splitting a huge page that shadows a non-executable guest huge page. To fix this, pass in the child shadow page where the huge page will be split and derive the execution permission from the shadow page's role. This is correct because huge pages are always split with direct shadow page and thus the shadow page role contains the correct access permissions. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/spte.c | 13 +++++++------ arch/x86/kvm/mmu/spte.h | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 3 files changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index d10189d9c877..ef6537c6f5ef 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -216,10 +216,11 @@ static u64 make_spte_executable(u64 spte) * This is used during huge page splitting to build the SPTEs that make up the * new page table. 
*/ -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) +u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index) { + bool exec_allowed = sp->role.access & ACC_EXEC_MASK; + int child_level = sp->role.level; u64 child_spte; - int child_level; if (WARN_ON_ONCE(!is_shadow_present_pte(huge_spte))) return 0; @@ -228,7 +229,6 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) return 0; child_spte = huge_spte; - child_level = huge_level - 1; /* * The child_spte already has the base address of the huge page being @@ -241,10 +241,11 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index) child_spte &= ~PT_PAGE_SIZE_MASK; /* - * When splitting to a 4K page, mark the page executable as the - * NX hugepage mitigation no longer applies. + * When splitting to a 4K page where execution is allowed, mark + * the page executable as the NX hugepage mitigation no longer + * applies. */ - if (is_nx_huge_page_enabled()) + if (exec_allowed && is_nx_huge_page_enabled()) child_spte = make_spte_executable(child_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 73f12615416f..921ea77f1b5e 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -415,7 +415,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool prefetch, bool can_unsync, bool host_writable, u64 *new_spte); -u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index); +u64 make_huge_page_split_spte(u64 huge_spte, struct kvm_mmu_page *sp, int index); u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); u64 mark_spte_for_access_track(u64 spte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index a04262bc34e2..36d241405ecc 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1455,7 +1455,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, * not been linked in yet and thus is not reachable from any other CPU. 
*/ for (i = 0; i < PT64_ENT_PER_PAGE; i++) - sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i); + sp->spt[i] = make_huge_page_split_spte(huge_spte, sp, i); /* * Replace the huge spte with a pointer to the populated lower level From patchwork Fri Apr 1 17:55:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798569 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 47714C433FE for ; Fri, 1 Apr 2022 17:56:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350818AbiDAR60 (ORCPT ); Fri, 1 Apr 2022 13:58:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36412 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350766AbiDAR6V (ORCPT ); Fri, 1 Apr 2022 13:58:21 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3D77F1D7612 for ; Fri, 1 Apr 2022 10:56:30 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id ml1-20020a17090b360100b001ca140b56ffso1925988pjb.7 for ; Fri, 01 Apr 2022 10:56:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bPjRdwbrgeaGoRO6D9pIVUv9XL1QVrE/jZ7frr9Y9sk=; b=HKXVYj5bLgLQDNl0Yp3f3Cxk0xEfw/WwCidCusFgYIXZPgwnivI6+Ih1CXvlNY8wv+ 0SB7IO2lfWqyxHZXsqnHLnT0QBt4/Kn5lk5EW8K9iW1NthwnnpRVMkumhd4k1OQE77wT S2O1Z7AitpIGEox5Qh5FKe0zYHXrRWVT9NJBODMGEgo+t6A06dO9q/xdIDofE1nFLYnl tWGum6qGeGVSd65XaJizCavBESlaKNmwmbP75qcI6R+2txzDbk0mbonxSd1y21Sie2vy W5UgLGjdxmVDqUWGtPF5QIJMbd5Gcb1De7NMCzJgjAuAzu6A7T0xAsFyXc5SNRC5Sf5R j+lg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bPjRdwbrgeaGoRO6D9pIVUv9XL1QVrE/jZ7frr9Y9sk=; b=F/UmkmjxMe8VnNwOL8jCY0pXfhRdQZ9zPFM4q0YrThJl+aduqgcMl4Mkaqd2F2B2Hu LYMsdVpW6UhHXxQiGTxm/r6WPXI+9tqUfNcvJwnqD8YODAA1Ma/elnqPoDScLNwxGD0D lozjH7uVrZlod80tRsHC7Au0crqrTKMqRTq8x19nADKa4cyh5v4XJOOwlIv8c/Vt1YZV GMv5GVGCq92W72hIkOdWjs8mHXtrqKbunM8C2tKppDslTof4Zs1TQ3pTHnmtAjbgypDz NcRymIe9uc14rC0NNIbKki6C8RyKgZaRSB6FOBNc6FFppIouDMHojyeV42fDOhU5BvLw IVFA== X-Gm-Message-State: AOAM533aKhLBfBAH+RNqgKxVYJFxCosqzdRgR/7QH1Ty5Z9Dc710zG9Q zalCLeYhVnk22zpuB2XqKo/TEvPwrq6KTg== X-Google-Smtp-Source: ABdhPJwAwneAua3CK3hqlxT6AVdmOtu9SNJP0PRrgeiWzgw657lKsS+2gpSEl29HSjZknUEHAGP38CQTV81Y8Q== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:d888:b0:151:6fe8:6e68 with SMTP id b8-20020a170902d88800b001516fe86e68mr11236930plz.158.1648835789747; Fri, 01 Apr 2022 10:56:29 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:49 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-19-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 18/23] KVM: x86/mmu: Zap collapsible SPTEs at all levels in the shadow MMU From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley 
, Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Currently KVM only zaps collapsible 4KiB SPTEs in the shadow MMU (i.e. in the rmap). This is fine for now KVM never creates intermediate huge pages during dirty logging, i.e. a 1GiB page is never partially split to a 2MiB page. However, this will stop being true once the shadow MMU participates in eager page splitting, which can in fact leave behind partially split huge pages. In preparation for that change, change the shadow MMU to iterate over all necessary levels when zapping collapsible SPTEs. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 3a425ed80e23..6390b23d286a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6172,18 +6172,25 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, return need_tlb_flush; } +static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm, + const struct kvm_memory_slot *slot) +{ + /* + * Note, use KVM_MAX_HUGEPAGE_LEVEL - 1 since there's no need to zap + * pages that are already mapped at the maximum possible level. + */ + if (slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte, + PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, + true)) + kvm_arch_flush_remote_tlbs_memslot(kvm, slot); +} + void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, const struct kvm_memory_slot *slot) { if (kvm_memslots_have_rmaps(kvm)) { write_lock(&kvm->mmu_lock); - /* - * Zap only 4k SPTEs since the legacy MMU only supports dirty - * logging at a 4k granularity and never creates collapsible - * 2m SPTEs during dirty logging. 
- */ - if (slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true)) - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); + kvm_rmap_zap_collapsible_sptes(kvm, slot); write_unlock(&kvm->mmu_lock); } From patchwork Fri Apr 1 17:55:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798570 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E5FDC433F5 for ; Fri, 1 Apr 2022 17:56:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350852AbiDAR63 (ORCPT ); Fri, 1 Apr 2022 13:58:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36530 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350749AbiDAR6W (ORCPT ); Fri, 1 Apr 2022 13:58:22 -0400 Received: from mail-pf1-x44a.google.com (mail-pf1-x44a.google.com [IPv6:2607:f8b0:4864:20::44a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D51F526481A for ; Fri, 1 Apr 2022 10:56:31 -0700 (PDT) Received: by mail-pf1-x44a.google.com with SMTP id k7-20020aa79727000000b004fdc86d9b79so1986594pfg.8 for ; Fri, 01 Apr 2022 10:56:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=SFXm2KwniaE00QEj2NzqHgvf1ASQ84rh2AbpNytHfco=; b=qJlGcvodbGXModlH9jGukKmgTNEjWAOITQQu62mFHP6LKDJbwZqskUJsLcP86ZbJU6 ci9SF25lZR0byT8KkjvNOzhD5SdjyuAbbl77X2sYflrRrmhCGM3luPp1sOeeEkV+3EAL 72SOwJfxzbZxHxeJw0r5hmFUuOz8+1rGkoNZg9n+oGWFvq9KXzbUDF9pwFZSsRX7F7CU QAYlhUtNCoVt4sAs4aFJDgnE2BjvLy8UVhToIBfsNEPPIcfoYrdvQP2Y5Al9np8aW+3Z vZlfv9GdlmFaj4JXon3BM18j1zorzjZSDNDzm7hgwuUh83hha4yb45BqMoS1/CXJ+Jpn +MMA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=SFXm2KwniaE00QEj2NzqHgvf1ASQ84rh2AbpNytHfco=; b=qIvf4+93Od+vE9b6Y2OgadFluED2B2Vz1243dHQtupg8peSMJRPi4Sf2DdP7YK+xUT zKTCrz4LEkw2BfaxQFfX26IPG55hTRl/pmwJusAW7/VUJ4kwxIcPwc/XkHE8t4ruO08T d12Dx/esMl9a2nZu3rO98pwNwPcan7ZI13Hspo8XHxWJ+7NTNOBMilHJRbZAuuKYmxiF mBHdKakkZM/RsWbV+OYKatc7xSo48q/EOQ0Hxo47tGAmkIlp9WGlXFHXH/VRWHJSrJ/M ChH78+lO7bi3XYwzqib+QtGtovw/PerfgQpXQZf1w0aklFelJ7G7xMgeXqVPyP+qgmnw +rWg== X-Gm-Message-State: AOAM531gy14n5qn2EbEIZUB0gymrhba2iarTaZ6rOXVGmPI3eaRoa5ez RGV9gy1TQrJglRK9qbKNtC7+MzJD6yMtOg== X-Google-Smtp-Source: ABdhPJy5g1z/iSbFZBwpj7J/HiTUYMlrOwh+Ud1JO3xyCytgN0gs7EbKQm2V60MVCQpu5tAlITqiMh3TNmIocA== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:e78f:b0:156:3b3b:e4ce with SMTP id cp15-20020a170902e78f00b001563b3be4cemr17310175plb.8.1648835791370; Fri, 01 Apr 2022 10:56:31 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:50 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-20-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 19/23] KVM: x86/mmu: Refactor drop_large_spte() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew 
Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org drop_large_spte() drops a large SPTE if it exists and then flushes TLBs. Its helper function, __drop_large_spte(), does the drop without the flush. In preparation for eager page splitting, which will need to sometimes flush when dropping large SPTEs (and sometimes not), push the flushing logic down into __drop_large_spte() and add a bool parameter to control it. No functional change intended. Reviewed-by: Peter Xu Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6390b23d286a..f058f28909ea 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1184,28 +1184,29 @@ static void drop_spte(struct kvm *kvm, u64 *sptep) rmap_remove(kvm, sptep); } - -static bool __drop_large_spte(struct kvm *kvm, u64 *sptep) +static void __drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush) { - if (is_large_pte(*sptep)) { - WARN_ON(sptep_to_sp(sptep)->role.level == PG_LEVEL_4K); - drop_spte(kvm, sptep); - return true; - } + struct kvm_mmu_page *sp; - return false; -} + if (!is_large_pte(*sptep)) + return; -static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep) -{ - if (__drop_large_spte(vcpu->kvm, sptep)) { - struct kvm_mmu_page *sp = sptep_to_sp(sptep); + sp = sptep_to_sp(sptep); + WARN_ON(sp->role.level == PG_LEVEL_4K); - kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn, + drop_spte(kvm, sptep); + + if (flush) { + kvm_flush_remote_tlbs_with_address(kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); } } +static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep) +{ + return __drop_large_spte(vcpu->kvm, sptep, true); +} + /* * Write-protect on the specified @sptep, @pt_protect indicates whether * spte write-protection is caused by protecting shadow page table. 
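To make the shape of the drop_large_spte() refactor above concrete, here is a minimal standalone sketch (plain C with illustrative names, not the kernel helpers): the worker takes an explicit flush flag so eager page splitting can defer or skip the TLB flush, while a thin wrapper preserves the existing always-flush behaviour for current callers.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for kvm_flush_remote_tlbs_with_address(). */
static void flush_remote_tlbs_range(unsigned long gfn, unsigned long npages)
{
        printf("flush gfn 0x%lx (+%lu pages)\n", gfn, npages);
}

/* Worker: drops the entry; flushes only if the caller asks for it. */
static void __drop_large_entry(unsigned long *entry, unsigned long gfn,
                               unsigned long npages, bool flush)
{
        if (!*entry)            /* nothing mapped, nothing to drop */
                return;

        *entry = 0;             /* stand-in for drop_spte() */

        if (flush)
                flush_remote_tlbs_range(gfn, npages);
}

/* Wrapper: identical behaviour to the pre-refactor helper. */
static void drop_large_entry(unsigned long *entry, unsigned long gfn,
                             unsigned long npages)
{
        __drop_large_entry(entry, gfn, npages, true);
}

int main(void)
{
        unsigned long entry = 0xdeadbeef;

        drop_large_entry(&entry, 0x1000, 512);          /* flushes */

        entry = 0xdeadbeef;
        __drop_large_entry(&entry, 0x1000, 512, false); /* caller flushes later */
        return 0;
}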
From patchwork Fri Apr 1 17:55:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798571 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F6DFC43219 for ; Fri, 1 Apr 2022 17:56:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350750AbiDAR6a (ORCPT ); Fri, 1 Apr 2022 13:58:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36716 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350796AbiDAR6Y (ORCPT ); Fri, 1 Apr 2022 13:58:24 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9F1FF20A941 for ; Fri, 1 Apr 2022 10:56:33 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id 63-20020a621742000000b004fa914b7f34so2038311pfx.0 for ; Fri, 01 Apr 2022 10:56:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=KCsCCKPJw//ZakmipHFrFH80mcQyRNSTiCbn8jePd0o=; b=fHLn2+n3366F34M19Tmgg8BnVu9+LEvD/2GR4UBzeiov31AnJXMHjbvaLIkEhk4j3o N2FZDGKKyT0qh/GAFAsu7U+qc4lSEocUuRqAjOf7lQ9u6ItLIqvfQrsKijs6YPJLzBjd uC8gf+S8jFrJllP72uEXB2Y+07rGTl1RQKc7glH/ssP6++fOrooznYAYU5n7zg/qKHeW iNvNTcMHR5IsB0XNEvYETh9u4AwOCHGs2pl2pc1vOsB7472EF6j2krfhqIl8ZUkBnDZD UIZ5nzt2s0eTLQqqA6efcgkB3hYEgq0tDT4G/RkeDl6nWFCfmeYN3u6h2EoKQ3p5BEMe hyaQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=KCsCCKPJw//ZakmipHFrFH80mcQyRNSTiCbn8jePd0o=; b=BQGg1In4lgVivdsYxJqn7AZcbPdeE/jPY74w5EUBlt4H8TO8TXqDhq9ZBPgKTNnswX dAZQ2ROrhjmQSSxgL8HIS1KmmTPtH0wjXHC/KJykGUdp6VWl1QT2GABo+Lm/pIDO3cG0 M1UfDBcpYAVK0uoMrvQt45xA3h8E1pCuqoDjkucqnsGUNsuh6JNCaBd1DR+ohFFr5Gw9 g5Wni5HSRWHZe8moOEFLT44DDtNt0yWqSzV9ZwqEEniHnpOb3Al/sXqgnIYgsLHbo2Ic zOz++zhAazCZ2rIqWizwltyg+HE43CVdzfd2cbb88qsBFHGOc0QQaoHHje6PUo47Gv0H qV3w== X-Gm-Message-State: AOAM53218BUiaPb4HAo8/R+QwKXuVv+i3n0XGSaqeJhKvVZuKA61Rk7o yQohmi5RSJBWOiiqUzndoneO44dx2p7HqQ== X-Google-Smtp-Source: ABdhPJx3yk8DGw0UUUnESqpCywr8AT/K25hWiO7n8nNV5Yk05HotoBcuxffoYGIonmxFrdqGUZzHk5X3TRJ/9w== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a62:1dc9:0:b0:4fa:e4d2:7745 with SMTP id d192-20020a621dc9000000b004fae4d27745mr11986119pfd.61.1648835793113; Fri, 01 Apr 2022 10:56:33 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:51 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-21-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 20/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , 
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at declaration time rather than being fixed for all declarations. This will be used in a follow-up commit to declare an cache in x86 with a capacity of 512+ objects without having to increase the capacity of all caches in KVM. This change requires each cache now specify its capacity at runtime, since the cache struct itself no longer has a fixed capacity known at compile time. To protect against someone accidentally defining a kvm_mmu_memory_cache struct directly (without the extra storage), this commit includes a WARN_ON() in kvm_mmu_topup_memory_cache(). This change, unfortunately, adds some grottiness to kvm_phys_addr_ioremap() in arm64, which uses a function-local (i.e. stack-allocated) kvm_mmu_memory_cache struct. Since C does not allow anonymous structs in functions, the new wrapper struct that contains kvm_mmu_memory_cache and the objects pointer array, must be named, which means dealing with an outer and inner struct. The outer struct can't be dropped since then there would be no guarantee the kvm_mmu_memory_cache struct and objects array would be laid out consecutively on the stack. No functional change intended. Signed-off-by: David Matlack Acked-by: Anup Patel --- arch/arm64/include/asm/kvm_host.h | 2 +- arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/mmu.c | 13 +++++++++---- arch/mips/include/asm/kvm_host.h | 2 +- arch/mips/kvm/mips.c | 2 ++ arch/riscv/include/asm/kvm_host.h | 2 +- arch/riscv/kvm/mmu.c | 17 ++++++++++------- arch/riscv/kvm/vcpu.c | 1 + arch/x86/include/asm/kvm_host.h | 8 ++++---- arch/x86/kvm/mmu/mmu.c | 9 +++++++++ include/linux/kvm_types.h | 19 +++++++++++++++++-- virt/kvm/kvm_main.c | 10 +++++++++- 12 files changed, 65 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 0e96087885fe..4670491899de 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -362,7 +362,7 @@ struct kvm_vcpu_arch { bool pause; /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache); /* Target CPU and feature flags */ int target; diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index ba9165e84396..af4d8a490af5 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -320,6 +320,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.target = -1; bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); + vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; /* Set up the timer */ diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 0d19259454d8..01e15bcb7be2 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -764,7 +764,12 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, { phys_addr_t addr; int ret = 0; - struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, }; + DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = { + .cache = { + .gfp_zero = __GFP_ZERO, + .capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE, + }, + }; struct kvm_pgtable *pgt = kvm->arch.mmu.pgt; enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE | KVM_PGTABLE_PROT_R | @@ -777,14 +782,14 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, 
phys_addr_t guest_ipa, guest_ipa &= PAGE_MASK; for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) { - ret = kvm_mmu_topup_memory_cache(&cache, + ret = kvm_mmu_topup_memory_cache(&page_cache.cache, kvm_mmu_cache_min_pages(kvm)); if (ret) break; write_lock(&kvm->mmu_lock); ret = kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot, - &cache); + &page_cache.cache); write_unlock(&kvm->mmu_lock); if (ret) break; @@ -792,7 +797,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, pa += PAGE_SIZE; } - kvm_mmu_free_memory_cache(&cache); + kvm_mmu_free_memory_cache(&page_cache.cache); return ret; } diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h index 717716cc51c5..935511d7fc3a 100644 --- a/arch/mips/include/asm/kvm_host.h +++ b/arch/mips/include/asm/kvm_host.h @@ -347,7 +347,7 @@ struct kvm_vcpu_arch { unsigned long pending_exceptions_clr; /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache); /* vcpu's vzguestid is different on each host cpu in an smp system */ u32 vzguestid[NR_CPUS]; diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index a25e0b73ee70..45c7179144dc 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -387,6 +387,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) if (err) goto out_free_gebase; + vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; + return 0; out_free_gebase: diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index 78da839657e5..4ec0b7a3d515 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -186,7 +186,7 @@ struct kvm_vcpu_arch { struct kvm_sbi_context sbi_context; /* Cache pages needed to program page tables with spinlock held */ - struct kvm_mmu_memory_cache mmu_page_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache); /* VCPU power-off state */ bool power_off; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index f80a34fbf102..5ffd164a5aeb 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -347,10 +347,12 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, int ret = 0; unsigned long pfn; phys_addr_t addr, end; - struct kvm_mmu_memory_cache pcache; - - memset(&pcache, 0, sizeof(pcache)); - pcache.gfp_zero = __GFP_ZERO; + DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = { + .cache = { + .gfp_zero = __GFP_ZERO, + .capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE, + }, + }; end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; pfn = __phys_to_pfn(hpa); @@ -361,12 +363,13 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, if (!writable) pte = pte_wrprotect(pte); - ret = kvm_mmu_topup_memory_cache(&pcache, stage2_pgd_levels); + ret = kvm_mmu_topup_memory_cache(&page_cache.cache, + stage2_pgd_levels); if (ret) goto out; spin_lock(&kvm->mmu_lock); - ret = stage2_set_pte(kvm, 0, &pcache, addr, &pte); + ret = stage2_set_pte(kvm, 0, &page_cache.cache, addr, &pte); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -375,7 +378,7 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, } out: - kvm_mmu_free_memory_cache(&pcache); + kvm_mmu_free_memory_cache(&page_cache.cache); return ret; } diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 624166004e36..6a5f5aa45bac 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -94,6 +94,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Mark this VCPU never 
ran */ vcpu->arch.ran_atleast_once = false; + vcpu->arch.mmu_page_cache.capacity = KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; /* Setup ISA features available to VCPU */ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index be4349c9ffea..ffb2b99f3a60 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -694,10 +694,10 @@ struct kvm_vcpu_arch { */ struct kvm_mmu *walk_mmu; - struct kvm_mmu_memory_cache mmu_pte_list_desc_cache; - struct kvm_mmu_memory_cache mmu_shadow_page_cache; - struct kvm_mmu_memory_cache mmu_shadowed_info_cache; - struct kvm_mmu_memory_cache mmu_page_header_cache; + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_pte_list_desc_cache); + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_shadow_page_cache); + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_shadowed_info_cache); + DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_header_cache); /* * QEMU userspace and the guest each have their own FPU state. diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f058f28909ea..a8200b3f8782 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5800,12 +5800,21 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) { int ret; + vcpu->arch.mmu_pte_list_desc_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache; vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO; + vcpu->arch.mmu_page_header_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache; vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO; + vcpu->arch.mmu_shadowed_info_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; + + vcpu->arch.mmu_shadow_page_cache.capacity = + KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE; vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO; vcpu->arch.mmu = &vcpu->arch.root_mmu; diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index ac1ebb37a0ff..579cf39986ec 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -83,14 +83,29 @@ struct gfn_to_pfn_cache { * MMU flows is problematic, as is triggering reclaim, I/O, etc... while * holding MMU locks. Note, these caches act more like prefetch buffers than * classical caches, i.e. objects are not returned to the cache on being freed. + * + * The storage for the cache object pointers is laid out after the struct, to + * allow different declarations to choose different capacities. The capacity + * field defines the number of object pointers available after the struct. 
*/ struct kvm_mmu_memory_cache { int nobjs; + int capacity; gfp_t gfp_zero; struct kmem_cache *kmem_cache; - void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE]; + void *objects[]; }; -#endif + +#define __DEFINE_KVM_MMU_MEMORY_CACHE(_name, _capacity) \ + struct { \ + struct kvm_mmu_memory_cache _name; \ + void *_name##_objects[_capacity]; \ + } + +#define DEFINE_KVM_MMU_MEMORY_CACHE(_name) \ + __DEFINE_KVM_MMU_MEMORY_CACHE(_name, KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE) + +#endif /* KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE */ #define HALT_POLL_HIST_COUNT 32 diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 70e05af5ebea..c4cac4195f4a 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -373,9 +373,17 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) { void *obj; + /* + * The capacity fieldmust be initialized since the storage for the + * objects pointer array is laid out after the kvm_mmu_memory_cache + * struct and not known at compile time. + */ + if (WARN_ON(mc->capacity == 0)) + return -EINVAL; + if (mc->nobjs >= min) return 0; - while (mc->nobjs < ARRAY_SIZE(mc->objects)) { + while (mc->nobjs < mc->capacity) { obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT); if (!obj) return mc->nobjs >= min ? 0 : -ENOMEM; From patchwork Fri Apr 1 17:55:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798572 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8929C433EF for ; Fri, 1 Apr 2022 17:56:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350783AbiDAR6d (ORCPT ); Fri, 1 Apr 2022 13:58:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36710 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350814AbiDAR6Z (ORCPT ); Fri, 1 Apr 2022 13:58:25 -0400 Received: from mail-pj1-x1049.google.com (mail-pj1-x1049.google.com [IPv6:2607:f8b0:4864:20::1049]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7B47D20D82A for ; Fri, 1 Apr 2022 10:56:35 -0700 (PDT) Received: by mail-pj1-x1049.google.com with SMTP id nr19-20020a17090b241300b001c6f8baf45eso2444189pjb.0 for ; Fri, 01 Apr 2022 10:56:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=rk47OwLDTUZoPp3sSFPqcpIfMojh6XvvX1htIIdRD8M=; b=s72V80y7tDOh1F0V57dD8TPMVDGZRFJLVeRypa2j6NOIeyK6OdhN/vMWnCocaWrr7k CTCwsrxCAl4hj3azp9KbFVRgymRC6cGX41YRf3bKbcz4hip3E1y1CUleOkW/mRUjOv72 4q0juwIAEhigFEFYWcTUd2u5Yv+XnqXVD6a3pn/Ui+K8aArSkJYYBVjZFCZK1SOZsjC5 7Yesg542/Fq0ocOM9rn7tyOyRxQKZ3AxCieAlZZBYn/IE9/hjt2QS80Tz+PLFcorlflQ sdxYZkSU8ub5I6xHkzMHujo97FILxUXUQ+MHykloWi+evJGsQ9HO/j+9n0zxGqN3fxiy QTYQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=rk47OwLDTUZoPp3sSFPqcpIfMojh6XvvX1htIIdRD8M=; b=D45tW/gzCzWAHkFE3M9xgJKmySouCaH9HClEPTjyMe8uhQZKPmRg4EfDu1yRPj8cRP jN9UyJukNRhTj9TLobd6lWKg9eo1+LqNxNbdw50DjNniSz05kh8kMTOWj5bqV3nTDyHx 5MC/r9HUwGuPkZ2aQE4fBP4SYJO3rHCic3TYt9FON9LFh9HFLvXwEAzaSf3ZXaLqk3Pj gpBMDHo9ayDGLp3OBLXUv1c69Dd/W8KmTALjH6hYrFY+Kum0PRaEjMcrIF3kbC3BBnKI 
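The DEFINE_KVM_MMU_MEMORY_CACHE() layout trick above can be reproduced in a few lines of standalone C. The sketch below is illustrative only (simplified struct, hypothetical names, no GFP or kmem_cache handling); it relies on the same GCC/Clang extension the kernel uses, namely embedding a struct that ends in a flexible array member inside a wrapper struct that supplies the storage, and it keeps the capacity check that replaces ARRAY_SIZE(mc->objects).

#include <stdio.h>
#include <stdlib.h>

struct mmu_memory_cache {
        int nobjs;
        int capacity;           /* must be set before topping up */
        void *objects[];        /* storage laid out right after the struct */
};

/* Wrapper provides the backing storage for objects[]. */
#define DEFINE_MMU_MEMORY_CACHE(_name, _capacity)       \
        struct {                                        \
                struct mmu_memory_cache _name;          \
                void *_name##_objects[_capacity];       \
        }

static int topup(struct mmu_memory_cache *mc, int min)
{
        if (mc->capacity == 0)  /* would be a WARN_ON() in the kernel */
                return -1;

        while (mc->nobjs < mc->capacity) {
                void *obj = malloc(64);         /* stand-in allocation */

                if (!obj)
                        return mc->nobjs >= min ? 0 : -1;
                mc->objects[mc->nobjs++] = obj;
        }
        return 0;
}

int main(void)
{
        DEFINE_MMU_MEMORY_CACHE(cache, 40) page_cache = {
                .cache = { .capacity = 40 },
        };

        if (topup(&page_cache.cache, 5))
                return 1;

        printf("filled %d/%d objects\n", page_cache.cache.nobjs,
               page_cache.cache.capacity);

        while (page_cache.cache.nobjs)
                free(page_cache.cache.objects[--page_cache.cache.nobjs]);
        return 0;
}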
c22GvAI+5jgJ8iDbO5q0CiRowxYOYjGRFEa8UblxW2KVuLs1LvPPM2zabI3lfo527dsQ g0/w== X-Gm-Message-State: AOAM531pT2Z3zrsf94OvdUo9KNeculAvLemUgDVwzCy1x1GKaLdnwT1g sbRJE4IUGtHf3NfhE780zvFA/sEyzHYDrw== X-Google-Smtp-Source: ABdhPJybrRr8dD4aYv/cbz/erk9I24s3Xk9K694qQkW/0pJU9JMwG7vQGFtAvLaRQheDfdF0TPnn3iF2T50Fnw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:a17:902:ce0f:b0:156:5a4:926c with SMTP id k15-20020a170902ce0f00b0015605a4926cmr11415497plg.3.1648835794980; Fri, 01 Apr 2022 10:56:34 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:52 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-22-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 21/23] KVM: Allow GFP flags to be passed when topping up MMU caches From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org This will be used in a subsequent commit to top-up MMU caches under the MMU lock with GFP_NOWAIT as part of eager page splitting. No functional change intended. Reviewed-by: Ben Gardon Signed-off-by: David Matlack --- include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 9 +++++++-- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 252ee4a61b58..7d3a1f28beb2 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1335,6 +1335,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm); #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min); +int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp); int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc); void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc); void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index c4cac4195f4a..554148ea0c30 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -369,7 +369,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc, return (void *)__get_free_page(gfp_flags); } -int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) +int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min, gfp_t gfp) { void *obj; @@ -384,7 +384,7 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) if (mc->nobjs >= min) return 0; while (mc->nobjs < mc->capacity) { - obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT); + obj = mmu_memory_cache_alloc_obj(mc, gfp); if (!obj) return mc->nobjs >= min ? 
0 : -ENOMEM; mc->objects[mc->nobjs++] = obj; @@ -392,6 +392,11 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) return 0; } +int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min) +{ + return __kvm_mmu_topup_memory_cache(mc, min, GFP_KERNEL_ACCOUNT); +} + int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc) { return mc->nobjs; From patchwork Fri Apr 1 17:55:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798574 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A3D1C433F5 for ; Fri, 1 Apr 2022 17:57:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350828AbiDAR6x (ORCPT ); Fri, 1 Apr 2022 13:58:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36708 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350784AbiDAR63 (ORCPT ); Fri, 1 Apr 2022 13:58:29 -0400 Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com [IPv6:2607:f8b0:4864:20::104a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 45E7C28F81A for ; Fri, 1 Apr 2022 10:56:37 -0700 (PDT) Received: by mail-pj1-x104a.google.com with SMTP id rm11-20020a17090b3ecb00b001c713925e58so4349171pjb.6 for ; Fri, 01 Apr 2022 10:56:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=5xLJkysC3ZlCsyjnevnbbczfF7DAWIHXhWTtUpk/DaM=; b=BGWmxg2u+a742ZhGq8e/gaCDz955+1ZlJovsdI6L0nUAdWVa6Qdtc23Nr2Pjp8RQ3N ACHnaXMUEZZJtcUTF+9+oQ7mPkwkih4Kyme64gsN+BLDFtcBI2OQ/7C0QjVXQ4T/159e lLX9UO9w/OYj40ZxkmxV0yV/PEMV9uDKZXfTwSZfO+Vc4Ty6Q1q8YKMJMEFOWECBG+qC Esy1K3+NzO2OoFIKu40KLMPncdvpA75at3FLolRVU1+pXbEb1fIbJA2nh41yucAQM+1F fH1akGlZxRH6ir3R6Niw4Z7lGVNrxjVDaz6IRs2J8IbjR18pMPuYiYP1zkb/NFf1KEOc 1CWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=5xLJkysC3ZlCsyjnevnbbczfF7DAWIHXhWTtUpk/DaM=; b=KP2le9kt4MB8kddTsqT1p4PKJjvrKPUgcH1BMT8hhkbLrVr9wyfw2oG0/VuBNdP+3A 0mFudWEQQs2pzUBv61DzANcc6eQlP+S9Ycre+nRSwdgBJ5P80nNtCWzDEwe3+qIxyeRI o+fpaLojd+duFJEbxCgnULZV54MnkAr/ct4XJnjzBigwHs8VAb1JRbaD7Hqmk2l3yuRx NfY6p0tHRN1zRKSxHKNNQn/2lQVG1W8gRkd85C3vVM+euUKF2b992XHPMJni+UzqM/Uo shHVICpoSUv+oKqwhRBs5nSqddj7DSIuo836Ta8Ce1XYJAscny0HXz4B7JPt/GWmKtQb eZMg== X-Gm-Message-State: AOAM531GChtURu4655d21aS2EKTuQFETLpTJvb+0p6GLFiyeYUGdbUqR odncHBa6xVbOtSsDGNcVsrdRBlxGbTHZKQ== X-Google-Smtp-Source: ABdhPJyRqF97zxTLz0gW9feu0V4xTh0qrgEHDZnAhXWlpLSbGZiPVvAurNY7AvSLmzoxLwnBmcQvKj8DX1SHqw== X-Received: from dmatlack-heavy.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:19cd]) (user=dmatlack job=sendgmr) by 2002:aa7:81c1:0:b0:4f7:6ba1:553b with SMTP id c1-20020aa781c1000000b004f76ba1553bmr12211127pfn.45.1648835796738; Fri, 01 Apr 2022 10:56:36 -0700 (PDT) Date: Fri, 1 Apr 2022 17:55:53 +0000 In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com> Message-Id: <20220401175554.1931568-23-dmatlack@google.com> Mime-Version: 1.0 References: <20220401175554.1931568-1-dmatlack@google.com> X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog Subject: [PATCH v3 22/23] KVM: x86/mmu: Support Eager Page 
Splitting in the shadow MMU From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)" , "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)" , Peter Feiner , David Matlack Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add support for Eager Page Splitting of pages that are mapped by the shadow MMU. Walk through the rmap first splitting all 1GiB pages to 2MiB pages, and then splitting all 2MiB pages to 4KiB pages. Splitting huge pages mapped by the shadow MMU requires dealing with some extra complexity beyond that of the TDP MMU: (1) The shadow MMU has a limit on the number of shadow pages that are allowed to be allocated. So, as a policy, Eager Page Splitting refuses to split if there are KVM_MIN_FREE_MMU_PAGES or fewer pages available. (2) Huge pages may be mapped by indirect shadow pages which have the possibility of being unsync. As a policy we opt not to split such pages as their translation may no longer be valid. (3) Splitting a huge page may end up re-using an existing lower level shadow page table. This is unlike the TDP MMU which always allocates new shadow page tables when splitting. (4) When installing the lower level SPTEs, they must be added to the rmap which may require allocating additional pte_list_desc structs. Note, for case (3) we have to be careful about dealing with what's already in the lower level page table. Specifically, the lower level page table may only be partially filled in and may point to even lower level page tables that are partially filled in. We can fill in non-present entries, but recursing into the lower level page tables would be too complex. This means that Eager Page Splitting may partially unmap a huge page. To handle this we flush TLBs after dropping the huge SPTE whenever we are about to install a lower level page table that was partially filled in (*). We can skip the TLB flush if the lower level page table was empty (no aliasing) or identical to what we were already going to populate it with (aliased huge page that was just eagerly split). (*) This TLB flush could probably be delayed until we're about to drop the MMU lock, which would also let us batch flushes for multiple splits. However, such scenarios should be rare in practice (a huge page must be aliased in multiple SPTEs and have been split for NX Huge Pages in only some of them). Flushing immediately is simpler to plumb and also reduces the chances of tripping over a CPU bug (e.g. see iTLB multi-hit). Suggested-by: Peter Feiner [ This commit is based off of the original implementation of Eager Page Splitting from Peter in Google's kernel from 2016 that handles cases (1) and (2) above.
] Signed-off-by: David Matlack --- .../admin-guide/kernel-parameters.txt | 3 - arch/x86/include/asm/kvm_host.h | 12 + arch/x86/kvm/mmu/mmu.c | 268 ++++++++++++++++++ arch/x86/kvm/x86.c | 6 + 4 files changed, 286 insertions(+), 3 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 05161afd7642..495f6ac53801 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2360,9 +2360,6 @@ the KVM_CLEAR_DIRTY ioctl, and only for the pages being cleared. - Eager page splitting currently only supports splitting - huge pages mapped by the TDP MMU. - Default is Y (on). kvm.enable_vmware_backdoor=[KVM] Support VMware backdoor PV interface. diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index ffb2b99f3a60..053a32afd18b 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1246,6 +1246,16 @@ struct kvm_arch { hpa_t hv_root_tdp; spinlock_t hv_root_tdp_lock; #endif + + /* + * Memory cache used to allocate pte_list_desc structs while splitting + * huge pages. In the worst case, to split one huge page we need 512 + * pte_list_desc structs to add each lower level leaf sptep to the rmap + * plus 1 to extend the parent_ptes rmap of the lower level page table. + */ +#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 513 + __DEFINE_KVM_MMU_MEMORY_CACHE(huge_page_split_desc_cache, + HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY); }; struct kvm_vm_stat { @@ -1621,6 +1631,8 @@ void kvm_mmu_zap_all(struct kvm *kvm); void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen); void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages); +void free_huge_page_split_desc_cache(struct kvm *kvm); + int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3); int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a8200b3f8782..9adafed43048 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5972,6 +5972,11 @@ void kvm_mmu_init_vm(struct kvm *kvm) node->track_write = kvm_mmu_pte_write; node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot; kvm_page_track_register_notifier(kvm, node); + + kvm->arch.huge_page_split_desc_cache.capacity = + HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY; + kvm->arch.huge_page_split_desc_cache.kmem_cache = pte_list_desc_cache; + kvm->arch.huge_page_split_desc_cache.gfp_zero = __GFP_ZERO; } void kvm_mmu_uninit_vm(struct kvm *kvm) @@ -6102,12 +6107,267 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, kvm_arch_flush_remote_tlbs_memslot(kvm, memslot); } +static int topup_huge_page_split_desc_cache(struct kvm *kvm, bool locked) +{ + gfp_t gfp = gfp_flags_for_split(locked); + + /* + * We may need up to HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY descriptors + * to split any given huge page. We could more accurately calculate how + * many we actually need by inspecting all the rmaps and check which + * will need new descriptors, but that's not worth the extra cost or + * code complexity. 
+ */ + return __kvm_mmu_topup_memory_cache( + &kvm->arch.huge_page_split_desc_cache, + HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY, + gfp); +} + +void free_huge_page_split_desc_cache(struct kvm *kvm) +{ + kvm_mmu_free_memory_cache(&kvm->arch.huge_page_split_desc_cache); +} + +static int alloc_memory_for_split(struct kvm *kvm, struct kvm_mmu_page **spp, + bool locked) +{ + int r; + + r = topup_huge_page_split_desc_cache(kvm, locked); + if (r) + return r; + + if (!*spp) { + *spp = kvm_mmu_alloc_direct_sp_for_split(locked); + r = *spp ? 0 : -ENOMEM; + } + + return r; +} + +static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm, + const struct kvm_memory_slot *slot, + u64 *huge_sptep, + struct kvm_mmu_page **spp) +{ + struct kvm_mmu_page *sp, *huge_sp = sptep_to_sp(huge_sptep); + union kvm_mmu_page_role role; + LIST_HEAD(invalid_list); + unsigned int access; + gfn_t gfn; + + gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt); + access = kvm_mmu_page_get_access(huge_sp, huge_sptep - huge_sp->spt); + + /* + * Huge page splitting always uses direct shadow pages since we are + * directly mapping the huge page GFN region with smaller pages. + */ + role = kvm_mmu_child_role(huge_sptep, true, access); + + sp = __kvm_mmu_find_shadow_page(kvm, gfn, role, &invalid_list); + if (sp) { + /* Direct SPs should never be unsync. */ + WARN_ON_ONCE(sp->unsync); + trace_kvm_mmu_get_page(sp, false); + } else { + swap(sp, *spp); + init_shadow_page(kvm, sp, slot, gfn, role); + trace_kvm_mmu_get_page(sp, true); + } + + kvm_mmu_commit_zap_page(kvm, &invalid_list); + + return sp; +} + +static void kvm_mmu_split_huge_page(struct kvm *kvm, + const struct kvm_memory_slot *slot, + u64 *huge_sptep, struct kvm_mmu_page **spp) + +{ + struct kvm_mmu_memory_cache *cache = &kvm->arch.huge_page_split_desc_cache; + u64 huge_spte = READ_ONCE(*huge_sptep); + struct kvm_mmu_page *sp; + bool flush = false; + u64 *sptep, spte; + gfn_t gfn; + int index; + + sp = kvm_mmu_get_sp_for_split(kvm, slot, huge_sptep, spp); + + for (index = 0; index < PT64_ENT_PER_PAGE; index++) { + sptep = &sp->spt[index]; + gfn = kvm_mmu_page_get_gfn(sp, index); + + /* + * sp may have populated page table entries, e.g. if this huge + * page is aliased by multiple sptes with the same access + * permissions. We know the sptes will be mapping the same + * gfn-to-pfn translation since sp is direct. However, a given + * spte may point to an even lower level page table. We don't + * know if that lower level page table is completely filled in, + * i.e. we may be effectively unmapping a region of memory, so + * we must flush the TLB. + */ + if (is_shadow_present_pte(*sptep)) { + flush |= !is_last_spte(*sptep, sp->role.level); + continue; + } + + spte = make_huge_page_split_spte(huge_spte, sp, index); + mmu_spte_set(sptep, spte); + __rmap_add(kvm, cache, slot, sptep, gfn, sp->role.access); + } + + /* + * Replace the huge spte with a pointer to the populated lower level + * page table. If the lower-level page table indentically maps the huge + * page, there's no need for a TLB flush. Otherwise, flush TLBs after + * dropping the huge page and before installing the shadow page table. 
+ */ + __drop_large_spte(kvm, huge_sptep, flush); + __link_shadow_page(cache, huge_sptep, sp); +} + +static int __try_split_huge_page(struct kvm *kvm, + const struct kvm_memory_slot *slot, + u64 *huge_sptep, struct kvm_mmu_page **spp) +{ + int r = 0; + + if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES) + return -ENOSPC; + + if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) + goto drop_lock; + + r = alloc_memory_for_split(kvm, spp, true); + if (r) + goto drop_lock; + + kvm_mmu_split_huge_page(kvm, slot, huge_sptep, spp); + + return 0; + +drop_lock: + write_unlock(&kvm->mmu_lock); + cond_resched(); + r = alloc_memory_for_split(kvm, spp, false); + write_lock(&kvm->mmu_lock); + + /* + * Ask the caller to try again if the allocation succeeded. We dropped + * the MMU lock so huge_sptep may no longer be valid. + */ + return r ?: -EAGAIN; +} + +static int try_split_huge_page(struct kvm *kvm, + const struct kvm_memory_slot *slot, + u64 *huge_sptep, struct kvm_mmu_page **spp) +{ + struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep); + int level, r; + gfn_t gfn; + u64 spte; + + /* + * Record information about the huge page being split to use in the + * tracepoint below. Do this now because __try_split_huge_page() may + * drop the MMU lock, after which huge_sptep may no longer be a valid + * pointer. + */ + gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt); + level = huge_sp->role.level; + spte = *huge_sptep; + + r = __try_split_huge_page(kvm, slot, huge_sptep, spp); + + trace_kvm_mmu_split_huge_page(gfn, spte, level, r); + + return r; +} + + +static bool skip_split_huge_page(u64 *huge_sptep) +{ + struct kvm_mmu_page *sp = sptep_to_sp(huge_sptep); + + if (WARN_ON_ONCE(!is_large_pte(*huge_sptep))) + return true; + + /* + * As a policy, do not split huge pages if the sp on which they reside + * is unsync. Unsync means the guest is modifying the page table being + * shadowed, so splitting may be a waste of cycles and memory. + */ + return sp->role.invalid || sp->unsync; +} + +static bool rmap_try_split_huge_pages(struct kvm *kvm, + struct kvm_rmap_head *rmap_head, + const struct kvm_memory_slot *slot) +{ + struct kvm_mmu_page *sp = NULL; + struct rmap_iterator iter; + u64 *huge_sptep; + int r; + +restart: + for_each_rmap_spte(rmap_head, &iter, huge_sptep) { + if (skip_split_huge_page(huge_sptep)) + continue; + + r = try_split_huge_page(kvm, slot, huge_sptep, &sp); + if (r < 0 && r != -EAGAIN) + break; + + /* + * Splitting succeeded (and removed huge_sptep from the + * iterator) or we had to drop the MMU lock. Either way, restart + * the iterator to get it back into a consistent state. + */ + goto restart; + } + + if (sp) + kvm_mmu_free_shadow_page(sp); + + return false; +} + +static void kvm_rmap_try_split_huge_pages(struct kvm *kvm, + const struct kvm_memory_slot *slot, + gfn_t start, gfn_t end, + int target_level) +{ + int level; + + /* + * Split huge pages starting with KVM_MAX_HUGEPAGE_LEVEL and working + * down to the target level. This ensures pages are recursively split + * all the way to the target level. There's no need to split pages + * already at the target level. + */ + for (level = KVM_MAX_HUGEPAGE_LEVEL; level > target_level; level--) { + slot_handle_level_range(kvm, slot, + rmap_try_split_huge_pages, + level, level, start, end - 1, + true, false); + } +} + /* Must be called with the mmu_lock held in write-mode. 
*/ void kvm_mmu_try_split_huge_pages(struct kvm *kvm, const struct kvm_memory_slot *memslot, u64 start, u64 end, int target_level) { + if (kvm_memslots_have_rmaps(kvm)) + kvm_rmap_try_split_huge_pages(kvm, memslot, start, end, target_level); + if (is_tdp_mmu_enabled(kvm)) kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, false); @@ -6125,6 +6385,14 @@ void kvm_mmu_slot_try_split_huge_pages(struct kvm *kvm, u64 start = memslot->base_gfn; u64 end = start + memslot->npages; + if (kvm_memslots_have_rmaps(kvm)) { + topup_huge_page_split_desc_cache(kvm, false); + write_lock(&kvm->mmu_lock); + kvm_rmap_try_split_huge_pages(kvm, memslot, start, end, target_level); + write_unlock(&kvm->mmu_lock); + free_huge_page_split_desc_cache(kvm); + } + if (is_tdp_mmu_enabled(kvm)) { read_lock(&kvm->mmu_lock); kvm_tdp_mmu_try_split_huge_pages(kvm, memslot, start, end, target_level, true); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d3a9ce07a565..02728c3f088e 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12106,6 +12106,12 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm, * page faults will create the large-page sptes. */ kvm_mmu_zap_collapsible_sptes(kvm, new); + + /* + * Free any memory left behind by eager page splitting. Ignore + * the module parameter since userspace might have changed it. + */ + free_huge_page_split_desc_cache(kvm); } else { /* * Initially-all-set does not require write protecting any page, From patchwork Fri Apr 1 17:55:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 12798573 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82228C433EF for ; Fri, 1 Apr 2022 17:56:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1350801AbiDAR6g (ORCPT ); Fri, 1 Apr 2022 13:58:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37172 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1350837AbiDAR63 (ORCPT ); Fri, 1 Apr 2022 13:58:29 -0400 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BAB5E1834D4 for ; Fri, 1 Apr 2022 10:56:38 -0700 (PDT) Received: by mail-pf1-x449.google.com with SMTP id t66-20020a625f45000000b004fabd8f5cc1so1999906pfb.11 for ; Fri, 01 Apr 2022 10:56:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=whi7VXODCYStZwZ0yhJ9hj7iu8dCf643usT2exONxAs=; b=fuN0d8N9vFpmGZgcbUnIPl1/u50XpVyZVJS25Zl8sNb/QRw8cVTyJ5dMdqA5+b4+ZF uSZa76VjUdHU0qH1Va5a1zhK0stt4d5uFxMsf6ivJGXpSjoisa6ZlsqPiy1q//ZG6dgK E70U/wr8WlB5MVNjCZS6zluBKURb/ry6bmDWBkudFenFfsRDpzcJwK8g9cWgZalXPg39 dhNKu56TjS2VbFJgtFxwT7wFpAedLoTQxO4/+9hOW6Wb4II8CrKPKuliva4BSCXXUE2Q 8/Bl+aR/+ltV30PKrHQhHQgNsOJUQvLBay/8SjLanwL99lfTbrvVCd2W7FugWSH6scXf 43Pw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=whi7VXODCYStZwZ0yhJ9hj7iu8dCf643usT2exONxAs=; b=ksBn5ggxmpvq/PZoATpPYsSWvJhHQ+pVTLQu9wE5a8EdI0GlIbG7SI5DeIqVRzu9P5 
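Two details of the shadow-MMU eager splitting above are easy to model in isolation. The descriptor cache capacity of 513 comes from 512 pte_list_desc structs (one rmap entry per new leaf SPTE) plus one more to extend the parent_ptes rmap of the new page table. And kvm_rmap_try_split_huge_pages() walks levels from the highest huge-page level down to one above the target, so the mappings produced by splitting, say, a 1GiB page are revisited and split further on the following 2MiB pass. The toy below (standalone C, illustrative names and level numbering, not the kernel code) sketches that loop.

#include <stdio.h>

#define PG_LEVEL_4K             1
#define PG_LEVEL_2M             2
#define PG_LEVEL_1G             3
#define MAX_HUGEPAGE_LEVEL      PG_LEVEL_1G

/* Stand-in for slot_handle_level_range(..., level, level, ...). */
static void split_all_pages_at_level(int level)
{
        printf("splitting level-%d mappings into level-%d mappings\n",
               level, level - 1);
}

static void try_split_huge_pages(int target_level)
{
        int level;

        /*
         * Highest level first: pages created by an earlier pass are picked
         * up and split further by the later, lower-level passes. Pages
         * already at the target level are never visited.
         */
        for (level = MAX_HUGEPAGE_LEVEL; level > target_level; level--)
                split_all_pages_at_level(level);
}

int main(void)
{
        try_split_huge_pages(PG_LEVEL_4K);
        return 0;
}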
From patchwork Fri Apr 1 17:55:54 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 12798573
Date: Fri, 1 Apr 2022 17:55:54 +0000
In-Reply-To: <20220401175554.1931568-1-dmatlack@google.com>
Message-Id: <20220401175554.1931568-24-dmatlack@google.com>
Mime-Version: 1.0
References: <20220401175554.1931568-1-dmatlack@google.com>
X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog
Subject: [PATCH v3 23/23] KVM: selftests: Map x86_64 guest virtual memory with huge pages
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Anup Patel ,
 Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson ,
 Andrew Jones , Ben Gardon , Peter Xu , maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner , David Matlack
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Override virt_map() in x86_64 selftests to use the largest page size
possible when mapping guest virtual memory. This enables testing eager
page splitting with shadow paging (e.g. kvm_intel.ept=N), as it allows
KVM to shadow guest memory with huge pages.

Signed-off-by: David Matlack
---
 .../selftests/kvm/include/x86_64/processor.h  |  6 ++++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  4 +--
 .../selftests/kvm/lib/x86_64/processor.c      | 31 +++++++++++++++++++
 3 files changed, 39 insertions(+), 2 deletions(-)
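
Before the diff itself, a small standalone sketch (not part of the patch) of the
size-selection arithmetic used below: page_size_bytes() turns the enum into a byte
count (4 KiB, 2 MiB, 1 GiB; each level is 512 times larger, hence the multiply by 9
in the shift), and the mapping stride is the largest size that evenly divides vaddr,
paddr and the total size. pick_stride() and the values in main() are made up for
illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum x86_page_size {
	X86_PAGE_SIZE_4K = 0,
	X86_PAGE_SIZE_2M,
	X86_PAGE_SIZE_1G,
};

static size_t page_size_bytes(enum x86_page_size page_size)
{
	/* 0 -> 1 << 12 (4 KiB), 1 -> 1 << 21 (2 MiB), 2 -> 1 << 30 (1 GiB) */
	return 1UL << (page_size * 9 + 12);
}

/* Largest page size whose stride evenly divides vaddr, paddr and size. */
static size_t pick_stride(uint64_t vaddr, uint64_t paddr, size_t size)
{
	int page_size;

	for (page_size = X86_PAGE_SIZE_1G; page_size > X86_PAGE_SIZE_4K; page_size--) {
		size_t stride = page_size_bytes(page_size);

		if (!(vaddr % stride) && !(paddr % stride) && !(size % stride))
			return stride;
	}
	return page_size_bytes(X86_PAGE_SIZE_4K);
}

int main(void)
{
	/* 1 GiB-aligned, 2 GiB region: mapped with 1 GiB pages. */
	printf("%zu\n", pick_stride(0x40000000, 0x40000000, 2UL << 30));
	/* Only 2 MiB-aligned: falls back to 2 MiB pages. */
	printf("%zu\n", pick_stride(0x200000, 0x200000, 4UL << 20));
	return 0;
}

virt_map() in the hunk below performs the same walk over the enum, asserts that at
least 4 KiB alignment holds, and then maps the entire range with the chosen stride.
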
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 37db341d4cc5..efb228d2fbf7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -470,6 +470,12 @@ enum x86_page_size {
 	X86_PAGE_SIZE_2M,
 	X86_PAGE_SIZE_1G,
 };
+
+static inline size_t page_size_bytes(enum x86_page_size page_size)
+{
+	return 1UL << (page_size * 9 + 12);
+}
+
 void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
 		   enum x86_page_size page_size);
 
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..60198587236d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1432,8 +1432,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm)
  * Within the VM given by @vm, creates a virtual translation for
  * @npages starting at @vaddr to the page range starting at @paddr.
  */
-void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
-	      unsigned int npages)
+void __weak virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+		     unsigned int npages)
 {
 	size_t page_size = vm->page_size;
 	size_t size = npages * page_size;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..7df84292d5de 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -282,6 +282,37 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
 	__virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
 }
 
+void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, unsigned int npages)
+{
+	size_t size = (size_t) npages * vm->page_size;
+	size_t vend = vaddr + size;
+	enum x86_page_size page_size;
+	size_t stride;
+
+	TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
+	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+	/*
+	 * Map the region with all 1G pages if possible, falling back to all
+	 * 2M pages, and finally all 4K pages. This could be improved to use
+	 * a mix of page sizes so that more of the region is mapped with large
+	 * pages.
+	 */
+	for (page_size = X86_PAGE_SIZE_1G; page_size >= X86_PAGE_SIZE_4K; page_size--) {
+		stride = page_size_bytes(page_size);
+
+		if (!(vaddr % stride) && !(paddr % stride) && !(size % stride))
+			break;
+	}
+
+	TEST_ASSERT(page_size >= X86_PAGE_SIZE_4K,
+		    "Cannot map unaligned region: vaddr 0x%lx paddr 0x%lx npages 0x%x\n",
+		    vaddr, paddr, npages);
+
+	for (; vaddr < vend; vaddr += stride, paddr += stride)
+		__virt_pg_map(vm, vaddr, paddr, page_size);
+}
+
 static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
 						       uint64_t vaddr)
 {