From patchwork Thu Dec 8 19:38:21 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068883
Date: Thu, 8 Dec 2022 11:38:21 -0800
Message-ID: <20221208193857.4090582-2-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Subject: [RFC PATCH 01/37] KVM: x86/mmu: Store the address space ID directly in kvm_mmu_page_role
From: David Matlack
To: Paolo Bonzini

Rename kvm_mmu_page_role.smm to kvm_mmu_page_role.as_id and use it
directly as the address space ID throughout the KVM MMU code. This
eliminates a needless level of indirection, kvm_mmu_role_as_id(), and
prepares for making kvm_mmu_page_role architecture-neutral.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  4 ++--
 arch/x86/kvm/mmu/mmu.c          |  6 +++---
 arch/x86/kvm/mmu/mmu_internal.h | 10 ----------
 arch/x86/kvm/mmu/tdp_iter.c     |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      | 12 ++++++------
 5 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aa4eb8cfcd7e..0a819d40131a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -348,7 +348,7 @@ union kvm_mmu_page_role {
		 * simple shift. While there is room, give it a whole
		 * byte so it is also faster to load it from memory.
		 */
-		unsigned smm:8;
+		unsigned as_id:8;
	};
 };
 
@@ -2056,7 +2056,7 @@ enum {
 # define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
 # define KVM_ADDRESS_SPACE_NUM 2
 # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ?
1 : 0) -# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm) +# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).as_id) #else # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, 0) #endif diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4d188f056933..f375b719f565 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5056,7 +5056,7 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) union kvm_cpu_role role = {0}; role.base.access = ACC_ALL; - role.base.smm = is_smm(vcpu); + role.base.as_id = is_smm(vcpu); role.base.guest_mode = is_guest_mode(vcpu); role.ext.valid = 1; @@ -5112,7 +5112,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, role.access = ACC_ALL; role.cr0_wp = true; role.efer_nx = true; - role.smm = cpu_role.base.smm; + role.as_id = cpu_role.base.as_id; role.guest_mode = cpu_role.base.guest_mode; role.ad_disabled = !kvm_ad_enabled(); role.level = kvm_mmu_get_tdp_level(vcpu); @@ -5233,7 +5233,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty, /* * KVM does not support SMM transfer monitors, and consequently does not - * support the "entry to SMM" control either. role.base.smm is always 0. + * support the "entry to SMM" control either. role.base.as_id is always 0. */ WARN_ON_ONCE(is_smm(vcpu)); role.base.level = level; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index ac00bfbf32f6..5427f65117b4 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -133,16 +133,6 @@ struct kvm_mmu_page { extern struct kmem_cache *mmu_page_header_cache; -static inline int kvm_mmu_role_as_id(union kvm_mmu_page_role role) -{ - return role.smm ? 
1 : 0; -} - -static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp) -{ - return kvm_mmu_role_as_id(sp->role); -} - static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) { /* diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index 39b48e7d7d1a..4a7d58bf81c4 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -52,7 +52,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, iter->root_level = root_level; iter->min_level = min_level; iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root->spt; - iter->as_id = kvm_mmu_page_as_id(root); + iter->as_id = root->role.as_id; tdp_iter_restart(iter); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 764f7c87286f..7ccac1aa8df6 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -237,7 +237,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, _root; \ _root = tdp_mmu_next_root(_kvm, _root, _shared, _only_valid)) \ if (kvm_lockdep_assert_mmu_lock_held(_kvm, _shared) && \ - kvm_mmu_page_as_id(_root) != _as_id) { \ + _root->role.as_id != _as_id) { \ } else #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \ @@ -256,7 +256,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, #define for_each_tdp_mmu_root(_kvm, _root, _as_id) \ list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) \ if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) && \ - kvm_mmu_page_as_id(_root) != _as_id) { \ + _root->role.as_id != _as_id) { \ } else static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) @@ -310,7 +310,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) * Check for an existing root before allocating a new one. Note, the * role check prevents consuming an invalid root. 
 */
-	for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
+	for_each_tdp_mmu_root(kvm, root, role.as_id) {
		if (root->role.word == role.word &&
		    kvm_tdp_mmu_get_root(root))
			goto out;
@@ -496,8 +496,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
			old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
							  REMOVED_SPTE, level);
		}
-		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
-				    old_spte, REMOVED_SPTE, level, shared);
+		handle_changed_spte(kvm, sp->role.as_id, gfn, old_spte,
+				    REMOVED_SPTE, level, shared);
	}
 
	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
@@ -923,7 +923,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
		return false;
 
-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
+	__tdp_mmu_set_spte(kvm, sp->role.as_id, sp->ptep, old_spte, 0,
			   sp->gfn, sp->role.level + 1, true, true);
 
	return true;

From patchwork Thu Dec 8 19:38:22 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068886
Date: Thu, 8 Dec 2022 11:38:22 -0800
Message-ID: <20221208193857.4090582-3-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Subject: [RFC PATCH 02/37] KVM: MMU: Move struct kvm_mmu_page_role into common code
From: David Matlack
To: Paolo Bonzini

Move struct kvm_mmu_page_role into common code, and move all
x86-specific fields into a separate sub-struct within the role,
kvm_mmu_page_role_arch.
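For concreteness, a minimal standalone C sketch of the split described above: the common fields (as_id, level, invalid) sit next to an arch-specific kvm_mmu_page_role_arch sub-struct, and the whole role still aliases a single 32-bit word. This is not the kernel's definition; only a subset of the x86 bits is modeled, and the main() usage is invented for the example.

/*
 * Standalone sketch, NOT the kernel's definitions: a trimmed-down model of
 * the layout this patch introduces.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Architecture-specific role bits (x86 subset only). */
struct kvm_mmu_page_role_arch {
	uint16_t has_4_byte_gpte:1;
	uint16_t quadrant:2;
	uint16_t direct:1;
	uint16_t access:3;
	uint16_t efer_nx:1;
	uint16_t cr0_wp:1;
	uint16_t guest_mode:1;
};

/* Common role bits shared by all architectures. */
union kvm_mmu_page_role {
	uint32_t word;
	struct {
		struct {
			uint16_t as_id:8;	/* address space ID */
			uint16_t level:4;	/* page-table level */
			uint16_t invalid:1;	/* pending destruction */
		};
		struct kvm_mmu_page_role_arch arch;
	};
};

int main(void)
{
	union kvm_mmu_page_role role = { .word = 0 };

	/* The whole role still aliases a single 32-bit word. */
	assert(sizeof(role) == sizeof(role.word));

	role.as_id = 1;		/* common field, e.g. SMM vs. non-SMM on x86 */
	role.level = 4;
	role.arch.direct = 1;	/* x86-only field now lives under .arch */

	printf("word=%#x as_id=%u level=%u direct=%u\n",
	       (unsigned)role.word, (unsigned)role.as_id,
	       (unsigned)role.level, (unsigned)role.arch.direct);
	return 0;
}

Because the arch sub-struct keeps its own 16-bit budget, common code can compare and hash roles through .word without knowing which architecture filled in .arch.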
Signed-off-by: David Matlack --- MAINTAINERS | 4 +- arch/x86/include/asm/kvm/mmu_types.h | 56 ++++++++++ arch/x86/include/asm/kvm_host.h | 68 +----------- arch/x86/kvm/mmu/mmu.c | 156 +++++++++++++-------------- arch/x86/kvm/mmu/mmu_internal.h | 4 +- arch/x86/kvm/mmu/mmutrace.h | 12 +-- arch/x86/kvm/mmu/paging_tmpl.h | 20 ++-- arch/x86/kvm/mmu/spte.c | 4 +- arch/x86/kvm/mmu/spte.h | 2 +- arch/x86/kvm/x86.c | 8 +- include/kvm/mmu_types.h | 37 +++++++ 11 files changed, 202 insertions(+), 169 deletions(-) create mode 100644 arch/x86/include/asm/kvm/mmu_types.h create mode 100644 include/kvm/mmu_types.h diff --git a/MAINTAINERS b/MAINTAINERS index 89672a59c0c3..7e586d7ba78c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11198,7 +11198,8 @@ W: http://www.linux-kvm.org T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git F: Documentation/virt/kvm/ F: include/asm-generic/kvm* -F: include/kvm/iodev.h +F: include/kvm/ +X: include/kvm/arm_* F: include/linux/kvm* F: include/trace/events/kvm.h F: include/uapi/asm-generic/kvm* @@ -11285,6 +11286,7 @@ L: kvm@vger.kernel.org S: Supported T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git F: arch/x86/include/asm/kvm* +F: arch/x86/include/asm/kvm/ F: arch/x86/include/asm/svm.h F: arch/x86/include/asm/vmx*.h F: arch/x86/include/uapi/asm/kvm* diff --git a/arch/x86/include/asm/kvm/mmu_types.h b/arch/x86/include/asm/kvm/mmu_types.h new file mode 100644 index 000000000000..35f893ebab5a --- /dev/null +++ b/arch/x86/include/asm/kvm/mmu_types.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_KVM_MMU_TYPES_H +#define __ASM_KVM_MMU_TYPES_H + +#include + +/* + * This is a subset of the overall kvm_cpu_role to minimize the size of + * kvm_memory_slot.arch.gfn_track, i.e. allows allocating 2 bytes per gfn + * instead of 4 bytes per gfn. + * + * Upper-level shadow pages having gptes are tracked for write-protection via + * gfn_track. As above, gfn_track is a 16 bit counter, so KVM must not create + * more than 2^16-1 upper-level shadow pages at a single gfn, otherwise + * gfn_track will overflow and explosions will ensure. + * + * A unique shadow page (SP) for a gfn is created if and only if an existing SP + * cannot be reused. The ability to reuse a SP is tracked by its role, which + * incorporates various mode bits and properties of the SP. Roughly speaking, + * the number of unique SPs that can theoretically be created is 2^n, where n + * is the number of bits that are used to compute the role. + * + * Note, not all combinations of modes and flags below are possible: + * + * - invalid shadow pages are not accounted, so the bits are effectively 18 + * + * - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging); + * execonly and ad_disabled are only used for nested EPT which has + * has_4_byte_gpte=0. Therefore, 2 bits are always unused. + * + * - the 4 bits of level are effectively limited to the values 2/3/4/5, + * as 4k SPs are not tracked (allowed to go unsync). In addition non-PAE + * paging has exactly one upper level, making level completely redundant + * when has_4_byte_gpte=1. + * + * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if + * cr0_wp=0, therefore these three bits only give rise to 5 possibilities. + * + * Therefore, the maximum number of possible upper-level shadow pages for a + * single gfn is a bit less than 2^13. 
+ */ +struct kvm_mmu_page_role_arch { + u16 has_4_byte_gpte:1; + u16 quadrant:2; + u16 direct:1; + u16 access:3; + u16 efer_nx:1; + u16 cr0_wp:1; + u16 smep_andnot_wp:1; + u16 smap_andnot_wp:1; + u16 ad_disabled:1; + u16 guest_mode:1; + u16 passthrough:1; +}; + +#endif /* !__ASM_KVM_MMU_TYPES_H */ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 0a819d40131a..ebcd7a0dabef 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -37,6 +37,8 @@ #include #include +#include + #define __KVM_HAVE_ARCH_VCPU_DEBUGFS #define KVM_MAX_VCPUS 1024 @@ -286,72 +288,6 @@ enum x86_intercept_stage; struct kvm_kernel_irq_routing_entry; -/* - * kvm_mmu_page_role tracks the properties of a shadow page (where shadow page - * also includes TDP pages) to determine whether or not a page can be used in - * the given MMU context. This is a subset of the overall kvm_cpu_role to - * minimize the size of kvm_memory_slot.arch.gfn_track, i.e. allows allocating - * 2 bytes per gfn instead of 4 bytes per gfn. - * - * Upper-level shadow pages having gptes are tracked for write-protection via - * gfn_track. As above, gfn_track is a 16 bit counter, so KVM must not create - * more than 2^16-1 upper-level shadow pages at a single gfn, otherwise - * gfn_track will overflow and explosions will ensure. - * - * A unique shadow page (SP) for a gfn is created if and only if an existing SP - * cannot be reused. The ability to reuse a SP is tracked by its role, which - * incorporates various mode bits and properties of the SP. Roughly speaking, - * the number of unique SPs that can theoretically be created is 2^n, where n - * is the number of bits that are used to compute the role. - * - * But, even though there are 19 bits in the mask below, not all combinations - * of modes and flags are possible: - * - * - invalid shadow pages are not accounted, so the bits are effectively 18 - * - * - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging); - * execonly and ad_disabled are only used for nested EPT which has - * has_4_byte_gpte=0. Therefore, 2 bits are always unused. - * - * - the 4 bits of level are effectively limited to the values 2/3/4/5, - * as 4k SPs are not tracked (allowed to go unsync). In addition non-PAE - * paging has exactly one upper level, making level completely redundant - * when has_4_byte_gpte=1. - * - * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if - * cr0_wp=0, therefore these three bits only give rise to 5 possibilities. - * - * Therefore, the maximum number of possible upper-level shadow pages for a - * single gfn is a bit less than 2^13. - */ -union kvm_mmu_page_role { - u32 word; - struct { - unsigned level:4; - unsigned has_4_byte_gpte:1; - unsigned quadrant:2; - unsigned direct:1; - unsigned access:3; - unsigned invalid:1; - unsigned efer_nx:1; - unsigned cr0_wp:1; - unsigned smep_andnot_wp:1; - unsigned smap_andnot_wp:1; - unsigned ad_disabled:1; - unsigned guest_mode:1; - unsigned passthrough:1; - unsigned :5; - - /* - * This is left at the top of the word so that - * kvm_memslots_for_spte_role can extract it with a - * simple shift. While there is room, give it a whole - * byte so it is also faster to load it from memory. - */ - unsigned as_id:8; - }; -}; - /* * kvm_mmu_extended_role complements kvm_mmu_page_role, tracking properties * relevant to the current MMU configuration. 
When loading CR0, CR4, or EFER, diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f375b719f565..355548603960 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -210,13 +210,13 @@ static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \ { \ return !!(mmu->cpu_role. base_or_ext . reg##_##name); \ } -BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp); +BUILD_MMU_ROLE_ACCESSOR(base.arch, cr0, wp); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke); BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57); -BUILD_MMU_ROLE_ACCESSOR(base, efer, nx); +BUILD_MMU_ROLE_ACCESSOR(base.arch, efer, nx); BUILD_MMU_ROLE_ACCESSOR(ext, efer, lma); static inline bool is_cr0_pg(struct kvm_mmu *mmu) @@ -226,7 +226,7 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu) static inline bool is_cr4_pae(struct kvm_mmu *mmu) { - return !mmu->cpu_role.base.has_4_byte_gpte; + return !mmu->cpu_role.base.arch.has_4_byte_gpte; } static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu) @@ -618,7 +618,7 @@ static bool mmu_spte_age(u64 *sptep) static inline bool is_tdp_mmu_active(struct kvm_vcpu *vcpu) { - return tdp_mmu_enabled && vcpu->arch.mmu->root_role.direct; + return tdp_mmu_enabled && vcpu->arch.mmu->root_role.arch.direct; } static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu) @@ -695,10 +695,10 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp); static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) { - if (sp->role.passthrough) + if (sp->role.arch.passthrough) return sp->gfn; - if (!sp->role.direct) + if (!sp->role.arch.direct) return sp->shadowed_translation[index] >> PAGE_SHIFT; return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS)); @@ -727,7 +727,7 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index) * * In both cases, sp->role.access contains the correct access bits. */ - return sp->role.access; + return sp->role.arch.access; } static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, @@ -739,14 +739,14 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, } WARN_ONCE(access != kvm_mmu_page_get_access(sp, index), - "access mismatch under %s page %llx (expected %u, got %u)\n", - sp->role.passthrough ? "passthrough" : "direct", - sp->gfn, kvm_mmu_page_get_access(sp, index), access); + "access mismatch under %s page %llx (expected %u, got %u)\n", + sp->role.arch.passthrough ? "passthrough" : "direct", + sp->gfn, kvm_mmu_page_get_access(sp, index), access); WARN_ONCE(gfn != kvm_mmu_page_get_gfn(sp, index), - "gfn mismatch under %s page %llx (expected %llx, got %llx)\n", - sp->role.passthrough ? "passthrough" : "direct", - sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn); + "gfn mismatch under %s page %llx (expected %llx, got %llx)\n", + sp->role.arch.passthrough ? 
"passthrough" : "direct", + sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn); } static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index, @@ -1723,7 +1723,7 @@ static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) hlist_del(&sp->hash_link); list_del(&sp->link); free_page((unsigned long)sp->spt); - if (!sp->role.direct) + if (!sp->role.arch.direct) free_page((unsigned long)sp->shadowed_translation); kmem_cache_free(mmu_page_header_cache, sp); } @@ -1884,10 +1884,10 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, static bool sp_has_gptes(struct kvm_mmu_page *sp) { - if (sp->role.direct) + if (sp->role.arch.direct) return false; - if (sp->role.passthrough) + if (sp->role.arch.passthrough) return false; return true; @@ -2065,7 +2065,7 @@ static void clear_sp_write_flooding_count(u64 *spte) * The vCPU is required when finding indirect shadow pages; the shadow * page may already exist and syncing it needs the vCPU pointer in * order to read guest page tables. Direct shadow pages are never - * unsync, thus @vcpu can be NULL if @role.direct is true. + * unsync, thus @vcpu can be NULL if @role.arch.direct is true. */ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm, struct kvm_vcpu *vcpu, @@ -2101,7 +2101,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm, } /* unsync and write-flooding only apply to indirect SPs. */ - if (sp->role.direct) + if (sp->role.arch.direct) goto out; if (sp->unsync) { @@ -2162,7 +2162,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache); - if (!role.direct) + if (!role.arch.direct) sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -2187,7 +2187,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, return sp; } -/* Note, @vcpu may be NULL if @role.direct is true; see kvm_mmu_find_shadow_page. */ +/* Note, @vcpu may be NULL if @role.arch.direct is true; see kvm_mmu_find_shadow_page. */ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm, struct kvm_vcpu *vcpu, struct shadow_page_caches *caches, @@ -2231,9 +2231,9 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, role = parent_sp->role; role.level--; - role.access = access; - role.direct = direct; - role.passthrough = 0; + role.arch.access = access; + role.arch.direct = direct; + role.arch.passthrough = 0; /* * If the guest has 4-byte PTEs then that means it's using 32-bit, @@ -2261,9 +2261,9 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, * covers bit 21 (see above), thus the quadrant is calculated from the * _least_ significant bit of the PDE index. 
*/ - if (role.has_4_byte_gpte) { + if (role.arch.has_4_byte_gpte) { WARN_ON_ONCE(role.level != PG_LEVEL_4K); - role.quadrant = spte_index(sptep) & 1; + role.arch.quadrant = spte_index(sptep) & 1; } return role; @@ -2292,7 +2292,7 @@ static void shadow_walk_init_using_root(struct kvm_shadow_walk_iterator *iterato if (iterator->level >= PT64_ROOT_4LEVEL && vcpu->arch.mmu->cpu_role.base.level < PT64_ROOT_4LEVEL && - !vcpu->arch.mmu->root_role.direct) + !vcpu->arch.mmu->root_role.arch.direct) iterator->level = PT32E_ROOT_LEVEL; if (iterator->level == PT32E_ROOT_LEVEL) { @@ -2391,7 +2391,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, * a new sp with the correct access. */ child = spte_to_child_sp(*sptep); - if (child->role.access == direct_access) + if (child->role.arch.access == direct_access) return; drop_parent_pte(child, sptep); @@ -2420,7 +2420,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp, * avoids retaining a large number of stale nested SPs. */ if (tdp_enabled && invalid_list && - child->role.guest_mode && !child->parent_ptes.val) + child->role.arch.guest_mode && !child->parent_ptes.val) return kvm_mmu_prepare_zap_page(kvm, child, invalid_list); } @@ -2689,7 +2689,7 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva) gpa_t gpa; int r; - if (vcpu->arch.mmu->root_role.direct) + if (vcpu->arch.mmu->root_role.arch.direct) return 0; gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL); @@ -2900,7 +2900,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu, { struct page *pages[PTE_PREFETCH_NUM]; struct kvm_memory_slot *slot; - unsigned int access = sp->role.access; + unsigned int access = sp->role.arch.access; int i, ret; gfn_t gfn; @@ -2928,7 +2928,7 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *spte, *start = NULL; int i; - WARN_ON(!sp->role.direct); + WARN_ON(!sp->role.arch.direct); i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1); spte = sp->spt + i; @@ -3549,7 +3549,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu) * This should not be called while L2 is active, L2 can't invalidate * _only_ its own roots, e.g. INVVPID unconditionally exits. */ - WARN_ON_ONCE(mmu->root_role.guest_mode); + WARN_ON_ONCE(mmu->root_role.arch.guest_mode); for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) { root_hpa = mmu->prev_roots[i].hpa; @@ -3557,7 +3557,7 @@ void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu) continue; if (!to_shadow_page(root_hpa) || - to_shadow_page(root_hpa)->role.guest_mode) + to_shadow_page(root_hpa)->role.arch.guest_mode) roots_to_free |= KVM_MMU_ROOT_PREVIOUS(i); } @@ -3585,10 +3585,10 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant, struct kvm_mmu_page *sp; role.level = level; - role.quadrant = quadrant; + role.arch.quadrant = quadrant; - WARN_ON_ONCE(quadrant && !role.has_4_byte_gpte); - WARN_ON_ONCE(role.direct && role.has_4_byte_gpte); + WARN_ON_ONCE(quadrant && !role.arch.has_4_byte_gpte); + WARN_ON_ONCE(role.arch.direct && role.arch.has_4_byte_gpte); sp = kvm_mmu_get_shadow_page(vcpu, gfn, role); ++sp->root_count; @@ -3834,7 +3834,7 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu) * equivalent level in the guest's NPT to shadow. Allocate the tables * on demand, as running a 32-bit L1 VMM on 64-bit KVM is very rare. 
*/ - if (mmu->root_role.direct || + if (mmu->root_role.arch.direct || mmu->cpu_role.base.level >= PT64_ROOT_4LEVEL || mmu->root_role.level < PT64_ROOT_4LEVEL) return 0; @@ -3932,7 +3932,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu) int i; struct kvm_mmu_page *sp; - if (vcpu->arch.mmu->root_role.direct) + if (vcpu->arch.mmu->root_role.arch.direct) return; if (!VALID_PAGE(vcpu->arch.mmu->root.hpa)) @@ -4161,7 +4161,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, arch.token = alloc_apf_token(vcpu); arch.gfn = gfn; - arch.direct_map = vcpu->arch.mmu->root_role.direct; + arch.direct_map = vcpu->arch.mmu->root_role.arch.direct; arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu); return kvm_setup_async_pf(vcpu, cr2_or_gpa, @@ -4172,7 +4172,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) { int r; - if ((vcpu->arch.mmu->root_role.direct != work->arch.direct_map) || + if ((vcpu->arch.mmu->root_role.arch.direct != work->arch.direct_map) || work->wakeup_all) return; @@ -4180,7 +4180,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) if (unlikely(r)) return; - if (!vcpu->arch.mmu->root_role.direct && + if (!vcpu->arch.mmu->root_role.arch.direct && work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu)) return; @@ -4456,7 +4456,7 @@ static void nonpaging_init_context(struct kvm_mmu *context) static inline bool is_root_usable(struct kvm_mmu_root_info *root, gpa_t pgd, union kvm_mmu_page_role role) { - return (role.direct || pgd == root->pgd) && + return (role.arch.direct || pgd == root->pgd) && VALID_PAGE(root->hpa) && role.word == to_shadow_page(root->hpa)->role.word; } @@ -4576,7 +4576,7 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd) * If this is a direct root page, it doesn't have a write flooding * count. Otherwise, clear the write flooding count. */ - if (!new_role.direct) + if (!new_role.arch.direct) __clear_sp_write_flooding_count( to_shadow_page(vcpu->arch.mmu->root.hpa)); } @@ -4803,7 +4803,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu, shadow_zero_check = &context->shadow_zero_check; __reset_rsvds_bits_mask(shadow_zero_check, reserved_hpa_bits(), context->root_role.level, - context->root_role.efer_nx, + context->root_role.arch.efer_nx, guest_can_use_gbpages(vcpu), is_pse, is_amd); if (!shadow_me_mask) @@ -5055,21 +5055,21 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs) { union kvm_cpu_role role = {0}; - role.base.access = ACC_ALL; role.base.as_id = is_smm(vcpu); - role.base.guest_mode = is_guest_mode(vcpu); + role.base.arch.access = ACC_ALL; + role.base.arch.guest_mode = is_guest_mode(vcpu); role.ext.valid = 1; if (!____is_cr0_pg(regs)) { - role.base.direct = 1; + role.base.arch.direct = 1; return role; } - role.base.efer_nx = ____is_efer_nx(regs); - role.base.cr0_wp = ____is_cr0_wp(regs); - role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs); - role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs); - role.base.has_4_byte_gpte = !____is_cr4_pae(regs); + role.base.arch.efer_nx = ____is_efer_nx(regs); + role.base.arch.cr0_wp = ____is_cr0_wp(regs); + role.base.arch.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs); + role.base.arch.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs); + role.base.arch.has_4_byte_gpte = !____is_cr4_pae(regs); if (____is_efer_lma(regs)) role.base.level = ____is_cr4_la57(regs) ? 
PT64_ROOT_5LEVEL @@ -5109,15 +5109,15 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu, { union kvm_mmu_page_role role = {0}; - role.access = ACC_ALL; - role.cr0_wp = true; - role.efer_nx = true; role.as_id = cpu_role.base.as_id; - role.guest_mode = cpu_role.base.guest_mode; - role.ad_disabled = !kvm_ad_enabled(); role.level = kvm_mmu_get_tdp_level(vcpu); - role.direct = true; - role.has_4_byte_gpte = false; + role.arch.access = ACC_ALL; + role.arch.cr0_wp = true; + role.arch.efer_nx = true; + role.arch.guest_mode = cpu_role.base.arch.guest_mode; + role.arch.ad_disabled = !kvm_ad_enabled(); + role.arch.direct = true; + role.arch.has_4_byte_gpte = false; return role; } @@ -5194,7 +5194,7 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, * NX can be used by any non-nested shadow MMU to avoid having to reset * MMU contexts. */ - root_role.efer_nx = true; + root_role.arch.efer_nx = true; shadow_mmu_init_context(vcpu, context, cpu_role, root_role); } @@ -5212,13 +5212,13 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0, union kvm_mmu_page_role root_role; /* NPT requires CR0.PG=1. */ - WARN_ON_ONCE(cpu_role.base.direct); + WARN_ON_ONCE(cpu_role.base.arch.direct); root_role = cpu_role.base; root_role.level = kvm_mmu_get_tdp_level(vcpu); if (root_role.level == PT64_ROOT_5LEVEL && cpu_role.base.level == PT64_ROOT_4LEVEL) - root_role.passthrough = 1; + root_role.arch.passthrough = 1; shadow_mmu_init_context(vcpu, context, cpu_role, root_role); kvm_mmu_new_pgd(vcpu, nested_cr3); @@ -5237,11 +5237,11 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty, */ WARN_ON_ONCE(is_smm(vcpu)); role.base.level = level; - role.base.has_4_byte_gpte = false; - role.base.direct = false; - role.base.ad_disabled = !accessed_dirty; - role.base.guest_mode = true; - role.base.access = ACC_ALL; + role.base.arch.has_4_byte_gpte = false; + role.base.arch.direct = false; + role.base.arch.ad_disabled = !accessed_dirty; + role.base.arch.guest_mode = true; + role.base.arch.access = ACC_ALL; role.ext.word = 0; role.ext.execonly = execonly; @@ -5385,13 +5385,13 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu) { int r; - r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->root_role.direct); + r = mmu_topup_memory_caches(vcpu, !vcpu->arch.mmu->root_role.arch.direct); if (r) goto out; r = mmu_alloc_special_roots(vcpu); if (r) goto out; - if (vcpu->arch.mmu->root_role.direct) + if (vcpu->arch.mmu->root_role.arch.direct) r = mmu_alloc_direct_roots(vcpu); else r = mmu_alloc_shadow_roots(vcpu); @@ -5526,7 +5526,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa, gpa, bytes, sp->role.word); offset = offset_in_page(gpa); - pte_size = sp->role.has_4_byte_gpte ? 4 : 8; + pte_size = sp->role.arch.has_4_byte_gpte ? 
4 : 8; /* * Sometimes, the OS only writes the last one bytes to update status @@ -5550,7 +5550,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte) page_offset = offset_in_page(gpa); level = sp->role.level; *nspte = 1; - if (sp->role.has_4_byte_gpte) { + if (sp->role.arch.has_4_byte_gpte) { page_offset <<= 1; /* 32->64 */ /* * A 32-bit pde maps 4MB while the shadow pdes map @@ -5564,7 +5564,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte) } quadrant = page_offset >> PAGE_SHIFT; page_offset &= ~PAGE_MASK; - if (quadrant != sp->role.quadrant) + if (quadrant != sp->role.arch.quadrant) return NULL; } @@ -5628,7 +5628,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err void *insn, int insn_len) { int r, emulation_type = EMULTYPE_PF; - bool direct = vcpu->arch.mmu->root_role.direct; + bool direct = vcpu->arch.mmu->root_role.arch.direct; if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa))) return RET_PF_RETRY; @@ -5659,7 +5659,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err * paging in both guests. If true, we simply unprotect the page * and resume the guest. */ - if (vcpu->arch.mmu->root_role.direct && + if (vcpu->arch.mmu->root_role.arch.direct && (error_code & PFERR_NESTED_GUEST_PAGE) == PFERR_NESTED_GUEST_PAGE) { kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)); return 1; @@ -6321,7 +6321,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm, spte = make_huge_page_split_spte(kvm, huge_spte, sp->role, index); mmu_spte_set(sptep, spte); - __rmap_add(kvm, cache, slot, sptep, gfn, sp->role.access); + __rmap_add(kvm, cache, slot, sptep, gfn, sp->role.arch.access); } __link_shadow_page(kvm, cache, huge_sptep, sp, flush); @@ -6380,7 +6380,7 @@ static bool shadow_mmu_try_split_huge_pages(struct kvm *kvm, sp = sptep_to_sp(huge_sptep); /* TDP MMU is enabled, so rmap only contains nested MMU SPs. */ - if (WARN_ON_ONCE(!sp->role.guest_mode)) + if (WARN_ON_ONCE(!sp->role.arch.guest_mode)) continue; /* The rmaps should never contain non-leaf SPTEs. */ @@ -6502,7 +6502,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, * the guest, and the guest page table is using 4K page size * mapping if the indirect sp has level = 1. */ - if (sp->role.direct && + if (sp->role.arch.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, PG_LEVEL_NUM)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); @@ -6942,7 +6942,7 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm) struct kvm_mmu_page, possible_nx_huge_page_link); WARN_ON_ONCE(!sp->nx_huge_page_disallowed); - WARN_ON_ONCE(!sp->role.direct); + WARN_ON_ONCE(!sp->role.arch.direct); /* * Unaccount and do not attempt to recover any NX Huge Pages diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 5427f65117b4..c19a80fdeb8d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -143,7 +143,7 @@ static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) * being enabled is mandatory as the bits used to denote WP-only SPTEs * are reserved for PAE paging (32-bit KVM). 
*/ - return kvm_x86_ops.cpu_dirty_log_size && sp->role.guest_mode; + return kvm_x86_ops.cpu_dirty_log_size && sp->role.arch.guest_mode; } int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, @@ -270,7 +270,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, }; int r; - if (vcpu->arch.mmu->root_role.direct) { + if (vcpu->arch.mmu->root_role.arch.direct) { fault.gfn = fault.addr >> PAGE_SHIFT; fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn); } diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ae86820cef69..6a4a43b90780 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -35,13 +35,13 @@ " %snxe %sad root %u %s%c", \ __entry->mmu_valid_gen, \ __entry->gfn, role.level, \ - role.has_4_byte_gpte ? 4 : 8, \ - role.quadrant, \ - role.direct ? " direct" : "", \ - access_str[role.access], \ + role.arch.has_4_byte_gpte ? 4 : 8, \ + role.arch.quadrant, \ + role.arch.direct ? " direct" : "", \ + access_str[role.arch.access], \ role.invalid ? " invalid" : "", \ - role.efer_nx ? "" : "!", \ - role.ad_disabled ? "!" : "", \ + role.arch.efer_nx ? "" : "!", \ + role.arch.ad_disabled ? "!" : "", \ __entry->root_count, \ __entry->unsync ? "unsync" : "sync", 0); \ saved_ptr; \ diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index e5662dbd519c..e15ec1c473da 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -55,7 +55,7 @@ #define PT_LEVEL_BITS 9 #define PT_GUEST_DIRTY_SHIFT 9 #define PT_GUEST_ACCESSED_SHIFT 8 - #define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.ad_disabled) + #define PT_HAVE_ACCESSED_DIRTY(mmu) (!(mmu)->cpu_role.base.arch.ad_disabled) #define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL #else #error Invalid PTTYPE value @@ -532,7 +532,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte); gfn = gpte_to_gfn(gpte); - pte_access = sp->role.access & FNAME(gpte_access)(gpte); + pte_access = sp->role.arch.access & FNAME(gpte_access)(gpte); FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte); slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, @@ -592,7 +592,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw, if (unlikely(vcpu->kvm->mmu_invalidate_in_progress)) return; - if (sp->role.direct) + if (sp->role.arch.direct) return __direct_pte_prefetch(vcpu, sp, sptep); i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1); @@ -884,7 +884,7 @@ static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp) WARN_ON(sp->role.level != PG_LEVEL_4K); if (PTTYPE == 32) - offset = sp->role.quadrant << SPTE_LEVEL_BITS; + offset = sp->role.arch.quadrant << SPTE_LEVEL_BITS; return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t); } @@ -1003,9 +1003,11 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) */ const union kvm_mmu_page_role sync_role_ign = { .level = 0xf, - .access = 0x7, - .quadrant = 0x3, - .passthrough = 0x1, + .arch = { + .access = 0x7, + .quadrant = 0x3, + .passthrough = 0x1, + }, }; /* @@ -1014,7 +1016,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the * reserved bits checks will be wrong, etc... 
*/ - if (WARN_ON_ONCE(sp->role.direct || + if (WARN_ON_ONCE(sp->role.arch.direct || (sp->role.word ^ root_role.word) & ~sync_role_ign.word)) return -1; @@ -1043,7 +1045,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) } gfn = gpte_to_gfn(gpte); - pte_access = sp->role.access; + pte_access = sp->role.arch.access; pte_access &= FNAME(gpte_access)(gpte); FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index c0fd7e049b4e..fe4b626cb431 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -146,7 +146,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, WARN_ON_ONCE(!pte_access && !shadow_present_mask); - if (sp->role.ad_disabled) + if (sp->role.arch.ad_disabled) spte |= SPTE_TDP_AD_DISABLED_MASK; else if (kvm_mmu_page_ad_need_write_protect(sp)) spte |= SPTE_TDP_AD_WRPROT_ONLY_MASK; @@ -301,7 +301,7 @@ u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, union kvm_mmu_page * the page executable as the NX hugepage mitigation no longer * applies. */ - if ((role.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled(kvm)) + if ((role.arch.access & ACC_EXEC_MASK) && is_nx_huge_page_enabled(kvm)) child_spte = make_spte_executable(child_spte); } diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 1f03701b943a..ad84c549fe96 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -260,7 +260,7 @@ static inline bool kvm_ad_enabled(void) static inline bool sp_ad_disabled(struct kvm_mmu_page *sp) { - return sp->role.ad_disabled; + return sp->role.arch.ad_disabled; } static inline bool spte_ad_enabled(u64 spte) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 9b2da8c8f30a..2bfe060768fc 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8442,7 +8442,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, WARN_ON_ONCE(!(emulation_type & EMULTYPE_PF))) return false; - if (!vcpu->arch.mmu->root_role.direct) { + if (!vcpu->arch.mmu->root_role.arch.direct) { /* * Write permission should be allowed since only * write access need to be emulated. @@ -8475,7 +8475,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_release_pfn_clean(pfn); /* The instructions are well-emulated on direct mmu. */ - if (vcpu->arch.mmu->root_role.direct) { + if (vcpu->arch.mmu->root_role.arch.direct) { unsigned int indirect_shadow_pages; write_lock(&vcpu->kvm->mmu_lock); @@ -8543,7 +8543,7 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt, vcpu->arch.last_retry_eip = ctxt->eip; vcpu->arch.last_retry_addr = cr2_or_gpa; - if (!vcpu->arch.mmu->root_role.direct) + if (!vcpu->arch.mmu->root_role.arch.direct) gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); @@ -8846,7 +8846,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, ctxt->exception.address = cr2_or_gpa; /* With shadow page tables, cr2 contains a GVA or nGPA. 
 */
-	if (vcpu->arch.mmu->root_role.direct) {
+	if (vcpu->arch.mmu->root_role.arch.direct) {
		ctxt->gpa_available = true;
		ctxt->gpa_val = cr2_or_gpa;
	}
diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h
new file mode 100644
index 000000000000..3f35a924e031
--- /dev/null
+++ b/include/kvm/mmu_types.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_MMU_TYPES_H
+#define __KVM_MMU_TYPES_H
+
+#include
+#include
+#include
+
+#include
+
+/*
+ * kvm_mmu_page_role tracks the properties of a shadow page (where shadow page
+ * also includes TDP pages) to determine whether or not a page can be used in
+ * the given MMU context.
+ */
+union kvm_mmu_page_role {
+	u32 word;
+	struct {
+		struct {
+			/* The address space ID mapped by the page. */
+			u16 as_id:8;
+
+			/* The level of the page in the page table hierarchy. */
+			u16 level:4;
+
+			/* Whether the page is invalid, i.e. pending destruction. */
+			u16 invalid:1;
+		};
+
+		/* Architecture-specific properties. */
+		struct kvm_mmu_page_role_arch arch;
+	};
+};
+
+static_assert(sizeof(union kvm_mmu_page_role) == sizeof_field(union kvm_mmu_page_role, word));
+
+#endif /* !__KVM_MMU_TYPES_H */

From patchwork Thu Dec 8 19:38:23 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068885
Date: Thu, 8 Dec 2022 11:38:23 -0800
Message-ID: <20221208193857.4090582-4-dmatlack@google.com>
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Subject: [RFC PATCH 03/37] KVM: MMU: Move tdp_ptep_t into common code
From: David Matlack
To: Paolo Bonzini

Move the definition of tdp_ptep_t into kvm/mmu_types.h so it can be used
from common code in future commits.

No functional change intended.
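For concreteness, a minimal standalone sketch of what the shared typedef enables: once tdp_ptep_t lives in include/kvm/mmu_types.h, code that walks TDP page tables can be written without pulling in x86-only headers. This is not kernel code; the helpers (tdp_pte_present, sketch_walk) and the present-bit layout are made up for illustration, and __rcu (a sparse annotation in the kernel) is defined away so the example builds on its own.

/*
 * Standalone sketch, NOT kernel code. tdp_ptep_t is simply an RCU-annotated
 * pointer to a 64-bit page-table entry.
 */
#include <stdint.h>
#include <stdio.h>

#define __rcu			/* kernel sparse annotation; no-op here */
typedef uint64_t u64;

typedef u64 __rcu *tdp_ptep_t;	/* pointer to a TDP page-table entry */

#define SPTE_PRESENT_BIT (1ull << 0)	/* illustrative bit layout only */

static int tdp_pte_present(u64 pte)
{
	return (pte & SPTE_PRESENT_BIT) != 0;
}

/* Walk one (fake) page-table page and count present entries. */
static int sketch_walk(tdp_ptep_t table, int nr_entries)
{
	int present = 0;
	int i;

	for (i = 0; i < nr_entries; i++) {
		/* The kernel reads SPTEs through RCU-aware helpers. */
		u64 pte = table[i];

		if (tdp_pte_present(pte))
			present++;
	}
	return present;
}

int main(void)
{
	u64 fake_pt[4] = { SPTE_PRESENT_BIT, 0, SPTE_PRESENT_BIT, 0 };
	tdp_ptep_t ptep = fake_pt;

	printf("present entries: %d\n", sketch_walk(ptep, 4));
	return 0;
}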
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu_internal.h | 2 --
 include/kvm/mmu_types.h         | 2 ++
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index c19a80fdeb8d..e32379c5b1ad 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -44,8 +44,6 @@ extern bool dbg;
 #define INVALID_PAE_ROOT 0
 #define IS_VALID_PAE_ROOT(x) (!!(x))
 
-typedef u64 __rcu *tdp_ptep_t;
-
 struct kvm_mmu_page {
	/*
	 * Note, "link" through "spt" fit in a single 64 byte cache line on
diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h
index 3f35a924e031..14099956fdac 100644
--- a/include/kvm/mmu_types.h
+++ b/include/kvm/mmu_types.h
@@ -34,4 +34,6 @@ union kvm_mmu_page_role {
 
 static_assert(sizeof(union kvm_mmu_page_role) == sizeof_field(union kvm_mmu_page_role, word));
 
+typedef u64 __rcu *tdp_ptep_t;
+
 #endif /* !__KVM_MMU_TYPES_H */

From patchwork Thu Dec 8 19:38:24 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068878
desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mjl-008Zlv-RO for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:39:21 +0000
Date: Thu, 8 Dec 2022 11:38:24 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-5-dmatlack@google.com>
Subject: [RFC PATCH 04/37] KVM: x86/mmu: Invert sp->tdp_mmu_page to sp->shadow_mmu_page
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193917_966803_122EA4B5 X-CRM114-Status: GOOD ( 13.91 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Invert the meaning of sp->tdp_mmu_page and rename it accordingly. This allows the TDP MMU code to not care about this field, which will be used in a subsequent commit to move the TDP MMU to common code. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 1 + arch/x86/kvm/mmu/mmu_internal.h | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 3 --- arch/x86/kvm/mmu/tdp_mmu.h | 5 ++++- 4 files changed, 6 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 355548603960..f7668a32721d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2180,6 +2180,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, sp->gfn = gfn; sp->role = role; + sp->shadow_mmu_page = true; hlist_add_head(&sp->hash_link, sp_list); if (sp_has_gptes(sp)) account_shadowed(kvm, sp); diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index e32379c5b1ad..c1a379fba24d 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -52,7 +52,7 @@ struct kvm_mmu_page { struct list_head link; struct hlist_node hash_link; - bool tdp_mmu_page; + bool shadow_mmu_page; bool unsync; u8 mmu_valid_gen; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 7ccac1aa8df6..fc0b87ceb1ea 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -133,8 +133,6 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, if (!refcount_dec_and_test(&root->tdp_mmu_root_count)) return; - WARN_ON(!is_tdp_mmu_page(root)); - /* * The root now has refcount=0. 
It is valid, but readers already * cannot acquire a reference to it because kvm_tdp_mmu_get_root() @@ -279,7 +277,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, sp->role = role; sp->gfn = gfn; sp->ptep = sptep; - sp->tdp_mmu_page = true; trace_kvm_mmu_get_page(sp, true); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 0a63b1afabd3..18d3719f14ea 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -71,7 +71,10 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr, u64 *spte); #ifdef CONFIG_X86_64 -static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; } +static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) +{ + return !sp->shadow_mmu_page; +} #else static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; } #endif From patchwork Thu Dec 8 19:38:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068874 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id AF147C4332F for ; Thu, 8 Dec 2022 20:38:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=J0erIfMUpKOcU1pjpHBf2a4ciiAq0EqeTrlTVwLOZmY=; b=Hakm/DrVhbBhDQbJuNJM7AcRbM tQLdg9rk1OR16aRcyGiwFE/UIgz0f++4HRZWxg/UddKE/avl3qgzQuvigMAei2ANsVVBqiJOLGhTl 8Qchs0QcgrO0drmJgy7IUQUbO0+0/KnSJ35D0icQskPVY+Sy1CE8NE6/ozR24Zj95anv/kP2lO7XM Wj9GdQFBgqvYYK4JPEiI/zXvwLHJp0nhe6kWPR8+eijgs2K6bKYbzr1f3kfUorJ9E4aFGDFXsJ+G0 bJ2Tpzse3NkvHyRGQpgiWc8scYZNbwgA3gEnT0R9LyLBcQCPqzcVnzMvIYCojEN/LfiZ1PVNUYBDI Ks2t7QLQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Nex-00ApMW-8t; Thu, 08 Dec 2022 20:38:23 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NV6-00AgYd-O6 for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 20:28:13 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=7K9ka6MUeaf4VmwASDJsBFWEG85o/chirVO+ir7DJ1k=; b=bKBzEWnnUSmo38V1MoJOUz8eBd NKZOelA7lZSZJhPd/nbprBk19zV8BeQ7FN6g7K3FUIvSZ8/tPHZsBbXZI/6jh6xgz7MQGCmiWXhzL GUd1AEBNm5EkerkRmjG/mba98Tu3Gjy7oemQO3pGLSy+Ttdm3VR0abKyNemJzw9Wi1O7nwlWBIUWz axLKlfpRDREpfYUfmYpTrfi1fuH+4cPk9yanqQgi2TdAdSjzKuJLeSyINraysGgk4u8tnGG0QLQpI 5ncid/CSILalcQTimMFuq2JjBwStbasT18tJ3Fwm+vbxXA3vqasNbE3O+G02O9QWQaXU5zJjP4mmX LqYAhTMA==; Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 
1p3Mjl-008Zly-PC for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:39:21 +0000
Date: Thu, 8 Dec 2022 11:38:25 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-6-dmatlack@google.com>
Subject: [RFC PATCH 05/37] KVM: x86/mmu: Unify TDP MMU and Shadow MMU root refcounts
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193917_939783_600DB874 X-CRM114-Status: GOOD ( 19.39 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use the same field for refcounting roots in the TDP MMU and Shadow MMU. The atomicity provided by refcount_t is overkill for the Shadow MMU, since it holds the write-lock. But converging this field will enable a future commit to more easily move struct kvm_mmu_page to common code. Note, refcount_dec_and_test() returns true if the resulting refcount is 0. Hence the check in mmu_free_root_page() is inverted to check if shadow root refcount is 0. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 14 +++++++------- arch/x86/kvm/mmu/mmu_internal.h | 6 ++---- arch/x86/kvm/mmu/mmutrace.h | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++---- arch/x86/kvm/mmu/tdp_mmu.h | 2 +- 5 files changed, 15 insertions(+), 17 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f7668a32721d..11cef930d5ed 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2498,14 +2498,14 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm, if (sp->unsync) kvm_unlink_unsync_page(kvm, sp); - if (!sp->root_count) { + if (!refcount_read(&sp->root_refcount)) { /* Count self */ (*nr_zapped)++; /* * Already invalid pages (previously active roots) are not on * the active page list. See list_del() in the "else" case of - * !sp->root_count. + * !sp->root_refcount. */ if (sp->role.invalid) list_add(&sp->link, invalid_list); @@ -2515,7 +2515,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm, } else { /* * Remove the active root from the active page list, the root - * will be explicitly freed when the root_count hits zero. + * will be explicitly freed when the root_refcount hits zero. */ list_del(&sp->link); @@ -2570,7 +2570,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm, kvm_flush_remote_tlbs(kvm); list_for_each_entry_safe(sp, nsp, invalid_list, link) { - WARN_ON(!sp->role.invalid || sp->root_count); + WARN_ON(!sp->role.invalid || refcount_read(&sp->root_refcount)); kvm_mmu_free_shadow_page(sp); } } @@ -2593,7 +2593,7 @@ static unsigned long kvm_mmu_zap_oldest_mmu_pages(struct kvm *kvm, * Don't zap active root pages, the page itself can't be freed * and zapping it will just force vCPUs to realloc and reload. 
*/ - if (sp->root_count) + if (refcount_read(&sp->root_refcount)) continue; unstable = __kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, @@ -3481,7 +3481,7 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa, if (is_tdp_mmu_page(sp)) kvm_tdp_mmu_put_root(kvm, sp, false); - else if (!--sp->root_count && sp->role.invalid) + else if (refcount_dec_and_test(&sp->root_refcount) && sp->role.invalid) kvm_mmu_prepare_zap_page(kvm, sp, invalid_list); *root_hpa = INVALID_PAGE; @@ -3592,7 +3592,7 @@ static hpa_t mmu_alloc_root(struct kvm_vcpu *vcpu, gfn_t gfn, int quadrant, WARN_ON_ONCE(role.arch.direct && role.arch.has_4_byte_gpte); sp = kvm_mmu_get_shadow_page(vcpu, gfn, role); - ++sp->root_count; + refcount_inc(&sp->root_refcount); return __pa(sp->spt); } diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index c1a379fba24d..fd4990c8b0e9 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -87,10 +87,8 @@ struct kvm_mmu_page { u64 *shadowed_translation; /* Currently serving as active root */ - union { - int root_count; - refcount_t tdp_mmu_root_count; - }; + refcount_t root_refcount; + unsigned int unsync_children; union { struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */ diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index 6a4a43b90780..ffd10ce3eae3 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -19,7 +19,7 @@ __entry->mmu_valid_gen = sp->mmu_valid_gen; \ __entry->gfn = sp->gfn; \ __entry->role = sp->role.word; \ - __entry->root_count = sp->root_count; \ + __entry->root_count = refcount_read(&sp->root_refcount); \ __entry->unsync = sp->unsync; #define KVM_MMU_PAGE_PRINTK() ({ \ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index fc0b87ceb1ea..34d674080170 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -130,7 +130,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, { kvm_lockdep_assert_mmu_lock_held(kvm, shared); - if (!refcount_dec_and_test(&root->tdp_mmu_root_count)) + if (!refcount_dec_and_test(&root->root_refcount)) return; /* @@ -158,7 +158,7 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, * zap the root because a root cannot go from invalid to valid. */ if (!kvm_tdp_root_mark_invalid(root)) { - refcount_set(&root->tdp_mmu_root_count, 1); + refcount_set(&root->root_refcount, 1); /* * Zapping the root in a worker is not just "nice to have"; @@ -316,7 +316,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) root = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_sp(root, NULL, 0, role); - refcount_set(&root->tdp_mmu_root_count, 1); + refcount_set(&root->root_refcount, 1); spin_lock(&kvm->arch.tdp_mmu_pages_lock); list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots); @@ -883,7 +883,7 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root, * and lead to use-after-free as zapping a SPTE triggers "writeback" of * dirty accessed bits to the SPTE's associated struct page. 
*/ - WARN_ON_ONCE(!refcount_read(&root->tdp_mmu_root_count)); + WARN_ON_ONCE(!refcount_read(&root->root_refcount)); kvm_lockdep_assert_mmu_lock_held(kvm, shared); diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 18d3719f14ea..19d3153051a3 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -14,7 +14,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root) { - return refcount_inc_not_zero(&root->tdp_mmu_root_count); + return refcount_inc_not_zero(&root->root_refcount); } void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, From patchwork Thu Dec 8 19:38:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068873 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9EF2BC4332F for ; Thu, 8 Dec 2022 20:37:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=n3VrP7GfYwpwkN/+YZIA6foxUeIXtA3tVmQcjA3HHPc=; b=j9zE2XN4WoPqXZlqTEa1Xntaxw WJZyZh27KKUf7+zqdjdRbxL6sQHl9BLi5X4Hujh4sS6Eg2R6zxUNUaNi4W9OhZa4hmdLbcKgMDyqj /cUxE8ouYjOZ95SCUIqineL13k8ccvVpfGHpK6pRuSL+Bv2Oe+i3aO2tltlyYh9vpDkH8GhphRqZK xXHlLnkuGRg/Hd8a9xroR8TcbkLJ3jRWhiCti5U5tVRnfJFZiVblTsZRciNUFivu7uiDDqI4qUz5B uD5xlubVToaV2U/3auJnAte9wEBPqdK1JHMFiJWnmqq1ed2WKpCKQ8XWy9NC0ck9jnyfDk9xYfaXq /oDXCDrg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Nda-00AoIa-Ex; Thu, 08 Dec 2022 20:36:58 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NV4-00AgYd-Iz for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 20:28:10 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=a3bKpO5b/v0RCxuNw899Xshau8/zESVZr6kYd+4zdKE=; b=QFkqKeIffWUjauuxHsNlkSOXCE pFNP/vkYkynaeS9vh7ZhihZ3PUe42WQele6cBoAyLXdwpjJwUjR2DL51fKJGDdOXXjXvYyy2It37h CXcrovG4kNvbA3LDiAOrgh0tp2nLyIx85SxGgoS/4c440KTnYWALPUnfmkMh4Fl21xcTyiPukwDNX V1Zpxsn6dSYvTepx3mdrFdjfONVMWcdnvTvcrbp3g95GJki64lkCHaaSCGW/+WRBl2F1HbuwLTCfY WmPU74WiFBZ+bSRjtBXZaJbrUo3AUyb4J7l3KNxXOn2OZeaEJ8p/EmTqsBdqqy+x49XsEfXplo7VY 4lO/4Bfw==; Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mjn-008ZnK-1H for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:39:24 +0000 Received: by mail-yb1-xb49.google.com with SMTP id e15-20020a5b0ccf000000b006ed1704b40cso2546421ybr.5 for ; Thu, 08 Dec 
2022 11:39:18 -0800 (PST)
Date: Thu, 8 Dec 2022 11:38:26 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-7-dmatlack@google.com>
Subject: [RFC PATCH 06/37] KVM: MMU: Move struct kvm_mmu_page to common code
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Oliver Upton, Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Morton, David Matlack, Anshuman Khandual, Nadav Amit, "Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett", Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao, Colin Cross, Hugh Dickins, Ben Gardon, Mingwei Zhang, Krish Sadhukhan, Ricardo Koller, Jing Zhang, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org

Move struct kvm_mmu_page to common code, and move all x86-specific fields
into struct kvm_mmu_page_arch.

This commit increases the size of struct kvm_mmu_page by 64 bytes on
x86_64 (184 bytes -> 248 bytes). 
The size of this struct can be reduced in future commits by moving TDP MMU root fields into a separate struct and by dynamically allocating fields only used by the Shadow MMU. No functional change intended. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm/mmu_types.h | 62 ++++++++++++++ arch/x86/include/asm/kvm_host.h | 4 - arch/x86/kvm/mmu/mmu.c | 122 ++++++++++++++------------- arch/x86/kvm/mmu/mmu_internal.h | 83 ------------------ arch/x86/kvm/mmu/mmutrace.h | 4 +- arch/x86/kvm/mmu/paging_tmpl.h | 10 +-- arch/x86/kvm/mmu/tdp_mmu.c | 8 +- arch/x86/kvm/mmu/tdp_mmu.h | 2 +- include/kvm/mmu_types.h | 32 ++++++- 9 files changed, 167 insertions(+), 160 deletions(-) diff --git a/arch/x86/include/asm/kvm/mmu_types.h b/arch/x86/include/asm/kvm/mmu_types.h index 35f893ebab5a..affcb520b482 100644 --- a/arch/x86/include/asm/kvm/mmu_types.h +++ b/arch/x86/include/asm/kvm/mmu_types.h @@ -2,6 +2,8 @@ #ifndef __ASM_KVM_MMU_TYPES_H #define __ASM_KVM_MMU_TYPES_H +#include +#include #include /* @@ -53,4 +55,64 @@ struct kvm_mmu_page_role_arch { u16 passthrough:1; }; +struct kvm_rmap_head { + unsigned long val; +}; + +struct kvm_mmu_page_arch { + struct hlist_node hash_link; + + bool shadow_mmu_page; + bool unsync; + u8 mmu_valid_gen; + + /* + * The shadow page can't be replaced by an equivalent huge page + * because it is being used to map an executable page in the guest + * and the NX huge page mitigation is enabled. + */ + bool nx_huge_page_disallowed; + + /* + * Stores the result of the guest translation being shadowed by each + * SPTE. KVM shadows two types of guest translations: nGPA -> GPA + * (shadow EPT/NPT) and GVA -> GPA (traditional shadow paging). In both + * cases the result of the translation is a GPA and a set of access + * constraints. + * + * The GFN is stored in the upper bits (PAGE_SHIFT) and the shadowed + * access permissions are stored in the lower bits. Note, for + * convenience and uniformity across guests, the access permissions are + * stored in KVM format (e.g. ACC_EXEC_MASK) not the raw guest format. + */ + u64 *shadowed_translation; + + unsigned int unsync_children; + + /* Rmap pointers to all parent sptes. */ + struct kvm_rmap_head parent_ptes; + + DECLARE_BITMAP(unsync_child_bitmap, 512); + + /* + * Tracks shadow pages that, if zapped, would allow KVM to create an NX + * huge page. A shadow page will have nx_huge_page_disallowed set but + * not be on the list if a huge page is disallowed for other reasons, + * e.g. because KVM is shadowing a PTE at the same gfn, the memslot + * isn't properly aligned, etc... + */ + struct list_head possible_nx_huge_page_link; + +#ifdef CONFIG_X86_32 + /* + * Used out of the mmu-lock to avoid reading spte values while an + * update is in progress; see the comments in __get_spte_lockless(). + */ + int clear_spte_count; +#endif + + /* Number of writes since the last time traversal visited this page. 
*/ + atomic_t write_flooding_count; +}; + #endif /* !__ASM_KVM_MMU_TYPES_H */ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index ebcd7a0dabef..f5743a652e10 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -329,10 +329,6 @@ union kvm_cpu_role { }; }; -struct kvm_rmap_head { - unsigned long val; -}; - struct kvm_pio_request { unsigned long linear_rip; unsigned long count; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 11cef930d5ed..e47f35878ab5 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -350,7 +350,7 @@ static void count_spte_clear(u64 *sptep, u64 spte) /* Ensure the spte is completely set before we increase the count */ smp_wmb(); - sp->clear_spte_count++; + sp->arch.clear_spte_count++; } static void __set_spte(u64 *sptep, u64 spte) @@ -432,7 +432,7 @@ static u64 __get_spte_lockless(u64 *sptep) int count; retry: - count = sp->clear_spte_count; + count = sp->arch.clear_spte_count; smp_rmb(); spte.spte_low = orig->spte_low; @@ -442,7 +442,7 @@ static u64 __get_spte_lockless(u64 *sptep) smp_rmb(); if (unlikely(spte.spte_low != orig->spte_low || - count != sp->clear_spte_count)) + count != sp->arch.clear_spte_count)) goto retry; return spte.spte; @@ -699,7 +699,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) return sp->gfn; if (!sp->role.arch.direct) - return sp->shadowed_translation[index] >> PAGE_SHIFT; + return sp->arch.shadowed_translation[index] >> PAGE_SHIFT; return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS)); } @@ -713,7 +713,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index) static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index) { if (sp_has_gptes(sp)) - return sp->shadowed_translation[index] & ACC_ALL; + return sp->arch.shadowed_translation[index] & ACC_ALL; /* * For direct MMUs (e.g. TDP or non-paging guests) or passthrough SPs, @@ -734,7 +734,7 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, gfn_t gfn, unsigned int access) { if (sp_has_gptes(sp)) { - sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access; + sp->arch.shadowed_translation[index] = (gfn << PAGE_SHIFT) | access; return; } @@ -825,18 +825,18 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp) * on the list if KVM is reusing an existing shadow page, i.e. if KVM * links a shadow page at multiple points. 
*/ - if (!list_empty(&sp->possible_nx_huge_page_link)) + if (!list_empty(&sp->arch.possible_nx_huge_page_link)) return; ++kvm->stat.nx_lpage_splits; - list_add_tail(&sp->possible_nx_huge_page_link, + list_add_tail(&sp->arch.possible_nx_huge_page_link, &kvm->arch.possible_nx_huge_pages); } static void account_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp, bool nx_huge_page_possible) { - sp->nx_huge_page_disallowed = true; + sp->arch.nx_huge_page_disallowed = true; if (nx_huge_page_possible) track_possible_nx_huge_page(kvm, sp); @@ -861,16 +861,16 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp) void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp) { - if (list_empty(&sp->possible_nx_huge_page_link)) + if (list_empty(&sp->arch.possible_nx_huge_page_link)) return; --kvm->stat.nx_lpage_splits; - list_del_init(&sp->possible_nx_huge_page_link); + list_del_init(&sp->arch.possible_nx_huge_page_link); } static void unaccount_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp) { - sp->nx_huge_page_disallowed = false; + sp->arch.nx_huge_page_disallowed = false; untrack_possible_nx_huge_page(kvm, sp); } @@ -1720,11 +1720,11 @@ static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp) static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp) { MMU_WARN_ON(!is_empty_shadow_page(sp->spt)); - hlist_del(&sp->hash_link); + hlist_del(&sp->arch.hash_link); list_del(&sp->link); free_page((unsigned long)sp->spt); if (!sp->role.arch.direct) - free_page((unsigned long)sp->shadowed_translation); + free_page((unsigned long)sp->arch.shadowed_translation); kmem_cache_free(mmu_page_header_cache, sp); } @@ -1739,13 +1739,13 @@ static void mmu_page_add_parent_pte(struct kvm_mmu_memory_cache *cache, if (!parent_pte) return; - pte_list_add(cache, parent_pte, &sp->parent_ptes); + pte_list_add(cache, parent_pte, &sp->arch.parent_ptes); } static void mmu_page_remove_parent_pte(struct kvm_mmu_page *sp, u64 *parent_pte) { - pte_list_remove(parent_pte, &sp->parent_ptes); + pte_list_remove(parent_pte, &sp->arch.parent_ptes); } static void drop_parent_pte(struct kvm_mmu_page *sp, @@ -1761,7 +1761,7 @@ static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp) u64 *sptep; struct rmap_iterator iter; - for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) { + for_each_rmap_spte(&sp->arch.parent_ptes, &iter, sptep) { mark_unsync(sptep); } } @@ -1771,9 +1771,9 @@ static void mark_unsync(u64 *spte) struct kvm_mmu_page *sp; sp = sptep_to_sp(spte); - if (__test_and_set_bit(spte_index(spte), sp->unsync_child_bitmap)) + if (__test_and_set_bit(spte_index(spte), sp->arch.unsync_child_bitmap)) return; - if (sp->unsync_children++) + if (sp->arch.unsync_children++) return; kvm_mmu_mark_parents_unsync(sp); } @@ -1799,7 +1799,7 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp, { int i; - if (sp->unsync) + if (sp->arch.unsync) for (i=0; i < pvec->nr; i++) if (pvec->page[i].sp == sp) return 0; @@ -1812,9 +1812,9 @@ static int mmu_pages_add(struct kvm_mmu_pages *pvec, struct kvm_mmu_page *sp, static inline void clear_unsync_child_bit(struct kvm_mmu_page *sp, int idx) { - --sp->unsync_children; - WARN_ON((int)sp->unsync_children < 0); - __clear_bit(idx, sp->unsync_child_bitmap); + --sp->arch.unsync_children; + WARN_ON((int)sp->arch.unsync_children < 0); + __clear_bit(idx, sp->arch.unsync_child_bitmap); } static int __mmu_unsync_walk(struct kvm_mmu_page *sp, @@ -1822,7 +1822,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp, { 
int i, ret, nr_unsync_leaf = 0; - for_each_set_bit(i, sp->unsync_child_bitmap, 512) { + for_each_set_bit(i, sp->arch.unsync_child_bitmap, 512) { struct kvm_mmu_page *child; u64 ent = sp->spt[i]; @@ -1833,7 +1833,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp, child = spte_to_child_sp(ent); - if (child->unsync_children) { + if (child->arch.unsync_children) { if (mmu_pages_add(pvec, child, i)) return -ENOSPC; @@ -1845,7 +1845,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp, nr_unsync_leaf += ret; } else return ret; - } else if (child->unsync) { + } else if (child->arch.unsync) { nr_unsync_leaf++; if (mmu_pages_add(pvec, child, i)) return -ENOSPC; @@ -1862,7 +1862,7 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp, struct kvm_mmu_pages *pvec) { pvec->nr = 0; - if (!sp->unsync_children) + if (!sp->arch.unsync_children) return 0; mmu_pages_add(pvec, sp, INVALID_INDEX); @@ -1871,9 +1871,9 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp, static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp) { - WARN_ON(!sp->unsync); + WARN_ON(!sp->arch.unsync); trace_kvm_mmu_sync_page(sp); - sp->unsync = 0; + sp->arch.unsync = 0; --kvm->stat.mmu_unsync; } @@ -1894,7 +1894,7 @@ static bool sp_has_gptes(struct kvm_mmu_page *sp) } #define for_each_valid_sp(_kvm, _sp, _list) \ - hlist_for_each_entry(_sp, _list, hash_link) \ + hlist_for_each_entry(_sp, _list, arch.hash_link) \ if (is_obsolete_sp((_kvm), (_sp))) { \ } else @@ -1934,7 +1934,7 @@ static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp) /* TDP MMU pages do not use the MMU generation. */ return !is_tdp_mmu_page(sp) && - unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen); + unlikely(sp->arch.mmu_valid_gen != kvm->arch.mmu_valid_gen); } struct mmu_page_path { @@ -2006,7 +2006,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents) WARN_ON(idx == INVALID_INDEX); clear_unsync_child_bit(sp, idx); level++; - } while (!sp->unsync_children); + } while (!sp->arch.unsync_children); } static int mmu_sync_children(struct kvm_vcpu *vcpu, @@ -2053,7 +2053,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu, static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp) { - atomic_set(&sp->write_flooding_count, 0); + atomic_set(&sp->arch.write_flooding_count, 0); } static void clear_sp_write_flooding_count(u64 *spte) @@ -2094,7 +2094,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm, * Unsync pages must not be left as is, because the new * upper-level page will be write-protected. 
*/ - if (role.level > PG_LEVEL_4K && sp->unsync) + if (role.level > PG_LEVEL_4K && sp->arch.unsync) kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); continue; @@ -2104,7 +2104,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm, if (sp->role.arch.direct) goto out; - if (sp->unsync) { + if (sp->arch.unsync) { if (KVM_BUG_ON(!vcpu, kvm)) break; @@ -2163,25 +2163,26 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm, sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache); if (!role.arch.direct) - sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache); + sp->arch.shadowed_translation = + kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); - INIT_LIST_HEAD(&sp->possible_nx_huge_page_link); + INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link); /* * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages() * depends on valid pages being added to the head of the list. See * comments in kvm_zap_obsolete_pages(). */ - sp->mmu_valid_gen = kvm->arch.mmu_valid_gen; + sp->arch.mmu_valid_gen = kvm->arch.mmu_valid_gen; list_add(&sp->link, &kvm->arch.active_mmu_pages); kvm_account_mmu_page(kvm, sp); sp->gfn = gfn; sp->role = role; - sp->shadow_mmu_page = true; - hlist_add_head(&sp->hash_link, sp_list); + sp->arch.shadow_mmu_page = true; + hlist_add_head(&sp->arch.hash_link, sp_list); if (sp_has_gptes(sp)) account_shadowed(kvm, sp); @@ -2368,7 +2369,7 @@ static void __link_shadow_page(struct kvm *kvm, mmu_page_add_parent_pte(cache, sp, sptep); - if (sp->unsync_children || sp->unsync) + if (sp->arch.unsync_children || sp->arch.unsync) mark_unsync(sptep); } @@ -2421,7 +2422,8 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp, * avoids retaining a large number of stale nested SPs. */ if (tdp_enabled && invalid_list && - child->role.arch.guest_mode && !child->parent_ptes.val) + child->role.arch.guest_mode && + !child->arch.parent_ptes.val) return kvm_mmu_prepare_zap_page(kvm, child, invalid_list); } @@ -2449,7 +2451,7 @@ static void kvm_mmu_unlink_parents(struct kvm_mmu_page *sp) u64 *sptep; struct rmap_iterator iter; - while ((sptep = rmap_get_first(&sp->parent_ptes, &iter))) + while ((sptep = rmap_get_first(&sp->arch.parent_ptes, &iter))) drop_parent_pte(sp, sptep); } @@ -2496,7 +2498,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm, if (!sp->role.invalid && sp_has_gptes(sp)) unaccount_shadowed(kvm, sp); - if (sp->unsync) + if (sp->arch.unsync) kvm_unlink_unsync_page(kvm, sp); if (!refcount_read(&sp->root_refcount)) { /* Count self */ @@ -2527,7 +2529,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm, zapped_root = !is_obsolete_sp(kvm, sp); } - if (sp->nx_huge_page_disallowed) + if (sp->arch.nx_huge_page_disallowed) unaccount_nx_huge_page(kvm, sp); sp->role.invalid = 1; @@ -2704,7 +2706,7 @@ static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp) { trace_kvm_mmu_unsync_page(sp); ++kvm->stat.mmu_unsync; - sp->unsync = 1; + sp->arch.unsync = 1; kvm_mmu_mark_parents_unsync(sp); } @@ -2739,7 +2741,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, if (!can_unsync) return -EPERM; - if (sp->unsync) + if (sp->arch.unsync) continue; if (prefetch) @@ -2764,7 +2766,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, * for write, i.e. 
unsync cannot transition from 0->1 * while this CPU holds mmu_lock for read (or write). */ - if (READ_ONCE(sp->unsync)) + if (READ_ONCE(sp->arch.unsync)) continue; } @@ -2796,8 +2798,8 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, * 2.2 Guest issues TLB flush. * That causes a VM Exit. * - * 2.3 Walking of unsync pages sees sp->unsync is - * false and skips the page. + * 2.3 Walking of unsync pages sees sp->arch.unsync + * is false and skips the page. * * 2.4 Guest accesses GVA X. * Since the mapping in the SP was not updated, @@ -2805,7 +2807,7 @@ int mmu_try_to_unsync_pages(struct kvm *kvm, const struct kvm_memory_slot *slot, * gets used. * 1.1 Host marks SP * as unsync - * (sp->unsync = true) + * (sp->arch.unsync = true) * * The write barrier below ensures that 1.1 happens before 1.2 and thus * the situation in 2.4 does not arise. It pairs with the read barrier @@ -3126,7 +3128,7 @@ void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_ cur_level == fault->goal_level && is_shadow_present_pte(spte) && !is_large_pte(spte) && - spte_to_child_sp(spte)->nx_huge_page_disallowed) { + spte_to_child_sp(spte)->arch.nx_huge_page_disallowed) { /* * A small SPTE exists for this pfn, but FNAME(fetch), * direct_map(), or kvm_tdp_mmu_map() would like to create a @@ -3902,7 +3904,7 @@ static bool is_unsync_root(hpa_t root) /* * The read barrier orders the CPU's read of SPTE.W during the page table - * walk before the reads of sp->unsync/sp->unsync_children here. + * walk before the reads of sp->arch.{unsync,unsync_children} here. * * Even if another CPU was marking the SP as unsync-ed simultaneously, * any guest page table changes are not guaranteed to be visible anyway @@ -3922,7 +3924,7 @@ static bool is_unsync_root(hpa_t root) if (WARN_ON_ONCE(!sp)) return false; - if (sp->unsync || sp->unsync_children) + if (sp->arch.unsync || sp->arch.unsync_children) return true; return false; @@ -5510,8 +5512,8 @@ static bool detect_write_flooding(struct kvm_mmu_page *sp) if (sp->role.level == PG_LEVEL_4K) return false; - atomic_inc(&sp->write_flooding_count); - return atomic_read(&sp->write_flooding_count) >= 3; + atomic_inc(&sp->arch.write_flooding_count); + return atomic_read(&sp->arch.write_flooding_count) >= 3; } /* @@ -6389,7 +6391,7 @@ static bool shadow_mmu_try_split_huge_pages(struct kvm *kvm, continue; /* SPs with level >PG_LEVEL_4K should never by unsync. */ - if (WARN_ON_ONCE(sp->unsync)) + if (WARN_ON_ONCE(sp->arch.unsync)) continue; /* Don't bother splitting huge pages on invalid SPs. 
*/ @@ -6941,8 +6943,8 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm) */ sp = list_first_entry(&kvm->arch.possible_nx_huge_pages, struct kvm_mmu_page, - possible_nx_huge_page_link); - WARN_ON_ONCE(!sp->nx_huge_page_disallowed); + arch.possible_nx_huge_page_link); + WARN_ON_ONCE(!sp->arch.nx_huge_page_disallowed); WARN_ON_ONCE(!sp->role.arch.direct); /* @@ -6977,7 +6979,7 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm) flush |= kvm_tdp_mmu_zap_sp(kvm, sp); else kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); - WARN_ON_ONCE(sp->nx_huge_page_disallowed); + WARN_ON_ONCE(sp->arch.nx_huge_page_disallowed); if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) { kvm_mmu_remote_flush_or_zap(kvm, &invalid_list, flush); diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index fd4990c8b0e9..af2ae4887e35 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -44,89 +44,6 @@ extern bool dbg; #define INVALID_PAE_ROOT 0 #define IS_VALID_PAE_ROOT(x) (!!(x)) -struct kvm_mmu_page { - /* - * Note, "link" through "spt" fit in a single 64 byte cache line on - * 64-bit kernels, keep it that way unless there's a reason not to. - */ - struct list_head link; - struct hlist_node hash_link; - - bool shadow_mmu_page; - bool unsync; - u8 mmu_valid_gen; - - /* - * The shadow page can't be replaced by an equivalent huge page - * because it is being used to map an executable page in the guest - * and the NX huge page mitigation is enabled. - */ - bool nx_huge_page_disallowed; - - /* - * The following two entries are used to key the shadow page in the - * hash table. - */ - union kvm_mmu_page_role role; - gfn_t gfn; - - u64 *spt; - - /* - * Stores the result of the guest translation being shadowed by each - * SPTE. KVM shadows two types of guest translations: nGPA -> GPA - * (shadow EPT/NPT) and GVA -> GPA (traditional shadow paging). In both - * cases the result of the translation is a GPA and a set of access - * constraints. - * - * The GFN is stored in the upper bits (PAGE_SHIFT) and the shadowed - * access permissions are stored in the lower bits. Note, for - * convenience and uniformity across guests, the access permissions are - * stored in KVM format (e.g. ACC_EXEC_MASK) not the raw guest format. - */ - u64 *shadowed_translation; - - /* Currently serving as active root */ - refcount_t root_refcount; - - unsigned int unsync_children; - union { - struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */ - tdp_ptep_t ptep; - }; - union { - DECLARE_BITMAP(unsync_child_bitmap, 512); - struct { - struct work_struct tdp_mmu_async_work; - void *tdp_mmu_async_data; - }; - }; - - /* - * Tracks shadow pages that, if zapped, would allow KVM to create an NX - * huge page. A shadow page will have nx_huge_page_disallowed set but - * not be on the list if a huge page is disallowed for other reasons, - * e.g. because KVM is shadowing a PTE at the same gfn, the memslot - * isn't properly aligned, etc... - */ - struct list_head possible_nx_huge_page_link; -#ifdef CONFIG_X86_32 - /* - * Used out of the mmu-lock to avoid reading spte values while an - * update is in progress; see the comments in __get_spte_lockless(). - */ - int clear_spte_count; -#endif - - /* Number of writes since the last time traversal visited this page. */ - atomic_t write_flooding_count; - -#ifdef CONFIG_X86_64 - /* Used for freeing the page asynchronously if it is a TDP MMU page. 
*/ - struct rcu_head rcu_head; -#endif -}; - extern struct kmem_cache *mmu_page_header_cache; static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ffd10ce3eae3..335f26dabdf3 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -16,11 +16,11 @@ __field(bool, unsync) #define KVM_MMU_PAGE_ASSIGN(sp) \ - __entry->mmu_valid_gen = sp->mmu_valid_gen; \ + __entry->mmu_valid_gen = sp->arch.mmu_valid_gen; \ __entry->gfn = sp->gfn; \ __entry->role = sp->role.word; \ __entry->root_count = refcount_read(&sp->root_refcount); \ - __entry->unsync = sp->unsync; + __entry->unsync = sp->arch.unsync; #define KVM_MMU_PAGE_PRINTK() ({ \ const char *saved_ptr = trace_seq_buffer_ptr(p); \ diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index e15ec1c473da..18bb92b70a01 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -671,7 +671,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, * KVM_REQ_MMU_SYNC is not necessary but it * expedites the process. */ - if (sp->unsync_children && + if (sp->arch.unsync_children && mmu_sync_children(vcpu, sp, false)) return RET_PF_RETRY; } @@ -921,7 +921,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa) pt_element_t gpte; gpa_t pte_gpa; - if (!sp->unsync) + if (!sp->arch.unsync) break; pte_gpa = FNAME(get_level1_sp_gpa)(sp); @@ -942,7 +942,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa) FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false); } - if (!sp->unsync_children) + if (!sp->arch.unsync_children) break; } write_unlock(&vcpu->kvm->mmu_lock); @@ -974,8 +974,8 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, } /* - * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is - * safe because: + * Using the information in sp->arch.shadowed_translation + * (kvm_mmu_page_get_gfn()) is safe because: * - The spte has a reference to the struct page, so the pfn for a given gfn * can't change unless all sptes pointing to it are nuked first. 
* diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 34d674080170..66231c7ed31e 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -270,7 +270,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn, union kvm_mmu_page_role role) { - INIT_LIST_HEAD(&sp->possible_nx_huge_page_link); + INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link); set_page_private(virt_to_page(sp->spt), (unsigned long)sp); @@ -385,7 +385,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, { tdp_unaccount_mmu_page(kvm, sp); - if (!sp->nx_huge_page_disallowed) + if (!sp->arch.nx_huge_page_disallowed) return; if (shared) @@ -393,7 +393,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, else lockdep_assert_held_write(&kvm->mmu_lock); - sp->nx_huge_page_disallowed = false; + sp->arch.nx_huge_page_disallowed = false; untrack_possible_nx_huge_page(kvm, sp); if (shared) @@ -1181,7 +1181,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_child_sp(sp, &iter); - sp->nx_huge_page_disallowed = fault->huge_page_disallowed; + sp->arch.nx_huge_page_disallowed = fault->huge_page_disallowed; if (is_shadow_present_pte(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 19d3153051a3..e6a929089715 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -73,7 +73,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr, #ifdef CONFIG_X86_64 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { - return !sp->shadow_mmu_page; + return !sp->arch.shadow_mmu_page; } #else static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; } diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h index 14099956fdac..a9da33d4baa8 100644 --- a/include/kvm/mmu_types.h +++ b/include/kvm/mmu_types.h @@ -3,8 +3,11 @@ #define __KVM_MMU_TYPES_H #include -#include +#include +#include #include +#include +#include #include @@ -36,4 +39,31 @@ static_assert(sizeof(union kvm_mmu_page_role) == sizeof_field(union kvm_mmu_page typedef u64 __rcu *tdp_ptep_t; +struct kvm_mmu_page { + struct list_head link; + + union kvm_mmu_page_role role; + + /* The start of the GFN region mapped by this shadow page. */ + gfn_t gfn; + + /* The page table page. */ + u64 *spt; + + /* The PTE that points to this shadow page. */ + tdp_ptep_t ptep; + + /* Used for freeing TDP MMU pages asynchronously. */ + struct rcu_head rcu_head; + + /* The number of references to this shadow page as a root. */ + refcount_t root_refcount; + + /* Used for tearing down an entire page table tree. 
*/ + struct work_struct tdp_mmu_async_work; + void *tdp_mmu_async_data; + + struct kvm_mmu_page_arch arch; +}; + #endif /* !__KVM_MMU_TYPES_H */
From patchwork Thu Dec 8 19:38:27 2022
Date: Thu, 8 Dec 2022 11:38:27 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-8-dmatlack@google.com>
Subject: [RFC PATCH 07/37] mm: Introduce architecture-neutral PG_LEVEL macros
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Oliver Upton, Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Morton, David Matlack, Anshuman Khandual, Nadav Amit, "Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett", Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao, Colin Cross, Hugh Dickins, Ben Gardon, Mingwei Zhang, Krish Sadhukhan, Ricardo Koller, Jing Zhang, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org

Introduce architecture-neutral versions of the x86 macros PG_LEVEL_4K,
PG_LEVEL_2M, etc. The x86 macros are used extensively by KVM/x86's page
table management code. Introducing architecture-neutral versions of these
macros paves the way for porting KVM/x86's page table management code to
architecture-neutral code. 
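For illustration only (not part of the patch; the helper name is invented here,
and the 9-bits-per-level assumption matches x86-64 with 4 KiB base pages, not
necessarily every architecture), architecture-neutral code could translate one
of the new pg_level values into a mapping size like so:

/*
 * Example only: bytes mapped by a single entry at @level, assuming
 * 4 KiB base pages and 9 bits of translation per level. PG_LEVEL_PTE
 * maps 4 KiB, PG_LEVEL_PMD 2 MiB, PG_LEVEL_PUD 1 GiB, PG_LEVEL_P4D
 * 512 GiB under those assumptions.
 */
static inline u64 example_page_level_size(enum pg_level level)
{
	return 1ULL << (PAGE_SHIFT + 9 * (level - PG_LEVEL_PTE));
}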
Signed-off-by: David Matlack --- arch/x86/include/asm/pgtable_types.h | 12 ++++-------- include/linux/mm_types.h | 9 +++++++++ 2 files changed, 13 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index aa174fed3a71..bdf41325f089 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -518,14 +518,10 @@ extern void native_pagetable_init(void); struct seq_file; extern void arch_report_meminfo(struct seq_file *m); -enum pg_level { - PG_LEVEL_NONE, - PG_LEVEL_4K, - PG_LEVEL_2M, - PG_LEVEL_1G, - PG_LEVEL_512G, - PG_LEVEL_NUM -}; +#define PG_LEVEL_4K PG_LEVEL_PTE +#define PG_LEVEL_2M PG_LEVEL_PMD +#define PG_LEVEL_1G PG_LEVEL_PUD +#define PG_LEVEL_512G PG_LEVEL_P4D #ifdef CONFIG_PROC_FS extern void update_page_count(int level, unsigned long pages); diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 500e536796ca..0445d0673afe 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1003,4 +1003,13 @@ enum fault_flag { typedef unsigned int __bitwise zap_flags_t; +enum pg_level { + PG_LEVEL_NONE, + PG_LEVEL_PTE, + PG_LEVEL_PMD, + PG_LEVEL_PUD, + PG_LEVEL_P4D, + PG_LEVEL_NUM +}; + #endif /* _LINUX_MM_TYPES_H */ From patchwork Thu Dec 8 19:38:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068896 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 05210C4332F for ; Thu, 8 Dec 2022 20:54:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=F+8IF++nezNgdS1U3011ieJg3yP9tt0t2Eil5TQ8Tlw=; b=DBX2W7NVMGpZVNh1erau4LWh9U juGRniwXRm0VZ+xidoQzgN2guxfe/ohFYFUZNBCh+w2sIbBFvZNeN9yqA+dOw/r9aQa1tO8WaCnKt 3LIOD6K51JdPUepIYd/mhKw7gDIANgr3DIueCsSYe6Htu5ggI7ibQkqqcOWgTEtwzQgsn66PYuENd /okN/Fel2IeU5dWQlW0oomjIclPZsNXOPzq3XKQVA9QuMk9Ggikwirn/EF9nEevszoVDyz/leUT6n 3zylMeZwUzZ5BEDkp9OzML5k9a5zO28oRAS8JTRk3SkZ1H6nr6POnY/Q7llK1Er1ihoYNuVpJqTAk os7Bpa4g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NuQ-00B115-3j; Thu, 08 Dec 2022 20:54:22 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mjx-009rhD-7g for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 19:39:29 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=SSdGxEhCr0RqVU7mTEekOKbILUbkr12FLBfwU71mUxI=; b=F+XT8wz6ZyllLxbinEwHSYeifh jPSs6APnN/i7u/3yixjRXHU/GzWbhnPCUJH5wLzh+vco2KIxRvLsAzs9j8TAcqnhpSxZk5bcbjFR5 
ps18u2p+8Z4R8CWM8tgox85FXKXFJanWal0hRKr6uvVT0Y3acT9Qke2rlwu1jwIgkMu0nhGraEy5h BxQia7/P+nRjEN+wUs44wWY4zxSSCGKqWmmF4rJgIQduPMk6DSid24W0XG0rPkUOqhvP446WMhHeL FpyZNsA5vUPaQQwOjTEIOrlzuqYMwZ339c0tOl1cDr6LLcNaPum6lQE/+y2Rbb4uqi/a2YRdOpDJQ 1mo/3a0Q==; Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by casper.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mjz-007FwH-Fu for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:39:36 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id t9-20020a5b03c9000000b006cff5077dc9so2551647ybp.3 for ; Thu, 08 Dec 2022 11:39:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=SSdGxEhCr0RqVU7mTEekOKbILUbkr12FLBfwU71mUxI=; b=ASwdiNISwi39e3pN7oE2KNZFMmRKqmr3VYiit4wIyv47H88t6EbCKRpQMvgIGjb8Ee rJVPUEvW73NAo8aFV+BbV0Vr3OSkq29yA95vS5SMRxjzWXbnV3DZI/1DBZWOB8qdIrDl W6zsADUJr9yTIozq12aJEIz0+DdiTk+hebpQwi9Spbwq0O7+Rp/AhyZ8kHc6W7GbLEQQ W0hWYQM1VKgOje1FPxZ4mA6ztD8KMzpQNn+h5VJfv3SoW4uFDeXDtliLmzD104DdQ5jS RPnONZy1sj8iUseTf5J8a4CjaI/8JgGCw7ZrAZK92Qp419zeHoy00JFMymW+m0ow9uJP AA4Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=SSdGxEhCr0RqVU7mTEekOKbILUbkr12FLBfwU71mUxI=; b=NY0n0JkgjsUau7EhFkJn1aC41oASbOFZDocHTM6fzKBQ2j9strYNR10ITouDIHODdh 0oXuuxaWMDKPPrgk4S+0U3kkIvZ+1VHp9Ni8KMSWZK1toUDL/GfZtEjFDmtP3TvefQIf MoPCpSn0KO0LPIeD5oYH2EUKYFs6pK001ZM98/G3MqXcD9nSNAqm595uWQgarF6I5cuP /MgwXIex+SN55AHj7v9gL5em92PS7g8BFvamdmBfMwPIGSKnBMGDYOeEAeUBlwz4W6Pm FhkwWy5bRZ/h0kTk5u5H/msfyWGaMvF8lZ5zFKLL2bpLzjdJj8WW+vQ7h+ZjnWwGtIBX g4qA== X-Gm-Message-State: ANoB5pmQ0PZu/qBKVtkxBlMfSNpByUy/YeM6I+l/S9u/TXZHv54rmptR +Jmd15CHxyzQySBWLSVIWczgtlxU7l0bpg== X-Google-Smtp-Source: AA0mqf47fCCsz8lLNEesw2/yjinjmb5c0Q+BVTcOByCxMnOsH1R3SmOt/A4xjy4bk2xWJgLDn+u8z1uj6TTwRA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a81:9943:0:b0:3b6:79b0:c10e with SMTP id q64-20020a819943000000b003b679b0c10emr62215766ywg.87.1670528360357; Thu, 08 Dec 2022 11:39:20 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:28 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-9-dmatlack@google.com> Subject: [RFC PATCH 08/37] KVM: selftests: Stop assuming stats are contiguous in kvm_binary_stats_test From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193931_541661_C6C5AAC6 X-CRM114-Status: GOOD ( 10.70 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Jing Zhang Remove the assumption from kvm_binary_stats_test that all stats are laid out contiguously in memory. The KVM stats ABI specifically allows holes in the stats data, since each stat exposes its own offset. While here drop the check that each stats' offset is less than size_data, as that is now always true by construction. Fixes: 0b45d58738cd ("KVM: selftests: Add selftest for KVM statistics data binary interface") Signed-off-by: Jing Zhang Signed-off-by: David Matlack [Re-worded the commit message.] Signed-off-by: David Matlack --- tools/testing/selftests/kvm/kvm_binary_stats_test.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c index 894417c96f70..46a66ece29fd 100644 --- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c +++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c @@ -134,7 +134,8 @@ static void stats_test(int stats_fd) "Bucket size of stats (%s) is not zero", pdesc->name); } - size_data += pdesc->size * sizeof(*stats_data); + size_data = max(size_data, + pdesc->offset + pdesc->size * sizeof(*stats_data)); } /* @@ -149,14 +150,6 @@ static void stats_test(int stats_fd) TEST_ASSERT(size_data >= header.num_desc * sizeof(*stats_data), "Data size is not correct"); - /* Check stats offset */ - for (i = 0; i < header.num_desc; ++i) { - pdesc = get_stats_descriptor(stats_desc, i, &header); - TEST_ASSERT(pdesc->offset < size_data, - "Invalid offset (%u) for stats: %s", - pdesc->offset, pdesc->name); - } - /* Allocate memory for stats data */ stats_data = malloc(size_data); TEST_ASSERT(stats_data, "Allocate memory for stats data"); From patchwork Thu Dec 8 19:38:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068898 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 381F0C4332F for ; Thu, 8 Dec 2022 20:55:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc 
p7-20020a25d807000000b006f41c40273amr52016342ybg.622.1670528361862; Thu, 08 Dec 2022 11:39:21 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:29 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-10-dmatlack@google.com> Subject: [RFC PATCH 09/37] KVM: Move page size stats into common code From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193936_235017_CF21AFC8 X-CRM114-Status: GOOD ( 13.73 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move the page size stats into common code. This will be used in a future commit to move the TDP MMU, which populates these stats, into common code. Architectures can also start populating these stats if they wish, and export different stats depending on the page size. Continue to only expose these stats on x86, since that's currently the only architecture that populates them. 
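As a minimal sketch of what this enables (not part of this patch; handle_changed_leaf() and its arguments are hypothetical), any architecture that tracks leaf mappings could account them through the same generic counters the x86 TDP MMU uses:

        static void handle_changed_leaf(struct kvm *kvm, int level,
                                        bool was_present, bool is_present)
        {
                if (is_present && !was_present)
                        kvm_update_page_stats(kvm, level, 1);   /* e.g. PG_LEVEL_PMD -> pages_2m++ */
                else if (was_present && !is_present)
                        kvm_update_page_stats(kvm, level, -1);
        }

Because the union overlays the named pages_4k/pages_2m/pages_1g counters on the pages[] array, indexing by (level - 1) keeps the existing stat names visible to userspace unchanged.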
Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_host.h | 8 -------- arch/x86/kvm/mmu.h | 5 ----- arch/x86/kvm/x86.c | 6 +++--- include/linux/kvm_host.h | 5 +++++ include/linux/kvm_types.h | 9 +++++++++ 5 files changed, 17 insertions(+), 16 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index f5743a652e10..9cf8f956bac3 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1363,14 +1363,6 @@ struct kvm_vm_stat { u64 mmu_recycled; u64 mmu_cache_miss; u64 mmu_unsync; - union { - struct { - atomic64_t pages_4k; - atomic64_t pages_2m; - atomic64_t pages_1g; - }; - atomic64_t pages[KVM_NR_PAGE_SIZES]; - }; u64 nx_lpage_splits; u64 max_mmu_page_hash_collisions; u64 max_mmu_rmap_size; diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h index 168c46fd8dd1..ec662108d2eb 100644 --- a/arch/x86/kvm/mmu.h +++ b/arch/x86/kvm/mmu.h @@ -261,11 +261,6 @@ kvm_mmu_slot_lpages(struct kvm_memory_slot *slot, int level) return __kvm_mmu_slot_lpages(slot, slot->npages, level); } -static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count) -{ - atomic64_add(count, &kvm->stat.pages[level - 1]); -} - gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access, struct x86_exception *exception); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 2bfe060768fc..517c8ed33542 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -231,6 +231,9 @@ EXPORT_SYMBOL_GPL(host_xss); const struct _kvm_stats_desc kvm_vm_stats_desc[] = { KVM_GENERIC_VM_STATS(), + STATS_DESC_ICOUNTER(VM_GENERIC, pages_4k), + STATS_DESC_ICOUNTER(VM_GENERIC, pages_2m), + STATS_DESC_ICOUNTER(VM_GENERIC, pages_1g), STATS_DESC_COUNTER(VM, mmu_shadow_zapped), STATS_DESC_COUNTER(VM, mmu_pte_write), STATS_DESC_COUNTER(VM, mmu_pde_zapped), @@ -238,9 +241,6 @@ const struct _kvm_stats_desc kvm_vm_stats_desc[] = { STATS_DESC_COUNTER(VM, mmu_recycled), STATS_DESC_COUNTER(VM, mmu_cache_miss), STATS_DESC_ICOUNTER(VM, mmu_unsync), - STATS_DESC_ICOUNTER(VM, pages_4k), - STATS_DESC_ICOUNTER(VM, pages_2m), - STATS_DESC_ICOUNTER(VM, pages_1g), STATS_DESC_ICOUNTER(VM, nx_lpage_splits), STATS_DESC_PCOUNTER(VM, max_mmu_rmap_size), STATS_DESC_PCOUNTER(VM, max_mmu_page_hash_collisions) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index f16c4689322b..22ecb7ce4d31 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2280,4 +2280,9 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count) +{ + atomic64_add(count, &kvm->stat.generic.pages[level - 1]); +} + #endif diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index 76de36e56cdf..59cf958d69df 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -20,6 +20,7 @@ enum kvm_mr_change; #include #include +#include #include #include @@ -105,6 +106,14 @@ struct kvm_mmu_memory_cache { struct kvm_vm_stat_generic { u64 remote_tlb_flush; u64 remote_tlb_flush_requests; + union { + struct { + atomic64_t pages_4k; + atomic64_t pages_2m; + atomic64_t pages_1g; + }; + atomic64_t pages[PG_LEVEL_NUM - 1]; + }; }; struct kvm_vcpu_stat_generic { From patchwork Thu Dec 8 19:38:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068897 Return-Path: 
b=UwNIuZjSXvPP/kvYfLSLAQ8qdry/yzU271JPWXULKuPgYeM3v8iDBlVlFf+lXGI0C2 vPcOkZPPwOTDNoEIT7T65WvfZk9d4artLhUpzmHQZW6pXRBA9uMxhJSk2hJXQbDTTAzx nWDPEA+jaJY8oaK35uKd0XhL7mo7V0hORNF9qPVA+LlbiNALTQ995Dd4TzOv6mvxeTUI Ko+reWNThakDYia5j1NQ+ANisQTWhMI7pvzSZcVVqGJ9/lwH9NhcRq3RLCXj919O779s plcZT2dVJ9kKLT2MMqf3DvSHC5J4VcUVOhlge8uyFODK139uSMjPW4NvERVmmGCx3DoJ U6jA== X-Gm-Message-State: ANoB5pluIpfTjwlaGwiif8N7G2y2moiLdIV2gM6RUQVxcaKwvG3xP8HN K0Yv3IywBOKo2ScG8v/6FN1JuDwsgstO8Q== X-Google-Smtp-Source: AA0mqf6OihMD/jRMNDsQCgTO9v71+W6lhbU6PuqI9O7PgDgRQqnZbXFvZf6gCEBh5Edo3Jt/baNcknprN6F5eA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a05:690c:f85:b0:408:babe:aa8c with SMTP id df5-20020a05690c0f8500b00408babeaa8cmr23477ywb.181.1670528363671; Thu, 08 Dec 2022 11:39:23 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:30 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-11-dmatlack@google.com> Subject: [RFC PATCH 10/37] KVM: MMU: Move struct kvm_page_fault to common code From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193933_498303_A2ADC1B5 X-CRM114-Status: GOOD ( 22.09 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move struct kvm_page_fault to common code. This will be used in a future commit to move the TDP MMU to common code. No functional change intended. 
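To make the split concrete, here is a sketch of how the two halves are meant to be used (illustrative only; fault_allows_huge_page() is a hypothetical helper, not something this patch adds). Common code reads the generic fields directly, and anything x86-specific is reached through fault->arch:

        static bool fault_allows_huge_page(const struct kvm_page_fault *fault)
        {
                /* Generic input shared by all architectures. */
                if (fault->max_level == PG_LEVEL_PTE)
                        return false;

                /* x86-only policy now lives behind fault->arch. */
                return !fault->arch.huge_page_disallowed;
        }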
Signed-off-by: David Matlack --- arch/x86/include/asm/kvm/mmu_types.h | 20 +++++++ arch/x86/kvm/mmu/mmu.c | 19 +++---- arch/x86/kvm/mmu/mmu_internal.h | 79 ++++++---------------------- arch/x86/kvm/mmu/mmutrace.h | 2 +- arch/x86/kvm/mmu/paging_tmpl.h | 14 ++--- arch/x86/kvm/mmu/tdp_mmu.c | 6 +-- include/kvm/mmu_types.h | 44 ++++++++++++++++ 7 files changed, 100 insertions(+), 84 deletions(-) diff --git a/arch/x86/include/asm/kvm/mmu_types.h b/arch/x86/include/asm/kvm/mmu_types.h index affcb520b482..59d1be85f4b7 100644 --- a/arch/x86/include/asm/kvm/mmu_types.h +++ b/arch/x86/include/asm/kvm/mmu_types.h @@ -5,6 +5,7 @@ #include #include #include +#include /* * This is a subset of the overall kvm_cpu_role to minimize the size of @@ -115,4 +116,23 @@ struct kvm_mmu_page_arch { atomic_t write_flooding_count; }; +struct kvm_page_fault_arch { + const u32 error_code; + + /* x86-specific error code bits */ + const bool present; + const bool rsvd; + const bool user; + + /* Derived from mmu and global state. */ + const bool is_tdp; + const bool nx_huge_page_workaround_enabled; + + /* + * Whether a >4KB mapping can be created or is forbidden due to NX + * hugepages. + */ + bool huge_page_disallowed; +}; + #endif /* !__ASM_KVM_MMU_TYPES_H */ diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e47f35878ab5..0593d4a60139 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3092,7 +3092,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault struct kvm_memory_slot *slot = fault->slot; kvm_pfn_t mask; - fault->huge_page_disallowed = fault->exec && fault->nx_huge_page_workaround_enabled; + fault->arch.huge_page_disallowed = + fault->exec && fault->arch.nx_huge_page_workaround_enabled; if (unlikely(fault->max_level == PG_LEVEL_4K)) return; @@ -3109,7 +3110,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault */ fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, fault->gfn, fault->max_level); - if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed) + if (fault->req_level == PG_LEVEL_4K || fault->arch.huge_page_disallowed) return; /* @@ -3158,7 +3159,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) * We cannot overwrite existing page tables with an NX * large page, as the leaf could be executable. */ - if (fault->nx_huge_page_workaround_enabled) + if (fault->arch.nx_huge_page_workaround_enabled) disallowed_hugepage_adjust(fault, *it.sptep, it.level); base_gfn = fault->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1); @@ -3170,7 +3171,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) continue; link_shadow_page(vcpu, it.sptep, sp); - if (fault->huge_page_disallowed) + if (fault->arch.huge_page_disallowed) account_nx_huge_page(vcpu->kvm, sp, fault->req_level >= it.level); } @@ -3221,7 +3222,7 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, unsigned int access) { - gva_t gva = fault->is_tdp ? 0 : fault->addr; + gva_t gva = fault->arch.is_tdp ? 0 : fault->addr; vcpu_cache_mmio_info(vcpu, gva, fault->gfn, access & shadow_mmio_access_mask); @@ -3255,7 +3256,7 @@ static bool page_fault_can_be_fast(struct kvm_page_fault *fault) * generation number. Refreshing the MMIO generation needs to go down * the slow path. Note, EPT Misconfigs do NOT set the PRESENT flag! 
*/ - if (fault->rsvd) + if (fault->arch.rsvd) return false; /* @@ -3273,7 +3274,7 @@ static bool page_fault_can_be_fast(struct kvm_page_fault *fault) * SPTE is MMU-writable (determined later), the fault can be fixed * by setting the Writable bit, which can be done out of mmu_lock. */ - if (!fault->present) + if (!fault->arch.present) return !kvm_ad_enabled(); /* @@ -4119,10 +4120,10 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct) static bool page_fault_handle_page_track(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { - if (unlikely(fault->rsvd)) + if (unlikely(fault->arch.rsvd)) return false; - if (!fault->present || !fault->write) + if (!fault->arch.present || !fault->write) return false; /* diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index af2ae4887e35..4abb80a3bd01 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -77,60 +77,6 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm) return READ_ONCE(nx_huge_pages) && !kvm->arch.disable_nx_huge_pages; } -struct kvm_page_fault { - /* arguments to kvm_mmu_do_page_fault. */ - const gpa_t addr; - const u32 error_code; - const bool prefetch; - - /* Derived from error_code. */ - const bool exec; - const bool write; - const bool present; - const bool rsvd; - const bool user; - - /* Derived from mmu and global state. */ - const bool is_tdp; - const bool nx_huge_page_workaround_enabled; - - /* - * Whether a >4KB mapping can be created or is forbidden due to NX - * hugepages. - */ - bool huge_page_disallowed; - - /* - * Maximum page size that can be created for this fault; input to - * FNAME(fetch), direct_map() and kvm_tdp_mmu_map(). - */ - u8 max_level; - - /* - * Page size that can be created based on the max_level and the - * page size used by the host mapping. - */ - u8 req_level; - - /* - * Page size that will be created based on the req_level and - * huge_page_disallowed. - */ - u8 goal_level; - - /* Shifted addr, or result of guest page table walk if addr is a gva. */ - gfn_t gfn; - - /* The memslot containing gfn. May be NULL. */ - struct kvm_memory_slot *slot; - - /* Outputs of kvm_faultin_pfn. 
*/ - unsigned long mmu_seq; - kvm_pfn_t pfn; - hva_t hva; - bool map_writable; -}; - int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); /* @@ -164,22 +110,27 @@ enum { static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 err, bool prefetch) { + bool is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault); struct kvm_page_fault fault = { .addr = cr2_or_gpa, - .error_code = err, - .exec = err & PFERR_FETCH_MASK, - .write = err & PFERR_WRITE_MASK, - .present = err & PFERR_PRESENT_MASK, - .rsvd = err & PFERR_RSVD_MASK, - .user = err & PFERR_USER_MASK, .prefetch = prefetch, - .is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault), - .nx_huge_page_workaround_enabled = - is_nx_huge_page_enabled(vcpu->kvm), + + .write = err & PFERR_WRITE_MASK, + .exec = err & PFERR_FETCH_MASK, .max_level = KVM_MAX_HUGEPAGE_LEVEL, .req_level = PG_LEVEL_4K, .goal_level = PG_LEVEL_4K, + + .arch = { + .error_code = err, + .present = err & PFERR_PRESENT_MASK, + .rsvd = err & PFERR_RSVD_MASK, + .user = err & PFERR_USER_MASK, + .is_tdp = is_tdp, + .nx_huge_page_workaround_enabled = + is_nx_huge_page_enabled(vcpu->kvm), + }, }; int r; @@ -196,7 +147,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (!prefetch) vcpu->stat.pf_taken++; - if (IS_ENABLED(CONFIG_RETPOLINE) && fault.is_tdp) + if (IS_ENABLED(CONFIG_RETPOLINE) && fault.arch.is_tdp) r = kvm_tdp_page_fault(vcpu, &fault); else r = vcpu->arch.mmu->page_fault(vcpu, &fault); diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index 335f26dabdf3..b01767acf073 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -270,7 +270,7 @@ TRACE_EVENT( TP_fast_assign( __entry->vcpu_id = vcpu->vcpu_id; __entry->cr2_or_gpa = fault->addr; - __entry->error_code = fault->error_code; + __entry->error_code = fault->arch.error_code; __entry->sptep = sptep; __entry->old_spte = old_spte; __entry->new_spte = *sptep; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 18bb92b70a01..daf9c7731edc 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, * We cannot overwrite existing page tables with an NX * large page, as the leaf could be executable. */ - if (fault->nx_huge_page_workaround_enabled) + if (fault->arch.nx_huge_page_workaround_enabled) disallowed_hugepage_adjust(fault, *it.sptep, it.level); base_gfn = fault->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1); @@ -713,7 +713,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, continue; link_shadow_page(vcpu, it.sptep, sp); - if (fault->huge_page_disallowed) + if (fault->arch.huge_page_disallowed) account_nx_huge_page(vcpu->kvm, sp, fault->req_level >= it.level); } @@ -793,8 +793,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault int r; bool is_self_change_mapping; - pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code); - WARN_ON_ONCE(fault->is_tdp); + pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->arch.error_code); + WARN_ON_ONCE(fault->arch.is_tdp); /* * Look up the guest pte for the faulting address. @@ -802,7 +802,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * The bit needs to be cleared before walking guest page tables. 
*/ r = FNAME(walk_addr)(&walker, vcpu, fault->addr, - fault->error_code & ~PFERR_RSVD_MASK); + fault->arch.error_code & ~PFERR_RSVD_MASK); /* * The page is not mapped by the guest. Let the guest handle it. @@ -830,7 +830,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault vcpu->arch.write_fault_to_shadow_pgtable = false; is_self_change_mapping = FNAME(is_self_change_mapping)(vcpu, - &walker, fault->user, &vcpu->arch.write_fault_to_shadow_pgtable); + &walker, fault->arch.user, &vcpu->arch.write_fault_to_shadow_pgtable); if (is_self_change_mapping) fault->max_level = PG_LEVEL_4K; @@ -846,7 +846,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * we will cache the incorrect access into mmio spte. */ if (fault->write && !(walker.pte_access & ACC_WRITE_MASK) && - !is_cr0_wp(vcpu->arch.mmu) && !fault->user && fault->slot) { + !is_cr0_wp(vcpu->arch.mmu) && !fault->arch.user && fault->slot) { walker.pte_access |= ACC_WRITE_MASK; walker.pte_access &= ~ACC_USER_MASK; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 66231c7ed31e..4940413d3767 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1156,7 +1156,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) { int r; - if (fault->nx_huge_page_workaround_enabled) + if (fault->arch.nx_huge_page_workaround_enabled) disallowed_hugepage_adjust(fault, iter.old_spte, iter.level); if (iter.level == fault->goal_level) @@ -1181,7 +1181,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_child_sp(sp, &iter); - sp->arch.nx_huge_page_disallowed = fault->huge_page_disallowed; + sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed; if (is_shadow_present_pte(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); @@ -1197,7 +1197,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) goto retry; } - if (fault->huge_page_disallowed && + if (fault->arch.huge_page_disallowed && fault->req_level >= iter.level) { spin_lock(&kvm->arch.tdp_mmu_pages_lock); track_possible_nx_huge_page(kvm, sp); diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h index a9da33d4baa8..9f0ca920bf68 100644 --- a/include/kvm/mmu_types.h +++ b/include/kvm/mmu_types.h @@ -66,4 +66,48 @@ struct kvm_mmu_page { struct kvm_mmu_page_arch arch; }; +struct kvm_page_fault { + /* The raw faulting address. */ + const gpa_t addr; + + /* Whether the fault was synthesized to prefetch a mapping. */ + const bool prefetch; + + /* Information about the cause of the fault. */ + const bool write; + const bool exec; + + /* Shifted addr, or result of guest page table walk if shadow paging. */ + gfn_t gfn; + + /* The memslot that contains @gfn. May be NULL. */ + struct kvm_memory_slot *slot; + + /* Maximum page size that can be created for this fault. */ + u8 max_level; + + /* + * Page size that can be created based on the max_level and the page + * size used by the host mapping. + */ + u8 req_level; + + /* Final page size that will be created. */ + u8 goal_level; + + /* + * The value of kvm->mmu_invalidate_seq before fetching the host + * mapping. Used to verify that the host mapping has not changed + * after grabbing the MMU lock. + */ + unsigned long mmu_seq; + + /* Information about the host mapping. 
*/ + kvm_pfn_t pfn; + hva_t hva; + bool map_writable; + + struct kvm_page_fault_arch arch; +}; + #endif /* !__KVM_MMU_TYPES_H */ From patchwork Thu Dec 8 19:38:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068797 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1D54BC4167B for ; Thu, 8 Dec 2022 19:46:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=B/+L5OmrppgqbaH56wZARhewT9vR0gkGXqET9a66DKc=; b=RJsyNb1YSvKK80RKswNoplj6gv mGFQTilJfp+Zufen9uHai0ATr2lyBOXPokTNreMrScGase7YkPQQGck2FoMIm/OFdBj9ycAGDCDks bZSEFeCcqWlpI7HKg7OlQEgDyKusAPWaWv/Tmve7Ddf3fNvT+YEad/0Iqhr+7mFlUfWzFwwx8PoM4 AYh3ET9gvyyVj//d/KKj8xG4KI29PYf07XB/ZvR0INmQRRcrFAMJfFpxhQfIeElmZhwUttN/h84AG YuE6bq9YVzwYKbEa76yz9nloPoYxvxPvwcXveZ/VM4rFDBsUqqccLA1mFMf+osvHW75+dpREmLprh rapI2wGA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MqP-00A1AD-VT; Thu, 08 Dec 2022 19:46:10 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MqF-00A0zZ-Td for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:46:01 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-360b9418f64so24908577b3.7 for ; Thu, 08 Dec 2022 11:45:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jgO+J07Uxt+qL/UpUVJ8WFrJDihM9AG5kpD8vJwjTTs=; b=QrMj5zLDlUXO+ZJ45hsRtxdfgmeW4IfijEqSAU7FIBqFwathovRwq/mzAGouVficrL gQcJIU9VmUG2hiNERRKbyTP3Lu3VIHMZ2AloTdIYXMt+wMOuMe8kDJQZMRgSvO6lAwNU Y6eBROrQrZ0gTi3qA1W8jY3mdXkghgQbhZG8f+ggIaWBggOtbOEtA4aAL9qoZ6wA6iTL UulvutyuA0lBG+LimgEdM3A93jCX+bLrq+G1CQa39qEIq4rZ4XwAKrU+qa0csJ6wjd9Z 9Ua/MZeoHkveM5nl5wYxeuSnKf6S8qPxrJT/LRGNG++VotU8B0SenTfkKWEFNecwApXV JVtQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jgO+J07Uxt+qL/UpUVJ8WFrJDihM9AG5kpD8vJwjTTs=; b=yRPRlxzCNhc/6dmXIlHzGvp1MdYT668TZCNZ8LlChsTOxcuutn2Mc+9YF2wYofzjMw ysvblQFfvjaJzwqk77C3WFS+gb6AbDtHUYkdxFcD/2JcwS6mmDSIGH89K7F4+HTmLoUF zachBTjZTmFjvW7H0KRsCGyyEZQ4WM40yad9m08xVhvZZyoFN37prBx+PoBSbnbDTAzf 9A0s9EguDHx5I4brWiM1B/3SY03AQMuKcMRXFFhn2URV2zRcId2Pq/gONuY4QObi1Xhw UnhhoTIbp+DzuRcNWZERlcoVu3lrgX8jpaRtndBIoSAGtlJwUuCnW6XXClMwd4ZD48Dr iMOQ== X-Gm-Message-State: ANoB5pnXQjusgyF6ZHZin9yhb3DMPsyxeF60pM7WBeEVY/QqcJTXsCAm ms6x4E1KFlQefq4wai3uDd90btxv1QNvOg== X-Google-Smtp-Source: 
AA0mqf4cuymNRC2hTArY/T3ebCSKh1ZW9fvhwu59cqHtPCYYXp2h3U5rWse1hGwJtXv7Z28ATD5Q6YFLoVGbsg== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:d03:0:b0:706:e165:f752 with SMTP id 3-20020a250d03000000b00706e165f752mr8717208ybn.510.1670528365193; Thu, 08 Dec 2022 11:39:25 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:31 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-12-dmatlack@google.com> Subject: [RFC PATCH 11/37] KVM: MMU: Move RET_PF_* into common code From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114600_033422_3B79EECB X-CRM114-Status: GOOD ( 14.99 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move the RET_PF_* enum into common code in preparation for moving the TDP MMU into common code. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu_internal.h | 28 ---------------------------- include/kvm/mmu_types.h | 26 ++++++++++++++++++++++++++ 2 files changed, 26 insertions(+), 28 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 4abb80a3bd01..d3c1d08002af 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -79,34 +79,6 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm) int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); -/* - * Return values of handle_mmio_page_fault(), mmu.page_fault(), fast_page_fault(), - * and of course kvm_mmu_do_page_fault(). - * - * RET_PF_CONTINUE: So far, so good, keep handling the page fault. - * RET_PF_RETRY: let CPU fault again on the address. - * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. - * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. - * RET_PF_FIXED: The faulting entry has been fixed. - * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. - * - * Any names added to this enum should be exported to userspace for use in - * tracepoints via TRACE_DEFINE_ENUM() in mmutrace.h - * - * Note, all values must be greater than or equal to zero so as not to encroach - * on -errno return values. Somewhat arbitrarily use '0' for CONTINUE, which - * will allow for efficient machine code when checking for CONTINUE, e.g. - * "TEST %rax, %rax, JNZ", as all "stop!" values are non-zero. 
- */ -enum { - RET_PF_CONTINUE = 0, - RET_PF_RETRY, - RET_PF_EMULATE, - RET_PF_INVALID, - RET_PF_FIXED, - RET_PF_SPURIOUS, -}; - static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 err, bool prefetch) { diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h index 9f0ca920bf68..07c9962f9aea 100644 --- a/include/kvm/mmu_types.h +++ b/include/kvm/mmu_types.h @@ -110,4 +110,30 @@ struct kvm_page_fault { struct kvm_page_fault_arch arch; }; +/* + * Return values for page fault handling routines. + * + * RET_PF_CONTINUE: So far, so good, keep handling the page fault. + * RET_PF_RETRY: let CPU fault again on the address. + * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. + * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. + * RET_PF_FIXED: The faulting entry has been fixed. + * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. + * + * Any names added to this enum should be exported to userspace for use in + * tracepoints via TRACE_DEFINE_ENUM() in arch/x86/kvm/mmu/mmutrace.h. + * + * Note, all values must be greater than or equal to zero so as not to encroach + * on -errno return values. Somewhat arbitrarily use '0' for CONTINUE, which + * will allow for efficient machine code when checking for CONTINUE. + */ +enum { + RET_PF_CONTINUE = 0, + RET_PF_RETRY, + RET_PF_EMULATE, + RET_PF_INVALID, + RET_PF_FIXED, + RET_PF_SPURIOUS, +}; + #endif /* !__KVM_MMU_TYPES_H */ From patchwork Thu Dec 8 19:38:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 25438C4332F for ; Thu, 8 Dec 2022 19:51:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=+eb2PlZaE0QgglB8WWX1+oD4KCjyPjdAkeC8LZ+BjD4=; b=Ed2Pae6tMPHfgilt7qsYuOV4q8 2hGCX8kTynlQCQYNW82CPJ5OFK9IntUlc8Q7ccnz+qisdmMIhUFJqVdmwBDlEdBAQlYn+UuA9bf66 e6vEbOtWWKhfdY9ijlXC572IpyjEyup+VxGYMjLdeFcewIdho14jSpVIhexqNVd1up+nb0fx8bkkc 2G8V8v/xQnXC/t9dXi0hu11EaZr3q8+1A6nNrg6SvK3o25zFGm183TGcNnlOKMSPa6G7UFC+8qIqZ Um5wMEvFjiHJeP3QnkC5pQhXvvooSijIgZ5A+JXyrPvmP4zVIw6G5x711gLCT3wuwm+H5tfU/eq6V X7FkknZw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MvU-00A4hS-SC; Thu, 08 Dec 2022 19:51:24 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mrz-00A2MJ-W6 for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:47:49 +0000 Received: by mail-yb1-xb49.google.com with SMTP id i19-20020a253b13000000b0070358cca7f7so2528257yba.9 for ; Thu, 08 Dec 2022 11:47:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; 
s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=T0+hU8k+RyCmVBvBo6qJA+pYHpeWz4BnzaTdktBKy2A=; b=j2HX3lJ7u8ucEzkeGBQfuBx9Yb7N2Qy9oTH39ulAVDzwqd+G0fIW4WjnRqotxR/RTx o/pna9eXTXuMdJmAYL1UVpM36PkT4GZM2h0bpfrNAurm7bBa7yOIol4vLsYL261GeqzA 3HOdEsZ41tJib1TeC34Adynr+Edi81xeW1SDkV4CdXiQGoTOGOVJ4WxSxzRqIQRX0/iB RSZpalWLzZZk3q25mDyKyCcOP/lgTKFRKvpNiWk49MLHspz7m3Pd9fzp8sxVKQa7WFIV Cq+v/eIrOgIUSWXVgUqA7VpSfMfnDwSrS/lkhXxrcLWaE6ZViWl/62plvpN2oVhZ91U+ +4vQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=T0+hU8k+RyCmVBvBo6qJA+pYHpeWz4BnzaTdktBKy2A=; b=ElYDSrh4dEjnCYNNDPRc0/8cq4gclwOE4FZmJqOrah+u9feYWRNE8QcWdJ0pgfbs7O bkAqNmR5tKJZXzVeOfN1DgAmkMreHjoojlSQuo/phIb5o4p5Yxs/MJXhQktEy/vTFsrC SjWAhyQT6Ia1wqEr0Qy3pAsREXw1JgnbK4mrRdX32cvHk8c9Wq7DjQTPxT7NENIbJbFJ 0LsknH8TMHDVq6DH4qLypNdvrTcbV4MIJjxn7BLRyMTIr44kE4KfPOfWeusO7GofwxVD NF9Zm1aaioEXdeJegDkMnfJFyP7bR9BbGZ3/Clm9BoJFMlyZPI2lquRFJJ92Xx5NRJJm 9Ckw== X-Gm-Message-State: ANoB5pkSUa8PFe1MuOKNssAEKq9fm9pWZSWX6d9lPQ6532RZ4RSlK9RR MaP5vhg/1ViimmaavOnkLAv0yDCZsSLyaA== X-Google-Smtp-Source: AA0mqf4KN0ToQ1wmZzmflZp36b4L5Nl4FpG0VJHdW80YwtBlwu869ju+Ri2dl6OmwZR+dYAtfRM+xdMRlNvXFw== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:6994:0:b0:6f3:c535:4ea3 with SMTP id e142-20020a256994000000b006f3c5354ea3mr55731660ybc.498.1670528366862; Thu, 08 Dec 2022 11:39:26 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:32 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-13-dmatlack@google.com> Subject: [RFC PATCH 12/37] KVM: x86/mmu: Use PG_LEVEL_{PTE,PMD,PUD} in the TDP MMU From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114748_039943_A3E637E3 X-CRM114-Status: GOOD ( 13.44 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use PG_LEVEL_{PTE,PMD,PUD} in the TDP MMU instead of the x86-specific PG_LEVEL_{4K,2M,1G} aliases. This prepares for moving the TDP MMU to common code, where not all architectures will have 4K PAGE_SIZE. No functional change intended. 
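For illustration (not part of this patch; count_mapped_hugepages() is a hypothetical helper), a walk written against the neutral names reads the same on any architecture, whatever its huge page sizes happen to be called locally:

        static int count_mapped_hugepages(struct kvm_mmu_page *root,
                                          gfn_t start, gfn_t end)
        {
                struct tdp_iter iter;
                int n = 0;

                /* Real callers hold the MMU lock and rcu_read_lock(). */
                for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PMD, start, end) {
                        /* Only PMD-level and larger entries are visited. */
                        if (is_shadow_present_pte(iter.old_spte) &&
                            is_large_pte(iter.old_spte))
                                n++;
                }
                return n;
        }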
Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_iter.h | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 16 ++++++++-------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index f0af385c56e0..892c078aab58 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -106,7 +106,7 @@ struct tdp_iter { tdp_iter_next(&iter)) #define for_each_tdp_pte(iter, root, start, end) \ - for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) + for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PTE, start, end) tdp_ptep_t spte_to_child_pt(u64 pte, int level); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 4940413d3767..bce0566f2d94 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -347,7 +347,7 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn, bool pfn_changed; struct kvm_memory_slot *slot; - if (level > PG_LEVEL_4K) + if (level > PG_LEVEL_PTE) return; pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); @@ -526,7 +526,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); WARN_ON(level > PT64_ROOT_MAX_LEVEL); - WARN_ON(level < PG_LEVEL_4K); + WARN_ON(level < PG_LEVEL_PTE); WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1)); /* @@ -897,9 +897,9 @@ static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root, * inducing a stall to allow in-place replacement with a 1gb hugepage. * * Because zapping a SP recurses on its children, stepping down to - * PG_LEVEL_4K in the iterator itself is unnecessary. + * PG_LEVEL_PTE in the iterator itself is unnecessary. */ - __tdp_mmu_zap_root(kvm, root, shared, PG_LEVEL_1G); + __tdp_mmu_zap_root(kvm, root, shared, PG_LEVEL_PUD); __tdp_mmu_zap_root(kvm, root, shared, root->role.level); rcu_read_unlock(); @@ -944,7 +944,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); - for_each_tdp_pte_min_level(iter, root, PG_LEVEL_4K, start, end) { + for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PTE, start, end) { if (can_yield && tdp_mmu_iter_cond_resched(kvm, &iter, flush, false)) { flush = false; @@ -1303,7 +1303,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter, /* Huge pages aren't expected to be modified without first being zapped. 
*/ WARN_ON(pte_huge(range->pte) || range->start + 1 != range->end); - if (iter->level != PG_LEVEL_4K || + if (iter->level != PG_LEVEL_PTE || !is_shadow_present_pte(iter->old_spte)) return false; @@ -1672,7 +1672,7 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root, if (!mask) break; - if (iter.level > PG_LEVEL_4K || + if (iter.level > PG_LEVEL_PTE || !(mask & (1UL << (iter.gfn - gfn)))) continue; @@ -1726,7 +1726,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, rcu_read_lock(); - for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) { + for_each_tdp_pte_min_level(iter, root, PG_LEVEL_PMD, start, end) { retry: if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; From patchwork Thu Dec 8 19:38:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068793 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E0BF6C4332F for ; Thu, 8 Dec 2022 19:45:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=AoDE0CojoL5Fz8R5JrXXujlk4clvWMzoH/Nj7mzDUNQ=; b=hNV96VM24slBIyQ/rDsD6MmBFt q9fct3BRXcYPiq+tb1Ny3RGkvfcHdfPlPptoOoEeP1x45FUT21ZFCbIR23OBcOhr0gRDJ0Tk1oSu1 U1bjC1jhFbY/C8w96gzHeh5Ruw7yXsZxDgaMZ1+e6nFYtmG8Giuzoot9ksFtREpaR6hqZefSsR5NF YwfVXfv9rrbmvawB24ZXCcorLOTqzwmVwnpon/5GKk6vaU2U1xM7RzzUC+/iyD/t4yawnPMA0Nf2S dNQrMdzqnDCAfRgFGKtJsHSbNxF9VzK+6iZg84MqyoQ1rnHqEp3fbDlt9JWFFFj+dYedejm7uAts/ vicfpX1g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MpU-00A0GM-Nm; Thu, 08 Dec 2022 19:45:12 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MpR-00A06x-L4 for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:45:11 +0000 Received: by mail-yb1-xb49.google.com with SMTP id i10-20020a25f20a000000b006ea4f43c0ddso2562230ybe.21 for ; Thu, 08 Dec 2022 11:45:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jUiXg4Z/i5LqOe33gxGyRTPndvvAyQq0aDF8Gx926dw=; b=ighxqg/snU4tXdcxe5rmS9diIJMEYpBho3CcnpnWuQAtED+AhBGXYG30AvZnewuYSW +PxEPLajwD76T6tHoZB9rvsCIME05+Ml22rzK5huArVtcScCbqC6ekIOU+P5myHm4vXi R4EgQzje0PuBByDUR9/7eRDNQOVtYlSk3/+EZQFbpySSyYsvRa5mxUOW+WJy7P2kVHMr LdNRgzhOQbMvhiL/Ue8veJ5C2WtFDdDcwfdEEzx+4yjZrEaMU4AD1VnloLuIWDHrzj2p Nqw4S8uRQquwfvmcYjqYXUnPBaJxW1MX4LbeXgp1e25+ZAuz42vzjcVwAFjS+W3H33cg ky4Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
From patchwork Thu Dec 8 19:38:33 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068793
Date: Thu, 8 Dec 2022 11:38:33 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-14-dmatlack@google.com>
Subject: [RFC PATCH 13/37] KVM: MMU: Move sptep_to_sp() to common code
From: David Matlack
To: Paolo Bonzini

Move sptep_to_sp() to common code in preparation for moving the TDP MMU
to common code.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/spte.h | 14 ++------------
 include/kvm/mmu.h       | 19 +++++++++++++++++++
 2 files changed, 21 insertions(+), 12 deletions(-)
 create mode 100644 include/kvm/mmu.h

diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index ad84c549fe96..4c5d518e3ac6 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -3,6 +3,8 @@
 #ifndef KVM_X86_MMU_SPTE_H
 #define KVM_X86_MMU_SPTE_H
 
+#include
+
 #include "mmu_internal.h"
 
 /*
@@ -219,23 +221,11 @@ static inline int spte_index(u64 *sptep)
  */
 extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
-static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
-{
-	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
-
-	return (struct kvm_mmu_page *)page_private(page);
-}
-
 static inline struct kvm_mmu_page *spte_to_child_sp(u64 spte)
 {
 	return to_shadow_page(spte & SPTE_BASE_ADDR_MASK);
 }
 
-static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
-{
-	return to_shadow_page(__pa(sptep));
-}
-
 static inline bool is_mmio_spte(u64 spte)
 {
 	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
diff --git a/include/kvm/mmu.h b/include/kvm/mmu.h
new file mode 100644
index 000000000000..425db8e4f8e9
--- /dev/null
+++ b/include/kvm/mmu.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_MMU_H
+#define __KVM_MMU_H
+
+#include
+
+static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
+{
+	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
+
+	return (struct kvm_mmu_page *)page_private(page);
+}
+
+static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
+{
+	return to_shadow_page(__pa(sptep));
+}
+
+#endif /* !__KVM_MMU_H */
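The relocated helpers let any user of include/kvm/mmu.h resolve an SPTE
pointer back to its struct kvm_mmu_page. A minimal usage sketch, assuming
(as in the x86 code) that the shadow-page pointer was stashed in the page's
private field when the page table page was allocated; the function below is
hypothetical and only for illustration:

/* Hypothetical example: recover the shadow page that contains an SPTE. */
static void example_inspect_parent(u64 *sptep)
{
	/*
	 * __pa(sptep) identifies the page table page; page_private() holds
	 * the kvm_mmu_page that KVM associated with it at allocation time.
	 */
	struct kvm_mmu_page *sp = sptep_to_sp(sptep);

	/* sp->role, sp->gfn, etc. are now reachable from common code. */
	WARN_ON(sp->role.level < 1);
}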
From patchwork Thu Dec 8 19:38:34 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068798
Date: Thu, 8 Dec 2022 11:38:34 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-15-dmatlack@google.com>
Subject: [RFC PATCH 14/37] KVM: MMU: Introduce common macros for TDP page tables
From: David Matlack
To: Paolo Bonzini

Introduce macros in common KVM code for dealing with TDP page tables.
TDP page tables are assumed to be PAGE_SIZE with 64-bit PTEs. ARM will have some nuance, e.g. for root page table concatenation, but that will be handled separately when the time comes. Furthermore, we can add arch-specific overrides for any of these macros in the future on a case by case basis. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_iter.c | 14 +++++++------- arch/x86/kvm/mmu/tdp_iter.h | 3 ++- arch/x86/kvm/mmu/tdp_mmu.c | 24 +++++++++++++----------- include/kvm/tdp_pgtable.h | 21 +++++++++++++++++++++ 4 files changed, 43 insertions(+), 19 deletions(-) create mode 100644 include/kvm/tdp_pgtable.h diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index 4a7d58bf81c4..d6328dac9cd3 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -10,14 +10,15 @@ */ static void tdp_iter_refresh_sptep(struct tdp_iter *iter) { - iter->sptep = iter->pt_path[iter->level - 1] + - SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level); + int pte_index = TDP_PTE_INDEX(iter->gfn, iter->level); + + iter->sptep = iter->pt_path[iter->level - 1] + pte_index; iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep); } static gfn_t round_gfn_for_level(gfn_t gfn, int level) { - return gfn & -KVM_PAGES_PER_HPAGE(level); + return gfn & -TDP_PAGES_PER_LEVEL(level); } /* @@ -46,7 +47,7 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, int root_level = root->role.level; WARN_ON(root_level < 1); - WARN_ON(root_level > PT64_ROOT_MAX_LEVEL); + WARN_ON(root_level > TDP_ROOT_MAX_LEVEL); iter->next_last_level_gfn = next_last_level_gfn; iter->root_level = root_level; @@ -116,11 +117,10 @@ static bool try_step_side(struct tdp_iter *iter) * Check if the iterator is already at the end of the current page * table. */ - if (SPTE_INDEX(iter->gfn << PAGE_SHIFT, iter->level) == - (SPTE_ENT_PER_PAGE - 1)) + if (TDP_PTE_INDEX(iter->gfn, iter->level) == (TDP_PTES_PER_PAGE - 1)) return false; - iter->gfn += KVM_PAGES_PER_HPAGE(iter->level); + iter->gfn += TDP_PAGES_PER_LEVEL(iter->level); iter->next_last_level_gfn = iter->gfn; iter->sptep++; iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep); diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index 892c078aab58..bfac83ab52db 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -4,6 +4,7 @@ #define __KVM_X86_MMU_TDP_ITER_H #include +#include #include "mmu.h" #include "spte.h" @@ -68,7 +69,7 @@ struct tdp_iter { */ gfn_t yielded_gfn; /* Pointers to the page tables traversed to reach the current SPTE */ - tdp_ptep_t pt_path[PT64_ROOT_MAX_LEVEL]; + tdp_ptep_t pt_path[TDP_ROOT_MAX_LEVEL]; /* A pointer to the current SPTE */ tdp_ptep_t sptep; /* The lowest GFN mapped by the current SPTE */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index bce0566f2d94..a6d6e393c009 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -7,6 +7,8 @@ #include "tdp_mmu.h" #include "spte.h" +#include + #include #include @@ -428,9 +430,9 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared) tdp_mmu_unlink_sp(kvm, sp, shared); - for (i = 0; i < SPTE_ENT_PER_PAGE; i++) { + for (i = 0; i < TDP_PTES_PER_PAGE; i++) { tdp_ptep_t sptep = pt + i; - gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level); + gfn_t gfn = base_gfn + i * TDP_PAGES_PER_LEVEL(level); u64 old_spte; if (shared) { @@ -525,9 +527,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, bool is_leaf = is_present && is_last_spte(new_spte, level); bool 
pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); - WARN_ON(level > PT64_ROOT_MAX_LEVEL); + WARN_ON(level > TDP_ROOT_MAX_LEVEL); WARN_ON(level < PG_LEVEL_PTE); - WARN_ON(gfn & (KVM_PAGES_PER_HPAGE(level) - 1)); + WARN_ON(gfn & (TDP_PAGES_PER_LEVEL(level) - 1)); /* * If this warning were to trigger it would indicate that there was a @@ -677,7 +679,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm, return ret; kvm_flush_remote_tlbs_with_address(kvm, iter->gfn, - KVM_PAGES_PER_HPAGE(iter->level)); + TDP_PAGES_PER_LEVEL(iter->level)); /* * No other thread can overwrite the removed SPTE as they must either @@ -1075,7 +1077,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, else if (is_shadow_present_pte(iter->old_spte) && !is_last_spte(iter->old_spte, iter->level)) kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn, - KVM_PAGES_PER_HPAGE(iter->level + 1)); + TDP_PAGES_PER_LEVEL(iter->level + 1)); /* * If the page fault was caused by a write but the page is write @@ -1355,7 +1357,7 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); - BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL); + BUG_ON(min_level > TDP_MAX_HUGEPAGE_LEVEL); for_each_tdp_pte_min_level(iter, root, min_level, start, end) { retry: @@ -1469,7 +1471,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, * No need for atomics when writing to sp->spt since the page table has * not been linked in yet and thus is not reachable from any other CPU. */ - for (i = 0; i < SPTE_ENT_PER_PAGE; i++) + for (i = 0; i < TDP_PTES_PER_PAGE; i++) sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte, sp->role, i); /* @@ -1489,7 +1491,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, * are overwriting from the page stats. But we have to manually update * the page stats with the new present child pages. 
 	 */
-	kvm_update_page_stats(kvm, level - 1, SPTE_ENT_PER_PAGE);
+	kvm_update_page_stats(kvm, level - 1, TDP_PTES_PER_PAGE);
 
 out:
 	trace_kvm_mmu_split_huge_page(iter->gfn, huge_spte, level, ret);
@@ -1731,7 +1733,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;
 
-		if (iter.level > KVM_MAX_HUGEPAGE_LEVEL ||
+		if (iter.level > TDP_MAX_HUGEPAGE_LEVEL ||
 		    !is_shadow_present_pte(iter.old_spte))
 			continue;
 
@@ -1793,7 +1795,7 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
 	u64 new_spte;
 	bool spte_set = false;
 
-	BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL);
+	BUG_ON(min_level > TDP_MAX_HUGEPAGE_LEVEL);
 
 	rcu_read_lock();
 
diff --git a/include/kvm/tdp_pgtable.h b/include/kvm/tdp_pgtable.h
new file mode 100644
index 000000000000..968be8d92350
--- /dev/null
+++ b/include/kvm/tdp_pgtable.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_TDP_PGTABLE_H
+#define __KVM_TDP_PGTABLE_H
+
+#include
+#include
+
+#define TDP_ROOT_MAX_LEVEL	5
+#define TDP_MAX_HUGEPAGE_LEVEL	PG_LEVEL_PUD
+#define TDP_PTES_PER_PAGE	(PAGE_SIZE / sizeof(u64))
+#define TDP_LEVEL_BITS		ilog2(TDP_PTES_PER_PAGE)
+#define TDP_LEVEL_MASK		((1UL << TDP_LEVEL_BITS) - 1)
+
+#define TDP_LEVEL_SHIFT(level)	(((level) - 1) * TDP_LEVEL_BITS)
+
+#define TDP_PAGES_PER_LEVEL(level) (1UL << TDP_LEVEL_SHIFT(level))
+
+#define TDP_PTE_INDEX(gfn, level) \
+	(((gfn) >> TDP_LEVEL_SHIFT(level)) & TDP_LEVEL_MASK)
+
+#endif /* !__KVM_TDP_PGTABLE_H */
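As a sanity check on the arithmetic these macros encode: with 4KiB pages and
64-bit PTEs, TDP_PTES_PER_PAGE is 512 and TDP_LEVEL_BITS is 9, so each level
consumes 9 bits of the gfn, and TDP_PAGES_PER_LEVEL(2) works out to 512
(a 2MiB mapping) while TDP_PAGES_PER_LEVEL(3) is 262144 (1GiB). The helper
below just spells out TDP_PTE_INDEX() under those assumptions; it is an
illustration, not part of the patch:

/* Illustrative only: assumes 4KiB pages, i.e. 512 PTEs per table (9 bits/level). */
static inline unsigned int example_tdp_pte_index(u64 gfn, int level)
{
	const unsigned int bits_per_level = 9;		/* TDP_LEVEL_BITS */
	const u64 mask = (1ULL << bits_per_level) - 1;	/* TDP_LEVEL_MASK */

	/* Level 1 indexes gfn[8:0], level 2 indexes gfn[17:9], and so on. */
	return (gfn >> ((level - 1) * bits_per_level)) & mask;
}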
From patchwork Thu Dec 8 19:38:35 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068786
Date: Thu, 8 Dec 2022 11:38:35 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-16-dmatlack@google.com>
Subject: [RFC PATCH 15/37] KVM: x86/mmu: Add a common API for inspecting/modifying TDP PTEs
From: David Matlack
To: Paolo Bonzini

Introduce an API for inspecting and modifying TDP PTEs from common code.
This will be used in future commits to move the TDP MMU to common code.
Specifically, introduce the following API that can be used in common code: /* Inspection API */ tdp_pte_is_present() tdp_pte_is_writable() tdp_pte_is_huge() tdp_pte_is_leaf() tdp_pte_is_accessed() tdp_pte_is_dirty() tdp_pte_is_mmio() tdp_pte_is_volatile() tdp_pte_to_pfn() tdp_pte_check_leaf_invariants() /* Modification API */ tdp_pte_clear_writable() tdp_pte_clear_mmu_writable() tdp_pte_clear_dirty() tdp_pte_clear_accessed() Note that this does not cover constructing PTEs from scratch (e.g. during page fault handling). This will be added in a subsequent commit. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm/tdp_pgtable.h | 58 +++++++++ arch/x86/kvm/Makefile | 2 +- arch/x86/kvm/mmu/spte.c | 3 +- arch/x86/kvm/mmu/spte.h | 22 ---- arch/x86/kvm/mmu/tdp_iter.c | 4 +- arch/x86/kvm/mmu/tdp_iter.h | 5 +- arch/x86/kvm/mmu/tdp_mmu.c | 171 +++++++++++-------------- arch/x86/kvm/mmu/tdp_pgtable.c | 72 +++++++++++ include/kvm/tdp_pgtable.h | 18 +++ 9 files changed, 231 insertions(+), 124 deletions(-) create mode 100644 arch/x86/include/asm/kvm/tdp_pgtable.h create mode 100644 arch/x86/kvm/mmu/tdp_pgtable.c diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h new file mode 100644 index 000000000000..cebc4bc44b49 --- /dev/null +++ b/arch/x86/include/asm/kvm/tdp_pgtable.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_KVM_TDP_PGTABLE_H +#define __ASM_KVM_TDP_PGTABLE_H + +#include +#include + +/* + * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on + * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF + * vulnerability. Use only low bits to avoid 64-bit immediates. + */ +#define REMOVED_TDP_PTE 0x5a0ULL + +#define TDP_PTE_WRITABLE_MASK BIT_ULL(1) +#define TDP_PTE_HUGE_PAGE_MASK BIT_ULL(7) +#define TDP_PTE_PRESENT_MASK BIT_ULL(11) + +static inline bool tdp_pte_is_writable(u64 pte) +{ + return pte & TDP_PTE_WRITABLE_MASK; +} + +static inline bool tdp_pte_is_huge(u64 pte) +{ + return pte & TDP_PTE_HUGE_PAGE_MASK; +} + +static inline bool tdp_pte_is_present(u64 pte) +{ + return pte & TDP_PTE_PRESENT_MASK; +} + +bool tdp_pte_is_accessed(u64 pte); +bool tdp_pte_is_dirty(u64 pte); +bool tdp_pte_is_mmio(u64 pte); +bool tdp_pte_is_volatile(u64 pte); + +static inline u64 tdp_pte_clear_writable(u64 pte) +{ + return pte & ~TDP_PTE_WRITABLE_MASK; +} + +static inline u64 tdp_pte_clear_mmu_writable(u64 pte) +{ + extern u64 __read_mostly shadow_mmu_writable_mask; + + return pte & ~(TDP_PTE_WRITABLE_MASK | shadow_mmu_writable_mask); +} + +u64 tdp_pte_clear_dirty(u64 pte, bool force_wrprot); +u64 tdp_pte_clear_accessed(u64 pte); + +kvm_pfn_t tdp_pte_to_pfn(u64 pte); + +void tdp_pte_check_leaf_invariants(u64 pte); + +#endif /* !__ASM_KVM_TDP_PGTABLE_H */ diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile index 80e3fe184d17..c294ae51caba 100644 --- a/arch/x86/kvm/Makefile +++ b/arch/x86/kvm/Makefile @@ -18,7 +18,7 @@ ifdef CONFIG_HYPERV kvm-y += kvm_onhyperv.o endif -kvm-$(CONFIG_X86_64) += mmu/tdp_iter.o mmu/tdp_mmu.o +kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_iter.o mmu/tdp_mmu.o kvm-$(CONFIG_KVM_XEN) += xen.o kvm-$(CONFIG_KVM_SMM) += smm.o diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index fe4b626cb431..493e109f1105 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -10,6 +10,7 @@ #include +#include #include "mmu.h" #include "mmu_internal.h" #include "x86.h" @@ -401,7 +402,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, 
u64 access_mask) * not set any RWX bits. */ if (WARN_ON((mmio_value & mmio_mask) != mmio_value) || - WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value)) + WARN_ON(mmio_value && (REMOVED_TDP_PTE & mmio_mask) == mmio_value)) mmio_value = 0; if (!mmio_value) diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 4c5d518e3ac6..a1b7d7730583 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -183,28 +183,6 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask; */ #define SHADOW_NONPRESENT_OR_RSVD_MASK_LEN 5 -/* - * If a thread running without exclusive control of the MMU lock must perform a - * multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a - * non-present intermediate value. Other threads which encounter this value - * should not modify the SPTE. - * - * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on - * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF - * vulnerability. Use only low bits to avoid 64-bit immediates. - * - * Only used by the TDP MMU. - */ -#define REMOVED_SPTE 0x5a0ULL - -/* Removed SPTEs must not be misconstrued as shadow present PTEs. */ -static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK)); - -static inline bool is_removed_spte(u64 spte) -{ - return spte == REMOVED_SPTE; -} - /* Get an SPTE's index into its parent's page table (and the spt array). */ static inline int spte_index(u64 *sptep) { diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index d6328dac9cd3..d5f024b7f6e4 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -69,10 +69,10 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level) * There's no child entry if this entry isn't present or is a * last-level entry. */ - if (!is_shadow_present_pte(spte) || is_last_spte(spte, level)) + if (!tdp_pte_is_present(spte) || tdp_pte_is_leaf(spte, level)) return NULL; - return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT); + return (tdp_ptep_t)__va(tdp_pte_to_pfn(spte) << PAGE_SHIFT); } /* diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index bfac83ab52db..6e3c38532d1d 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -45,8 +45,9 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte, * logic needs to be reassessed if KVM were to use non-leaf Accessed * bits, e.g. to skip stepping down into child SPTEs when aging SPTEs. 
*/ - if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) && - spte_has_volatile_bits(old_spte)) + if (tdp_pte_is_present(old_spte) && + tdp_pte_is_leaf(old_spte, level) && + tdp_pte_is_volatile(old_spte)) return kvm_tdp_mmu_write_spte_atomic(sptep, new_spte); __kvm_tdp_mmu_write_spte(sptep, new_spte); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index a6d6e393c009..fea42bbac984 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -334,13 +334,13 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level) { - if (!is_shadow_present_pte(old_spte) || !is_last_spte(old_spte, level)) + if (!tdp_pte_is_present(old_spte) || !tdp_pte_is_leaf(old_spte, level)) return; - if (is_accessed_spte(old_spte) && - (!is_shadow_present_pte(new_spte) || !is_accessed_spte(new_spte) || - spte_to_pfn(old_spte) != spte_to_pfn(new_spte))) - kvm_set_pfn_accessed(spte_to_pfn(old_spte)); + if (tdp_pte_is_accessed(old_spte) && + (!tdp_pte_is_present(new_spte) || !tdp_pte_is_accessed(new_spte) || + tdp_pte_to_pfn(old_spte) != tdp_pte_to_pfn(new_spte))) + kvm_set_pfn_accessed(tdp_pte_to_pfn(old_spte)); } static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn, @@ -352,10 +352,10 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn, if (level > PG_LEVEL_PTE) return; - pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); + pfn_changed = tdp_pte_to_pfn(old_spte) != tdp_pte_to_pfn(new_spte); - if ((!is_writable_pte(old_spte) || pfn_changed) && - is_writable_pte(new_spte)) { + if ((!tdp_pte_is_writable(old_spte) || pfn_changed) && + tdp_pte_is_writable(new_spte)) { slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn); mark_page_dirty_in_slot(kvm, slot, gfn); } @@ -445,8 +445,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared) * value to the removed SPTE value. */ for (;;) { - old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, REMOVED_SPTE); - if (!is_removed_spte(old_spte)) + old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, REMOVED_TDP_PTE); + if (!tdp_pte_is_removed(old_spte)) break; cpu_relax(); } @@ -461,7 +461,7 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared) * unreachable. */ old_spte = kvm_tdp_mmu_read_spte(sptep); - if (!is_shadow_present_pte(old_spte)) + if (!tdp_pte_is_present(old_spte)) continue; /* @@ -481,7 +481,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared) * strictly necessary for the same reason, but using * the remove SPTE value keeps the shared/exclusive * paths consistent and allows the handle_changed_spte() - * call below to hardcode the new value to REMOVED_SPTE. + * call below to hardcode the new value to + * REMOVED_TDP_PTE. * * Note, even though dropping a Dirty bit is the only * scenario where a non-atomic update could result in a @@ -493,10 +494,11 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared) * it here. 
*/ old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, - REMOVED_SPTE, level); + REMOVED_TDP_PTE, + level); } handle_changed_spte(kvm, sp->role.as_id, gfn, old_spte, - REMOVED_SPTE, level, shared); + REMOVED_TDP_PTE, level, shared); } call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); @@ -521,11 +523,11 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level, bool shared) { - bool was_present = is_shadow_present_pte(old_spte); - bool is_present = is_shadow_present_pte(new_spte); - bool was_leaf = was_present && is_last_spte(old_spte, level); - bool is_leaf = is_present && is_last_spte(new_spte, level); - bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); + bool was_present = tdp_pte_is_present(old_spte); + bool is_present = tdp_pte_is_present(new_spte); + bool was_leaf = was_present && tdp_pte_is_leaf(old_spte, level); + bool is_leaf = is_present && tdp_pte_is_leaf(new_spte, level); + bool pfn_changed = tdp_pte_to_pfn(old_spte) != tdp_pte_to_pfn(new_spte); WARN_ON(level > TDP_ROOT_MAX_LEVEL); WARN_ON(level < PG_LEVEL_PTE); @@ -560,7 +562,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, trace_kvm_tdp_mmu_spte_changed(as_id, gfn, level, old_spte, new_spte); if (is_leaf) - check_spte_writable_invariants(new_spte); + tdp_pte_check_leaf_invariants(new_spte); /* * The only times a SPTE should be changed from a non-present to @@ -574,9 +576,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, * impact the guest since both the former and current SPTEs * are nonpresent. */ - if (WARN_ON(!is_mmio_spte(old_spte) && - !is_mmio_spte(new_spte) && - !is_removed_spte(new_spte))) + if (WARN_ON(!tdp_pte_is_mmio(old_spte) && + !tdp_pte_is_mmio(new_spte) && + !tdp_pte_is_removed(new_spte))) pr_err("Unexpected SPTE change! Nonpresent SPTEs\n" "should not be replaced with another,\n" "different nonpresent SPTE, unless one or both\n" @@ -590,9 +592,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, if (is_leaf != was_leaf) kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1); - if (was_leaf && is_dirty_spte(old_spte) && - (!is_present || !is_dirty_spte(new_spte) || pfn_changed)) - kvm_set_pfn_dirty(spte_to_pfn(old_spte)); + if (was_leaf && tdp_pte_is_dirty(old_spte) && + (!is_present || !tdp_pte_is_dirty(new_spte) || pfn_changed)) + kvm_set_pfn_dirty(tdp_pte_to_pfn(old_spte)); /* * Recursively handle child PTs if the change removed a subtree from @@ -645,7 +647,7 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm, * and pre-checking before inserting a new SPTE is advantageous as it * avoids unnecessary work. */ - WARN_ON_ONCE(iter->yielded || is_removed_spte(iter->old_spte)); + WARN_ON_ONCE(iter->yielded || tdp_pte_is_removed(iter->old_spte)); lockdep_assert_held_read(&kvm->mmu_lock); @@ -674,7 +676,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm, * immediately installing a present entry in its place * before the TLBs are flushed. */ - ret = tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_SPTE); + ret = tdp_mmu_set_spte_atomic(kvm, iter, REMOVED_TDP_PTE); if (ret) return ret; @@ -730,7 +732,7 @@ static u64 __tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep, * should be used. If operating under the MMU lock in write mode, the * use of the removed SPTE should not be necessary. 
*/ - WARN_ON(is_removed_spte(old_spte) || is_removed_spte(new_spte)); + WARN_ON(tdp_pte_is_removed(old_spte) || tdp_pte_is_removed(new_spte)); old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level); @@ -781,8 +783,8 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, #define tdp_root_for_each_leaf_pte(_iter, _root, _start, _end) \ tdp_root_for_each_pte(_iter, _root, _start, _end) \ - if (!is_shadow_present_pte(_iter.old_spte) || \ - !is_last_spte(_iter.old_spte, _iter.level)) \ + if (!tdp_pte_is_present(_iter.old_spte) || \ + !tdp_pte_is_leaf(_iter.old_spte, _iter.level)) \ continue; \ else @@ -858,7 +860,7 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; - if (!is_shadow_present_pte(iter.old_spte)) + if (!tdp_pte_is_present(iter.old_spte)) continue; if (iter.level > zap_level) @@ -919,7 +921,7 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp) return false; old_spte = kvm_tdp_mmu_read_spte(sp->ptep); - if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte))) + if (WARN_ON_ONCE(!tdp_pte_is_present(old_spte))) return false; __tdp_mmu_set_spte(kvm, sp->role.as_id, sp->ptep, old_spte, 0, @@ -953,8 +955,8 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root, continue; } - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level)) + if (!tdp_pte_is_present(iter.old_spte) || + !tdp_pte_is_leaf(iter.old_spte, iter.level)) continue; tdp_mmu_set_spte(kvm, &iter, 0); @@ -1074,8 +1076,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, ret = RET_PF_SPURIOUS; else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte)) return RET_PF_RETRY; - else if (is_shadow_present_pte(iter->old_spte) && - !is_last_spte(iter->old_spte, iter->level)) + else if (tdp_pte_is_present(iter->old_spte) && + !tdp_pte_is_leaf(iter->old_spte, iter->level)) kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn, TDP_PAGES_PER_LEVEL(iter->level + 1)); @@ -1090,7 +1092,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, } /* If a MMIO SPTE is installed, the MMIO will need to be emulated. */ - if (unlikely(is_mmio_spte(new_spte))) { + if (unlikely(tdp_pte_is_mmio(new_spte))) { vcpu->stat.pf_mmio_spte_created++; trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn, new_spte); @@ -1168,12 +1170,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) * If SPTE has been frozen by another thread, just give up and * retry, avoiding unnecessary page table allocation and free. */ - if (is_removed_spte(iter.old_spte)) + if (tdp_pte_is_removed(iter.old_spte)) goto retry; /* Step down into the lower level page table if it exists. */ - if (is_shadow_present_pte(iter.old_spte) && - !is_large_pte(iter.old_spte)) + if (tdp_pte_is_present(iter.old_spte) && + !tdp_pte_is_huge(iter.old_spte)) continue; /* @@ -1185,7 +1187,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed; - if (is_shadow_present_pte(iter.old_spte)) + if (tdp_pte_is_present(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); else r = tdp_mmu_link_sp(kvm, &iter, sp, true); @@ -1207,6 +1209,15 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } } + /* + * Force the guest to retry the access if the upper level SPTEs aren't + * in place, or if the target leaf SPTE is frozen by another CPU. 
+ */ + if (iter.level != fault->goal_level || tdp_pte_is_removed(iter.old_spte)) { + rcu_read_unlock(); + return RET_PF_RETRY; + } + ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter); retry: @@ -1255,27 +1266,13 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm, static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter, struct kvm_gfn_range *range) { - u64 new_spte = 0; + u64 new_spte; /* If we have a non-accessed entry we don't need to change the pte. */ - if (!is_accessed_spte(iter->old_spte)) + if (!tdp_pte_is_accessed(iter->old_spte)) return false; - new_spte = iter->old_spte; - - if (spte_ad_enabled(new_spte)) { - new_spte &= ~shadow_accessed_mask; - } else { - /* - * Capture the dirty status of the page, so that it doesn't get - * lost when the SPTE is marked for access tracking. - */ - if (is_writable_pte(new_spte)) - kvm_set_pfn_dirty(spte_to_pfn(new_spte)); - - new_spte = mark_spte_for_access_track(new_spte); - } - + new_spte = tdp_pte_clear_accessed(iter->old_spte); tdp_mmu_set_spte_no_acc_track(kvm, iter, new_spte); return true; @@ -1289,7 +1286,7 @@ bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter, struct kvm_gfn_range *range) { - return is_accessed_spte(iter->old_spte); + return tdp_pte_is_accessed(iter->old_spte); } bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) @@ -1306,7 +1303,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter, WARN_ON(pte_huge(range->pte) || range->start + 1 != range->end); if (iter->level != PG_LEVEL_PTE || - !is_shadow_present_pte(iter->old_spte)) + !tdp_pte_is_present(iter->old_spte)) return false; /* @@ -1364,12 +1361,12 @@ static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level) || - !(iter.old_spte & PT_WRITABLE_MASK)) + if (!tdp_pte_is_present(iter.old_spte) || + !tdp_pte_is_leaf(iter.old_spte, iter.level) || + !tdp_pte_is_writable(iter.old_spte)) continue; - new_spte = iter.old_spte & ~PT_WRITABLE_MASK; + new_spte = tdp_pte_clear_writable(iter.old_spte); if (tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) goto retry; @@ -1525,7 +1522,7 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *kvm, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, shared)) continue; - if (!is_shadow_present_pte(iter.old_spte) || !is_large_pte(iter.old_spte)) + if (!tdp_pte_is_present(iter.old_spte) || !tdp_pte_is_huge(iter.old_spte)) continue; if (!sp) { @@ -1607,20 +1604,12 @@ static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) continue; - if (!is_shadow_present_pte(iter.old_spte)) + if (!tdp_pte_is_present(iter.old_spte)) continue; - if (spte_ad_need_write_protect(iter.old_spte)) { - if (is_writable_pte(iter.old_spte)) - new_spte = iter.old_spte & ~PT_WRITABLE_MASK; - else - continue; - } else { - if (iter.old_spte & shadow_dirty_mask) - new_spte = iter.old_spte & ~shadow_dirty_mask; - else - continue; - } + new_spte = tdp_pte_clear_dirty(iter.old_spte, false); + if (new_spte == iter.old_spte) + continue; if (tdp_mmu_set_spte_atomic(kvm, &iter, new_spte)) goto retry; @@ -1680,17 +1669,9 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root, mask &= ~(1UL << (iter.gfn - gfn)); - if (wrprot || 
spte_ad_need_write_protect(iter.old_spte)) { - if (is_writable_pte(iter.old_spte)) - new_spte = iter.old_spte & ~PT_WRITABLE_MASK; - else - continue; - } else { - if (iter.old_spte & shadow_dirty_mask) - new_spte = iter.old_spte & ~shadow_dirty_mask; - else - continue; - } + new_spte = tdp_pte_clear_dirty(iter.old_spte, wrprot); + if (new_spte == iter.old_spte) + continue; tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte); } @@ -1734,7 +1715,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, continue; if (iter.level > TDP_MAX_HUGEPAGE_LEVEL || - !is_shadow_present_pte(iter.old_spte)) + !tdp_pte_is_present(iter.old_spte)) continue; /* @@ -1742,7 +1723,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, * a large page size, then its parent would have been zapped * instead of stepping down. */ - if (is_last_spte(iter.old_spte, iter.level)) + if (tdp_pte_is_leaf(iter.old_spte, iter.level)) continue; /* @@ -1800,13 +1781,11 @@ static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root, rcu_read_lock(); for_each_tdp_pte_min_level(iter, root, min_level, gfn, gfn + 1) { - if (!is_shadow_present_pte(iter.old_spte) || - !is_last_spte(iter.old_spte, iter.level)) + if (!tdp_pte_is_present(iter.old_spte) || + !tdp_pte_is_leaf(iter.old_spte, iter.level)) continue; - new_spte = iter.old_spte & - ~(PT_WRITABLE_MASK | shadow_mmu_writable_mask); - + new_spte = tdp_pte_clear_mmu_writable(iter.old_spte); if (new_spte == iter.old_spte) break; diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c new file mode 100644 index 000000000000..cf3b692d8e21 --- /dev/null +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -0,0 +1,72 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include + +#include "mmu.h" +#include "spte.h" + +/* Removed SPTEs must not be misconstrued as shadow present PTEs. */ +static_assert(!(REMOVED_TDP_PTE & SPTE_MMU_PRESENT_MASK)); + +static_assert(TDP_PTE_WRITABLE_MASK == PT_WRITABLE_MASK); +static_assert(TDP_PTE_HUGE_PAGE_MASK == PT_PAGE_SIZE_MASK); +static_assert(TDP_PTE_PRESENT_MASK == SPTE_MMU_PRESENT_MASK); + +bool tdp_pte_is_accessed(u64 pte) +{ + return is_accessed_spte(pte); +} + +bool tdp_pte_is_dirty(u64 pte) +{ + return is_dirty_spte(pte); +} + +bool tdp_pte_is_mmio(u64 pte) +{ + return is_mmio_spte(pte); +} + +bool tdp_pte_is_volatile(u64 pte) +{ + return spte_has_volatile_bits(pte); +} + +u64 tdp_pte_clear_dirty(u64 pte, bool force_wrprot) +{ + if (force_wrprot || spte_ad_need_write_protect(pte)) { + if (tdp_pte_is_writable(pte)) + pte &= ~PT_WRITABLE_MASK; + } else if (pte & shadow_dirty_mask) { + pte &= ~shadow_dirty_mask; + } + + return pte; +} + +u64 tdp_pte_clear_accessed(u64 old_spte) +{ + if (spte_ad_enabled(old_spte)) + return old_spte & ~shadow_accessed_mask; + + /* + * Capture the dirty status of the page, so that it doesn't get lost + * when the SPTE is marked for access tracking. 
+	 */
+	if (tdp_pte_is_writable(old_spte))
+		kvm_set_pfn_dirty(tdp_pte_to_pfn(old_spte));
+
+	return mark_spte_for_access_track(old_spte);
+}
+
+kvm_pfn_t tdp_pte_to_pfn(u64 pte)
+{
+	return spte_to_pfn(pte);
+}
+
+void tdp_pte_check_leaf_invariants(u64 pte)
+{
+	check_spte_writable_invariants(pte);
+}
+
diff --git a/include/kvm/tdp_pgtable.h b/include/kvm/tdp_pgtable.h
index 968be8d92350..a24c45ac7765 100644
--- a/include/kvm/tdp_pgtable.h
+++ b/include/kvm/tdp_pgtable.h
@@ -5,6 +5,8 @@
 #include
 #include
 
+#include
+
 #define TDP_ROOT_MAX_LEVEL	5
 #define TDP_MAX_HUGEPAGE_LEVEL	PG_LEVEL_PUD
 #define TDP_PTES_PER_PAGE	(PAGE_SIZE / sizeof(u64))
@@ -18,4 +20,20 @@
 #define TDP_PTE_INDEX(gfn, level) \
 	(((gfn) >> TDP_LEVEL_SHIFT(level)) & TDP_LEVEL_MASK)
 
+/*
+ * If a thread running without exclusive control of the MMU lock must perform a
+ * multi-part operation on a PTE, it can set the PTE to REMOVED_TDP_PTE as a
+ * non-present intermediate value. Other threads which encounter this value
+ * should not modify the PTE.
+ */
+static inline bool tdp_pte_is_removed(u64 pte)
+{
+	return pte == REMOVED_TDP_PTE;
+}
+
+static inline bool tdp_pte_is_leaf(u64 pte, int level)
+{
+	return tdp_pte_is_huge(pte) || level == PG_LEVEL_PTE;
+}
+
 #endif /* !__KVM_TDP_PGTABLE_H */
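Taken together with the accessors declared in the x86 header earlier in this
patch, common code can now reason about PTEs without knowing the SPTE bit
layout. A minimal sketch of the intended usage, built only on functions this
patch introduces (the helper name itself is made up for the example):

/* Hypothetical helper using only the new arch-neutral accessors. */
static bool example_pte_is_writable_leaf(u64 pte, int level)
{
	if (!tdp_pte_is_present(pte))
		return false;

	/* A leaf is either a huge mapping or a last-level (PG_LEVEL_PTE) entry. */
	if (!tdp_pte_is_leaf(pte, level))
		return false;

	return tdp_pte_is_writable(pte);
}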
From patchwork Thu Dec 8 19:38:36 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068807
Date: Thu, 8 Dec 2022 11:38:36 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-17-dmatlack@google.com>
Subject: [RFC PATCH 16/37] KVM: x86/mmu: Abstract away TDP MMU root lookup
From: David Matlack
To: Paolo Bonzini

Abstract the code that looks up the TDP MMU root from vcpu->arch.mmu behind
a function, tdp_mmu_root(). This will be used in a future commit to allow
the TDP MMU to be moved to common code, where vcpu->arch.mmu cannot be
accessed directly.
Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm/tdp_pgtable.h |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c             | 17 +++++++----------
 arch/x86/kvm/mmu/tdp_pgtable.c         |  5 +++++
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h
index cebc4bc44b49..c5c4e4cab24a 100644
--- a/arch/x86/include/asm/kvm/tdp_pgtable.h
+++ b/arch/x86/include/asm/kvm/tdp_pgtable.h
@@ -5,6 +5,8 @@
 #include
 #include
 
+struct kvm_mmu_page *tdp_mmu_root(struct kvm_vcpu *vcpu);
+
 /*
  * Use a semi-arbitrary value that doesn't set RWX bits, i.e. is not-present on
  * both AMD and Intel CPUs, and doesn't set PFN bits, i.e. doesn't create a L1TF
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index fea42bbac984..8155a9e79203 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -788,9 +788,6 @@ static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm,
 			continue;					\
 		else
 
-#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)		\
-	for_each_tdp_pte(_iter, to_shadow_page(_mmu->root.hpa), _start, _end)
-
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
  * to the scheduler.
@@ -1145,7 +1142,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
  */
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
+	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
 	struct kvm *kvm = vcpu->kvm;
 	struct tdp_iter iter;
 	struct kvm_mmu_page *sp;
@@ -1157,7 +1154,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 	rcu_read_lock();
 
-	tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
+	for_each_tdp_pte(iter, root, fault->gfn, fault->gfn + 1) {
 		int r;
 
 		if (fault->arch.nx_huge_page_workaround_enabled)
@@ -1826,14 +1823,14 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 			 int *root_level)
 {
+	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
 	struct tdp_iter iter;
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	int leaf = -1;
 
-	*root_level = vcpu->arch.mmu->root_role.level;
+	*root_level = root->role.level;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	for_each_tdp_pte(iter, root, gfn, gfn + 1) {
 		leaf = iter.level;
 		sptes[leaf] = iter.old_spte;
 	}
@@ -1855,12 +1852,12 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte)
 {
+	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
 	struct tdp_iter iter;
-	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	gfn_t gfn = addr >> PAGE_SHIFT;
 	tdp_ptep_t sptep = NULL;
 
-	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+	for_each_tdp_pte(iter, root, gfn, gfn + 1) {
 		*spte = iter.old_spte;
 		sptep = iter.sptep;
 	}
diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c
index cf3b692d8e21..97cc900e8818 100644
--- a/arch/x86/kvm/mmu/tdp_pgtable.c
+++ b/arch/x86/kvm/mmu/tdp_pgtable.c
@@ -13,6 +13,11 @@ static_assert(TDP_PTE_WRITABLE_MASK == PT_WRITABLE_MASK);
 static_assert(TDP_PTE_HUGE_PAGE_MASK == PT_PAGE_SIZE_MASK);
 static_assert(TDP_PTE_PRESENT_MASK == SPTE_MMU_PRESENT_MASK);
 
+struct kvm_mmu_page *tdp_mmu_root(struct kvm_vcpu *vcpu)
+{
+	return to_shadow_page(vcpu->arch.mmu->root.hpa);
+}
+
 bool tdp_pte_is_accessed(u64 pte)
 {
 	return is_accessed_spte(pte);
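With the root lookup behind tdp_mmu_root(), the walkers above no longer touch
vcpu->arch.mmu at all. A sketch of what a common-code walk looks like after
this change, combining the hook with the accessors from the previous patch
(the function below is illustrative, not from the series; locking and RCU are
elided for brevity):

/*
 * Illustrative only: a walk that needs nothing x86-specific beyond the
 * tdp_mmu_root() hook and the tdp_pte_*() accessors.
 */
static u64 example_read_leaf(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_mmu_page *root = tdp_mmu_root(vcpu);
	struct tdp_iter iter;
	u64 leaf_pte = 0;

	for_each_tdp_pte(iter, root, gfn, gfn + 1) {
		if (tdp_pte_is_present(iter.old_spte) &&
		    tdp_pte_is_leaf(iter.old_spte, iter.level))
			leaf_pte = iter.old_spte;
	}

	return leaf_pte;
}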
From patchwork Thu Dec 8 19:38:37 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068819
:date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=9RmR6EwIqCFDBLvxI6selt5ga3zw1fPd7MYyun7sh04=; b=Ymx40+qPsEVjcRk6EkeQ4s+h0ja9cb2fkqV95rl4k30DweojuLY9IeXelz0a1jGSYV J//xZyb1ADSGb3w5xBVv+Quup00cvo0sTGDmrsqJu1Ii/KRxVsxiaqmk0pEFiv0HRTym J8vr9mwM9aOeIosBMvoCCGl6K7OtFPQKSPI+cLoMA11OQyiAIreE7TP0R7qZvWA92quN a0000VDj4oAlLtwezgaymMZfY95rozJHY1tx8aTAWEqYY7JdSjxdJ9D6hadyVF46PEwl GHcfHBGimbfM2ZRJQdqVP9D2p90JGE4EhiTJ5Hh5cEEOfCREBowCPb/+A9GzQlNp9Eju u/LA== X-Gm-Message-State: ANoB5pnoECCceXBXM9QL7Dq8napTazz1PkrQJaecpn7bsdAyJpxf1E2B 6NAC1TogQqLmA8RZ9j+WjgVcGzIdUDvncw== X-Google-Smtp-Source: AA0mqf7cV11l2t3OM7agzc6qRsrai4lnDlxSMNa6gi84ZckjSbH/wAyvmmZJEJroavTOpSqZ0tfjElEm8egcuA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:d9c1:0:b0:6fd:d054:a0a7 with SMTP id q184-20020a25d9c1000000b006fdd054a0a7mr25462879ybg.112.1670528375112; Thu, 08 Dec 2022 11:39:35 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:37 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-18-dmatlack@google.com> Subject: [RFC PATCH 17/37] KVM: Move struct kvm_gfn_range to kvm_types.h From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193944_092624_8C7642C5 X-CRM114-Status: UNSURE ( 9.41 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move struct kvm_gfn_range to kvm_types.h so that it's definition can be accessed in a future commit by arch/x86/include/asm/kvm/tdp_pgtable.h without needing to include the mega-header kvm_host.h. No functional change intended. 
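To illustrate the dependency this move enables (a sketch for illustration only, not part of the patch itself): once kvm_gfn_range lives in kvm_types.h, an arch-specific header such as tdp_pgtable.h can declare a function that takes the struct while pulling in only the lightweight types header; the declaration below mirrors one added later in this series.

/*
 * Sketch only: with kvm_gfn_range in kvm_types.h, this kind of arch
 * header declaration compiles without including kvm_host.h.
 */
#include <linux/kvm_types.h>

struct tdp_iter;

u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter,
					  struct kvm_gfn_range *range);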
Signed-off-by: David Matlack
Reviewed-by: Ben Gardon
---
 include/linux/kvm_host.h  | 7 -------
 include/linux/kvm_types.h | 8 ++++++++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 22ecb7ce4d31..469ff4202a0d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -256,13 +256,6 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
 #ifdef KVM_ARCH_WANT_MMU_NOTIFIER
-struct kvm_gfn_range {
-	struct kvm_memory_slot *slot;
-	gfn_t start;
-	gfn_t end;
-	pte_t pte;
-	bool may_block;
-};
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 59cf958d69df..001aad9ea987 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -132,4 +132,12 @@ struct kvm_vcpu_stat_generic {
 
 #define KVM_STATS_NAME_SIZE 48
 
+struct kvm_gfn_range {
+	struct kvm_memory_slot *slot;
+	gfn_t start;
+	gfn_t end;
+	pte_t pte;
+	bool may_block;
+};
+
 #endif /* __KVM_TYPES_H__ */

From patchwork Thu Dec 8 19:38:38 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068863
Date: Thu, 8 Dec 2022 11:38:38 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-19-dmatlack@google.com>
Subject: [RFC PATCH 18/37] KVM: x86/mmu: Add common API for creating TDP PTEs
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R.
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193937_920642_47ABD3BA X-CRM114-Status: GOOD ( 14.98 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Introduce an API for construction of TDP PTEs. - tdp_mmu_make_leaf_pte() - tdp_mmu_make_nonleaf_pte() - tdp_mmu_make_huge_page_split_pte() - tdp_mmu_make_changed_pte_notifier_pte() This will be used in a future commit to move the TDP MMU to common code, while PTE construction will stay in the architecture-specific code. No functional change intended. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm/tdp_pgtable.h | 10 +++++++ arch/x86/kvm/mmu/tdp_mmu.c | 18 +++++-------- arch/x86/kvm/mmu/tdp_pgtable.c | 36 ++++++++++++++++++++++++++ 3 files changed, 52 insertions(+), 12 deletions(-) diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h index c5c4e4cab24a..ff2691ced38b 100644 --- a/arch/x86/include/asm/kvm/tdp_pgtable.h +++ b/arch/x86/include/asm/kvm/tdp_pgtable.h @@ -4,6 +4,7 @@ #include #include +#include struct kvm_mmu_page *tdp_mmu_root(struct kvm_vcpu *vcpu); @@ -57,4 +58,13 @@ kvm_pfn_t tdp_pte_to_pfn(u64 pte); void tdp_pte_check_leaf_invariants(u64 pte); +struct tdp_iter; + +u64 tdp_mmu_make_leaf_pte(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, + struct tdp_iter *iter, bool *wrprot); +u64 tdp_mmu_make_nonleaf_pte(struct kvm_mmu_page *sp); +u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter, + struct kvm_gfn_range *range); +u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte, + struct kvm_mmu_page *sp, int index); #endif /* !__ASM_KVM_TDP_PGTABLE_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 8155a9e79203..0172b0e44817 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1057,17 +1057,13 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, struct tdp_iter *iter) { struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep)); - u64 new_spte; int ret = RET_PF_FIXED; bool wrprot = false; + u64 new_spte; WARN_ON(sp->role.level != fault->goal_level); - if (unlikely(!fault->slot)) - new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL); - else - wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn, - fault->pfn, iter->old_spte, fault->prefetch, true, - fault->map_writable, &new_spte); + + new_spte = tdp_mmu_make_leaf_pte(vcpu, fault, iter, &wrprot); if (new_spte == iter->old_spte) ret = RET_PF_SPURIOUS; @@ -1117,7 +1113,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter, struct kvm_mmu_page *sp, bool shared) { - u64 spte = make_nonleaf_spte(sp->spt, !kvm_ad_enabled()); + u64 spte = tdp_mmu_make_nonleaf_pte(sp); int ret = 0; if (shared) { @@ -1312,9 +1308,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter, 
tdp_mmu_set_spte(kvm, iter, 0); if (!pte_write(range->pte)) { - new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte, - pte_pfn(range->pte)); - + new_spte = tdp_mmu_make_changed_pte_notifier_pte(iter, range); tdp_mmu_set_spte(kvm, iter, new_spte); } @@ -1466,7 +1460,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, * not been linked in yet and thus is not reachable from any other CPU. */ for (i = 0; i < TDP_PTES_PER_PAGE; i++) - sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte, sp->role, i); + sp->spt[i] = tdp_mmu_make_huge_page_split_pte(kvm, huge_spte, sp, i); /* * Replace the huge spte with a pointer to the populated lower level diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c index 97cc900e8818..e036ba0c6bee 100644 --- a/arch/x86/kvm/mmu/tdp_pgtable.c +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -5,6 +5,7 @@ #include "mmu.h" #include "spte.h" +#include "tdp_iter.h" /* Removed SPTEs must not be misconstrued as shadow present PTEs. */ static_assert(!(REMOVED_TDP_PTE & SPTE_MMU_PRESENT_MASK)); @@ -75,3 +76,38 @@ void tdp_pte_check_leaf_invariants(u64 pte) check_spte_writable_invariants(pte); } +u64 tdp_mmu_make_leaf_pte(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + struct tdp_iter *iter, + bool *wrprot) +{ + struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep)); + u64 new_spte; + + if (unlikely(!fault->slot)) + return make_mmio_spte(vcpu, iter->gfn, ACC_ALL); + + *wrprot = make_spte(vcpu, sp, fault->slot, ACC_ALL, iter->gfn, + fault->pfn, iter->old_spte, fault->prefetch, true, + fault->map_writable, &new_spte); + + return new_spte; +} + +u64 tdp_mmu_make_nonleaf_pte(struct kvm_mmu_page *sp) +{ + return make_nonleaf_spte(sp->spt, !kvm_ad_enabled()); +} + +u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter, + struct kvm_gfn_range *range) +{ + return kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte, + pte_pfn(range->pte)); +} + +u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte, + struct kvm_mmu_page *sp, int index) +{ + return make_huge_page_split_spte(kvm, huge_spte, sp->role, index); +} From patchwork Thu Dec 8 19:38:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068859 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 39F4FC001B2 for ; Thu, 8 Dec 2022 20:28:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=6QlG8I4ZiOE+F6HODazx+2eEiuaHOTgNF0vjswqwanE=; b=YR8BUkkVGukQNSbTIKW1gn1FFj RlSrNCW2CIM55uLC0s2wN7Fdkw3BmZ7hbuiZWDpPTEvSqbKYb3SCL+HTOPJEzF0oM2/PVvqvMesCS rLwo7fH5Vt/BVJQEQ0LIkGJcX2+2SvBTa6n6xHxzjK4BHIP+NVByjYBGcPJZretJnYz8HyKbVmKWt mtMJxCttIubz7y49Hd0FUODyLgFfyQpRxvUWLjbjgBSKFB8Zzo8T5jyQkoW8EtltQclXU6y+SCxkg 
Date: Thu, 8 Dec 2022 11:38:39 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
<20221208193857.4090582-20-dmatlack@google.com> Subject: [RFC PATCH 19/37] KVM: x86/mmu: Add arch hooks for NX Huge Pages From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_194615_569696_F3976D30 X-CRM114-Status: GOOD ( 13.35 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Abstract away the handling for NX Huge Pages down to arch-specific hooks. This will be used in a future commit to move the TDP MMU to common code despite NX Huge Pages, which is x86-specific. NX Huge Pages is by far the most disruptive feature in terms of needing the most arch hooks in the TDP MMU. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 57 +++++++++++++++++++--------------- arch/x86/kvm/mmu/tdp_pgtable.c | 52 +++++++++++++++++++++++++++++++ 2 files changed, 84 insertions(+), 25 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 0172b0e44817..7670fbd8e72d 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -269,17 +269,21 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) return sp; } +__weak void tdp_mmu_arch_init_sp(struct kvm_mmu_page *sp) +{ +} + static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep, gfn_t gfn, union kvm_mmu_page_role role) { - INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link); - set_page_private(virt_to_page(sp->spt), (unsigned long)sp); sp->role = role; sp->gfn = gfn; sp->ptep = sptep; + tdp_mmu_arch_init_sp(sp); + trace_kvm_mmu_get_page(sp, true); } @@ -373,6 +377,11 @@ static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp) atomic64_dec(&kvm->arch.tdp_mmu_pages); } +__weak void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, + bool shared) +{ +} + /** * tdp_mmu_unlink_sp() - Remove a shadow page from the list of used pages * @@ -386,20 +395,7 @@ static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, bool shared) { tdp_unaccount_mmu_page(kvm, sp); - - if (!sp->arch.nx_huge_page_disallowed) - return; - - if (shared) - spin_lock(&kvm->arch.tdp_mmu_pages_lock); - else - lockdep_assert_held_write(&kvm->mmu_lock); - - sp->arch.nx_huge_page_disallowed = false; - untrack_possible_nx_huge_page(kvm, sp); - - if (shared) - spin_unlock(&kvm->arch.tdp_mmu_pages_lock); + tdp_mmu_arch_unlink_sp(kvm, sp, shared); } /** @@ -1129,6 +1125,23 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter, return 0; } +__weak void tdp_mmu_arch_adjust_map_level(struct 
kvm_page_fault *fault, + struct tdp_iter *iter) +{ +} + +__weak void tdp_mmu_arch_pre_link_sp(struct kvm *kvm, + struct kvm_mmu_page *sp, + struct kvm_page_fault *fault) +{ +} + +__weak void tdp_mmu_arch_post_link_sp(struct kvm *kvm, + struct kvm_mmu_page *sp, + struct kvm_page_fault *fault) +{ +} + static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, struct kvm_mmu_page *sp, bool shared); @@ -1153,8 +1166,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) for_each_tdp_pte(iter, root, fault->gfn, fault->gfn + 1) { int r; - if (fault->arch.nx_huge_page_workaround_enabled) - disallowed_hugepage_adjust(fault, iter.old_spte, iter.level); + tdp_mmu_arch_adjust_map_level(fault, &iter); if (iter.level == fault->goal_level) break; @@ -1178,7 +1190,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp = tdp_mmu_alloc_sp(vcpu); tdp_mmu_init_child_sp(sp, &iter); - sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed; + tdp_mmu_arch_pre_link_sp(kvm, sp, fault); if (tdp_pte_is_present(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); @@ -1194,12 +1206,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) goto retry; } - if (fault->arch.huge_page_disallowed && - fault->req_level >= iter.level) { - spin_lock(&kvm->arch.tdp_mmu_pages_lock); - track_possible_nx_huge_page(kvm, sp); - spin_unlock(&kvm->arch.tdp_mmu_pages_lock); - } + tdp_mmu_arch_post_link_sp(kvm, sp, fault); } /* diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c index e036ba0c6bee..b07ed99b4ab1 100644 --- a/arch/x86/kvm/mmu/tdp_pgtable.c +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -111,3 +111,55 @@ u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte, { return make_huge_page_split_spte(kvm, huge_spte, sp->role, index); } + +void tdp_mmu_arch_adjust_map_level(struct kvm_page_fault *fault, + struct tdp_iter *iter) +{ + if (fault->arch.nx_huge_page_workaround_enabled) + disallowed_hugepage_adjust(fault, iter->old_spte, iter->level); +} + +void tdp_mmu_arch_init_sp(struct kvm_mmu_page *sp) +{ + INIT_LIST_HEAD(&sp->arch.possible_nx_huge_page_link); +} + +void tdp_mmu_arch_pre_link_sp(struct kvm *kvm, + struct kvm_mmu_page *sp, + struct kvm_page_fault *fault) +{ + sp->arch.nx_huge_page_disallowed = fault->arch.huge_page_disallowed; +} + +void tdp_mmu_arch_post_link_sp(struct kvm *kvm, + struct kvm_mmu_page *sp, + struct kvm_page_fault *fault) +{ + if (!fault->arch.huge_page_disallowed) + return; + + if (fault->req_level < sp->role.level) + return; + + spin_lock(&kvm->arch.tdp_mmu_pages_lock); + track_possible_nx_huge_page(kvm, sp); + spin_unlock(&kvm->arch.tdp_mmu_pages_lock); +} + +void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, + bool shared) +{ + if (!sp->arch.nx_huge_page_disallowed) + return; + + if (shared) + spin_lock(&kvm->arch.tdp_mmu_pages_lock); + else + lockdep_assert_held_write(&kvm->mmu_lock); + + sp->arch.nx_huge_page_disallowed = false; + untrack_possible_nx_huge_page(kvm, sp); + + if (shared) + spin_unlock(&kvm->arch.tdp_mmu_pages_lock); +} From patchwork Thu Dec 8 19:38:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068799 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 
Date: Thu, 8 Dec 2022 11:38:40 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-21-dmatlack@google.com> Subject: [RFC PATCH 20/37] KVM: x86/mmu: Abstract away computing the max mapping level From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114614_250939_E684A9DB X-CRM114-Status: GOOD ( 12.82 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Abstract away kvm_mmu_max_mapping_level(), which is an x86-specific function for computing the max level that a given GFN can be mapped in KVM's page tables. This will be used in a future commit to enable moving the TDP MMU to common code. Provide a default implementation for non-x86 architectures that just returns the max level. This will result in more zapping than necessary when disabling dirty logging (i.e. less than optimal performance) but no correctness issues. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 14 ++++++++++---- arch/x86/kvm/mmu/tdp_pgtable.c | 7 +++++++ 2 files changed, 17 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 7670fbd8e72d..24d1dbd0a1ec 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1696,6 +1696,13 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot); } +__weak int tdp_mmu_max_mapping_level(struct kvm *kvm, + const struct kvm_memory_slot *slot, + struct tdp_iter *iter) +{ + return TDP_MAX_HUGEPAGE_LEVEL; +} + static void zap_collapsible_spte_range(struct kvm *kvm, struct kvm_mmu_page *root, const struct kvm_memory_slot *slot) @@ -1727,15 +1734,14 @@ static void zap_collapsible_spte_range(struct kvm *kvm, /* * If iter.gfn resides outside of the slot, i.e. the page for * the current level overlaps but is not contained by the slot, - * then the SPTE can't be made huge. More importantly, trying - * to query that info from slot->arch.lpage_info will cause an + * then the SPTE can't be made huge. On x86, trying to query + * that info from slot->arch.lpage_info will cause an * out-of-bounds access. 
*/ if (iter.gfn < start || iter.gfn >= end) continue; - max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot, - iter.gfn, PG_LEVEL_NUM); + max_mapping_level = tdp_mmu_max_mapping_level(kvm, slot, &iter); if (max_mapping_level < iter.level) continue; diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c index b07ed99b4ab1..840d063c45b8 100644 --- a/arch/x86/kvm/mmu/tdp_pgtable.c +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -163,3 +163,10 @@ void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, if (shared) spin_unlock(&kvm->arch.tdp_mmu_pages_lock); } + +int tdp_mmu_max_mapping_level(struct kvm *kvm, + const struct kvm_memory_slot *slot, + struct tdp_iter *iter) +{ + return kvm_mmu_max_mapping_level(kvm, slot, iter->gfn, PG_LEVEL_NUM); +} From patchwork Thu Dec 8 19:38:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068806 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 319CBC4332F for ; Thu, 8 Dec 2022 19:49:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=Y5VUWfRSkxiQ/tA7ye1AO6vGGNJMJ7V9zxuN1n3JIVg=; b=F0Q/q7OnCxzdrFFAd/jy6yZ3PD FerdCRoJjJcvKLS67ikwR2tXHEgHmJsivfu2l7mR6yzYQ983TGBRL5jQ2aclHf1L0w2n8gHBK3Ep+ neR+PelCokcW2y1vzZjqTWRa5ukAIrN1iEhzNAxSlS+5XyzhY8Hq0TLQv+rK4VaAXGdwotBJeS53T wdtP+1vqO7J1eyur5Xsnm+KsFdYjuuFmSK4euFKB/n4eZkJzjVI7pP/dfOM52xchKGweZ/ks2Y8PZ S16LVZ8zb1hZ5TmJpDK2giSIYN7Yruy0dVCIdW3GdRRcB1gZTwJWdo2riT/UiTvHlUt1yD3uSC9HE Q7RFzMqA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MtE-00A3Ci-7E; Thu, 08 Dec 2022 19:49:04 +0000 Received: from mail-oi1-x24a.google.com ([2607:f8b0:4864:20::24a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mqx-00A1PO-KM for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:46:44 +0000 Received: by mail-oi1-x24a.google.com with SMTP id k26-20020a54469a000000b0035ac1417866so1076401oic.18 for ; Thu, 08 Dec 2022 11:46:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=4Zxu59m+BC3IHN6G+DZ7Gfccf72aG4aOXJ0CWcNu+xQ=; b=SdJhXds2L90YC8FNHmtWh5Fh9TEgerNWhfkuw8x3d9CBnfm/9eipZjv0gPdpJNH4I0 S+51SqBNWDjQiWTiQlTyu6PNy4niyZRnQhbIkbzuxQiNibwxH6mkZqPrFqBbyJTrkaSd 9Mn4ob8QKmJos009LfgYNnOTusdfrKp1xLSuoRmHmtBWBuBrOTTzfiu5JzBu+friunut 7j3QXKw2PfZd7vhhYsSAGJiGY/32mBAW8UAr3zD1WlpFMVlQcqneZ096cl3TmZsBGjzc 3x4d21+FAOrJhsugApLkXvPF1NkwgnZMzvQRH+9jPPL4ZLADP+yEglXH2v1r5QVynX/K vjCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
Date: Thu, 8 Dec 2022 11:38:41 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-22-dmatlack@google.com>
Subject: [RFC PATCH 21/37] KVM: Introduce CONFIG_HAVE_TDP_MMU
From: David Matlack
To: Paolo Bonzini

Introduce a new config option to gate support for the common TDP MMU.
This will be used in future commits to avoid compiling the TDP MMU code
and avoid adding fields to common structs (e.g. struct kvm) on
architectures that do not support the TDP MMU yet.

No functional change intended.
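As a rough sketch of how a gate like this is typically consumed (illustrative only; the stub pattern below is an assumption, not code from this series): architectures that select HAVE_TDP_MMU see the real declarations, while others can fall back to inline stubs so common callers need no #ifdefs of their own.

/*
 * Illustrative sketch only: one possible way to consume the
 * HAVE_TDP_MMU gate from a common header.
 */
struct kvm;

#ifdef CONFIG_HAVE_TDP_MMU
int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
#else
static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; }
static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
#endif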
Signed-off-by: David Matlack
---
 virt/kvm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 9fb1ff6f19e5..75d86794d6cf 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -92,3 +92,6 @@ config KVM_XFER_TO_GUEST_WORK
 
 config HAVE_KVM_PM_NOTIFIER
 	bool
+
+config HAVE_TDP_MMU
+	bool

From patchwork Thu Dec 8 19:38:42 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068810
Date: Thu, 8 Dec 2022 11:38:42 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
References: <20221208193857.4090582-1-dmatlack@google.com>
Message-ID: <20221208193857.4090582-23-dmatlack@google.com>
Subject: [RFC PATCH 22/37] KVM: x86: Select HAVE_TDP_MMU if X86_64
From: David Matlack
To: Paolo Bonzini

In preparation for moving the TDP MMU implementation into virt/kvm/
guarded behind HAVE_TDP_MMU, ensure that HAVE_TDP_MMU is selected in
X86_64 builds that enable KVM. This matches the existing behavior of
the TDP MMU in x86.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fbeaa9ddef59..849185d5020d 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -49,6 +49,7 @@ config KVM
 	select SRCU
 	select INTERVAL_TREE
 	select HAVE_KVM_PM_NOTIFIER if PM
+	select HAVE_TDP_MMU if X86_64
 	help
 	  Support hosting fully virtualized guest machines using hardware
 	  virtualization extensions. You will need a fairly recent

From patchwork Thu Dec 8 19:38:43 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068796
([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:ed05:0:b0:6c4:8a9:e4d2 with SMTP id k5-20020a25ed05000000b006c408a9e4d2mr92058723ybh.164.1670528385572; Thu, 08 Dec 2022 11:39:45 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:43 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-24-dmatlack@google.com> Subject: [RFC PATCH 23/37] KVM: MMU: Move VM-level TDP MMU state to struct kvm From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114550_356110_60FA4CBB X-CRM114-Status: GOOD ( 23.44 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move VM-level TDP MMU state to struct kvm so it can be accessed by common code in a future commit. No functional change intended. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm_host.h | 39 -------------------------------- arch/x86/kvm/mmu/tdp_mmu.c | 40 ++++++++++++++++----------------- arch/x86/kvm/mmu/tdp_pgtable.c | 8 +++---- include/kvm/mmu_types.h | 40 +++++++++++++++++++++++++++++++++ include/linux/kvm_host.h | 8 +++++++ 5 files changed, 72 insertions(+), 63 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 9cf8f956bac3..95c731028452 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1272,45 +1272,6 @@ struct kvm_arch { struct kvm_pmu_event_filter __rcu *pmu_event_filter; struct task_struct *nx_huge_page_recovery_thread; -#ifdef CONFIG_X86_64 - /* The number of TDP MMU pages across all roots. */ - atomic64_t tdp_mmu_pages; - - /* - * List of struct kvm_mmu_pages being used as roots. - * All struct kvm_mmu_pages in the list should have - * tdp_mmu_page set. - * - * For reads, this list is protected by: - * the MMU lock in read mode + RCU or - * the MMU lock in write mode - * - * For writes, this list is protected by: - * the MMU lock in read mode + the tdp_mmu_pages_lock or - * the MMU lock in write mode - * - * Roots will remain in the list until their tdp_mmu_root_count - * drops to zero, at which point the thread that decremented the - * count to zero should removed the root from the list and clean - * it up, freeing the root after an RCU grace period. 
- */ - struct list_head tdp_mmu_roots; - - /* - * Protects accesses to the following fields when the MMU lock - * is held in read mode: - * - tdp_mmu_roots (above) - * - the link field of kvm_mmu_page structs used by the TDP MMU - * - possible_nx_huge_pages; - * - the possible_nx_huge_page_link field of kvm_mmu_page structs used - * by the TDP MMU - * It is acceptable, but not necessary, to acquire this lock when - * the thread holds the MMU lock in write mode. - */ - spinlock_t tdp_mmu_pages_lock; - struct workqueue_struct *tdp_mmu_zap_wq; -#endif /* CONFIG_X86_64 */ - /* * If set, at least one shadow root has been allocated. This flag * is used as one input when determining whether certain memslot diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 24d1dbd0a1ec..b997f84c0ea7 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -21,9 +21,9 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm) if (!wq) return -ENOMEM; - INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); - spin_lock_init(&kvm->arch.tdp_mmu_pages_lock); - kvm->arch.tdp_mmu_zap_wq = wq; + INIT_LIST_HEAD(&kvm->tdp_mmu.roots); + spin_lock_init(&kvm->tdp_mmu.pages_lock); + kvm->tdp_mmu.zap_wq = wq; return 1; } @@ -42,10 +42,10 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm, void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) { /* Also waits for any queued work items. */ - destroy_workqueue(kvm->arch.tdp_mmu_zap_wq); + destroy_workqueue(kvm->tdp_mmu.zap_wq); - WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages)); - WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots)); + WARN_ON(atomic64_read(&kvm->tdp_mmu.pages)); + WARN_ON(!list_empty(&kvm->tdp_mmu.roots)); /* * Ensure that all the outstanding RCU callbacks to free shadow pages @@ -114,7 +114,7 @@ static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root { root->tdp_mmu_async_data = kvm; INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work); - queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work); + queue_work(kvm->tdp_mmu.zap_wq, &root->tdp_mmu_async_work); } static inline bool kvm_tdp_root_mark_invalid(struct kvm_mmu_page *page) @@ -173,9 +173,9 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, return; } - spin_lock(&kvm->arch.tdp_mmu_pages_lock); + spin_lock(&kvm->tdp_mmu.pages_lock); list_del_rcu(&root->link); - spin_unlock(&kvm->arch.tdp_mmu_pages_lock); + spin_unlock(&kvm->tdp_mmu.pages_lock); call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback); } @@ -198,11 +198,11 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, rcu_read_lock(); if (prev_root) - next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots, + next_root = list_next_or_null_rcu(&kvm->tdp_mmu.roots, &prev_root->link, typeof(*prev_root), link); else - next_root = list_first_or_null_rcu(&kvm->arch.tdp_mmu_roots, + next_root = list_first_or_null_rcu(&kvm->tdp_mmu.roots, typeof(*next_root), link); while (next_root) { @@ -210,7 +210,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, kvm_tdp_mmu_get_root(next_root)) break; - next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots, + next_root = list_next_or_null_rcu(&kvm->tdp_mmu.roots, &next_root->link, typeof(*next_root), link); } @@ -254,7 +254,7 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, * is guaranteed to be stable. 
*/ #define for_each_tdp_mmu_root(_kvm, _root, _as_id) \ - list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link) \ + list_for_each_entry(_root, &_kvm->tdp_mmu.roots, link) \ if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) && \ _root->role.as_id != _as_id) { \ } else @@ -324,9 +324,9 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) refcount_set(&root->root_refcount, 1); - spin_lock(&kvm->arch.tdp_mmu_pages_lock); - list_add_rcu(&root->link, &kvm->arch.tdp_mmu_roots); - spin_unlock(&kvm->arch.tdp_mmu_pages_lock); + spin_lock(&kvm->tdp_mmu.pages_lock); + list_add_rcu(&root->link, &kvm->tdp_mmu.roots); + spin_unlock(&kvm->tdp_mmu.pages_lock); out: return __pa(root->spt); @@ -368,13 +368,13 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn, static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp) { kvm_account_pgtable_pages((void *)sp->spt, +1); - atomic64_inc(&kvm->arch.tdp_mmu_pages); + atomic64_inc(&kvm->tdp_mmu.pages); } static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp) { kvm_account_pgtable_pages((void *)sp->spt, -1); - atomic64_dec(&kvm->arch.tdp_mmu_pages); + atomic64_dec(&kvm->tdp_mmu.pages); } __weak void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, @@ -1010,7 +1010,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) */ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm) { - flush_workqueue(kvm->arch.tdp_mmu_zap_wq); + flush_workqueue(kvm->tdp_mmu.zap_wq); } /* @@ -1035,7 +1035,7 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm) struct kvm_mmu_page *root; lockdep_assert_held_write(&kvm->mmu_lock); - list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) { + list_for_each_entry(root, &kvm->tdp_mmu.roots, link) { if (!root->role.invalid && !WARN_ON_ONCE(!kvm_tdp_mmu_get_root(root))) { root->role.invalid = true; diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c index 840d063c45b8..cc7b10f703e1 100644 --- a/arch/x86/kvm/mmu/tdp_pgtable.c +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -141,9 +141,9 @@ void tdp_mmu_arch_post_link_sp(struct kvm *kvm, if (fault->req_level < sp->role.level) return; - spin_lock(&kvm->arch.tdp_mmu_pages_lock); + spin_lock(&kvm->tdp_mmu.pages_lock); track_possible_nx_huge_page(kvm, sp); - spin_unlock(&kvm->arch.tdp_mmu_pages_lock); + spin_unlock(&kvm->tdp_mmu.pages_lock); } void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, @@ -153,7 +153,7 @@ void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, return; if (shared) - spin_lock(&kvm->arch.tdp_mmu_pages_lock); + spin_lock(&kvm->tdp_mmu.pages_lock); else lockdep_assert_held_write(&kvm->mmu_lock); @@ -161,7 +161,7 @@ void tdp_mmu_arch_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp, untrack_possible_nx_huge_page(kvm, sp); if (shared) - spin_unlock(&kvm->arch.tdp_mmu_pages_lock); + spin_unlock(&kvm->tdp_mmu.pages_lock); } int tdp_mmu_max_mapping_level(struct kvm *kvm, diff --git a/include/kvm/mmu_types.h b/include/kvm/mmu_types.h index 07c9962f9aea..8ccc48a1cd4c 100644 --- a/include/kvm/mmu_types.h +++ b/include/kvm/mmu_types.h @@ -136,4 +136,44 @@ enum { RET_PF_SPURIOUS, }; +struct tdp_mmu { + /* The number of TDP MMU pages across all roots. */ + atomic64_t pages; + + /* + * List of kvm_mmu_page structs being used as roots. + * All kvm_mmu_page structs in the list should have + * tdp_mmu_page set. 
+ * + * For reads, this list is protected by: + * the MMU lock in read mode + RCU or + * the MMU lock in write mode + * + * For writes, this list is protected by: + * the MMU lock in read mode + the tdp_mmu_pages_lock or + * the MMU lock in write mode + * + * Roots will remain in the list until their tdp_mmu_root_count + * drops to zero, at which point the thread that decremented the + * count to zero should remove the root from the list and clean + * it up, freeing the root after an RCU grace period. + */ + struct list_head roots; + + /* + * Protects accesses to the following fields when the MMU lock + * is held in read mode: + * - roots (above) + * - the link field of kvm_mmu_page structs used by the TDP MMU + * - (x86-only) possible_nx_huge_pages; + * - (x86-only) the arch.possible_nx_huge_page_link field of + * kvm_mmu_page structs used by the TDP MMU + * It is acceptable, but not necessary, to acquire this lock when + * the thread holds the MMU lock in write mode. + */ + spinlock_t pages_lock; + + struct workqueue_struct *zap_wq; +}; + #endif /* !__KVM_MMU_TYPES_H */ diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 469ff4202a0d..242eaed55320 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -45,6 +45,10 @@ #include #include +#ifdef CONFIG_HAVE_TDP_MMU +#include +#endif + #ifndef KVM_MAX_VCPU_IDS #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS #endif @@ -797,6 +801,10 @@ struct kvm { struct notifier_block pm_notifier; #endif char stats_id[KVM_STATS_NAME_SIZE]; + +#ifdef CONFIG_HAVE_TDP_MMU + struct tdp_mmu tdp_mmu; +#endif }; #define kvm_err(fmt, ...) \ From patchwork Thu Dec 8 19:38:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068805 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 812E7C4332F for ; Thu, 8 Dec 2022 19:49:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=MjlISX5BaHxbqS037DGutioUjdnplQ5wIgagunfIkzQ=; b=NhX8AdkLfisoGb/IhR4kkzYmwa uL9m1I1hZsryEHAXUQTmDi09lKBlTR5dXNI4RbOd3wnal8ddJndykjaXoDynE1OKtVEor1RYDY08W XOeqwDs6otaLm7zuU1kC3v3jWcvgFCWq/kWx9hE0CFhYZw9CvjYbzYW+BY6aNxxE+cOnLyNSjG2Ex al/5w7YM94lIP8RA5jjsp6itWNab6e5oV2I5c9D1mOdiESBVs1tG6Ttnz/+IBrS39gQDiLdgTxaxh n0ms1vGHIG6XO0liErKmnwQwEPQJwRpVH9g945k2H5fiPkmVwxKcy98K+++tWnHZw8ifQYRmRXkDM Pw6isqIw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mt1-00A36D-TQ; Thu, 08 Dec 2022 19:48:52 +0000 Received: from mail-oi1-x24a.google.com ([2607:f8b0:4864:20::24a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mqm-00A1S8-Qk for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:46:34 +0000 Received: by mail-oi1-x24a.google.com with SMTP id
o17-20020a0568080bd100b0035e30a861f7so1080116oik.14 for ; Thu, 08 Dec 2022 11:46:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=1a1z/b3owkRfttdiEuiESfzblYsje7V7GrYJ8kI0Po8=; b=F0XYX+U7NiHCkK7bsbIKwHt4xn5xSLrANWygs7k8NFHbu1R2R/rmJKDDnvUWdIHP92 Ly2IYp0NOWTwXh63NaBMZcPqgDrfFaYYlcINvpDuRoegsyJJVoxOj7s7XDFnwq4W8GTC f+4upgVNo5G4bbwSbrkVhi3YHBvy4jYzsDFUG9wwYOpDkZx7UBWzJoj2myxNLvIdqZ4W pfI5REShGLM7+b3RergwOgWIMIsTPACWYsmBoNyo8t79eVgzWxxkGk1mGWxh5mVT27dB xSUuXJrP+ASrLLbWhs1/6bZw6Zgr2IKK/y+UkG+xQAvodbxK4QFJj/l93proPVAsOFHJ 1lsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=1a1z/b3owkRfttdiEuiESfzblYsje7V7GrYJ8kI0Po8=; b=1fbGW4gLNQySA8Xp4ClVwcLx8j4qYJuhwptjUx5p0Mn3SLssCJtpFNPVmjmLLM+U0L nhEmE8xz/z77osgsc5RTQAPd2ISZckGaoXe/1XOzs50XXYcQgCYh9KIo9OPOYpIQh5tW GH8WEPfcfWppemepKZwqWpMKRzIdcbN++42jaicvzDX/ay1JSwprzLgnG7F+swmUfRyz 7CauhjVlaHqMTUol3LxA83knVs0kEjteWyU7u3dxBPaSKrl3ECSMtfNtLrcfFc1gsTOR FQoUnTTxjlOI1sbiPId2P4hen39X7pi2JitBODGKFpa90p2fArlbwOOS+Nvf+sJMk0XR VXCQ== X-Gm-Message-State: ANoB5pk/B5usja5ZT+xvGs5Y3gqaS5dQKyHWQSYNy7m2txGnwzCHS6X3 glJzoK+kPmaH8+SKdKx5HimJashknBzDNg== X-Google-Smtp-Source: AA0mqf5kngnEG6TtWra1+xFIY37aBRwogffq20uJpUzBVyAWsE1WUGsuHoa/DzL+7L0/Sgg0ER4DuRQldLsQMw== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a17:90b:1997:b0:219:8ee5:a226 with SMTP id mv23-20020a17090b199700b002198ee5a226mr29019577pjb.13.1670528387307; Thu, 08 Dec 2022 11:39:47 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:44 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-25-dmatlack@google.com> Subject: [RFC PATCH 24/37] KVM: x86/mmu: Move kvm_mmu_hugepage_adjust() up to fault handler From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114632_898933_55B1D273 X-CRM114-Status: GOOD ( 11.63 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move the call to kvm_mmu_hugepage_adjust() up to the fault handler rather than calling it from kvm_tdp_mmu_map(). 
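For illustration, the resulting shape of the TDP MMU fault path is roughly the following (a simplified sketch only, with locking, retry and error handling elided):

static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
				  struct kvm_page_fault *fault)
{
	/* ... fault setup and mmu_lock handling elided ... */

	/* Huge page adjustment is now done by the fault handler... */
	kvm_mmu_hugepage_adjust(vcpu, fault);

	/* ...so kvm_tdp_mmu_map() no longer calls it internally. */
	return kvm_tdp_mmu_map(vcpu, fault);
}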
Also make the same change to direct_map() for consistency. This reduces the TDP MMU's dependency on an x86-specific function, so that it can be moved into common code. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 6 ++++-- arch/x86/kvm/mmu/tdp_mmu.c | 2 -- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 0593d4a60139..9307608ae975 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3151,8 +3151,6 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) int ret; gfn_t base_gfn = fault->gfn; - kvm_mmu_hugepage_adjust(vcpu, fault); - trace_kvm_mmu_spte_requested(fault); for_each_shadow_entry(vcpu, fault->addr, it) { /* @@ -4330,6 +4328,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (r) goto out_unlock; + kvm_mmu_hugepage_adjust(vcpu, fault); + r = direct_map(vcpu, fault); out_unlock: @@ -4408,6 +4408,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, if (is_page_fault_stale(vcpu, fault)) goto out_unlock; + kvm_mmu_hugepage_adjust(vcpu, fault); + r = kvm_tdp_mmu_map(vcpu, fault); out_unlock: diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index b997f84c0ea7..e6708829714c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1157,8 +1157,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) struct kvm_mmu_page *sp; int ret = RET_PF_RETRY; - kvm_mmu_hugepage_adjust(vcpu, fault); - trace_kvm_mmu_spte_requested(fault); rcu_read_lock(); From patchwork Thu Dec 8 19:38:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 82185C001B2 for ; Thu, 8 Dec 2022 19:46:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=pcrx/EmqR5Z0kONuHXOKt0fDV//IUWmnIVZNwQ/GKGQ=; b=GHWa9V1eL8HD0HUMGwbZYADBmY E9RhljcqerTQySuWI6vz+bapR+B8tDMNyU+J3VaVqic7KsydkFsPbjv0SWeGILe/SODAjeR+qa/i4 RePKX59v7sxBNabhcCy4PlNK/yLX19I7cxSZ8KyHcvopX4VAQjGFO7aeh9a8Iz5389C68s2dt91LM gVVTNBuYEcOz0d97adeh83RCe8SxtCI4cETqJaxOioZiwRw5zW1PFkOjhtPgDwAcI98d94vCjxa+O FcJkSmL26450t3Wse1K/pCyq0D6h+304VWM7k1oo4/uzHRPZvDdgJ9xSwSR2Y7uKi7F+x6gJKg6Em 3j/0eiQQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MqB-00A0xG-Ul; Thu, 08 Dec 2022 19:45:56 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mq5-00A0pg-PU for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:45:51 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id 
i19-20020a253b13000000b0070358cca7f7so2523250yba.9 for ; Thu, 08 Dec 2022 11:45:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=PeJRMD1iiEfAcW0wse8YvMRXDysx/LxLCpDqGqOGbOE=; b=it3ha/1H2vQVcMvrwLK2XI6kFW9SfZwdQ0HxxUSc3Ag107nHhiqTSUqlouPQ69hlpA gVlMznc/cXRCFulmlBil4kKdWgDSAUCR93SntgLaDJWCpUTB7FzwaETswPW9lX1yNG07 0tWWOwYFihDwyLqkpiOMrsIryZ3SjYTTI8b4R1X9oFZnj7/UyRLh3XP6ECDzERPz2C+i B3prUkRr3KEH0wVuRrtyBo09IY2F7MxhaUoLo7LOK5D3eYUuVBFYj36vL7P7ZiNQEA7c 1QUJk9A2erTf8dkQPtJhbcBzs55NBkUNAK3ruLZHLQeHsz23ls2YGKCDBr5v8J7edZjj uhPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=PeJRMD1iiEfAcW0wse8YvMRXDysx/LxLCpDqGqOGbOE=; b=1/OkZMyKr3cjRtLvKeL7IXDA2ROupAFAnq7UbUN6R3V0ZtiT2CD4jKY6lYLxYXCzNN 98gvL+OAm7GZfrStlVkBlK+jr85I0xc2bHLgndInZ42nGaeL9HW6b15k4QueQ+YutFGu m7lXO7lskhsGMO7o67pIR+eLbov2cCk3aU6apZ6J5wfZPxJ7RxfE4WBx9GJGdre8E4yv uIiHTHnqCN1ab0pjb9kHcYQmfKOQXBJ24NdqlyDm+EayiA32q/EnalhkVG9bYr1ZYZ0h tKUWBq5LTU2sw6lrqsvvN8RsJvXJhKuYB2ztcq7KCHkJjLPXr2za2bU/sPMj+ZKUYeyi K/Ag== X-Gm-Message-State: ANoB5pl38bBuovzIvp0eDYMP949nMb18B3T8lXfCzJQJf23su7JV/17m Bc17PRcqTz+GCWUX6YmhOjtyLPlgDmS27A== X-Google-Smtp-Source: AA0mqf42i4HuA7nX3hyDm6T4qhjEe3EzuwVfCeNTiMqXBDpZ0EVA3TMnNfFvR5iVyImke8GmCkOtSd5OsKu+oA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:6602:0:b0:6f9:890c:6468 with SMTP id a2-20020a256602000000b006f9890c6468mr39021311ybc.610.1670528389040; Thu, 08 Dec 2022 11:39:49 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:45 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-26-dmatlack@google.com> Subject: [RFC PATCH 25/37] KVM: x86/mmu: Pass root role to kvm_tdp_mmu_get_vcpu_root_hpa() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114549_857613_8946EED3 X-CRM114-Status: GOOD ( 12.19 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Pass the root role from the caller rather than grabbing it from vcpu->arch.mmu. 
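Roughly, the caller now passes the role it already has on hand, along these lines (simplified sketch of the mmu_alloc_direct_roots() call site):

	/* The role comes from the caller's struct kvm_mmu instead of being
	 * re-derived from vcpu->arch.mmu inside the TDP MMU. */
	root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, mmu->root_role);
	mmu->root.hpa = root;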
This will enable the TDP MMU to be moved to common code in a future commit by removing a dependency on vcpu->arch. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 4 ++-- arch/x86/kvm/mmu/tdp_mmu.h | 3 ++- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 9307608ae975..aea7df3c2dcb 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3612,7 +3612,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu) goto out_unlock; if (tdp_mmu_enabled) { - root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu); + root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu, mmu->root_role); mmu->root.hpa = root; } else if (shadow_root_level >= PT64_ROOT_4LEVEL) { root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index e6708829714c..c5d1c9010d21 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -301,9 +301,9 @@ static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp, tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role); } -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, + union kvm_mmu_page_role role) { - union kvm_mmu_page_role role = vcpu->arch.mmu->root_role; struct kvm *kvm = vcpu->kvm; struct kvm_mmu_page *root; diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index e6a929089715..897608be7f75 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -10,7 +10,8 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm); void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm); -hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); +hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu, + union kvm_mmu_page_role role); __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root) { From patchwork Thu Dec 8 19:38:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068782 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 381BEC4332F for ; Thu, 8 Dec 2022 19:40:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=qfob/EvLSZQizk8NH4bTrnCKERWd/y35+y6vSXUoRm8=; b=dmVckDmxEuTMxkT3R25GRyelE1 Sokw+8pPWKw6L1YOmduu5wMSsuPZwEcIlTGt7/pA7j0giPL2PgeO4OEc7w28ID27q3mO0nO/AoWSD IA7wm96Q+L0PsCfSDtbAr94wJe3nslBb88nWqwdxE1FySn1irnyYkpGrqZHTrNcJybyLHaDK3tsL4 H6m0wBXCNjmeTbX0NqwyKgtr1pKDmzKLvTyuepM7b/yhkQzuJcvOIEAPMJiFBnvu6DM/eUCuNUp6A qjIwn/f0S8IF3KAfzyoDdwJH+PZUcGg6viRfDAC8hSOcKKnI156TMVeNKSLWL8MjkcRgj1SYoTY9R 2MIGussA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mkh-009tJR-3J; Thu, 08 Dec 2022 
19:40:15 +0000 Received: from mail-yw1-x114a.google.com ([2607:f8b0:4864:20::114a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MkK-009ssI-8p for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:39:55 +0000 Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-3d0465d32deso25015737b3.20 for ; Thu, 08 Dec 2022 11:39:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=4kZ8qvn0xSGEDmN1BULNYrOl9BfWlHWgwc5Hcokigkg=; b=EZdYCeKRzwTZI0V7uIlvSkGrs4qHYIezoRhReYtQLA0czf3gVL/1TNq7yKqDrl8GTn oQz0Z7XBBQH/yEQMO1SGLwncbAgZKGbmAOjFD34HdUxLfAxVVglbdugT3hkL39u2KBaC zXXOHY89LcKvSnZnJXUb8dDzrzFe4X7e3xR+WnsmfduMxTHD82PxDk3bNR9/eVBZSZCJ 9KSr/LExzaCvHBnmGNmO5FnVmhBkHRZqq/6nuGCySvf95RTgk84Zq8NxZx5p92YocoN3 edBU29eHd/xnGcG9xhXPifzW3E7yu6pOK5Yi9gnxeJFRaCY91DpX/jIM8sH3OWbKiQbv X7yQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=4kZ8qvn0xSGEDmN1BULNYrOl9BfWlHWgwc5Hcokigkg=; b=NmnerKkikFQvr0TQ00SPXzggqazm2L1HPSQLi7mMQI71L3WGwsYO5rdUmsTlMFBun/ FS1LLyII+EzbBWbCGKEcYy9WCkrdYJSkIq5TufJjxunZi+sM0IW1WLqyPIWKwY3SsX98 sZDz0WN8sHbn7XeVJlxQSkN5sjtrxbLBTdefnOLqutQIrZlylbHNeNsNTQA9tBFQT6wZ nwOuNwsvWdHYDhWQUq3hHejERDEkNFIiSmiwCAMUU4ImFlZ70MipYhDaavY5zentndPj Vkl8fU/gtlxhUXeDuJKQBbp1bk0x/4f9tqzfZFVXl4tWOBLYF6LED28Wnq3MApcL1vGa Jw7w== X-Gm-Message-State: ANoB5pl9Whk7arWFOlodvMQvGrfqsQbrnHik4yXtdv9RE2kLfO9PXrN+ 3NjedzcBTvGtAU03PgqMKVQYu0TQxQ1Tkg== X-Google-Smtp-Source: AA0mqf7VJDwYeOhSUwL8eiN/HYDWJI0TjJovtybda2o797oyUV7aBV39Tw6SLnFjKIqdIxSPeuhwuLTwG1fExA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:dfc3:0:b0:6f3:2748:7469 with SMTP id w186-20020a25dfc3000000b006f327487469mr53271438ybg.564.1670528390734; Thu, 08 Dec 2022 11:39:50 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:46 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-27-dmatlack@google.com> Subject: [RFC PATCH 26/37] KVM: Move page table cache to struct kvm_vcpu From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_113952_319937_C9FCDD8C X-CRM114-Status: GOOD ( 19.46 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Add a kvm_mmu_memory_cache to struct kvm_vcpu for allocating pages for page tables, replacing existing architecture-specific page table caches. Purposely do not make management of this cache architecture-neutral though to reduce churn and since not all architectures configure it the same way (MIPS does not set __GFP_ZERO.) This eliminates a dependency of the TDP MMU on an architecture-specific field, which will be used in a future commit to move the TDP MMU to common code. Signed-off-by: David Matlack --- arch/arm64/include/asm/kvm_host.h | 3 --- arch/arm64/kvm/arm.c | 4 ++-- arch/arm64/kvm/mmu.c | 2 +- arch/mips/include/asm/kvm_host.h | 3 --- arch/mips/kvm/mmu.c | 4 ++-- arch/riscv/include/asm/kvm_host.h | 3 --- arch/riscv/kvm/mmu.c | 2 +- arch/riscv/kvm/vcpu.c | 4 ++-- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/mmu/mmu.c | 8 ++++---- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- include/linux/kvm_host.h | 5 +++++ 12 files changed, 18 insertions(+), 23 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 001c8abe87fc..da519d6c09a5 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -473,9 +473,6 @@ struct kvm_vcpu_arch { /* vcpu power state */ struct kvm_mp_state mp_state; - /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; - /* Target CPU and feature flags */ int target; DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 9c5573bc4614..0e0d4c4f79a2 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -340,7 +340,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.target = -1; bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); - vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; + vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO; /* * Default value for the FP state, will be overloaded at load @@ -375,7 +375,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm))) static_branch_dec(&userspace_irqchip_in_use); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); + kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache); kvm_timer_vcpu_terminate(vcpu); kvm_pmu_vcpu_destroy(vcpu); diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 31d7fa4c7c14..d431c5cdb26a 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1196,7 +1196,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, bool device = false; unsigned long mmu_seq; struct kvm *kvm = vcpu->kvm; - struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; + struct kvm_mmu_memory_cache 
*memcache = &vcpu->mmu_page_table_cache; struct vm_area_struct *vma; short vma_shift; gfn_t gfn; diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h index 5cedb28e8a40..b7f276331583 100644 --- a/arch/mips/include/asm/kvm_host.h +++ b/arch/mips/include/asm/kvm_host.h @@ -344,9 +344,6 @@ struct kvm_vcpu_arch { /* Bitmask of pending exceptions to be cleared */ unsigned long pending_exceptions_clr; - /* Cache some mmu pages needed inside spinlock regions */ - struct kvm_mmu_memory_cache mmu_page_cache; - /* vcpu's vzguestid is different on each host cpu in an smp system */ u32 vzguestid[NR_CPUS]; diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c index 74cd64a24d05..638f728d0bbe 100644 --- a/arch/mips/kvm/mmu.c +++ b/arch/mips/kvm/mmu.c @@ -27,7 +27,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu) { - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); + kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache); } /** @@ -589,7 +589,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, pte_t *out_entry, pte_t *out_buddy) { struct kvm *kvm = vcpu->kvm; - struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; + struct kvm_mmu_memory_cache *memcache = &vcpu->mmu_page_table_cache; gfn_t gfn = gpa >> PAGE_SHIFT; int srcu_idx, err; kvm_pfn_t pfn; diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h index dbbf43d52623..82e5d80347cc 100644 --- a/arch/riscv/include/asm/kvm_host.h +++ b/arch/riscv/include/asm/kvm_host.h @@ -219,9 +219,6 @@ struct kvm_vcpu_arch { /* SBI context */ struct kvm_sbi_context sbi_context; - /* Cache pages needed to program page tables with spinlock held */ - struct kvm_mmu_memory_cache mmu_page_cache; - /* VCPU power-off state */ bool power_off; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 3620ecac2fa1..a8281a65cb3d 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -625,7 +625,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, gfn_t gfn = gpa >> PAGE_SHIFT; struct vm_area_struct *vma; struct kvm *kvm = vcpu->kvm; - struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache; + struct kvm_mmu_memory_cache *pcache = &vcpu->mmu_page_table_cache; bool logging = (memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY)) ? 
true : false; unsigned long vma_pagesize, mmu_seq; diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 71ebbc4821f0..9a1001ada936 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -160,7 +160,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) /* Mark this VCPU never ran */ vcpu->arch.ran_atleast_once = false; - vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; + vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO; bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX); /* Setup ISA features available to VCPU */ @@ -211,7 +211,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) kvm_riscv_vcpu_timer_deinit(vcpu); /* Free unused pages pre-allocated for G-stage page table mappings */ - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); + kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache); } int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 95c731028452..8cac8ec29324 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -714,7 +714,6 @@ struct kvm_vcpu_arch { struct kvm_mmu *walk_mmu; struct kvm_mmu_memory_cache mmu_pte_list_desc_cache; - struct kvm_mmu_memory_cache mmu_shadow_page_cache; struct kvm_mmu_memory_cache mmu_shadowed_info_cache; struct kvm_mmu_memory_cache mmu_page_header_cache; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index aea7df3c2dcb..a845e9141ad4 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -664,7 +664,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM); if (r) return r; - r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache, + r = kvm_mmu_topup_memory_cache(&vcpu->mmu_page_table_cache, PT64_ROOT_MAX_LEVEL); if (r) return r; @@ -681,7 +681,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) { kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache); + kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); } @@ -2218,7 +2218,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu, { struct shadow_page_caches caches = { .page_header_cache = &vcpu->arch.mmu_page_header_cache, - .shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache, + .shadow_page_cache = &vcpu->mmu_page_table_cache, .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache, }; @@ -5920,7 +5920,7 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache; vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO; - vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO; + vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO; vcpu->arch.mmu = &vcpu->arch.root_mmu; vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index c5d1c9010d21..922815407b7e 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -264,7 +264,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) struct kvm_mmu_page *sp; sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); - sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache); + sp->spt = 
kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_table_cache); return sp; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 242eaed55320..0a9baa493760 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -387,6 +387,11 @@ struct kvm_vcpu { */ struct kvm_memory_slot *last_used_slot; u64 last_used_slot_gen; + +#ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE + /* Cache used to allocate pages for use as page tables. */ + struct kvm_mmu_memory_cache mmu_page_table_cache; +#endif }; /* From patchwork Thu Dec 8 19:38:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068862 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 53784C4332F for ; Thu, 8 Dec 2022 20:31:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=jVHtxWSOfXBYgYevnMTx0Dfz/3+PCRzT3NBNR2SmgN8=; b=2bxP+CQGrG+U/C3G0LktjbUKxq nIBZjg9HhlSdAFxdbjBEE0ryWQ2GsyL0tHgGD6L+OS/75SCu+QcD/0O2kR14IxfW4mhKMwpHJa+xA 0Wp2FTOSdG8X0ZUV0b15xSNdEpMkHXagHWvnLZ7m8jouktLOCfJ6+DY6/Ce92LsDaQcH6KLmF1rfm yXUgDP38GuJQ3XvXb9ye/iNhd+LHFnDgH1AHszywWY/RPiCWQhXDftTfjiw3A/lZZdcVVhdcEjVaW 0mHp0gfau3dUi3Bh635Phryau/4lFmOXGRnQIALCf9N3bmeOTnG2QiyFuyIeMOHweEpq0tMMdqmMi oAbvxJSA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NXq-00Ajok-Tn; Thu, 08 Dec 2022 20:31:03 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NUt-00AgYd-0I for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 20:27:59 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=dgqMnMt1QIlmBLa3o//o520F4RUXcCCtqFrzTbNkpWk=; b=lB1+TylSjOwJcGg+7HEzkzXIWC 9tKKtVZ+b3HyFyhY2gyu6pCotzZXgpMN5XIgecwnsXXMmsOktnE6+zKqXP5ro+Pft7ZUorwpw1+Iy rLVP7klQRC/rjFuwzt+AqRayri5V/c+XiqGg8p36QSCsMT/Y5k1NyA/3eEzhELq7ukC98jeRyzeGs NUxhbmbFPP4iMtNwwIxvddJkr/pH1710xvoDAoM/MkI7bZIXBHZmPKSpxhx94F8uZzECfQOltyRNf Ada0miLAHnKJ69g/fKmuRyDeUTeXqEyoAXWPWGWblA3gvTD5wuYs3TLJ8HoNyAy8V0I1mU3APAYen QnSAf9Rg==; Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MkL-008Zly-1N for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:39:55 +0000 Received: by mail-yb1-xb49.google.com with SMTP id c188-20020a25c0c5000000b006d8eba07513so2557766ybf.17 for ; Thu, 08 Dec 2022 11:39:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=dgqMnMt1QIlmBLa3o//o520F4RUXcCCtqFrzTbNkpWk=; b=f4QHeg61IzowzK+7yuPHF1TXnPTFwNskqnQKjcsu0WxAKIY1McB7wZRn+VuSfBM9b8 ylyyA9kcFWxQm6+GX+bMJZFHcWTGXb7YNV/fpxz2TqMv2QX/5/VLhbXkWs5BZNCv8lvE iAqaka88+2+N5tw21tP1s0c6lUAdukNFngVoXini/VGsGOpfrraDUlE9nKTf3FFTHm9y KUTGqSOH2A4NquRC0hECU12i0F9wQKuKP4xBOjtC4vxLFMrVdTK8VzxOC9xjDjBZVdgu 7TIFyQpSoyu3BW883wqt4S03Vt2JnLjwrIPiyoLIXLGjApjxjBQRjkRdpzCFxXjOiqSy 115A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=dgqMnMt1QIlmBLa3o//o520F4RUXcCCtqFrzTbNkpWk=; b=tnBfSbDoYGzvZgDawMhaxkq26E6dvOX6c+Yw9wgxigvzsJXu7Zz504itzhI0iXsrRK KTin1TCnEfC9bovz6fHX4+uxpFmpkEbRbIy1imuaD79YEkXMlozVk8HLkN0ISM0or3QA AjYsLscUzXyXTuMjc4RcY6AHRxF9+GnQgzh+kOrOhNOWdD0VVj1JNGZ8O05e5Oh7uI2n lulJCv1DVVOfZXLXFkZ1D9ATspcW+FUA90nLW3UsHfvHM0bIKI2br9gqEO8RPGGYb7O6 kgJTst3xvFeitrAunLF0ynZCwKp86W5I20Y9K8QoIwcqclyRleWGuiEd7vHOibswDW7J 787A== X-Gm-Message-State: ANoB5pk9P2tCnbKiVXrte4xak0tGje91cxg8d6DS27yrcTAtHsGAZAnt 1xS3K65B81GbEeOtV15hwjRtXKDpyd1jwA== X-Google-Smtp-Source: AA0mqf5mi9mDJ9tYVtakRomun512INrgKlNeUrlvHGHQBPGnddPX6ZC9IDNboVvCBGP15bWz6s6DsMaL8j57qA== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:b885:0:b0:701:49ca:8ae8 with SMTP id w5-20020a25b885000000b0070149ca8ae8mr16438842ybj.553.1670528392392; Thu, 08 Dec 2022 11:39:52 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:47 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-28-dmatlack@google.com> Subject: [RFC PATCH 27/37] KVM: MMU: Move mmu_page_header_cache to common code From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_193953_272670_189430BD X-CRM114-Status: GOOD ( 16.15 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move vcpu->arch.mmu_page_header_cache and its backing kmem_cache to common code in preparation for moving the TDP MMU to common code in a future commit. The kmem_cache is still only initialized and used on x86 for the time being to avoid affecting other architectures. A future commit can move the code to manage this cache to common code. 
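For reference, the way this cache is used on x86 today is, roughly, to top it up before taking the MMU lock and then allocate from it while the lock is held (simplified sketch of the existing pattern):

	/* Fill the per-vCPU cache up front, where sleeping is allowed... */
	r = kvm_mmu_topup_memory_cache(&vcpu->mmu_page_header_cache,
				       PT64_ROOT_MAX_LEVEL);
	if (r)
		return r;

	/* ...then allocate kvm_mmu_page structs from it under the MMU lock. */
	sp = kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_header_cache);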
No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 11 +++++------ arch/x86/kvm/mmu/mmu_internal.h | 2 -- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- include/kvm/mmu.h | 2 ++ include/linux/kvm_host.h | 3 +++ virt/kvm/kvm_main.c | 1 + 6 files changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a845e9141ad4..f01ee01f3509 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -163,7 +163,6 @@ struct kvm_shadow_walk_iterator { __shadow_walk_next(&(_walker), spte)) static struct kmem_cache *pte_list_desc_cache; -struct kmem_cache *mmu_page_header_cache; static struct percpu_counter kvm_total_used_mmu_pages; static void mmu_spte_set(u64 *sptep, u64 spte); @@ -674,7 +673,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect) if (r) return r; } - return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache, + return kvm_mmu_topup_memory_cache(&vcpu->mmu_page_header_cache, PT64_ROOT_MAX_LEVEL); } @@ -683,7 +682,7 @@ static void mmu_free_memory_caches(struct kvm_vcpu *vcpu) kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache); kvm_mmu_free_memory_cache(&vcpu->mmu_page_table_cache); kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache); - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache); + kvm_mmu_free_memory_cache(&vcpu->mmu_page_header_cache); } static void mmu_free_pte_list_desc(struct pte_list_desc *pte_list_desc) @@ -2217,7 +2216,7 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu, union kvm_mmu_page_role role) { struct shadow_page_caches caches = { - .page_header_cache = &vcpu->arch.mmu_page_header_cache, + .page_header_cache = &vcpu->mmu_page_header_cache, .shadow_page_cache = &vcpu->mmu_page_table_cache, .shadowed_info_cache = &vcpu->arch.mmu_shadowed_info_cache, }; @@ -5917,8 +5916,8 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu) vcpu->arch.mmu_pte_list_desc_cache.kmem_cache = pte_list_desc_cache; vcpu->arch.mmu_pte_list_desc_cache.gfp_zero = __GFP_ZERO; - vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache; - vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO; + vcpu->mmu_page_header_cache.kmem_cache = mmu_page_header_cache; + vcpu->mmu_page_header_cache.gfp_zero = __GFP_ZERO; vcpu->mmu_page_table_cache.gfp_zero = __GFP_ZERO; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index d3c1d08002af..4aa60d5d87b0 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -44,8 +44,6 @@ extern bool dbg; #define INVALID_PAE_ROOT 0 #define IS_VALID_PAE_ROOT(x) (!!(x)) -extern struct kmem_cache *mmu_page_header_cache; - static inline bool kvm_mmu_page_ad_need_write_protect(struct kvm_mmu_page *sp) { /* diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 922815407b7e..891877a6fb78 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -263,7 +263,7 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu) { struct kvm_mmu_page *sp; - sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache); + sp = kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_header_cache); sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->mmu_page_table_cache); return sp; diff --git a/include/kvm/mmu.h b/include/kvm/mmu.h index 425db8e4f8e9..f1416828f8fe 100644 --- a/include/kvm/mmu.h +++ b/include/kvm/mmu.h @@ -4,6 +4,8 @@ #include +extern struct kmem_cache *mmu_page_header_cache; + static inline struct kvm_mmu_page 
*to_shadow_page(hpa_t shadow_page) { struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 0a9baa493760..ec3a6de6d54e 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -391,6 +391,9 @@ struct kvm_vcpu { #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE /* Cache used to allocate pages for use as page tables. */ struct kvm_mmu_memory_cache mmu_page_table_cache; + + /* Cache used to allocate kvm_mmu_page structs. */ + struct kvm_mmu_memory_cache mmu_page_header_cache; #endif }; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 954ab969f55e..0f1d48ed7d57 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -108,6 +108,7 @@ static int kvm_usage_count; static atomic_t hardware_enable_failed; static struct kmem_cache *kvm_vcpu_cache; +struct kmem_cache *mmu_page_header_cache; static __read_mostly struct preempt_ops kvm_preempt_ops; static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_running_vcpu); From patchwork Thu Dec 8 19:38:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068794 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 31A7BC4332F for ; Thu, 8 Dec 2022 19:45:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=WX74SFVJKNr1tBE6eN5AGPvOwLq+3oBeZfMhsRmXr5I=; b=nXslPr0em31zHY7scZN0qdanMy BxwXg5T6lrM63KO1wQ904T3nJsp54t+LOshXx8gOliZ96czX/VdNHZhelB4gKdfbvCiRWXHFvpTR5 Hi5oXuAUgCgxQbOF0T60ylvOYO3tQvCSOayQUavhQaPqDYKnMmXgff+nOqu8rtQ5wN/5ROlVcRHol B6yonZji0G3n6HLpeeu8SqKNEnv7E0gZIETa7L3dnbjd2KwsSshf1IVLx0SC6GFznriPcZIQjCexA M5kO1DexAbzlHemA1TjXXwkydiwIism1PGGSbPrpswE7TcXC8tK9yjzxMWBClwY5M4U+RWQhXpdds aS6YLe5g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mpy-00A0jU-0C; Thu, 08 Dec 2022 19:45:42 +0000 Received: from mail-oa1-x4a.google.com ([2001:4860:4864:20::4a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mpu-00A0ds-KB for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:45:40 +0000 Received: by mail-oa1-x4a.google.com with SMTP id 586e51a60fabf-14459615659so1077213fac.1 for ; Thu, 08 Dec 2022 11:45:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=XBAoBS5l4frpeZWkoJY/UpoQSbDyst8wBhQ9eKXZWXc=; b=nlEUPxR6NdyPJq+SxFDJtYRvDL+tSADMT14c6WeSKQQEUdTazqiEtfTEk+SgwjuGEq FmaRagIrALcak/FA6XBnlNgJ1op1uF68f9N8DQlFMf5Lg2V/XHFSd+dxvK4sOLW7bEkP RGyMlm7idURAZy4kLXqSJMwKdSUs8Pvjw42c4F3o6sWDVgr7R6EWxFOWpXNGMLR+X4qI 871FbXvdNcwcg/9Wsr6QYxmjjpELz0YhLIMlRDhO3bl+KizlZ1eAChWn2q1SbXmr+KvW 
0p6sqHH9DhxqHphJvurk7B3DlzGH0FBewCHxofIksuZgZ1LvKdA0uEEqGlD800vqy01i +GHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=XBAoBS5l4frpeZWkoJY/UpoQSbDyst8wBhQ9eKXZWXc=; b=30oE1QGW0ZnxqIl0DczUwgpn9YViPAJOd/tho6nsOcz9xP5paz5IJKOd5VsfWOqo+O 4XG3fbqJpD7IagfzK+0L3CkbVC3PxHBz9c9lWM+Dw8GEAdVzxHhs7yQg+QnBKxtGGOvG bQgsunSKibh9S9gCHmJrkHURTJ+LQFO6FXG6CcIYt7UDvn6bLcceLQPni42sx+verleu xDtO4oJEj8O6OtMWzRi366jbQGu0/CJcL3kFtnz5PMWDojlHgG4ct5vneOdzi3e8Xsfl +U6jbgSaxgAaL4y3NqpMjA+d0vyWIpnArLi57RodFF8s2fMx6ItIgzc8XDPbqKfxdHWF sq7g== X-Gm-Message-State: ANoB5pknZ1ePOuncKvJeqVLA6Iroizpv4oTvB2kyF4rBS8dJqC72Bv1t jZcGTgAFNlDtuxB8YYXSfaCH0w17GOH8Mg== X-Google-Smtp-Source: AA0mqf5xyrJDk8LkqxteQO3uKW9sySGhyFdSCZp1IMvsR62i1Fel4cZTd1/kwLAP2Vd6/CjzjTgnduujan+3NQ== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:d34d:0:b0:6fa:7b0a:3eed with SMTP id e74-20020a25d34d000000b006fa7b0a3eedmr35292093ybf.83.1670528393951; Thu, 08 Dec 2022 11:39:53 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:48 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-29-dmatlack@google.com> Subject: [RFC PATCH 28/37] KVM: MMU: Stub out tracepoints on non-x86 architectures From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114538_703855_46616CAA X-CRM114-Status: GOOD ( 12.38 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Create stub tracepoints outside of x86. The KVM MMU tracepoints can be moved to common code, but will be deferred to a future commit. No functional change intended. 
Signed-off-by: David Matlack --- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- include/kvm/mmutrace.h | 17 +++++++++++++++++ 2 files changed, 18 insertions(+), 1 deletion(-) create mode 100644 include/kvm/mmutrace.h diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 891877a6fb78..72746b645e99 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -2,12 +2,12 @@ #include "mmu.h" #include "mmu_internal.h" -#include "mmutrace.h" #include "tdp_iter.h" #include "tdp_mmu.h" #include "spte.h" #include +#include #include #include diff --git a/include/kvm/mmutrace.h b/include/kvm/mmutrace.h new file mode 100644 index 000000000000..e95a3cb47479 --- /dev/null +++ b/include/kvm/mmutrace.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#if !defined(_TRACE_KVM_MMUTRACE_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_KVM_MMUTRACE_H + +#ifdef CONFIG_X86 +#include "../../arch/x86/kvm/mmu/mmutrace.h" +#else +#define trace_mark_mmio_spte(...) do {} while (0) +#define trace_kvm_mmu_get_page(...) do {} while (0) +#define trace_kvm_mmu_prepare_zap_page(...) do {} while (0) +#define trace_kvm_mmu_set_spte(...) do {} while (0) +#define trace_kvm_mmu_spte_requested(...) do {} while (0) +#define trace_kvm_mmu_split_huge_page(...) do {} while (0) +#define trace_kvm_tdp_mmu_spte_changed(...) do {} while (0) +#endif + +#endif /* _TRACE_KVM_MMUTRACE_H */ From patchwork Thu Dec 8 19:38:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068818 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 82FA5C4332F for ; Thu, 8 Dec 2022 19:59:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=yqaQZ3AQ9q+RfY+ktxeqCzPw6N+4pYKvRU8lSmqp3Zc=; b=iVwWHUR/FycrjOfwezpv+SqYJ7 jv7tVI4vvgfj82aLLK/bFGBOhFDZ53jVdXCwJqgoNpX7C8PRNIRfmiVB6VTauS1HHwvGyiKy8VaE7 LetnuLkY3sB/n9USK3VbDgFngAe6nR2QHhqBj+Lfud2g5RqfHCdEo0Em8gov8ZsrUIvKmUVZsQg6n gba0PWv2zxnQmqqr8mR82oZj58m9T1kMNO7XLcxDHpCR3lkNeXYBjA7yYEK9dcAwaChv8LDbKlasY VnscnUOs0WnWteyLXZ2aQherrPESWqUgbfvFznnMcEPxb5rjfPXEAyZvTTR1hwhfit+574fHYpjbJ GL3ahVzQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3N3F-00AAJy-D9; Thu, 08 Dec 2022 19:59:25 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mz0-00A7bR-BT for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 19:55:02 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=IGXwNDL4QYyEmLxSnzAicnadmof6UqucM+DSeXyfLbg=; 
b=Zf8pRnS3rAfYmp1Ad+xSlQneoD 20dsMqBggMaw4GWZD/dlSnEQmOTPGHTmS5qd/ZJOtPvsdvg7/XRYcn8a0I52YUySdbX9sH0SdN8mV Od/yDJ5tOChxk1DwZyFSf+Chb8qGgjm88umbMBZB7FgGgIYe/MfjMTxv3Ppwl1VHkL5E3Cg4cS/Vy zjCGGkPeN7Q/y3OwWo7o8i6smF5ePDg2hPu5HaNKWhyC2WPGXXrSWd+jcrAx6qqwahJidOli13L1q mcZFXs9k1uMqamW0idACD0YHiblwxFHpR42jF84N/4SFTx5STYqVYq/lG5gPu5tiXhZjVYg78/zIy /Y037UBw==; Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by casper.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MkX-007G67-J3 for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:40:07 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-3fe3bedbb16so24712437b3.15 for ; Thu, 08 Dec 2022 11:39:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=IGXwNDL4QYyEmLxSnzAicnadmof6UqucM+DSeXyfLbg=; b=e2wlSIbIUNSSfaa5cE8nlhbofFQRnDINdERr5/zptolHxRg/aPkZUAkV/kMjTwqvnL VAny0I7JM9mmYwlM19HgFZ4z/Bt+AByLMvJNwn4T4fl0KwFC3zH/PllsEXxvlbxKUkc0 NdMHxpCwQrIPT0+bestZB+Anm+CNeXCBzxYs+7pB5OowuPNVic/k7alXblYXNWUiyNRn 3s7YLdEKTaInLDTRes3bQcWtyjH3N9OVLHAzY6Bg1V5A6pocb3CfBR9ECK9NIDh6py2b FQXi6i4x7GPqtsNYbTReUTLXYmSwdtvZ+/Ai3ANOoadydeN1zSK+lfHKCn5oiJ5BQfNn SJvw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=IGXwNDL4QYyEmLxSnzAicnadmof6UqucM+DSeXyfLbg=; b=2NYtQ+FWokT1BCvN1Qeio1m9pja6X3Z8HXfOxjA1DWRrhWhP+wZaTwF3wOvYIQSKW1 bUNRBrYDx//1RnjzXajKmzuH4KUV1wmohwdnnTxMOsmEm1Vv53y6Deqa9zrARoFufejn 9YdJYWmR2OGhd4dR1amg0D0qb+/0PecZEGOrI6v+gCJpRUEtLa9quIwH8cE/9yNocODc QbyA4NVmuR/eldiAeF0ba6I0Aj5ZtQHJY4GtdZ27+zuf5ZzSqEp+AyJi7YWGfs2zxh1f mR4QLTPcWG9VbOUIb82MReGTyyfdbeAMdzjw0qKcVVGrKp0cfzrqyIIn/o2bCfdPhUgB 9WZQ== X-Gm-Message-State: ANoB5pnsGxL4+xoc7+nlpCR0BVBtcL3RLe51qX8VWJX7fu0SXPzbKSjn VQgZnbBx+XH1acG1Tewb0hLNreH852QG9Q== X-Google-Smtp-Source: AA0mqf7NcSr4AIKU//qVuXa9coH1f9XMOTx1jz8qMU/k8ctWA7f2fWXafNUay3Wq3KgA3j2vNzHhSJ2naqJ0zg== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a81:6309:0:b0:3b0:773b:2b8f with SMTP id x9-20020a816309000000b003b0773b2b8fmr1555613ywb.350.1670528395580; Thu, 08 Dec 2022 11:39:55 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:49 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-30-dmatlack@google.com> Subject: [RFC PATCH 29/37] KVM: x86/mmu: Collapse kvm_flush_remote_tlbs_with_{range,address}() together From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_194005_661303_0C3A04A9 X-CRM114-Status: GOOD ( 10.69 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Collapse kvm_flush_remote_tlbs_with_range() and kvm_flush_remote_tlbs_with_address() into a single function. This eliminates some lines of code and a useless NULL check on the range struct. Opportunistically switch from ENOTSUPP to EOPNOTSUPP to make checkpatch happy. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 19 ++++++------------- 1 file changed, 6 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f01ee01f3509..b7bbabac9127 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -244,27 +244,20 @@ static inline bool kvm_available_flush_tlb_with_range(void) return kvm_x86_ops.tlb_remote_flush_with_range; } -static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm, - struct kvm_tlb_range *range) -{ - int ret = -ENOTSUPP; - - if (range && kvm_x86_ops.tlb_remote_flush_with_range) - ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, range); - - if (ret) - kvm_flush_remote_tlbs(kvm); -} - void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn, u64 pages) { struct kvm_tlb_range range; + int ret = -EOPNOTSUPP; range.start_gfn = start_gfn; range.pages = pages; - kvm_flush_remote_tlbs_with_range(kvm, &range); + if (kvm_x86_ops.tlb_remote_flush_with_range) + ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range); + + if (ret) + kvm_flush_remote_tlbs(kvm); } static void mark_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, u64 gfn, From patchwork Thu Dec 8 19:38:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068861 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BA199C4332F for ; Thu, 8 Dec 2022 20:28:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=tvQupSOwJNgWGVbYpxqYcRFjifVlCXemT2BHoC/PNFw=; b=fr3yo2Z0XyJlWzo3QaXdVrFW24 aC3Lgkw1FxvHyE5dPkhUnWiewD3YO6aOX1l8eAY1X3EuiW5SJwlmUPWIgtatbShNSnae8SHqBVX9l t16KAV6INHW3qwLooMIHYyNsBhEZmeY450yYWClhrgNhCSZG81tk56lppgOv5fB8K0AswuCHUwalC 
Date: Thu, 8 Dec 2022 11:38:50 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email
2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-31-dmatlack@google.com> Subject: [RFC PATCH 30/37] KVM: x86/mmu: Rename kvm_flush_remote_tlbs_with_address() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_194000_797773_8B52D671 X-CRM114-Status: GOOD ( 13.40 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Rename kvm_flush_remote_tlbs_with_address() to kvm_flush_remote_tlbs_range(). This name is shorter, which reduces the number of callsites that need to be broken up across multiple lines, and more readable since it conveys a range of memory is being flushed rather than a single address. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 32 ++++++++++++++------------------ arch/x86/kvm/mmu/mmu_internal.h | 3 +-- arch/x86/kvm/mmu/paging_tmpl.h | 4 ++-- arch/x86/kvm/mmu/tdp_mmu.c | 7 +++---- 4 files changed, 20 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b7bbabac9127..4a28adaa92b4 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -244,8 +244,7 @@ static inline bool kvm_available_flush_tlb_with_range(void) return kvm_x86_ops.tlb_remote_flush_with_range; } -void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, - u64 start_gfn, u64 pages) +void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages) { struct kvm_tlb_range range; int ret = -EOPNOTSUPP; @@ -804,7 +803,7 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp) kvm_mmu_gfn_disallow_lpage(slot, gfn); if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K)) - kvm_flush_remote_tlbs_with_address(kvm, gfn, 1); + kvm_flush_remote_tlbs_range(kvm, gfn, 1); } void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp) @@ -1178,7 +1177,7 @@ static void drop_large_spte(struct kvm *kvm, u64 *sptep, bool flush) drop_spte(kvm, sptep); if (flush) - kvm_flush_remote_tlbs_with_address(kvm, sp->gfn, + kvm_flush_remote_tlbs_range(kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); } @@ -1460,7 +1459,7 @@ static bool kvm_set_pte_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head, } if (need_flush && kvm_available_flush_tlb_with_range()) { - kvm_flush_remote_tlbs_with_address(kvm, gfn, 1); + kvm_flush_remote_tlbs_range(kvm, gfn, 1); return false; } @@ -1630,8 +1629,8 @@ static void __rmap_add(struct kvm *kvm, kvm->stat.max_mmu_rmap_size = rmap_count; if (rmap_count > RMAP_RECYCLE_THRESHOLD) { kvm_zap_all_rmap_sptes(kvm, rmap_head); - 
kvm_flush_remote_tlbs_with_address( - kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); + kvm_flush_remote_tlbs_range(kvm, sp->gfn, + KVM_PAGES_PER_HPAGE(sp->role.level)); } } @@ -2389,7 +2388,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep, return; drop_parent_pte(child, sptep); - kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1); + kvm_flush_remote_tlbs_range(vcpu->kvm, child->gfn, 1); } } @@ -2873,8 +2872,8 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot, } if (flush) - kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, - KVM_PAGES_PER_HPAGE(level)); + kvm_flush_remote_tlbs_range(vcpu->kvm, gfn, + KVM_PAGES_PER_HPAGE(level)); pgprintk("%s: setting spte %llx\n", __func__, *sptep); @@ -5809,9 +5808,8 @@ slot_handle_level_range(struct kvm *kvm, const struct kvm_memory_slot *memslot, if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) { if (flush && flush_on_yield) { - kvm_flush_remote_tlbs_with_address(kvm, - start_gfn, - iterator.gfn - start_gfn + 1); + kvm_flush_remote_tlbs_range(kvm, start_gfn, + iterator.gfn - start_gfn + 1); flush = false; } cond_resched_rwlock_write(&kvm->mmu_lock); @@ -6166,8 +6164,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) } if (flush) - kvm_flush_remote_tlbs_with_address(kvm, gfn_start, - gfn_end - gfn_start); + kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start); kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end); @@ -6506,7 +6503,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); if (kvm_available_flush_tlb_with_range()) - kvm_flush_remote_tlbs_with_address(kvm, sp->gfn, + kvm_flush_remote_tlbs_range(kvm, sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); else need_tlb_flush = 1; @@ -6557,8 +6554,7 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, * is observed by any other operation on the same memslot. 
*/ lockdep_assert_held(&kvm->slots_lock); - kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn, - memslot->npages); + kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages); } void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 4aa60d5d87b0..d35a5b408b98 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -65,8 +65,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, struct kvm_memory_slot *slot, u64 gfn, int min_level); -void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, - u64 start_gfn, u64 pages); +void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages); unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); extern int nx_huge_pages; diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index daf9c7731edc..bfee5e0d1ee1 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -929,8 +929,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa) mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL); if (is_shadow_present_pte(old_spte)) - kvm_flush_remote_tlbs_with_address(vcpu->kvm, - sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level)); + kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn, + KVM_PAGES_PER_HPAGE(sp->role.level)); if (!rmap_can_add(vcpu)) break; diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 72746b645e99..1f1f511cd1a0 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -676,8 +676,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm, if (ret) return ret; - kvm_flush_remote_tlbs_with_address(kvm, iter->gfn, - TDP_PAGES_PER_LEVEL(iter->level)); + kvm_flush_remote_tlbs_range(kvm, iter->gfn, TDP_PAGES_PER_LEVEL(iter->level)); /* * No other thread can overwrite the removed SPTE as they must either @@ -1067,8 +1066,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, return RET_PF_RETRY; else if (tdp_pte_is_present(iter->old_spte) && !tdp_pte_is_leaf(iter->old_spte, iter->level)) - kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn, - TDP_PAGES_PER_LEVEL(iter->level + 1)); + kvm_flush_remote_tlbs_range(vcpu->kvm, sp->gfn, + TDP_PAGES_PER_LEVEL(iter->level + 1)); /* * If the page fault was caused by a write but the page is write From patchwork Thu Dec 8 19:38:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 378CCC10F1B for ; Thu, 8 Dec 2022 19:40:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=FPKcAYZ7+qhOpmSreYpP1j2eF81OJGrwKJyEdV3Ex2A=; 
b=Sp5jn8hrjagEeXh4ZRwHpHwXj0 tBjGMWJ0nQ6emTtQ+B/3jyitPIqmJfApqd2kYoaTyCWakqLOECM9tFFapnXlnkV/DYb66Fw4bibBQ TWIQm14gDCjI3CVnUkUlMbt+mwVA/yj/yL1V7d6fH5m17NRQIbo+v6UL/PZzLEPAYEPpXTey378tB TZ94Se0p1/exY76/cEE3Ut1zg7D0vOuj8Ih9eKiBYMLVaYDhn/QIbPAg/taLKnyagIXhlEcCj+Loz 5E4T849GhpTLrRlHa+nCAMCH0PAGXTck1zKYf+J79/CQiyEVFr8BYBQdjz3kMdeEVL7Si5i8nV95F Y2AIjIRA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mks-009tZo-Mz; Thu, 08 Dec 2022 19:40:26 +0000 Received: from mail-yw1-x114a.google.com ([2607:f8b0:4864:20::114a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MkT-009t2W-O0 for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:40:04 +0000 Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-36810cfa61fso24785557b3.6 for ; Thu, 08 Dec 2022 11:40:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=4nLMUWhCbgygsyxQ4LP+FBPYpvg+KdNjD6AAx7BEWII=; b=rW1pkR31ur5/8vzbX6I0g9dHa5ieQLfxMNHl7/CiYy31x1gmEotFcgfb6bMg1XD9l4 cVrOC8CHqZzC+Gd2V/LwvTlYuSZNohfxzCXcTVs+DyirYljuXOWNIY4uY7CZr2a2YKF5 72eri2VmsAQGQbk2L3oKVAoFTeSQShiiqL6JV3UiQ4QZOs2UwURjWE1AFC0PhC62yXox X0dKf4S1b8a4rdREripCobtwsSCC9MJx4k5wGRfNkLSnee4Y5BiQ4yQ0Og5S8kz28vfk 0som8w77mpuwlpN/KA9+tJzi+pYWYZboOtEkERDzhNWA7MhWTNWQZafjFT2UdiruzKAQ QPVA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=4nLMUWhCbgygsyxQ4LP+FBPYpvg+KdNjD6AAx7BEWII=; b=fL50YpjdB5hu2x07E7xpkKkIFyev+tUwQz5EKvrwZx2so0vQrZpQFkb67uavuNQmwP 1fS7vS+8+GTqri/YvoSeNxXbRoyzTGkF0VaAvcIGi8j+d5Icmbpx/NxYrjr6KixCB5bg A+TugP1LRAQEaP6Z2FQ/eObjU8Vkpd42Vq8a4y535X2TW8ax6rWU/a/zDDLzAvF+B8HH hcbsNsqo3pNeJDSg0S2lWx+Tjk1Msk+XiFziehOhFnYEVY5RVqCkFZ1uFJF5hhnwMgGi rReKxAfpriJFJ7XCjxwe8nA5zPlRtlNTcQS+QA/goMLw/NW9vcjx/VLW4pVxMyS/3Po9 dO9Q== X-Gm-Message-State: ANoB5pnDcnH5uZ8JH4qtFFKhbv+cwjfKOWvFeOS1O/8GPTWIjaebG0KT QRd9EjurDQPnigym2cSQBpHcF3pduWHxPg== X-Google-Smtp-Source: AA0mqf5kUDO3Tr6gk/fzW/ARrxEhzmkAXRSHhWoL0H7mQ0gz5ri8bxcSR3iro0wxy+7xhENFLIaYZergQrtKPg== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:5:0:b0:6fc:8f88:813b with SMTP id 5-20020a250005000000b006fc8f88813bmr27601586yba.629.1670528399532; Thu, 08 Dec 2022 11:39:59 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:51 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-32-dmatlack@google.com> Subject: [RFC PATCH 31/37] KVM: x86/MMU: Use gfn_t in kvm_flush_remote_tlbs_range() From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114001_809513_4453A8FE X-CRM114-Status: GOOD ( 10.51 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Use gfn_t instead of u64 for the start_gfn parameter to kvm_flush_remote_tlbs_range(), since that is the standard type for GFNs throughout KVM. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu.c | 2 +- arch/x86/kvm/mmu/mmu_internal.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4a28adaa92b4..19963ed83484 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -244,7 +244,7 @@ static inline bool kvm_available_flush_tlb_with_range(void) return kvm_x86_ops.tlb_remote_flush_with_range; } -void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages) +void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages) { struct kvm_tlb_range range; int ret = -EOPNOTSUPP; diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index d35a5b408b98..e44fe7ad3cfb 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -65,7 +65,7 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, struct kvm_memory_slot *slot, u64 gfn, int min_level); -void kvm_flush_remote_tlbs_range(struct kvm *kvm, u64 start_gfn, u64 pages); +void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages); unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); extern int nx_huge_pages; From patchwork Thu Dec 8 19:38:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068784 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3F253C10F31 for ; Thu, 8 Dec 2022 19:40:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=lDde+jKIeQkWtgvKa3f6PGxta3wZ5ota7GHvUDzrmnc=; b=qPZE42ZUB/dhtEuXic2c2nFS9N rxNplOTZpZ+1XAm3zDzKhcG2Hpn+2XzNqYHentsuMV/pDzA9Wl5ONJDm9sQTJPnDSwcr0wTwmFDHq 
75EGHOkgU44WqaRUuqDdA7SqihEUc6hPmxq+SDbhaUGdNkoX87YvUjBjzfRSPmkzqUce4+n2mezOc ksPM5l3S+P0FqYOKteLWrSw9XIls5rXJoGdhVyDuZ6NpNycFiVUFH1ijz0SnH+a8eWwbi4L50kFYC fRVgngKfIhuQpChgrkVFzr1wcUEzeXoCw7QJEDCBO9Fh3DjKh3aHfXEZMJVAqOArZJxHhCAsntSUT QjDrzj3Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mku-009tb1-5L; Thu, 08 Dec 2022 19:40:28 +0000 Received: from mail-yw1-x114a.google.com ([2607:f8b0:4864:20::114a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MkU-009ssH-GB for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:40:05 +0000 Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-360b9418f64so24746847b3.7 for ; Thu, 08 Dec 2022 11:40:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=XV6plMeF8MwFNJz3HbB05g+syGBZZ4afzsZVPB1fNOM=; b=Or/+tQ35VJl1emV0K4s+0pZ6UBajubG9tq8FrCH/hA3kAwbcecTNaCZrvYWvTtsU9g 5JZnVfab3n41kCOJL7IiSST4+aKZY4JSEhz9//xqRqAsRUBEKxw9xl8ylX8pwq0BwIdV I/Xh88N0cUIemHf3DBiDhLCAmUuyGnaY5gTHAU4MfbG4ILGutMlede6Fq/e7cS9eVM31 nHh094v1U0MR6DFbL/HmUOOC1ZS2/Zm3j4YdatXU4M2l2wqxp12xCNyRytDsAZdWIiVJ 2+SDuw6pj88vz17PG6Z8Q+mIQbHf2BO5E5P+E5Z5Z6J+LaEFzERjkL2lFcMUz4dZZ5j1 PSpQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=XV6plMeF8MwFNJz3HbB05g+syGBZZ4afzsZVPB1fNOM=; b=IpJ9rQGRQYA/N2aUZReemwaQN9bcuoz10+jOGvzWCFAsBwaxTfCikwpdmJuATzfbg/ yOM5dW+yG1vlfZQjpOa3Eu90QoQoU8NbFo/Kp9ijlvbgiFVJU0/81nhtk37VdPQhWAba GYqethgAUH6BrUgOgccKdz+Q6pZqggteAccyTziH5gD9YCq9QVMHiPcx4CBKc8/UzpBY OoEiIWsjkcV2TUlR8XYyFO63t1dJOcPLnS05r+QrKizuCN+amGeli3cg/J0BfK/n5nAU N8vIyFRmoMoMv75JD8MU/nm3Wd/ocR2C6xjvpaaEPgFDQasleC728oBcBdeqeRpGNcmj cydA== X-Gm-Message-State: ANoB5pncFjC26wh+giYzabHr9UUEpkINNZQBxtogiBZYxOKWY/6yRXI2 Ck7dqzl/O7Twj7urY+b35JRY7Mm3wwl1xA== X-Google-Smtp-Source: AA0mqf6Wkh0OUy4PlZrJU9hOTo3IInMenz2jMKiaSBByUieGZ1GCpj8vRhZKagSo80e6dZu+mbM8OPIkZWhBWg== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:230b:0:b0:6f9:b1b0:67f5 with SMTP id j11-20020a25230b000000b006f9b1b067f5mr28607189ybj.471.1670528401185; Thu, 08 Dec 2022 11:40:01 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:52 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-33-dmatlack@google.com> Subject: [RFC PATCH 32/37] KVM: Allow range-based TLB invalidation from common code From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114002_566697_90A1C17F X-CRM114-Status: GOOD ( 10.94 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Make kvm_flush_remote_tlbs_range() visible in common code and create a default implementation that just invalidates the whole TLB. This will be used in future commits to clean up kvm_arch_flush_remote_tlbs_memslot() and to move the KVM/x86 TDP MMU to common code. No functional change intended. Signed-off-by: David Matlack --- arch/x86/kvm/mmu/mmu_internal.h | 1 - include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 9 +++++++++ 3 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index e44fe7ad3cfb..df815cb84bd2 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -65,7 +65,6 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, struct kvm_memory_slot *slot, u64 gfn, int min_level); -void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages); unsigned int pte_list_count(struct kvm_rmap_head *rmap_head); extern int nx_huge_pages; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index ec3a6de6d54e..d9a7f559d2c5 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1365,6 +1365,7 @@ int kvm_vcpu_yield_to(struct kvm_vcpu *target); void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible); void kvm_flush_remote_tlbs(struct kvm *kvm); +void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages); #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 0f1d48ed7d57..662ca280c0cf 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -379,6 +379,15 @@ void kvm_flush_remote_tlbs(struct kvm *kvm) EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs); #endif +/* + * Architectures that support range-based TLB invalidation can override this + * function. 
+ */ +void __weak kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages) +{ + kvm_flush_remote_tlbs(kvm); +} + static void kvm_flush_shadow_all(struct kvm *kvm) { kvm_arch_flush_shadow_all(kvm); From patchwork Thu Dec 8 19:38:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068860 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1B2EAC001B2 for ; Thu, 8 Dec 2022 20:28:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=LI0a3cpodiIfp/QzIYuK7FjjJDXh7FzZa6EDsIDNl6I=; b=Obk/r463b52V951DYc+OtdB/YO tjgeqGdfBRQYNOLpx8cgTvfcN0r/VyMf0RJUUBB/NStHUNdyP4I6A/AQTvRXomndyDqva1/akihFh 3Ozam5q/MRseF0YOZ+1N07kgcmafq73bEW45Rmpcr5hNgfXsdx8ToQyyUkXQR7Ga2scJEy3pH6sHy nEKbffF95BA7wJJhz/jRL3MhPPbZjiVDqFYbhw6v4I4gWPAzM8/PDVnXsdeajIOT5lORGatSgkPey 6nS1v/Y/dvBRhqbuo9B1EPZGMaKuNbt+nqxWn8aPdZwM7Ft0lYOSHziwTUi0zPMZTogop5Sr5RqET SFEPfNpQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NVH-00AhXK-7H; Thu, 08 Dec 2022 20:28:23 +0000 Received: from desiato.infradead.org ([2001:8b0:10b:1:d65d:64ff:fe57:4e05]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3NUm-00AgYd-He for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 20:27:52 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=desiato.20200630; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=vwi/xVZU3lQNa24CSu/xiis0ESj3Tvwd8kSXjnMpdRQ=; b=et4eFVqH2KvSA0Hr57tCjKparQ yup6KRS/Dyyg9r0g5CACa3Jz396Sy2Umljtyj3nW8LJfsjdnp16Ic8Ax6oWAq2pSbrGw9V9thp2+t n86gHUdaAwK5xUh/BFRelXxmC098uWRD+7c7J98ZW2jXVzsc0NHmR4/TYaXhC1TE5X7bxAwNEmlpu hGAi63LVjd08/A0AdYs7Vsnzk4Ph12LY0aIOMDPD7Lkb8qCD8aMF9dbvLWySrRuzfsOA/YogJN+H5 nSuO76totbDPVILvp6LlRqqEEi4otJFxGPfBYnu4aAzuN6FAx8kPdBmtrBzT2Ocd1Qzpu7UXsYLLp +5x6MbLA==; Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by desiato.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MkW-008Zxw-D2 for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:40:06 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-3bd1ff8fadfso24720827b3.18 for ; Thu, 08 Dec 2022 11:40:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=vwi/xVZU3lQNa24CSu/xiis0ESj3Tvwd8kSXjnMpdRQ=; b=Bzp7aIG7f+4q2dRplZSp+D5dxJXMlRGZAGs2Ki1iTWPjWVbRjAjdCCmUBNRhSmzRU7 ZSBl++QOMiASNUCpwZkSTptJv0PVzRpzPA1RdMro1kk5ldeBT1disLMcHKtlvhksgjSa 
Date: Thu, 8 Dec 2022 11:38:53 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-34-dmatlack@google.com>
Subject: [RFC PATCH 33/37] KVM: Move kvm_arch_flush_remote_tlbs_memslot() to common code
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org

Move kvm_arch_flush_remote_tlbs_memslot() to common code and drop "arch_" from the name. kvm_arch_flush_remote_tlbs_memslot() is just a range-based TLB invalidation where the range is defined by the memslot. Now that kvm_flush_remote_tlbs_range() can be called from common code, we can use it directly and drop a bunch of duplicate code from the arch directories.

Note that this adds a lockdep assertion that slots_lock is held when calling kvm_flush_remote_tlbs_memslot(), which was previously only asserted on x86.
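The arrangement relies on weak symbol overriding: common code provides a __weak fallback that does a full remote flush, and an architecture that can flush by range supplies a strong definition of the same function, which wins at link time. The sketch below condenses the relevant hunks from patches 29-32 of this series into one place so the linkage is visible side by side; the bodies are abbreviated, so treat it as an illustration rather than a literal quote of any single diff.

/*
 * Condensed illustration of how the common fallback, the x86 override, and
 * the new memslot helper fit together (abbreviated from the hunks in this
 * series).
 */

/* virt/kvm/kvm_main.c: generic fallback, used when the architecture does
 * not provide its own kvm_flush_remote_tlbs_range(). */
void __weak kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
	kvm_flush_remote_tlbs(kvm);
}

/* arch/x86/kvm/mmu/mmu.c: strong definition that overrides the weak one at
 * link time.  It tries the vendor range-flush hook and falls back to a full
 * remote flush if the hook is absent or fails. */
void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
	struct kvm_tlb_range range = {
		.start_gfn = start_gfn,
		.pages = pages,
	};
	int ret = -EOPNOTSUPP;

	if (kvm_x86_ops.tlb_remote_flush_with_range)
		ret = static_call(kvm_x86_tlb_remote_flush_with_range)(kvm, &range);

	if (ret)
		kvm_flush_remote_tlbs(kvm);
}

/* virt/kvm/kvm_main.c: the memslot helper added by this patch is then just a
 * range flush over the whole slot, with the locking rule made explicit. */
void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
				   const struct kvm_memory_slot *memslot)
{
	lockdep_assert_held(&kvm->slots_lock);
	kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages);
}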
Signed-off-by: David Matlack --- arch/arm64/kvm/arm.c | 6 ------ arch/mips/kvm/mips.c | 10 ++-------- arch/riscv/kvm/mmu.c | 6 ------ arch/x86/kvm/mmu/mmu.c | 16 +--------------- arch/x86/kvm/x86.c | 2 +- include/linux/kvm_host.h | 7 +++---- virt/kvm/kvm_main.c | 17 +++++++++++++++-- 7 files changed, 22 insertions(+), 42 deletions(-) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 0e0d4c4f79a2..4f1549c1d2d2 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1430,12 +1430,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot) } -void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, - const struct kvm_memory_slot *memslot) -{ - kvm_flush_remote_tlbs(kvm); -} - static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm, struct kvm_arm_device_addr *dev_addr) { diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index a25e0b73ee70..ecd8a051fd6b 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -209,7 +209,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm, /* Flush slot from GPA */ kvm_mips_flush_gpa_pt(kvm, slot->base_gfn, slot->base_gfn + slot->npages - 1); - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); + kvm_flush_remote_tlbs_memslot(kvm, slot); spin_unlock(&kvm->mmu_lock); } @@ -245,7 +245,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, needs_flush = kvm_mips_mkclean_gpa_pt(kvm, new->base_gfn, new->base_gfn + new->npages - 1); if (needs_flush) - kvm_arch_flush_remote_tlbs_memslot(kvm, new); + kvm_flush_remote_tlbs_memslot(kvm, new); spin_unlock(&kvm->mmu_lock); } } @@ -997,12 +997,6 @@ int kvm_arch_flush_remote_tlb(struct kvm *kvm) return 1; } -void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, - const struct kvm_memory_slot *memslot) -{ - kvm_flush_remote_tlbs(kvm); -} - long kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { long r; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index a8281a65cb3d..98bf3719a396 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -406,12 +406,6 @@ void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot) { } -void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, - const struct kvm_memory_slot *memslot) -{ - kvm_flush_remote_tlbs(kvm); -} - void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free) { } diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 19963ed83484..f2602ee1771f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -6524,7 +6524,7 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm, */ if (slot_handle_level(kvm, slot, kvm_mmu_zap_collapsible_spte, PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL - 1, true)) - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); + kvm_flush_remote_tlbs_memslot(kvm, slot); } void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, @@ -6543,20 +6543,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, } } -void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, - const struct kvm_memory_slot *memslot) -{ - /* - * All current use cases for flushing the TLBs for a specific memslot - * related to dirty logging, and many do the TLB flush out of mmu_lock. - * The interaction between the various operations on memslot must be - * serialized by slots_locks to ensure the TLB flush from one operation - * is observed by any other operation on the same memslot. 
- */ - lockdep_assert_held(&kvm->slots_lock); - kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages); -} - void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, const struct kvm_memory_slot *memslot) { diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 517c8ed33542..95ff95da55d5 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12574,7 +12574,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm, * See is_writable_pte() for more details (the case involving * access-tracked SPTEs is particularly relevant). */ - kvm_arch_flush_remote_tlbs_memslot(kvm, new); + kvm_flush_remote_tlbs_memslot(kvm, new); } } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index d9a7f559d2c5..46ed0ef4fb79 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1366,6 +1366,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible); void kvm_flush_remote_tlbs(struct kvm *kvm); void kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages); +void kvm_flush_remote_tlbs_memslot(struct kvm *kvm, + const struct kvm_memory_slot *memslot); #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min); @@ -1394,10 +1396,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, unsigned long mask); void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot); -#ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT -void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, - const struct kvm_memory_slot *memslot); -#else /* !CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */ +#ifndef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log); int kvm_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log, int *is_dirty, struct kvm_memory_slot **memslot); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 662ca280c0cf..39c2efd15504 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -388,6 +388,19 @@ void __weak kvm_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pa kvm_flush_remote_tlbs(kvm); } +void kvm_flush_remote_tlbs_memslot(struct kvm *kvm, const struct kvm_memory_slot *memslot) +{ + /* + * All current use cases for flushing the TLBs for a specific memslot + * related to dirty logging, and many do the TLB flush out of mmu_lock. + * The interaction between the various operations on memslot must be + * serialized by slots_locks to ensure the TLB flush from one operation + * is observed by any other operation on the same memslot. 
+ */ + lockdep_assert_held(&kvm->slots_lock); + kvm_flush_remote_tlbs_range(kvm, memslot->base_gfn, memslot->npages); +} + static void kvm_flush_shadow_all(struct kvm *kvm) { kvm_arch_flush_shadow_all(kvm); @@ -2197,7 +2210,7 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log) } if (flush) - kvm_arch_flush_remote_tlbs_memslot(kvm, memslot); + kvm_flush_remote_tlbs_memslot(kvm, memslot); if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n)) return -EFAULT; @@ -2314,7 +2327,7 @@ static int kvm_clear_dirty_log_protect(struct kvm *kvm, KVM_MMU_UNLOCK(kvm); if (flush) - kvm_arch_flush_remote_tlbs_memslot(kvm, memslot); + kvm_flush_remote_tlbs_memslot(kvm, memslot); return 0; } From patchwork Thu Dec 8 19:38:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068817 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 258CCC4332F for ; Thu, 8 Dec 2022 19:59:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=vWiXhz2SNUDlfjRXSnAi/t+QwHJwbFaNrpNpCdhUAUE=; b=adJSoklxl5+SXBMREZtW241HJh o2OoDx+7yo3DpQAO1cNboBzhoM8q6iW5YnSbv53va6yzVYgh92sneNmUctko1XHOwFp5OTqp7jqep 6NluXUvSr5nuVpiKfg3Kjt0B/5PDWovIUjcg23aeNUvH8x6kAKBpWPSpiTmS9ywqJXzlJLnJGGE1Y +LjweK6W1Nim7r6JumPKNiew91kdIVH/rmhXDs8MrvDVoqrHaEe6dj12lOnN5CZicA9pmZKYiIILj RS7CI+3Jzzfu42oH9aeinKJzpTzanLD5YlQ3PtZaLsXEdLidwRwIhpSODDU41fa6e7J4NjVg9nsdZ 6ZXlHWIg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3N34-00AADu-U3; Thu, 08 Dec 2022 19:59:15 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Myz-00A7bR-1X for linux-riscv@bombadil.infradead.org; Thu, 08 Dec 2022 19:55:01 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Type:Cc:To:From:Subject: Message-ID:References:Mime-Version:In-Reply-To:Date:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=7el2leYxw6DuE2TKpxO/GVa0AQVZhVPYSCMmz19ZHuI=; b=Q52O4ZwjbN6zb1XcjxHSGcDBKL ihOTLHD4dGK6NE8zXcNJK4oXRt8a1vT71SIW3d5w0Faz/ONfVkvGLAU8qGwODBNVBjbN/9dR/rAJC vmwgPJIh2nYcCgX2mPx8oumQgJ7xJONX6WLSM+Tg5YF55UdAjffFH/pxO3oTHFU+0GKu+LfPyOsO8 hWebFfS3W/InlgwtE+nP73Nnql95i1hj6WV9+ls9VXgLz7Qf54OzqZ7t1rXAyNQpo58WIpZgm8F2p lbC0sTUZaT9Tr4qVR4knR09E6dXSnVl5YnmOw6oIdq2ufgV8/LIUigfdANBaCdQnG3u6889h9ANg4 QpJlsz5A==; Received: from mail-pf1-x44a.google.com ([2607:f8b0:4864:20::44a]) by casper.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mkh-007G7V-0I for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:40:16 +0000 Received: by mail-pf1-x44a.google.com with SMTP id 
cp23-20020a056a00349700b005775c52dbceso1739528pfb.21 for ; Thu, 08 Dec 2022 11:40:06 -0800 (PST)
Date: Thu, 8 Dec 2022 11:38:54 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-35-dmatlack@google.com>
Subject: [RFC PATCH 34/37] KVM: MMU: Move the TDP iterator to common code
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org

Move arch/x86/kvm/mmu/tdp_iter.{c,h} into common code so that they can be used by other architectures in the future.

No functional change intended.
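The hunk for the relocated header (further down) shows only the tail of its declarations: tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, ...), tdp_iter_next() and tdp_iter_restart(). As a rough idea of what an architecture-neutral caller could look like once the iterator lives under include/kvm/, here is a hypothetical walker. The tdp_iter_start() parameters beyond (iter, root) and the iter.valid/old_spte/level/gfn fields are assumed from the contemporary x86 iterator and are not visible in this patch; the tdp_pte_is_present()/tdp_pte_is_leaf() helpers appear elsewhere in this series.

/*
 * Hypothetical walker over a TDP page table using the relocated iterator.
 * The min_level/start arguments of tdp_iter_start() and the iterator fields
 * used here are assumed from the x86 tdp_iter of this era; they illustrate
 * the intended usage and are not something this patch adds.
 */
#include <kvm/tdp_iter.h>

static u64 count_present_leaf_ptes(struct kvm_mmu_page *root, int min_level,
				   gfn_t start, gfn_t end)
{
	struct tdp_iter iter;
	u64 count = 0;

	for (tdp_iter_start(&iter, root, min_level, start);
	     iter.valid && iter.gfn < end;
	     tdp_iter_next(&iter)) {
		/* Count only leaf PTEs that currently map something. */
		if (tdp_pte_is_present(iter.old_spte) &&
		    tdp_pte_is_leaf(iter.old_spte, iter.level))
			count++;
	}

	return count;
}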
Signed-off-by: David Matlack --- MAINTAINERS | 2 +- arch/x86/kvm/Makefile | 2 +- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- arch/x86/kvm/mmu/tdp_pgtable.c | 2 +- {arch/x86/kvm/mmu => include/kvm}/tdp_iter.h | 9 +++------ virt/kvm/Makefile.kvm | 2 ++ {arch/x86 => virt}/kvm/mmu/tdp_iter.c | 4 +--- 7 files changed, 10 insertions(+), 13 deletions(-) rename {arch/x86/kvm/mmu => include/kvm}/tdp_iter.h (96%) rename {arch/x86 => virt}/kvm/mmu/tdp_iter.c (98%) diff --git a/MAINTAINERS b/MAINTAINERS index 7e586d7ba78c..3c33eca85480 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11206,7 +11206,7 @@ F: include/uapi/asm-generic/kvm* F: include/uapi/linux/kvm* F: tools/kvm/ F: tools/testing/selftests/kvm/ -F: virt/kvm/* +F: virt/kvm/ KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64) M: Marc Zyngier diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile index c294ae51caba..cb9ae306892a 100644 --- a/arch/x86/kvm/Makefile +++ b/arch/x86/kvm/Makefile @@ -18,7 +18,7 @@ ifdef CONFIG_HYPERV kvm-y += kvm_onhyperv.o endif -kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_iter.o mmu/tdp_mmu.o +kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_mmu.o kvm-$(CONFIG_KVM_XEN) += xen.o kvm-$(CONFIG_KVM_SMM) += smm.o diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 1f1f511cd1a0..c035c051161c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -2,10 +2,10 @@ #include "mmu.h" #include "mmu_internal.h" -#include "tdp_iter.h" #include "tdp_mmu.h" #include "spte.h" +#include #include #include diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c index cc7b10f703e1..fb40abdb9234 100644 --- a/arch/x86/kvm/mmu/tdp_pgtable.c +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -2,10 +2,10 @@ #include #include +#include #include "mmu.h" #include "spte.h" -#include "tdp_iter.h" /* Removed SPTEs must not be misconstrued as shadow present PTEs. 
*/ static_assert(!(REMOVED_TDP_PTE & SPTE_MMU_PRESENT_MASK)); diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/include/kvm/tdp_iter.h similarity index 96% rename from arch/x86/kvm/mmu/tdp_iter.h rename to include/kvm/tdp_iter.h index 6e3c38532d1d..0a154fcf2664 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/include/kvm/tdp_iter.h @@ -1,14 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 -#ifndef __KVM_X86_MMU_TDP_ITER_H -#define __KVM_X86_MMU_TDP_ITER_H +#ifndef __KVM_TDP_ITER_H +#define __KVM_TDP_ITER_H #include #include -#include "mmu.h" -#include "spte.h" - /* * TDP MMU SPTEs are RCU protected to allow paging structures (non-leaf SPTEs) * to be zapped while holding mmu_lock for read, and to allow TLB flushes to be @@ -117,4 +114,4 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, void tdp_iter_next(struct tdp_iter *iter); void tdp_iter_restart(struct tdp_iter *iter); -#endif /* __KVM_X86_MMU_TDP_ITER_H */ +#endif /* __KVM_TDP_ITER_H */ diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm index 2c27d5d0c367..58b595ac9b8d 100644 --- a/virt/kvm/Makefile.kvm +++ b/virt/kvm/Makefile.kvm @@ -12,3 +12,5 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o + +kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_iter.o diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/virt/kvm/mmu/tdp_iter.c similarity index 98% rename from arch/x86/kvm/mmu/tdp_iter.c rename to virt/kvm/mmu/tdp_iter.c index d5f024b7f6e4..674d93f91979 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/virt/kvm/mmu/tdp_iter.c @@ -1,8 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 -#include "mmu_internal.h" -#include "tdp_iter.h" -#include "spte.h" +#include /* * Recalculates the pointer to the SPTE for the current GFN and level and From patchwork Thu Dec 8 19:38:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Matlack X-Patchwork-Id: 13068808 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 44F2BC4332F for ; Thu, 8 Dec 2022 19:51:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=YXeQizjCxC4BGnKii0J3aM43OM4MsC0ccquVZGbz1PM=; b=xS1y7SbYwlCLMWCy6LBB1N7+A2 HzDHBYhAyUj4IIVQiuF/6eWSx2GREgwLkXiQIdXbGH3zTJ/F1CHLgmS1eutakpQikwOepRIGUr0K/ egs3/ad2xFGdgWIbMkOcAYmcrGZ0VBH04qEQHsA/GwUA4/GOuG8ogtZSBRZouPaSBDoOZ4cRbsdfI r1cxXoJk8phSkG3rN285Z+J0JOMlUuDcqL7fh97rtyXVjq4yX2Jtso1BsZxzrhrkJhGeiAT6qcocp zi3CRDJa+O2q1fZ9NaDB5mMNSgvC3qlYellm8Bj126CMsP2pCkd4SqEoXiENYrwWMqSpYl95Elq/H KTdSIJrQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3MvK-00A4aT-Lg; Thu, 08 Dec 2022 19:51:14 +0000 Received: from mail-yb1-xb49.google.com 
([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1p3Mrr-00A2FC-IN for linux-riscv@lists.infradead.org; Thu, 08 Dec 2022 19:47:40 +0000 Received: by mail-yb1-xb49.google.com with SMTP id f11-20020a5b01cb000000b0070374b66537so2562149ybp.14 for ; Thu, 08 Dec 2022 11:47:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ZDFxAZ/PI9aKYqwedjkDPDUn0PCD9WUxHYy13eGAWTI=; b=UW5TQakihQ1Rj8oX0fHh6ol3yHICPRAUjId2fBE9HBOx4Z4cfwRHCzOFro0DVog5Py z1d9aYSN4fRQh+yVjk9c6erIMnvGY+3XCA5eND2mJd3lmcs6ZMcic9/HWoIHmoYD+B1m XjqLDiAn+QBJHnf0vzOfnGkxZlkIa53ilOScS8egg7kyZLK+h2NKejyWCaVjbbt+sR5R IhfagMv7s1KO30A1KDYGC5ziILrcoD8HH6N1IH5+EVj3HMWjpbtHUPdv5fpd7dQKJ+67 MzWuFb+b2SqNDBq56KYHPAYiPt7QBIFy/9sPV8aGa7CXG5RzY4NELQedDkJBW9qr9WbA evPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ZDFxAZ/PI9aKYqwedjkDPDUn0PCD9WUxHYy13eGAWTI=; b=GvVHCvIA0fuuJIPmMGlXwjlQduZz1KxIwKRy0VAVDtmZ2FyS/EkNdpy0o5mP5xg+bA EYIiiSywDcKTJtqkuTG0mTFEnKwpolY4J6VTrfkiwv2j3SjTiYi4Vmq9V2OgZxNcCCWz 9eO5DTxmpObr0r2shcF9axLQNAQDgECkcBZNe5Lmh98OUGNVlQYx/qccysFam4adZare foIFYv1C7+1+Z8QeeQSrP68S2VyHvqSJxIJfNF/PGgnQ/qWLo9Blywx43jfBMIH6Uyu4 GaMwTxMya/tj2zVIo3hMuYY8CozVJY6tEXZW26WOu8Cht83seSbVnWI75/lIHIsXY0bv 4Ksg== X-Gm-Message-State: ANoB5pnIO4CokW/nhZzKpwEkFESB/EYz7/LuXbErz5fwZ07EwlqRdbvk JDdw1hejEJHm6b3AiYf0g8jCs4DbgZ6S9w== X-Google-Smtp-Source: AA0mqf4h5P1c4eLdLR7BQ8qrkqY9isQdWdfvRcIECxB1d810mucDHdG6W99tV9EeqSL89xl1lTy+w8pwh40XXQ== X-Received: from dmatlack-n2d-128.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:1309]) (user=dmatlack job=sendgmr) by 2002:a25:6a89:0:b0:6dd:989f:2af4 with SMTP id f131-20020a256a89000000b006dd989f2af4mr92028743ybc.38.1670528406445; Thu, 08 Dec 2022 11:40:06 -0800 (PST) Date: Thu, 8 Dec 2022 11:38:55 -0800 In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com> Mime-Version: 1.0 References: <20221208193857.4090582-1-dmatlack@google.com> X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog Message-ID: <20221208193857.4090582-36-dmatlack@google.com> Subject: [RFC PATCH 35/37] KVM: x86/mmu: Move tdp_mmu_max_gfn_exclusive() to tdp_pgtable.c From: David Matlack To: Paolo Bonzini Cc: Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Oliver Upton , Huacai Chen , Aleksandar Markovic , Anup Patel , Atish Patra , Paul Walmsley , Palmer Dabbelt , Albert Ou , Sean Christopherson , Andrew Morton , David Matlack , Anshuman Khandual , Nadav Amit , "Matthew Wilcox (Oracle)" , Vlastimil Babka , "Liam R. 
Howlett" , Suren Baghdasaryan , Peter Xu , xu xin , Arnd Bergmann , Yu Zhao , Colin Cross , Hugh Dickins , Ben Gardon , Mingwei Zhang , Krish Sadhukhan , Ricardo Koller , Jing Zhang , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221208_114739_648684_FAE587DA X-CRM114-Status: GOOD ( 11.81 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Move tdp_mmu_max_gfn_exclusive() to tdp_pgtable.c since it currently relies on the x86-specific kvm_mmu_max_gfn() function. This can be improved in the future by implementing a common API for calculating the max GFN. No functional change intended. Signed-off-by: David Matlack --- arch/x86/include/asm/kvm/tdp_pgtable.h | 3 +++ arch/x86/kvm/mmu/tdp_mmu.c | 11 ----------- arch/x86/kvm/mmu/tdp_pgtable.c | 11 +++++++++++ 3 files changed, 14 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/kvm/tdp_pgtable.h b/arch/x86/include/asm/kvm/tdp_pgtable.h index ff2691ced38b..c1047fcf1a91 100644 --- a/arch/x86/include/asm/kvm/tdp_pgtable.h +++ b/arch/x86/include/asm/kvm/tdp_pgtable.h @@ -67,4 +67,7 @@ u64 tdp_mmu_make_changed_pte_notifier_pte(struct tdp_iter *iter, struct kvm_gfn_range *range); u64 tdp_mmu_make_huge_page_split_pte(struct kvm *kvm, u64 huge_spte, struct kvm_mmu_page *sp, int index); + +gfn_t tdp_mmu_max_gfn_exclusive(void); + #endif /* !__ASM_KVM_TDP_PGTABLE_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index c035c051161c..c950d688afea 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -828,17 +828,6 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm, return iter->yielded; } -static inline gfn_t tdp_mmu_max_gfn_exclusive(void) -{ - /* - * Bound TDP MMU walks at host.MAXPHYADDR. KVM disallows memslots with - * a gpa range that would exceed the max gfn, and KVM does not create - * MMIO SPTEs for "impossible" gfns, instead sending such accesses down - * the slow emulation path every time. - */ - return kvm_mmu_max_gfn() + 1; -} - static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root, bool shared, int zap_level) { diff --git a/arch/x86/kvm/mmu/tdp_pgtable.c b/arch/x86/kvm/mmu/tdp_pgtable.c index fb40abdb9234..4e747956d6ee 100644 --- a/arch/x86/kvm/mmu/tdp_pgtable.c +++ b/arch/x86/kvm/mmu/tdp_pgtable.c @@ -170,3 +170,14 @@ int tdp_mmu_max_mapping_level(struct kvm *kvm, { return kvm_mmu_max_mapping_level(kvm, slot, iter->gfn, PG_LEVEL_NUM); } + +gfn_t tdp_mmu_max_gfn_exclusive(void) +{ + /* + * Bound TDP MMU walks at host.MAXPHYADDR. KVM disallows memslots with + * a gpa range that would exceed the max gfn, and KVM does not create + * MMIO SPTEs for "impossible" gfns, instead sending such accesses down + * the slow emulation path every time. 
From patchwork Thu Dec 8 19:38:56 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068814
Date: Thu, 8 Dec 2022 11:38:56 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-37-dmatlack@google.com>
Subject: [RFC PATCH 36/37] KVM: x86/mmu: Move is_tdp_mmu_page() to mmu_internal.h
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Oliver Upton, Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Morton, David Matlack, Anshuman Khandual, Nadav Amit,
 "Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett",
 Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao, Colin Cross,
 Hugh Dickins, Ben Gardon, Mingwei Zhang, Krish Sadhukhan, Ricardo Koller,
 Jing Zhang, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org

Move is_tdp_mmu_page(), which is x86-specific, into mmu_internal.h. This
prepares for moving tdp_mmu.h into common code.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu_internal.h | 9 +++++++++
 arch/x86/kvm/mmu/tdp_mmu.h      | 9 ---------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index df815cb84bd2..51aef9624521 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -147,4 +147,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+#ifdef CONFIG_X86_64
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp)
+{
+	return !sp->arch.shadow_mmu_page;
+}
+#else
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
+#endif
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 897608be7f75..607c1417abd1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -71,13 +71,4 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);
 
-#ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp)
-{
-	return !sp->arch.shadow_mmu_page;
-}
-#else
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
-#endif
-
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
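The reason is_tdp_mmu_page() lands in mmu_internal.h rather than the soon-to-be-common tdp_mmu.h is that it dereferences the x86-only sp->arch.shadow_mmu_page field, so only x86 code can use it. A sketch of the intended usage pattern follows; both *_teardown_example() helpers are invented names standing in for the real call sites in arch/x86/kvm/mmu/mmu.c:

/*
 * Illustrative sketch only: is_tdp_mmu_page() is the helper moved by this
 * patch; the two teardown helpers are hypothetical placeholders.
 */
#include "mmu_internal.h"

static void tdp_mmu_teardown_example(struct kvm *kvm, struct kvm_mmu_page *sp);
static void shadow_mmu_teardown_example(struct kvm *kvm, struct kvm_mmu_page *sp);

static void free_root_example(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	/* The arch-private flag decides which MMU owns this page. */
	if (is_tdp_mmu_page(sp))
		tdp_mmu_teardown_example(kvm, sp);
	else
		shadow_mmu_teardown_example(kvm, sp);
}

On !CONFIG_X86_64 builds the stub keeps such callers compiling while always taking the shadow MMU path.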
From patchwork Thu Dec 8 19:38:57 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13068785
Date: Thu, 8 Dec 2022 11:38:57 -0800
In-Reply-To: <20221208193857.4090582-1-dmatlack@google.com>
Mime-Version: 1.0
References: <20221208193857.4090582-1-dmatlack@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
Message-ID: <20221208193857.4090582-38-dmatlack@google.com>
Subject: [RFC PATCH 37/37] KVM: MMU: Move the TDP MMU to common code
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Oliver Upton, Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Morton, David Matlack, Anshuman Khandual, Nadav Amit,
 "Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett",
 Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao, Colin Cross,
 Hugh Dickins, Ben Gardon, Mingwei Zhang, Krish Sadhukhan, Ricardo Koller,
 Jing Zhang, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org

Move tdp_mmu.[ch] from arch/x86 into the common code directories. This
will allow other architectures to use the TDP MMU in the future.

No functional change intended.
Signed-off-by: David Matlack
---
 arch/x86/kvm/Makefile                       | 2 +-
 arch/x86/kvm/mmu/mmu.c                      | 3 ++-
 {arch/x86/kvm/mmu => include/kvm}/tdp_mmu.h | 6 +++++-
 virt/kvm/Makefile.kvm                       | 1 +
 {arch/x86 => virt}/kvm/mmu/tdp_mmu.c        | 8 +++-----
 5 files changed, 12 insertions(+), 8 deletions(-)
 rename {arch/x86/kvm/mmu => include/kvm}/tdp_mmu.h (94%)
 rename {arch/x86 => virt}/kvm/mmu/tdp_mmu.c (99%)

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index cb9ae306892a..06b61fdea539 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -18,7 +18,7 @@ ifdef CONFIG_HYPERV
 kvm-y += kvm_onhyperv.o
 endif
 
-kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o mmu/tdp_mmu.o
+kvm-$(CONFIG_X86_64) += mmu/tdp_pgtable.o
 
 kvm-$(CONFIG_KVM_XEN) += xen.o
 kvm-$(CONFIG_KVM_SMM) += smm.o
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f2602ee1771f..8653776bca6f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -19,7 +19,6 @@
 #include "ioapic.h"
 #include "mmu.h"
 #include "mmu_internal.h"
-#include "tdp_mmu.h"
 #include "x86.h"
 #include "kvm_cache_regs.h"
 #include "smm.h"
@@ -27,6 +26,8 @@
 #include "cpuid.h"
 #include "spte.h"
 
+#include
+
 #include
 #include
 #include
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/include/kvm/tdp_mmu.h
similarity index 94%
rename from arch/x86/kvm/mmu/tdp_mmu.h
rename to include/kvm/tdp_mmu.h
index 607c1417abd1..538c848149c9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/include/kvm/tdp_mmu.h
@@ -5,7 +5,11 @@
 
 #include
 
-#include "spte.h"
+#include
+#include
+#include
+#include
+#include
 
 int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 58b595ac9b8d..942681308140 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -14,3 +14,4 @@ kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
 
 kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_iter.o
+kvm-$(CONFIG_HAVE_TDP_MMU) += $(KVM)/mmu/tdp_mmu.o
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/virt/kvm/mmu/tdp_mmu.c
similarity index 99%
rename from arch/x86/kvm/mmu/tdp_mmu.c
rename to virt/kvm/mmu/tdp_mmu.c
index c950d688afea..5ca8892ebef5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/virt/kvm/mmu/tdp_mmu.c
@@ -1,11 +1,9 @@
 // SPDX-License-Identifier: GPL-2.0
 
-#include "mmu.h"
-#include "mmu_internal.h"
-#include "tdp_mmu.h"
-#include "spte.h"
-
+#include
+#include
 #include
+#include
 #include
 #include
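With tdp_mmu.h living under include/kvm/ and tdp_mmu.c built out of virt/kvm/ behind CONFIG_HAVE_TDP_MMU, a second architecture would, in principle, select that config symbol and call the common entry points directly. The sketch below is hypothetical: only the header location and the kvm_mmu_init_tdp_mmu()/kvm_mmu_uninit_tdp_mmu() declarations come from this patch, and whether another architecture could wire in this simply depends on the rest of the series.

/*
 * Hypothetical non-x86 consumer of the common TDP MMU.  The example_arch_*
 * names are invented, used only to show that no arch/x86 headers are needed.
 */
#include <linux/kvm_host.h>
#include <kvm/tdp_mmu.h>

static int example_arch_mmu_init(struct kvm *kvm)
{
	/* Common TDP MMU setup, reachable from any architecture's MMU code. */
	return kvm_mmu_init_tdp_mmu(kvm);
}

static void example_arch_mmu_destroy(struct kvm *kvm)
{
	kvm_mmu_uninit_tdp_mmu(kvm);
}

On the build side, the architecture would also need to enable CONFIG_HAVE_TDP_MMU so that virt/kvm/Makefile.kvm picks up tdp_iter.o and tdp_mmu.o, mirroring the Makefile.kvm hunk above.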