From patchwork Tue Jul 12 02:07:22 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12914444
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Date: Tue, 12 Jul 2022 02:07:22 +0000
Message-Id: <20220712020724.1262121-2-seanjc@google.com>
In-Reply-To: <20220712020724.1262121-1-seanjc@google.com>
References: <20220712020724.1262121-1-seanjc@google.com>
Subject: [PATCH v3 1/3] KVM: x86/mmu: Add optimized helper to retrieve an SPTE's index
X-Mailing-List: kvm@vger.kernel.org

Add spte_index() to dedup all the code that calculates a SPTE's index
into its parent's page table and/or spt array.

Opportunistically tweak the calculation to avoid pointer arithmetic,
which is subtle (the subtraction is done in 8-byte chunks) and less
performant (it requires the compiler to generate the subtraction).

Suggested-by: David Matlack
Reviewed-by: David Matlack
Signed-off-by: Sean Christopherson
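For reference, the index calculation can be sanity-checked outside the
kernel. The sketch below is illustrative only (not part of the patch); it
assumes what KVM's shadow pages provide, namely a page-aligned table of
512 8-byte entries (SPTE_ENT_PER_PAGE), and checks that masking the
SPTE's address yields the same index as pointer subtraction.

/*
 * Minimal userspace sketch (not kernel code) contrasting the old
 * pointer-subtraction index calculation with the spte_index() masking
 * approach.  Assumes a page-aligned table of 512 8-byte entries,
 * mirroring KVM's sp->spt pages.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SPTE_ENT_PER_PAGE 512

int main(void)
{
	/* Page-aligned backing page, standing in for sp->spt. */
	uint64_t *spt = aligned_alloc(4096, 4096);

	if (!spt)
		return 1;

	for (int i = 0; i < SPTE_ENT_PER_PAGE; i++) {
		uint64_t *sptep = &spt[i];

		/* Old style: pointer subtraction, done in 8-byte chunks. */
		long idx_sub = sptep - spt;

		/* spte_index() style: mask the low bits of the address. */
		long idx_mask = ((uintptr_t)sptep / sizeof(*sptep)) &
				(SPTE_ENT_PER_PAGE - 1);

		assert(idx_sub == idx_mask);
	}
	printf("all %d indices match\n", SPTE_ENT_PER_PAGE);
	free(spt);
	return 0;
}

The masking trick relies only on the table being page aligned, which is
why the addition of spte_index() does not change any behavior.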
---
 arch/x86/kvm/mmu/mmu.c         | 22 ++++++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h |  4 ++--
 arch/x86/kvm/mmu/spte.h        |  6 ++++++
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f7fa4c31b7c5..864a32f96082 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1038,7 +1038,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 	struct kvm_rmap_head *rmap_head;
 
 	sp = sptep_to_sp(spte);
-	gfn = kvm_mmu_page_get_gfn(sp, spte - sp->spt);
+	gfn = kvm_mmu_page_get_gfn(sp, spte_index(spte));
 
 	/*
 	 * Unlike rmap_add, rmap_remove does not run in the context of a vCPU
@@ -1589,7 +1589,7 @@ static void __rmap_add(struct kvm *kvm,
 	int rmap_count;
 
 	sp = sptep_to_sp(spte);
-	kvm_mmu_page_set_translation(sp, spte - sp->spt, gfn, access);
+	kvm_mmu_page_set_translation(sp, spte_index(spte), gfn, access);
 	kvm_update_page_stats(kvm, sp->role.level, 1);
 
 	rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
@@ -1716,11 +1716,9 @@ static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 static void mark_unsync(u64 *spte)
 {
 	struct kvm_mmu_page *sp;
-	unsigned int index;
 
 	sp = sptep_to_sp(spte);
-	index = spte - sp->spt;
-	if (__test_and_set_bit(index, sp->unsync_child_bitmap))
+	if (__test_and_set_bit(spte_index(spte), sp->unsync_child_bitmap))
 		return;
 	if (sp->unsync_children++)
 		return;
@@ -2203,7 +2201,7 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsig
 	 */
 	if (role.has_4_byte_gpte) {
 		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
-		role.quadrant = (sptep - parent_sp->spt) % 2;
+		role.quadrant = spte_index(sptep) & 1;
 	}
 
 	return role;
@@ -2828,7 +2826,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 		rmap_add(vcpu, slot, sptep, gfn, pte_access);
 	} else {
 		/* Already rmapped but the pte_access bits may have changed. */
-		kvm_mmu_page_set_access(sp, sptep - sp->spt, pte_access);
+		kvm_mmu_page_set_access(sp, spte_index(sptep), pte_access);
 	}
 
 	return ret;
@@ -2844,7 +2842,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	int i, ret;
 	gfn_t gfn;
 
-	gfn = kvm_mmu_page_get_gfn(sp, start - sp->spt);
+	gfn = kvm_mmu_page_get_gfn(sp, spte_index(start));
 	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, access & ACC_WRITE_MASK);
 	if (!slot)
 		return -1;
@@ -2870,7 +2868,7 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
 
 	WARN_ON(!sp->role.direct);
 
-	i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
+	i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1);
 	spte = sp->spt + i;
 
 	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
@@ -6156,8 +6154,8 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
 	unsigned int access;
 	gfn_t gfn;
 
-	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
-	access = kvm_mmu_page_get_access(huge_sp, huge_sptep - huge_sp->spt);
+	gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
+	access = kvm_mmu_page_get_access(huge_sp, spte_index(huge_sptep));
 
 	/*
 	 * Note, huge page splitting always uses direct shadow pages, regardless
@@ -6231,7 +6229,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
 	u64 spte;
 
 	/* Grab information for the tracepoint before dropping the MMU lock. */
-	gfn = kvm_mmu_page_get_gfn(huge_sp, huge_sptep - huge_sp->spt);
+	gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
 	level = huge_sp->role.level;
 	spte = *huge_sptep;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 2448fa8d8438..d06dee7d38a8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -595,7 +595,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 	if (sp->role.direct)
 		return __direct_pte_prefetch(vcpu, sp, sptep);
 
-	i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
+	i = spte_index(sptep) & ~(PTE_PREFETCH_NUM - 1);
 	spte = sp->spt + i;
 
 	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
@@ -933,7 +933,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 			break;
 
 		pte_gpa = FNAME(get_level1_sp_gpa)(sp);
-		pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
+		pte_gpa += spte_index(sptep) * sizeof(pt_element_t);
 
 		mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
 		if (is_shadow_present_pte(old_spte))
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index b5c855f5514f..ba3dccb202bc 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -190,6 +190,12 @@ static inline bool is_removed_spte(u64 spte)
 	return spte == REMOVED_SPTE;
 }
 
+/* Get an SPTE's index into its parent's page table (and the spt array). */
+static inline int spte_index(u64 *sptep)
+{
+	return ((unsigned long)sptep / sizeof(*sptep)) & (SPTE_ENT_PER_PAGE - 1);
+}
+
 /*
  * In some cases, we need to preserve the GFN of a non-present or reserved
  * SPTE when we usurp the upper five bits of the physical address space to
From patchwork Tue Jul 12 02:07:23 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12914446
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Date: Tue, 12 Jul 2022 02:07:23 +0000
Message-Id: <20220712020724.1262121-3-seanjc@google.com>
In-Reply-To: <20220712020724.1262121-1-seanjc@google.com>
References: <20220712020724.1262121-1-seanjc@google.com>
Subject: [PATCH v3 2/3] KVM: x86/mmu: Expand quadrant comment for PG_LEVEL_4K shadow pages
X-Mailing-List: kvm@vger.kernel.org

Tweak the comment above the computation of the quadrant for PG_LEVEL_4K
shadow pages to explicitly call out how and why KVM uses role.quadrant
to consume gPTE bits.

Opportunistically wrap an unnecessarily long line.

No functional change intended.

Link: https://lore.kernel.org/all/YqvWvBv27fYzOFdE@google.com
Reviewed-by: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 864a32f96082..7a65e57b9b41 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2168,7 +2168,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
 }
 
-static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsigned int access)
+static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
+						  unsigned int access)
 {
 	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
 	union kvm_mmu_page_role role;
@@ -2195,9 +2196,15 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsig
 	 * uses 2 PAE page tables, each mapping a 2MiB region. For these,
 	 * @role.quadrant encodes which half of the region they map.
 	 *
-	 * Note, the 4 PAE page directories are pre-allocated and the quadrant
-	 * assigned in mmu_alloc_root(). So only page tables need to be handled
-	 * here.
+	 * Concretely, a 4-byte PDE consumes bits 31:22, while an 8-byte PDE
+	 * consumes bits 29:21. To consume bits 31:30, KVM's uses 4 shadow
+	 * PDPTEs; those 4 PAE page directories are pre-allocated and their
+	 * quadrant is assigned in mmu_alloc_root(). A 4-byte PTE consumes
+	 * bits 21:12, while an 8-byte PTE consumes bits 20:12. To consume
+	 * bit 21 in the PTE (the child here), KVM propagates that bit to the
+	 * quadrant, i.e. sets quadrant to '0' or '1'. The parent 8-byte PDE
+	 * covers bit 21 (see above), thus the quadrant is calculated from the
+	 * _least_ significant bit of the PDE index.
 	 */
 	if (role.has_4_byte_gpte) {
 		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
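To make the expanded comment concrete, here is a small userspace
illustration (not part of the patch) of how gva bit 21 becomes the
quadrant when shadowing 4-byte gPTEs: a 4-byte-PTE guest page table
covers gva bits 21:12 (1024 entries), a shadow page only covers bits
20:12 (512 entries), and the "lost" bit 21 is exactly the least
significant bit of the 8-byte PDE index (bits 29:21).

/*
 * Userspace illustration (not kernel code) of the quadrant logic for
 * 4-byte gPTEs.  The gva values are arbitrary examples.
 */
#include <stdio.h>

int main(void)
{
	/* Two gvas in the same 4MiB region, one in each half. */
	unsigned long gvas[] = { 0x00123000UL, 0x00323000UL };

	for (int i = 0; i < 2; i++) {
		unsigned long gva = gvas[i];

		unsigned long gpte_idx = (gva >> 12) & 0x3ff; /* 4-byte gPTE: bits 21:12 */
		unsigned long spte_idx = (gva >> 12) & 0x1ff; /* 8-byte SPTE: bits 20:12 */
		unsigned long spde_idx = (gva >> 21) & 0x1ff; /* 8-byte PDE:  bits 29:21 */

		/* The "lost" bit 21 becomes the child's quadrant... */
		unsigned long quadrant = (gva >> 21) & 1;

		/* ...which is the least significant bit of the parent PDE index. */
		printf("gva=%#010lx gpte_idx=%lu spte_idx=%lu pde_lsb=%lu quadrant=%lu\n",
		       gva, gpte_idx, spte_idx, spde_idx & 1, quadrant);
	}
	return 0;
}

Running it shows the two gvas sharing the same shadow PTE index (291)
but landing in different quadrants (0 and 1), which is the situation the
comment describes.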
From patchwork Tue Jul 12 02:07:24 2022
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12914445
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Date: Tue, 12 Jul 2022 02:07:24 +0000
Message-Id: <20220712020724.1262121-4-seanjc@google.com>
In-Reply-To: <20220712020724.1262121-1-seanjc@google.com>
References: <20220712020724.1262121-1-seanjc@google.com>
Subject: [PATCH v3 3/3] KVM: x86/mmu: Fix typo and tweak comment for split_desc_cache capacity
X-Mailing-List: kvm@vger.kernel.org

Remove a spurious closing parenthesis and tweak the comment about the
cache capacity for PTE descriptors (rmaps) used during eager page
splitting: tone down the assertion slightly, and call out that topup
requires dropping mmu_lock, which is the real motivation for avoiding
topup (as opposed to memory usage).

Cc: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7a65e57b9b41..52664c3caaab 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6125,14 +6125,15 @@ static int topup_split_caches(struct kvm *kvm)
 {
 	/*
 	 * Allocating rmap list entries when splitting huge pages for nested
-	 * MMUs is uncommon as KVM needs to allocate if and only if there is
+	 * MMUs is uncommon as KVM needs to use a list if and only if there is
 	 * more than one rmap entry for a gfn, i.e. requires an L1 gfn to be
-	 * aliased by multiple L2 gfns. Aliasing gfns when using TDP is very
-	 * atypical for VMMs; a few gfns are often aliased during boot, e.g.
-	 * when remapping firmware, but aliasing rarely occurs post-boot). If
-	 * there is only one rmap entry, rmap->val points directly at that one
-	 * entry and doesn't need to allocate a list. Buffer the cache by the
-	 * default capacity so that KVM doesn't have to topup the cache if it
+	 * aliased by multiple L2 gfns and/or from multiple nested roots with
+	 * different roles. Aliasing gfns when using TDP is atypical for VMMs;
+	 * a few gfns are often aliased during boot, e.g. when remapping BIOS,
+	 * but aliasing rarely occurs post-boot or for many gfns. If there is
+	 * only one rmap entry, rmap->val points directly at that one entry and
+	 * doesn't need to allocate a list. Buffer the cache by the default
+	 * capacity so that KVM doesn't have to drop mmu_lock to topup if KVM
 	 * encounters an aliased gfn or two.
 	 */
 	const int capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
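For context on why topup is rare, the toy sketch below (illustrative
only; the structures and names are made up and are not KVM's actual
implementation) models the single-entry-versus-list behavior the comment
describes: the first mapping of a gfn is stored directly in rmap->val,
and only an aliased gfn forces an allocation that would otherwise
require dropping mmu_lock to top up the cache.

/*
 * Toy model (not kernel code) of a tagged-pointer rmap head: with a
 * single mapping, "val" points straight at the SPTE; only a second
 * mapping of the same gfn forces a list allocation.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MANY_SPTES 4

struct toy_desc {			/* stand-in for a list descriptor */
	uint64_t *sptes[MANY_SPTES];
	int nr;
};

struct toy_rmap_head {
	unsigned long val;		/* SPTE pointer, or desc pointer with bit 0 set */
};

static int toy_rmap_add(struct toy_rmap_head *head, uint64_t *sptep)
{
	struct toy_desc *desc;

	if (!head->val) {
		/* First mapping: no allocation, store the pointer directly. */
		head->val = (unsigned long)sptep;
		return 0;
	}
	if (!(head->val & 1)) {
		/* Second mapping: now (and only now) a list is required. */
		desc = calloc(1, sizeof(*desc));
		desc->sptes[desc->nr++] = (uint64_t *)head->val;
		desc->sptes[desc->nr++] = sptep;
		head->val = (unsigned long)desc | 1;
		return 1;
	}
	/* Further mappings reuse the existing list. */
	desc = (struct toy_desc *)(head->val & ~1UL);
	desc->sptes[desc->nr++] = sptep;
	return 0;
}

int main(void)
{
	struct toy_rmap_head head = { 0 };
	uint64_t sptes[3];
	int allocs = 0;

	for (int i = 0; i < 3; i++)
		allocs += toy_rmap_add(&head, &sptes[i]);

	/* Only the transition from one mapping to two costs an allocation. */
	printf("3 mappings, %d allocation(s)\n", allocs);

	if (head.val & 1)
		free((void *)(head.val & ~1UL));
	return 0;
}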