From patchwork Wed Mar 20 00:50:21 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13597151
Subject: [RFC PATCH 1/4] KVM: x86/mmu: Skip the "try unsync" path iff the old
 SPTE was a leaf SPTE
From: Sean Christopherson
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Hildenbrand,
 David Matlack, David Stevens, Matthew Wilcox
Reply-To: Sean Christopherson
Date: Tue, 19 Mar 2024 17:50:21 -0700
Message-ID: <20240320005024.3216282-2-seanjc@google.com>
In-Reply-To: <20240320005024.3216282-1-seanjc@google.com>
References: <20240320005024.3216282-1-seanjc@google.com>

Apply make_spte()'s optimization to skip trying to unsync shadow pages if
and only if the old SPTE was a leaf SPTE, as non-leaf SPTEs in direct MMUs
are always writable, i.e. could trigger a false positive and incorrectly
lead to KVM creating a SPTE without write-protecting or marking shadow
pages unsync.

This bug only affects the TDP MMU, as the shadow MMU only overwrites a
shadow-present SPTE when synchronizing SPTEs (and only 4KiB SPTEs can be
unsync).  Specifically, mmu_set_spte() drops any non-leaf SPTEs *before*
calling make_spte(), whereas the TDP MMU can do a direct replacement of a
page table with the leaf SPTE.

Opportunistically update the comment to explain why skipping the unsync
stuff is safe, as opposed to simply saying "it's someone else's problem".

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/spte.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4a599130e9c9..b4c1119cc48b 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -206,12 +206,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                 spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
 
                 /*
-                 * Optimization: for pte sync, if spte was writable the hash
-                 * lookup is unnecessary (and expensive). Write protection
-                 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
-                 * Same reasoning can be applied to dirty page accounting.
+                 * When overwriting an existing leaf SPTE, and the old SPTE was
+                 * writable, skip trying to unsync shadow pages as any relevant
+                 * shadow pages must already be unsync, i.e. the hash lookup is
+                 * unnecessary (and expensive).
+                 *
+                 * The same reasoning applies to dirty page/folio accounting;
+                 * KVM will mark the folio dirty using the old SPTE, thus
+                 * there's no need to immediately mark the new SPTE as dirty.
+                 *
+                 * Note, both cases rely on KVM not changing PFNs without first
+                 * zapping the old SPTE, which is guaranteed by both the shadow
+                 * MMU and the TDP MMU.
                  */
-                if (is_writable_pte(old_spte))
+                if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
                         goto out;
 
                 /*
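
The tightened check requires *both* "old SPTE is a leaf" and "old SPTE was
writable" before the unsync hash lookup may be skipped.  A stand-alone sketch
of that predicate is below; the bit layout and helpers are illustrative
stand-ins, not KVM's real SPTE encoding (e.g. the real is_last_spte() also
takes the paging level):

  /* Toy model of the "skip the try-unsync path" decision. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SPTE_WRITABLE (1ull << 1)   /* hypothetical writable bit */
  #define SPTE_LEAF     (1ull << 2)   /* hypothetical "is a leaf" bit */

  static bool is_writable_pte(uint64_t spte)
  {
          return spte & SPTE_WRITABLE;
  }

  static bool is_last_spte(uint64_t spte)
  {
          return spte & SPTE_LEAF;
  }

  /* Returns true if the expensive unsync hash lookup can be skipped. */
  static bool can_skip_try_unsync(uint64_t old_spte)
  {
          /*
           * Only a *leaf* SPTE that was already writable proves the relevant
           * shadow pages are already unsync; non-leaf SPTEs in direct MMUs
           * are always writable and so prove nothing.
           */
          return is_last_spte(old_spte) && is_writable_pte(old_spte);
  }

  int main(void)
  {
          uint64_t non_leaf = SPTE_WRITABLE;             /* writable, not a leaf */
          uint64_t old_leaf = SPTE_WRITABLE | SPTE_LEAF; /* writable leaf */

          printf("non-leaf: skip=%d\n", can_skip_try_unsync(non_leaf)); /* 0 */
          printf("leaf:     skip=%d\n", can_skip_try_unsync(old_leaf)); /* 1 */
          return 0;
  }
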
From patchwork Wed Mar 20 00:50:22 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13597152
Subject: [RFC PATCH 2/4] KVM: x86/mmu: Mark folio dirty when creating SPTE,
 not when zapping/modifying
From: Sean Christopherson
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Hildenbrand,
 David Matlack, David Stevens, Matthew Wilcox
Reply-To: Sean Christopherson
Date: Tue, 19 Mar 2024 17:50:22 -0700
Message-ID: <20240320005024.3216282-3-seanjc@google.com>
In-Reply-To: <20240320005024.3216282-1-seanjc@google.com>
References: <20240320005024.3216282-1-seanjc@google.com>

Mark pages/folios dirty when creating SPTEs to map PFNs into the guest,
not when zapping or modifying SPTEs, as marking folios dirty when zapping
or modifying SPTEs can be extremely inefficient.  E.g. when KVM is zapping
collapsible SPTEs to reconstitute a hugepage after disabling dirty logging,
KVM will mark every 4KiB pfn as dirty, even though _at least_ 512 pfns are
guaranteed to be in a single folio (the SPTE couldn't potentially be huge
if that weren't the case).  The problem only becomes worse for 1GiB
HugeTLB pages, as KVM can mark a single folio dirty 512*512 times.

Marking a folio dirty when mapping is functionally safe as KVM drops all
relevant SPTEs in response to an mmu_notifier invalidation, i.e. ensures
that the guest can't dirty a folio after access has been removed.

And because KVM already marks folios dirty when zapping/modifying SPTEs
for KVM reasons, i.e. not in response to an mmu_notifier invalidation,
there is no danger of "prematurely" marking a folio dirty.  E.g. if a
filesystem cleans a folio without first removing write access, then there
already exist races where KVM could mark a folio dirty before remote TLBs
are flushed, i.e. before guest writes are guaranteed to stop.

Furthermore, x86 is literally the only architecture that marks folios
dirty on the backend; every other KVM architecture marks folios dirty at
map time.  x86's unique behavior likely stems from the fact that x86's MMU
predates mmu_notifiers.

Long, long ago, before mmu_notifiers were added, marking pages dirty when
zapping SPTEs was logical, and perhaps even necessary, as KVM held
references to pages, i.e. kept a page's refcount elevated while the page
was mapped into the guest.  At the time, KVM's rmap_remove() simply did:

     if (is_writeble_pte(*spte))
             kvm_release_pfn_dirty(pfn);
     else
             kvm_release_pfn_clean(pfn);

i.e. dropped the refcount and marked the page dirty at the same time.
After mmu_notifiers were introduced, commit acb66dd051d0 ("KVM: MMU: dont
hold pagecount reference for mapped sptes pages") removed the refcount
logic, but kept the dirty logic, i.e. converted the above to:

     if (is_writeble_pte(*spte))
             kvm_release_pfn_dirty(pfn);

And for KVM x86, that's essentially how things have stayed over the last
~15 years, without anyone revisiting *why* KVM marks pages/folios dirty at
zap/modification time, e.g. the behavior was blindly carried forward to
the TDP MMU.

Practically speaking, the only downside to marking a folio dirty during
mapping is that KVM could trigger writeback of memory that was never
actually written.  Except that can't actually happen if KVM marks folios
dirty if and only if a writable SPTE is created (as done here), because
KVM always marks writable SPTEs as dirty during make_spte().  See commit
9b51a63024bd ("KVM: MMU: Explicitly set D-bit for writable spte."), circa
2015.

Note, KVM's access tracking logic for prefetched SPTEs is a bit odd.  If a
guest PTE is dirty and writable, KVM will create a writable SPTE, but then
mark the SPTE for access tracking.  Which isn't wrong, just a bit odd, as
it results in _more_ precise dirty tracking for MMUs _without_ A/D bits.

To keep things simple, mark the folio dirty before access tracking comes
into play, as an access-tracked SPTE can be restored in the fast page
fault path, i.e. without holding mmu_lock.  While writing SPTEs and
accessing memslots outside of mmu_lock is safe, marking a folio dirty is
not.  E.g. if the fast path gets interrupted _just_ after setting a SPTE,
the primary MMU could theoretically invalidate and free a folio before KVM
marks it dirty.  Unlike the shadow MMU, which waits for CPUs to respond to
an IPI, the TDP MMU only guarantees the page tables themselves won't be
freed (via RCU).

Opportunistically update a few stale comments.

Cc: David Hildenbrand
Cc: David Matlack
Cc: David Stevens
Cc: Matthew Wilcox
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 29 ++++-------------------------
 arch/x86/kvm/mmu/paging_tmpl.h |  7 +++----
 arch/x86/kvm/mmu/spte.c        | 13 ++++++++++---
 arch/x86/kvm/mmu/tdp_mmu.c     | 12 ------------
 4 files changed, 17 insertions(+), 44 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e4cc7f764980..bd2240b94ff6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -544,10 +544,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
                 kvm_set_pfn_accessed(spte_to_pfn(old_spte));
         }
 
-        if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte)) {
+        if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
                 flush = true;
-                kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-        }
 
         return flush;
 }
@@ -590,9 +588,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
         if (is_accessed_spte(old_spte))
                 kvm_set_pfn_accessed(pfn);
 
-        if (is_dirty_spte(old_spte))
-                kvm_set_pfn_dirty(pfn);
-
         return old_spte;
 }
 
@@ -623,13 +618,6 @@ static bool mmu_spte_age(u64 *sptep)
                 clear_bit((ffs(shadow_accessed_mask) - 1),
                           (unsigned long *)sptep);
         } else {
-                /*
-                 * Capture the dirty status of the page, so that it doesn't get
-                 * lost when the SPTE is marked for access tracking.
-                 */
-                if (is_writable_pte(spte))
-                        kvm_set_pfn_dirty(spte_to_pfn(spte));
-
                 spte = mark_spte_for_access_track(spte);
                 mmu_spte_update_no_track(sptep, spte);
         }
@@ -1263,16 +1251,6 @@ static bool spte_clear_dirty(u64 *sptep)
         return mmu_spte_update(sptep, spte);
 }
 
-static bool spte_wrprot_for_clear_dirty(u64 *sptep)
-{
-        bool was_writable = test_and_clear_bit(PT_WRITABLE_SHIFT,
-                                               (unsigned long *)sptep);
-        if (was_writable && !spte_ad_enabled(*sptep))
-                kvm_set_pfn_dirty(spte_to_pfn(*sptep));
-
-        return was_writable;
-}
-
 /*
  * Gets the GFN ready for another round of dirty logging by clearing the
  *      - D bit on ad-enabled SPTEs, and
@@ -1288,7 +1266,8 @@ static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 
         for_each_rmap_spte(rmap_head, &iter, sptep)
                 if (spte_ad_need_write_protect(*sptep))
-                        flush |= spte_wrprot_for_clear_dirty(sptep);
+                        flush |= test_and_clear_bit(PT_WRITABLE_SHIFT,
+                                                    (unsigned long *)sptep);
                 else
                         flush |= spte_clear_dirty(sptep);
 
@@ -3392,7 +3371,7 @@ static bool fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu,
          * harm. This also avoids the TLB flush needed after setting dirty bit
          * so non-PML cases won't be impacted.
          *
-         * Compare with set_spte where instead shadow_dirty_mask is set.
+         * Compare with make_spte() where instead shadow_dirty_mask is set.
          */
         if (!try_cmpxchg64(sptep, &old_spte, new_spte))
                 return false;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4d4e98fe4f35..ec24a6679153 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -554,7 +554,6 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                 return false;
 
         mmu_set_spte(vcpu, slot, spte, pte_access, gfn, pfn, NULL);
-        kvm_release_pfn_clean(pfn);
         return true;
 }
 
@@ -891,9 +890,9 @@ static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 
 /*
  * Using the information in sp->shadowed_translation (kvm_mmu_page_get_gfn()) is
- * safe because:
- * - The spte has a reference to the struct page, so the pfn for a given gfn
- *   can't change unless all sptes pointing to it are nuked first.
+ * safe because SPTEs are protected by mmu_notifiers and memslot generations, so
+ * the pfn for a given gfn can't change unless all SPTEs pointing to the gfn are
+ * nuked first.
  *
  * Returns
  * < 0: failed to sync spte
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index b4c1119cc48b..490966bc893c 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -212,8 +212,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                  * unnecessary (and expensive).
                  *
                  * The same reasoning applies to dirty page/folio accounting;
-                 * KVM will mark the folio dirty using the old SPTE, thus
-                 * there's no need to immediately mark the new SPTE as dirty.
+                 * KVM marked the folio dirty when the old SPTE was created,
+                 * thus there's no need to mark the folio dirty again.
                  *
                  * Note, both cases rely on KVM not changing PFNs without first
                  * zapping the old SPTE, which is guaranteed by both the shadow
@@ -235,8 +235,10 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                 }
         }
 
-        if (pte_access & ACC_WRITE_MASK)
+        if (pte_access & ACC_WRITE_MASK) {
                 spte |= spte_shadow_dirty_mask(spte);
+                kvm_set_pfn_dirty(pfn);
+        }
 
 out:
         if (prefetch)
@@ -246,6 +248,11 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                   "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
                   get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
 
+        /*
+         * Mark the memslot dirty *after* modifying it for access tracking.
+         * Unlike folios, memslots can be safely marked dirty out of mmu_lock,
+         * i.e. in the fast page fault handler.
+         */
         if ((spte & PT_WRITABLE_MASK) && kvm_slot_dirty_track_enabled(slot)) {
                 /* Enforced by kvm_mmu_hugepage_adjust. */
                 WARN_ON_ONCE(level > PG_LEVEL_4K);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d078157e62aa..5866a664f46e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -511,10 +511,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
         if (is_leaf != was_leaf)
                 kvm_update_page_stats(kvm, level, is_leaf ? 1 : -1);
 
-        if (was_leaf && is_dirty_spte(old_spte) &&
-            (!is_present || !is_dirty_spte(new_spte) || pfn_changed))
-                kvm_set_pfn_dirty(spte_to_pfn(old_spte));
-
         /*
          * Recursively handle child PTs if the change removed a subtree from
          * the paging structure. Note the WARN on the PFN changing without the
@@ -1224,13 +1220,6 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
                                                   iter->level);
                 new_spte = iter->old_spte & ~shadow_accessed_mask;
         } else {
-                /*
-                 * Capture the dirty status of the page, so that it doesn't get
-                 * lost when the SPTE is marked for access tracking.
-                 */
-                if (is_writable_pte(iter->old_spte))
-                        kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
-
                 new_spte = mark_spte_for_access_track(iter->old_spte);
                 iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
                                                         iter->old_spte, new_spte,
@@ -1652,7 +1641,6 @@ static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root,
                 trace_kvm_tdp_mmu_spte_changed(iter.as_id, iter.gfn, iter.level,
                                                iter.old_spte,
                                                iter.old_spte & ~dbit);
-                kvm_set_pfn_dirty(spte_to_pfn(iter.old_spte));
         }
 
         rcu_read_unlock();
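
To put a rough number on the inefficiency described above, consider one 2MiB
range backed by a single folio: the old scheme calls kvm_set_pfn_dirty() once
per 4KiB SPTE when zapping collapsible SPTEs, while the new scheme dirties the
folio exactly once, when the writable SPTE is created.  A toy user-space model
of that bookkeeping follows; the counters and helpers are hypothetical, not
KVM code:

  #include <stdio.h>

  #define SPTES_PER_2M 512

  static unsigned long zap_time_marks, map_time_marks;

  /* Old behavior: every 4KiB SPTE zapped re-marks the same folio dirty. */
  static void zap_collapsible_sptes_old(void)
  {
          for (int i = 0; i < SPTES_PER_2M; i++)
                  zap_time_marks++;       /* kvm_set_pfn_dirty() per SPTE */
  }

  /*
   * New behavior: the folio is marked dirty once, when the writable SPTE is
   * created in make_spte(); zapping doesn't touch the folio at all.
   */
  static void map_writable_spte_new(void)
  {
          map_time_marks++;               /* kvm_set_pfn_dirty() at map time */
  }

  int main(void)
  {
          map_writable_spte_new();
          zap_collapsible_sptes_old();

          printf("zap-time marks for one 2MiB folio: %lu\n", zap_time_marks);
          printf("map-time marks for one 2MiB folio: %lu\n", map_time_marks);
          return 0;
  }

For a 1GiB HugeTLB folio the same arithmetic gives 512*512 redundant calls on
the zap path versus one call at map time.
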
From patchwork Wed Mar 20 00:50:23 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13597153
Subject: [RFC PATCH 3/4] KVM: x86/mmu: Mark page/folio accessed only when
 zapping leaf SPTEs
From: Sean Christopherson
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Hildenbrand,
 David Matlack, David Stevens, Matthew Wilcox
Reply-To: Sean Christopherson
Date: Tue, 19 Mar 2024 17:50:23 -0700
Message-ID: <20240320005024.3216282-4-seanjc@google.com>
In-Reply-To: <20240320005024.3216282-1-seanjc@google.com>
References: <20240320005024.3216282-1-seanjc@google.com>

Mark folios as accessed only when zapping leaf SPTEs, which is a rough
heuristic for "only in response to an mmu_notifier invalidation".  Page
aging and LRUs are tolerant of false negatives, i.e. KVM doesn't need to
be precise for correctness, and re-marking folios as accessed when zapping
entire roots or when zapping collapsible SPTEs is expensive and adds very
little value.

E.g. when a VM is dying, all of its memory is being freed; marking folios
accessed at that time provides no known value.  Similarly, because KVM
marks folios as accessed when creating SPTEs, marking all folios as
accessed when userspace happens to delete a memslot doesn't add value.
The folio was marked accessed when the old SPTE was created, and will be
marked accessed yet again if a vCPU accesses the pfn again after reloading
a new root.  Zapping collapsible SPTEs is a similar story; marking folios
accessed just because userspace disables dirty logging is a side effect of
KVM behavior, not a deliberate goal.

Mark folios accessed when the primary MMU might be invalidating mappings,
e.g. instead of completely dropping calls to kvm_set_pfn_accessed(), as
such zappings are not KVM initiated, i.e. might actually be related to
page aging and LRU activity.

Note, x86 is the only KVM architecture that "double dips"; every other
arch marks pfns as accessed only when mapping into the guest, not when
mapping into the guest _and_ when removing from the guest.

Signed-off-by: Sean Christopherson
---
 Documentation/virt/kvm/locking.rst | 76 +++++++++++++++---------------
 arch/x86/kvm/mmu/mmu.c             |  4 +-
 arch/x86/kvm/mmu/tdp_mmu.c         |  7 ++-
 3 files changed, 43 insertions(+), 44 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 02880d5552d5..8b3bb9fe60bf 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -138,49 +138,51 @@ Then, we can ensure the dirty bitmaps is correctly set for a gfn.
 
 2) Dirty bit tracking
 
-In the origin code, the spte can be fast updated (non-atomically) if the
+In the original code, the spte can be fast updated (non-atomically) if the
 spte is read-only and the Accessed bit has already been set since the
 Accessed bit and Dirty bit can not be lost.
 
 But it is not true after fast page fault since the spte can be marked
 writable between reading spte and updating spte. Like below case:
 
-+------------------------------------------------------------------------+
-| At the beginning::                                                      |
-|                                                                         |
-|          spte.W = 0                                                     |
-|          spte.Accessed = 1                                              |
-+------------------------------------+-----------------------------------+
-| CPU 0:                             | CPU 1:                            |
-+------------------------------------+-----------------------------------+
-| In mmu_spte_clear_track_bits()::   |                                   |
-|                                    |                                   |
-|  old_spte = *spte;                 |                                   |
-|                                    |                                   |
-|                                    |                                   |
-|  /* 'if' condition is satisfied. */|                                   |
-|  if (old_spte.Accessed == 1 &&     |                                   |
-|      old_spte.W == 0)              |                                   |
-|     spte = 0ull;                   |                                   |
-+------------------------------------+-----------------------------------+
-|                                    | on fast page fault path::         |
-|                                    |                                   |
-|                                    |    spte.W = 1                     |
-|                                    |                                   |
-|                                    | memory write on the spte::        |
-|                                    |                                   |
-|                                    |    spte.Dirty = 1                 |
-+------------------------------------+-----------------------------------+
-|  ::                                |                                   |
-|                                    |                                   |
-|   else                             |                                   |
-|     old_spte = xchg(spte, 0ull)    |                                   |
-|   if (old_spte.Accessed == 1)      |                                   |
-|     kvm_set_pfn_accessed(spte.pfn);|                                   |
-|   if (old_spte.Dirty == 1)         |                                   |
-|      kvm_set_pfn_dirty(spte.pfn);  |                                   |
-|      OOPS!!!                       |                                   |
-+------------------------------------+-----------------------------------+
++-------------------------------------------------------------------------+
+| At the beginning::                                                       |
+|                                                                          |
+|          spte.W = 0                                                      |
+|          spte.Accessed = 1                                               |
++-------------------------------------+-----------------------------------+
+| CPU 0:                              | CPU 1:                            |
++-------------------------------------+-----------------------------------+
+| In mmu_spte_update()::              |                                   |
+|                                     |                                   |
+|  old_spte = *spte;                  |                                   |
+|                                     |                                   |
+|                                     |                                   |
+|  /* 'if' condition is satisfied. */ |                                   |
+|  if (old_spte.Accessed == 1 &&      |                                   |
+|      old_spte.W == 0)               |                                   |
+|     spte = new_spte;                |                                   |
++-------------------------------------+-----------------------------------+
+|                                     | on fast page fault path::         |
+|                                     |                                   |
+|                                     |    spte.W = 1                     |
+|                                     |                                   |
+|                                     | memory write on the spte::        |
+|                                     |                                   |
+|                                     |    spte.Dirty = 1                 |
++-------------------------------------+-----------------------------------+
+|  ::                                 |                                   |
+|                                     |                                   |
+|   else                              |                                   |
+|     old_spte = xchg(spte, new_spte);|                                   |
+|   if (old_spte.Accessed &&          |                                   |
+|       !new_spte.Accessed)           |                                   |
+|      flush = true;                  |                                   |
+|   if (old_spte.Dirty &&             |                                   |
+|       !new_spte.Dirty)              |                                   |
+|      flush = true;                  |                                   |
+|      OOPS!!!                        |                                   |
++-------------------------------------+-----------------------------------+
 
 The Dirty bit is lost in this case.
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bd2240b94ff6..0a6c6619d213 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -539,10 +539,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
          * to guarantee consistency between TLB and page tables.
          */
 
-        if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte)) {
+        if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte))
                 flush = true;
-                kvm_set_pfn_accessed(spte_to_pfn(old_spte));
-        }
 
         if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
                 flush = true;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 5866a664f46e..340d5af454c6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -520,10 +520,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
         if (was_present && !was_leaf &&
             (is_leaf || !is_present || WARN_ON_ONCE(pfn_changed)))
                 handle_removed_pt(kvm, spte_to_child_pt(old_spte, level), shared);
-
-        if (was_leaf && is_accessed_spte(old_spte) &&
-            (!is_present || !is_accessed_spte(new_spte) || pfn_changed))
-                kvm_set_pfn_accessed(spte_to_pfn(old_spte));
 }
 
 /*
@@ -841,6 +837,9 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
                 tdp_mmu_iter_set_spte(kvm, &iter, 0);
 
+                if (is_accessed_spte(iter.old_spte))
+                        kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte));
+
                 /*
                  * Zappings SPTEs in invalid roots doesn't require a TLB flush,
                  * see kvm_tdp_mmu_zap_invalidated_roots() for details.
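
A rough user-space model of the new policy follows: the Accessed bit is
propagated to the primary MMU only when a leaf SPTE is zapped for an
mmu_notifier-style invalidation, and skipped for bulk teardown (root zaps,
memslot deletion, collapsible-SPTE zaps).  The SPTE bits and helpers below are
made-up stand-ins for illustration, not KVM's real encoding:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SPTE_PRESENT  (1ull << 0)
  #define SPTE_LEAF     (1ull << 1)
  #define SPTE_ACCESSED (1ull << 2)

  static unsigned long pfn_accessed_calls;

  static void set_pfn_accessed(void)
  {
          pfn_accessed_calls++;   /* stands in for kvm_set_pfn_accessed() */
  }

  /*
   * Only zaps done on behalf of an invalidation bother to age the page;
   * false negatives (e.g. VM teardown) are fine for LRU purposes.
   */
  static void zap_spte(uint64_t spte, bool mmu_notifier_invalidation)
  {
          if (!(spte & SPTE_PRESENT))
                  return;

          if (mmu_notifier_invalidation && (spte & SPTE_LEAF) &&
              (spte & SPTE_ACCESSED))
                  set_pfn_accessed();
  }

  int main(void)
  {
          uint64_t leaf = SPTE_PRESENT | SPTE_LEAF | SPTE_ACCESSED;

          zap_spte(leaf, true);   /* invalidation: ages the page */
          zap_spte(leaf, false);  /* root teardown: skipped */

          printf("kvm_set_pfn_accessed() calls: %lu\n", pfn_accessed_calls);
          return 0;
  }
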
From patchwork Wed Mar 20 00:50:24 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13597154
Subject: [RFC PATCH 4/4] KVM: x86/mmu: Don't force flush if SPTE update
 clears Accessed bit
From: Sean Christopherson
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Hildenbrand,
 David Matlack, David Stevens, Matthew Wilcox
Reply-To: Sean Christopherson
Date: Tue, 19 Mar 2024 17:50:24 -0700
Message-ID: <20240320005024.3216282-5-seanjc@google.com>
In-Reply-To: <20240320005024.3216282-1-seanjc@google.com>
References: <20240320005024.3216282-1-seanjc@google.com>

Don't force a TLB flush if mmu_spte_update() clears the Accessed bit, as
access tracking tolerates false negatives, as evidenced by the
mmu_notifier hooks that explicitly test and age SPTEs without doing a TLB
flush.

In practice, this is very nearly a nop.  spte_write_protect() and
spte_clear_dirty() never clear the Accessed bit.  make_spte() always sets
the Accessed bit for !prefetch scenarios.  FNAME(sync_spte) only sets the
SPTE if the protection bits are changing, i.e. if a flush will be needed
regardless of the Accessed bits.  And FNAME(pte_prefetch) sets the SPTE if
and only if the old SPTE is !PRESENT.

That leaves kvm_arch_async_page_ready() as the one path that will generate
a !ACCESSED SPTE *and* overwrite a PRESENT SPTE.  And that's very arguably
a bug, as clobbering a valid SPTE in that case is nonsensical.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 31 +++++++++----------------------
 1 file changed, 9 insertions(+), 22 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0a6c6619d213..77d1072b130d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -515,37 +515,24 @@ static u64 mmu_spte_update_no_track(u64 *sptep, u64 new_spte)
  * TLBs must be flushed.  Otherwise rmap_write_protect will find a read-only
  * spte, even though the writable spte might be cached on a CPU's TLB.
  *
+ * Remote TLBs also need to be flushed if the Dirty bit is cleared, as false
+ * negatives are not acceptable, e.g. if KVM is using D-bit based PML on VMX.
+ *
+ * Don't flush if the Accessed bit is cleared, as access tracking tolerates
+ * false negatives, and the one path that does care about TLB flushes,
+ * kvm_mmu_notifier_clear_flush_young(), uses mmu_spte_update_no_track().
+ *
  * Returns true if the TLB needs to be flushed
  */
 static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 {
-        bool flush = false;
         u64 old_spte = mmu_spte_update_no_track(sptep, new_spte);
 
         if (!is_shadow_present_pte(old_spte))
                 return false;
 
-        /*
-         * For the spte updated out of mmu-lock is safe, since
-         * we always atomically update it, see the comments in
-         * spte_has_volatile_bits().
-         */
-        if (is_mmu_writable_spte(old_spte) &&
-            !is_writable_pte(new_spte))
-                flush = true;
-
-        /*
-         * Flush TLB when accessed/dirty states are changed in the page tables,
-         * to guarantee consistency between TLB and page tables.
-         */
-
-        if (is_accessed_spte(old_spte) && !is_accessed_spte(new_spte))
-                flush = true;
-
-        if (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte))
-                flush = true;
-
-        return flush;
+        return (is_mmu_writable_spte(old_spte) && !is_writable_pte(new_spte)) ||
+               (is_dirty_spte(old_spte) && !is_dirty_spte(new_spte));
 }
 
 /*
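
The resulting flush rule can be modelled in isolation: flush only when the
update removes write access or clears the Dirty bit, never for an
Accessed-only change.  The sketch below uses a hypothetical bit layout and
collapses the MMU-writable vs. host-writable distinction that the real
mmu_spte_update() preserves:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define SPTE_WRITABLE (1ull << 0)
  #define SPTE_DIRTY    (1ull << 1)
  #define SPTE_ACCESSED (1ull << 2)

  static bool needs_tlb_flush(uint64_t old_spte, uint64_t new_spte)
  {
          bool lost_write = (old_spte & SPTE_WRITABLE) && !(new_spte & SPTE_WRITABLE);
          bool lost_dirty = (old_spte & SPTE_DIRTY) && !(new_spte & SPTE_DIRTY);

          /* Clearing only the Accessed bit no longer forces a flush. */
          return lost_write || lost_dirty;
  }

  int main(void)
  {
          uint64_t spte = SPTE_WRITABLE | SPTE_DIRTY | SPTE_ACCESSED;

          printf("clear A only: flush=%d\n",
                 needs_tlb_flush(spte, spte & ~SPTE_ACCESSED));  /* 0 */
          printf("clear W     : flush=%d\n",
                 needs_tlb_flush(spte, spte & ~SPTE_WRITABLE));  /* 1 */
          printf("clear D     : flush=%d\n",
                 needs_tlb_flush(spte, spte & ~SPTE_DIRTY));     /* 1 */
          return 0;
  }
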