From patchwork Thu Oct 10 18:24:24 2024
From: Sean Christopherson
Date: Thu, 10 Oct 2024 11:24:24 -0700
Subject: [PATCH v13 82/85] KVM: x86/mmu: Don't mark "struct page" accessed when zapping SPTEs
Message-ID: <20241010182427.1434605-83-seanjc@google.com>
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
 Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
 Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 Alex Bennée, Yan Zhao, David Matlack, David Stevens, Andrew Jones

Don't mark pages/folios as accessed in the primary MMU when zapping SPTEs,
as doing so relies on kvm_pfn_to_refcounted_page(), and generally speaking
is unnecessary and wasteful.  KVM participates in page aging via
mmu_notifiers, so there's no need to push "accessed" updates to the primary
MMU.

And if KVM zaps an SPTE in response to an mmu_notifier, marking the page
accessed _after_ the primary MMU has decided to zap it is likely to go
unnoticed, i.e. odds are good that, if the page is being zapped for
reclaim, the page will be swapped out regardless of whether or not KVM
marks the page accessed.

Dropping x86's use of kvm_set_pfn_accessed() also paves the way for
removing kvm_pfn_to_refcounted_page() and all its users.
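For context, the aging contract the changelog leans on is pull-based: the
primary MMU asks the secondary MMU (KVM) whether a page was used, via the
mmu_notifier clear_young/test_young callbacks, rather than KVM pushing
"accessed" updates back.  The toy model below (plain userspace C, not
kernel code; the spte struct and secondary_mmu_*() names are invented for
illustration, and only kvm_age_gfn() and kvm_set_pfn_accessed() are real
KVM symbols) sketches why a zap-time push is redundant under that contract:

#include <stdbool.h>
#include <stdio.h>

struct spte {
	bool present;
	bool accessed;	/* hardware-set Accessed bit, owned by the secondary MMU */
};

/* Models mmu_notifier ->clear_young(): the primary MMU *pulls* age info. */
static bool secondary_mmu_clear_young(struct spte *spte)
{
	bool young = spte->present && spte->accessed;

	spte->accessed = false;	/* harvest and clear, as kvm_age_gfn() does */
	return young;
}

/* Models zapping an SPTE: with pull-based aging, nothing to report back. */
static void secondary_mmu_zap(struct spte *spte)
{
	spte->present = false;
	/*
	 * Pre-patch, KVM also called kvm_set_pfn_accessed() here; but if
	 * the zap was triggered by reclaim, the primary MMU has already
	 * decided the page's fate, so that update was wasted work.
	 */
}

int main(void)
{
	struct spte spte = { .present = true, .accessed = true };

	printf("young? %d\n", secondary_mmu_clear_young(&spte)); /* 1 */
	secondary_mmu_zap(&spte);
	printf("young? %d\n", secondary_mmu_clear_young(&spte)); /* 0 */
	return 0;
}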
Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c     | 17 -----------------
 arch/x86/kvm/mmu/tdp_mmu.c |  3 ---
 2 files changed, 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5acdaf3b1007..55eeca931e23 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -559,10 +559,8 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
  */
 static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 {
-	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
 	int level = sptep_to_sp(sptep)->role.level;
-	struct page *page;
 
 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
@@ -574,21 +572,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 		return old_spte;
 
 	kvm_update_page_stats(kvm, level, -1);
-
-	pfn = spte_to_pfn(old_spte);
-
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed.  Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON_ONCE(page && !page_count(page));
-
-	if (is_accessed_spte(old_spte))
-		kvm_set_pfn_accessed(pfn);
-
 	return old_spte;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 8aa0d7a7602b..91caa73a905b 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -861,9 +861,6 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 
-		if (is_accessed_spte(iter.old_spte))
-			kvm_set_pfn_accessed(spte_to_pfn(iter.old_spte));
-
 		/*
 		 * Zappings SPTEs in invalid roots doesn't require a TLB flush,
 		 * see kvm_tdp_mmu_zap_invalidated_roots() for details.