From patchwork Thu Oct 10 18:23:43 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13830911
Date: Thu, 10 Oct 2024 11:23:43 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
X-Mailer: git-send-email 2.47.0.rc1.288.g06298d1525-goog
Message-ID: <20241010182427.1434605-42-seanjc@google.com>
Subject: [PATCH v13 41/85] KVM: x86/mmu: Mark pages/folios dirty at the origin of make_spte()
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
    Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, Alex Bennée, Yan Zhao,
    David Matlack, David Stevens, Andrew Jones

Move the marking of folios dirty from make_spte() out to its callers,
which have access to the _struct page_, not just the underlying pfn.
Once all architectures follow suit, this will allow removing KVM's ugly
hack where KVM elevates the refcount of VM_MIXEDMAP pfns that happen to
be struct page memory.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c         | 30 ++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/paging_tmpl.h |  5 +++++
 arch/x86/kvm/mmu/spte.c        | 11 -----------
 3 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 31a6ae41a6f4..f730870887dd 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2964,7 +2964,17 @@ static bool kvm_mmu_prefetch_sptes(struct kvm_vcpu *vcpu, gfn_t gfn, u64 *sptep,
 	for (i = 0; i < nr_pages; i++, gfn++, sptep++) {
 		mmu_set_spte(vcpu, slot, sptep, access, gfn,
 			     page_to_pfn(pages[i]), NULL);
-		kvm_release_page_clean(pages[i]);
+
+		/*
+		 * KVM always prefetches writable pages from the primary MMU,
+		 * and KVM can make its SPTE writable in the fast page fault
+		 * handler, without notifying the primary MMU.  Mark
+		 * pages/folios dirty now to ensure file data is written back
+		 * if it ends up being written by the guest.  Because KVM's
+		 * prefetching GUPs writable PTEs, the probability of
+		 * unnecessary writeback is extremely low.
+		 */
+		kvm_release_page_dirty(pages[i]);
 	}
 
 	return true;
@@ -4360,7 +4370,23 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 				      struct kvm_page_fault *fault, int r)
 {
-	kvm_release_pfn_clean(fault->pfn);
+	lockdep_assert_once(lockdep_is_held(&vcpu->kvm->mmu_lock) ||
+			    r == RET_PF_RETRY);
+
+	/*
+	 * If the page that KVM got from the *primary MMU* is writable, and
+	 * KVM installed or reused a SPTE, mark the page/folio dirty.  Note,
+	 * this may mark a folio dirty even if KVM created a read-only SPTE,
+	 * e.g. if the GFN is write-protected.  Folios can't be safely marked
+	 * dirty outside of mmu_lock as doing so could race with writeback on
+	 * the folio.  As a result, KVM can't mark folios dirty in the fast
+	 * page fault handler, and so KVM must (somewhat) speculatively mark
+	 * the folio dirty if KVM could locklessly make the SPTE writable.
+	 */
+	if (!fault->map_writable || r == RET_PF_RETRY)
+		kvm_release_pfn_clean(fault->pfn);
+	else
+		kvm_release_pfn_dirty(fault->pfn);
 }
 
 static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 35d0c3f1a789..f4711674c47b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -954,6 +954,11 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 		  spte_to_pfn(spte), spte, true, true,
 		  host_writable, &spte);
 
+	/*
+	 * There is no need to mark the pfn dirty, as the new protections must
+	 * be a subset of the old protections, i.e. synchronizing a SPTE cannot
+	 * change the SPTE from read-only to writable.
+	 */
 	return mmu_spte_update(sptep, spte);
 }
 
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 8e8d6ee79c8b..f1a50a78badb 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -277,17 +277,6 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		mark_page_dirty_in_slot(vcpu->kvm, slot, gfn);
 	}
 
-	/*
-	 * If the page that KVM got from the primary MMU is writable, i.e. if
-	 * it's host-writable, mark the page/folio dirty.  As alluded to above,
-	 * folios can't be safely marked dirty in the fast page fault handler,
-	 * and so KVM must (somewhat) speculatively mark the folio dirty even
-	 * though it isn't guaranteed to be written as KVM won't mark the folio
-	 * dirty if/when the SPTE is made writable.
-	 */
-	if (host_writable)
-		kvm_set_pfn_dirty(pfn);
-
 	*new_spte = spte;
 	return wrprot;
 }
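
For readers who want to sanity-check the new contract without the full
tree handy, the heart of this change is the if/else added to
kvm_mmu_finish_page_fault(): release the page dirty only when the
primary MMU mapped it writable AND KVM actually installed or reused a
SPTE.  A minimal standalone sketch of that decision follows; everything
prefixed with "demo_" is a stand-in invented for illustration, not a
KVM API, and only the final branch mirrors the patch.

/* Build with: cc -o demo demo.c */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for KVM's RET_PF_* fault resolution codes. */
enum demo_fault_result { DEMO_RET_PF_RETRY, DEMO_RET_PF_FIXED };

/* Stand-in for the relevant fields of struct kvm_page_fault. */
struct demo_fault {
	unsigned long pfn;
	bool map_writable;	/* primary MMU granted write access */
};

/* Stand-in for kvm_release_pfn_clean(): drop the ref, folio untouched. */
static void demo_release_clean(unsigned long pfn)
{
	printf("pfn %#lx released clean\n", pfn);
}

/* Stand-in for kvm_release_pfn_dirty(): mark dirty, then drop the ref. */
static void demo_release_dirty(unsigned long pfn)
{
	printf("pfn %#lx marked dirty and released\n", pfn);
}

/* The decision this patch adds to kvm_mmu_finish_page_fault(). */
static void demo_finish_page_fault(struct demo_fault *fault,
				   enum demo_fault_result r)
{
	if (!fault->map_writable || r == DEMO_RET_PF_RETRY)
		demo_release_clean(fault->pfn);
	else
		demo_release_dirty(fault->pfn);
}

int main(void)
{
	struct demo_fault ro = { .pfn = 0x1234, .map_writable = false };
	struct demo_fault rw = { .pfn = 0x5678, .map_writable = true };

	demo_finish_page_fault(&ro, DEMO_RET_PF_FIXED);	/* clean */
	demo_finish_page_fault(&rw, DEMO_RET_PF_RETRY);	/* clean: no SPTE */
	demo_finish_page_fault(&rw, DEMO_RET_PF_FIXED);	/* dirty */
	return 0;
}

The design point the sketch captures is that only the fault path knows
both inputs: whether the primary MMU handed KVM a writable page
(fault->map_writable) and whether a SPTE was actually installed
(r != RET_PF_RETRY).  make_spte() sees only the pfn, which is why the
dirty-marking has to move out to its callers.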