From patchwork Wed May 29 18:05:07 2024
From: James Houghton <jthoughton@google.com>
Date: Wed, 29 May 2024 18:05:07 +0000
Subject: [PATCH v4 4/7] KVM: Move MMU lock acquisition for test/clear_young to architecture
Message-ID: <20240529180510.2295118-5-jthoughton@google.com>
In-Reply-To: <20240529180510.2295118-1-jthoughton@google.com>
References: <20240529180510.2295118-1-jthoughton@google.com>
X-Mailer: git-send-email 2.45.1.288.g0e0cd299f1-goog
To: Andrew Morton, Paolo Bonzini
Cc: Albert Ou, Ankit Agrawal, Anup Patel, Atish Patra, Axel Rasmussen,
    Bibo Mao, Catalin Marinas, David Matlack, David Rientjes, Huacai Chen,
    James Houghton, James Morse, Jonathan Corbet, Marc Zyngier,
    Michael Ellerman, Nicholas Piggin, Oliver Upton, Palmer Dabbelt,
    Paul Walmsley, Raghavendra Rao Ananta, Ryan Roberts,
    Sean Christopherson, Shaoqin Huang, Shuah Khan, Suzuki K Poulose,
    Tianrui Zhao, Will Deacon, Yu Zhao, Zenghui Yu,
    kvm-riscv@lists.infradead.org, kvm@vger.kernel.org,
    kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev
When implementing mmu_notifier_{test,clear}_young, the KVM memslot walker used to take the MMU lock for us. Now make the architectures take it themselves. Don't relax locking for any architecture except powerpc e500; its implementations of kvm_age_gfn and kvm_test_age_gfn simply return false, so there is no need to grab the KVM MMU lock.
Signed-off-by: James Houghton <jthoughton@google.com>
---
 arch/arm64/kvm/mmu.c      | 30 ++++++++++++++++++++++--------
 arch/loongarch/kvm/mmu.c  | 20 +++++++++++++++-----
 arch/mips/kvm/mmu.c       | 21 ++++++++++++++++-----
 arch/powerpc/kvm/book3s.c | 14 ++++++++++++--
 arch/riscv/kvm/mmu.c      | 26 ++++++++++++++++++++------
 arch/x86/kvm/mmu/mmu.c    |  8 ++++++++
 virt/kvm/kvm_main.c       |  4 ++--
 7 files changed, 95 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8bcab0cc3fe9..8337009dde77 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1773,25 +1773,39 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	write_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.mmu.pgt)
-		return false;
+		goto out;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
-						   range->start << PAGE_SHIFT,
-						   size, true);
+	young = kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+						    range->start << PAGE_SHIFT,
+						    size, true);
+
+out:
+	write_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	write_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.mmu.pgt)
-		return false;
+		goto out;
 
-	return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
-						   range->start << PAGE_SHIFT,
-						   size, false);
+	young = kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
+						    range->start << PAGE_SHIFT,
+						    size, false);
+
+out:
+	write_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 phys_addr_t kvm_mmu_get_httbr(void)
diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 98883aa23ab8..5eb262bcf6b0 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -497,24 +497,34 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	kvm_ptw_ctx ctx;
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
 
 	ctx.flag = 0;
 	ctx.ops = kvm_mkold_pte;
 	kvm_ptw_prepare(kvm, &ctx);
 
-	return kvm_ptw_top(kvm->arch.pgd, range->start << PAGE_SHIFT,
+	young = kvm_ptw_top(kvm->arch.pgd, range->start << PAGE_SHIFT,
 			range->end << PAGE_SHIFT, &ctx);
+
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	gpa_t gpa = range->start << PAGE_SHIFT;
-	kvm_pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
+	kvm_pte_t *ptep;
+	bool young;
 
-	if (ptep && kvm_pte_present(NULL, ptep) && kvm_pte_young(*ptep))
-		return true;
+	spin_lock(&kvm->mmu_lock);
+	ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
 
-	return false;
+	young = ptep && kvm_pte_present(NULL, ptep) && kvm_pte_young(*ptep);
+
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 /*
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index c17157e700c0..db3b7cf22db1 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -446,17 +446,28 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
+	young = kvm_mips_mkold_gpa_pt(kvm, range->start, range->end);
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	gpa_t gpa = range->start << PAGE_SHIFT;
-	pte_t *gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
+	pte_t *gpa_pte;
+	bool young = false;
 
-	if (!gpa_pte)
-		return false;
-	return pte_young(*gpa_pte);
+	spin_lock(&kvm->mmu_lock);
+	gpa_pte = kvm_mips_pte_for_gpa(kvm, NULL, gpa);
+
+	if (gpa_pte)
+		young = pte_young(*gpa_pte);
+
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 /**
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index ff6c38373957..f503ab9ac3a5 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -887,12 +887,22 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm->arch.kvm_ops->age_gfn(kvm, range);
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
+	young = kvm->arch.kvm_ops->age_gfn(kvm, range);
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm->arch.kvm_ops->test_age_gfn(kvm, range);
+	bool young;
+
+	spin_lock(&kvm->mmu_lock);
+	young = kvm->arch.kvm_ops->test_age_gfn(kvm, range);
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 int kvmppc_core_init_vm(struct kvm *kvm)
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index b63650f9b966..c78abe8041fb 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -555,17 +555,24 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	pte_t *ptep;
 	u32 ptep_level = 0;
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	spin_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.pgd)
-		return false;
+		goto out;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
 	if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
				   &ptep, &ptep_level))
-		return false;
+		goto out;
+
+	young = ptep_test_and_clear_young(NULL, 0, ptep);
 
-	return ptep_test_and_clear_young(NULL, 0, ptep);
+out:
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@@ -573,17 +580,24 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	pte_t *ptep;
 	u32 ptep_level = 0;
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	bool young = false;
+
+	spin_lock(&kvm->mmu_lock);
 
 	if (!kvm->arch.pgd)
-		return false;
+		goto out;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
 	if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
				   &ptep, &ptep_level))
-		return false;
+		goto out;
+
+	young = pte_young(ptep_get(ptep));
 
-	return pte_young(ptep_get(ptep));
+out:
+	spin_unlock(&kvm->mmu_lock);
+	return young;
 }
 
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 662f62dfb2aa..6a2a557c2c31 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1630,12 +1630,16 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
 
+	write_lock(&kvm->mmu_lock);
+
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
 
 	if (tdp_mmu_enabled)
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
 
+	write_unlock(&kvm->mmu_lock);
+
 	return young;
 }
 
@@ -1643,12 +1647,16 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool young = false;
 
+	write_lock(&kvm->mmu_lock);
+
 	if (kvm_memslots_have_rmaps(kvm))
 		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
 
 	if (tdp_mmu_enabled)
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
 
+	write_unlock(&kvm->mmu_lock);
+
 	return young;
 }
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d197b6725cb3..8d2d3acf18d8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -901,7 +901,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * more sophisticated heuristic later.
 	 */
 	return kvm_handle_hva_range_no_flush(mn, start, end,
-					     kvm_age_gfn, false);
+					     kvm_age_gfn, true);
 }
 
 static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
@@ -911,7 +911,7 @@ static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
 	trace_kvm_test_age_hva(address);
 
 	return kvm_handle_hva_range_no_flush(mn, address, address + 1,
-					     kvm_test_age_gfn, false);
+					     kvm_test_age_gfn, true);
 }
 
 static void kvm_mmu_notifier_release(struct mmu_notifier *mn,