From patchwork Tue Jun 11 00:21:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13692686
Date: Tue, 11 Jun 2024 00:21:38 +0000
In-Reply-To: <20240611002145.2078921-1-jthoughton@google.com>
References: <20240611002145.2078921-1-jthoughton@google.com>
X-Mailer: git-send-email 2.45.2.505.gda0bf45e8d-goog
Message-ID: <20240611002145.2078921-3-jthoughton@google.com>
Subject: [PATCH v5 2/9] KVM: x86: Relax locking for kvm_test_age_gfn and
 kvm_age_gfn
From: James Houghton
To: Andrew Morton, Paolo Bonzini
Cc: Ankit Agrawal, Axel Rasmussen, Catalin Marinas, David Matlack,
 David Rientjes, James Houghton, James Morse, Jonathan Corbet, Marc Zyngier,
 Oliver Upton, Raghavendra Rao Ananta, Ryan Roberts, Sean Christopherson,
 Shaoqin Huang, Suzuki K Poulose, Wei Xu, Will Deacon, Yu Zhao, Zenghui Yu,
 kvmarm@lists.linux.dev, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org

Walk the TDP MMU in an RCU read-side critical section. This requires a
way to do RCU-safe walking of the tdp_mmu_roots; do this with a new
macro. The PTE modifications are now done atomically, and
kvm_tdp_mmu_spte_need_atomic_write() has been updated to account for
the fact that kvm_age_gfn can now locklessly update the accessed bit
and the R/X bits.

If the cmpxchg for marking the spte for access tracking fails, we
simply retry if the spte is still a leaf PTE. If it isn't, we return
false to continue the walk.

Harvesting age information from the shadow MMU is still done while
holding the MMU write lock.

Suggested-by: Yu Zhao
Signed-off-by: James Houghton
---
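Reviewer note (not part of the commit message, and not part of the
patch to apply): the two SPTE update strategies used below can be
modeled with plain C11 atomics. The following is a minimal user-space
sketch, not kernel code; ACCESSED_MASK, PRESENT_MASK, and the helper
names are made-up stand-ins for shadow_accessed_mask, the
shadow-present check, and mark_spte_for_access_track(), which differ
in detail.

/*
 * Stand-alone illustration of the two lockless update paths in
 * age_gfn_range(). All identifiers here are hypothetical.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ACCESSED_MASK (1ull << 5)	/* stand-in for shadow_accessed_mask */
#define PRESENT_MASK  (1ull << 0)	/* stand-in for the shadow-present bit */

/*
 * A/D-enabled path: one atomic fetch-and clears the Accessed bit and
 * returns the old value, so no retry is needed. This mirrors what
 * tdp_mmu_clear_spte_bits_atomic() does with atomic64_fetch_and().
 */
static uint64_t clear_accessed_atomic(_Atomic uint64_t *sptep)
{
	return atomic_fetch_and(sptep, ~ACCESSED_MASK);
}

/*
 * Access-tracking path: the whole SPTE is rewritten, so a cmpxchg is
 * used and retried only while the entry still looks like a present
 * leaf; otherwise the caller just continues the walk.
 */
static bool mark_for_access_track(_Atomic uint64_t *sptep)
{
	uint64_t old = atomic_load(sptep);

	while (old & PRESENT_MASK) {
		/* Stand-in for mark_spte_for_access_track(). */
		uint64_t new_spte = old & ~ACCESSED_MASK;

		if (atomic_compare_exchange_weak(sptep, &old, new_spte))
			return true;
		/* cmpxchg failed: 'old' now holds the current value. */
	}
	return false;	/* No longer present: let the walk continue. */
}

int main(void)
{
	_Atomic uint64_t spte = PRESENT_MASK | ACCESSED_MASK;

	printf("old spte: %#llx\n",
	       (unsigned long long)clear_accessed_atomic(&spte));
	printf("marked: %d\n", mark_for_access_track(&spte));
	return 0;
}

The while-loop stands in for the patch's retry label; the actual
kernel code additionally re-checks is_last_spte() before retrying,
since a concurrent zap can replace a leaf with a non-leaf entry.
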
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu/mmu.c          | 10 ++++-
 arch/x86/kvm/mmu/tdp_iter.h     | 27 +++++++------
 arch/x86/kvm/mmu/tdp_mmu.c      | 67 +++++++++++++++++++++++++--------
 5 files changed, 77 insertions(+), 29 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8ca74e7678f..011c8eb7c8d3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1438,6 +1438,7 @@ struct kvm_arch {
	 * tdp_mmu_page set.
	 *
	 * For reads, this list is protected by:
+	 *	RCU alone or
	 *	the MMU lock in read mode + RCU or
	 *	the MMU lock in write mode
	 *
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fec95a770270..9dda7f8c72ed 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -23,6 +23,7 @@ config KVM
	depends on X86_LOCAL_APIC
	select KVM_COMMON
	select KVM_GENERIC_MMU_NOTIFIER
+	select KVM_MMU_NOTIFIER_YOUNG_LOCKLESS
	select HAVE_KVM_IRQCHIP
	select HAVE_KVM_PFNCACHE
	select HAVE_KVM_DIRTY_RING_TSO
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8d74bdef68c1..51061f1fb3d1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1633,8 +1633,11 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
	bool young = false;

-	if (kvm_memslots_have_rmaps(kvm))
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmap);
+		write_unlock(&kvm->mmu_lock);
+	}

	if (tdp_mmu_enabled)
		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1646,8 +1649,11 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
	bool young = false;

-	if (kvm_memslots_have_rmaps(kvm))
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmap);
+		write_unlock(&kvm->mmu_lock);
+	}

	if (tdp_mmu_enabled)
		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 2880fd392e0c..510936a8455a 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -25,6 +25,13 @@ static inline u64 kvm_tdp_mmu_write_spte_atomic(tdp_ptep_t sptep, u64 new_spte)
	return xchg(rcu_dereference(sptep), new_spte);
 }

+static inline u64 tdp_mmu_clear_spte_bits_atomic(tdp_ptep_t sptep, u64 mask)
+{
+	atomic64_t *sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
+
+	return (u64)atomic64_fetch_and(~mask, sptep_atomic);
+}
+
 static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
 {
	KVM_MMU_WARN_ON(is_ept_ve_possible(new_spte));
@@ -32,10 +39,11 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
 }

 /*
- * SPTEs must be modified atomically if they are shadow-present, leaf
- * SPTEs, and have volatile bits, i.e. has bits that can be set outside
- * of mmu_lock. The Writable bit can be set by KVM's fast page fault
- * handler, and Accessed and Dirty bits can be set by the CPU.
+ * SPTEs must be modified atomically if they have bits that can be set outside
+ * of the mmu_lock. This can happen for any shadow-present leaf SPTEs, as the
+ * Writable bit can be set by KVM's fast page fault handler, the Accessed and
+ * Dirty bits can be set by the CPU, and the Accessed and R/X bits can be
+ * cleared by age_gfn_range.
  *
  * Note, non-leaf SPTEs do have Accessed bits and those bits are
  * technically volatile, but KVM doesn't consume the Accessed bit of
@@ -46,8 +54,7 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
 static inline bool kvm_tdp_mmu_spte_need_atomic_write(u64 old_spte, int level)
 {
	return is_shadow_present_pte(old_spte) &&
-	       is_last_spte(old_spte, level) &&
-	       spte_has_volatile_bits(old_spte);
+	       is_last_spte(old_spte, level);
 }

 static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
@@ -63,12 +70,8 @@ static inline u64 kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 old_spte,
 static inline u64 tdp_mmu_clear_spte_bits(tdp_ptep_t sptep, u64 old_spte,
					  u64 mask, int level)
 {
-	atomic64_t *sptep_atomic;
-
-	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level)) {
-		sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
-		return (u64)atomic64_fetch_and(~mask, sptep_atomic);
-	}
+	if (kvm_tdp_mmu_spte_need_atomic_write(old_spte, level))
+		return tdp_mmu_clear_spte_bits_atomic(sptep, mask);

	__kvm_tdp_mmu_write_spte(sptep, old_spte & ~mask);
	return old_spte;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 36539c1b36cd..46abd04914c2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -29,6 +29,11 @@ static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,

	return true;
 }
+static __always_inline bool kvm_lockdep_assert_rcu_read_lock_held(void)
+{
+	WARN_ON_ONCE(!rcu_read_lock_held());
+	return true;
+}

 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
@@ -178,6 +183,15 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
			 ((_only_valid) && (_root)->role.invalid))) {	\
		} else

+/*
+ * Iterate over all TDP MMU roots in an RCU read-side critical section.
+ */
+#define for_each_tdp_mmu_root_rcu(_kvm, _root, _as_id)			\
+	list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link)	\
+		if (kvm_lockdep_assert_rcu_read_lock_held() &&		\
+		    (_as_id >= 0 && kvm_mmu_page_as_id(_root) != _as_id)) {	\
+		} else
+
 #define for_each_tdp_mmu_root(_kvm, _root, _as_id)			\
	__for_each_tdp_mmu_root(_kvm, _root, _as_id, false)

@@ -1223,6 +1237,27 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
	return ret;
 }

+static __always_inline bool kvm_tdp_mmu_handle_gfn_lockless(
+		struct kvm *kvm,
+		struct kvm_gfn_range *range,
+		tdp_handler_t handler)
+{
+	struct kvm_mmu_page *root;
+	struct tdp_iter iter;
+	bool ret = false;
+
+	rcu_read_lock();
+
+	for_each_tdp_mmu_root_rcu(kvm, root, range->slot->as_id) {
+		tdp_root_for_each_leaf_pte(iter, root, range->start, range->end)
+			ret |= handler(kvm, &iter, range);
+	}
+
+	rcu_read_unlock();
+
+	return ret;
+}
+
 /*
  * Mark the SPTEs range of GFNs [start, end) unaccessed and return non-zero
  * if any of the GFNs in the range have been accessed.
@@ -1236,28 +1271,30 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,
 {
	u64 new_spte;

+retry:
	/* If we have a non-accessed entry we don't need to change the pte. */
	if (!is_accessed_spte(iter->old_spte))
		return false;

	if (spte_ad_enabled(iter->old_spte)) {
-		iter->old_spte = tdp_mmu_clear_spte_bits(iter->sptep,
-							 iter->old_spte,
-							 shadow_accessed_mask,
-							 iter->level);
+		iter->old_spte = tdp_mmu_clear_spte_bits_atomic(iter->sptep,
						shadow_accessed_mask);
		new_spte = iter->old_spte & ~shadow_accessed_mask;
	} else {
-		/*
-		 * Capture the dirty status of the page, so that it doesn't get
-		 * lost when the SPTE is marked for access tracking.
-		 */
+		new_spte = mark_spte_for_access_track(iter->old_spte);
+		if (__tdp_mmu_set_spte_atomic(iter, new_spte)) {
+			/*
+			 * The cmpxchg failed. If the spte is still a
+			 * last-level spte, we can safely retry.
+			 */
+			if (is_shadow_present_pte(iter->old_spte) &&
+			    is_last_spte(iter->old_spte, iter->level))
+				goto retry;
+			/* Otherwise, continue walking. */
+			return false;
+		}
		if (is_writable_pte(iter->old_spte))
			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));
-
-		new_spte = mark_spte_for_access_track(iter->old_spte);
-		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
-							iter->old_spte, new_spte,
-							iter->level);
	}

	trace_kvm_tdp_mmu_spte_changed(iter->as_id, iter->gfn, iter->level,
@@ -1267,7 +1304,7 @@ static bool age_gfn_range(struct kvm *kvm, struct tdp_iter *iter,

 bool kvm_tdp_mmu_age_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_tdp_mmu_handle_gfn(kvm, range, age_gfn_range);
+	return kvm_tdp_mmu_handle_gfn_lockless(kvm, range, age_gfn_range);
 }

 static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,
@@ -1278,7 +1315,7 @@ static bool test_age_gfn(struct kvm *kvm, struct tdp_iter *iter,

 bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn);
+	return kvm_tdp_mmu_handle_gfn_lockless(kvm, range, test_age_gfn);
 }

 /*