From patchwork Fri Aug 23 23:56:43 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13776131
Date: Fri, 23 Aug 2024 16:56:43 -0700
In-Reply-To: <20240823235648.3236880-1-dmatlack@google.com>
Message-ID: <20240823235648.3236880-2-dmatlack@google.com>
Subject: [PATCH v2 1/6] KVM: x86/mmu: Drop @max_level from kvm_mmu_max_mapping_level()
From: David Matlack
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, David Matlack

Drop the @max_level parameter from kvm_mmu_max_mapping_level(). All callers
pass in PG_LEVEL_NUM, so @max_level can be replaced with PG_LEVEL_NUM in the
function body.

No functional change intended.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c          | 8 +++-----
 arch/x86/kvm/mmu/mmu_internal.h | 3 +--
 arch/x86/kvm/mmu/tdp_mmu.c      | 3 +--
 3 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d25c2b395116..2e92d9e9b311 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3109,13 +3109,12 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
-                              const struct kvm_memory_slot *slot, gfn_t gfn,
-                              int max_level)
+                              const struct kvm_memory_slot *slot, gfn_t gfn)
 {
         bool is_private = kvm_slot_can_be_private(slot) &&
                           kvm_mem_is_private(kvm, gfn);
 
-        return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
+        return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -6877,8 +6876,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
          * mapping if the indirect sp has level = 1.
          */
         if (sp->role.direct &&
-            sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
-                                                       PG_LEVEL_NUM)) {
+            sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn)) {
                 kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
 
                 if (kvm_available_flush_remote_tlbs_range())
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1721d97743e9..fee385e75405 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -344,8 +344,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
-                              const struct kvm_memory_slot *slot, gfn_t gfn,
-                              int max_level);
+                              const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3c55955bcaf8..2a843b9c8d81 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1655,8 +1655,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
                 if (iter.gfn < start || iter.gfn >= end)
                         continue;
 
-                max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
-                                                              iter.gfn, PG_LEVEL_NUM);
+                max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot, iter.gfn);
                 if (max_mapping_level < iter.level)
                         continue;

From patchwork Fri Aug 23 23:56:44 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13776132
Date: Fri, 23 Aug 2024 16:56:44 -0700
In-Reply-To: <20240823235648.3236880-1-dmatlack@google.com>
Message-ID: <20240823235648.3236880-3-dmatlack@google.com>
Subject: [PATCH v2 2/6] KVM: x86/mmu: Batch TLB flushes when zapping collapsible TDP MMU SPTEs
From: David Matlack
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, David Matlack

Set SPTEs directly to SHADOW_NONPRESENT_VALUE and batch up TLB flushes when
zapping collapsible SPTEs, rather than freezing them first.

Freezing the SPTE first is not required. It is fine for another thread holding
mmu_lock for read to immediately install a present entry before TLBs are
flushed because the underlying mapping is not changing. vCPUs that translate
through the stale 4K mappings or a new huge page mapping will still observe
the same GPA->HPA translations.

KVM must only flush TLBs before dropping RCU (to avoid use-after-free of the
zapped page tables) and before dropping mmu_lock (to synchronize with
mmu_notifiers invalidating mappings).

In VMs backed with 2MiB pages, batching TLB flushes improves the time it takes
to zap collapsible SPTEs to disable dirty logging:

 $ ./dirty_log_perf_test -s anonymous_hugetlb_2mb -v 64 -e -b 4g

 Before: Disabling dirty logging time: 14.334453428s (131072 flushes)
 After:  Disabling dirty logging time: 4.794969689s (76 flushes)

Skipping freezing SPTEs also avoids stalling vCPU threads on the frozen SPTE
for the time it takes to perform a remote TLB flush. vCPUs faulting on the
zapped mapping can now immediately install a new huge mapping and proceed with
guest execution.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c | 55 +++++++-------------------------------
 1 file changed, 10 insertions(+), 45 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2a843b9c8d81..27adbb3ecb02 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -591,48 +591,6 @@ static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
         return 0;
 }
 
-static inline int __must_check tdp_mmu_zap_spte_atomic(struct kvm *kvm,
-                                                       struct tdp_iter *iter)
-{
-        int ret;
-
-        lockdep_assert_held_read(&kvm->mmu_lock);
-
-        /*
-         * Freeze the SPTE by setting it to a special, non-present value. This
-         * will stop other threads from immediately installing a present entry
-         * in its place before the TLBs are flushed.
-         *
-         * Delay processing of the zapped SPTE until after TLBs are flushed and
-         * the FROZEN_SPTE is replaced (see below).
-         */
-        ret = __tdp_mmu_set_spte_atomic(iter, FROZEN_SPTE);
-        if (ret)
-                return ret;
-
-        kvm_flush_remote_tlbs_gfn(kvm, iter->gfn, iter->level);
-
-        /*
-         * No other thread can overwrite the frozen SPTE as they must either
-         * wait on the MMU lock or use tdp_mmu_set_spte_atomic() which will not
-         * overwrite the special frozen SPTE value. Use the raw write helper to
-         * avoid an unnecessary check on volatile bits.
-         */
-        __kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
-
-        /*
-         * Process the zapped SPTE after flushing TLBs, and after replacing
-         * FROZEN_SPTE with 0. This minimizes the amount of time vCPUs are
-         * blocked by the FROZEN_SPTE and reduces contention on the child
-         * SPTEs.
-         */
-        handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
-                            SHADOW_NONPRESENT_VALUE, iter->level, true);
-
-        return 0;
-}
-
-
 /*
  * tdp_mmu_set_spte - Set a TDP MMU SPTE and handle the associated bookkeeping
  * @kvm:              KVM instance
@@ -1625,13 +1583,16 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
         gfn_t end = start + slot->npages;
         struct tdp_iter iter;
         int max_mapping_level;
+        bool flush = false;
 
         rcu_read_lock();
 
         for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {
 retry:
-                if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
+                if (tdp_mmu_iter_cond_resched(kvm, &iter, flush, true)) {
+                        flush = false;
                         continue;
+                }
 
                 if (iter.level > KVM_MAX_HUGEPAGE_LEVEL ||
                     !is_shadow_present_pte(iter.old_spte))
@@ -1659,11 +1620,15 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
                 if (max_mapping_level < iter.level)
                         continue;
 
-                /* Note, a successful atomic zap also does a remote TLB flush. */
-                if (tdp_mmu_zap_spte_atomic(kvm, &iter))
+                if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
                         goto retry;
+
+                flush = true;
         }
 
+        if (flush)
+                kvm_flush_remote_tlbs_memslot(kvm, slot);
+
         rcu_read_unlock();
 }
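
The change above follows the classic "defer the flush" shape: clear entries
directly, remember that at least one entry was cleared, and issue a single
remote flush after the loop. A minimal stand-alone sketch of that shape in
plain C follows; the entry array, the fake flush counter, and every name are
invented for illustration and are not KVM code.

/*
 * Minimal sketch of the "batch then flush once" pattern. Every name below is
 * invented for illustration; this is not KVM code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_ENTRIES 512

static unsigned long entries[NR_ENTRIES];
static unsigned long flush_count;

/* Stand-in for a remote TLB flush; only the call count matters here. */
static void flush_remote_tlbs(void)
{
        flush_count++;
}

/* Pretend every present (non-zero) entry is collapsible and must be zapped. */
static bool should_zap(unsigned long entry)
{
        return entry != 0;
}

static void zap_range_batched(void)
{
        bool flush = false;

        for (int i = 0; i < NR_ENTRIES; i++) {
                if (!should_zap(entries[i]))
                        continue;

                entries[i] = 0; /* "zap" the entry directly, no freezing step */
                flush = true;   /* remember that a flush is owed */
        }

        /* One flush covers every entry zapped in the loop above. */
        if (flush)
                flush_remote_tlbs();
}

int main(void)
{
        for (int i = 0; i < NR_ENTRIES; i++)
                entries[i] = i + 1;

        zap_range_batched();
        printf("flushes: %lu\n", flush_count); /* 1 instead of 512 */
        return 0;
}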

From patchwork Fri Aug 23 23:56:45 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13776133
Date: Fri, 23 Aug 2024 16:56:45 -0700
In-Reply-To: <20240823235648.3236880-1-dmatlack@google.com>
Message-ID: <20240823235648.3236880-4-dmatlack@google.com>
Subject: [PATCH v2 3/6] KVM: x86/mmu: Refactor TDP MMU iter need resched check
From: David Matlack
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, David Matlack

Refactor the TDP MMU iterator "need resched" checks into a helper function so
they can be called from a different code path in a subsequent commit.

No functional change intended.

Signed-off-by: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 27adbb3ecb02..9b8299ee4abb 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -646,6 +646,16 @@ static inline void tdp_mmu_iter_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 #define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end)         \
         for_each_tdp_pte(_iter, root_to_sp(_mmu->root.hpa), _start, _end)
 
+static inline bool __must_check tdp_mmu_iter_need_resched(struct kvm *kvm,
+                                                          struct tdp_iter *iter)
+{
+        /* Ensure forward progress has been made before yielding. */
+        if (iter->next_last_level_gfn == iter->yielded_gfn)
+                return false;
+
+        return need_resched() || rwlock_needbreak(&kvm->mmu_lock);
+}
+
 /*
  * Yield if the MMU lock is contended or this thread needs to return control
  * to the scheduler.
@@ -666,11 +676,7 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 {
         WARN_ON_ONCE(iter->yielded);
 
-        /* Ensure forward progress has been made before yielding. */
-        if (iter->next_last_level_gfn == iter->yielded_gfn)
-                return false;
-
-        if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
+        if (tdp_mmu_iter_need_resched(kvm, iter)) {
                 if (flush)
                         kvm_flush_remote_tlbs(kvm);
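
The helper extracted above captures a single rule: never yield unless the walk
has advanced since the last yield, and even then only when the scheduler or a
lock waiter wants the CPU. A small self-contained sketch of that rule (plain C;
the toy_iter struct and the always-true reschedule signal are invented for
illustration, not KVM APIs):

/*
 * Self-contained sketch of "only yield after forward progress". The struct
 * and the reschedule signal are invented; they are not KVM APIs.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_iter {
        unsigned long next_gfn;    /* next position the walk will visit */
        unsigned long yielded_gfn; /* position recorded at the last yield */
};

/* Pretend the scheduler always wants the CPU back. */
static bool toy_need_resched(void)
{
        return true;
}

static bool toy_iter_need_resched(const struct toy_iter *iter)
{
        /* Ensure forward progress has been made before yielding. */
        if (iter->next_gfn == iter->yielded_gfn)
                return false;

        return toy_need_resched();
}

int main(void)
{
        struct toy_iter iter = { .next_gfn = 0, .yielded_gfn = 0 };

        for (int step = 1; step <= 6; step++) {
                /* Pretend even-numbered steps are retries that do not advance. */
                if (step % 2)
                        iter.next_gfn++;

                if (toy_iter_need_resched(&iter)) {
                        printf("step %d: yield at gfn %lu\n", step, iter.next_gfn);
                        iter.yielded_gfn = iter.next_gfn;
                } else {
                        printf("step %d: no progress since last yield, keep walking\n", step);
                }
        }
        return 0;
}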

From patchwork Fri Aug 23 23:56:46 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13776134
Date: Fri, 23 Aug 2024 16:56:46 -0700
In-Reply-To: <20240823235648.3236880-1-dmatlack@google.com>
Message-ID: <20240823235648.3236880-5-dmatlack@google.com>
Subject: [PATCH v2 4/6] KVM: x86/mmu: Recover TDP MMU huge page mappings in-place instead of zapping
From: David Matlack
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, David Matlack

Recover TDP MMU huge page mappings in-place instead of zapping them when
dirty logging is disabled, and rename functions that recover huge page
mappings when dirty logging is disabled to move away from the "zap collapsible
spte" terminology.

Before KVM flushes TLBs, guest accesses may be translated through either the
(stale) small SPTE or the (new) huge SPTE. This is already possible when KVM
is doing eager page splitting (where TLB flushes are also batched), and when
vCPUs are faulting in huge mappings (where TLBs are flushed after the new huge
SPTE is installed).

Recovering huge pages reduces the number of page faults when dirty logging is
disabled:

 $ perf stat -e kvm:kvm_page_fault -- ./dirty_log_perf_test -s anonymous_hugetlb_2mb -v 64 -e -b 4g

 Before: 393,599 kvm:kvm_page_fault
 After:  262,575 kvm:kvm_page_fault

vCPU throughput and the latency of disabling dirty logging are about equal
compared to zapping, but avoiding faults can be beneficial to remove vCPU
jitter in extreme scenarios.

Signed-off-by: David Matlack
---
 arch/x86/include/asm/kvm_host.h |  4 +--
 arch/x86/kvm/mmu/mmu.c          |  6 ++--
 arch/x86/kvm/mmu/spte.c         | 39 +++++++++++++++++++++++--
 arch/x86/kvm/mmu/spte.h         |  1 +
 arch/x86/kvm/mmu/tdp_mmu.c      | 52 +++++++++++++++++++++++++++------
 arch/x86/kvm/mmu/tdp_mmu.h      |  4 +--
 arch/x86/kvm/x86.c              | 18 +++++------
 7 files changed, 94 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1811a42fa093..8f9bd7c0e139 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1953,8 +1953,8 @@ void kvm_mmu_try_split_huge_pages(struct kvm *kvm,
                                   const struct kvm_memory_slot *memslot,
                                   u64 start, u64 end, int target_level);
-void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
-                                   const struct kvm_memory_slot *memslot);
+void kvm_mmu_recover_huge_pages(struct kvm *kvm,
+                                const struct kvm_memory_slot *memslot);
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
                                    const struct kvm_memory_slot *memslot);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2e92d9e9b311..2f8b1ebcbe9c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6904,8 +6904,8 @@ static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
                 kvm_flush_remote_tlbs_memslot(kvm, slot);
 }
 
-void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
-                                   const struct kvm_memory_slot *slot)
+void kvm_mmu_recover_huge_pages(struct kvm *kvm,
+                                const struct kvm_memory_slot *slot)
 {
         if (kvm_memslots_have_rmaps(kvm)) {
                 write_lock(&kvm->mmu_lock);
@@ -6915,7 +6915,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 
         if (tdp_mmu_enabled) {
                 read_lock(&kvm->mmu_lock);
-                kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
+                kvm_tdp_mmu_recover_huge_pages(kvm, slot);
                 read_unlock(&kvm->mmu_lock);
         }
 }
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 8f7eb3ad88fc..a12437bf6e0c 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -268,15 +268,14 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
         return wrprot;
 }
 
-static u64 make_spte_executable(u64 spte)
+static u64 modify_spte_protections(u64 spte, u64 set, u64 clear)
 {
         bool is_access_track = is_access_track_spte(spte);
 
         if (is_access_track)
                 spte = restore_acc_track_spte(spte);
 
-        spte &= ~shadow_nx_mask;
-        spte |= shadow_x_mask;
+        spte = (spte | set) & ~clear;
 
         if (is_access_track)
                 spte = mark_spte_for_access_track(spte);
@@ -284,6 +283,16 @@ static u64 make_spte_executable(u64 spte)
         return spte;
 }
 
+static u64 make_spte_executable(u64 spte)
+{
+        return modify_spte_protections(spte, shadow_x_mask, shadow_nx_mask);
+}
+
+static u64 make_spte_nonexecutable(u64 spte)
+{
+        return modify_spte_protections(spte, shadow_nx_mask, shadow_x_mask);
+}
+
 /*
  * Construct an SPTE that maps a sub-page of the given huge page SPTE where
  * `index` identifies which sub-page.
@@ -320,6 +329,30 @@ u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
         return child_spte;
 }
 
+u64 make_huge_spte(struct kvm *kvm, u64 small_spte, int level)
+{
+        u64 huge_spte;
+
+        if (KVM_BUG_ON(!is_shadow_present_pte(small_spte), kvm))
+                return SHADOW_NONPRESENT_VALUE;
+
+        if (KVM_BUG_ON(level == PG_LEVEL_4K, kvm))
+                return SHADOW_NONPRESENT_VALUE;
+
+        huge_spte = small_spte | PT_PAGE_SIZE_MASK;
+
+        /*
+         * huge_spte already has the address of the sub-page being collapsed
+         * from small_spte, so just clear the lower address bits to create the
+         * huge page address.
+         */
+        huge_spte &= KVM_HPAGE_MASK(level) | ~PAGE_MASK;
+
+        if (is_nx_huge_page_enabled(kvm))
+                huge_spte = make_spte_nonexecutable(huge_spte);
+
+        return huge_spte;
+}
+
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled)
 {
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 2cb816ea2430..990d599eb827 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -503,6 +503,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                bool host_writable, u64 *new_spte);
 u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
                               union kvm_mmu_page_role role, int index);
+u64 make_huge_spte(struct kvm *kvm, u64 small_spte, int level);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
 u64 mark_spte_for_access_track(u64 spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 9b8299ee4abb..be70f0f22550 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1581,15 +1581,43 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
                 clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot);
 }
 
-static void zap_collapsible_spte_range(struct kvm *kvm,
-                                       struct kvm_mmu_page *root,
-                                       const struct kvm_memory_slot *slot)
+static int tdp_mmu_make_huge_spte(struct kvm *kvm,
+                                  struct tdp_iter *parent,
+                                  u64 *huge_spte)
+{
+        struct kvm_mmu_page *root = spte_to_child_sp(parent->old_spte);
+        gfn_t start = parent->gfn;
+        gfn_t end = start + KVM_PAGES_PER_HPAGE(parent->level);
+        struct tdp_iter iter;
+
+        tdp_root_for_each_leaf_pte(iter, root, start, end) {
+                /*
+                 * Use the parent iterator when checking for forward progress so
+                 * that KVM doesn't get stuck continuously trying to yield (i.e.
+                 * returning -EAGAIN here and then failing the forward progress
+                 * check in the caller ad nauseam).
+                 */
+                if (tdp_mmu_iter_need_resched(kvm, parent))
+                        return -EAGAIN;
+
+                *huge_spte = make_huge_spte(kvm, iter.old_spte, parent->level);
+                return 0;
+        }
+
+        return -ENOENT;
+}
+
+static void recover_huge_pages_range(struct kvm *kvm,
+                                     struct kvm_mmu_page *root,
+                                     const struct kvm_memory_slot *slot)
 {
         gfn_t start = slot->base_gfn;
         gfn_t end = start + slot->npages;
         struct tdp_iter iter;
         int max_mapping_level;
         bool flush = false;
+        u64 huge_spte;
+        int r;
 
         rcu_read_lock();
 
@@ -1626,7 +1654,13 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
         if (max_mapping_level < iter.level)
                 continue;
 
-                if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
+                r = tdp_mmu_make_huge_spte(kvm, &iter, &huge_spte);
+                if (r == -EAGAIN)
+                        goto retry;
+                else if (r)
+                        continue;
+
+                if (tdp_mmu_set_spte_atomic(kvm, &iter, huge_spte))
                         goto retry;
 
                 flush = true;
@@ -1639,17 +1673,17 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 }
 
 /*
- * Zap non-leaf SPTEs (and free their associated page tables) which could
- * be replaced by huge pages, for GFNs within the slot.
+ * Recover huge page mappings within the slot by replacing non-leaf SPTEs with
+ * huge SPTEs where possible.
  */
-void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
-                                       const struct kvm_memory_slot *slot)
+void kvm_tdp_mmu_recover_huge_pages(struct kvm *kvm,
+                                    const struct kvm_memory_slot *slot)
 {
         struct kvm_mmu_page *root;
 
         lockdep_assert_held_read(&kvm->mmu_lock);
         for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id)
-                zap_collapsible_spte_range(kvm, root, slot);
+                recover_huge_pages_range(kvm, root, slot);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 1b74e058a81c..ddea2827d1ad 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -40,8 +40,8 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
                                        struct kvm_memory_slot *slot,
                                        gfn_t gfn, unsigned long mask,
                                        bool wrprot);
-void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
-                                       const struct kvm_memory_slot *slot);
+void kvm_tdp_mmu_recover_huge_pages(struct kvm *kvm,
+                                    const struct kvm_memory_slot *slot);
 
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
                                    struct kvm_memory_slot *slot, gfn_t gfn,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 966fb301d44b..3d09c12847d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13053,19 +13053,15 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 
         if (!log_dirty_pages) {
                 /*
-                 * Dirty logging tracks sptes in 4k granularity, meaning that
-                 * large sptes have to be split. If live migration succeeds,
-                 * the guest in the source machine will be destroyed and large
-                 * sptes will be created in the destination. However, if the
-                 * guest continues to run in the source machine (for example if
-                 * live migration fails), small sptes will remain around and
-                 * cause bad performance.
+                 * Recover huge page mappings in the slot now that dirty logging
+                 * is disabled, i.e. now that KVM does not have to track guest
+                 * writes at 4KiB granularity.
                  *
-                 * Scan sptes if dirty logging has been stopped, dropping those
-                 * which can be collapsed into a single large-page spte. Later
-                 * page faults will create the large-page sptes.
+                 * Dirty logging might be disabled by userspace if an ongoing VM
+                 * live migration is cancelled and the VM must continue running
+                 * on the source.
                  */
-                kvm_mmu_zap_collapsible_sptes(kvm, new);
+                kvm_mmu_recover_huge_pages(kvm, new);
         } else {
                 /*
                  * Initially-all-set does not require write protecting any page,
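
The address manipulation in make_huge_spte() above keeps an SPTE's low
attribute bits and its huge-page-aligned address while clearing the bits that
selected a 4KiB sub-page. A stand-alone sketch of just that masking step
(plain C with locally defined masks and an arbitrary example value; none of it
is KVM code):

/*
 * Sketch of the huge-page address masking only; masks are defined locally and
 * the example "SPTE" value is arbitrary. This is not KVM code.
 */
#include <inttypes.h>
#include <stdio.h>

#define SHIFT_4K 12
#define SHIFT_2M 21
#define MASK_4K  (~((UINT64_C(1) << SHIFT_4K) - 1)) /* clears bits 11:0 */
#define MASK_2M  (~((UINT64_C(1) << SHIFT_2M) - 1)) /* clears bits 20:0 */

int main(void)
{
        /* A made-up "SPTE": physical address 0x123456000 plus flag bits 0x063. */
        uint64_t small_spte = UINT64_C(0x123456000) | UINT64_C(0x063);

        /*
         * Keep the bits below 12 (the attribute bits) and the bits at or above
         * 21 (the 2MiB-aligned address); clear bits 20:12, which select a 4KiB
         * page within the 2MiB region. This mirrors the shape of
         * "huge_spte &= KVM_HPAGE_MASK(level) | ~PAGE_MASK" in the patch.
         */
        uint64_t huge_spte = small_spte & (MASK_2M | ~MASK_4K);

        printf("small spte: 0x%" PRIx64 "\n", small_spte); /* 0x123456063 */
        printf("huge  spte: 0x%" PRIx64 "\n", huge_spte);  /* 0x123400063 */
        return 0;
}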

From patchwork Fri Aug 23 23:56:47 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13776135
Date: Fri, 23 Aug 2024 16:56:47 -0700
In-Reply-To: <20240823235648.3236880-1-dmatlack@google.com>
Message-ID: <20240823235648.3236880-6-dmatlack@google.com>
Subject: [PATCH v2 5/6] KVM: x86/mmu: Rename make_huge_page_split_spte() to make_small_spte()
From: David Matlack
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, David Matlack

Rename make_huge_page_split_spte() to make_small_spte(). This ensures that the
usage of "small_spte" and "huge_spte" is consistent between make_huge_spte()
and make_small_spte().

This should also reduce some confusion as make_huge_page_split_spte() almost
reads like it will create a huge SPTE, when in fact it is creating a small
SPTE to split the huge SPTE.

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c     | 2 +-
 arch/x86/kvm/mmu/spte.c    | 4 ++--
 arch/x86/kvm/mmu/spte.h    | 4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2f8b1ebcbe9c..8967508b63f9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6693,7 +6693,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm,
                         continue;
                 }
 
-                spte = make_huge_page_split_spte(kvm, huge_spte, sp->role, index);
+                spte = make_small_spte(kvm, huge_spte, sp->role, index);
                 mmu_spte_set(sptep, spte);
                 __rmap_add(kvm, cache, slot, sptep, gfn, sp->role.access);
         }
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index a12437bf6e0c..fe010e3404b1 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -300,8 +300,8 @@ static u64 make_spte_nonexecutable(u64 spte)
  * This is used during huge page splitting to build the SPTEs that make up the
  * new page table.
  */
-u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
-                              union kvm_mmu_page_role role, int index)
+u64 make_small_spte(struct kvm *kvm, u64 huge_spte,
+                    union kvm_mmu_page_role role, int index)
 {
         u64 child_spte = huge_spte;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 990d599eb827..3aee16e0a575 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -501,8 +501,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
                u64 old_spte, bool prefetch, bool can_unsync,
                bool host_writable, u64 *new_spte);
-u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
-                              union kvm_mmu_page_role role, int index);
+u64 make_small_spte(struct kvm *kvm, u64 huge_spte,
+                    union kvm_mmu_page_role role, int index);
 u64 make_huge_spte(struct kvm *kvm, u64 small_spte, int level);
 u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
 u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index be70f0f22550..4c1cd41750ad 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1334,7 +1334,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
          * not been linked in yet and thus is not reachable from any other CPU.
          */
         for (i = 0; i < SPTE_ENT_PER_PAGE; i++)
-                sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte, sp->role, i);
+                sp->spt[i] = make_small_spte(kvm, huge_spte, sp->role, i);
 
         /*
          * Replace the huge spte with a pointer to the populated lower level

From patchwork Fri Aug 23 23:56:48 2024
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13776136
Date: Fri, 23 Aug 2024 16:56:48 -0700
In-Reply-To: <20240823235648.3236880-1-dmatlack@google.com>
Message-ID: <20240823235648.3236880-7-dmatlack@google.com>
Subject: [PATCH v2 6/6] KVM: x86/mmu: WARN if huge page recovery triggered during dirty logging
From: David Matlack
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, David Matlack

WARN and bail out of recover_huge_pages_range() if dirty logging is enabled.
KVM shouldn't be recovering huge pages during dirty logging anyway, since KVM
needs to track writes at 4KiB granularity. However, it's not outside the realm
of possibility that this changes in the future.

If KVM wants to recover huge pages during dirty logging, make_huge_spte() must
be updated to write-protect the new huge page mapping. Otherwise, writes
through the newly recovered huge page mapping will not be tracked.

Note that this potential risk did not exist back when KVM zapped to recover
huge page mappings, since subsequent accesses would just be faulted in at
PG_LEVEL_4K if dirty logging was enabled.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/tdp_mmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 4c1cd41750ad..301a2c19bfe9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1619,6 +1619,9 @@ static void recover_huge_pages_range(struct kvm *kvm,
         u64 huge_spte;
         int r;
 
+        if (WARN_ON_ONCE(kvm_slot_dirty_track_enabled(slot)))
+                return;
+
         rcu_read_lock();
 
         for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {