From patchwork Wed Oct 14 18:26:41 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838131
Date: Wed, 14 Oct 2020 11:26:41 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-2-bgardon@google.com>
Subject: [PATCH v2 01/20] kvm: x86/mmu: Separate making SPTEs from set_spte
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
    Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
    Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Separate the functions for generating leaf page table entries from the
function that inserts them into the paging structure. This refactoring
will facilitate changes to the MMU synchronization model to use atomic
compare/exchanges (which are not guaranteed to succeed) instead of a
monolithic MMU lock.

No functional change expected.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This commit introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
Reviewed-by: Peter Shier
---
 arch/x86/kvm/mmu/mmu.c | 49 ++++++++++++++++++++++++++++--------------
 1 file changed, 33 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 32e0e5c0524e5..6c9db349600c8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2987,20 +2987,15 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn)
 #define SET_SPTE_NEED_REMOTE_TLB_FLUSH	BIT(1)
 #define SET_SPTE_SPURIOUS		BIT(2)
 
-static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
-		    unsigned int pte_access, int level,
-		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
-		    bool can_unsync, bool host_writable)
+static int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
+		     gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative,
+		     bool can_unsync, bool host_writable, bool ad_disabled,
+		     u64 *new_spte)
 {
 	u64 spte = 0;
 	int ret = 0;
-	struct kvm_mmu_page *sp;
 
-	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
-		return 0;
-
-	sp = sptep_to_sp(sptep);
-	if (sp_ad_disabled(sp))
+	if (ad_disabled)
 		spte |= SPTE_AD_DISABLED_MASK;
 	else if (kvm_vcpu_ad_need_write_protect(vcpu))
 		spte |= SPTE_AD_WRPROT_ONLY_MASK;
@@ -3053,8 +3048,8 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * is responsibility of mmu_get_page / kvm_sync_page.
 		 * Same reasoning can be applied to dirty page accounting.
 		 */
-		if (!can_unsync && is_writable_pte(*sptep))
-			goto set_pte;
+		if (!can_unsync && is_writable_pte(old_spte))
+			goto out;
 
 		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
@@ -3065,15 +3060,37 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		}
 	}
 
-	if (pte_access & ACC_WRITE_MASK) {
-		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+	if (pte_access & ACC_WRITE_MASK)
 		spte |= spte_shadow_dirty_mask(spte);
-	}
 
 	if (speculative)
 		spte = mark_spte_for_access_track(spte);
 
-set_pte:
+out:
+	*new_spte = spte;
+	return ret;
+}
+
+static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+		    unsigned int pte_access, int level,
+		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
+		    bool can_unsync, bool host_writable)
+{
+	u64 spte;
+	struct kvm_mmu_page *sp;
+	int ret;
+
+	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
+		return 0;
+
+	sp = sptep_to_sp(sptep);
+
+	ret = make_spte(vcpu, pte_access, level, gfn, pfn, *sptep, speculative,
+			can_unsync, host_writable, sp_ad_disabled(sp), &spte);
+
+	if (spte & PT_WRITABLE_MASK)
+		kvm_vcpu_mark_page_dirty(vcpu, gfn);
+
 	if (*sptep == spte)
 		ret |= SET_SPTE_SPURIOUS;
 	else if (mmu_spte_update(sptep, spte))
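[The value of splitting make_spte() out of set_spte() is easiest to see
next to the lock-free update pattern the commit message alludes to:
compute the new SPTE value with no side effects, then install it with a
compare/exchange that may fail under contention. The following is a
user-space toy, not kernel code; make_spte_demo() and its bit layout are
invented purely for illustration.]

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t spte_t;

	/* Analogue of make_spte(): pure computation, no side effects. */
	static spte_t make_spte_demo(spte_t old_spte, int writable)
	{
		spte_t spte = old_spte | 1ULL;	/* "present" bit, invented */
		if (writable)
			spte |= 1ULL << 1;	/* "writable" bit, invented */
		return spte;
	}

	int main(void)
	{
		_Atomic spte_t slot = 0;
		spte_t expected = atomic_load(&slot);
		spte_t desired = make_spte_demo(expected, 1);

		/* The compare/exchange is not guaranteed to succeed: a
		 * concurrent thread may have changed the slot since it
		 * was read, in which case the caller re-reads and retries. */
		if (atomic_compare_exchange_strong(&slot, &expected, desired))
			printf("installed SPTE %#llx\n",
			       (unsigned long long)desired);
		else
			printf("raced: re-read %#llx and retry\n",
			       (unsigned long long)expected);
		return 0;
	}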
From patchwork Wed Oct 14 18:26:42 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838129
Date: Wed, 14 Oct 2020 11:26:42 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-3-bgardon@google.com>
Subject: [PATCH v2 02/20] kvm: x86/mmu: Introduce tdp_iter
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
    Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
    Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

The TDP iterator implements a pre-order traversal of a TDP paging
structure. This iterator will be used in future patches to create an
efficient implementation of the KVM MMU for the TDP case.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/Makefile           |   3 +-
 arch/x86/kvm/mmu/mmu.c          |  66 ------------
 arch/x86/kvm/mmu/mmu_internal.h |  66 ++++++++++++
 arch/x86/kvm/mmu/tdp_iter.c     | 176 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_iter.h     |  56 ++++++++++
 5 files changed, 300 insertions(+), 67 deletions(-)
 create mode 100644 arch/x86/kvm/mmu/tdp_iter.c
 create mode 100644 arch/x86/kvm/mmu/tdp_iter.h

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 7f86a14aed0e9..4525c1151bf99 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -15,7 +15,8 @@ kvm-$(CONFIG_KVM_ASYNC_PF)	+= $(KVM)/async_pf.o
 
 kvm-y			+= x86.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
-			   hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o
+			   hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o \
+			   mmu/tdp_iter.o
 
 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6c9db349600c8..6d82784ed5679 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -121,28 +121,6 @@ module_param(dbg, bool, 0644);
 
 #define PTE_PREFETCH_NUM		8
 
-#define PT_FIRST_AVAIL_BITS_SHIFT 10
-#define PT64_SECOND_AVAIL_BITS_SHIFT 54
-
-/*
- * The mask used to denote special SPTEs, which can be either MMIO SPTEs or
- * Access Tracking SPTEs.
- */
-#define SPTE_SPECIAL_MASK (3ULL << 52)
-#define SPTE_AD_ENABLED_MASK (0ULL << 52)
-#define SPTE_AD_DISABLED_MASK (1ULL << 52)
-#define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52)
-#define SPTE_MMIO_MASK (3ULL << 52)
-
-#define PT64_LEVEL_BITS 9
-
-#define PT64_LEVEL_SHIFT(level) \
-		(PAGE_SHIFT + (level - 1) * PT64_LEVEL_BITS)
-
-#define PT64_INDEX(address, level)\
-	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
-
-
 #define PT32_LEVEL_BITS 10
 
 #define PT32_LEVEL_SHIFT(level) \
@@ -155,19 +133,6 @@ module_param(dbg, bool, 0644);
 #define PT32_INDEX(address, level)\
 	(((address) >> PT32_LEVEL_SHIFT(level)) & ((1 << PT32_LEVEL_BITS) - 1))
 
-
-#ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
-#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
-#else
-#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
-#endif
-#define PT64_LVL_ADDR_MASK(level) \
-	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
-						* PT64_LEVEL_BITS))) - 1))
-#define PT64_LVL_OFFSET_MASK(level) \
-	(PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
-						* PT64_LEVEL_BITS))) - 1))
-
 #define PT32_BASE_ADDR_MASK PAGE_MASK
 #define PT32_DIR_BASE_ADDR_MASK \
 	(PAGE_MASK & ~((1ULL << (PAGE_SHIFT + PT32_LEVEL_BITS)) - 1))
@@ -192,8 +157,6 @@ module_param(dbg, bool, 0644);
 #define SPTE_HOST_WRITEABLE	(1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
 #define SPTE_MMU_WRITEABLE	(1ULL << (PT_FIRST_AVAIL_BITS_SHIFT + 1))
 
-#define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
-
 /* make pte_list_desc fit well in cache line */
 #define PTE_LIST_EXT 3
 
@@ -349,11 +312,6 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 access_mask)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
-static bool is_mmio_spte(u64 spte)
-{
-	return (spte & SPTE_SPECIAL_MASK) == SPTE_MMIO_MASK;
-}
-
 static inline bool sp_ad_disabled(struct kvm_mmu_page *sp)
 {
 	return sp->role.ad_disabled;
@@ -626,35 +584,11 @@ static int is_nx(struct kvm_vcpu *vcpu)
 	return vcpu->arch.efer & EFER_NX;
 }
 
-static int is_shadow_present_pte(u64 pte)
-{
-	return (pte != 0) && !is_mmio_spte(pte);
-}
-
-static int is_large_pte(u64 pte)
-{
-	return pte & PT_PAGE_SIZE_MASK;
-}
-
-static int is_last_spte(u64 pte, int level)
-{
-	if (level == PG_LEVEL_4K)
-		return 1;
-	if (is_large_pte(pte))
-		return 1;
-	return 0;
-}
-
 static bool is_executable_pte(u64 spte)
 {
 	return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
 }
 
-static kvm_pfn_t spte_to_pfn(u64 pte)
-{
-	return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
-}
-
 static gfn_t pse36_gfn_delta(u32 gpte)
 {
 	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 3acf3b8eb469d..74ccbf001a42e 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -2,6 +2,8 @@
 #ifndef __KVM_X86_MMU_INTERNAL_H
 #define __KVM_X86_MMU_INTERNAL_H
 
+#include "mmu.h"
+
 #include <linux/types.h>
 
 #include <asm/kvm_host.h>
@@ -60,4 +62,68 @@ void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn);
 
+#define PT64_LEVEL_BITS 9
+
+#define PT64_LEVEL_SHIFT(level) \
+		(PAGE_SHIFT + (level - 1) * PT64_LEVEL_BITS)
+
+#define PT64_INDEX(address, level)\
+	(((address) >> PT64_LEVEL_SHIFT(level)) & ((1 << PT64_LEVEL_BITS) - 1))
+#define SHADOW_PT_INDEX(addr, level) PT64_INDEX(addr, level)
+
+#define PT_FIRST_AVAIL_BITS_SHIFT 10
+#define PT64_SECOND_AVAIL_BITS_SHIFT 54
+
+/*
+ * The mask used to denote special SPTEs, which can be either MMIO SPTEs or
+ * Access Tracking SPTEs.
+ */
+#define SPTE_SPECIAL_MASK (3ULL << 52)
+#define SPTE_AD_ENABLED_MASK (0ULL << 52)
+#define SPTE_AD_DISABLED_MASK (1ULL << 52)
+#define SPTE_AD_WRPROT_ONLY_MASK (2ULL << 52)
+#define SPTE_MMIO_MASK (3ULL << 52)
+
+#ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
+#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
+#else
+#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+#endif
+#define PT64_LVL_ADDR_MASK(level) \
+	(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
+						* PT64_LEVEL_BITS))) - 1))
+#define PT64_LVL_OFFSET_MASK(level) \
+	(PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
+						* PT64_LEVEL_BITS))) - 1))
+
+/* Functions for interpreting SPTEs */
+static inline bool is_mmio_spte(u64 spte)
+{
+	return (spte & SPTE_SPECIAL_MASK) == SPTE_MMIO_MASK;
+}
+
+static inline int is_shadow_present_pte(u64 pte)
+{
+	return (pte != 0) && !is_mmio_spte(pte);
+}
+
+static inline int is_large_pte(u64 pte)
+{
+	return pte & PT_PAGE_SIZE_MASK;
+}
+
+static inline int is_last_spte(u64 pte, int level)
+{
+	if (level == PG_LEVEL_4K)
+		return 1;
+	if (is_large_pte(pte))
+		return 1;
+	return 0;
+}
+
+static inline kvm_pfn_t spte_to_pfn(u64 pte)
+{
+	return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
+}
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
new file mode 100644
index 0000000000000..b07e9f0c5d4aa
--- /dev/null
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -0,0 +1,176 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "mmu_internal.h"
+#include "tdp_iter.h"
+
+/*
+ * Recalculates the pointer to the SPTE for the current GFN and level and
+ * rereads the SPTE.
+ */
+static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
+{
+	iter->sptep = iter->pt_path[iter->level - 1] +
+		SHADOW_PT_INDEX(iter->gfn << PAGE_SHIFT, iter->level);
+	iter->old_spte = READ_ONCE(*iter->sptep);
+}
+
+static gfn_t round_gfn_for_level(gfn_t gfn, int level)
+{
+	return gfn - (gfn % KVM_PAGES_PER_HPAGE(level));
+}
+
+/*
+ * Sets a TDP iterator to walk a pre-order traversal of the paging structure
+ * rooted at root_pt, starting with the walk to translate goal_gfn.
+ */
+void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
+		    int min_level, gfn_t goal_gfn)
+{
+	WARN_ON(root_level < 1);
+	WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
+
+	iter->goal_gfn = goal_gfn;
+	iter->root_level = root_level;
+	iter->min_level = min_level;
+	iter->level = root_level;
+	iter->pt_path[iter->level - 1] = root_pt;
+
+	iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+
+	iter->valid = true;
+}
+
+/*
+ * Given an SPTE and its level, returns a pointer containing the host virtual
+ * address of the child page table referenced by the SPTE. Returns null if
+ * there is no such entry.
+ */
+u64 *spte_to_child_pt(u64 spte, int level)
+{
+	/*
+	 * There's no child entry if this entry isn't present or is a
+	 * last-level entry.
+	 */
+	if (!is_shadow_present_pte(spte) || is_last_spte(spte, level))
+		return NULL;
+
+	return __va(spte_to_pfn(spte) << PAGE_SHIFT);
+}
+
+/*
+ * Steps down one level in the paging structure towards the goal GFN. Returns
+ * true if the iterator was able to step down a level, false otherwise.
+ */
+static bool try_step_down(struct tdp_iter *iter)
+{
+	u64 *child_pt;
+
+	if (iter->level == iter->min_level)
+		return false;
+
+	/*
+	 * Reread the SPTE before stepping down to avoid traversing into page
+	 * tables that are no longer linked from this entry.
+	 */
+	iter->old_spte = READ_ONCE(*iter->sptep);
+
+	child_pt = spte_to_child_pt(iter->old_spte, iter->level);
+	if (!child_pt)
+		return false;
+
+	iter->level--;
+	iter->pt_path[iter->level - 1] = child_pt;
+	iter->gfn = round_gfn_for_level(iter->goal_gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+
+	return true;
+}
+
+/*
+ * Steps to the next entry in the current page table, at the current page table
+ * level. The next entry could point to a page backing guest memory or another
+ * page table, or it could be non-present. Returns true if the iterator was
+ * able to step to the next entry in the page table, false if the iterator was
+ * already at the end of the current page table.
+ */
+static bool try_step_side(struct tdp_iter *iter)
+{
+	/*
+	 * Check if the iterator is already at the end of the current page
+	 * table.
+	 */
+	if (!((iter->gfn + KVM_PAGES_PER_HPAGE(iter->level)) %
+	      KVM_PAGES_PER_HPAGE(iter->level + 1)))
+		return false;
+
+	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
+	iter->goal_gfn = iter->gfn;
+	iter->sptep++;
+	iter->old_spte = READ_ONCE(*iter->sptep);
+
+	return true;
+}
+
+/*
+ * Tries to traverse back up a level in the paging structure so that the walk
+ * can continue from the next entry in the parent page table. Returns true on a
+ * successful step up, false if already in the root page.
+ */
+static bool try_step_up(struct tdp_iter *iter)
+{
+	if (iter->level == iter->root_level)
+		return false;
+
+	iter->level++;
+	iter->gfn = round_gfn_for_level(iter->gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+
+	return true;
+}
+
+/*
+ * Step to the next SPTE in a pre-order traversal of the paging structure.
+ * To get to the next SPTE, the iterator either steps down towards the goal
+ * GFN, if at a present, non-last-level SPTE, or over to a SPTE mapping a
+ * higher GFN.
+ *
+ * The basic algorithm is as follows:
+ * 1. If the current SPTE is a non-last-level SPTE, step down into the page
+ *    table it points to.
+ * 2. If the iterator cannot step down, it will try to step to the next SPTE
+ *    in the current page of the paging structure.
+ * 3. If the iterator cannot step to the next entry in the current page, it
+ *    will try to step up to the parent paging structure page. In this case,
+ *    that SPTE will have already been visited, and so the iterator must also
+ *    step to the side again.
+ */
+void tdp_iter_next(struct tdp_iter *iter)
+{
+	if (try_step_down(iter))
+		return;
+
+	do {
+		if (try_step_side(iter))
+			return;
+	} while (try_step_up(iter));
+	iter->valid = false;
+}
+
+/*
+ * Restart the walk over the paging structure from the root, starting from the
+ * highest gfn the iterator had previously reached. Assumes that the entire
+ * paging structure, except the root page, may have been completely torn down
+ * and rebuilt.
+ */
+void tdp_iter_refresh_walk(struct tdp_iter *iter)
+{
+	gfn_t goal_gfn = iter->goal_gfn;
+
+	if (iter->gfn > goal_gfn)
+		goal_gfn = iter->gfn;
+
+	tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
+		       iter->root_level, iter->min_level, goal_gfn);
+}
+
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
new file mode 100644
index 0000000000000..d629a53e1b73f
--- /dev/null
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#ifndef __KVM_X86_MMU_TDP_ITER_H
+#define __KVM_X86_MMU_TDP_ITER_H
+
+#include <linux/kvm_host.h>
+
+#include "mmu.h"
+
+/*
+ * A TDP iterator performs a pre-order walk over a TDP paging structure.
+ */
+struct tdp_iter {
+	/*
+	 * The iterator will traverse the paging structure towards the mapping
+	 * for this GFN.
+	 */
+	gfn_t goal_gfn;
+	/* Pointers to the page tables traversed to reach the current SPTE */
+	u64 *pt_path[PT64_ROOT_MAX_LEVEL];
+	/* A pointer to the current SPTE */
+	u64 *sptep;
+	/* The lowest GFN mapped by the current SPTE */
+	gfn_t gfn;
+	/* The level of the root page given to the iterator */
+	int root_level;
+	/* The lowest level the iterator should traverse to */
+	int min_level;
+	/* The iterator's current level within the paging structure */
+	int level;
+	/* A snapshot of the value at sptep */
+	u64 old_spte;
+	/*
+	 * Whether the iterator has a valid state. This will be false if the
+	 * iterator walks off the end of the paging structure.
+	 */
+	bool valid;
+};
+
+/*
+ * Iterates over every SPTE mapping the GFN range [start, end) in a
+ * preorder traversal.
+ */
+#define for_each_tdp_pte(iter, root, root_level, start, end) \
+	for (tdp_iter_start(&iter, root, root_level, PG_LEVEL_4K, start); \
+	     iter.valid && iter.gfn < end; \
+	     tdp_iter_next(&iter))
+
+u64 *spte_to_child_pt(u64 pte, int level);
+
+void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
+		    int min_level, gfn_t goal_gfn);
+void tdp_iter_next(struct tdp_iter *iter);
+void tdp_iter_refresh_walk(struct tdp_iter *iter);
+
+#endif /* __KVM_X86_MMU_TDP_ITER_H */
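[To make the iterator's intended use concrete, here is a sketch of how
later patches in the series are expected to consume it. This function is
not part of this patch; zap_gfn_range_example() is an invented name, and
the actual zap logic (and its bookkeeping) arrives in subsequent patches.]

	/*
	 * Sketch: walk every SPTE mapping [start, end) in pre-order and
	 * visit the present ones. A real zap would clear *iter.sptep and
	 * invoke the bookkeeping added in patch 05 (handle_changed_spte()).
	 */
	static void zap_gfn_range_example(u64 *root_pt, int root_level,
					  gfn_t start, gfn_t end)
	{
		struct tdp_iter iter;

		for_each_tdp_pte(iter, root_pt, root_level, start, end) {
			if (!is_shadow_present_pte(iter.old_spte))
				continue;

			/* Zap/modify *iter.sptep here in later patches. */
		}
	}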
From patchwork Wed Oct 14 18:26:43 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838127
Date: Wed, 14 Oct 2020 11:26:43 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-4-bgardon@google.com>
Subject: [PATCH v2 03/20] kvm: x86/mmu: Init / Uninit the TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
    Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
    Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

The TDP MMU offers an alternative mode of operation to the x86 shadow
paging based MMU, optimized for running an L1 guest with TDP. The TDP
MMU will require new fields that need to be initialized and torn down.
Add hooks into the existing KVM MMU initialization process to do that
initialization / cleanup. Currently the initialization and cleanup
functions do not do very much; more operations will be added in future
patches.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/include/asm/kvm_host.h |  9 ++++++++
 arch/x86/kvm/Makefile           |  2 +-
 arch/x86/kvm/mmu/mmu.c          |  5 +++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 38 +++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h      | 10 +++++++++
 5 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/kvm/mmu/tdp_mmu.c
 create mode 100644 arch/x86/kvm/mmu/tdp_mmu.h

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d0f77235da923..6b6dbc20ce23a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -980,6 +980,15 @@ struct kvm_arch {
 	struct kvm_pmu_event_filter *pmu_event_filter;
 	struct task_struct *nx_lpage_recovery_thread;
+
+	/*
+	 * Whether the TDP MMU is enabled for this VM. This contains a
+	 * snapshot of the TDP MMU module parameter from when the VM was
+	 * created and remains unchanged for the life of the VM. If this is
+	 * true, TDP MMU handler functions will run for various MMU
+	 * operations.
+	 */
+	bool tdp_mmu_enabled;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 4525c1151bf99..fd6b1b0cc27c0 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -16,7 +16,7 @@ kvm-$(CONFIG_KVM_ASYNC_PF)	+= $(KVM)/async_pf.o
 kvm-y			+= x86.o emulate.o i8259.o irq.o lapic.o \
 			   i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
 			   hyperv.o debugfs.o mmu/mmu.o mmu/page_track.o \
-			   mmu/tdp_iter.o
+			   mmu/tdp_iter.o mmu/tdp_mmu.o
 
 kvm-intel-y		+= vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
 			   vmx/evmcs.o vmx/nested.o vmx/posted_intr.o
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6d82784ed5679..f53d29e09367c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -19,6 +19,7 @@
 #include "ioapic.h"
 #include "mmu.h"
 #include "mmu_internal.h"
+#include "tdp_mmu.h"
 #include "x86.h"
 #include "kvm_cache_regs.h"
 #include "kvm_emulate.h"
@@ -5833,6 +5834,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 
+	kvm_mmu_init_tdp_mmu(kvm);
+
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
@@ -5843,6 +5846,8 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 
 	kvm_page_track_unregister_notifier(kvm, node);
+
+	kvm_mmu_uninit_tdp_mmu(kvm);
 }
 
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
new file mode 100644
index 0000000000000..b3809835e90b1
--- /dev/null
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "tdp_mmu.h"
+
+static bool __read_mostly tdp_mmu_enabled = false;
+
+static bool is_tdp_mmu_enabled(void)
+{
+#ifdef CONFIG_X86_64
+	if (!READ_ONCE(tdp_mmu_enabled))
+		return false;
+
+	if (WARN_ONCE(!tdp_enabled,
+		      "Creating a VM with TDP MMU enabled requires TDP."))
+		return false;
+
+	return true;
+
+#else
+	return false;
+#endif /* CONFIG_X86_64 */
+}
+
+/* Initializes the TDP MMU for the VM, if enabled. */
+void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+{
+	if (!is_tdp_mmu_enabled())
+		return;
+
+	/* This should not be changed for the lifetime of the VM. */
+	kvm->arch.tdp_mmu_enabled = true;
+}
+
+void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
+{
+	if (!kvm->arch.tdp_mmu_enabled)
+		return;
+}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
new file mode 100644
index 0000000000000..cd4a562a70e9a
--- /dev/null
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#ifndef __KVM_X86_MMU_TDP_MMU_H
+#define __KVM_X86_MMU_TDP_MMU_H
+
+#include <linux/kvm_host.h>
+
+void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
+#endif /* __KVM_X86_MMU_TDP_MMU_H */
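[The per-VM snapshot of the module parameter is worth a moment: because
tdp_mmu_enabled can be toggled at any time, each VM copies it once at
creation and consults only its own copy afterwards. The following
user-space toy, with invented names, illustrates that pattern; it is not
KVM code.]

	#include <stdbool.h>
	#include <stdio.h>

	/* Toy stand-in for the writable module parameter. */
	static bool tdp_mmu_param = true;

	struct toy_vm {
		bool tdp_mmu_enabled;	/* snapshot, immutable after init */
	};

	static void toy_vm_init(struct toy_vm *vm)
	{
		/* Mirrors kvm_mmu_init_tdp_mmu(): copy the param once. */
		vm->tdp_mmu_enabled = tdp_mmu_param;
	}

	int main(void)
	{
		struct toy_vm vm;

		toy_vm_init(&vm);
		tdp_mmu_param = false;	/* a later toggle... */

		/* ...does not change behavior for the existing VM. */
		printf("VM uses TDP MMU: %d\n", vm.tdp_mmu_enabled);
		return 0;
	}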
From patchwork Wed Oct 14 18:26:44 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838163
Date: Wed, 14 Oct 2020 11:26:44 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-5-bgardon@google.com>
Subject: [PATCH v2 04/20] kvm: x86/mmu: Allocate and free TDP MMU roots
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
    Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
    Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

The TDP MMU must be able to allocate paging structure root pages and
track the usage of those pages. Implement a root page allocation system
that is similar to, but separate from, that of the x86 shadow paging
implementation. When future patches add synchronization model changes
to allow for parallel page faults, these pages will need to be handled
differently from the x86 shadow paging based MMU's root pages.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/mmu/mmu.c          |  29 +++++---
 arch/x86/kvm/mmu/mmu_internal.h |  24 +++++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 114 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h      |   5 ++
 5 files changed, 162 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6b6dbc20ce23a..e0ec1dd271a32 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -989,6 +989,7 @@ struct kvm_arch {
 	 * operations.
 	 */
 	bool tdp_mmu_enabled;
+	struct list_head tdp_mmu_roots;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f53d29e09367c..a3340ed59ad1d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -144,11 +144,6 @@ module_param(dbg, bool, 0644);
 #define PT64_PERM_MASK (PT_PRESENT_MASK | PT_WRITABLE_MASK | shadow_user_mask \
 			| shadow_x_mask | shadow_nx_mask | shadow_me_mask)
 
-#define ACC_EXEC_MASK    1
-#define ACC_WRITE_MASK   PT_WRITABLE_MASK
-#define ACC_USER_MASK    PT_USER_MASK
-#define ACC_ALL          (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
-
 /* The mask for the R/X bits in EPT PTEs */
 #define PT64_EPT_READABLE_MASK			0x1ull
 #define PT64_EPT_EXECUTABLE_MASK		0x4ull
@@ -209,7 +204,7 @@ struct kvm_shadow_walk_iterator {
 	     __shadow_walk_next(&(_walker), spte))
 
 static struct kmem_cache *pte_list_desc_cache;
-static struct kmem_cache *mmu_page_header_cache;
+struct kmem_cache *mmu_page_header_cache;
 static struct percpu_counter kvm_total_used_mmu_pages;
 
 static u64 __read_mostly shadow_nx_mask;
@@ -3588,9 +3583,13 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 		return;
 
 	sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK);
-	--sp->root_count;
-	if (!sp->root_count && sp->role.invalid)
-		kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
+	if (kvm_mmu_put_root(sp)) {
+		if (sp->tdp_mmu_page)
+			kvm_tdp_mmu_free_root(kvm, sp);
+		else if (sp->role.invalid)
+			kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+	}
 
 	*root_hpa = INVALID_PAGE;
 }
@@ -3680,8 +3679,16 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
 	hpa_t root;
 	unsigned i;
 
-	if (shadow_root_level >= PT64_ROOT_4LEVEL) {
-		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level, true);
+	if (vcpu->kvm->arch.tdp_mmu_enabled) {
+		root = kvm_tdp_mmu_get_vcpu_root_hpa(vcpu);
+
+		if (!VALID_PAGE(root))
+			return -ENOSPC;
+		vcpu->arch.mmu->root_hpa = root;
+	} else if (shadow_root_level >= PT64_ROOT_4LEVEL) {
+		root = mmu_alloc_root(vcpu, 0, 0, shadow_root_level,
+				      true);
+
 		if (!VALID_PAGE(root))
 			return -ENOSPC;
 		vcpu->arch.mmu->root_hpa = root;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 74ccbf001a42e..6cedf578c9a8d 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -43,8 +43,12 @@ struct kvm_mmu_page {
 
 	/* Number of writes since the last time traversal visited this page. */
 	atomic_t write_flooding_count;
+
+	bool tdp_mmu_page;
 };
 
+extern struct kmem_cache *mmu_page_header_cache;
+
 static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
 {
 	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
@@ -96,6 +100,11 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	(PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \
 						* PT64_LEVEL_BITS))) - 1))
 
+#define ACC_EXEC_MASK    1
+#define ACC_WRITE_MASK   PT_WRITABLE_MASK
+#define ACC_USER_MASK    PT_USER_MASK
+#define ACC_ALL          (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
+
 /* Functions for interpreting SPTEs */
 static inline bool is_mmio_spte(u64 spte)
 {
@@ -126,4 +135,19 @@ static inline kvm_pfn_t spte_to_pfn(u64 pte)
 	return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
 }
 
+static inline void kvm_mmu_get_root(struct kvm_mmu_page *sp)
+{
+	BUG_ON(!sp->root_count);
+
+	++sp->root_count;
+}
+
+static inline bool kvm_mmu_put_root(struct kvm_mmu_page *sp)
+{
+	--sp->root_count;
+
+	return !sp->root_count;
+}
+
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b3809835e90b1..09a84a6e157b6 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include "mmu.h"
+#include "mmu_internal.h"
 #include "tdp_mmu.h"
 
 static bool __read_mostly tdp_mmu_enabled = false;
@@ -29,10 +31,122 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 
 	/* This should not be changed for the lifetime of the VM. */
 	kvm->arch.tdp_mmu_enabled = true;
+
+	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 }
 
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 {
 	if (!kvm->arch.tdp_mmu_enabled)
 		return;
+
+	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
+}
+
+#define for_each_tdp_mmu_root(_kvm, _root)			    \
+	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)
+
+bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
+{
+	struct kvm_mmu_page *sp;
+
+	sp = to_shadow_page(hpa);
+
+	return sp->tdp_mmu_page && sp->root_count;
+}
+
+void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
+{
+	lockdep_assert_held(&kvm->mmu_lock);
+
+	WARN_ON(root->root_count);
+	WARN_ON(!root->tdp_mmu_page);
+
+	list_del(&root->link);
+
+	free_page((unsigned long)root->spt);
+	kmem_cache_free(mmu_page_header_cache, root);
+}
+
+static void put_tdp_mmu_root(struct kvm *kvm, struct kvm_mmu_page *root)
+{
+	if (kvm_mmu_put_root(root))
+		kvm_tdp_mmu_free_root(kvm, root);
+}
+
+static void get_tdp_mmu_root(struct kvm *kvm, struct kvm_mmu_page *root)
+{
+	lockdep_assert_held(&kvm->mmu_lock);
+
+	kvm_mmu_get_root(root);
+}
+
+static union kvm_mmu_page_role page_role_for_level(struct kvm_vcpu *vcpu,
+						   int level)
+{
+	union kvm_mmu_page_role role;
+
+	role = vcpu->arch.mmu->mmu_role.base;
+	role.level = vcpu->arch.mmu->shadow_root_level;
+	role.direct = true;
+	role.gpte_is_8_bytes = true;
+	role.access = ACC_ALL;
+
+	return role;
+}
+
+static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+					       int level)
+{
+	struct kvm_mmu_page *sp;
+
+	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
+	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
+
+	sp->role.word = page_role_for_level(vcpu, level).word;
+	sp->gfn = gfn;
+	sp->tdp_mmu_page = true;
+
+	return sp;
+}
+
+static struct kvm_mmu_page *get_tdp_mmu_vcpu_root(struct kvm_vcpu *vcpu)
+{
+	union kvm_mmu_page_role role;
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_mmu_page *root;
+
+	role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level);
+
+	spin_lock(&kvm->mmu_lock);
+
+	/* Check for an existing root before allocating a new one. */
+	for_each_tdp_mmu_root(kvm, root) {
+		if (root->role.word == role.word) {
+			get_tdp_mmu_root(kvm, root);
+			spin_unlock(&kvm->mmu_lock);
+			return root;
+		}
+	}
+
+	root = alloc_tdp_mmu_page(vcpu, 0, vcpu->arch.mmu->shadow_root_level);
+	root->root_count = 1;
+
+	list_add(&root->link, &kvm->arch.tdp_mmu_roots);
+
+	spin_unlock(&kvm->mmu_lock);
+
+	return root;
+}
+
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_page *root;
+
+	root = get_tdp_mmu_vcpu_root(vcpu);
+	if (!root)
+		return INVALID_PAGE;
+
+	return __pa(root->spt);
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index cd4a562a70e9a..ac0ef91294420 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -7,4 +7,9 @@
 
 void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
+
+bool is_tdp_mmu_root(struct kvm *kvm, hpa_t root);
+hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root);
+
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
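[The root-sharing logic in get_tdp_mmu_vcpu_root() is the heart of this
patch: vCPUs with the same paging mode (role) share one paging structure
and reference-count it rather than each allocating their own. The toy
below, with invented names and types, mirrors that lookup-then-take-a-
reference pattern in user-space C; it is not KVM code.]

	#include <stdio.h>

	struct toy_root {
		int refcount;		/* like kvm_mmu_page.root_count */
		unsigned int role;	/* like kvm_mmu_page_role.word */
	};

	/*
	 * Mirrors get_tdp_mmu_vcpu_root(): look for a live root with a
	 * matching role and take a reference (kvm_mmu_get_root()) before
	 * falling back to allocating a fresh one with refcount = 1.
	 */
	static struct toy_root *get_root(struct toy_root *roots, int n,
					 unsigned int role)
	{
		for (int i = 0; i < n; i++) {
			if (roots[i].refcount && roots[i].role == role) {
				roots[i].refcount++;
				return &roots[i];
			}
		}
		return NULL;	/* caller allocates a new root */
	}

	int main(void)
	{
		struct toy_root roots[2] = { { .refcount = 1, .role = 42 } };
		struct toy_root *r = get_root(roots, 2, 42);

		printf("shared: %d, refcount now %d\n",
		       r != NULL, r ? r->refcount : 0);
		return 0;
	}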
From patchwork Wed Oct 14 18:26:45 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838165
Date: Wed, 14 Oct 2020 11:26:45 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-6-bgardon@google.com>
Subject: [PATCH v2 05/20] kvm: x86/mmu: Add functions to handle changed TDP SPTEs
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
    Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
    Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

The existing bookkeeping done by KVM when a PTE is changed is spread
around several functions. This makes it difficult to remember all the
stats, bitmaps, and other subsystems that need to be updated whenever a
PTE is modified. When a non-leaf PTE is marked non-present or becomes a
leaf PTE, page table memory must also be freed. To simplify the MMU and
facilitate the use of atomic operations on SPTEs in future patches,
create functions to handle some of the bookkeeping required as a result
of a change.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          |  39 +---------
 arch/x86/kvm/mmu/mmu_internal.h |  38 +++++++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 112 ++++++++++++++++++++++++++++
 3 files changed, 152 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a3340ed59ad1d..8bf20723c6177 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -105,21 +105,6 @@ enum {
 	AUDIT_POST_SYNC
 };
 
-#undef MMU_DEBUG
-
-#ifdef MMU_DEBUG
-static bool dbg = 0;
-module_param(dbg, bool, 0644);
-
-#define pgprintk(x...) do { if (dbg) printk(x); } while (0)
-#define rmap_printk(x...) do { if (dbg) printk(x); } while (0)
-#define MMU_WARN_ON(x) WARN_ON(x)
-#else
-#define pgprintk(x...) do { } while (0)
-#define rmap_printk(x...) do { } while (0)
-#define MMU_WARN_ON(x) do { } while (0)
-#endif
-
 #define PTE_PREFETCH_NUM		8
 
 #define PT32_LEVEL_BITS 10
@@ -211,7 +196,6 @@ static u64 __read_mostly shadow_nx_mask;
 static u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 static u64 __read_mostly shadow_user_mask;
 static u64 __read_mostly shadow_accessed_mask;
-static u64 __read_mostly shadow_dirty_mask;
 static u64 __read_mostly shadow_mmio_value;
 static u64 __read_mostly shadow_mmio_access_mask;
 static u64 __read_mostly shadow_present_mask;
@@ -287,8 +271,8 @@ static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
 		kvm_flush_remote_tlbs(kvm);
 }
 
-static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
-		u64 start_gfn, u64 pages)
+void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
+					u64 pages)
 {
 	struct kvm_tlb_range range;
 
@@ -324,12 +308,6 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
 	return vcpu->arch.mmu == &vcpu->arch.guest_mmu;
 }
 
-static inline bool spte_ad_enabled(u64 spte)
-{
-	MMU_WARN_ON(is_mmio_spte(spte));
-	return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_DISABLED_MASK;
-}
-
 static inline bool spte_ad_need_write_protect(u64 spte)
 {
 	MMU_WARN_ON(is_mmio_spte(spte));
@@ -347,12 +325,6 @@ static inline u64 spte_shadow_accessed_mask(u64 spte)
 	return spte_ad_enabled(spte) ? shadow_accessed_mask : 0;
 }
 
-static inline u64 spte_shadow_dirty_mask(u64 spte)
-{
-	MMU_WARN_ON(is_mmio_spte(spte));
-	return spte_ad_enabled(spte) ? shadow_dirty_mask : 0;
-}
-
 static inline bool is_access_track_spte(u64 spte)
 {
 	return !spte_ad_enabled(spte) && (spte & shadow_acc_track_mask) == 0;
@@ -767,13 +739,6 @@ static bool is_accessed_spte(u64 spte)
 			     : !is_access_track_spte(spte);
 }
 
-static bool is_dirty_spte(u64 spte)
-{
-	u64 dirty_mask = spte_shadow_dirty_mask(spte);
-
-	return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
-}
-
 /* Rules for using mmu_spte_set:
  * Set the sptep from nonpresent to present.
  * Note: the sptep being assigned *must* be either not present
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 6cedf578c9a8d..c053a157e4d55 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -8,6 +8,21 @@
 
 #include <asm/kvm_host.h>
 
+#undef MMU_DEBUG
+
+#ifdef MMU_DEBUG
+static bool dbg = 0;
+module_param(dbg, bool, 0644);
+
+#define pgprintk(x...) do { if (dbg) printk(x); } while (0)
+#define rmap_printk(x...) do { if (dbg) printk(x); } while (0)
+#define MMU_WARN_ON(x) WARN_ON(x)
+#else
+#define pgprintk(x...) do { } while (0)
+#define rmap_printk(x...) do { } while (0)
+#define MMU_WARN_ON(x) do { } while (0)
+#endif
+
 struct kvm_mmu_page {
 	struct list_head link;
 	struct hlist_node hash_link;
@@ -105,6 +120,8 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 #define ACC_USER_MASK    PT_USER_MASK
 #define ACC_ALL          (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK)
 
+static u64 __read_mostly shadow_dirty_mask;
+
 /* Functions for interpreting SPTEs */
 static inline bool is_mmio_spte(u64 spte)
 {
@@ -150,4 +167,25 @@ static inline bool kvm_mmu_put_root(struct kvm_mmu_page *sp)
 }
 
+static inline bool spte_ad_enabled(u64 spte)
+{
+	MMU_WARN_ON(is_mmio_spte(spte));
+	return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_DISABLED_MASK;
+}
+
+static inline u64 spte_shadow_dirty_mask(u64 spte)
+{
+	MMU_WARN_ON(is_mmio_spte(spte));
+	return spte_ad_enabled(spte) ? shadow_dirty_mask : 0;
+}
+
+static inline bool is_dirty_spte(u64 spte)
+{
+	u64 dirty_mask = spte_shadow_dirty_mask(spte);
+
+	return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK;
+}
+
+void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn,
+					u64 pages);
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 09a84a6e157b6..f2bd3a6928ce9 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -2,6 +2,7 @@
 
 #include "mmu.h"
 #include "mmu_internal.h"
+#include "tdp_iter.h"
 #include "tdp_mmu.h"
 
 static bool __read_mostly tdp_mmu_enabled = false;
@@ -150,3 +151,114 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 
 	return __pa(root->spt);
 }
+
+static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+				u64 old_spte, u64 new_spte, int level);
+
+/**
+ * handle_changed_spte - handle bookkeeping associated with an SPTE change
+ * @kvm: kvm instance
+ * @as_id: the address space of the paging structure the SPTE was a part of
+ * @gfn: the base GFN that was mapped by the SPTE
+ * @old_spte: The value of the SPTE before the change
+ * @new_spte: The value of the SPTE after the change
+ * @level: the level of the PT the SPTE is part of in the paging structure
+ *
+ * Handle bookkeeping that might result from the modification of a SPTE.
+ * This function must be called for all TDP SPTE modifications.
+ */
+static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+				  u64 old_spte, u64 new_spte, int level)
+{
+	bool was_present = is_shadow_present_pte(old_spte);
+	bool is_present = is_shadow_present_pte(new_spte);
+	bool was_leaf = was_present && is_last_spte(old_spte, level);
+	bool is_leaf = is_present && is_last_spte(new_spte, level);
+	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
+	u64 *pt;
+	u64 old_child_spte;
+	int i;
+
+	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
+	WARN_ON(level < PG_LEVEL_4K);
+	WARN_ON(gfn % KVM_PAGES_PER_HPAGE(level));
+
+	/*
+	 * If this warning were to trigger it would indicate that there was a
+	 * missing MMU notifier or a race with some notifier handler.
+	 * A present, leaf SPTE should never be directly replaced with another
+	 * present leaf SPTE pointing to a different PFN. A notifier handler
+	 * should be zapping the SPTE before the main MM's page table is
+	 * changed, or the SPTE should be zeroed, and the TLBs flushed by the
+	 * thread before replacement.
+	 */
+	if (was_leaf && is_leaf && pfn_changed) {
+		pr_err("Invalid SPTE change: cannot replace a present leaf\n"
+		       "SPTE with another present leaf SPTE mapping a\n"
+		       "different PFN!\n"
+		       "as_id: %d gfn: %llx old_spte: %llx new_spte: %llx level: %d",
+		       as_id, gfn, old_spte, new_spte, level);
+
+		/*
+		 * Crash the host to prevent error propagation and guest data
+		 * corruption.
+		 */
+		BUG();
+	}
+
+	if (old_spte == new_spte)
+		return;
+
+	/*
+	 * The only times a SPTE should be changed from a non-present to
+	 * non-present state is when an MMIO entry is installed/modified/
+	 * removed. In that case, there is nothing to do here.
+	 */
+	if (!was_present && !is_present) {
+		/*
+		 * If this change does not involve a MMIO SPTE, it is
+		 * unexpected. Log the change, though it should not impact the
+		 * guest since both the former and current SPTEs are nonpresent.
+		 */
+		if (WARN_ON(!is_mmio_spte(old_spte) && !is_mmio_spte(new_spte)))
+			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
+			       "should not be replaced with another,\n"
+			       "different nonpresent SPTE, unless one or both\n"
+			       "are MMIO SPTEs.\n"
+			       "as_id: %d gfn: %llx old_spte: %llx new_spte: %llx level: %d",
+			       as_id, gfn, old_spte, new_spte, level);
+		return;
+	}
+
+
+	if (was_leaf && is_dirty_spte(old_spte) &&
+	    (!is_dirty_spte(new_spte) || pfn_changed))
+		kvm_set_pfn_dirty(spte_to_pfn(old_spte));
+
+	/*
+	 * Recursively handle child PTs if the change removed a subtree from
+	 * the paging structure.
+	 */
+	if (was_present && !was_leaf && (pfn_changed || !is_present)) {
+		pt = spte_to_child_pt(old_spte, level);
+
+		for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+			old_child_spte = *(pt + i);
+			*(pt + i) = 0;
+			handle_changed_spte(kvm, as_id,
+				gfn + (i * KVM_PAGES_PER_HPAGE(level - 1)),
+				old_child_spte, 0, level - 1);
+		}
+
+		kvm_flush_remote_tlbs_with_address(kvm, gfn,
+						   KVM_PAGES_PER_HPAGE(level));
+
+		free_page((unsigned long)pt);
+	}
+}
+
+static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
+				u64 old_spte, u64 new_spte, int level)
+{
+	__handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level);
+}
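[The invariant this patch establishes is that every TDP SPTE write is
paired with a handle_changed_spte() call, so dirty-PFN tracking and
freeing of unlinked page tables can never be forgotten at a call site.
The setter below is a sketch of how later patches are expected to use
the handler; tdp_mmu_set_spte_sketch() is an invented name and does not
exist at this point in the series.]

	/*
	 * Sketch: funnel a SPTE write through the bookkeeping handler.
	 */
	static void tdp_mmu_set_spte_sketch(struct kvm *kvm, int as_id,
					    gfn_t gfn, u64 *sptep,
					    u64 new_spte, int level)
	{
		u64 old_spte = READ_ONCE(*sptep);

		WRITE_ONCE(*sptep, new_spte);
		handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte,
				    level);
	}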
From patchwork Wed Oct 14 18:26:46 2020
Date: Wed, 14 Oct 2020 11:26:46 -0700
Message-Id: <20201014182700.2888246-7-bgardon@google.com>
Subject: [PATCH v2 06/20] KVM: Cache as_id in kvm_memory_slot
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong

From: Peter Xu

Cache the address space ID just like the slot ID. It will be used in order to fill in the dirty ring entries.

Suggested-by: Paolo Bonzini
Suggested-by: Sean Christopherson
Reviewed-by: Sean Christopherson
Signed-off-by: Peter Xu
---
include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 6 ++++++ 2 files changed, 7 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 05e3c2fb3ef78..c6f45687ba89c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -346,6 +346,7 @@ struct kvm_memory_slot { unsigned long userspace_addr; u32 flags; short id; + u16 as_id; }; static inline unsigned long kvm_dirty_bitmap_bytes(struct kvm_memory_slot *memslot) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 68edd25dcb11f..2e85392131252 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1247,6 +1247,11 @@ static int kvm_delete_memslot(struct kvm *kvm, memset(&new, 0, sizeof(new)); new.id = old->id; + /* + * This is only for debugging purposes; it should never be referenced + * for a removed memslot. 
+ */ + new.as_id = as_id; r = kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE); if (r) @@ -1313,6 +1318,7 @@ int __kvm_set_memory_region(struct kvm *kvm, if (!mem->memory_size) return kvm_delete_memslot(kvm, mem, &old, as_id); + new.as_id = as_id; new.id = id; new.base_gfn = mem->guest_phys_addr >> PAGE_SHIFT; new.npages = mem->memory_size >> PAGE_SHIFT;
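To motivate the cache, a hypothetical consumer (the dirty ring code this is destined for is not part of this series): with as_id stored in the slot, an entry can record both the address space and the slot without consulting surrounding context. The 16-bit packing below is an assumed encoding, used purely for illustration:

	/* Hypothetical, for illustration: pack the cached address space
	 * ID and slot ID into one 32-bit field, as a dirty-ring entry
	 * might want. */
	static u32 example_dirty_ring_slot_field(struct kvm_memory_slot *slot)
	{
		return ((u32)slot->as_id << 16) | (u16)slot->id;
	}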
From patchwork Wed Oct 14 18:26:47 2020
Date: Wed, 14 Oct 2020 11:26:47 -0700
Message-Id: <20201014182700.2888246-8-bgardon@google.com>
Subject: [PATCH v2 07/20] kvm: x86/mmu: Support zapping SPTEs in the TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Add functions to zap SPTEs to the TDP MMU. These are needed to tear down TDP MMU roots properly and to implement other MMU functions which require tearing down mappings. Future patches will add functions to populate the page tables, but as of this patch there is no work for these functions to do.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures.

This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
arch/x86/kvm/mmu/mmu.c | 15 +++++ arch/x86/kvm/mmu/tdp_iter.c | 5 ++ arch/x86/kvm/mmu/tdp_iter.h | 1 + arch/x86/kvm/mmu/tdp_mmu.c | 109 ++++++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.h | 2 + 5 files changed, 132 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 8bf20723c6177..337ab6823e312 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5787,6 +5787,10 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm) kvm_reload_remote_mmus(kvm); kvm_zap_obsolete_pages(kvm); + + if (kvm->arch.tdp_mmu_enabled) + kvm_tdp_mmu_zap_all(kvm); + spin_unlock(&kvm->mmu_lock); } @@ -5827,6 +5831,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) struct kvm_memslots *slots; struct kvm_memory_slot *memslot; int i; + bool flush; spin_lock(&kvm->mmu_lock); for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { @@ -5846,6 +5851,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) } } + if (kvm->arch.tdp_mmu_enabled) { + flush = kvm_tdp_mmu_zap_gfn_range(kvm, gfn_start, gfn_end); + if (flush) + kvm_flush_remote_tlbs(kvm); + } + spin_unlock(&kvm->mmu_lock); } @@ -6012,6 +6023,10 @@ void kvm_mmu_zap_all(struct kvm *kvm) } kvm_mmu_commit_zap_page(kvm, &invalid_list); + + if (kvm->arch.tdp_mmu_enabled) + kvm_tdp_mmu_zap_all(kvm); + spin_unlock(&kvm->mmu_lock); } diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index b07e9f0c5d4aa..701eb753b701e 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -174,3 +174,8 @@ void tdp_iter_refresh_walk(struct tdp_iter *iter) iter->root_level, iter->min_level, goal_gfn); } +u64 *tdp_iter_root_pt(struct tdp_iter *iter) +{ + return iter->pt_path[iter->root_level - 1]; +} + diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index d629a53e1b73f..884ed2c70bfed 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -52,5 +52,6 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level, int min_level, gfn_t goal_gfn); void tdp_iter_next(struct tdp_iter *iter); void tdp_iter_refresh_walk(struct tdp_iter *iter); +u64 *tdp_iter_root_pt(struct tdp_iter *iter); #endif /* __KVM_X86_MMU_TDP_ITER_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index f2bd3a6928ce9..9b5cd4a832f1a 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -56,8 +56,13 @@ bool 
is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa) return sp->tdp_mmu_page && sp->root_count; } +static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, + gfn_t start, gfn_t end); + void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) { + gfn_t max_gfn = 1ULL << (boot_cpu_data.x86_phys_bits - PAGE_SHIFT); + lockdep_assert_held(&kvm->mmu_lock); WARN_ON(root->root_count); @@ -65,6 +70,8 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) list_del(&root->link); + zap_gfn_range(kvm, root, 0, max_gfn); + free_page((unsigned long)root->spt); kmem_cache_free(mmu_page_header_cache, root); } @@ -155,6 +162,11 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu) static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level); +static int kvm_mmu_page_as_id(struct kvm_mmu_page *sp) +{ + return sp->role.smm ? 1 : 0; +} + /** * handle_changed_spte - handle bookkeeping associated with an SPTE change * @kvm: kvm instance @@ -262,3 +274,100 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, { __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level); } + +static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, + u64 new_spte) +{ + u64 *root_pt = tdp_iter_root_pt(iter); + struct kvm_mmu_page *root = sptep_to_sp(root_pt); + int as_id = kvm_mmu_page_as_id(root); + + *iter->sptep = new_spte; + + handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte, + iter->level); +} + +#define tdp_root_for_each_pte(_iter, _root, _start, _end) \ + for_each_tdp_pte(_iter, _root->spt, _root->role.level, _start, _end) + +static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter) +{ + if (need_resched() || spin_needbreak(&kvm->mmu_lock)) { + kvm_flush_remote_tlbs(kvm); + cond_resched_lock(&kvm->mmu_lock); + tdp_iter_refresh_walk(iter); + return true; + } + + return false; +} + +/* + * Tears down the mappings for the range of gfns, [start, end), and frees the + * non-root pages mapping GFNs strictly within that range. Returns true if + * SPTEs have been cleared and a TLB flush is needed before releasing the + * MMU lock. + */ +static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, + gfn_t start, gfn_t end) +{ + struct tdp_iter iter; + bool flush_needed = false; + + tdp_root_for_each_pte(iter, root, start, end) { + if (!is_shadow_present_pte(iter.old_spte)) + continue; + + /* + * If this is a non-last-level SPTE that covers a larger range + * than should be zapped, continue, and zap the mappings at a + * lower level. + */ + if ((iter.gfn < start || + iter.gfn + KVM_PAGES_PER_HPAGE(iter.level) > end) && + !is_last_spte(iter.old_spte, iter.level)) + continue; + + tdp_mmu_set_spte(kvm, &iter, 0); + + flush_needed = !tdp_mmu_iter_cond_resched(kvm, &iter); + } + return flush_needed; +} + +/* + * Tears down the mappings for the range of gfns, [start, end), and frees the + * non-root pages mapping GFNs strictly within that range. Returns true if + * SPTEs have been cleared and a TLB flush is needed before releasing the + * MMU lock. + */ +bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end) +{ + struct kvm_mmu_page *root; + bool flush = false; + + for_each_tdp_mmu_root(kvm, root) { + /* + * Take a reference on the root so that it cannot be freed if + * this thread releases the MMU lock and yields in this loop. 
+ */ + get_tdp_mmu_root(kvm, root); + + flush |= zap_gfn_range(kvm, root, start, end); + + put_tdp_mmu_root(kvm, root); + } + + return flush; +} + +void kvm_tdp_mmu_zap_all(struct kvm *kvm) +{ + gfn_t max_gfn = 1ULL << (boot_cpu_data.x86_phys_bits - PAGE_SHIFT); + bool flush; + + flush = kvm_tdp_mmu_zap_gfn_range(kvm, 0, max_gfn); + if (flush) + kvm_flush_remote_tlbs(kvm); +} diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index ac0ef91294420..6de2d007fc03c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -12,4 +12,6 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t root); hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu); void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root); +bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end); +void kvm_tdp_mmu_zap_all(struct kvm *kvm); #endif /* __KVM_X86_MMU_TDP_MMU_H */
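As a usage sketch (assumed caller, not part of the patch): a path that wants to drop all of a memslot's TDP mappings would zap the slot's GFN range under the MMU lock and flush only if the zap reports cleared SPTEs, the same pattern kvm_tdp_mmu_zap_all follows above:

	/* Illustrative caller, assuming a memslot teardown path. */
	static void example_zap_memslot(struct kvm *kvm,
					struct kvm_memory_slot *slot)
	{
		spin_lock(&kvm->mmu_lock);
		/* Returns true if any SPTE was cleared and a flush is needed. */
		if (kvm_tdp_mmu_zap_gfn_range(kvm, slot->base_gfn,
					      slot->base_gfn + slot->npages))
			kvm_flush_remote_tlbs(kvm);
		spin_unlock(&kvm->mmu_lock);
	}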
From patchwork Wed Oct 14 18:26:48 2020
Date: Wed, 14 Oct 2020 11:26:48 -0700
Message-Id: <20201014182700.2888246-9-bgardon@google.com>
Subject: [PATCH v2 08/20] kvm: x86/mmu: Separate making non-leaf sptes from link_shadow_page
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

The TDP MMU page fault handler will need to be able to create non-leaf SPTEs to build up the paging structures. Rather than re-implementing the function, factor the SPTE creation out of link_shadow_page.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures.

This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
arch/x86/kvm/mmu/mmu.c | 21 +++++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 337ab6823e312..05024b8ae5a4d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2468,21 +2468,30 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator) __shadow_walk_next(iterator, *iterator->sptep); } -static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, - struct kvm_mmu_page *sp) +static u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled) { u64 spte; - BUILD_BUG_ON(VMX_EPT_WRITABLE_MASK != PT_WRITABLE_MASK); - - spte = __pa(sp->spt) | shadow_present_mask | PT_WRITABLE_MASK | + spte = __pa(child_pt) | shadow_present_mask | PT_WRITABLE_MASK | shadow_user_mask | shadow_x_mask | shadow_me_mask; - if (sp_ad_disabled(sp)) + if (ad_disabled) spte |= SPTE_AD_DISABLED_MASK; else spte |= shadow_accessed_mask; + return spte; +} + +static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep, + struct kvm_mmu_page *sp) +{ + u64 spte; + + BUILD_BUG_ON(VMX_EPT_WRITABLE_MASK != PT_WRITABLE_MASK); + + spte = make_nonleaf_spte(sp->spt, sp_ad_disabled(sp)); + mmu_spte_set(sptep, spte); mmu_page_add_parent_pte(vcpu, sp, sptep);
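The payoff shows up later in the series; a brief sketch of the eventual TDP MMU call site (see patch 10 below), where a freshly allocated child page table is linked in with only the page pointer and the A/D-disabled flag, no struct kvm_mmu_page needed:

	/* From the TDP MMU fault handler later in this series: build the
	 * non-leaf SPTE directly from the child page table pointer. */
	new_spte = make_nonleaf_spte(child_pt, !shadow_accessed_mask);
	tdp_mmu_set_spte(vcpu->kvm, &iter, new_spte);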
From patchwork Wed Oct 14 18:26:49 2020
Date: Wed, 14 Oct 2020 11:26:49 -0700
Message-Id: <20201014182700.2888246-10-bgardon@google.com>
Subject: [PATCH v2 09/20] kvm: x86/mmu: Remove disallowed_hugepage_adjust shadow_walk_iterator arg
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

In order to avoid creating executable hugepages in the TDP MMU PF handler, remove the dependency between disallowed_hugepage_adjust and the shadow_walk_iterator. This will open the function up to being used by the TDP MMU PF handler in a future patch.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures. 
This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538 Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 13 +++++++------ arch/x86/kvm/mmu/paging_tmpl.h | 3 ++- 2 files changed, 9 insertions(+), 7 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 05024b8ae5a4d..288b97e96202e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3243,13 +3243,12 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn, return level; } -static void disallowed_hugepage_adjust(struct kvm_shadow_walk_iterator it, - gfn_t gfn, kvm_pfn_t *pfnp, int *levelp) +static void disallowed_hugepage_adjust(u64 spte, gfn_t gfn, int cur_level, + kvm_pfn_t *pfnp, int *levelp) { int level = *levelp; - u64 spte = *it.sptep; - if (it.level == level && level > PG_LEVEL_4K && + if (cur_level == level && level > PG_LEVEL_4K && is_shadow_present_pte(spte) && !is_large_pte(spte)) { /* @@ -3259,7 +3258,8 @@ static void disallowed_hugepage_adjust(struct kvm_shadow_walk_iterator it, * patching back for them into pfn the next 9 bits of * the address. */ - u64 page_mask = KVM_PAGES_PER_HPAGE(level) - KVM_PAGES_PER_HPAGE(level - 1); + u64 page_mask = KVM_PAGES_PER_HPAGE(level) - + KVM_PAGES_PER_HPAGE(level - 1); *pfnp |= gfn & page_mask; (*levelp)--; } @@ -3292,7 +3292,8 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, * large page, as the leaf could be executable. */ if (nx_huge_page_workaround_enabled) - disallowed_hugepage_adjust(it, gfn, &pfn, &level); + disallowed_hugepage_adjust(*it.sptep, gfn, it.level, + &pfn, &level); base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1); if (it.level == level) diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 9a1a15f19beb6..50e268eb8e1a9 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -695,7 +695,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr, * large page, as the leaf could be executable. 
*/ if (nx_huge_page_workaround_enabled) - disallowed_hugepage_adjust(it, gw->gfn, &pfn, &level); + disallowed_hugepage_adjust(*it.sptep, gw->gfn, it.level, + &pfn, &level); base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1); if (it.level == level)
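To make the pfn-patching arithmetic in disallowed_hugepage_adjust concrete, a worked instance with assumed values: a fault that could have mapped 2MiB (level 2) being forced down one level on x86-64, where KVM_PAGES_PER_HPAGE(2) is 512 and KVM_PAGES_PER_HPAGE(1) is 1:

	/* Worked example, illustrative values only. */
	u64 page_mask = KVM_PAGES_PER_HPAGE(2) - KVM_PAGES_PER_HPAGE(1); /* 511 */
	/* The low 9 bits of the gfn select the 4KiB page within the
	 * 2MiB region, so patching them into the pfn retargets the
	 * mapping at the lower level. */
	*pfnp |= gfn & page_mask;
	(*levelp)--; /* now PG_LEVEL_4K */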
From patchwork Wed Oct 14 18:26:50 2020
Date: Wed, 14 Oct 2020 11:26:50 -0700
Message-Id: <20201014182700.2888246-11-bgardon@google.com>
Subject: [PATCH v2 10/20] kvm: x86/mmu: Add TDP MMU PF handler
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Add functions to handle page faults in the TDP MMU. These page faults are currently handled in much the same way as in the x86 shadow-paging-based MMU; however, the ordering of some operations is slightly different. Future patches will add eager NX splitting, a fast page fault handler, and parallel page faults.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures.

This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
arch/x86/kvm/mmu/mmu.c | 82 +++++++-------------- arch/x86/kvm/mmu/mmu_internal.h | 59 +++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.c | 124 ++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.h | 5 ++ 4 files changed, 212 insertions(+), 58 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 288b97e96202e..421a12a247b67 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -141,23 +141,6 @@ enum { /* make pte_list_desc fit well in cache line */ #define PTE_LIST_EXT 3 -/* - * Return values of handle_mmio_page_fault, mmu.page_fault, and fast_page_fault(). - * - * RET_PF_RETRY: let CPU fault again on the address. - * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. - * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. - * RET_PF_FIXED: The faulting entry has been fixed. - * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. - */ -enum { - RET_PF_RETRY = 0, - RET_PF_EMULATE, - RET_PF_INVALID, - RET_PF_FIXED, - RET_PF_SPURIOUS, -}; - struct pte_list_desc { u64 *sptes[PTE_LIST_EXT]; struct pte_list_desc *more; @@ -195,19 +178,11 @@ static struct percpu_counter kvm_total_used_mmu_pages; static u64 __read_mostly shadow_nx_mask; static u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */ static u64 __read_mostly shadow_user_mask; -static u64 __read_mostly shadow_accessed_mask; static u64 __read_mostly shadow_mmio_value; static u64 __read_mostly shadow_mmio_access_mask; static u64 __read_mostly shadow_present_mask; static u64 __read_mostly shadow_me_mask; -/* - * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK; - * shadow_acc_track_mask is the set of bits to be cleared in non-accessed - * pages. - */ -static u64 __read_mostly shadow_acc_track_mask; - /* * The mask/shift to use for saving the original R/X bits when marking the PTE * as not-present for access tracking purposes. We do not save the W bit as the @@ -314,22 +289,11 @@ static inline bool spte_ad_need_write_protect(u64 spte) return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_ENABLED_MASK; } -static bool is_nx_huge_page_enabled(void) +bool is_nx_huge_page_enabled(void) { return READ_ONCE(nx_huge_pages); } -static inline u64 spte_shadow_accessed_mask(u64 spte) -{ - MMU_WARN_ON(is_mmio_spte(spte)); - return spte_ad_enabled(spte) ? 
shadow_accessed_mask : 0; -} - -static inline bool is_access_track_spte(u64 spte) -{ - return !spte_ad_enabled(spte) && (spte & shadow_acc_track_mask) == 0; -} - /* * Due to limited space in PTEs, the MMIO generation is a 19 bit subset of * the memslots generation and is derived as follows: @@ -377,7 +341,7 @@ static u64 get_mmio_spte_generation(u64 spte) return gen; } -static u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access) +u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access) { u64 gen = kvm_vcpu_memslots(vcpu)->generation & MMIO_SPTE_GEN_MASK; @@ -2468,7 +2432,7 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator) __shadow_walk_next(iterator, *iterator->sptep); } -static u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled) +u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled) { u64 spte; @@ -2886,15 +2850,10 @@ static bool kvm_is_mmio_pfn(kvm_pfn_t pfn) E820_TYPE_RAM); } -/* Bits which may be returned by set_spte() */ -#define SET_SPTE_WRITE_PROTECTED_PT BIT(0) -#define SET_SPTE_NEED_REMOTE_TLB_FLUSH BIT(1) -#define SET_SPTE_SPURIOUS BIT(2) - -static int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level, - gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative, - bool can_unsync, bool host_writable, bool ad_disabled, - u64 *new_spte) +int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level, + gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative, + bool can_unsync, bool host_writable, bool ad_disabled, + u64 *new_spte) { u64 spte = 0; int ret = 0; @@ -3187,9 +3146,9 @@ static int host_pfn_mapping_level(struct kvm_vcpu *vcpu, gfn_t gfn, return level; } -static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn, - int max_level, kvm_pfn_t *pfnp, - bool huge_page_disallowed, int *req_level) +int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn, int max_level, + kvm_pfn_t *pfnp, bool huge_page_disallowed, + int *req_level) { struct kvm_memory_slot *slot; struct kvm_lpage_info *linfo; @@ -3243,8 +3202,8 @@ static int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn, return level; } -static void disallowed_hugepage_adjust(u64 spte, gfn_t gfn, int cur_level, - kvm_pfn_t *pfnp, int *levelp) +void disallowed_hugepage_adjust(u64 spte, gfn_t gfn, int cur_level, + kvm_pfn_t *pfnp, int *levelp) { int level = *levelp; @@ -4068,9 +4027,11 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, if (page_fault_handle_page_track(vcpu, error_code, gfn)) return RET_PF_EMULATE; - r = fast_page_fault(vcpu, gpa, error_code); - if (r != RET_PF_INVALID) - return r; + if (!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)) { + r = fast_page_fault(vcpu, gpa, error_code); + if (r != RET_PF_INVALID) + return r; + } r = mmu_topup_memory_caches(vcpu, false); if (r) @@ -4092,8 +4053,13 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, r = make_mmu_pages_available(vcpu); if (r) goto out_unlock; - r = __direct_map(vcpu, gpa, error_code, map_writable, max_level, pfn, - prefault, is_tdp); + + if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)) + r = kvm_tdp_mmu_map(vcpu, gpa, error_code, map_writable, + max_level, pfn, prefault, is_tdp); + else + r = __direct_map(vcpu, gpa, error_code, map_writable, max_level, + pfn, prefault, is_tdp); out_unlock: spin_unlock(&vcpu->kvm->mmu_lock); diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index c053a157e4d55..f7fe5616eff98 100644 --- 
a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -121,6 +121,14 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, #define ACC_ALL (ACC_EXEC_MASK | ACC_WRITE_MASK | ACC_USER_MASK) static u64 __read_mostly shadow_dirty_mask; +static u64 __read_mostly shadow_accessed_mask; + +/* + * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK; + * shadow_acc_track_mask is the set of bits to be cleared in non-accessed + * pages. + */ +static u64 __read_mostly shadow_acc_track_mask; /* Functions for interpreting SPTEs */ static inline bool is_mmio_spte(u64 spte) @@ -186,6 +194,57 @@ static inline bool is_dirty_spte(u64 spte) return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK; } +static inline u64 spte_shadow_accessed_mask(u64 spte) +{ + MMU_WARN_ON(is_mmio_spte(spte)); + return spte_ad_enabled(spte) ? shadow_accessed_mask : 0; +} + +static inline bool is_access_track_spte(u64 spte) +{ + return !spte_ad_enabled(spte) && (spte & shadow_acc_track_mask) == 0; +} + void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn, u64 pages); + +/* + * Return values of handle_mmio_page_fault, mmu.page_fault, and fast_page_fault(). + * + * RET_PF_RETRY: let CPU fault again on the address. + * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. + * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. + * RET_PF_FIXED: The faulting entry has been fixed. + * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. + */ +enum { + RET_PF_RETRY = 0, + RET_PF_EMULATE, + RET_PF_INVALID, + RET_PF_FIXED, + RET_PF_SPURIOUS, +}; + +/* Bits which may be returned by set_spte() */ +#define SET_SPTE_WRITE_PROTECTED_PT BIT(0) +#define SET_SPTE_NEED_REMOTE_TLB_FLUSH BIT(1) +#define SET_SPTE_SPURIOUS BIT(2) + +int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level, + gfn_t gfn, kvm_pfn_t pfn, u64 old_spte, bool speculative, + bool can_unsync, bool host_writable, bool ad_disabled, + u64 *new_spte); +u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access); +u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled); + +int kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, gfn_t gfn, int max_level, + kvm_pfn_t *pfnp, bool huge_page_disallowed, + int *req_level); +void disallowed_hugepage_adjust(u64 spte, gfn_t gfn, int cur_level, + kvm_pfn_t *pfnp, int *levelp); + +bool is_nx_huge_page_enabled(void); + +void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 9b5cd4a832f1a..f92c12c4ce31a 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -291,6 +291,10 @@ static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, #define tdp_root_for_each_pte(_iter, _root, _start, _end) \ for_each_tdp_pte(_iter, _root->spt, _root->role.level, _start, _end) +#define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end) \ + for_each_tdp_pte(_iter, __va(_mmu->root_hpa), \ + _mmu->shadow_root_level, _start, _end) + static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter) { if (need_resched() || spin_needbreak(&kvm->mmu_lock)) { @@ -371,3 +375,123 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) if (flush) kvm_flush_remote_tlbs(kvm); } + +/* + * Installs a last-level SPTE to handle a TDP page fault. 
+ * (NPT/EPT violation/misconfiguration) + */ +static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, int write, + int map_writable, + struct tdp_iter *iter, + kvm_pfn_t pfn, bool prefault) +{ + u64 new_spte; + int ret = 0; + int make_spte_ret = 0; + + if (unlikely(is_noslot_pfn(pfn))) + new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL); + else + make_spte_ret = make_spte(vcpu, ACC_ALL, iter->level, iter->gfn, + pfn, iter->old_spte, prefault, true, + map_writable, !shadow_accessed_mask, + &new_spte); + + tdp_mmu_set_spte(vcpu->kvm, iter, new_spte); + + /* + * If the page fault was caused by a write but the page is write + * protected, emulation is needed. If the emulation was skipped, + * the vCPU would have the same fault again. + */ + if (make_spte_ret & SET_SPTE_WRITE_PROTECTED_PT) { + if (write) + ret = RET_PF_EMULATE; + kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu); + } + + /* If a MMIO SPTE is installed, the MMIO will need to be emulated. */ + if (unlikely(is_mmio_spte(new_spte))) + ret = RET_PF_EMULATE; + + if (!prefault) + vcpu->stat.pf_fixed++; + + return ret; +} + +/* + * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing + * page tables and SPTEs to translate the faulting guest physical address. + */ +int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, + int map_writable, int max_level, kvm_pfn_t pfn, + bool prefault, bool is_tdp) +{ + bool nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(); + bool write = error_code & PFERR_WRITE_MASK; + bool exec = error_code & PFERR_FETCH_MASK; + bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled; + struct kvm_mmu *mmu = vcpu->arch.mmu; + struct tdp_iter iter; + struct kvm_mmu_memory_cache *pf_pt_cache = + &vcpu->arch.mmu_shadow_page_cache; + u64 *child_pt; + u64 new_spte; + int ret; + gfn_t gfn = gpa >> PAGE_SHIFT; + int level; + int req_level; + + BUG_ON(!is_tdp); + BUG_ON(!VALID_PAGE(vcpu->arch.mmu->root_hpa)); + BUG_ON(!is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa)); + + level = kvm_mmu_hugepage_adjust(vcpu, gfn, max_level, &pfn, + huge_page_disallowed, &req_level); + + tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) { + if (nx_huge_page_workaround_enabled) + disallowed_hugepage_adjust(iter.old_spte, gfn, + iter.level, &pfn, &level); + + if (iter.level == level) + break; + + /* + * If there is an SPTE mapping a large page at a higher level + * than the target, that SPTE must be cleared and replaced + * with a non-leaf SPTE. + */ + if (is_shadow_present_pte(iter.old_spte) && + is_large_pte(iter.old_spte)) { + tdp_mmu_set_spte(vcpu->kvm, &iter, 0); + + kvm_flush_remote_tlbs_with_address(vcpu->kvm, iter.gfn, + KVM_PAGES_PER_HPAGE(iter.level)); + + /* + * The iter must explicitly re-read the spte here + * because the new value informs the !present + * path below. 
+ */ + iter.old_spte = READ_ONCE(*iter.sptep); + } + + if (!is_shadow_present_pte(iter.old_spte)) { + child_pt = kvm_mmu_memory_cache_alloc(pf_pt_cache); + clear_page(child_pt); + new_spte = make_nonleaf_spte(child_pt, + !shadow_accessed_mask); + + tdp_mmu_set_spte(vcpu->kvm, &iter, new_spte); + } + } + + BUG_ON(iter.level != level); + + ret = tdp_mmu_map_handle_target_level(vcpu, write, map_writable, &iter, + pfn, prefault); + + return ret; +} diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index 6de2d007fc03c..4d111a4dd332f 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -14,4 +14,9 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root); bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end); void kvm_tdp_mmu_zap_all(struct kvm *kvm); + +int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, + int map_writable, int max_level, kvm_pfn_t pfn, + bool prefault, bool is_tdp); + #endif /* __KVM_X86_MMU_TDP_MMU_H */
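For readers new to the RET_PF_* contract this patch moves into mmu_internal.h, a hedged sketch of how a caller reacts to the handler's return value; the real dispatch lives in existing KVM fault-handling code, so this is illustrative only and follows the enum's own documentation:

	/* Illustrative caller-side handling of the RET_PF_* codes. */
	int r = kvm_tdp_mmu_map(vcpu, gpa, error_code, map_writable,
				max_level, pfn, prefault, true);

	switch (r) {
	case RET_PF_RETRY:
		/* Return to the guest and let it take the fault again. */
		break;
	case RET_PF_EMULATE:
		/* MMIO or write-protected page: emulate the instruction. */
		break;
	case RET_PF_FIXED:
	case RET_PF_SPURIOUS:
		/* The mapping is in place; resume the guest. */
		break;
	}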
From patchwork Wed Oct 14 18:26:51 2020
Date: Wed, 14 Oct 2020 11:26:51 -0700
Message-Id: <20201014182700.2888246-12-bgardon@google.com>
Subject: [PATCH v2 11/20] kvm: x86/mmu: Allocate struct kvm_mmu_pages for all pages in TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

Attach struct kvm_mmu_pages to every page in the TDP MMU to track metadata, facilitate NX reclaim, and enable improved parallelism of MMU operations in future patches.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures.

This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
arch/x86/include/asm/kvm_host.h | 4 ++++ arch/x86/kvm/mmu/tdp_mmu.c | 13 ++++++++++--- 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index e0ec1dd271a32..2568dcd134156 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -989,7 +989,11 @@ struct kvm_arch { * operations. 
*/ bool tdp_mmu_enabled; + + /* List of struct tdp_mmu_pages being used as roots */ struct list_head tdp_mmu_roots; + /* List of struct tdp_mmu_pages not being used as roots */ + struct list_head tdp_mmu_pages; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index f92c12c4ce31a..78d41a1949651 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -34,6 +34,7 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm) kvm->arch.tdp_mmu_enabled = true; INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots); + INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages); } void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) @@ -188,6 +189,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, bool is_leaf = is_present && is_last_spte(new_spte, level); bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); u64 *pt; + struct kvm_mmu_page *sp; u64 old_child_spte; int i; @@ -253,6 +255,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, */ if (was_present && !was_leaf && (pfn_changed || !is_present)) { pt = spte_to_child_pt(old_spte, level); + sp = sptep_to_sp(pt); + + list_del(&sp->link); for (i = 0; i < PT64_ENT_PER_PAGE; i++) { old_child_spte = *(pt + i); @@ -266,6 +271,7 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, KVM_PAGES_PER_HPAGE(level)); free_page((unsigned long)pt); + kmem_cache_free(mmu_page_header_cache, sp); } } @@ -434,8 +440,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled; struct kvm_mmu *mmu = vcpu->arch.mmu; struct tdp_iter iter; - struct kvm_mmu_memory_cache *pf_pt_cache = - &vcpu->arch.mmu_shadow_page_cache; + struct kvm_mmu_page *sp; u64 *child_pt; u64 new_spte; int ret; @@ -479,7 +484,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, } if (!is_shadow_present_pte(iter.old_spte)) { - child_pt = kvm_mmu_memory_cache_alloc(pf_pt_cache); + sp = alloc_tdp_mmu_page(vcpu, iter.gfn, iter.level); + list_add(&sp->link, &vcpu->kvm->arch.tdp_mmu_pages); + child_pt = sp->spt; clear_page(child_pt); new_spte = make_nonleaf_spte(child_pt, !shadow_accessed_mask);
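alloc_tdp_mmu_page is introduced earlier in the series and is not quoted in this excerpt. Roughly, and with the specifics below being an approximation rather than the patch's exact code, it pairs a page-header allocation with a page-table page:

	/*
	 * Approximate shape of alloc_tdp_mmu_page (treat details as
	 * assumptions): allocate the metadata header and the page-table
	 * page, link them via the struct page private field, and record
	 * what the page will map.
	 */
	static struct kvm_mmu_page *alloc_tdp_mmu_page(struct kvm_vcpu *vcpu,
						       gfn_t gfn, int level)
	{
		struct kvm_mmu_page *sp;

		sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
		sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
		set_page_private(virt_to_page(sp->spt), (unsigned long)sp);

		sp->gfn = gfn;
		sp->role.level = level;
		sp->tdp_mmu_page = true;

		return sp;
	}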
From patchwork Wed Oct 14 18:26:52 2020
Date: Wed, 14 Oct 2020 11:26:52 -0700
Message-Id: <20201014182700.2888246-13-bgardon@google.com>
Subject: [PATCH v2 12/20] kvm: x86/mmu: Support invalidate range MMU notifier for TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon

In order to interoperate correctly with the rest of KVM and other Linux subsystems, the TDP MMU must correctly handle various MMU notifiers. Add hooks to handle the invalidate range family of MMU notifiers.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures. 
This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538 Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 9 ++++- arch/x86/kvm/mmu/tdp_mmu.c | 80 +++++++++++++++++++++++++++++++++++--- arch/x86/kvm/mmu/tdp_mmu.h | 2 + 3 files changed, 85 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 421a12a247b67..00534133f99fc 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1781,7 +1781,14 @@ static int kvm_handle_hva(struct kvm *kvm, unsigned long hva, int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, unsigned flags) { - return kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp); + int r; + + r = kvm_handle_hva_range(kvm, start, end, 0, kvm_unmap_rmapp); + + if (kvm->arch.tdp_mmu_enabled) + r |= kvm_tdp_mmu_zap_hva_range(kvm, start, end); + + return r; } int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte) diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 78d41a1949651..9ec6c26ed6619 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -58,7 +58,7 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa) } static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, - gfn_t start, gfn_t end); + gfn_t start, gfn_t end, bool can_yield); void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) { @@ -71,7 +71,7 @@ void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root) list_del(&root->link); - zap_gfn_range(kvm, root, 0, max_gfn); + zap_gfn_range(kvm, root, 0, max_gfn, false); free_page((unsigned long)root->spt); kmem_cache_free(mmu_page_header_cache, root); @@ -318,9 +318,14 @@ static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter) * non-root pages mapping GFNs strictly within that range. Returns true if * SPTEs have been cleared and a TLB flush is needed before releasing the * MMU lock. + * If can_yield is true, will release the MMU lock and reschedule if the + * scheduler needs the CPU or there is contention on the MMU lock. If this + * function cannot yield, it will not release the MMU lock or reschedule and + * the caller must ensure it does not supply too large a GFN range, or the + * operation can cause a soft lockup. 
*/ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, - gfn_t start, gfn_t end) + gfn_t start, gfn_t end, bool can_yield) { struct tdp_iter iter; bool flush_needed = false; @@ -341,7 +346,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, tdp_mmu_set_spte(kvm, &iter, 0); - flush_needed = !tdp_mmu_iter_cond_resched(kvm, &iter); + if (can_yield) + flush_needed = !tdp_mmu_iter_cond_resched(kvm, &iter); + else + flush_needed = true; } return flush_needed; } @@ -364,7 +372,7 @@ bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end) */ get_tdp_mmu_root(kvm, root); - flush |= zap_gfn_range(kvm, root, start, end); + flush |= zap_gfn_range(kvm, root, start, end, true); put_tdp_mmu_root(kvm, root); } @@ -502,3 +510,65 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code, return ret; } + +static int kvm_tdp_mmu_handle_hva_range(struct kvm *kvm, unsigned long start, + unsigned long end, unsigned long data, + int (*handler)(struct kvm *kvm, struct kvm_memory_slot *slot, + struct kvm_mmu_page *root, gfn_t start, + gfn_t end, unsigned long data)) +{ + struct kvm_memslots *slots; + struct kvm_memory_slot *memslot; + struct kvm_mmu_page *root; + int ret = 0; + int as_id; + + for_each_tdp_mmu_root(kvm, root) { + /* + * Take a reference on the root so that it cannot be freed if + * this thread releases the MMU lock and yields in this loop. + */ + get_tdp_mmu_root(kvm, root); + + as_id = kvm_mmu_page_as_id(root); + slots = __kvm_memslots(kvm, as_id); + kvm_for_each_memslot(memslot, slots) { + unsigned long hva_start, hva_end; + gfn_t gfn_start, gfn_end; + + hva_start = max(start, memslot->userspace_addr); + hva_end = min(end, memslot->userspace_addr + + (memslot->npages << PAGE_SHIFT)); + if (hva_start >= hva_end) + continue; + /* + * {gfn(page) | page intersects with [hva_start, hva_end)} = + * {gfn_start, gfn_start+1, ..., gfn_end-1}. 
+	 */
+			gfn_start = hva_to_gfn_memslot(hva_start, memslot);
+			gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, memslot);
+
+			ret |= handler(kvm, memslot, root, gfn_start,
+				       gfn_end, data);
+		}
+
+		put_tdp_mmu_root(kvm, root);
+	}
+
+	return ret;
+}
+
+static int zap_gfn_range_hva_wrapper(struct kvm *kvm,
+				     struct kvm_memory_slot *slot,
+				     struct kvm_mmu_page *root, gfn_t start,
+				     gfn_t end, unsigned long unused)
+{
+	return zap_gfn_range(kvm, root, start, end, false);
+}
+
+int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
+			      unsigned long end)
+{
+	return kvm_tdp_mmu_handle_hva_range(kvm, start, end, 0,
+					    zap_gfn_range_hva_wrapper);
+}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 4d111a4dd332f..026ceb6284102 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -19,4 +19,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 		    int map_writable, int max_level, kvm_pfn_t pfn,
 		    bool prefault, bool is_tdp);
 
+int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
+			      unsigned long end);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
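[ Illustration, not part of the patch: the per-memslot clipping performed by kvm_tdp_mmu_handle_hva_range() above converts a host-virtual range into a GFN range. A minimal, self-contained sketch of that arithmetic follows; struct toy_memslot and toy_hva_to_gfn() are invented stand-ins for KVM's memslot and hva_to_gfn_memslot(), and the constants are assumptions for illustration only. ]

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Toy stand-in for struct kvm_memory_slot; field names mirror the patch. */
struct toy_memslot {
	uint64_t base_gfn;       /* first guest frame number the slot maps */
	uint64_t npages;         /* slot length in pages */
	uint64_t userspace_addr; /* HVA where the slot is mapped */
};

/* Same computation as hva_to_gfn_memslot(): offset within the slot. */
static uint64_t toy_hva_to_gfn(uint64_t hva, const struct toy_memslot *slot)
{
	return slot->base_gfn + ((hva - slot->userspace_addr) >> PAGE_SHIFT);
}

int main(void)
{
	struct toy_memslot slot = {
		.base_gfn = 0x100, .npages = 512,
		.userspace_addr = 0x7f0000000000ULL,
	};
	/* Notifier range, e.g. from an munmap() of part of guest memory. */
	uint64_t start = 0x7f0000003000ULL, end = 0x7f0000009000ULL;

	/* Clip the HVA range to the slot, exactly as the handler above does. */
	uint64_t hva_start = start > slot.userspace_addr ?
					start : slot.userspace_addr;
	uint64_t slot_end = slot.userspace_addr + (slot.npages << PAGE_SHIFT);
	uint64_t hva_end = end < slot_end ? end : slot_end;

	if (hva_start >= hva_end)
		return 0; /* range does not intersect this slot */

	uint64_t gfn_start = toy_hva_to_gfn(hva_start, &slot);
	uint64_t gfn_end = toy_hva_to_gfn(hva_end + PAGE_SIZE - 1, &slot);

	printf("zap GFNs [0x%llx, 0x%llx)\n",
	       (unsigned long long)gfn_start, (unsigned long long)gfn_end);
	return 0;
}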
From patchwork Wed Oct 14 18:26:53 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838135
Date: Wed, 14 Oct 2020 11:26:53 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-14-bgardon@google.com>
Subject: [PATCH v2 13/20] kvm: x86/mmu: Add access tracking for tdp_mmu
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews , Paolo Bonzini , Peter Xu , Sean Christopherson ,
 Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang ,
 Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon

In order to interoperate correctly with the rest of KVM and other Linux
subsystems, the TDP MMU must correctly handle various MMU notifiers. The
main Linux MM uses the access tracking MMU notifiers for swap and other
features. Add hooks to handle the test/flush HVA (range) family of MMU
notifiers.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 34 +++++-----
 arch/x86/kvm/mmu/mmu_internal.h | 17 +++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 113 ++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.h      | 4 ++
 4 files changed, 145 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 00534133f99fc..e6ab79d8f215f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -175,8 +175,6 @@ static struct kmem_cache *pte_list_desc_cache;
 struct kmem_cache *mmu_page_header_cache;
 static struct percpu_counter kvm_total_used_mmu_pages;
 
-static u64 __read_mostly shadow_nx_mask;
-static u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */
 static u64 __read_mostly shadow_user_mask;
 static u64 __read_mostly shadow_mmio_value;
 static u64 __read_mostly shadow_mmio_access_mask;
@@ -221,7 +219,6 @@ static u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 static u8 __read_mostly shadow_phys_bits;
 
 static void mmu_spte_set(u64 *sptep, u64 spte);
-static bool is_executable_pte(u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
@@ -516,11 +513,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
 	return vcpu->arch.efer & EFER_NX;
 }
 
-static bool is_executable_pte(u64 spte)
-{
-	return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
-}
-
 static gfn_t pse36_gfn_delta(u32 gpte)
 {
 	int shift = 32 - PT32_DIR_PSE36_SHIFT - PAGE_SHIFT;
@@ -695,14 +687,6 @@ static bool spte_has_volatile_bits(u64 spte)
 	return false;
 }
 
-static bool is_accessed_spte(u64 spte)
-{
-	u64 accessed_mask = spte_shadow_accessed_mask(spte);
-
-	return accessed_mask ?
spte & accessed_mask - : !is_access_track_spte(spte); -} - /* Rules for using mmu_spte_set: * Set the sptep from nonpresent to present. * Note: the sptep being assigned *must* be either not present @@ -838,7 +822,7 @@ static u64 mmu_spte_get_lockless(u64 *sptep) return __get_spte_lockless(sptep); } -static u64 mark_spte_for_access_track(u64 spte) +u64 mark_spte_for_access_track(u64 spte) { if (spte_ad_enabled(spte)) return spte & ~shadow_accessed_mask; @@ -1842,12 +1826,24 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn) int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end) { - return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp); + int young = false; + + young = kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp); + if (kvm->arch.tdp_mmu_enabled) + young |= kvm_tdp_mmu_age_hva_range(kvm, start, end); + + return young; } int kvm_test_age_hva(struct kvm *kvm, unsigned long hva) { - return kvm_handle_hva(kvm, hva, 0, kvm_test_age_rmapp); + int young = false; + + young = kvm_handle_hva(kvm, hva, 0, kvm_test_age_rmapp); + if (kvm->arch.tdp_mmu_enabled) + young |= kvm_tdp_mmu_test_age_hva(kvm, hva); + + return young; } #ifdef MMU_DEBUG diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index f7fe5616eff98..d886fe750be38 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -122,6 +122,8 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, static u64 __read_mostly shadow_dirty_mask; static u64 __read_mostly shadow_accessed_mask; +static u64 __read_mostly shadow_nx_mask; +static u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */ /* * SPTEs used by MMUs without A/D bits are marked with SPTE_AD_DISABLED_MASK; @@ -205,6 +207,19 @@ static inline bool is_access_track_spte(u64 spte) return !spte_ad_enabled(spte) && (spte & shadow_acc_track_mask) == 0; } +static inline bool is_accessed_spte(u64 spte) +{ + u64 accessed_mask = spte_shadow_accessed_mask(spte); + + return accessed_mask ? spte & accessed_mask + : !is_access_track_spte(spte); +} + +static inline bool is_executable_pte(u64 spte) +{ + return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask; +} + void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn, u64 pages); @@ -247,4 +262,6 @@ bool is_nx_huge_page_enabled(void); void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); +u64 mark_spte_for_access_track(u64 spte); + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 9ec6c26ed6619..575970d8805a4 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -168,6 +168,18 @@ static int kvm_mmu_page_as_id(struct kvm_mmu_page *sp) return sp->role.smm ? 
1 : 0; } +static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level) +{ + bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); + + if (!is_shadow_present_pte(old_spte) || !is_last_spte(old_spte, level)) + return; + + if (is_accessed_spte(old_spte) && + (!is_accessed_spte(new_spte) || pfn_changed)) + kvm_set_pfn_accessed(spte_to_pfn(old_spte)); +} + /** * handle_changed_spte - handle bookkeeping associated with an SPTE change * @kvm: kvm instance @@ -279,10 +291,11 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, u64 old_spte, u64 new_spte, int level) { __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level); + handle_changed_spte_acc_track(old_spte, new_spte, level); } -static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, - u64 new_spte) +static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, + u64 new_spte, bool record_acc_track) { u64 *root_pt = tdp_iter_root_pt(iter); struct kvm_mmu_page *root = sptep_to_sp(root_pt); @@ -290,13 +303,36 @@ static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, *iter->sptep = new_spte; - handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte, - iter->level); + __handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte, + iter->level); + if (record_acc_track) + handle_changed_spte_acc_track(iter->old_spte, new_spte, + iter->level); +} + +static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, + u64 new_spte) +{ + __tdp_mmu_set_spte(kvm, iter, new_spte, true); +} + +static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm, + struct tdp_iter *iter, + u64 new_spte) +{ + __tdp_mmu_set_spte(kvm, iter, new_spte, false); } #define tdp_root_for_each_pte(_iter, _root, _start, _end) \ for_each_tdp_pte(_iter, _root->spt, _root->role.level, _start, _end) +#define tdp_root_for_each_leaf_pte(_iter, _root, _start, _end) \ + tdp_root_for_each_pte(_iter, _root, _start, _end) \ + if (!is_shadow_present_pte(_iter.old_spte) || \ + !is_last_spte(_iter.old_spte, _iter.level)) \ + continue; \ + else + #define tdp_mmu_for_each_pte(_iter, _mmu, _start, _end) \ for_each_tdp_pte(_iter, __va(_mmu->root_hpa), \ _mmu->shadow_root_level, _start, _end) @@ -572,3 +608,72 @@ int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start, return kvm_tdp_mmu_handle_hva_range(kvm, start, end, 0, zap_gfn_range_hva_wrapper); } + +/* + * Mark the SPTEs range of GFNs [start, end) unaccessed and return non-zero + * if any of the GFNs in the range have been accessed. + */ +static int age_gfn_range(struct kvm *kvm, struct kvm_memory_slot *slot, + struct kvm_mmu_page *root, gfn_t start, gfn_t end, + unsigned long unused) +{ + struct tdp_iter iter; + int young = 0; + u64 new_spte = 0; + + tdp_root_for_each_leaf_pte(iter, root, start, end) { + /* + * If we have a non-accessed entry we don't need to change the + * pte. + */ + if (!is_accessed_spte(iter.old_spte)) + continue; + + new_spte = iter.old_spte; + + if (spte_ad_enabled(new_spte)) { + clear_bit((ffs(shadow_accessed_mask) - 1), + (unsigned long *)&new_spte); + } else { + /* + * Capture the dirty status of the page, so that it doesn't get + * lost when the SPTE is marked for access tracking. 
+	 */
+			if (is_writable_pte(new_spte))
+				kvm_set_pfn_dirty(spte_to_pfn(new_spte));
+
+			new_spte = mark_spte_for_access_track(new_spte);
+		}
+
+		tdp_mmu_set_spte_no_acc_track(kvm, &iter, new_spte);
+		young = 1;
+	}
+
+	return young;
+}
+
+int kvm_tdp_mmu_age_hva_range(struct kvm *kvm, unsigned long start,
+			      unsigned long end)
+{
+	return kvm_tdp_mmu_handle_hva_range(kvm, start, end, 0,
+					    age_gfn_range);
+}
+
+static int test_age_gfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+			struct kvm_mmu_page *root, gfn_t gfn, gfn_t unused,
+			unsigned long unused2)
+{
+	struct tdp_iter iter;
+
+	tdp_root_for_each_leaf_pte(iter, root, gfn, gfn + 1)
+		if (is_accessed_spte(iter.old_spte))
+			return 1;
+
+	return 0;
+}
+
+int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva)
+{
+	return kvm_tdp_mmu_handle_hva_range(kvm, hva, hva + 1, 0,
+					    test_age_gfn);
+}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 026ceb6284102..bdb86f61e75eb 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -21,4 +21,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 
 int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
 			      unsigned long end);
+
+int kvm_tdp_mmu_age_hva_range(struct kvm *kvm, unsigned long start,
+			      unsigned long end);
+int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
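[ Illustration, not part of the patch: the contract of the two hooks above reduces to a small model. toy_age_range() and TOY_ACCESSED are invented for illustration; the real age_gfn_range() must additionally preserve the page's dirty status when A/D bits are disabled, as its comment explains. ]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_ACCESSED (1ULL << 8) /* illustrative bit position only */

/*
 * "age" clears the accessed state and reports whether anything was young;
 * "test age" only reports, leaving the entries untouched.
 */
static int toy_age_range(uint64_t *sptes, int n, bool clear)
{
	int young = 0;

	for (int i = 0; i < n; i++) {
		if (!(sptes[i] & TOY_ACCESSED))
			continue;
		young = 1;
		if (clear)
			sptes[i] &= ~TOY_ACCESSED;
	}
	return young;
}

int main(void)
{
	uint64_t sptes[] = { TOY_ACCESSED, 0, TOY_ACCESSED };

	printf("test_age: %d\n", toy_age_range(sptes, 3, false)); /* 1 */
	printf("age:      %d\n", toy_age_range(sptes, 3, true));  /* 1, clears */
	printf("test_age: %d\n", toy_age_range(sptes, 3, false)); /* now 0 */
	return 0;
}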
From patchwork Wed Oct 14 18:26:54 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838153
Date: Wed, 14 Oct 2020 11:26:54 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-15-bgardon@google.com>
Subject: [PATCH v2 14/20] kvm: x86/mmu: Support changed pte notifier in tdp MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews , Paolo Bonzini , Peter Xu , Sean Christopherson ,
 Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang ,
 Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon

In order to interoperate correctly with the rest of KVM and other Linux
subsystems, the TDP MMU must correctly handle various MMU notifiers. Add
a hook and handle the change_pte MMU notifier.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.
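[ Illustration, not part of the patch: the essence of change_pte handling is the SPTE rewrite this patch factors out into kvm_mmu_changed_pte_notifier_make_spte(). The sketch below mirrors only that transformation; the TOY_* masks are invented stand-ins for PT64_BASE_ADDR_MASK, PT_WRITABLE_MASK and SPTE_HOST_WRITEABLE, and the real helper also calls mark_spte_for_access_track(). ]

#include <stdint.h>
#include <stdio.h>

#define TOY_PAGE_SHIFT 12
#define TOY_ADDR_MASK  0x000ffffffffff000ULL
#define TOY_WRITABLE   (1ULL << 1)
#define TOY_HOST_WRITE (1ULL << 10)

/*
 * Keep the old SPTE's attribute bits, swap in the new PFN, and drop write
 * access so the next guest write faults and re-validates against the new
 * host PTE.
 */
static uint64_t toy_changed_pte_make_spte(uint64_t old_spte, uint64_t new_pfn)
{
	uint64_t new_spte = old_spte & ~TOY_ADDR_MASK;

	new_spte |= new_pfn << TOY_PAGE_SHIFT;
	new_spte &= ~(TOY_WRITABLE | TOY_HOST_WRITE);
	return new_spte;
}

int main(void)
{
	uint64_t old = 0xabc000ULL | TOY_WRITABLE | TOY_HOST_WRITE;

	printf("new spte: 0x%llx\n",
	       (unsigned long long)toy_changed_pte_make_spte(old, 0xdef));
	return 0;
}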
This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538 Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 21 ++++++------- arch/x86/kvm/mmu/mmu_internal.h | 29 +++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.c | 56 +++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.h | 3 ++ 4 files changed, 98 insertions(+), 11 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e6ab79d8f215f..ef9ea3f45241b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -135,9 +135,6 @@ enum { #include -#define SPTE_HOST_WRITEABLE (1ULL << PT_FIRST_AVAIL_BITS_SHIFT) -#define SPTE_MMU_WRITEABLE (1ULL << (PT_FIRST_AVAIL_BITS_SHIFT + 1)) - /* make pte_list_desc fit well in cache line */ #define PTE_LIST_EXT 3 @@ -1615,13 +1612,8 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head, pte_list_remove(rmap_head, sptep); goto restart; } else { - new_spte = *sptep & ~PT64_BASE_ADDR_MASK; - new_spte |= (u64)new_pfn << PAGE_SHIFT; - - new_spte &= ~PT_WRITABLE_MASK; - new_spte &= ~SPTE_HOST_WRITEABLE; - - new_spte = mark_spte_for_access_track(new_spte); + new_spte = kvm_mmu_changed_pte_notifier_make_spte( + *sptep, new_pfn); mmu_spte_clear_track_bits(sptep); mmu_spte_set(sptep, new_spte); @@ -1777,7 +1769,14 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte) { - return kvm_handle_hva(kvm, hva, (unsigned long)&pte, kvm_set_pte_rmapp); + int r; + + r = kvm_handle_hva(kvm, hva, (unsigned long)&pte, kvm_set_pte_rmapp); + + if (kvm->arch.tdp_mmu_enabled) + r |= kvm_tdp_mmu_set_spte_hva(kvm, hva, &pte); + + return r; } static int kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head, diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index d886fe750be38..49c3a04d2b894 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -115,6 +115,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm, (PT64_BASE_ADDR_MASK & ((1ULL << (PAGE_SHIFT + (((level) - 1) \ * PT64_LEVEL_BITS))) - 1)) +#ifdef CONFIG_DYNAMIC_PHYSICAL_MASK +#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1)) +#else +#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)) +#endif + #define ACC_EXEC_MASK 1 #define ACC_WRITE_MASK PT_WRITABLE_MASK #define ACC_USER_MASK PT_USER_MASK @@ -132,6 +138,12 @@ static u64 __read_mostly shadow_x_mask; /* mutual exclusive with nx_mask */ */ static u64 __read_mostly shadow_acc_track_mask; +#define PT_FIRST_AVAIL_BITS_SHIFT 10 +#define PT64_SECOND_AVAIL_BITS_SHIFT 54 + +#define SPTE_HOST_WRITEABLE (1ULL << PT_FIRST_AVAIL_BITS_SHIFT) +#define SPTE_MMU_WRITEABLE (1ULL << (PT_FIRST_AVAIL_BITS_SHIFT + 1)) + /* Functions for interpreting SPTEs */ static inline bool is_mmio_spte(u64 spte) { @@ -264,4 +276,21 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); u64 mark_spte_for_access_track(u64 spte); +static inline u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, + kvm_pfn_t new_pfn) +{ + u64 new_spte; + + new_spte = old_spte & ~PT64_BASE_ADDR_MASK; + new_spte |= (u64)new_pfn << PAGE_SHIFT; + + new_spte &= ~PT_WRITABLE_MASK; + new_spte &= ~SPTE_HOST_WRITEABLE; + + new_spte = mark_spte_for_access_track(new_spte); + + return new_spte; +} + + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 575970d8805a4..90abd55c89375 100644 --- 
a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -677,3 +677,59 @@ int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva)
 	return kvm_tdp_mmu_handle_hva_range(kvm, hva, hva + 1, 0,
 					    test_age_gfn);
 }
+
+/*
+ * Handle the changed_pte MMU notifier for the TDP MMU.
+ * data is a pointer to the new pte_t mapping the HVA specified by the MMU
+ * notifier.
+ * The function performs the required TLB flushes itself and so returns 0,
+ * i.e. the caller does not need to flush before releasing the MMU lock.
+ */
+static int set_tdp_spte(struct kvm *kvm, struct kvm_memory_slot *slot,
+			struct kvm_mmu_page *root, gfn_t gfn, gfn_t unused,
+			unsigned long data)
+{
+	struct tdp_iter iter;
+	pte_t *ptep = (pte_t *)data;
+	kvm_pfn_t new_pfn;
+	u64 new_spte;
+	int need_flush = 0;
+
+	WARN_ON(pte_huge(*ptep));
+
+	new_pfn = pte_pfn(*ptep);
+
+	tdp_root_for_each_pte(iter, root, gfn, gfn + 1) {
+		if (iter.level != PG_LEVEL_4K)
+			continue;
+
+		if (!is_shadow_present_pte(iter.old_spte))
+			break;
+
+		tdp_mmu_set_spte(kvm, &iter, 0);
+
+		kvm_flush_remote_tlbs_with_address(kvm, iter.gfn, 1);
+
+		if (!pte_write(*ptep)) {
+			new_spte = kvm_mmu_changed_pte_notifier_make_spte(
+					iter.old_spte, new_pfn);
+
+			tdp_mmu_set_spte(kvm, &iter, new_spte);
+		}
+
+		need_flush = 1;
+	}
+
+	if (need_flush)
+		kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+
+	return 0;
+}
+
+int kvm_tdp_mmu_set_spte_hva(struct kvm *kvm, unsigned long address,
+			     pte_t *host_ptep)
+{
+	return kvm_tdp_mmu_handle_hva_range(kvm, address, address + 1,
+					    (unsigned long)host_ptep,
+					    set_tdp_spte);
+}
+
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index bdb86f61e75eb..6569792f40d4f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -25,4 +25,7 @@ int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
 int kvm_tdp_mmu_age_hva_range(struct kvm *kvm, unsigned long start,
 			      unsigned long end);
 int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva);
+
+int kvm_tdp_mmu_set_spte_hva(struct kvm *kvm, unsigned long address,
+			     pte_t *host_ptep);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
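[ Illustration, not part of the patch: a sketch of the control flow in set_tdp_spte() above for a single 4K mapping, with an invented toy SPTE encoding. Note the asymmetry: the translation is only eagerly re-created when the new host PTE is read-only, matching the "if (!pte_write(*ptep))" branch; for a writable host PTE the SPTE stays zapped and is re-faulted on the next guest access. ]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t spte = 0xabc000ULL | 0x3; /* present + writable, toy encoding */
	bool host_pte_writable = false;    /* e.g. KSM merged the page read-only */
	uint64_t new_pfn = 0xdef;

	/* Step 1: zap the stale translation (and flush that GFN's TLB entry). */
	spte = 0;

	/* Step 2: eagerly re-map only read-only PTEs, without write access. */
	if (!host_pte_writable)
		spte = (new_pfn << 12) | 0x1; /* present, not writable */

	printf("final spte: 0x%llx\n", (unsigned long long)spte);
	return 0;
}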
From patchwork Wed Oct 14 18:26:55 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838155
Date: Wed, 14 Oct 2020 11:26:55 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-16-bgardon@google.com>
Subject: [PATCH v2 15/20] kvm: x86/mmu: Support dirty logging for the TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews , Paolo Bonzini , Peter Xu , Sean Christopherson ,
 Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang ,
 Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon

Dirty logging is a key feature of the KVM MMU and must be supported by
the TDP MMU. Add support for both the write protection and PML dirty
logging modes.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.
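[ Illustration, not part of the patch: the two dirty-logging modes named in the commit message differ in how dirty state is reset per SPTE, as implemented in clear_dirty_gfn_range() later in this patch. A toy model of that decision, with invented TOY_* bit positions: ]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_WRITABLE (1ULL << 1)
#define TOY_DIRTY    (1ULL << 9)

/*
 * With hardware A/D bits the dirty bit is simply cleared; without them
 * (or when write protection is required instead) the SPTE is
 * write-protected so the next guest write faults and re-marks the page
 * dirty.
 */
static bool toy_clear_dirty(uint64_t *spte, bool need_write_protect)
{
	if (need_write_protect) {
		if (!(*spte & TOY_WRITABLE))
			return false;
		*spte &= ~TOY_WRITABLE;
	} else {
		if (!(*spte & TOY_DIRTY))
			return false;
		*spte &= ~TOY_DIRTY;
	}
	return true; /* SPTE changed: caller must flush TLBs */
}

int main(void)
{
	uint64_t spte = TOY_WRITABLE | TOY_DIRTY;

	toy_clear_dirty(&spte, true); /* write-protect mode */
	printf("spte now 0x%llx\n", (unsigned long long)spte);
	return 0;
}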
This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538 Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 20 ++- arch/x86/kvm/mmu/mmu_internal.h | 6 + arch/x86/kvm/mmu/tdp_iter.h | 7 +- arch/x86/kvm/mmu/tdp_mmu.c | 292 +++++++++++++++++++++++++++++++- arch/x86/kvm/mmu/tdp_mmu.h | 10 ++ include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 6 +- 7 files changed, 327 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ef9ea3f45241b..b2ce57761d2f1 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -277,12 +277,6 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu) return vcpu->arch.mmu == &vcpu->arch.guest_mmu; } -static inline bool spte_ad_need_write_protect(u64 spte) -{ - MMU_WARN_ON(is_mmio_spte(spte)); - return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_ENABLED_MASK; -} - bool is_nx_huge_page_enabled(void) { return READ_ONCE(nx_huge_pages); @@ -1483,6 +1477,9 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; + if (kvm->arch.tdp_mmu_enabled) + kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, + slot->base_gfn + gfn_offset, mask, true); while (mask) { rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask), PG_LEVEL_4K, slot); @@ -1509,6 +1506,9 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, { struct kvm_rmap_head *rmap_head; + if (kvm->arch.tdp_mmu_enabled) + kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot, + slot->base_gfn + gfn_offset, mask, false); while (mask) { rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask), PG_LEVEL_4K, slot); @@ -5853,6 +5853,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, spin_lock(&kvm->mmu_lock); flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect, start_level, KVM_MAX_HUGEPAGE_LEVEL, false); + if (kvm->arch.tdp_mmu_enabled) + flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_4K); spin_unlock(&kvm->mmu_lock); /* @@ -5941,6 +5943,8 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm, spin_lock(&kvm->mmu_lock); flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false); + if (kvm->arch.tdp_mmu_enabled) + flush |= kvm_tdp_mmu_clear_dirty_slot(kvm, memslot); spin_unlock(&kvm->mmu_lock); /* @@ -5962,6 +5966,8 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm, spin_lock(&kvm->mmu_lock); flush = slot_handle_large_level(kvm, memslot, slot_rmap_write_protect, false); + if (kvm->arch.tdp_mmu_enabled) + flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_2M); spin_unlock(&kvm->mmu_lock); if (flush) @@ -5976,6 +5982,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm, spin_lock(&kvm->mmu_lock); flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false); + if (kvm->arch.tdp_mmu_enabled) + flush |= kvm_tdp_mmu_slot_set_dirty(kvm, memslot); spin_unlock(&kvm->mmu_lock); if (flush) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 49c3a04d2b894..a7230532bb845 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -232,6 +232,12 @@ static inline bool is_executable_pte(u64 spte) return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask; } +static inline bool spte_ad_need_write_protect(u64 spte) +{ + MMU_WARN_ON(is_mmio_spte(spte)); + return (spte & SPTE_SPECIAL_MASK) != SPTE_AD_ENABLED_MASK; +} + void kvm_flush_remote_tlbs_with_address(struct kvm *kvm, u64 start_gfn, u64 pages); diff --git a/arch/x86/kvm/mmu/tdp_iter.h 
b/arch/x86/kvm/mmu/tdp_iter.h index 884ed2c70bfed..47170d0dc98e5 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -41,11 +41,14 @@ struct tdp_iter { * Iterates over every SPTE mapping the GFN range [start, end) in a * preorder traversal. */ -#define for_each_tdp_pte(iter, root, root_level, start, end) \ - for (tdp_iter_start(&iter, root, root_level, PG_LEVEL_4K, start); \ +#define for_each_tdp_pte_min_level(iter, root, root_level, min_level, start, end) \ + for (tdp_iter_start(&iter, root, root_level, min_level, start); \ iter.valid && iter.gfn < end; \ tdp_iter_next(&iter)) +#define for_each_tdp_pte(iter, root, root_level, start, end) \ + for_each_tdp_pte_min_level(iter, root, root_level, PG_LEVEL_4K, start, end) + u64 *spte_to_child_pt(u64 pte, int level); void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level, diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 90abd55c89375..099c7d68aeb1d 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -180,6 +180,24 @@ static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level) kvm_set_pfn_accessed(spte_to_pfn(old_spte)); } +static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn, + u64 old_spte, u64 new_spte, int level) +{ + bool pfn_changed; + struct kvm_memory_slot *slot; + + if (level > PG_LEVEL_4K) + return; + + pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte); + + if ((!is_writable_pte(old_spte) || pfn_changed) && + is_writable_pte(new_spte)) { + slot = __gfn_to_memslot(__kvm_memslots(kvm, as_id), gfn); + mark_page_dirty_in_slot(slot, gfn); + } +} + /** * handle_changed_spte - handle bookkeeping associated with an SPTE change * @kvm: kvm instance @@ -292,10 +310,13 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn, { __handle_changed_spte(kvm, as_id, gfn, old_spte, new_spte, level); handle_changed_spte_acc_track(old_spte, new_spte, level); + handle_changed_spte_dirty_log(kvm, as_id, gfn, old_spte, + new_spte, level); } static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, - u64 new_spte, bool record_acc_track) + u64 new_spte, bool record_acc_track, + bool record_dirty_log) { u64 *root_pt = tdp_iter_root_pt(iter); struct kvm_mmu_page *root = sptep_to_sp(root_pt); @@ -308,19 +329,30 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, if (record_acc_track) handle_changed_spte_acc_track(iter->old_spte, new_spte, iter->level); + if (record_dirty_log) + handle_changed_spte_dirty_log(kvm, as_id, iter->gfn, + iter->old_spte, new_spte, + iter->level); } static inline void tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte) { - __tdp_mmu_set_spte(kvm, iter, new_spte, true); + __tdp_mmu_set_spte(kvm, iter, new_spte, true, true); } static inline void tdp_mmu_set_spte_no_acc_track(struct kvm *kvm, struct tdp_iter *iter, u64 new_spte) { - __tdp_mmu_set_spte(kvm, iter, new_spte, false); + __tdp_mmu_set_spte(kvm, iter, new_spte, false, true); +} + +static inline void tdp_mmu_set_spte_no_dirty_log(struct kvm *kvm, + struct tdp_iter *iter, + u64 new_spte) +{ + __tdp_mmu_set_spte(kvm, iter, new_spte, true, false); } #define tdp_root_for_each_pte(_iter, _root, _start, _end) \ @@ -644,6 +676,7 @@ static int age_gfn_range(struct kvm *kvm, struct kvm_memory_slot *slot, new_spte = mark_spte_for_access_track(new_spte); } + new_spte &= ~shadow_dirty_mask; tdp_mmu_set_spte_no_acc_track(kvm, &iter, new_spte); young = 1; @@ 
-733,3 +766,256 @@ int kvm_tdp_mmu_set_spte_hva(struct kvm *kvm, unsigned long address,
 					    set_tdp_spte);
 }
 
+/*
+ * Remove write access from all the SPTEs mapping GFNs [start, end). Will
+ * only affect leaf SPTEs down to min_level.
+ * Returns true if an SPTE has been changed and the TLBs need to be flushed.
+ */
+static bool wrprot_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+			     gfn_t start, gfn_t end, int min_level)
+{
+	struct tdp_iter iter;
+	u64 new_spte;
+	bool spte_set = false;
+
+	BUG_ON(min_level > KVM_MAX_HUGEPAGE_LEVEL);
+
+	for_each_tdp_pte_min_level(iter, root->spt, root->role.level,
+				   min_level, start, end) {
+		if (!is_shadow_present_pte(iter.old_spte) ||
+		    !is_last_spte(iter.old_spte, iter.level))
+			continue;
+
+		new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
+
+		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
+		spte_set = true;
+
+		tdp_mmu_iter_cond_resched(kvm, &iter);
+	}
+	return spte_set;
+}
+
+/*
+ * Remove write access from all the SPTEs mapping GFNs in the memslot. Will
+ * only affect leaf SPTEs down to min_level.
+ * Returns true if an SPTE has been changed and the TLBs need to be flushed.
+ */
+bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
+			     int min_level)
+{
+	struct kvm_mmu_page *root;
+	int root_as_id;
+	bool spte_set = false;
+
+	for_each_tdp_mmu_root(kvm, root) {
+		root_as_id = kvm_mmu_page_as_id(root);
+		if (root_as_id != slot->as_id)
+			continue;
+
+		/*
+		 * Take a reference on the root so that it cannot be freed if
+		 * this thread releases the MMU lock and yields in this loop.
+		 */
+		get_tdp_mmu_root(kvm, root);
+
+		spte_set = wrprot_gfn_range(kvm, root, slot->base_gfn,
+			    slot->base_gfn + slot->npages, min_level) ||
+			    spte_set;
+
+		put_tdp_mmu_root(kvm, root);
+	}
+
+	return spte_set;
+}
+
+/*
+ * Clear the dirty status of all the SPTEs mapping GFNs in the memslot. If
+ * AD bits are enabled, this will involve clearing the dirty bit on each SPTE.
+ * If AD bits are not enabled, this will require clearing the writable bit on
+ * each SPTE. Returns true if an SPTE has been changed and the TLBs need to
+ * be flushed.
+ */
+static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
+				  gfn_t start, gfn_t end)
+{
+	struct tdp_iter iter;
+	u64 new_spte;
+	bool spte_set = false;
+
+	tdp_root_for_each_leaf_pte(iter, root, start, end) {
+		if (spte_ad_need_write_protect(iter.old_spte)) {
+			if (is_writable_pte(iter.old_spte))
+				new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
+			else
+				continue;
+		} else {
+			if (iter.old_spte & shadow_dirty_mask)
+				new_spte = iter.old_spte & ~shadow_dirty_mask;
+			else
+				continue;
+		}
+
+		tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte);
+		spte_set = true;
+
+		tdp_mmu_iter_cond_resched(kvm, &iter);
+	}
+	return spte_set;
+}
+
+/*
+ * Clear the dirty status of all the SPTEs mapping GFNs in the memslot. If
+ * AD bits are enabled, this will involve clearing the dirty bit on each SPTE.
+ * If AD bits are not enabled, this will require clearing the writable bit on
+ * each SPTE. Returns true if an SPTE has been changed and the TLBs need to
+ * be flushed.
+ */ +bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, struct kvm_memory_slot *slot) +{ + struct kvm_mmu_page *root; + int root_as_id; + bool spte_set = false; + + for_each_tdp_mmu_root(kvm, root) { + root_as_id = kvm_mmu_page_as_id(root); + if (root_as_id != slot->as_id) + continue; + + /* + * Take a reference on the root so that it cannot be freed if + * this thread releases the MMU lock and yields in this loop. + */ + get_tdp_mmu_root(kvm, root); + + spte_set = clear_dirty_gfn_range(kvm, root, slot->base_gfn, + slot->base_gfn + slot->npages) || spte_set; + + put_tdp_mmu_root(kvm, root); + } + + return spte_set; +} + +/* + * Clears the dirty status of all the 4k SPTEs mapping GFNs for which a bit is + * set in mask, starting at gfn. The given memslot is expected to contain all + * the GFNs represented by set bits in the mask. If AD bits are enabled, + * clearing the dirty status will involve clearing the dirty bit on each SPTE + * or, if AD bits are not enabled, clearing the writable bit on each SPTE. + */ +static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root, + gfn_t gfn, unsigned long mask, bool wrprot) +{ + struct tdp_iter iter; + u64 new_spte; + + tdp_root_for_each_leaf_pte(iter, root, gfn + __ffs(mask), + gfn + BITS_PER_LONG) { + if (!mask) + break; + + if (iter.level > PG_LEVEL_4K || + !(mask & (1UL << (iter.gfn - gfn)))) + continue; + + if (wrprot || spte_ad_need_write_protect(iter.old_spte)) { + if (is_writable_pte(iter.old_spte)) + new_spte = iter.old_spte & ~PT_WRITABLE_MASK; + else + continue; + } else { + if (iter.old_spte & shadow_dirty_mask) + new_spte = iter.old_spte & ~shadow_dirty_mask; + else + continue; + } + + tdp_mmu_set_spte_no_dirty_log(kvm, &iter, new_spte); + + mask &= ~(1UL << (iter.gfn - gfn)); + } +} + +/* + * Clears the dirty status of all the 4k SPTEs mapping GFNs for which a bit is + * set in mask, starting at gfn. The given memslot is expected to contain all + * the GFNs represented by set bits in the mask. If AD bits are enabled, + * clearing the dirty status will involve clearing the dirty bit on each SPTE + * or, if AD bits are not enabled, clearing the writable bit on each SPTE. + */ +void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, + struct kvm_memory_slot *slot, + gfn_t gfn, unsigned long mask, + bool wrprot) +{ + struct kvm_mmu_page *root; + int root_as_id; + + lockdep_assert_held(&kvm->mmu_lock); + for_each_tdp_mmu_root(kvm, root) { + root_as_id = kvm_mmu_page_as_id(root); + if (root_as_id != slot->as_id) + continue; + + clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot); + } +} + +/* + * Set the dirty status of all the SPTEs mapping GFNs in the memslot. This is + * only used for PML, and so will involve setting the dirty bit on each SPTE. + * Returns true if an SPTE has been changed and the TLBs need to be flushed. + */ +static bool set_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, + gfn_t start, gfn_t end) +{ + struct tdp_iter iter; + u64 new_spte; + bool spte_set = false; + + tdp_root_for_each_pte(iter, root, start, end) { + if (!is_shadow_present_pte(iter.old_spte)) + continue; + + new_spte = iter.old_spte | shadow_dirty_mask; + + tdp_mmu_set_spte(kvm, &iter, new_spte); + spte_set = true; + + tdp_mmu_iter_cond_resched(kvm, &iter); + } + + return spte_set; +} + +/* + * Set the dirty status of all the SPTEs mapping GFNs in the memslot. This is + * only used for PML, and so will involve setting the dirty bit on each SPTE. + * Returns true if an SPTE has been changed and the TLBs need to be flushed. 
+ */
+bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	struct kvm_mmu_page *root;
+	int root_as_id;
+	bool spte_set = false;
+
+	for_each_tdp_mmu_root(kvm, root) {
+		root_as_id = kvm_mmu_page_as_id(root);
+		if (root_as_id != slot->as_id)
+			continue;
+
+		/*
+		 * Take a reference on the root so that it cannot be freed if
+		 * this thread releases the MMU lock and yields in this loop.
+		 */
+		get_tdp_mmu_root(kvm, root);
+
+		spte_set = set_dirty_gfn_range(kvm, root, slot->base_gfn,
+				slot->base_gfn + slot->npages) || spte_set;
+
+		put_tdp_mmu_root(kvm, root);
+	}
+	return spte_set;
+}
+
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 6569792f40d4f..add8bb97c56dd 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -28,4 +28,14 @@ int kvm_tdp_mmu_test_age_hva(struct kvm *kvm, unsigned long hva);
 
 int kvm_tdp_mmu_set_spte_hva(struct kvm *kvm, unsigned long address,
 			     pte_t *host_ptep);
+
+bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm, struct kvm_memory_slot *slot,
+			     int min_level);
+bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
+				  struct kvm_memory_slot *slot);
+void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
+				       struct kvm_memory_slot *slot,
+				       gfn_t gfn, unsigned long mask,
+				       bool wrprot);
+bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c6f45687ba89c..7f2e2a09ebbd9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -798,6 +798,7 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
 bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
 bool kvm_vcpu_is_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn);
+void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
 
 struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2e85392131252..2541a17ff1c45 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -143,8 +143,6 @@ static void hardware_disable_all(void);
 
 static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn);
-
 __visible bool kvm_rebooting;
 EXPORT_SYMBOL_GPL(kvm_rebooting);
 
@@ -2645,8 +2643,7 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
-static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
-				    gfn_t gfn)
+void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot, gfn_t gfn)
 {
 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
@@ -2654,6 +2651,7 @@ static void mark_page_dirty_in_slot(struct kvm_memory_slot *memslot,
 		set_bit_le(rel_gfn, memslot->dirty_bitmap);
 	}
 }
+EXPORT_SYMBOL_GPL(mark_page_dirty_in_slot);
 
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 {
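[ Illustration, not part of the patch: kvm_tdp_mmu_clear_dirty_pt_masked() above takes a 64-page window of the dirty bitmap and must reset the dirty state of exactly the GFNs whose bit is set in the mask, relative to the starting GFN. The real code walks leaf SPTEs in [gfn + __ffs(mask), gfn + BITS_PER_LONG); this invented sketch shows only the mask-to-GFN mapping. ]

#include <stdint.h>
#include <stdio.h>

static void toy_clear_dirty_masked(uint64_t gfn, unsigned long mask)
{
	while (mask) {
		int bit = __builtin_ctzl(mask); /* lowest set bit */

		printf("reset dirty state of GFN 0x%llx\n",
		       (unsigned long long)(gfn + bit));
		mask &= mask - 1; /* clear that bit */
	}
}

int main(void)
{
	toy_clear_dirty_masked(0x1000, 0x5UL); /* GFNs 0x1000 and 0x1002 */
	return 0;
}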
From patchwork Wed Oct 14 18:26:56 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838141
Date: Wed, 14 Oct 2020 11:26:56 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-17-bgardon@google.com>
Subject: [PATCH v2 16/20] kvm: x86/mmu: Support disabling dirty logging for the tdp MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews , Paolo Bonzini , Peter Xu , Sean Christopherson ,
 Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang ,
 Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon

Dirty logging ultimately breaks down MMU mappings to 4k granularity.
When dirty logging is no longer needed, these granular mappings
represent a useless performance penalty. When dirty logging is
disabled, search the paging structure for mappings that could be
re-constituted into a large page mapping.
Zap those mappings so that they can be faulted in again at a higher mapping level. Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures. This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538 Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 3 ++ arch/x86/kvm/mmu/tdp_mmu.c | 59 ++++++++++++++++++++++++++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.h | 2 ++ 3 files changed, 64 insertions(+) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b2ce57761d2f1..8fcf5e955c475 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5918,6 +5918,9 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, spin_lock(&kvm->mmu_lock); slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot, kvm_mmu_zap_collapsible_spte, true); + + if (kvm->arch.tdp_mmu_enabled) + kvm_tdp_mmu_zap_collapsible_sptes(kvm, memslot); spin_unlock(&kvm->mmu_lock); } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 099c7d68aeb1d..94624cc1df84c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1019,3 +1019,62 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot) return spte_set; } +/* + * Clear non-leaf entries (and free associated page tables) which could + * be replaced by large mappings, for GFNs within the slot. + */ +static void zap_collapsible_spte_range(struct kvm *kvm, + struct kvm_mmu_page *root, + gfn_t start, gfn_t end) +{ + struct tdp_iter iter; + kvm_pfn_t pfn; + bool spte_set = false; + + tdp_root_for_each_pte(iter, root, start, end) { + if (!is_shadow_present_pte(iter.old_spte) || + is_last_spte(iter.old_spte, iter.level)) + continue; + + pfn = spte_to_pfn(iter.old_spte); + if (kvm_is_reserved_pfn(pfn) || + !PageTransCompoundMap(pfn_to_page(pfn))) + continue; + + tdp_mmu_set_spte(kvm, &iter, 0); + spte_set = true; + + spte_set = !tdp_mmu_iter_cond_resched(kvm, &iter); + } + + if (spte_set) + kvm_flush_remote_tlbs(kvm); +} + +/* + * Clear non-leaf entries (and free associated page tables) which could + * be replaced by large mappings, for GFNs within the slot. + */ +void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, + const struct kvm_memory_slot *slot) +{ + struct kvm_mmu_page *root; + int root_as_id; + + for_each_tdp_mmu_root(kvm, root) { + root_as_id = kvm_mmu_page_as_id(root); + if (root_as_id != slot->as_id) + continue; + + /* + * Take a reference on the root so that it cannot be freed if + * this thread releases the MMU lock and yields in this loop. 
+	 */
+		get_tdp_mmu_root(kvm, root);
+
+		zap_collapsible_spte_range(kvm, root, slot->base_gfn,
+					   slot->base_gfn + slot->npages);
+
+		put_tdp_mmu_root(kvm, root);
+	}
+}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index add8bb97c56dd..dc4cdc5cc29f5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -38,4 +38,6 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 				       gfn_t gfn, unsigned long mask,
 				       bool wrprot);
 bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
+void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
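[ Illustration, not part of the patch: the heuristic in zap_collapsible_spte_range() above can be paraphrased as the predicate below. This is a sketch of the intent, not the exact upstream test; struct toy_entry and toy_backed_by_huge_page() are invented, the latter standing in for the kvm_is_reserved_pfn()/PageTransCompoundMap() checks. ]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_entry {
	bool present;
	bool is_page_table; /* non-leaf: points at a lower page table */
	uint64_t pfn;
};

/* Stand-in for asking the host MM whether the backing is a huge page. */
static bool toy_backed_by_huge_page(uint64_t pfn)
{
	(void)pfn;
	return true; /* assume THP has re-assembled the region */
}

/*
 * Once dirty logging is off, clear entries whose memory is huge-page
 * backed so the next guest access faults in one large mapping instead
 * of 512 small ones.
 */
static bool toy_should_collapse(const struct toy_entry *e)
{
	return e->present && e->is_page_table &&
	       toy_backed_by_huge_page(e->pfn);
}

int main(void)
{
	struct toy_entry e = { .present = true, .is_page_table = true,
			       .pfn = 0x1234 };

	printf("collapse? %d\n", toy_should_collapse(&e));
	return 0;
}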
From patchwork Wed Oct 14 18:26:57 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838143
Date: Wed, 14 Oct 2020 11:26:57 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-18-bgardon@google.com>
Subject: [PATCH v2 17/20] kvm: x86/mmu: Support write protection for nesting in tdp MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews , Paolo Bonzini , Peter Xu , Sean Christopherson ,
 Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang ,
 Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon

To support nested virtualization, KVM will sometimes need to write
protect pages which are part of a shadowed paging structure or are not
writable in the shadowed paging structure. Add a function to write
protect GFN mappings for this purpose.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c     | 4 +++
 arch/x86/kvm/mmu/tdp_mmu.c | 50 ++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h | 3 +++
 3 files changed, 57 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8fcf5e955c475..58d2412817c87 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1553,6 +1553,10 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
 	}
 
+	if (kvm->arch.tdp_mmu_enabled)
+		write_protected |=
+			kvm_tdp_mmu_write_protect_gfn(kvm, slot, gfn);
+
 	return write_protected;
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 94624cc1df84c..c471f2e977d11 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1078,3 +1078,53 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 		put_tdp_mmu_root(kvm, root);
 	}
 }
+
+/*
+ * Removes write access on the last level SPTE mapping this GFN and unsets the
+ * SPTE_MMU_WRITABLE bit to ensure future writes continue to be intercepted.
+ * Returns true if an SPTE was set and a TLB flush is needed.
+ */
+static bool write_protect_gfn(struct kvm *kvm, struct kvm_mmu_page *root,
+			      gfn_t gfn)
+{
+	struct tdp_iter iter;
+	u64 new_spte;
+	bool spte_set = false;
+
+	tdp_root_for_each_leaf_pte(iter, root, gfn, gfn + 1) {
+		if (!is_writable_pte(iter.old_spte))
+			break;
+
+		new_spte = iter.old_spte &
+			~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
+
+		tdp_mmu_set_spte(kvm, &iter, new_spte);
+		spte_set = true;
+	}
+
+	return spte_set;
+}
+
+/*
+ * Removes write access on the last level SPTE mapping this GFN and unsets the
+ * SPTE_MMU_WRITABLE bit to ensure future writes continue to be intercepted.
+ * Returns true if an SPTE was set and a TLB flush is needed.
+ */ +bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, + struct kvm_memory_slot *slot, gfn_t gfn) +{ + struct kvm_mmu_page *root; + int root_as_id; + bool spte_set = false; + + lockdep_assert_held(&kvm->mmu_lock); + for_each_tdp_mmu_root(kvm, root) { + root_as_id = kvm_mmu_page_as_id(root); + if (root_as_id != slot->as_id) + continue; + + spte_set = write_protect_gfn(kvm, root, gfn) || spte_set; + } + return spte_set; +} + diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h index dc4cdc5cc29f5..b66283db43221 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.h +++ b/arch/x86/kvm/mmu/tdp_mmu.h @@ -40,4 +40,7 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot); void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, const struct kvm_memory_slot *slot); + +bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, + struct kvm_memory_slot *slot, gfn_t gfn); #endif /* __KVM_X86_MMU_TDP_MMU_H */ From patchwork Wed Oct 14 18:26:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 11838147 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BD958921 for ; Wed, 14 Oct 2020 18:28:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8C0C82223F for ; Wed, 14 Oct 2020 18:28:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="mkUcMghC" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2389384AbgJNS2B (ORCPT ); Wed, 14 Oct 2020 14:28:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40220 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389386AbgJNS1q (ORCPT ); Wed, 14 Oct 2020 14:27:46 -0400 Received: from mail-pl1-x64a.google.com (mail-pl1-x64a.google.com [IPv6:2607:f8b0:4864:20::64a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 76990C0613E0 for ; Wed, 14 Oct 2020 11:27:37 -0700 (PDT) Received: by mail-pl1-x64a.google.com with SMTP id r9so67629plo.13 for ; Wed, 14 Oct 2020 11:27:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=FFBzQPOjostuVzGT0bgr6TgN0SRwyWzgmeRr5iYkwnQ=; b=mkUcMghCvfrUqEuyEm6x7qZcLWh9Ix40Wu5iZpadYLZt7PjkLYJzlDP/SaU+7Jn0+y 2X5YqavG/fKCI5ScTplhCy1hh+T+jpS4bQS3Yav8aUkFV/nl8obM6grgsgLiULLB4oRW XVFXUiBtVBEFOMDeRFuNqZwyNeDgBC3YFhWLMyrgK0N2+TW4XzJs797KdZ23h/uDKX6T 1WvmN9+gGo1nf8offO7nBCRcV6Ueqf2tOEAJQaiq5fFonxVm/Sqazq91ERUN6Akg0LQP qqXEB3LgfKtm78vqSWy7IeqW9jmMonLlHSlEah64HxTFfLUHeV/fAcAz0YFBTci05QQ7 Zclg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=FFBzQPOjostuVzGT0bgr6TgN0SRwyWzgmeRr5iYkwnQ=; b=NY4I+ytsbJfEMCLCEN/pwFvlzr+DXb0DUQwiZnuiRFd2QvVvubQEjXhk7uLq5JecZF bOr4s4zFIM4oPyPf7ljatnynGTNui/OQWzQ7zLBzj8m/PgVKlmMc4dmb7KasZWu0O9a6 +8DlyHziVqQMG5AC1uILXYCu/Qn7ZQ2CRy2Y32RryWkK9R9TI2vugnpfeqAxNevh6u+V cuwdreaFAjbmAPOkQWInS83xZiQ/C7jle9y1qDXD7IbJpNlhutaz+nhRjvLn57VhLgGJ brJL3jTFRyYth1hm5RVQReJF2QjQYTK/Y4pumYOpY540nb973bl0AfO3bjZKVIHVAKa8 qgRw== X-Gm-Message-State: 
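For illustration only (not part of the patch): kvm_tdp_mmu_write_protect_gfn
and kvm_mmu_slot_gfn_write_protect both return true when a TLB flush is
needed. A minimal sketch of a caller honoring that contract, with the
wrapper function name assumed:

	static void example_wrprot_gfn(struct kvm *kvm,
				       struct kvm_memory_slot *slot, gfn_t gfn)
	{
		bool flush;

		spin_lock(&kvm->mmu_lock);
		/* Covers both the rmap-based shadow MMU and the TDP MMU. */
		flush = kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn);
		spin_unlock(&kvm->mmu_lock);

		if (flush)
			kvm_flush_remote_tlbs(kvm);
	}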
From patchwork Wed Oct 14 18:26:58 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838147
Date: Wed, 14 Oct 2020 11:26:58 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-19-bgardon@google.com>
References: <20201014182700.2888246-1-bgardon@google.com>
Subject: [PATCH v2 18/20] kvm: x86/mmu: Support MMIO in the TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
 Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
 Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon,
 kernel test robot, Dan Carpenter
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

In order to support MMIO, KVM must be able to walk the TDP paging
structures to find mappings for a given GFN. Support this walk for
the TDP MMU.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

v2: Thanks to Dan Carpenter and kernel test robot for finding that root
was used uninitialized in get_mmio_spte.

Signed-off-by: Ben Gardon
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 arch/x86/kvm/mmu/mmu.c     | 70 ++++++++++++++++++++++++++------------
 arch/x86/kvm/mmu/tdp_mmu.c | 18 ++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h |  2 ++
 3 files changed, 69 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 58d2412817c87..2e8bf8d19c35a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3853,54 +3853,82 @@ static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	return vcpu_match_mmio_gva(vcpu, addr);
 }
 
-/* return true if reserved bit is detected on spte. */
-static bool
-walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
+/*
+ * Return the level of the lowest level SPTE added to sptes.
+ * That SPTE may be non-present.
+ */
+static int get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
 {
 	struct kvm_shadow_walk_iterator iterator;
-	u64 sptes[PT64_ROOT_MAX_LEVEL], spte = 0ull;
-	struct rsvd_bits_validate *rsvd_check;
-	int root, leaf;
-	bool reserved = false;
+	int leaf = vcpu->arch.mmu->root_level;
+	u64 spte;
 
-	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
 	walk_shadow_page_lockless_begin(vcpu);
 
-	for (shadow_walk_init(&iterator, vcpu, addr),
-	     leaf = root = iterator.level;
+	for (shadow_walk_init(&iterator, vcpu, addr);
 	     shadow_walk_okay(&iterator);
 	     __shadow_walk_next(&iterator, spte)) {
+		leaf = iterator.level;
 		spte = mmu_spte_get_lockless(iterator.sptep);
 
 		sptes[leaf - 1] = spte;
-		leaf--;
 
 		if (!is_shadow_present_pte(spte))
 			break;
+	}
+
+	walk_shadow_page_lockless_end(vcpu);
+
+	return leaf;
+}
+
+/* return true if reserved bit is detected on spte. */
+static bool get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr, u64 *sptep)
+{
+	u64 sptes[PT64_ROOT_MAX_LEVEL];
+	struct rsvd_bits_validate *rsvd_check;
+	int root = vcpu->arch.mmu->root_level;
+	int leaf;
+	int level;
+	bool reserved = false;
+
+	if (!VALID_PAGE(vcpu->arch.mmu->root_hpa)) {
+		*sptep = 0ull;
+		return reserved;
+	}
+
+	if (is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa))
+		leaf = kvm_tdp_mmu_get_walk(vcpu, addr, sptes);
+	else
+		leaf = get_walk(vcpu, addr, sptes);
+
+	rsvd_check = &vcpu->arch.mmu->shadow_zero_check;
+
+	for (level = root; level >= leaf; level--) {
+		if (!is_shadow_present_pte(sptes[level - 1]))
+			break;
 
 		/*
 		 * Use a bitwise-OR instead of a logical-OR to aggregate the
 		 * reserved bit and EPT's invalid memtype/XWR checks to avoid
 		 * adding a Jcc in the loop.
 		 */
-		reserved |= __is_bad_mt_xwr(rsvd_check, spte) |
-			    __is_rsvd_bits_set(rsvd_check, spte, iterator.level);
+		reserved |= __is_bad_mt_xwr(rsvd_check, sptes[level - 1]) |
+			    __is_rsvd_bits_set(rsvd_check, sptes[level - 1],
+					       level);
 	}
 
-	walk_shadow_page_lockless_end(vcpu);
-
 	if (reserved) {
 		pr_err("%s: detect reserved bits on spte, addr 0x%llx, dump hierarchy:\n",
 		       __func__, addr);
-		while (root > leaf) {
+		for (level = root; level >= leaf; level--)
 			pr_err("------ spte 0x%llx level %d.\n",
-			       sptes[root - 1], root);
-			root--;
-		}
+			       sptes[level - 1], level);
 	}
 
-	*sptep = spte;
+	*sptep = sptes[leaf - 1];
+
 	return reserved;
 }
 
@@ -3912,7 +3940,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	if (mmio_info_in_cache(vcpu, addr, direct))
 		return RET_PF_EMULATE;
 
-	reserved = walk_shadow_page_get_mmio_spte(vcpu, addr, &spte);
+	reserved = get_mmio_spte(vcpu, addr, &spte);
 	if (WARN_ON(reserved))
 		return -EINVAL;
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index c471f2e977d11..b1515b89606e1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1128,3 +1128,21 @@ bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 	return spte_set;
 }
 
+/*
+ * Return the level of the lowest level SPTE added to sptes.
+ * That SPTE may be non-present.
+ */
+int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes)
+{
+	struct tdp_iter iter;
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
+	int leaf = vcpu->arch.mmu->shadow_root_level;
+	gfn_t gfn = addr >> PAGE_SHIFT;
+
+	tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {
+		leaf = iter.level;
+		sptes[leaf - 1] = iter.old_spte;
+	}
+
+	return leaf;
+}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index b66283db43221..f890048dfcba5 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -43,4 +43,6 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
 
 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn);
+
+int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes);
 #endif /* __KVM_X86_MMU_TDP_MMU_H */
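For illustration only (not part of the patch): both walk functions fill
sptes[] indexed by (level - 1), from the root level down to the returned
leaf level, and the leaf entry may be non-present. A sketch of a consumer,
assuming a vcpu and an address addr in scope:

	u64 sptes[PT64_ROOT_MAX_LEVEL];
	int root = vcpu->arch.mmu->root_level;
	int leaf, level;

	leaf = is_tdp_mmu_root(vcpu->kvm, vcpu->arch.mmu->root_hpa) ?
			kvm_tdp_mmu_get_walk(vcpu, addr, sptes) :
			get_walk(vcpu, addr, sptes);

	/* Entries above the first non-present SPTE are meaningful. */
	for (level = root; level >= leaf; level--)
		pr_info("level %d: spte 0x%llx\n", level, sptes[level - 1]);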
"bgardon via sendgmr" X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:f693:9fff:fef4:a293]) (user=bgardon job=sendgmr) by 2002:a05:6214:2ed:: with SMTP id h13mr593520qvu.26.1602700058739; Wed, 14 Oct 2020 11:27:38 -0700 (PDT) Date: Wed, 14 Oct 2020 11:26:59 -0700 In-Reply-To: <20201014182700.2888246-1-bgardon@google.com> Message-Id: <20201014182700.2888246-20-bgardon@google.com> Mime-Version: 1.0 References: <20201014182700.2888246-1-bgardon@google.com> X-Mailer: git-send-email 2.28.0.1011.ga647a8990f-goog Subject: [PATCH v2 19/20] kvm: x86/mmu: Don't clear write flooding count for direct roots From: Ben Gardon To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org Cc: Cannon Matthews , Paolo Bonzini , Peter Xu , Sean Christopherson , Peter Shier , Peter Feiner , Junaid Shahid , Jim Mattson , Yulei Zhang , Wanpeng Li , Vitaly Kuznetsov , Xiao Guangrong , Ben Gardon Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Direct roots don't have a write flooding count because the guest can't affect that paging structure. Thus there's no need to clear the write flooding count on a fast CR3 switch for direct roots. Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell machine. This series introduced no new failures. This series can be viewed in Gerrit at: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538 Signed-off-by: Ben Gardon --- arch/x86/kvm/mmu/mmu.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2e8bf8d19c35a..3935c10278736 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4266,7 +4266,13 @@ static void __kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd, */ vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY); - __clear_sp_write_flooding_count(to_shadow_page(vcpu->arch.mmu->root_hpa)); + /* + * If this is a direct root page, it doesn't have a write flooding + * count. Otherwise, clear the write flooding count. 
+ */ + if (!new_role.direct) + __clear_sp_write_flooding_count( + to_shadow_page(vcpu->arch.mmu->root_hpa)); } void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd, bool skip_tlb_flush, From patchwork Wed Oct 14 18:27:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Gardon X-Patchwork-Id: 11838145 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C0A1E921 for ; Wed, 14 Oct 2020 18:28:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9801B2222C for ; Wed, 14 Oct 2020 18:28:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gqrdTZZb" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388661AbgJNS17 (ORCPT ); Wed, 14 Oct 2020 14:27:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40208 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2389404AbgJNS1r (ORCPT ); Wed, 14 Oct 2020 14:27:47 -0400 Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com [IPv6:2607:f8b0:4864:20::649]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C3F9C0613E4 for ; Wed, 14 Oct 2020 11:27:41 -0700 (PDT) Received: by mail-pl1-x649.google.com with SMTP id r16so58536pls.19 for ; Wed, 14 Oct 2020 11:27:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=OcS1EoJ2FiK+HayZIKclc/lLgHYhtZFFT/BEnIUqPtw=; b=gqrdTZZbZaq/iyuZCewS1i3wq0fFAH+YCBe+xm4GuJhyWNRLH7mIUO6ZXSAI3wTQtH X9OtBcm0pQBwuily72zEf8uLqSwvwqgKsXjevutNMjEumteq7vdF9MnbTfywihzv2XcB R/NgpsvXSqmdKPF5bemwwijWm7R4ivQHNjhhQt8SPsuQ0ijC81bCgd3J7B6ZubdKKl76 asFOECVh4dcXsfeOIk/SzZcvzts8AKnn84gTa5RM7CNxw6NO16B2wIBgNbb4F8FZHl47 mZLaTP6dzehwLPTl+JNX/GJ9i81oBJKyn+JHhyr+KSuFxwpX2se0grNjyYSGOoaMOd3q lavw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=OcS1EoJ2FiK+HayZIKclc/lLgHYhtZFFT/BEnIUqPtw=; b=tWagGiXLTtEeB1y0skbN0eiluB12PZ2/LHHHA2i4+RI7PGBKjqcu5zlMQu5CfqTzr8 SXxL1FliY+ztVZRA6nD6N+sK1f8oFUAtt+mVLZBwR0ECH9h1zZyrpcTksARrrwnzmk8T 0yuSaY2wGYfGfnwUBN84RC/4BDxLpuUAcmh/kMWGfBgMAZj1gYTe0kZwp3Psk/GVosWz KPNGHEoIZst3LEdff8uYLOOEBg47yuLQpMe9nnc0/xemBjR088wd0NDybEhU3C7xNNiV a8q9GcY35bSiT4MEuiFvxPYIq8d4EEvB93UtVN3IkdQ7iBWT7FDLdcEttAfcGItMmnnb tkxQ== X-Gm-Message-State: AOAM533VuTx5QKiSEzZLGAR7omu1lX4N+t0uEgtsMHsnxTHwBpz9Zm0q SkZPh5OcD1T1P6YiFxvg0WI4/8Nqovjg X-Google-Smtp-Source: ABdhPJwbIUToJOUkI5LeHYo/HRh0q1F3xz2yZtpM9T4o/hxwf2ISyxtWJvFoOEvLo+A8KLAUpaxEZpK8gdgo Sender: "bgardon via sendgmr" X-Received: from bgardon.sea.corp.google.com ([2620:15c:100:202:f693:9fff:fef4:a293]) (user=bgardon job=sendgmr) by 2002:a17:90b:1496:: with SMTP id js22mr522321pjb.20.1602700060562; Wed, 14 Oct 2020 11:27:40 -0700 (PDT) Date: Wed, 14 Oct 2020 11:27:00 -0700 In-Reply-To: <20201014182700.2888246-1-bgardon@google.com> Message-Id: <20201014182700.2888246-21-bgardon@google.com> Mime-Version: 1.0 References: <20201014182700.2888246-1-bgardon@google.com> X-Mailer: git-send-email 2.28.0.1011.ga647a8990f-goog Subject: [PATCH v2 20/20] kvm: x86/mmu: NX largepage recovery for TDP MMU From: Ben Gardon 
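For illustration only, a simplified model of the invariant this patch
relies on (names and structure are illustrative, not the kernel's exact
code): the write flooding count exists to detect a guest repeatedly
writing a shadowed page table, and a direct root shadows nothing the
guest can write.

	struct example_root {
		bool direct;			/* TDP/direct vs. shadowed */
		unsigned int write_flooding_count;
	};

	static void example_on_fast_pgd_switch(struct example_root *root)
	{
		/* Only indirect roots accumulate a count worth clearing. */
		if (!root->direct)
			root->write_flooding_count = 0;
	}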
From patchwork Wed Oct 14 18:27:00 2020
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 11838145
Date: Wed, 14 Oct 2020 11:27:00 -0700
In-Reply-To: <20201014182700.2888246-1-bgardon@google.com>
Message-Id: <20201014182700.2888246-21-bgardon@google.com>
References: <20201014182700.2888246-1-bgardon@google.com>
Subject: [PATCH v2 20/20] kvm: x86/mmu: NX largepage recovery for TDP MMU
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Cannon Matthews, Paolo Bonzini, Peter Xu, Sean Christopherson,
 Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson, Yulei Zhang,
 Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong, Ben Gardon
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

When KVM maps a largepage-backed region at a lower level in order to
make it executable (i.e. NX large page shattering), it reduces the TLB
performance of that region. In order to avoid making this degradation
permanent, KVM must periodically reclaim shattered NX largepages by
zapping them and allowing them to be rebuilt in the page fault handler.

With this patch, the TDP MMU does not respect KVM's rate limiting on
reclaim. It traverses the entire TDP structure every time. This will be
addressed in a future patch.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu.c          | 13 +++++++++----
 arch/x86/kvm/mmu/mmu_internal.h |  3 +++
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++++++
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3935c10278736..5c8a35e4c872b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1030,7 +1030,7 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	kvm_mmu_gfn_disallow_lpage(slot, gfn);
 }
 
-static void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	if (sp->lpage_disallowed)
 		return;
@@ -1058,7 +1058,7 @@ static void unaccount_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
 	kvm_mmu_gfn_allow_lpage(slot, gfn);
 }
 
-static void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	--kvm->stat.nx_lpage_splits;
 	sp->lpage_disallowed = false;
@@ -6362,8 +6362,13 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 				      struct kvm_mmu_page,
 				      lpage_disallowed_link);
 		WARN_ON_ONCE(!sp->lpage_disallowed);
-		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
-		WARN_ON_ONCE(sp->lpage_disallowed);
+		if (sp->tdp_mmu_page)
+			kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn,
+					sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level));
+		else {
+			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
+			WARN_ON_ONCE(sp->lpage_disallowed);
+		}
 
 		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
 			kvm_mmu_commit_zap_page(kvm, &invalid_list);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index a7230532bb845..88899a2666d86 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -299,4 +299,7 @@ static inline u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte,
 }
 
+void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
+void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index b1515b89606e1..2949759c6aa84 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -289,6 +289,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 
 		list_del(&sp->link);
 
+		if (sp->lpage_disallowed)
+			unaccount_huge_nx_page(kvm, sp);
+
 		for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
 			old_child_spte = *(pt + i);
 			*(pt + i) = 0;
@@ -567,6 +570,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			new_spte = make_nonleaf_spte(child_pt,
 						     !shadow_accessed_mask);
 
+			if (huge_page_disallowed && req_level >= iter.level)
+				account_huge_nx_page(vcpu->kvm, sp);
+
 			tdp_mmu_set_spte(vcpu->kvm, &iter, new_spte);
 		}
 	}
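For illustration only (not part of the patch), a worked example of the
range zapped in the TDP MMU branch of kvm_recover_nx_lpages() above: for
a shattered 2MiB mapping, KVM_PAGES_PER_HPAGE(sp->role.level) evaluates
to 512 (2MiB / 4KiB), so the zap covers exactly the GFNs the huge page
used to map, letting the fault handler rebuild it as a huge page.

	/* Assuming sp and kvm in scope, as in kvm_recover_nx_lpages(). */
	gfn_t start = sp->gfn;
	gfn_t end   = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);

	/* For a 2MiB page: end - start == 512 4KiB GFNs. */
	kvm_tdp_mmu_zap_gfn_range(kvm, start, end);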