From patchwork Tue Nov 7 15:00:28 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449013
From: isaku.yamahata@intel.com
Subject: [PATCH v6 01/16] KVM: TDP_MMU: Go to next level if smaller private mapping exists
Date: Tue, 7 Nov 2023 07:00:28 -0800
Message-Id: <439c7be59c35a03bced88a44567431e721fab3da.1699368363.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

A private page cannot be mapped as a large page if any smaller mapping
already exists.  KVM has to wait until all the not-yet-mapped smaller
pages are mapped, and then promote them to a larger mapping.

Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2c5257628881..d806574f7f2d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		int r;

-		if (fault->nx_huge_page_workaround_enabled)
+		if (fault->nx_huge_page_workaround_enabled ||
+		    kvm_gfn_shared_mask(vcpu->kvm))
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);

 		/*
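A rough standalone sketch of the rule patch 01 enforces, for readers
skimming the series (illustrative only: this is not the kernel's
disallowed_hugepage_adjust(), and the names below are invented):

#include <stdbool.h>

/*
 * If any smaller mapping already exists under the candidate huge
 * mapping, demote the fault one level and retry there instead of
 * installing a huge leaf over it.
 */
static int adjust_private_fault_level(int goal_level, bool smaller_mapping_exists)
{
	if (goal_level > 1 /* PG_LEVEL_4K */ && smaller_mapping_exists)
		return goal_level - 1;
	return goal_level;
}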
From patchwork Tue Nov 7 15:00:29 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449034
From: isaku.yamahata@intel.com
Subject: [PATCH v6 02/16] KVM: TDX: Pass page level to cache flush before TDX SEAMCALL
Date: Tue, 7 Nov 2023 07:00:29 -0800

From: Xiaoyao Li

tdh_mem_page_aug() will support 2MB large pages in the near future.
The cache flush then also needs to cover 2MB instead of 4KB.
Introduce a helper function that flushes the cache according to the
page size, in preparation for large pages.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
Reviewed-by: Binbin Wu
---
 arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index fd73a1731bf8..e726102d3523 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -6,6 +6,7 @@
 #include

+#include
 #include
 #include
 #include
@@ -62,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
 #endif

+static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
+{
+	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
+}
+
 /*
  * TDX module acquires its internal lock for resources. It doesn't spin to get
  * locks because of its restrictions of allowed execution time. Instead, it
@@ -94,21 +100,21 @@ static inline u64 tdx_seamcall_sept(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,

 static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }

 static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
 				   struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
 }

 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
 				   struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(page), PAGE_SIZE);
+	tdx_clflush_page(page, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out);
 }

@@ -126,21 +132,21 @@ static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,

 static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
 {
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
 }

 static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 					struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }

 static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 				   struct tdx_module_args *out)
 {
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
 }

@@ -157,13 +163,13 @@ static inline u64 tdh_mng_key_config(hpa_t tdr)

 static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
 {
-	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	tdx_clflush_page(tdr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
 }

 static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
 {
-	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	tdx_clflush_page(tdvpr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
 }
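The size math behind tdx_clflush_page(), as a standalone sketch; the
KVM_HPAGE_* macros below are equivalent to KVM's x86 definitions
(whose exact spelling differs slightly), and the main() is illustrative:

#include <assert.h>

#define PAGE_SHIFT 12
#define PG_LEVEL_4K 1
#define PG_LEVEL_2M 2
#define KVM_HPAGE_SHIFT(level)	(PAGE_SHIFT + ((level) - 1) * 9)
#define KVM_HPAGE_SIZE(level)	(1UL << KVM_HPAGE_SHIFT(level))

int main(void)
{
	/* One tdx_clflush_page() call covers the whole mapping. */
	assert(KVM_HPAGE_SIZE(PG_LEVEL_4K) == 4096UL);
	assert(KVM_HPAGE_SIZE(PG_LEVEL_2M) == 2UL * 1024 * 1024);
	return 0;
}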
From patchwork Tue Nov 7 15:00:30 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449028
From: isaku.yamahata@intel.com
Subject: [PATCH v6 03/16] KVM: TDX: Pass KVM page level to tdh_mem_page_add() and tdh_mem_page_aug()
Date: Tue, 7 Nov 2023 07:00:30 -0800

From: Xiaoyao Li

Level info is needed in tdx_clflush_page() to compute the correct page
size.  Besides, explicitly pass the level info to the SEAMCALL instead
of assuming it is zero.  It works naturally when 2MB support lands.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c     |  7 ++++---
 arch/x86/kvm/vmx/tdx_ops.h | 19 ++++++++++++-------
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 8b58d91bda4e..2d5c86e06c5f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1468,7 +1468,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 	union tdx_sept_entry entry;
 	u64 err;

-	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out);
+	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
 		tdx_unpin(kvm, pfn);
 		return -EAGAIN;
@@ -1497,6 +1497,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 			     enum pg_level level, kvm_pfn_t pfn)
 {
+	int tdx_level = pg_level_to_tdx_sept_level(level);
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	gpa_t gpa = gfn_to_gpa(gfn);
@@ -1531,8 +1532,8 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	kvm_tdx->source_pa = INVALID_PAGE;

 	do {
-		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, hpa, source_pa,
-				       &out);
+		err = tdh_mem_page_add(kvm_tdx->tdr_pa, gpa, tdx_level, hpa,
+				       source_pa, &out);
 		/*
 		 * This path is executed during populating initial guest memory
 		 * image. i.e. before running any vcpu.  Race is rare.
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index e726102d3523..0f2df7198bde 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -63,6 +63,11 @@ static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
 void pr_tdx_error(u64 op, u64 error_code, const struct tdx_module_args *out);
 #endif

+static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
+{
+	return tdx_level + 1;
+}
+
 static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
 {
 	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
@@ -104,11 +109,11 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
 }

-static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
-				   struct tdx_module_args *out)
+static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
+				   hpa_t source, struct tdx_module_args *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, gpa | level, tdr, hpa, source, out);
 }

 static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
@@ -143,11 +148,11 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
 }

-static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
 				   struct tdx_module_args *out)
 {
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
-	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
+	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, gpa | level, tdr, hpa, 0, out);
 }

 static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,
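How the level ends up encoded, as a standalone sketch.  The level
mapping is implied by tdx_sept_level_to_pg_level() above being
"tdx_level + 1", and the operand packing mirrors the "gpa | level" in
the patch, relying on the GPA being page aligned:

#include <stdint.h>
#include <assert.h>

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

static int pg_level_to_tdx_sept_level(enum pg_level level)
{
	return level - 1;	/* PG_LEVEL_4K -> 0, PG_LEVEL_2M -> 1 */
}

static uint64_t seamcall_gpa_operand(uint64_t gpa, int tdx_level)
{
	return gpa | (uint64_t)tdx_level;	/* low bits of a page-aligned GPA are free */
}

int main(void)
{
	assert(seamcall_gpa_operand(0x200000, pg_level_to_tdx_sept_level(PG_LEVEL_2M)) == 0x200001);
	return 0;
}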
From patchwork Tue Nov 7 15:00:31 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449026
From: isaku.yamahata@intel.com
Subject: [PATCH v6 04/16] KVM: TDX: Pass size to tdx_measure_page()
Date: Tue, 7 Nov 2023 07:00:31 -0800
Message-Id: <7b024367db5909ffc22e6762acd0569c3a82ccd3.1699368363.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Extend tdx_measure_page() to pass size info so that it can measure
large pages as well.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2d5c86e06c5f..a728175c4a6d 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1434,13 +1434,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }

-static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 {
 	struct tdx_module_args out;
 	u64 err;
 	int i;

-	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);
+
+	for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE) {
 		err = tdh_mr_extend(kvm_tdx->tdr_pa, gpa + i, &out);
 		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
 			pr_tdx_error(TDH_MR_EXTEND, err, &out);
@@ -1544,7 +1546,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 		tdx_unpin(kvm, pfn);
 		return -EIO;
 	} else if (measure)
-		tdx_measure_page(kvm_tdx, gpa);
+		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));

 	return 0;
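For scale, the number of TDH.MR.EXTEND calls this implies per mapping,
as a standalone sketch.  TDX_EXTENDMR_CHUNKSIZE is defined elsewhere
in the series; the 256-byte extend granularity is an assumption here,
taken from the TDX specification:

#define TDX_EXTENDMR_CHUNKSIZE 256UL

static unsigned long nr_mr_extend_calls(unsigned long size)
{
	return size / TDX_EXTENDMR_CHUNKSIZE;	/* 4KiB -> 16 calls, 2MiB -> 8192 calls */
}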
From patchwork Tue Nov 7 15:00:32 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449012
From: isaku.yamahata@intel.com
Subject: [PATCH v6 05/16] KVM: TDX: Pass size to reclaim_page()
Date: Tue, 7 Nov 2023 07:00:32 -0800
Message-Id: <31552f714f2fd8178f9467e9afaaf28ba3de3c7b.1699368363.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

A 2MB large page can be tdh_mem_page_aug()'ed to a TD directly.  In
this case, the page needs to be reclaimed and cleared at 2MB
granularity.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a728175c4a6d..0fca863faeee 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -200,12 +200,13 @@ static void tdx_disassociate_vp_on_cpu(struct kvm_vcpu *vcpu)
 	smp_call_function_single(cpu, tdx_disassociate_vp_arg, vcpu, 1);
 }

-static void tdx_clear_page(unsigned long page_pa)
+static void tdx_clear_page(unsigned long page_pa, int size)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	void *page = __va(page_pa);
 	unsigned long i;

+	WARN_ON_ONCE(size % PAGE_SIZE);
 	/*
 	 * When re-assign one page from old keyid to a new keyid, MOVDIR64B is
 	 * required to clear/write the page with new keyid to prevent integrity
@@ -214,7 +215,7 @@ static void tdx_clear_page(unsigned long page_pa)
 	 * clflush doesn't flush cache with HKID set.  The cache line could be
 	 * poisoned (even without MKTME-i), clear the poison bit.
 	 */
-	for (i = 0; i < PAGE_SIZE; i += 64)
+	for (i = 0; i < size; i += 64)
 		movdir64b(page + i, zero_page);
 	/*
 	 * MOVDIR64B store uses WC buffer.  Prevent following memory reads
@@ -223,7 +224,7 @@ static void tdx_clear_page(unsigned long page_pa)
 	__mb();
 }

-static int __tdx_reclaim_page(hpa_t pa)
+static int __tdx_reclaim_page(hpa_t pa, enum pg_level level)
 {
 	struct tdx_module_args out;
 	u64 err;
@@ -241,17 +242,19 @@ static int __tdx_reclaim_page(hpa_t pa)
 		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
 		return -EIO;
 	}
+	/* out.r8 == tdx sept page level */
+	WARN_ON_ONCE(out.r8 != pg_level_to_tdx_sept_level(level));

 	return 0;
 }

-static int tdx_reclaim_page(hpa_t pa)
+static int tdx_reclaim_page(hpa_t pa, enum pg_level level)
 {
 	int r;

-	r = __tdx_reclaim_page(pa);
+	r = __tdx_reclaim_page(pa, level);
 	if (!r)
-		tdx_clear_page(pa);
+		tdx_clear_page(pa, KVM_HPAGE_SIZE(level));
 	return r;
 }

@@ -265,7 +268,7 @@ static void tdx_reclaim_td_page(unsigned long td_page_pa)
 	 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
 	 * cache doesn't need to be flushed again.
 	 */
-	if (tdx_reclaim_page(td_page_pa))
+	if (tdx_reclaim_page(td_page_pa, PG_LEVEL_4K))
 		/*
 		 * Leak the page on failure:
 		 * tdx_reclaim_page() returns an error if and only if there's an
@@ -497,7 +500,7 @@ void tdx_vm_free(struct kvm *kvm)
 	if (!kvm_tdx->tdr_pa)
 		return;

-	if (__tdx_reclaim_page(kvm_tdx->tdr_pa))
+	if (__tdx_reclaim_page(kvm_tdx->tdr_pa, PG_LEVEL_4K))
 		return;
 	/*
 	 * TDX module maps TDR with TDX global HKID.  TDX module may access TDR
@@ -510,7 +513,7 @@ void tdx_vm_free(struct kvm *kvm)
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 		return;
 	}
-	tdx_clear_page(kvm_tdx->tdr_pa);
+	tdx_clear_page(kvm_tdx->tdr_pa, PAGE_SIZE);
 	free_page((unsigned long)__va(kvm_tdx->tdr_pa));
 	kvm_tdx->tdr_pa = 0;

@@ -1597,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		 * The HKID assigned to this TD was already freed and cache
 		 * was already flushed. We don't have to flush again.
 		 */
-		err = tdx_reclaim_page(hpa);
+		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
 		tdx_unpin(kvm, pfn);
@@ -1630,7 +1633,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 		return -EIO;
 	}
-	tdx_clear_page(hpa);
+	tdx_clear_page(hpa, PAGE_SIZE);
 	tdx_unpin(kvm, pfn);
 	return 0;
 }
@@ -1742,7 +1745,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 	 * already flushed. We don't have to flush again.
 	 */
 	if (!is_hkid_assigned(kvm_tdx))
-		return tdx_reclaim_page(__pa(private_spt));
+		return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K);

 	/*
 	 * free_private_spt() is (obviously) called when a shadow page is being
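The clearing granularity in tdx_clear_page() stays at one 64-byte
MOVDIR64B store per iteration; only the iteration count grows with the
page size.  A standalone sketch of the arithmetic:

static unsigned long nr_movdir64b_stores(unsigned long size)
{
	return size / 64;	/* 4KiB page -> 64 stores, 2MiB page -> 32768 stores */
}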
From patchwork Tue Nov 7 15:00:33 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449016
From: isaku.yamahata@intel.com
Subject: [PATCH v6 06/16] KVM: TDX: Update tdx_sept_{set,drop}_private_spte() to support large page
Date: Tue, 7 Nov 2023 07:00:33 -0800
Message-Id: <4e4cf138469fcac9115d09481cb3ba46c543bae8.1699368363.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Allow large page level AUG and REMOVE for TDX pages.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 66 ++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 0fca863faeee..31598b84811f 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1454,11 +1454,12 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }

-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
+static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
 {
-	struct page *page = pfn_to_page(pfn);
+	int i;

-	put_page(page);
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		put_page(pfn_to_page(pfn + i));
 }

 static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
@@ -1475,7 +1476,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,

 	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EAGAIN;
 	}
 	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
@@ -1492,7 +1493,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 	}
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	}

@@ -1528,7 +1529,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EINVAL;
 	}

@@ -1546,7 +1547,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	} else if (measure)
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
@@ -1559,10 +1560,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, kvm_pfn_t pfn)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;

 	/*
 	 * Because restricted mem doesn't support page migration with
@@ -1572,7 +1570,8 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * TODO: Once restricted mem introduces callback on page migration,
 	 * implement it and remove get_page/put_page().
 	 */
-	get_page(pfn_to_page(pfn));
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		get_page(pfn_to_page(pfn + i));

 	if (likely(is_td_finalized(kvm_tdx)))
 		return tdx_sept_page_aug(kvm, gfn, level, pfn);
@@ -1589,11 +1588,9 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 	gpa_t gpa = gfn_to_gpa(gfn);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	hpa_t hpa_with_hkid;
+	int r = 0;
 	u64 err;
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;

 	if (unlikely(!is_hkid_assigned(kvm_tdx))) {
 		/*
@@ -1603,7 +1600,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return 0;
 	}

@@ -1620,22 +1617,27 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		return -EIO;
 	}

-	hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
-	do {
-		/*
-		 * TDX_OPERAND_BUSY can happen on locking PAMT entry.  Because
-		 * this page was removed above, other thread shouldn't be
-		 * repeatedly operating on this page.  Just retry loop.
-		 */
-		err = tdh_phymem_page_wbinvd(hpa_with_hkid);
-	} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
-	if (KVM_BUG_ON(err, kvm)) {
-		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
-		return -EIO;
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++) {
+		hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
+		do {
+			/*
+			 * TDX_OPERAND_BUSY can happen on locking PAMT entry.
+			 * Because this page was removed above, other thread
+			 * shouldn't be repeatedly operating on this page.
+			 * Simple retry should work.
+			 */
+			err = tdh_phymem_page_wbinvd(hpa_with_hkid);
+		} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
+		if (KVM_BUG_ON(err, kvm)) {
+			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			r = -EIO;
+		} else {
+			tdx_clear_page(hpa, PAGE_SIZE);
+			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+		}
+		hpa += PAGE_SIZE;
 	}
-	tdx_clear_page(hpa, PAGE_SIZE);
-	tdx_unpin(kvm, pfn);
-	return 0;
+	return r;
 }

 static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
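The pinning granularity the patch assumes: a level-N private mapping
pins one struct page per 4K subpage.  The macro below is equivalent to
KVM's KVM_PAGES_PER_HPAGE (which divides KVM_HPAGE_SIZE by PAGE_SIZE):

#define KVM_PAGES_PER_HPAGE(level)	(1UL << (((level) - 1) * 9))
/* PG_LEVEL_4K (1) -> 1 page, PG_LEVEL_2M (2) -> 512 pages */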
From patchwork Tue Nov 7 15:00:34 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449038
From: isaku.yamahata@intel.com
Subject: [PATCH v6 07/16] KVM: MMU: Introduce level info in PFERR code
Date: Tue, 7 Nov 2023 07:00:34 -0800

From: Xiaoyao Li

For TDX, an EPT violation can happen on TDG.MEM.PAGE.ACCEPT, and
TDG.MEM.PAGE.ACCEPT carries the page level at which the TD guest wants
to accept the page.

1. KVM can map the page as 4KB while the TD guest wants to accept a
   2MB page.  The TD guest will get TDX_PAGE_SIZE_MISMATCH and should
   retry the accept at 4KB size.

2. KVM can map the page as 2MB while the TD guest wants to accept a
   4KB page.  KVM needs to honor the guest's request because a) there
   is no way to tell the guest that KVM mapped it at 2MB size, and
   b) the guest accepts at 4KB size because it knows some other 4KB
   page in the same 2MB range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's
page fault handler.  Use bits 29:31 of the KVM PF error code for this
purpose.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/mmu/mmu.c          |  5 +++++
 arch/x86/kvm/vmx/common.h       |  6 +++++-
 arch/x86/kvm/vmx/tdx.c          | 15 ++++++++++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  2 +-
 6 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index edcafcd650db..eed36c1eedb7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -261,6 +261,8 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
 #define PFERR_SGX_BIT 15
+#define PFERR_LEVEL_START_BIT 29
+#define PFERR_LEVEL_END_BIT 31
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
 #define PFERR_GUEST_ENC_BIT 34
@@ -273,6 +275,7 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_MASK BIT(PFERR_FETCH_BIT)
 #define PFERR_PK_MASK BIT(PFERR_PK_BIT)
 #define PFERR_SGX_MASK BIT(PFERR_SGX_BIT)
+#define PFERR_LEVEL_MASK GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_START_BIT)
 #define PFERR_GUEST_FINAL_MASK BIT_ULL(PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK BIT_ULL(PFERR_GUEST_PAGE_BIT)
 #define PFERR_GUEST_ENC_MASK BIT_ULL(PFERR_GUEST_ENC_BIT)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index eb17a508c5d1..265177cedf37 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4615,6 +4615,11 @@ bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma)

 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+	u8 err_level = (fault->error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT;
+
+	if (err_level)
+		fault->max_level = min(fault->max_level, err_level);
+
 	/*
 	 * If the guest's MTRRs may be used to compute the "real" memtype,
 	 * restrict the mapping level to ensure KVM uses a consistent memtype
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 027aa4175d2c..bb00433932ee 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -67,7 +67,8 @@ static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
 }

 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
-					     unsigned long exit_qualification)
+					     unsigned long exit_qualification,
+					     int err_page_level)
 {
 	u64 error_code;

@@ -90,6 +91,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (kvm_is_private_gpa(vcpu->kvm, gpa))
 		error_code |= PFERR_GUEST_ENC_MASK;

+	if (err_page_level > 0)
+		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
+
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 31598b84811f..e4167f08b58b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1803,7 +1803,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,

 static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 {
+	union tdx_ext_exit_qualification ext_exit_qual;
 	unsigned long exit_qual;
+	int err_page_level = 0;
+
+	ext_exit_qual.full = tdexit_ext_exit_qual(vcpu);
+
+	if (ext_exit_qual.type >= NUM_EXT_EXIT_QUAL) {
+		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
+		       tdexit_gpa(vcpu), ext_exit_qual.type);
+		kvm_vm_bugged(vcpu->kvm);
+		return 0;
+	} else if (ext_exit_qual.type == EXT_EXIT_QUAL_ACCEPT) {
+		err_page_level = tdx_sept_level_to_pg_level(ext_exit_qual.req_sept_level);
+	}

 	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
 		/*
@@ -1830,7 +1843,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 	}

 	trace_kvm_page_fault(vcpu, tdexit_gpa(vcpu), exit_qual);
-	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
+	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_page_level);
 }

 static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 54c3f6b83571..37ee944c36a1 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -72,6 +72,25 @@ union tdx_exit_reason {
 	u64 full;
 };

+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		:  4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	:  3;
+		u64 err_sept_level	:  3;
+		u64 err_sept_state	:  8;
+		u64 err_sept_is_leaf	:  1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 28732925792e..ae9ba0731521 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);

-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
 }

 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
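The new error-code field as a standalone encode/decode sketch; the
constants mirror the PFERR_LEVEL_* definitions in the patch, with
level 0 meaning "no level supplied":

#include <stdint.h>
#include <assert.h>

#define PFERR_LEVEL_START_BIT	29
#define PFERR_LEVEL_MASK	(((1ULL << 3) - 1) << PFERR_LEVEL_START_BIT)

static uint64_t pferr_set_level(uint64_t error_code, int level)
{
	return error_code | (((uint64_t)level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK);
}

static int pferr_get_level(uint64_t error_code)
{
	return (int)((error_code & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT);
}

int main(void)
{
	assert(pferr_get_level(pferr_set_level(0, 2)) == 2);	/* PG_LEVEL_2M */
	assert(pferr_get_level(0) == 0);			/* no level encoded */
	return 0;
}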
From patchwork Tue Nov 7 15:00:35 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13449025
From: isaku.yamahata@intel.com
Subject: [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
Date: Tue, 7 Nov 2023 07:00:35 -0800

From: Xiaoyao Li

At kvm_faultin_pfn() time, KVM doesn't yet know at which page level
the gfn will be mapped, hence it doesn't know whether to pin a 4K page
or a 2M page.  Move the pinning logic for guest private pages to right
before TDH_MEM_PAGE_ADD/AUG(), where the page level is known.

Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/vmx/tdx.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e4167f08b58b..7b81811eb404 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1454,7 +1454,8 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 	}
 }

-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
+static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
+		      enum pg_level level)
 {
 	int i;

@@ -1476,7 +1477,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,

 	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EAGAIN;
 	}
 	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
@@ -1493,7 +1494,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
 	}
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EIO;
 	}

@@ -1529,7 +1530,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EINVAL;
 	}

@@ -1547,7 +1548,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
 	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return -EIO;
 	} else if (measure)
 		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
@@ -1600,7 +1601,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn, level);
+		tdx_unpin(kvm, gfn, pfn, level);
 		return 0;
 	}

@@ -1633,7 +1634,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 			r = -EIO;
 		} else {
 			tdx_clear_page(hpa, PAGE_SIZE);
-			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+			tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K);
 		}
 		hpa += PAGE_SIZE;
 	}
From patchwork Tue Nov 7 15:00:36 2023
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13448998
From: isaku.yamahata@intel.com
Subject: [PATCH v6 09/16] KVM: TDX: Pass desired page level in err code for page fault handler
Date: Tue, 7 Nov 2023 07:00:36 -0800
Message-Id: <71943490df987be8a3a3e131b12750e8c6d82afc.1699368363.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

For TDX, an EPT violation can happen on TDG.MEM.PAGE.ACCEPT, and
TDG.MEM.PAGE.ACCEPT carries the page level at which the TD guest wants
to accept the page.

1. KVM can map the page as 4KB while the TD guest wants to accept a
   2MB page.  The TD guest will get TDX_PAGE_SIZE_MISMATCH and should
   retry the accept at 4KB size.

2. KVM can map the page as 2MB while the TD guest wants to accept a
   4KB page.  KVM needs to honor the guest's request because a) there
   is no way to tell the guest that KVM mapped it at 2MB size, and
   b) the guest accepts at 4KB size because it knows some other 4KB
   page in the same 2MB range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's
page fault handler.  Use bits 29:31 of the KVM PF error code for this
purpose.

Signed-off-by: Xiaoyao Li
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/vmx/common.h       |  2 +-
 arch/x86/kvm/vmx/tdx.c          |  7 ++++++-
 arch/x86/kvm/vmx/tdx.h          | 19 -------------------
 arch/x86/kvm/vmx/tdx_arch.h     | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  2 +-
 6 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index eed36c1eedb7..c16823f3326e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -285,6 +285,8 @@ enum x86_intercept_stage;
 				 PFERR_WRITE_MASK |		\
 				 PFERR_PRESENT_MASK)

+#define PFERR_LEVEL(err_code)	(((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC	0
 /*
diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index bb00433932ee..787f59c44abc 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -91,7 +91,7 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (kvm_is_private_gpa(vcpu->kvm, gpa))
 		error_code |= PFERR_GUEST_ENC_MASK;

-	if (err_page_level > 0)
+	if (err_page_level > PG_LEVEL_NONE)
 		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;

 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 7b81811eb404..c614ab20c191 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2713,6 +2713,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	struct kvm_tdx_init_mem_region region;
 	struct kvm_vcpu *vcpu;
 	struct page *page;
+	u64 error_code;
 	int idx, ret = 0;
 	bool added = false;

@@ -2770,7 +2771,11 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
 				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);

-		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+		/* TODO: large page support. */
+		error_code = TDX_SEPT_PFERR;
+		error_code |= (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) &
+			PFERR_LEVEL_MASK;
+		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code,
 					   PG_LEVEL_4K);
 		put_page(page);
 		if (ret)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 37ee944c36a1..54c3f6b83571 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -72,25 +72,6 @@ union tdx_exit_reason {
 	u64 full;
 };

-union tdx_ext_exit_qualification {
-	struct {
-		u64 type		:  4;
-		u64 reserved0		: 28;
-		u64 req_sept_level	:  3;
-		u64 err_sept_level	:  3;
-		u64 err_sept_state	:  8;
-		u64 err_sept_is_leaf	:  1;
-		u64 reserved1		: 17;
-	};
-	u64 full;
-};
-
-enum tdx_ext_exit_qualification_type {
-	EXT_EXIT_QUAL_NONE,
-	EXT_EXIT_QUAL_ACCEPT,
-	NUM_EXT_EXIT_QUAL,
-};
-
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;

diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 9f93250d22b9..ba41fefa47ee 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -218,4 +218,23 @@ union tdx_sept_level_state {
 	u64 raw;
 };

+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		:  4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	:  3;
+		u64 err_sept_level	:  3;
+		u64 err_sept_state	:  8;
+		u64 err_sept_is_leaf	:  1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE = 0,
+	EXT_EXIT_QUAL_ACCEPT,
+	NUM_EXT_EXIT_QUAL,
+};
+
 #endif /* __KVM_X86_TDX_ARCH_H */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae9ba0731521..fb3913df6a5d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5753,7 +5753,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && kvm_vcpu_is_illegal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);

-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, 0);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL_NONE);
 }

 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)
d="scan'208";a="388397608" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:05 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446838" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:00:59 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [PATCH v6 10/16] KVM: x86/tdp_mmu: Allocate private page table for large page split Date: Tue, 7 Nov 2023 07:00:37 -0800 Message-Id: <042103ab22aee26739a391b9f893c1dc78654e67.1699368363.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Isaku Yamahata Make tdp_mmu_alloc_sp_split() aware of private page table. Signed-off-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++ arch/x86/kvm/mmu/tdp_mmu.c | 8 ++++++-- 2 files changed, 20 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index d8d46a4d00ff..1da98be74ad2 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -202,6 +202,15 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m } } +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp) +{ + gfp &= ~__GFP_ZERO; + sp->private_spt = (void *)__get_free_page(gfp); + if (!sp->private_spt) + return -ENOMEM; + return 0; +} + static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp) { if (sp->private_spt) @@ -230,6 +239,11 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m { } +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp) +{ + return -ENOMEM; +} + static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp) { } diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index d806574f7f2d..7873e9ee82ad 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1583,8 +1583,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm sp->role = role; sp->spt = (void *)__get_free_page(gfp); - /* TODO: large page support for private GPA. 
- WARN_ON_ONCE(kvm_mmu_page_role_is_private(role)); + if (kvm_mmu_page_role_is_private(role)) { + if (kvm_alloc_private_spt_for_split(sp, gfp)) { + free_page((unsigned long)sp->spt); + sp->spt = NULL; + } + } if (!sp->spt) { kmem_cache_free(mmu_page_header_cache, sp); return NULL;

From patchwork Tue Nov 7 15:00:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Isaku Yamahata X-Patchwork-Id: 13449000 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 39F7E34CE2 for ; Tue, 7 Nov 2023 15:04:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="aRtfcDJK" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 143E1619F; Tue, 7 Nov 2023 07:02:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699369322; x=1730905322; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mUGTrGbIIGBltmJxOA24uTc5A/R5LVIH6K6kMj7lVUc=; b=aRtfcDJKqOPWXq0CFjb5KS9OcKJEPnPf/SUApdYeGYe+fCBocgzlWV07 Y8iFfPhOyYs3jYAME2ea1rnDuGAvSS2Sb7OL44P5V8XoVVbTkYAoLnxOQ zjaQkEgavqy36VWaCh/pKSDFncg+MyEvv+RUNJtUHF23eYBhuEE+5MYp8 ZX5I+cBBRK46edInrEhY8LXULUjxOkVZWkIBw2JAN0LsNh61NrZErX4C7 hkruPQPcaFNrd3MazVSxvXlFQRKGAxvuLB+8wwbyPX30T1G+cWsqO/2Zh x5ga61bqxwgciulH2IxDHIaG31KyUneN6Q7B94yFh2Tpo90IebXQ5N0pc w==; X-IronPort-AV: E=McAfee;i="6600,9927,10887"; a="388397614" X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="388397614" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:05 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446853" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:01 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [PATCH v6 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf Date: Tue, 7 Nov 2023 07:00:38 -0800 Message-Id: <8b43a9203c34b5330c4ea5901da5dac3458ac98d.1699368363.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

From: Xiaoyao Li

When TDX is enabled, a large page cannot be zapped if it contains mixed sub-pages (some private, some shared). In that case, KVM has to split the large page first.
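As an illustration, a minimal standalone sketch of the split-or-zap decision this patch adds to tdp_mmu_zap_leafs(); the helper name and the simplified types are ours, not part of the patch:

#include <stdbool.h>
#include <stdint.h>

#define PAGES_PER_2M 512ULL

/*
 * A private 2MB leaf may be zapped in one shot only when the zap range
 * [start, end) covers it entirely and all of its 4KB sub-pages share the
 * same private/shared attribute; otherwise it must be split first.
 */
static bool must_split_before_zap(uint64_t gfn, uint64_t start, uint64_t end,
				  bool mixed)
{
	uint64_t head = gfn & ~(PAGES_PER_2M - 1);	/* head GFN of the 2MB page */

	return mixed || head < start || end < head + PAGES_PER_2M;
}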
Signed-off-by: Xiaoyao Li --- arch/x86/kvm/Kconfig | 1 + arch/x86/kvm/mmu/mmu.c | 6 +-- arch/x86/kvm/mmu/mmu_internal.h | 9 +++++ arch/x86/kvm/mmu/tdp_mmu.c | 68 +++++++++++++++++++++++++++++++-- 4 files changed, 78 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index b0f103641547..557479737962 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -93,6 +93,7 @@ config KVM_INTEL tristate "KVM for Intel (and compatible) processors support" depends on KVM && IA32_FEAT_CTL select KVM_SW_PROTECTED_VM if INTEL_TDX_HOST + select KVM_GENERIC_MEMORY_ATTRIBUTES if INTEL_TDX_HOST select KVM_PRIVATE_MEM if INTEL_TDX_HOST help Provides support for KVM on processors equipped with Intel's VT diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 265177cedf37..0bf043812644 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -7463,8 +7463,8 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, return kvm_unmap_gfn_range(kvm, range); } -static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, - int level) +bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, + int level) { return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG; } @@ -7491,7 +7491,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot, return kvm_range_has_memory_attributes(kvm, start, end, attrs); for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) { - if (hugepage_test_mixed(slot, gfn, level - 1) || + if (kvm_hugepage_test_mixed(slot, gfn, level - 1) || attrs != kvm_get_memory_attributes(kvm, gfn)) return false; } diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 1da98be74ad2..653e96769956 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -460,4 +460,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES +bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level); +#else +static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level) +{ + return false; +} +#endif + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 7873e9ee82ad..a209a67decae 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -964,6 +964,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp) return true; } + +static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm, + struct tdp_iter *iter, + bool shared); + +static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter, + struct kvm_mmu_page *sp, bool shared); + /* * If can_yield is true, will release the MMU lock and reschedule if the * scheduler needs the CPU or there is contention on the MMU lock. 
If this @@ -975,13 +983,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root, gfn_t start, gfn_t end, bool can_yield, bool flush, bool zap_private) { + bool is_private = is_private_sp(root); + struct kvm_mmu_page *split_sp = NULL; struct tdp_iter iter; end = min(end, tdp_mmu_max_gfn_exclusive()); lockdep_assert_held_write(&kvm->mmu_lock); - WARN_ON_ONCE(zap_private && !is_private_sp(root)); + WARN_ON_ONCE(zap_private && !is_private); if (!zap_private && is_private_sp(root)) return false; @@ -1006,12 +1016,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root, !is_last_spte(iter.old_spte, iter.level)) continue; + if (is_private && kvm_gfn_shared_mask(kvm) && + is_large_pte(iter.old_spte)) { + gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm); + gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1; + struct kvm_memory_slot *slot; + struct kvm_mmu_page *sp; + + slot = gfn_to_memslot(kvm, gfn); + if (kvm_hugepage_test_mixed(slot, gfn, iter.level) || + (gfn & mask) < start || + end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) { + WARN_ON_ONCE(!can_yield); + if (split_sp) { + sp = split_sp; + split_sp = NULL; + sp->role = tdp_iter_child_role(&iter); + } else { + WARN_ON(iter.yielded); + if (flush && can_yield) { + kvm_flush_remote_tlbs(kvm); + flush = false; + } + sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false); + if (iter.yielded) { + split_sp = sp; + continue; + } + } + KVM_BUG_ON(!sp, kvm); + + tdp_mmu_init_sp(sp, iter.sptep, iter.gfn); + if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) { + kvm_flush_remote_tlbs(kvm); + flush = false; + /* force retry on this gfn. */ + iter.yielded = true; + } else + flush = true; + continue; + } + } + tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE); flush = true; } rcu_read_unlock(); + if (split_sp) { + WARN_ON(!can_yield); + if (flush) { + kvm_flush_remote_tlbs(kvm); + flush = false; + } + + write_unlock(&kvm->mmu_lock); + tdp_mmu_free_sp(split_sp); + write_lock(&kvm->mmu_lock); + } + /* * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need * to provide RCU protection as no 'struct kvm_mmu_page' will be freed. @@ -1606,8 +1670,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm, KVM_BUG_ON(kvm_mmu_page_role_is_private(role) != is_private_sptep(iter->sptep), kvm); - /* TODO: Large page isn't supported for private SPTE yet. 
*/ - KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm); /* * Since we are allocating while under the MMU lock we have to be

From patchwork Tue Nov 7 15:00:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Isaku Yamahata X-Patchwork-Id: 13449005 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4072034CE3 for ; Tue, 7 Nov 2023 15:04:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="IAOLPOaa" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 15D8561A1; Tue, 7 Nov 2023 07:02:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699369322; x=1730905322; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=OUVQYH8tGD1GjKHwVs4m6rotG1u9g8ZzFX1XSzVcyHc=; b=IAOLPOaaLtBredq0MvkY6iNb+OSSIHMJw0TLaeYn9xLOgaUlRCmtdouH kecLGF6oomZtE1PYr2JI0nhjtUd8mxua2tgNofqR3hyOFQAVBLmS9vGze 08XcB4Pz9sRyGBuDe+66/hgwhcM2N80PsTdT4yAYdvUPY5/aADo+Zw+6u g9ex8tAo3MvYJUQ1KuohLo36IL2q179QbFVLjRkvd8YvARt78tY1cXvM6 b0UCLs3bGyoSxL8ryP71EeN/oOcwuB/x0oJ753Sx3SCJe4UOqhHIulhR4 k8apAoThttmoVCqLE1kWtrnl6f1X1GafLk6RYlzeU08elcYQ2dseNhMxJ g==; X-IronPort-AV: E=McAfee;i="6600,9927,10887"; a="388397619" X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="388397619" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446860" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:02 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [PATCH v6 12/16] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB page within it converted to shared Date: Tue, 7 Nov 2023 07:00:39 -0800 Message-Id: <051d18f03ff70a66387ec37988d1ffd29f43f4f5.1699368363.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

From: Xiaoyao Li

When KVM maps a shared page for TDX, it needs to zap the private alias. If the private page is mapped as a 2MB large page, it can be removed directly only when the whole 2MB range has been converted to shared. Otherwise, the 2MB page has to be split into 512 4KB pages, and only the pages converted to shared are removed. When a present large leaf SPTE switches to a present non-leaf SPTE, TDX needs to split the corresponding SEPT page to reflect it.
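As a rough sketch of the ordering this demotion requires (simplified; the three helpers below stand in for the kvm_x86 ops this patch wires up, and error unwinding is elided):

/* Stand-ins for the ops added by this patch; each returns 0 on success. */
int zap_private_spte(unsigned long gfn, int level);             /* TDH.MEM.RANGE.BLOCK */
void flush_remote_tlbs(void);                                   /* TDH.MEM.TRACK + kick vCPUs */
int split_private_spt(unsigned long gfn, int level, void *spt); /* TDH.MEM.PAGE.DEMOTE */

/*
 * When a present 2MB private leaf becomes a non-leaf, the Secure-EPT side
 * must be demoted to match: block the range, make the block globally
 * visible, then demote into a freshly allocated Secure-EPT page.
 */
int demote_private_leaf(unsigned long gfn, int level, void *new_sept_page)
{
	int ret = zap_private_spte(gfn, level);

	flush_remote_tlbs();
	if (ret)
		return ret;
	return split_private_spt(gfn, level, new_sept_page);
}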
Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/include/asm/kvm-x86-ops.h | 1 + arch/x86/include/asm/kvm_host.h | 2 ++ arch/x86/kvm/mmu/tdp_mmu.c | 21 ++++++++++++++++----- arch/x86/kvm/vmx/tdx.c | 25 +++++++++++++++++++++++-- arch/x86/kvm/vmx/tdx_arch.h | 1 + arch/x86/kvm/vmx/tdx_ops.h | 7 +++++++ 6 files changed, 50 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 8ef0ed217f6e..3deb6ab4f291 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -103,6 +103,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask) KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_OPTIONAL(link_private_spt) KVM_X86_OP_OPTIONAL(free_private_spt) +KVM_X86_OP_OPTIONAL(split_private_spt) KVM_X86_OP_OPTIONAL(set_private_spte) KVM_X86_OP_OPTIONAL(remove_private_spte) KVM_X86_OP_OPTIONAL(zap_private_spte) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index c16823f3326e..e75a461bdea7 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1753,6 +1753,8 @@ struct kvm_x86_ops { void *private_spt); int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, void *private_spt); + int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + void *private_spt); int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn); int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index a209a67decae..734ee822b43c 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -599,23 +599,34 @@ static int __must_check __set_private_spte_present(struct kvm *kvm, tdp_ptep_t s { bool was_present = is_shadow_present_pte(old_spte); bool is_present = is_shadow_present_pte(new_spte); + bool was_leaf = was_present && is_last_spte(old_spte, level); bool is_leaf = is_present && is_last_spte(new_spte, level); kvm_pfn_t new_pfn = spte_to_pfn(new_spte); + void *private_spt; int ret = 0; lockdep_assert_held(&kvm->mmu_lock); - /* TDP MMU doesn't change present -> present */ - KVM_BUG_ON(was_present, kvm); /* * Use different call to either set up middle level * private page table, or leaf. */ - if (is_leaf) + if (level > PG_LEVEL_4K && was_leaf && !is_leaf) { + /* + * splitting large page into 4KB. 
+ * tdp_mmu_split_huge_page() => tdp_mmu_link_sp() + */ + private_spt = get_private_spt(gfn, new_spte, level); + KVM_BUG_ON(!private_spt, kvm); + ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level); + kvm_flush_remote_tlbs(kvm); + if (!ret) + ret = static_call(kvm_x86_split_private_spt)(kvm, gfn, + level, private_spt); + } else if (is_leaf) ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn); else { - void *private_spt = get_private_spt(gfn, new_spte, level); - + private_spt = get_private_spt(gfn, new_spte, level); KVM_BUG_ON(!private_spt, kvm); ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt); }

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index c614ab20c191..91eca578a7da 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1662,6 +1662,28 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn, return 0; } +static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn, + enum pg_level level, void *private_spt) +{ + int tdx_level = pg_level_to_tdx_sept_level(level); + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm); + gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level); + hpa_t hpa = __pa(private_spt); + struct tdx_module_args out; + u64 err; + + /* See comment in tdx_sept_set_private_spte() */ + err = tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out); + if (unlikely(err == TDX_ERROR_SEPT_BUSY)) + return -EAGAIN; + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out); + return -EIO; + } + + return 0; +} + static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn, enum pg_level level) { @@ -1675,8 +1697,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn, if (unlikely(!is_hkid_assigned(kvm_tdx))) return 0; - /* For now large page isn't supported yet.
*/ - WARN_ON_ONCE(level != PG_LEVEL_4K); err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out); if (unlikely(err == TDX_ERROR_SEPT_BUSY)) return -EAGAIN; @@ -3183,6 +3203,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops) x86_ops->link_private_spt = tdx_sept_link_private_spt; x86_ops->free_private_spt = tdx_sept_free_private_spt; + x86_ops->split_private_spt = tdx_sept_split_private_spt; x86_ops->set_private_spte = tdx_sept_set_private_spte; x86_ops->remove_private_spte = tdx_sept_remove_private_spte; x86_ops->zap_private_spte = tdx_sept_zap_private_spte; diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h index ba41fefa47ee..cab6a74446a0 100644 --- a/arch/x86/kvm/vmx/tdx_arch.h +++ b/arch/x86/kvm/vmx/tdx_arch.h @@ -21,6 +21,7 @@ #define TDH_MNG_CREATE 9 #define TDH_VP_CREATE 10 #define TDH_MNG_RD 11 +#define TDH_MEM_PAGE_DEMOTE 15 #define TDH_MR_EXTEND 16 #define TDH_MR_FINALIZE 17 #define TDH_VP_FLUSH 18 diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h index 0f2df7198bde..38ab0ab1509c 100644 --- a/arch/x86/kvm/vmx/tdx_ops.h +++ b/arch/x86/kvm/vmx/tdx_ops.h @@ -183,6 +183,13 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_args *out) return tdx_seamcall(TDH_MNG_RD, tdr, field, 0, 0, out); } +static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page, + struct tdx_module_args *out) +{ + tdx_clflush_page(page, PG_LEVEL_4K); + return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out); +} + static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa, struct tdx_module_args *out) { From patchwork Tue Nov 7 15:00:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Isaku Yamahata X-Patchwork-Id: 13449008 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6C18634CFB for ; Tue, 7 Nov 2023 15:04:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="NNHxMkqN" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB7B761B6; Tue, 7 Nov 2023 07:02:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699369322; x=1730905322; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=o/K8K5hIz7RX4ML6gqByyWD3lSWYcFLUDm9rmftbRog=; b=NNHxMkqNCoe7ltqluZExQL5ujd75lCtjVjAXAy9Hpd8H9FnGSmqQL7ow gaYJVpTEYtzya3o6D47wuIVQZ/W8OcQLoe/E/W4262+Ts8+9ErcWSgFlb 6yM0O4tMVE/+jb16rzTeBoJP7pm9PXTXOq13xYInKsqPPbrKFxo5llIZE jc0IHlds1cosMfw2uZ3/iN7PVyaIK56DKL6x3k2McRr99eviRtTr+0Gvo I8D/33Z0Yz6NhbdAYs9HunQv8qUuHJu4y9wRC6HylxFN+n43R2xaor8SL aFda0j6fZ4GzVyrcaAL+sKhuEf0xC53apPtCldhrz251RUWeI6t7niGyZ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10887"; a="388397623" X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="388397623" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446871" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:02 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [PATCH v6 13/16] KVM: x86/tdp_mmu: Try to merge pages into a large page Date: Tue, 7 Nov 2023 07:00:40 -0800 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

From: Isaku Yamahata

When a large page is passed to the KVM page fault handler and some of its sub-pages are already populated, try to merge the sub-pages into a large page. This situation can happen when the guest converts small pages to shared and then converts them back to private.

When a large page is passed to the KVM MMU page fault handler and the SPTE corresponding to the page is non-leaf (one or more sub-pages are already populated at a lower page level), the current KVM MMU zaps the non-leaf SPTE at the large page level and populates a leaf SPTE at that level; thus small pages are converted into a large page. However, this doesn't work for TDX because zapping and re-populating would zero the page contents. Instead, populate all the small pages and merge them into a large page, as sketched below.

Merging pages into a large page can fail when some sub-pages are accepted and some are not. In that case, on the assumption that the guest tries to accept at large page size for performance when possible, don't try to identify which page is still pending; map all pages at the lower page level and let the vcpu re-execute.
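A condensed sketch of the merge path this patch adds, tdp_mmu_merge_private_spt(); the helper names here are stand-ins, and locking, RCU and the per-child retry bookkeeping are elided:

#include <errno.h>

/* Stand-in helpers; each returns 0 on success. */
int freeze_spte(void);               /* cmpxchg the 2MB SPTE to REMOVED_SPTE */
int populate_missing_children(void); /* install 4KB SPTEs derived from the 2MB SPTE */
int zap_private_spte(void);          /* TDH.MEM.RANGE.BLOCK + ranged TLB flush */
int merge_private_spt(void);         /* TDH.MEM.PAGE.PROMOTE */
int unzap_private_spte(void);        /* TDH.MEM.RANGE.UNBLOCK */
void install_2mb_leaf(void);         /* unfreeze: write the large leaf SPTE */
void restore_old_spte(void);         /* unfreeze: put the old non-leaf SPTE back */

int merge_into_large_page(void)
{
	if (freeze_spte())
		return -EBUSY;		/* another thread owns the SPTE */
	if (populate_missing_children() || zap_private_spte())
		goto fail;
	if (merge_private_spt()) {
		unzap_private_spte();	/* pages still pending ACCEPT: run at 4KB */
		goto fail;
	}
	install_2mb_leaf();		/* also frees the now-unused child SEPT page */
	return 0;			/* the patch itself returns -EAGAIN here so the
					 * fault is retried against the new 2MB mapping */
fail:
	restore_old_spte();
	return -EAGAIN;
}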
Signed-off-by: Isaku Yamahata --- arch/x86/include/asm/kvm-x86-ops.h | 2 + arch/x86/include/asm/kvm_host.h | 4 + arch/x86/kvm/mmu/tdp_iter.c | 37 +++++-- arch/x86/kvm/mmu/tdp_iter.h | 2 + arch/x86/kvm/mmu/tdp_mmu.c | 172 ++++++++++++++++++++++++++++- 5 files changed, 207 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h index 3deb6ab4f291..9a7d4db304c7 100644 --- a/arch/x86/include/asm/kvm-x86-ops.h +++ b/arch/x86/include/asm/kvm-x86-ops.h @@ -104,9 +104,11 @@ KVM_X86_OP(load_mmu_pgd) KVM_X86_OP_OPTIONAL(link_private_spt) KVM_X86_OP_OPTIONAL(free_private_spt) KVM_X86_OP_OPTIONAL(split_private_spt) +KVM_X86_OP_OPTIONAL(merge_private_spt) KVM_X86_OP_OPTIONAL(set_private_spte) KVM_X86_OP_OPTIONAL(remove_private_spte) KVM_X86_OP_OPTIONAL(zap_private_spte) +KVM_X86_OP_OPTIONAL(unzap_private_spte) KVM_X86_OP(has_wbinvd_exit) KVM_X86_OP(get_l2_tsc_offset) KVM_X86_OP(get_l2_tsc_multiplier)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index e75a461bdea7..0254c382f4f0 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -146,6 +146,7 @@ #define KVM_MAX_HUGEPAGE_LEVEL PG_LEVEL_1G #define KVM_NR_PAGE_SIZES (KVM_MAX_HUGEPAGE_LEVEL - PG_LEVEL_4K + 1) #define KVM_HPAGE_GFN_SHIFT(x) (((x) - 1) * 9) +#define KVM_HPAGE_GFN_MASK(x) (~((1UL << KVM_HPAGE_GFN_SHIFT(x)) - 1)) #define KVM_HPAGE_SHIFT(x) (PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x)) #define KVM_HPAGE_SIZE(x) (1UL << KVM_HPAGE_SHIFT(x)) #define KVM_HPAGE_MASK(x) (~(KVM_HPAGE_SIZE(x) - 1)) @@ -1755,11 +1756,14 @@ struct kvm_x86_ops { void *private_spt); int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, void *private_spt); + int (*merge_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level, + void *private_spt); int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn); int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level, kvm_pfn_t pfn); int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level); + int (*unzap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level); bool (*has_wbinvd_exit)(void);

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c index bd30ebfb2f2c..f33226fcd62a 100644 --- a/arch/x86/kvm/mmu/tdp_iter.c +++ b/arch/x86/kvm/mmu/tdp_iter.c @@ -71,6 +71,14 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level) return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT); } +static void step_down(struct tdp_iter *iter, tdp_ptep_t child_pt) +{ + iter->level--; + iter->pt_path[iter->level - 1] = child_pt; + iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level); + tdp_iter_refresh_sptep(iter); +} + /* * Steps down one level in the paging structure towards the goal GFN. Returns * true if the iterator was able to step down a level, false otherwise. @@ -92,14 +100,28 @@ static bool try_step_down(struct tdp_iter *iter) if (!child_pt) return false; - iter->level--; - iter->pt_path[iter->level - 1] = child_pt; - iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level); - tdp_iter_refresh_sptep(iter); - + step_down(iter, child_pt); return true; } +/* Steps down for a frozen spte. Don't re-read sptep because it was frozen. */ +void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt) +{ + WARN_ON_ONCE(!child_pt); + WARN_ON_ONCE(iter->yielded); + WARN_ON_ONCE(iter->level == iter->min_level); + + step_down(iter, child_pt); +} + +void tdp_iter_step_side(struct tdp_iter *iter) +{ + iter->gfn += KVM_PAGES_PER_HPAGE(iter->level); + iter->next_last_level_gfn = iter->gfn; + iter->sptep++; + iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep); +} + /* * Steps to the next entry in the current page table, at the current page table * level. The next entry could point to a page backing guest memory or another @@ -117,10 +139,7 @@ static bool try_step_side(struct tdp_iter *iter) (SPTE_ENT_PER_PAGE - 1)) return false; - iter->gfn += KVM_PAGES_PER_HPAGE(iter->level); - iter->next_last_level_gfn = iter->gfn; - iter->sptep++; - iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep); + tdp_iter_step_side(iter); return true; }

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h index a9c9cd0db20a..ca00db799a50 100644 --- a/arch/x86/kvm/mmu/tdp_iter.h +++ b/arch/x86/kvm/mmu/tdp_iter.h @@ -134,6 +134,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root, int min_level, gfn_t next_last_level_gfn); void tdp_iter_next(struct tdp_iter *iter); void tdp_iter_restart(struct tdp_iter *iter); +void tdp_iter_step_side(struct tdp_iter *iter); +void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt); static inline union kvm_mmu_page_role tdp_iter_child_role(struct tdp_iter *iter) {

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 734ee822b43c..c8a4bd052c71 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1224,6 +1224,176 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm, bool skip_private) } } +static int tdp_mmu_iter_step_side(int i, struct tdp_iter *iter) +{ + i++; + + /* + * if i = SPTE_ENT_PER_PAGE, tdp_iter_step_side() results + * in reading the entry beyond the last entry. + */ + if (i < SPTE_ENT_PER_PAGE) + tdp_iter_step_side(iter); + + return i; +} + +static int tdp_mmu_merge_private_spt(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + struct tdp_iter *iter, u64 new_spte) +{ + u64 *sptep = rcu_dereference(iter->sptep); + u64 old_spte = iter->old_spte; + struct kvm_mmu_page *child_sp; + struct kvm *kvm = vcpu->kvm; + struct tdp_iter child_iter; + int level = iter->level; + gfn_t gfn = iter->gfn; + tdp_ptep_t child_pt; + u64 child_spte; + int ret = 0; + int i; + + /* + * TDX KVM supports only 2MB large pages. Merging 2MB pages into a + * 1GB page isn't supported at the moment. + */ + WARN_ON_ONCE(fault->goal_level != PG_LEVEL_2M); + WARN_ON_ONCE(iter->level != PG_LEVEL_2M); + WARN_ON_ONCE(!is_large_pte(new_spte)); + + /* Freeze the spte to prevent other threads from modifying it. */ + if (!try_cmpxchg64(sptep, &iter->old_spte, REMOVED_SPTE)) + return -EBUSY; + + /* + * Step down to the child spte. Because tdp_iter_next() assumes the + * parent spte isn't frozen, do it manually. + */ + child_pt = spte_to_child_pt(iter->old_spte, iter->level); + child_sp = sptep_to_sp(child_pt); + WARN_ON_ONCE(child_sp->role.level != PG_LEVEL_4K); + WARN_ON_ONCE(!kvm_mmu_page_role_is_private(child_sp->role)); + + /* Don't modify iter as the caller will use iter after this function. */ + child_iter = *iter; + /* Adjust the target gfn to the head gfn of the large page. */ + child_iter.next_last_level_gfn &= -KVM_PAGES_PER_HPAGE(level); + tdp_iter_step_down(&child_iter, child_pt); + + /* + * All child pages are required to be populated for merging them into a + * large page. Populate all child sptes. + */ + for (i = 0; i < SPTE_ENT_PER_PAGE; i = tdp_mmu_iter_step_side(i, &child_iter)) { + int tmp; + + WARN_ON_ONCE(child_iter.level != PG_LEVEL_4K); + + if (is_shadow_present_pte(child_iter.old_spte)) { + /* TODO: relocate page for huge page. */ + if (WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) != + spte_to_pfn(new_spte) + i)) { + if (!ret) + ret = -EAGAIN; + continue; + } + /* + * When SEPT_VE_DISABLE=true and the page state is + * pending, this case can happen. Just resume the vcpu + * again with the expectation that another vcpu will + * accept this page. + */ + if (child_iter.gfn == fault->gfn) { + if (!ret) + ret = -EAGAIN; + } + continue; + } + + child_spte = make_huge_page_split_spte(kvm, new_spte, child_sp->role, i); + /* + * Because another thread may have started to operate on this + * spte before the parent spte was frozen, use the atomic + * version to prevent races. + */ + tmp = tdp_mmu_set_spte_atomic(vcpu->kvm, &child_iter, child_spte); + if (tmp == -EBUSY || tmp == -EAGAIN) { + /* + * There was a race condition. Populate the remaining + * 4K sptes to resolve fault->gfn and guarantee forward + * progress. + */ + if (!ret) + ret = tmp; + } else if (tmp) { + ret = tmp; + goto out; + } + } + if (ret) + goto out; + + /* Prevent the Secure-EPT entry from being used. */ + ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level); + if (ret) + goto out; + kvm_flush_remote_tlbs_range(kvm, gfn & KVM_HPAGE_GFN_MASK(level), + KVM_PAGES_PER_HPAGE(level)); + + /* Merge pages into a large page. */ + ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, level, + kvm_mmu_private_spt(child_sp)); + /* + * Failed to merge pages because some pages are accepted and others are + * pending. Since the child page was mapped above, let the vcpu run. + */ + if (ret) { + if (static_call(kvm_x86_unzap_private_spte)(kvm, gfn, level)) + old_spte = SHADOW_NONPRESENT_VALUE | + (spte_to_pfn(old_spte) << PAGE_SHIFT) | + PT_PAGE_SIZE_MASK; + goto out; + } + + /* Unfreeze the spte. */ + iter->old_spte = new_spte; + __kvm_tdp_mmu_write_spte(sptep, new_spte); + + /* + * Free the unused child sp. The Secure-EPT page was already freed at + * the TDX level by kvm_x86_merge_private_spt(). + */ + tdp_unaccount_mmu_page(kvm, child_sp); + tdp_mmu_free_sp(child_sp); + return -EAGAIN; + +out: + iter->old_spte = old_spte; + __kvm_tdp_mmu_write_spte(sptep, old_spte); + return ret; +} + +static int __tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, + struct tdp_iter *iter, u64 new_spte) +{ + /* + * The private page is mapped with smaller-size pages. For example, the + * child pages were converted from shared back to private, and the range + * can now be mapped as a large page. Try to merge the small pages into + * a large page. + */ + if (fault->slot && + kvm_gfn_shared_mask(vcpu->kvm) && + iter->level > PG_LEVEL_4K && + kvm_is_private_gpa(vcpu->kvm, fault->addr) && + is_shadow_present_pte(iter->old_spte) && + !is_large_pte(iter->old_spte)) + return tdp_mmu_merge_private_spt(vcpu, fault, iter, new_spte); + + return tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte); +} + /* * Installs a last-level SPTE to handle a TDP page fault.
* (NPT/EPT violation/misconfiguration) @@ -1265,7 +1435,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu, if (new_spte == iter->old_spte) ret = RET_PF_SPURIOUS; - else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte)) + else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte)) return RET_PF_RETRY; else if (is_shadow_present_pte(iter->old_spte) && !is_last_spte(iter->old_spte, iter->level))

From patchwork Tue Nov 7 15:00:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Isaku Yamahata X-Patchwork-Id: 13448999 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DF35734CD3 for ; Tue, 7 Nov 2023 15:04:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KMU3p1k0" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13D51619C; Tue, 7 Nov 2023 07:02:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699369322; x=1730905322; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sfcvaEnOpo4ero8hHRfU6N/0voaPVRE4zSXR2iFJIJ0=; b=KMU3p1k043Cj2m4eEp+vSg+TcV6RwEk5rc5rWkr+vSMO2i+wdAoy3FFY zwDmBoo9rhlpTAxOURUn8ziNRe9ohqJYCfzskEacS04oYPwepFFDCakfr B1gccTOq2NiX0Aso6aBVWGaHyYzWPTZcfqATnzdOG34sSh7GBXHWWRfpV 72kRFDFZxLGUcs3mW/ATbwOoq5pr6D6E47/qFe0rfEmq+j86YT0/nTNv7 HGj4egUqugMHxqJeXoykR4FwrUe69f8fnHzFWBGRVvpTzWAytR+ZC3+UJ 08fGRR7JI6xEU9cgw7hzz38vOHWA1I91rE69qbbjk8g4JMFtBsZnfxnYL Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10887"; a="388397630" X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="388397630" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446878" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:03 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [PATCH v6 14/16] KVM: x86/tdp_mmu: TDX: Implement merge pages into a large page Date: Tue, 7 Nov 2023 07:00:41 -0800 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

From: Isaku Yamahata

Implement the merge_private_spt callback using TDH.MEM.PAGE.PROMOTE.
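For reference, a sketch of how the callback below maps TDH.MEM.PAGE.PROMOTE errors to errno values. This is a non-compilable kernel-style fragment of our own shaping: TDX_EPT_INVALID_PROMOTE_CONDITIONS is added by the tdx_errno.h hunk in this patch, while TDX_ERROR_SEPT_BUSY and TDX_OPERAND_ID_RCX come from KVM's existing TDX error definitions.

static int promote_err_to_errno(u64 err)
{
	if (err == TDX_ERROR_SEPT_BUSY)
		return -EAGAIN;	/* contention on the SEPT entry: retry later */
	if (err == (TDX_EPT_INVALID_PROMOTE_CONDITIONS | TDX_OPERAND_ID_RCX))
		return -EAGAIN;	/* some 4KB pages not yet ACCEPTed: let the TD run */
	if (err)
		return -EIO;	/* anything else is treated as a KVM bug */
	return 0;
}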
Signed-off-by: Isaku Yamahata --- arch/x86/kvm/vmx/tdx.c | 72 ++++++++++++++++++++++++++++++++++++ arch/x86/kvm/vmx/tdx_arch.h | 1 + arch/x86/kvm/vmx/tdx_errno.h | 2 + arch/x86/kvm/vmx/tdx_ops.h | 6 +++ 4 files changed, 81 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index 91eca578a7da..df53a971ff21 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -1684,6 +1684,49 @@ static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn, return 0; } +static int tdx_sept_merge_private_spt(struct kvm *kvm, gfn_t gfn, + enum pg_level level, void *private_spt) +{ + int tdx_level = pg_level_to_tdx_sept_level(level); + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm); + struct tdx_module_args out; + gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level); + u64 err; + + /* See comment in tdx_sept_set_private_spte() */ + err = tdh_mem_page_promote(kvm_tdx->tdr_pa, gpa, tdx_level, &out); + if (unlikely(err == TDX_ERROR_SEPT_BUSY)) + return -EAGAIN; + if (unlikely(err == (TDX_EPT_INVALID_PROMOTE_CONDITIONS | + TDX_OPERAND_ID_RCX))) + /* + * Some pages are accepted and some are pending. Need to wait + * for the TD to accept all pages. Tell the caller. + */ + return -EAGAIN; + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_MEM_PAGE_PROMOTE, err, &out); + return -EIO; + } + WARN_ON_ONCE(out.rcx != __pa(private_spt)); + + /* + * TDH.MEM.PAGE.PROMOTE frees the Secure-EPT page for the lower level. + * Flush the cache for reuse. + */ + do { + err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(__pa(private_spt), + to_kvm_tdx(kvm)->hkid)); + } while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX))); + if (WARN_ON_ONCE(err)) { + pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL); + return -EIO; + } + + tdx_clear_page(__pa(private_spt), PAGE_SIZE); + return 0; +} + static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn, enum pg_level level) { @@ -1758,6 +1801,33 @@ static void tdx_track(struct kvm *kvm) } +static int tdx_sept_unzap_private_spte(struct kvm *kvm, gfn_t gfn, + enum pg_level level) +{ + int tdx_level = pg_level_to_tdx_sept_level(level); + struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm); + gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level); + struct tdx_module_args out; + u64 err; + + do { + err = tdh_mem_range_unblock(kvm_tdx->tdr_pa, gpa, tdx_level, &out); + + /* + * tdh_mem_range_block() is accompanied by tdx_track() via kvm + * remote tlb flush. Wait for the caller of + * tdh_mem_range_block() to complete the TDX track.
+ */ + } while (err == (TDX_TLB_TRACKING_NOT_DONE | TDX_OPERAND_ID_SEPT)); + if (unlikely(err == TDX_ERROR_SEPT_BUSY)) + return -EAGAIN; + if (KVM_BUG_ON(err, kvm)) { + pr_tdx_error(TDH_MEM_RANGE_UNBLOCK, err, &out); + return -EIO; + } + return 0; +} + static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn, enum pg_level level, void *private_spt) { @@ -3204,9 +3274,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops) x86_ops->link_private_spt = tdx_sept_link_private_spt; x86_ops->free_private_spt = tdx_sept_free_private_spt; x86_ops->split_private_spt = tdx_sept_split_private_spt; + x86_ops->merge_private_spt = tdx_sept_merge_private_spt; x86_ops->set_private_spte = tdx_sept_set_private_spte; x86_ops->remove_private_spte = tdx_sept_remove_private_spte; x86_ops->zap_private_spte = tdx_sept_zap_private_spte; + x86_ops->unzap_private_spte = tdx_sept_unzap_private_spte; return 0; diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h index cab6a74446a0..fb7e54953a85 100644 --- a/arch/x86/kvm/vmx/tdx_arch.h +++ b/arch/x86/kvm/vmx/tdx_arch.h @@ -29,6 +29,7 @@ #define TDH_MNG_KEY_FREEID 20 #define TDH_MNG_INIT 21 #define TDH_VP_INIT 22 +#define TDH_MEM_PAGE_PROMOTE 23 #define TDH_MEM_SEPT_RD 25 #define TDH_VP_RD 26 #define TDH_MNG_KEY_RECLAIMID 27 diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h index bb093e292fef..940d6de332eb 100644 --- a/arch/x86/kvm/vmx/tdx_errno.h +++ b/arch/x86/kvm/vmx/tdx_errno.h @@ -23,6 +23,8 @@ #define TDX_FLUSHVP_NOT_DONE 0x8000082400000000ULL #define TDX_EPT_WALK_FAILED 0xC0000B0000000000ULL #define TDX_EPT_ENTRY_NOT_FREE 0xC0000B0200000000ULL +#define TDX_TLB_TRACKING_NOT_DONE 0xC0000B0800000000ULL +#define TDX_EPT_INVALID_PROMOTE_CONDITIONS 0xC0000B0900000000ULL #define TDX_EPT_ENTRY_STATE_INCORRECT 0xC0000B0D00000000ULL /* diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h index 38ab0ab1509c..774fee3b2d46 100644 --- a/arch/x86/kvm/vmx/tdx_ops.h +++ b/arch/x86/kvm/vmx/tdx_ops.h @@ -190,6 +190,12 @@ static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t pag return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, gpa | level, tdr, page, 0, out); } +static inline u64 tdh_mem_page_promote(hpa_t tdr, gpa_t gpa, int level, + struct tdx_module_args *out) +{ + return tdx_seamcall_sept(TDH_MEM_PAGE_PROMOTE, gpa | level, tdr, 0, 0, out); +} + static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa, struct tdx_module_args *out) { From patchwork Tue Nov 7 15:00:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Isaku Yamahata X-Patchwork-Id: 13449006 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E322A3589E for ; Tue, 7 Nov 2023 15:04:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Rj3Clj01" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52C91658A; Tue, 7 Nov 2023 07:02:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699369324; x=1730905324; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JDnzHoeAz9cXEruHJnpAEIxBR3VtDux3aJfiykJUT1I=; 
b=Rj3Clj01fY5v0bOMz5l43fgzfJU4XJuwPTnyeh52z7NSWf7T7FpCxnBd uy13WgICRBEOvh3y8JJwXR7WCIbu6wJ4gj6hEocnvAiKktiND0fMG2LVq /CDWBYHjbI7JzFz6DwKqksyUlriFFL+E6CjJ4IcRaaCRsre5BFGf7B+VN T7Wfuc3TuVsEuQYZKDAx6xFgw3kWaGgHdulHmAgoOnswSzwAUNEmt4STV ZaFQsuh2TMNEhdIgrIjnhE3jhWOT6M0yHWL9Zmk2SHGrrDmAgC/GUy2Le IXbIyWqxwQXwAcw13b8bcVpYhoyQEFHYf+3iOYTnhdBgE67IbAY1eI3yF w==; X-IronPort-AV: E=McAfee;i="6600,9927,10887"; a="388397634" X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="388397634" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446882" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:04 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com Subject: [PATCH v6 15/16] KVM: x86/mmu: Make kvm fault handler aware of large page of private memslot Date: Tue, 7 Nov 2023 07:00:42 -0800 Message-Id: <075de567893a2b09bdfb203ae7ecd1867e5c3d8e.1699368363.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

From: Isaku Yamahata

struct kvm_page_fault.req_level is the page level that accounts for the faulted-in page size. For now it is calculated only for conventional kvm memslots, by host_pfn_mapping_level(), which traverses the host page tables. However, host_pfn_mapping_level() cannot be used for a private kvm memslot because pages of a private kvm memslot aren't mapped into the user virtual address space. Instead, the page order is given when getting the pfn. Remember it in struct kvm_page_fault and use it.
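The order-to-level translation used by kvm_max_level_for_order() in the hunk below is roughly the following; this is a simplified standalone model of ours, with the PG_LEVEL_* constants spelled out as integers:

/* PG_LEVEL_4K == 1, PG_LEVEL_2M == 2, PG_LEVEL_1G == 3 in KVM's terms. */
static int max_level_for_order(int order)
{
	if (order >= 18)	/* 2^18 4KB pages == 1GB */
		return 3;
	if (order >= 9)		/* 2^9 4KB pages == 2MB */
		return 2;
	return 1;
}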
Signed-off-by: Isaku Yamahata --- arch/x86/kvm/mmu/mmu.c | 34 +++++++++++++++++---------------- arch/x86/kvm/mmu/mmu_internal.h | 12 +++++++++++- arch/x86/kvm/mmu/tdp_mmu.c | 2 +- 3 files changed, 30 insertions(+), 18 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 0bf043812644..0aec7c11f4e2 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3158,10 +3158,10 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, static int __kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, - gfn_t gfn, int max_level, bool is_private) + gfn_t gfn, int max_level, int host_level, + bool is_private) { struct kvm_lpage_info *linfo; - int host_level; max_level = min(max_level, max_huge_page_level); for ( ; max_level > PG_LEVEL_4K; max_level--) { @@ -3170,24 +3170,23 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm, break; } - if (is_private) - return max_level; - if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; - host_level = host_pfn_mapping_level(kvm, gfn, slot); + if (!is_private) { + WARN_ON_ONCE(host_level != PG_LEVEL_NONE); + host_level = host_pfn_mapping_level(kvm, gfn, slot); + } + WARN_ON_ONCE(host_level == PG_LEVEL_NONE); return min(host_level, max_level); } int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level) + int max_level, bool faultin_private) { - bool is_private = kvm_slot_can_be_private(slot) && - kvm_mem_is_private(kvm, gfn); - - return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private); + return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, + PG_LEVEL_NONE, faultin_private); } void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) @@ -3212,7 +3211,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault */ fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot, fault->gfn, fault->max_level, - fault->is_private); + fault->host_level, + kvm_is_faultin_private(fault)); if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed) return; @@ -4336,6 +4336,7 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { int max_order, r; + u8 max_level; if (!kvm_slot_can_be_private(fault->slot)) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); @@ -4349,8 +4350,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu, return r; } - fault->max_level = min(kvm_max_level_for_order(max_order), - fault->max_level); + max_level = kvm_max_level_for_order(max_order); + fault->host_level = max_level; + fault->max_level = min(max_level, fault->max_level); fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY); return RET_PF_CONTINUE; @@ -4400,7 +4402,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault return -EFAULT; } - if (fault->is_private) + if (kvm_is_faultin_private(fault)) return kvm_faultin_pfn_private(vcpu, fault); async = false; @@ -6809,7 +6811,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, */ if (sp->role.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, - PG_LEVEL_NUM)) { + PG_LEVEL_NUM, false)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); if (kvm_available_flush_remote_tlbs_range()) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 653e96769956..6b540a10fd67 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -357,6 +357,9 @@ struct kvm_page_fault { * is 
changing its own translation in the guest page tables. */ bool write_fault_to_shadow_pgtable; + + /* valid only for private memslot && private gfn */ + enum pg_level host_level; }; int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); @@ -451,7 +454,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level); + int max_level, bool faultin_private); void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level); @@ -469,4 +472,11 @@ static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t g } #endif +static inline bool kvm_is_faultin_private(const struct kvm_page_fault *fault) +{ + if (IS_ENABLED(CONFIG_KVM_GENERIC_PRIVATE_MEM)) + return fault->is_private && kvm_slot_can_be_private(fault->slot); + return false; +} + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index c8a4bd052c71..173e4e9053fc 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -2179,7 +2179,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, continue; max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot, - iter.gfn, PG_LEVEL_NUM); + iter.gfn, PG_LEVEL_NUM, false); if (max_mapping_level < iter.level) continue; From patchwork Tue Nov 7 15:00:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Isaku Yamahata X-Patchwork-Id: 13449010 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E778E358A1 for ; Tue, 7 Nov 2023 15:04:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="bnQvFMAN" Received: from mgamail.intel.com (mgamail.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 51A746589; Tue, 7 Nov 2023 07:02:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1699369324; x=1730905324; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Q3TJwPU2x84AeNfhtv3TuHWkPxgrJoMqRE9Tijrab38=; b=bnQvFMANpc8uGoLcImRnk2NV8XEosYgFJXizqfrWIQoRbz8lwI3qY3vD o1TP+UfE6QyJViZYg835zPWi6HF+EyUBHemfoL80tCdcBYH9bttTRqjZM uut68IuuEt6mOj0WxKBhjqpGnOK4Kn0I3nCtXxECB89v6dyYXgeaeq3uR RxUK+zebHYnj56b5RPgTRUT6mj1d47NcDdWn3ALcC9mgxfiO3LEpCl5eN FLj3VY6fUXuHoB3M0nJoOWRod4JlRe6Jwgm1tQQLVeURcE7BM8hpzzWgO D2EJBHJHABsKx2P0zTOvGmIkn+UiVgIZF3sVIyaWTaseZBXAW0IVpVe8Z w==; X-IronPort-AV: E=McAfee;i="6600,9927,10887"; a="388397638" X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="388397638" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.03,284,1694761200"; d="scan'208";a="10446887" Received: from ls.sc.intel.com (HELO localhost) ([172.25.112.31]) by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Nov 2023 07:01:05 -0800 From: isaku.yamahata@intel.com To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: 
isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini , erdemaktas@google.com, Sean Christopherson , Sagi Shahar , David Matlack , Kai Huang , Zhi Wang , chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li Subject: [PATCH v6 16/16] KVM: TDX: Allow 2MB large page for TD GUEST Date: Tue, 7 Nov 2023 07:00:43 -0800 Message-Id: <53a932848a6288232f020a8635887acab462f649.1699368363.git.isaku.yamahata@intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0

From: Xiaoyao Li

Now everything is in place to support 2MB pages for TD guests. Because the TDX module's TDH.MEM.PAGE.AUG supports both 4KB and 2MB pages, set struct kvm_arch.tdp_max_page_level to the 2MB page level.

Signed-off-by: Xiaoyao Li Signed-off-by: Isaku Yamahata --- arch/x86/kvm/mmu/tdp_mmu.c | 9 ++------- arch/x86/kvm/vmx/tdx.c | 4 ++-- 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 173e4e9053fc..3ec823d66c79 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1560,14 +1560,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) sp->nx_huge_page_disallowed = fault->huge_page_disallowed; - if (is_shadow_present_pte(iter.old_spte)) { - /* - * TODO: large page support. - * Doesn't support large page for TDX now - */ - KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm); + if (is_shadow_present_pte(iter.old_spte)) r = tdp_mmu_split_huge_page(kvm, &iter, sp, true); - } else + else r = tdp_mmu_link_sp(kvm, &iter, sp, true); /*

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c index df53a971ff21..9550a310177a 100644 --- a/arch/x86/kvm/vmx/tdx.c +++ b/arch/x86/kvm/vmx/tdx.c @@ -560,8 +560,8 @@ int tdx_vm_init(struct kvm *kvm) */ kvm_mmu_set_mmio_spte_value(kvm, 0); - /* TODO: Enable 2mb and 1gb large page support. */ - kvm->arch.tdp_max_page_level = PG_LEVEL_4K; + /* TDH.MEM.PAGE.AUG supports up to 2MB page size. */ + kvm->arch.tdp_max_page_level = PG_LEVEL_2M; /* * This function initializes only KVM software construct. It doesn't