From patchwork Tue Jan 23 00:22:16 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526603
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com,
    Xiaoyao Li, Binbin Wu
Subject: [PATCH v7 01/13] KVM: TDX: Flush cache based on page size before
 TDX SEAMCALL
Date: Mon, 22 Jan 2024 16:22:16 -0800
Message-Id: <0fc03e6439409b54e0477128c33e11438a46253f.1705965958.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

tdh_mem_page_aug() will support 2MB large pages in the near future. The
cache flush also needs to cover 2MB instead of 4KB in such cases.
Introduce a helper function that flushes the cache based on the page
size, in preparation for large pages.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
Reviewed-by: Binbin Wu
---
v6:
- catch up to the tdx_seamcall() change
---
 arch/x86/kvm/vmx/tdx_ops.h | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 3513d5df10ee..2afd927eaa45 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -6,6 +6,7 @@
 #include
+#include
 #include
 #include
 #include
@@ -58,6 +59,11 @@ static inline int pg_level_to_tdx_sept_level(enum pg_level level)
 	return level - 1;
 }
 
+static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
+{
+	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
+}
+
 /*
  * TDX module acquires its internal lock for resources. It doesn't spin to get
  * locks because of its restrictions of allowed execution time. Instead, it
@@ -95,7 +101,7 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 		.rdx = tdr,
 	};
 
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_MNG_ADDCX, &in, NULL);
 }
 
@@ -109,7 +115,7 @@ static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source
 		.r9 = source,
 	};
 
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_ADD, &in, out);
 }
 
@@ -122,7 +128,7 @@ static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
 		.r8 = page,
 	};
 
-	clflush_cache_range(__va(page), PAGE_SIZE);
+	tdx_clflush_page(page, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_SEPT_ADD, &in, out);
 }
 
@@ -155,7 +161,7 @@ static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
 		.rdx = tdvpr,
 	};
 
-	clflush_cache_range(__va(addr), PAGE_SIZE);
+	tdx_clflush_page(addr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_VP_ADDCX, &in, NULL);
 }
 
@@ -168,7 +174,7 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 		.r8 = hpa,
 	};
 
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, &in, out);
 }
 
@@ -181,7 +187,7 @@ static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 		.r8 = hpa,
 	};
 
-	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	tdx_clflush_page(hpa, PG_LEVEL_4K);
 	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, &in, out);
 }
 
@@ -212,7 +218,7 @@ static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
 		.rdx = hkid,
 	};
 
-	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	tdx_clflush_page(tdr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_MNG_CREATE, &in, NULL);
 }
 
@@ -223,7 +229,7 @@ static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
 		.rdx = tdr,
 	};
 
-	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	tdx_clflush_page(tdvpr, PG_LEVEL_4K);
 	return tdx_seamcall(TDH_VP_CREATE, &in, NULL);
 }

From patchwork Tue Jan 23 00:22:17 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526604
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 02/13] KVM: TDX: Pass KVM page level to tdh_mem_page_aug()
Date: Mon, 22 Jan 2024 16:22:17 -0800
Message-Id: <63c4832507b9b10383e00b33ce2ab6e756ecdf3b.1705965958.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Level info is needed in tdx_clflush_page() to flush the cache for the
correct page size. Besides, explicitly pass the level info to the
SEAMCALL instead of assuming it is zero. It works naturally when 2MB
support lands.
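For reference, the level encoding that tdh_mem_page_aug() relies on below
can be sketched as a small standalone helper. This is an illustration
only: tdx_aug_operand() is a hypothetical name, while
pg_level_to_tdx_sept_level() and the "gpa | level" packing come from the
patch itself.

        /*
         * Sketch: pack a GPA and a TDX SEPT level into the single RCX
         * operand of TDH.MEM.PAGE.AUG. The GPA is at least 4KB-aligned,
         * so its low bits are free to carry the level.
         */
        static inline u64 tdx_aug_operand(gpa_t gpa, enum pg_level level)
        {
                /* TDX SEPT levels: 4KB = 0, 2MB = 1; KVM's PG_LEVEL_* start at 1. */
                int tdx_level = pg_level_to_tdx_sept_level(level);

                return gpa | tdx_level;
        }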

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
v7:
- Don't pass level to tdh_mem_page_add() as it supports only 4K pages.
- catch up to the change of tdx_seamcall()
---
 arch/x86/kvm/vmx/tdx.c     |  2 +-
 arch/x86/kvm/vmx/tdx_ops.h | 12 +++++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 67bb0c4c73a7..549dec05ccad 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1520,7 +1520,7 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
 	union tdx_sept_entry entry;
 	u64 err;
 
-	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, hpa, &out);
+	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
 		tdx_unpin(kvm, pfn);
 		return -EAGAIN;
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 2afd927eaa45..ce722e917d14 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -59,6 +59,11 @@ static inline int pg_level_to_tdx_sept_level(enum pg_level level)
 	return level - 1;
 }
 
+static inline enum pg_level tdx_sept_level_to_pg_level(int tdx_level)
+{
+	return tdx_level + 1;
+}
+
 static inline void tdx_clflush_page(hpa_t addr, enum pg_level level)
 {
 	clflush_cache_range(__va(addr), KVM_HPAGE_SIZE(level));
@@ -108,6 +113,7 @@ static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
 static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
 				   struct tdx_module_args *out)
 {
+	/* TDH.MEM.PAGE.ADD() supports only 4K pages. TDX 4K page level = 0 */
 	struct tdx_module_args in = {
 		.rcx = gpa,
 		.rdx = tdr,
@@ -178,16 +184,16 @@ static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
 	return tdx_seamcall_sept(TDH_MEM_PAGE_RELOCATE, &in, out);
 }
 
-static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, int level, hpa_t hpa,
 				   struct tdx_module_args *out)
 {
 	struct tdx_module_args in = {
-		.rcx = gpa,
+		.rcx = gpa | level,
 		.rdx = tdr,
 		.r8 = hpa,
 	};
 
-	tdx_clflush_page(hpa, PG_LEVEL_4K);
+	tdx_clflush_page(hpa, tdx_sept_level_to_pg_level(level));
 	return tdx_seamcall_sept(TDH_MEM_PAGE_AUG, &in, out);
 }

From patchwork Tue Jan 23 00:22:18 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526605
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 03/13] KVM: TDX: Pass size to reclaim_page()
Date: Mon, 22 Jan 2024 16:22:18 -0800

From: Xiaoyao Li

A 2MB large page can be tdh_mem_page_aug()'ed to a TD directly. In this
case, the page needs to be reclaimed and cleared at 2MB size.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
v5:
- Change type of page size from int to unsigned long
---
 arch/x86/kvm/vmx/tdx.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 549dec05ccad..68f3a4c40be4 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -277,12 +277,13 @@ static void tdx_disassociate_vp_on_cpu(struct kvm_vcpu *vcpu)
 	smp_call_function_single(cpu, tdx_disassociate_vp_arg, vcpu, 1);
 }
 
-static void tdx_clear_page(unsigned long page_pa)
+static void tdx_clear_page(unsigned long page_pa, unsigned long size)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	void *page = __va(page_pa);
 	unsigned long i;
 
+	WARN_ON_ONCE(size % PAGE_SIZE);
 	/*
 	 * When re-assign one page from old keyid to a new keyid, MOVDIR64B is
 	 * required to clear/write the page with new keyid to prevent integrity
 	 * clflush doesn't flush cache with HKID set. The cache line could be
 	 * poisoned (even without MKTME-i), clear the poison bit.
 	 */
-	for (i = 0; i < PAGE_SIZE; i += 64)
+	for (i = 0; i < size; i += 64)
 		movdir64b(page + i, zero_page);
 	/*
 	 * MOVDIR64B store uses WC buffer. Prevent following memory reads
@@ -300,7 +301,7 @@ static void tdx_clear_page(unsigned long page_pa)
 	__mb();
 }
 
-static int __tdx_reclaim_page(hpa_t pa)
+static int __tdx_reclaim_page(hpa_t pa, enum pg_level level)
 {
 	struct tdx_module_args out;
 	u64 err;
@@ -318,17 +319,19 @@ static int __tdx_reclaim_page(hpa_t pa)
 		pr_tdx_error(TDH_PHYMEM_PAGE_RECLAIM, err, &out);
 		return -EIO;
 	}
+	/* out.r8 == tdx sept page level */
+	WARN_ON_ONCE(out.r8 != pg_level_to_tdx_sept_level(level));
 
 	return 0;
 }
 
-static int tdx_reclaim_page(hpa_t pa)
+static int tdx_reclaim_page(hpa_t pa, enum pg_level level)
 {
 	int r;
 
-	r = __tdx_reclaim_page(pa);
+	r = __tdx_reclaim_page(pa, level);
 	if (!r)
-		tdx_clear_page(pa);
+		tdx_clear_page(pa, KVM_HPAGE_SIZE(level));
 	return r;
 }
 
@@ -342,7 +345,7 @@ static void tdx_reclaim_control_page(unsigned long td_page_pa)
 	 * was already flushed by TDH.PHYMEM.CACHE.WB before here, So
 	 * cache doesn't need to be flushed again.
 	 */
-	if (tdx_reclaim_page(td_page_pa))
+	if (tdx_reclaim_page(td_page_pa, PG_LEVEL_4K))
 		/*
 		 * Leak the page on failure:
 		 * tdx_reclaim_page() returns an error if and only if there's an
@@ -573,7 +576,7 @@ void tdx_vm_free(struct kvm *kvm)
 	if (!kvm_tdx->tdr_pa)
 		return;
 
-	if (__tdx_reclaim_page(kvm_tdx->tdr_pa))
+	if (__tdx_reclaim_page(kvm_tdx->tdr_pa, PG_LEVEL_4K))
 		return;
 	/*
 	 * TDX module maps TDR with TDX global HKID. TDX module may access TDR
@@ -586,7 +589,7 @@ void tdx_vm_free(struct kvm *kvm)
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 		return;
 	}
-	tdx_clear_page(kvm_tdx->tdr_pa);
+	tdx_clear_page(kvm_tdx->tdr_pa, PAGE_SIZE);
 	free_page((unsigned long)__va(kvm_tdx->tdr_pa));
 	kvm_tdx->tdr_pa = 0;
 
@@ -1654,7 +1657,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 	 * The HKID assigned to this TD was already freed and cache
 	 * was already flushed. We don't have to flush again.
 	 */
-	err = tdx_reclaim_page(hpa);
+	err = tdx_reclaim_page(hpa, level);
 	if (KVM_BUG_ON(err, kvm))
 		return -EIO;
 	tdx_unpin(kvm, pfn);
@@ -1687,7 +1690,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
 		return -EIO;
 	}
-	tdx_clear_page(hpa);
+	tdx_clear_page(hpa, PAGE_SIZE);
 	tdx_unpin(kvm, pfn);
 	return 0;
 }
 
@@ -1799,7 +1802,7 @@ static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 	 * already flushed. We don't have to flush again.
 	 */
 	if (!is_hkid_assigned(kvm_tdx))
-		return tdx_reclaim_page(__pa(private_spt));
+		return tdx_reclaim_page(__pa(private_spt), PG_LEVEL_4K);
 
 	/*
 	 * free_private_spt() is (obviously) called when a shadow page is being

From patchwork Tue Jan 23 00:22:19 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526606
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 04/13] KVM: TDX: Update
 tdx_sept_{set,drop}_private_spte() to support large page
Date: Mon, 22 Jan 2024 16:22:19 -0800
Message-Id: <4a2f6212b3efb1fa7a51f0eafc4ed333e08eb07d.1705965958.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Allow large page level AUG and REMOVE for TDX pages.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/vmx/tdx.c | 68 ++++++++++++++++++++++--------------------
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 68f3a4c40be4..e2a0d521f806 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1504,11 +1504,12 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }
 
-static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn)
+static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, enum pg_level level)
 {
-	struct page *page = pfn_to_page(pfn);
+	int i;
 
-	put_page(page);
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		put_page(pfn_to_page(pfn + i));
 }
 
 static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
@@ -1525,7 +1526,7 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
 
 	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EAGAIN;
 	}
 	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
@@ -1534,7 +1535,7 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
 		if (level_state.level == tdx_level &&
 		    level_state.state == TDX_SEPT_PENDING &&
 		    entry.leaf && entry.pfn == pfn && entry.sve) {
-			tdx_unpin(kvm, pfn);
+			tdx_unpin(kvm, pfn, level);
 			WARN_ON_ONCE(!(to_kvm_tdx(kvm)->attributes &
 				       TDX_TD_ATTR_SEPT_VE_DISABLE));
 			return -EAGAIN;
@@ -1542,7 +1543,7 @@ static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
 	}
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	}
 
@@ -1578,7 +1579,7 @@ static int tdx_mem_page_add(struct kvm *kvm, gfn_t gfn,
 	 * always uses vcpu 0's page table and protected by vcpu->mutex).
 	 */
 	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EINVAL;
 	}
 
@@ -1596,7 +1597,7 @@ static int tdx_mem_page_add(struct kvm *kvm, gfn_t gfn,
 	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
 	if (KVM_BUG_ON(err, kvm)) {
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return -EIO;
 	} else if (measure) {
 		for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
@@ -1616,10 +1617,7 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, kvm_pfn_t pfn)
 {
 	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;
 
 	/*
 	 * Because restricted mem doesn't support page migration with
 	 * a_ops->migrate_page (yet), prevent page migration.
 	 * TODO: Once restricted mem introduces callback on page migration,
 	 * implement it and remove get_page/put_page().
 	 */
-	get_page(pfn_to_page(pfn));
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
+		get_page(pfn_to_page(pfn + i));
 
 	if (likely(is_td_finalized(kvm_tdx)))
 		return tdx_mem_page_aug(kvm, gfn, level, pfn);
@@ -1646,11 +1645,9 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 	gpa_t gpa = gfn_to_gpa(gfn);
 	hpa_t hpa = pfn_to_hpa(pfn);
 	hpa_t hpa_with_hkid;
+	int r = 0;
 	u64 err;
-
-	/* TODO: handle large pages. */
-	if (KVM_BUG_ON(level != PG_LEVEL_4K, kvm))
-		return -EINVAL;
+	int i;
 
 	if (unlikely(!is_hkid_assigned(kvm_tdx))) {
 		/*
@@ -1660,7 +1657,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		err = tdx_reclaim_page(hpa, level);
 		if (KVM_BUG_ON(err, kvm))
 			return -EIO;
-		tdx_unpin(kvm, pfn);
+		tdx_unpin(kvm, pfn, level);
 		return 0;
 	}
 
@@ -1677,22 +1674,27 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
 		return -EIO;
 	}
 
-	hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
-	do {
-		/*
-		 * TDX_OPERAND_BUSY can happen on locking PAMT entry. Because
-		 * this page was removed above, other thread shouldn't be
-		 * repeatedly operating on this page. Just retry loop.
-		 */
-		err = tdh_phymem_page_wbinvd(hpa_with_hkid);
-	} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
-	if (KVM_BUG_ON(err, kvm)) {
-		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
-		return -EIO;
+	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++) {
+		hpa_with_hkid = set_hkid_to_hpa(hpa, (u16)kvm_tdx->hkid);
+		do {
+			/*
+			 * TDX_OPERAND_BUSY can happen on locking PAMT entry.
+			 * Because this page was removed above, other thread
+			 * shouldn't be repeatedly operating on this page.
+			 * Simple retry should work.
+			 */
+			err = tdh_phymem_page_wbinvd(hpa_with_hkid);
+		} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
+		if (KVM_BUG_ON(err, kvm)) {
+			pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+			r = -EIO;
+		} else {
+			tdx_clear_page(hpa, PAGE_SIZE);
+			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
+		}
+		hpa += PAGE_SIZE;
 	}
-	tdx_clear_page(hpa, PAGE_SIZE);
-	tdx_unpin(kvm, pfn);
-	return 0;
+	return r;
 }
 
 static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,

From patchwork Tue Jan 23 00:22:20 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526607
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 05/13] KVM: MMU: Introduce level info in PFERR code
Date: Mon, 22 Jan 2024 16:22:20 -0800
Message-Id: <3eadceecdf5e0ed2677dbcd9d0d58963f7fa038b.1705965958.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

For TDX, an EPT violation can happen when the guest issues
TDG.MEM.PAGE.ACCEPT, and TDG.MEM.PAGE.ACCEPT carries the page level at
which the TD guest wants to accept the page.

1. KVM can map the page at 4KB while the TD guest wants to accept a 2MB
   page. The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry
   the accept at 4KB size.

2. KVM can map the page at 2MB while the TD guest wants to accept a 4KB
   page. KVM needs to honor the request because a) there is no way to
   tell the guest that KVM maps it at 2MB size, and b) the guest accepts
   it at 4KB size because it knows some other 4KB page in the same 2MB
   range will be used as a shared page.

For case 2, the desired page level needs to be passed to the KVM MMU
page fault handler. Use bits 31:29 of the KVM PF error code for this
purpose.
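A minimal sketch of the resulting encode/decode convention, using only
the macros this patch introduces (the two helper names are hypothetical,
for illustration):

        /*
         * Sketch: stash a KVM page level in bits 31:29 of a PF error code
         * and read it back. PFERR_LEVEL_START_BIT, PFERR_LEVEL_MASK and
         * PFERR_LEVEL() are the macros added by this patch.
         */
        static inline u64 pferr_encode_level(u64 error_code, u8 level)
        {
                return error_code |
                       (((u64)level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK);
        }

        static inline u8 pferr_decode_level(u64 error_code)
        {
                /* 0 (no level encoded) leaves fault->max_level unconstrained. */
                return PFERR_LEVEL(error_code);
        }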

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h | 5 +++++
 arch/x86/kvm/mmu/mmu.c          | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b83a790b01c8..3a2237ed9dba 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -262,6 +262,8 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_BIT 4
 #define PFERR_PK_BIT 5
 #define PFERR_SGX_BIT 15
+#define PFERR_LEVEL_START_BIT 29
+#define PFERR_LEVEL_END_BIT 31
 #define PFERR_GUEST_FINAL_BIT 32
 #define PFERR_GUEST_PAGE_BIT 33
 #define PFERR_GUEST_ENC_BIT 34
@@ -274,6 +276,7 @@ enum x86_intercept_stage;
 #define PFERR_FETCH_MASK BIT(PFERR_FETCH_BIT)
 #define PFERR_PK_MASK BIT(PFERR_PK_BIT)
 #define PFERR_SGX_MASK BIT(PFERR_SGX_BIT)
+#define PFERR_LEVEL_MASK GENMASK_ULL(PFERR_LEVEL_END_BIT, PFERR_LEVEL_START_BIT)
 #define PFERR_GUEST_FINAL_MASK BIT_ULL(PFERR_GUEST_FINAL_BIT)
 #define PFERR_GUEST_PAGE_MASK BIT_ULL(PFERR_GUEST_PAGE_BIT)
 #define PFERR_GUEST_ENC_MASK BIT_ULL(PFERR_GUEST_ENC_BIT)
@@ -283,6 +286,8 @@ enum x86_intercept_stage;
 				 PFERR_WRITE_MASK |		\
 				 PFERR_PRESENT_MASK)
 
+#define PFERR_LEVEL(err_code) (((err_code) & PFERR_LEVEL_MASK) >> PFERR_LEVEL_START_BIT)
+
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC 0
 /*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 53eb9508cde2..971dbd9c95cc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4611,6 +4611,11 @@ bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma)
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
+	u8 err_level = PFERR_LEVEL(fault->error_code);
+
+	if (err_level)
+		fault->max_level = min(fault->max_level, err_level);
+
 	/*
 	 * If the guest's MTRRs may be used to compute the "real" memtype,
 	 * restrict the mapping level to ensure KVM uses a consistent memtype

From patchwork Tue Jan 23 00:22:21 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526613
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 06/13] KVM: TDX: Pass desired page level in err code for
 page fault handler
Date: Mon, 22 Jan 2024 16:22:21 -0800

From: Xiaoyao Li

For TDX, an EPT violation can happen when the guest issues
TDG.MEM.PAGE.ACCEPT, and TDG.MEM.PAGE.ACCEPT carries the page level at
which the TD guest wants to accept the page.

1. KVM can map the page at 4KB while the TD guest wants to accept a 2MB
   page. The TD guest will get TDX_PAGE_SIZE_MISMATCH and should retry
   the accept at 4KB size.

2. KVM can map the page at 2MB while the TD guest wants to accept a 4KB
   page. KVM needs to honor the request because a) there is no way to
   tell the guest that KVM maps it at 2MB size, and b) the guest accepts
   it at 4KB size because it knows some other 4KB page in the same 2MB
   range will be used as a shared page.

For case 2, the desired page level needs to be passed to the MMU's page
fault handler. Use bits 31:29 of the KVM PF error code for this purpose.
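Condensed, the flow implemented by the diff below can be sketched as
follows (illustrative only; example_accept_level() is a hypothetical
name, and the error handling for invalid extended exit qualification
types is omitted):

        /*
         * Sketch: derive the guest's desired accept level from the TDX
         * extended exit qualification of an EPT violation. The union,
         * EXT_EXIT_QUAL_ACCEPT and tdx_sept_level_to_pg_level() are all
         * from this series.
         */
        static int example_accept_level(struct kvm_vcpu *vcpu)
        {
                union tdx_ext_exit_qualification q;

                q.full = tdexit_ext_exit_qual(vcpu);
                if (q.type != EXT_EXIT_QUAL_ACCEPT)
                        return PG_LEVEL_NONE;

                /* TDX SEPT level 0/1 maps to KVM's PG_LEVEL_4K/PG_LEVEL_2M. */
                return tdx_sept_level_to_pg_level(q.req_sept_level);
        }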

Signed-off-by: Xiaoyao Li
---
 arch/x86/kvm/vmx/common.h   |  6 +++++-
 arch/x86/kvm/vmx/tdx.c      | 22 ++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx_arch.h | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c      |  2 +-
 4 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/common.h b/arch/x86/kvm/vmx/common.h
index 027aa4175d2c..787f59c44abc 100644
--- a/arch/x86/kvm/vmx/common.h
+++ b/arch/x86/kvm/vmx/common.h
@@ -67,7 +67,8 @@ static inline void vmx_handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu,
 }
 
 static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
-					     unsigned long exit_qualification)
+					     unsigned long exit_qualification,
+					     int err_page_level)
 {
 	u64 error_code;
 
@@ -90,6 +91,9 @@ static inline int __vmx_handle_ept_violation(struct kvm_vcpu *vcpu, gpa_t gpa,
 	if (kvm_is_private_gpa(vcpu->kvm, gpa))
 		error_code |= PFERR_GUEST_ENC_MASK;
 
+	if (err_page_level > PG_LEVEL_NONE)
+		error_code |= (err_page_level << PFERR_LEVEL_START_BIT) & PFERR_LEVEL_MASK;
+
 	return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
 }
 
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index e2a0d521f806..747152af0882 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1858,7 +1858,20 @@ void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 
 static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 {
+	union tdx_ext_exit_qualification ext_exit_qual;
 	unsigned long exit_qual;
+	int err_page_level = 0;
+
+	ext_exit_qual.full = tdexit_ext_exit_qual(vcpu);
+
+	if (ext_exit_qual.type >= NUM_EXT_EXIT_QUAL) {
+		pr_err("EPT violation at gpa 0x%lx, with invalid ext exit qualification type 0x%x\n",
+		       tdexit_gpa(vcpu), ext_exit_qual.type);
+		kvm_vm_bugged(vcpu->kvm);
+		return 0;
+	} else if (ext_exit_qual.type == EXT_EXIT_QUAL_ACCEPT) {
+		err_page_level = tdx_sept_level_to_pg_level(ext_exit_qual.req_sept_level);
+	}
 
 	if (kvm_is_private_gpa(vcpu->kvm, tdexit_gpa(vcpu))) {
 		/*
@@ -1885,7 +1898,7 @@ static int tdx_handle_ept_violation(struct kvm_vcpu *vcpu)
 	}
 
 	trace_kvm_page_fault(vcpu, tdexit_gpa(vcpu), exit_qual);
-	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual);
+	return __vmx_handle_ept_violation(vcpu, tdexit_gpa(vcpu), exit_qual, err_page_level);
 }
 
 static int tdx_handle_ept_misconfig(struct kvm_vcpu *vcpu)
@@ -2752,6 +2765,7 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 	struct kvm_tdx_init_mem_region region;
 	struct kvm_vcpu *vcpu;
 	struct page *page;
+	u64 error_code;
 	int idx, ret = 0;
 	bool added = false;
 
@@ -2809,7 +2823,11 @@ static int tdx_init_mem_region(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
 		kvm_tdx->source_pa = pfn_to_hpa(page_to_pfn(page)) |
 				     (cmd->flags & KVM_TDX_MEASURE_MEMORY_REGION);
 
-		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, TDX_SEPT_PFERR,
+		/* TODO: large page support. */
+		error_code = TDX_SEPT_PFERR;
+		error_code |= (PG_LEVEL_4K << PFERR_LEVEL_START_BIT) &
+			PFERR_LEVEL_MASK;
+		ret = kvm_mmu_map_tdp_page(vcpu, region.gpa, error_code,
 					   PG_LEVEL_4K);
 		put_page(page);
 		if (ret)
diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index 0207cce72b27..eb62b8804cb4 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -227,6 +227,25 @@ union tdx_sept_level_state {
 	u64 raw;
 };
 
+union tdx_ext_exit_qualification {
+	struct {
+		u64 type		:  4;
+		u64 reserved0		: 28;
+		u64 req_sept_level	:  3;
+		u64 err_sept_level	:  3;
+		u64 err_sept_state	:  8;
+		u64 err_sept_is_leaf	:  1;
+		u64 reserved1		: 17;
+	};
+	u64 full;
+};
+
+enum tdx_ext_exit_qualification_type {
+	EXT_EXIT_QUAL_NONE = 0,
+	EXT_EXIT_QUAL_ACCEPT = 1,
+	NUM_EXT_EXIT_QUAL,
+};
+
 /*
  * Global scope metadata field ID.
  * See Table "Global Scope Metadata", TDX module 1.5 ABI spec.
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 79f031b2b727..695e4ad022d3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5752,7 +5752,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
 	if (unlikely(allow_smaller_maxphyaddr && !kvm_vcpu_is_legal_gpa(vcpu, gpa)))
 		return kvm_emulate_instruction(vcpu, 0);
 
-	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification);
+	return __vmx_handle_ept_violation(vcpu, gpa, exit_qualification, PG_LEVEL_NONE);
 }
 
 static int handle_ept_misconfig(struct kvm_vcpu *vcpu)

From patchwork Tue Jan 23 00:22:22 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526608
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v7 07/13] KVM: x86/tdp_mmu: Allocate private page table for
 large page split
Date: Mon, 22 Jan 2024 16:22:22 -0800
Message-Id: <2e0999bc6c5d1ebdf07d195f5e99e6c8b2141378.1705965958.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Make tdp_mmu_alloc_sp_for_split() aware of the private page table.

Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.c      |  8 ++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index e9eafc2f7885..9888ea0046ea 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -203,6 +203,15 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m
 	}
 }
 
+static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	gfp &= ~__GFP_ZERO;
+	sp->private_spt = (void *)__get_free_page(gfp);
+	if (!sp->private_spt)
+		return -ENOMEM;
+	return 0;
+}
+
 static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
 {
 	if (sp->private_spt)
@@ -231,6 +240,11 @@ static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_m
 {
 }
 
+static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
+{
+	return -ENOMEM;
+}
+
 static inline void kvm_mmu_free_private_spt(struct kvm_mmu_page *sp)
 {
 }
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 25c201686d1f..7991934b3f37 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1593,8 +1593,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm
 
 	sp->role = role;
 	sp->spt = (void *)__get_free_page(gfp);
-	/* TODO: large page support for private GPA. */
-	WARN_ON_ONCE(kvm_mmu_page_role_is_private(role));
+	if (kvm_mmu_page_role_is_private(role)) {
+		if (kvm_alloc_private_spt_for_split(sp, gfp)) {
+			free_page((unsigned long)sp->spt);
+			sp->spt = NULL;
+		}
+	}
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;

From patchwork Tue Jan 23 00:22:23 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526609
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang,
    chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 08/13] KVM: x86/tdp_mmu: Split the large page when zap leaf
Date: Mon, 22 Jan 2024 16:22:23 -0800
Message-Id: <3391bf2cf96df8744e0abc023d8af9ec677ee4e8.1705965958.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

When TDX is enabled, a large page cannot be zapped if it contains mixed
pages. In this case, the large page has to be split first.

Signed-off-by: Xiaoyao Li
---
v7:
- remove unnecessary TLB shootdown in tdp_mmu_zap_leafs() to free unused
  split_sp.
---
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu/mmu.c          |  6 ++--
 arch/x86/kvm/mmu/mmu_internal.h |  9 +++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 60 ++++++++++++++++++++++++++++++---
 4 files changed, 69 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fa00abb9ab39..1aa37d494ae9 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -89,6 +89,7 @@ config KVM_INTEL
 	tristate "KVM for Intel (and compatible) processors support"
 	depends on KVM && IA32_FEAT_CTL
 	select KVM_SW_PROTECTED_VM if INTEL_TDX_HOST
+	select KVM_GENERIC_MEMORY_ATTRIBUTES if INTEL_TDX_HOST
 	select KVM_PRIVATE_MEM if INTEL_TDX_HOST
 	help
 	  Provides support for KVM on processors equipped with Intel's VT
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 971dbd9c95cc..a9e7a3d2d362 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7461,8 +7461,8 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 	return kvm_unmap_gfn_range(kvm, range);
 }
 
-static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
-				int level)
+bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+			     int level)
 {
 	return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
 }
@@ -7489,7 +7489,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
 		return kvm_range_has_memory_attributes(kvm, start, end, attrs);
 
 	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
-		if (hugepage_test_mixed(slot, gfn, level - 1) ||
+		if (kvm_hugepage_test_mixed(slot, gfn, level - 1) ||
 		    attrs != kvm_get_memory_attributes(kvm, gfn))
 			return false;
 	}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 9888ea0046ea..cc0a95e554b5 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -461,4 +461,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
+#else
+static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	return false;
+}
+#endif
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 7991934b3f37..98de2c093815 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -953,6 +953,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	return true;
 }
 
+static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
+						       struct tdp_iter *iter,
+						       bool shared);
+
+static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
+				   struct kvm_mmu_page *sp,
+				   bool shared);
+
 /*
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock. If this
@@ -964,14 +972,16 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 			      gfn_t start, gfn_t end, bool can_yield, bool flush,
 			      bool zap_private)
 {
+	bool is_private = is_private_sp(root);
+	struct kvm_mmu_page *split_sp = NULL;
 	struct tdp_iter iter;
 
 	end = min(end, tdp_mmu_max_gfn_exclusive());
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
-	WARN_ON_ONCE(zap_private && !is_private_sp(root));
-	if (!zap_private && is_private_sp(root))
+	WARN_ON_ONCE(zap_private && !is_private);
+	if (!zap_private && is_private)
 		return false;
 
 	/*
@@ -995,12 +1005,56 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
+		if (is_private && kvm_gfn_shared_mask(kvm) &&
+		    is_large_pte(iter.old_spte)) {
+			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+			struct kvm_memory_slot *slot;
+			struct kvm_mmu_page *sp;
+
+			slot = gfn_to_memslot(kvm, gfn);
+			if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
+			    (gfn & mask) < start ||
+			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+				WARN_ON_ONCE(!can_yield);
+				if (split_sp) {
+					sp = split_sp;
+					split_sp = NULL;
+					sp->role = tdp_iter_child_role(&iter);
+				} else {
+					WARN_ON(iter.yielded);
+					if (flush && can_yield) {
+						kvm_flush_remote_tlbs(kvm);
+						flush = false;
+					}
+					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
+					if (iter.yielded) {
+						split_sp = sp;
+						continue;
+					}
+				}
+				KVM_BUG_ON(!sp, kvm);
+
+				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
+				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
+					/* force retry on this gfn. */
+					iter.yielded = true;
+					split_sp = sp;
+				} else
+					flush = true;
+				continue;
+			}
+		}
+
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
 	rcu_read_unlock();
 
+	if (split_sp)
+		tdp_mmu_free_sp(split_sp);
+
 	/*
 	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
@@ -1617,8 +1671,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
 	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
 		   is_private_sptep(iter->sptep), kvm);
-	/* TODO: Large page isn't supported for private SPTE yet. */
From patchwork Tue Jan 23 00:22:24 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526610
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini ,
    erdemaktas@google.com, Sean Christopherson , Sagi Shahar , Kai Huang ,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 09/13] KVM: x86/tdp_mmu, TDX: Split a large page when 4KB
 page within it converted to shared
Date: Mon, 22 Jan 2024 16:22:24 -0800
Message-Id:

From: Xiaoyao Li

When mapping a shared page for TDX, the private alias needs to be zapped.
If the private page is mapped as a large page (2MB), it can be removed
directly only when the whole 2MB range is converted to shared. Otherwise,
the 2MB page has to be split into 512 4KB pages, and only the pages that
were converted to shared are removed.

When a present large leaf SPTE switches to a present non-leaf SPTE, TDX
needs to split the corresponding SEPT page to reflect it.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
v7:
- catch up for tdx_seamcall() change
- typo in a comment of __set_private_spte_present()
- improved a comment in tdx_sept_split_private_spt()
v6:
- repeat TDH.MEM.PAGE.DEMOTE on TDX_INTERRUPTED_RESTARTABLE
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c         | 21 ++++++++++++++++-----
 arch/x86/kvm/vmx/tdx.c             | 27 +++++++++++++++++++++++++--
 arch/x86/kvm/vmx/tdx_arch.h        |  1 +
 arch/x86/kvm/vmx/tdx_errno.h       |  1 +
 arch/x86/kvm/vmx/tdx_ops.h         | 13 +++++++++++++
 7 files changed, 59 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 527db174d6b5..08c55c3d6e5b 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -105,6 +105,7 @@ KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
+KVM_X86_OP_OPTIONAL(split_private_spt)
 KVM_X86_OP_OPTIONAL(set_private_spte)
 KVM_X86_OP_OPTIONAL(remove_private_spte)
 KVM_X86_OP_OPTIONAL(zap_private_spte)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3a2237ed9dba..8123fad88750 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1783,6 +1783,8 @@ struct kvm_x86_ops {
 				void *private_spt);
 	int (*free_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				void *private_spt);
+	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				 void *private_spt);
 	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				kvm_pfn_t pfn);
 	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 98de2c093815..3f7307938982 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -588,23 +588,34 @@ static int __must_check __set_private_spte_present(struct kvm *kvm, tdp_ptep_t s
 {
 	bool was_present = is_shadow_present_pte(old_spte);
 	bool is_present = is_shadow_present_pte(new_spte);
+	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	kvm_pfn_t new_pfn = spte_to_pfn(new_spte);
+	void *private_spt;
 	int ret = 0;
 
 	lockdep_assert_held(&kvm->mmu_lock);
-	/* TDP MMU doesn't change present -> present */
-	KVM_BUG_ON(was_present, kvm);
 
 	/*
 	 * Use different call to either set up middle level
 	 * private page table, or leaf.
 	 */
-	if (is_leaf)
+	if (level > PG_LEVEL_4K && was_leaf && !is_leaf) {
+		/*
+		 * splitting large page into 4KB.
+		 * tdp_mmu_split_huge_page() => tdp_mmu_link_sp()
+		 */
+		private_spt = get_private_spt(gfn, new_spte, level);
+		KVM_BUG_ON(!private_spt, kvm);
+		ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
+		kvm_flush_remote_tlbs(kvm);
+		if (!ret)
+			ret = static_call(kvm_x86_split_private_spt)(kvm, gfn,
+								     level, private_spt);
+	} else if (is_leaf)
 		ret = static_call(kvm_x86_set_private_spte)(kvm, gfn, level, new_pfn);
 	else {
-		void *private_spt = get_private_spt(gfn, new_spte, level);
-
+		private_spt = get_private_spt(gfn, new_spte, level);
 		KVM_BUG_ON(!private_spt, kvm);
 		ret = static_call(kvm_x86_link_private_spt)(kvm, gfn, level, private_spt);
 	}

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 747152af0882..10dbe4a4db7a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1718,6 +1718,30 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, void *private_spt)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
+	hpa_t hpa = __pa(private_spt);
+	struct tdx_module_args out;
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() to pin pages. */
+	do {
+		err = tdh_mem_page_demote(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
+	} while (err == TDX_INTERRUPTED_RESTARTABLE);
+	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_DEMOTE, err, &out);
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level)
 {
@@ -1731,8 +1755,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 	if (unlikely(!is_hkid_assigned(kvm_tdx)))
 		return 0;
 
-	/* For now large page isn't supported yet. */
-	WARN_ON_ONCE(level != PG_LEVEL_4K);
 	err = tdh_mem_range_block(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
 	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
 		return -EAGAIN;
@@ -3286,6 +3308,7 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 
 	x86_ops->link_private_spt = tdx_sept_link_private_spt;
 	x86_ops->free_private_spt = tdx_sept_free_private_spt;
+	x86_ops->split_private_spt = tdx_sept_split_private_spt;
 	x86_ops->set_private_spte = tdx_sept_set_private_spte;
 	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
 	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;

diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index eb62b8804cb4..e663abaa3aa0 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -21,6 +21,7 @@
 #define TDH_MNG_CREATE		9
 #define TDH_VP_CREATE		10
 #define TDH_MNG_RD		11
+#define TDH_MEM_PAGE_DEMOTE	15
 #define TDH_MR_EXTEND		16
 #define TDH_MR_FINALIZE		17
 #define TDH_VP_FLUSH		18

diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
index bb093e292fef..d08b4d14e57b 100644
--- a/arch/x86/kvm/vmx/tdx_errno.h
+++ b/arch/x86/kvm/vmx/tdx_errno.h
@@ -11,6 +11,7 @@
  */
 #define TDX_NON_RECOVERABLE_VCPU	0x4000000100000000ULL
 #define TDX_INTERRUPTED_RESUMABLE	0x8000000300000000ULL
+#define TDX_INTERRUPTED_RESTARTABLE	0x8000000400000000ULL
 #define TDX_OPERAND_INVALID		0xC000010000000000ULL
 #define TDX_OPERAND_BUSY		0x8000020000000000ULL
 #define TDX_PREVIOUS_TLB_EPOCH_BUSY	0x8000020100000000ULL

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index ce722e917d14..772e2e7d61e7 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -249,6 +249,19 @@ static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_args *out)
 	return tdx_seamcall(TDH_MNG_RD, &in, out);
 }
 
+static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
+				      struct tdx_module_args *out)
+{
+	struct tdx_module_args in = {
+		.rcx = gpa | level,
+		.rdx = tdr,
+		.r8 = page,
+	};
+
+	tdx_clflush_page(page, PG_LEVEL_4K);
+	return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, &in, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_args *out)
 {
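tdh_mem_page_demote() above is used with the status-handling idiom this
series applies to the Secure-EPT SEAMCALLs: restart while the module reports
TDX_INTERRUPTED_RESTARTABLE, map a busy Secure-EPT lock to -EAGAIN so the
caller retries, and treat anything else as fatal. A compact sketch of that
idiom with a stubbed SEAMCALL; TDX_INTERRUPTED_RESTARTABLE matches the
tdx_errno.h value above, while the TDX_ERROR_SEPT_BUSY value here is only
illustrative:

#include <errno.h>
#include <stdint.h>

#define TDX_SUCCESS			0x0ULL
#define TDX_INTERRUPTED_RESTARTABLE	0x8000000400000000ULL
#define TDX_ERROR_SEPT_BUSY		0x8000020200000000ULL	/* illustrative */

/* Hypothetical stand-in for the real SEAMCALL wrapper: the first attempt
 * is "interrupted", the second succeeds. */
static uint64_t seamcall_demote_stub(int *attempts)
{
	return (*attempts)++ ? TDX_SUCCESS : TDX_INTERRUPTED_RESTARTABLE;
}

static int demote_with_retries(void)
{
	int attempts = 0;
	uint64_t err;

	/* Restart the call as long as the module says it was interrupted. */
	do {
		err = seamcall_demote_stub(&attempts);
	} while (err == TDX_INTERRUPTED_RESTARTABLE);

	if (err == TDX_ERROR_SEPT_BUSY)
		return -EAGAIN;	/* contention: let the caller retry the operation */
	if (err != TDX_SUCCESS)
		return -EIO;	/* unexpected status: surface as a fatal error */
	return 0;
}

int main(void) { return demote_with_retries(); }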
From patchwork Tue Jan 23 00:22:25 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526611
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini ,
    erdemaktas@google.com, Sean Christopherson , Sagi Shahar , Kai Huang ,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v7 10/13] KVM: x86/tdp_mmu: Try to merge pages into a large
 page
Date: Mon, 22 Jan 2024 16:22:25 -0800
Message-Id: <5dec631851838d86314b86b6ebe95a1c7d77f386.1705965958.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

When a large page is passed to the KVM page fault handler and some of its
sub-pages are already populated, try to merge the sub-pages into a large
page. This situation can happen when the guest converts small pages into
shared and then converts them back into private.

When a large page is passed to the KVM MMU page fault handler and the SPTE
corresponding to the page is non-leaf (one or more of the sub-pages are
already populated at a lower page level), the current KVM MMU zaps the
non-leaf SPTE at the large page level and populates a leaf SPTE at that
level, thereby converting the small pages into a large page. However, this
doesn't work for TDX because zapping and re-populating would zero the page
contents. Instead, populate all the small pages and then merge them into a
large page.

Merging pages into a large page can fail when some sub-pages are accepted
and some are not. In that case, assuming the guest tries to accept at large
page size for performance when possible, don't try to be smart about
identifying which page is still pending; map all the pages at the lower
page level and let the vcpu re-execute.
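The pre-merge requirement, that every 4KB child ends up present and maps
the pfn contiguous with the head of the 2MB range, can be modeled outside
the kernel. A minimal sketch; can_merge() is a hypothetical name, and the
kernel logic also distinguishes -EBUSY/-EAGAIN cases that this model folds
into a single failure:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CHILDREN 512

struct child { bool present; uint64_t pfn; };

/*
 * Model of the pre-merge scan: every child must be present and must map
 * head_pfn + index. A hole can simply be populated, but a mismatched pfn
 * means the range cannot be merged into one 2MB mapping.
 */
static bool can_merge(struct child *c, uint64_t head_pfn)
{
	for (int i = 0; i < CHILDREN; i++) {
		if (!c[i].present)
			c[i] = (struct child){ true, head_pfn + i };	/* populate the hole */
		else if (c[i].pfn != head_pfn + i)
			return false;	/* map at 4KB, let the vcpu re-execute */
	}
	return true;
}

int main(void)
{
	static struct child c[CHILDREN];	/* all absent initially */

	printf("merge: %s\n", can_merge(c, 0x1000) ? "yes" : "no");
	return 0;
}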
Signed-off-by: Isaku Yamahata
---
v7:
- typo freezed => frozen
- return 0 when page is merged into 2M large page instead of -EAGAIN
v5:
- Fix memory leak
---
 arch/x86/include/asm/kvm-x86-ops.h |   2 +
 arch/x86/include/asm/kvm_host.h    |   4 +
 arch/x86/kvm/mmu/tdp_iter.c        |  37 ++++--
 arch/x86/kvm/mmu/tdp_iter.h        |   2 +
 arch/x86/kvm/mmu/tdp_mmu.c         | 176 ++++++++++++++++++++++++++++-
 5 files changed, 211 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 08c55c3d6e5b..f4d3a9d1b613 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -106,9 +106,11 @@ KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_private_spt)
 KVM_X86_OP_OPTIONAL(free_private_spt)
 KVM_X86_OP_OPTIONAL(split_private_spt)
+KVM_X86_OP_OPTIONAL(merge_private_spt)
 KVM_X86_OP_OPTIONAL(set_private_spte)
 KVM_X86_OP_OPTIONAL(remove_private_spte)
 KVM_X86_OP_OPTIONAL(zap_private_spte)
+KVM_X86_OP_OPTIONAL(unzap_private_spte)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
 KVM_X86_OP(get_l2_tsc_multiplier)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8123fad88750..43614c6b84f8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -147,6 +147,7 @@
 #define KVM_MAX_HUGEPAGE_LEVEL	PG_LEVEL_1G
 #define KVM_NR_PAGE_SIZES	(KVM_MAX_HUGEPAGE_LEVEL - PG_LEVEL_4K + 1)
 #define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
+#define KVM_HPAGE_GFN_MASK(x)	(~((1UL << KVM_HPAGE_GFN_SHIFT(x)) - 1))
 #define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
 #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
 #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
@@ -1785,11 +1786,14 @@ struct kvm_x86_ops {
 				void *private_spt);
 	int (*split_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				 void *private_spt);
+	int (*merge_private_spt)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+				 void *private_spt);
 	int (*set_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				kvm_pfn_t pfn);
 	int (*remove_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level,
 				   kvm_pfn_t pfn);
 	int (*zap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
+	int (*unzap_private_spte)(struct kvm *kvm, gfn_t gfn, enum pg_level level);
 
 	bool (*has_wbinvd_exit)(void);

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 04c247bfe318..c4a18703f88a 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -71,6 +71,14 @@ tdp_ptep_t spte_to_child_pt(u64 spte, int level)
 	return (tdp_ptep_t)__va(spte_to_pfn(spte) << PAGE_SHIFT);
 }
 
+static void step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
+{
+	iter->level--;
+	iter->pt_path[iter->level - 1] = child_pt;
+	iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level);
+	tdp_iter_refresh_sptep(iter);
+}
+
 /*
  * Steps down one level in the paging structure towards the goal GFN. Returns
  * true if the iterator was able to step down a level, false otherwise.
@@ -92,14 +100,28 @@ static bool try_step_down(struct tdp_iter *iter)
 	if (!child_pt)
 		return false;
 
-	iter->level--;
-	iter->pt_path[iter->level - 1] = child_pt;
-	iter->gfn = gfn_round_for_level(iter->next_last_level_gfn, iter->level);
-	tdp_iter_refresh_sptep(iter);
-
+	step_down(iter, child_pt);
 	return true;
 }
 
+/* Steps down for frozen spte. Don't re-read sptep because it was frozen. */
+void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt)
+{
+	WARN_ON_ONCE(!child_pt);
+	WARN_ON_ONCE(iter->yielded);
+	WARN_ON_ONCE(iter->level == iter->min_level);
+
+	step_down(iter, child_pt);
+}
+
+void tdp_iter_step_side(struct tdp_iter *iter)
+{
+	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
+	iter->next_last_level_gfn = iter->gfn;
+	iter->sptep++;
+	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
+}
+
 /*
  * Steps to the next entry in the current page table, at the current page table
  * level. The next entry could point to a page backing guest memory or another
@@ -117,10 +139,7 @@ static bool try_step_side(struct tdp_iter *iter)
 	    (SPTE_ENT_PER_PAGE - 1))
 		return false;
 
-	iter->gfn += KVM_PAGES_PER_HPAGE(iter->level);
-	iter->next_last_level_gfn = iter->gfn;
-	iter->sptep++;
-	iter->old_spte = kvm_tdp_mmu_read_spte(iter->sptep);
+	tdp_iter_step_side(iter);
 	return true;
 }

diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index a9c9cd0db20a..ca00db799a50 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -134,6 +134,8 @@ void tdp_iter_start(struct tdp_iter *iter, struct kvm_mmu_page *root,
 		    int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
+void tdp_iter_step_side(struct tdp_iter *iter);
+void tdp_iter_step_down(struct tdp_iter *iter, tdp_ptep_t child_pt);
 
 static inline union kvm_mmu_page_role tdp_iter_child_role(struct tdp_iter *iter)
 {

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3f7307938982..bd9ec77e7933 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1205,6 +1205,180 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm, bool skip_private)
 	}
 }
 
+static int tdp_mmu_iter_step_side(int i, struct tdp_iter *iter)
+{
+	i++;
+
+	/*
+	 * if i = SPTE_ENT_PER_PAGE, tdp_iter_step_side() results
+	 * in reading the entry beyond the last entry.
+	 */
+	if (i < SPTE_ENT_PER_PAGE)
+		tdp_iter_step_side(iter);
+
+	return i;
+}
+
+static int tdp_mmu_merge_private_spt(struct kvm_vcpu *vcpu,
+				     struct kvm_page_fault *fault,
+				     struct tdp_iter *iter, u64 new_spte)
+{
+	u64 *sptep = rcu_dereference(iter->sptep);
+	u64 old_spte = iter->old_spte;
+	struct kvm_mmu_page *child_sp;
+	struct kvm *kvm = vcpu->kvm;
+	struct tdp_iter child_iter;
+	int level = iter->level;
+	gfn_t gfn = iter->gfn;
+	tdp_ptep_t child_pt;
+	u64 child_spte;
+	int ret = 0;
+	int i;
+
+	/*
+	 * TDX KVM supports only 2MB large page. It's not supported to merge
+	 * 2MB pages into 1GB page at the moment.
+	 */
+	WARN_ON_ONCE(fault->goal_level != PG_LEVEL_2M);
+	WARN_ON_ONCE(iter->level != PG_LEVEL_2M);
+	WARN_ON_ONCE(!is_large_pte(new_spte));
+
+	/* Freeze the spte to prevent other threads from working on it. */
+	if (!try_cmpxchg64(sptep, &iter->old_spte, REMOVED_SPTE))
+		return -EBUSY;
+
+	/*
+	 * Step down to the child spte. Because tdp_iter_next() assumes the
+	 * parent spte isn't frozen, do it manually.
+	 */
+	child_pt = spte_to_child_pt(iter->old_spte, iter->level);
+	child_sp = sptep_to_sp(child_pt);
+	WARN_ON_ONCE(child_sp->role.level != PG_LEVEL_4K);
+	WARN_ON_ONCE(!kvm_mmu_page_role_is_private(child_sp->role));
+
+	/* Don't modify iter as the caller will use iter after this function. */
+	child_iter = *iter;
+	/* Adjust the target gfn to the head gfn of the large page. */
+	child_iter.next_last_level_gfn &= -KVM_PAGES_PER_HPAGE(level);
+	tdp_iter_step_down(&child_iter, child_pt);
+
+	/*
+	 * All child pages are required to be populated for merging them into a
+	 * large page. Populate all child sptes.
+	 */
+	for (i = 0; i < SPTE_ENT_PER_PAGE; i = tdp_mmu_iter_step_side(i, &child_iter)) {
+		int tmp;
+
+		WARN_ON_ONCE(child_iter.level != PG_LEVEL_4K);
+
+		if (is_shadow_present_pte(child_iter.old_spte)) {
+			/* TODO: relocate page for huge page. */
+			if (WARN_ON_ONCE(spte_to_pfn(child_iter.old_spte) !=
+					 spte_to_pfn(new_spte) + i)) {
+				if (!ret)
+					ret = -EAGAIN;
+				continue;
+			}
+			/*
+			 * When SEPT_VE_DISABLE=true and the page state is
+			 * pending, this case can happen. Just resume the vcpu
+			 * again with the expectation that another vcpu accepts
+			 * this page.
+			 */
+			if (child_iter.gfn == fault->gfn) {
+				if (!ret)
+					ret = -EAGAIN;
+			}
+			continue;
+		}
+
+		child_spte = make_huge_page_split_spte(kvm, new_spte, child_sp->role, i);
+		/*
+		 * Because another thread may have started to operate on this
+		 * spte before freezing the parent spte, use the atomic version
+		 * to prevent races.
+		 */
+		tmp = tdp_mmu_set_spte_atomic(vcpu->kvm, &child_iter, child_spte);
+		if (tmp == -EBUSY || tmp == -EAGAIN) {
+			/*
+			 * There was a race condition. Populate the remaining 4K
+			 * sptes to resolve fault->gfn and guarantee forward
+			 * progress.
+			 */
+			if (!ret)
+				ret = tmp;
+		} else if (tmp) {
+			ret = tmp;
+			goto out;
+		}
+	}
+	if (ret)
+		goto out;
+
+	/* Prevent the Secure-EPT entry from being used. */
+	ret = static_call(kvm_x86_zap_private_spte)(kvm, gfn, level);
+	if (ret)
+		goto out;
+	kvm_flush_remote_tlbs_range(kvm, gfn & KVM_HPAGE_GFN_MASK(level),
+				    KVM_PAGES_PER_HPAGE(level));
+
+	/* Merge pages into a large page. */
+	ret = static_call(kvm_x86_merge_private_spt)(kvm, gfn, level,
+						     kvm_mmu_private_spt(child_sp));
+	/*
+	 * Failed to merge pages because some pages are accepted and some are
+	 * pending. Since the child pages were mapped above, let the vcpu run.
+	 */
+	if (ret) {
+		if (static_call(kvm_x86_unzap_private_spte)(kvm, gfn, level))
+			old_spte = SHADOW_NONPRESENT_VALUE |
+				   (spte_to_pfn(old_spte) << PAGE_SHIFT) |
+				   PT_PAGE_SIZE_MASK;
+		goto out;
+	}
+
+	/* Update stats manually as we don't use tdp_mmu_set_spte{,_atomic}(). */
+	kvm_update_page_stats(kvm, level - 1, -SPTE_ENT_PER_PAGE);
+	kvm_update_page_stats(kvm, level, 1);
+
+	/* Unfreeze spte. */
+	iter->old_spte = new_spte;
+	__kvm_tdp_mmu_write_spte(sptep, new_spte);
+
+	/*
+	 * Free the unused child sp. The Secure-EPT page was already freed at
+	 * the TDX level by kvm_x86_merge_private_spt().
+	 */
+	tdp_unaccount_mmu_page(kvm, child_sp);
+	tdp_mmu_free_sp(child_sp);
+	return 0;
+
+out:
+	iter->old_spte = old_spte;
+	__kvm_tdp_mmu_write_spte(sptep, old_spte);
+	return ret;
+}
+
+static int __tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
+					     struct kvm_page_fault *fault,
+					     struct tdp_iter *iter, u64 new_spte)
+{
+	/*
+	 * The private page has smaller-size pages. For example, the child
+	 * pages were converted from shared to private, and now the range can
+	 * be mapped as a large page. Try to merge the small pages into a
+	 * large page.
+	 */
+	if (fault->slot &&
+	    kvm_gfn_shared_mask(vcpu->kvm) &&
+	    iter->level > PG_LEVEL_4K &&
+	    kvm_is_private_gpa(vcpu->kvm, fault->addr) &&
+	    is_shadow_present_pte(iter->old_spte) &&
+	    !is_large_pte(iter->old_spte))
+		return tdp_mmu_merge_private_spt(vcpu, fault, iter, new_spte);
+
+	return tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte);
+}
+
 /*
  * Installs a last-level SPTE to handle a TDP page fault.
 * (NPT/EPT violation/misconfiguration)
@@ -1246,7 +1420,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (new_spte == iter->old_spte)
 		ret = RET_PF_SPURIOUS;
-	else if (tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
+	else if (__tdp_mmu_map_handle_target_level(vcpu, fault, iter, new_spte))
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 !is_last_spte(iter->old_spte, iter->level))

From patchwork Tue Jan 23 00:22:26 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526612
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini ,
    erdemaktas@google.com, Sean Christopherson , Sagi Shahar , Kai Huang ,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v7 11/13] KVM: TDX: Implement merge pages into a large page
Date: Mon, 22 Jan 2024 16:22:26 -0800
Message-Id: <4eac53599fb87c41aee14577b66d0a832e6c836b.1705965958.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

Implement the merge_private_spt callback.

Signed-off-by: Isaku Yamahata
---
v7:
- Fix subject, x86/tdp_mmu => TDX
- comment: use unlink instead of free for clarity
v6:
- repeat TDH.MEM.PAGE.PROMOTE() on TDX_INTERRUPTED_RESTARTABLE
---
 arch/x86/kvm/vmx/tdx.c       | 74 ++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx_arch.h  |  1 +
 arch/x86/kvm/vmx/tdx_errno.h |  2 +
 arch/x86/kvm/vmx/tdx_ops.h   | 11 ++++++
 4 files changed, 88 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 10dbe4a4db7a..f26caa496d1b 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1742,6 +1742,51 @@ static int tdx_sept_split_private_spt(struct kvm *kvm, gfn_t gfn,
 	return 0;
 }
 
+static int tdx_sept_merge_private_spt(struct kvm *kvm, gfn_t gfn,
+				      enum pg_level level, void *private_spt)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	struct tdx_module_args out;
+	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
+	u64 err;
+
+	/* See comment in tdx_sept_set_private_spte() */
+	do {
+		err = tdh_mem_page_promote(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
+	} while (err == TDX_INTERRUPTED_RESTARTABLE);
+	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
+		return -EAGAIN;
+	if (unlikely(err == (TDX_EPT_INVALID_PROMOTE_CONDITIONS |
+			     TDX_OPERAND_ID_RCX)))
+		/*
+		 * Some pages are accepted, some pending. Need to wait for the
+		 * TD to accept all pages. Tell the caller.
+		 */
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_PAGE_PROMOTE, err, &out);
+		return -EIO;
+	}
+	WARN_ON_ONCE(out.rcx != __pa(private_spt));
+
+	/*
+	 * TDH.MEM.PAGE.PROMOTE unlinks the Secure-EPT page for the lower level.
+	 * Flush cache for reuse.
+	 */
+	do {
+		err = tdh_phymem_page_wbinvd(set_hkid_to_hpa(__pa(private_spt),
+							     to_kvm_tdx(kvm)->hkid));
+	} while (unlikely(err == (TDX_OPERAND_BUSY | TDX_OPERAND_ID_RCX)));
+	if (WARN_ON_ONCE(err)) {
+		pr_tdx_error(TDH_PHYMEM_PAGE_WBINVD, err, NULL);
+		return -EIO;
+	}
+
+	tdx_clear_page(__pa(private_spt), PAGE_SIZE);
+	return 0;
+}
+
 static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level)
 {
@@ -1816,6 +1861,33 @@ static void tdx_track(struct kvm *kvm)
 
 }
 
+static int tdx_sept_unzap_private_spte(struct kvm *kvm, gfn_t gfn,
+				       enum pg_level level)
+{
+	int tdx_level = pg_level_to_tdx_sept_level(level);
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+	gpa_t gpa = gfn_to_gpa(gfn) & KVM_HPAGE_MASK(level);
+	struct tdx_module_args out;
+	u64 err;
+
+	do {
+		err = tdh_mem_range_unblock(kvm_tdx->tdr_pa, gpa, tdx_level, &out);
+
+		/*
+		 * tdh_mem_range_block() is accompanied with tdx_track() via kvm
+		 * remote tlb flush. Wait for the caller of
+		 * tdh_mem_range_block() to complete TDX track.
+		 */
+	} while (err == (TDX_TLB_TRACKING_NOT_DONE | TDX_OPERAND_ID_SEPT));
+	if (unlikely(err == TDX_ERROR_SEPT_BUSY))
+		return -EAGAIN;
+	if (KVM_BUG_ON(err, kvm)) {
+		pr_tdx_error(TDH_MEM_RANGE_UNBLOCK, err, &out);
+		return -EIO;
+	}
+	return 0;
+}
+
 static int tdx_sept_free_private_spt(struct kvm *kvm, gfn_t gfn,
 				     enum pg_level level, void *private_spt)
 {
@@ -3309,9 +3381,11 @@ int __init tdx_hardware_setup(struct kvm_x86_ops *x86_ops)
 
 	x86_ops->link_private_spt = tdx_sept_link_private_spt;
 	x86_ops->free_private_spt = tdx_sept_free_private_spt;
 	x86_ops->split_private_spt = tdx_sept_split_private_spt;
+	x86_ops->merge_private_spt = tdx_sept_merge_private_spt;
 	x86_ops->set_private_spte = tdx_sept_set_private_spte;
 	x86_ops->remove_private_spte = tdx_sept_remove_private_spte;
 	x86_ops->zap_private_spte = tdx_sept_zap_private_spte;
+	x86_ops->unzap_private_spte = tdx_sept_unzap_private_spte;
 
 	return 0;

diff --git a/arch/x86/kvm/vmx/tdx_arch.h b/arch/x86/kvm/vmx/tdx_arch.h
index e663abaa3aa0..aef6103c6515 100644
--- a/arch/x86/kvm/vmx/tdx_arch.h
+++ b/arch/x86/kvm/vmx/tdx_arch.h
@@ -29,6 +29,7 @@
 #define TDH_MNG_KEY_FREEID	20
 #define TDH_MNG_INIT		21
 #define TDH_VP_INIT		22
+#define TDH_MEM_PAGE_PROMOTE	23
 #define TDH_MEM_SEPT_RD		25
 #define TDH_VP_RD		26
 #define TDH_MNG_KEY_RECLAIMID	27

diff --git a/arch/x86/kvm/vmx/tdx_errno.h b/arch/x86/kvm/vmx/tdx_errno.h
index d08b4d14e57b..4142487f987e 100644
--- a/arch/x86/kvm/vmx/tdx_errno.h
+++ b/arch/x86/kvm/vmx/tdx_errno.h
@@ -24,6 +24,8 @@
 #define TDX_FLUSHVP_NOT_DONE			0x8000082400000000ULL
 #define TDX_EPT_WALK_FAILED			0xC0000B0000000000ULL
 #define TDX_EPT_ENTRY_NOT_FREE			0xC0000B0200000000ULL
+#define TDX_TLB_TRACKING_NOT_DONE		0xC0000B0800000000ULL
+#define TDX_EPT_INVALID_PROMOTE_CONDITIONS	0xC0000B0900000000ULL
 #define TDX_EPT_ENTRY_STATE_INCORRECT		0xC0000B0D00000000ULL
 
 /*

diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
index 772e2e7d61e7..e2b9f1c3d67f 100644
--- a/arch/x86/kvm/vmx/tdx_ops.h
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -262,6 +262,17 @@ static inline u64 tdh_mem_page_demote(hpa_t tdr, gpa_t gpa, int level, hpa_t pag
 	return tdx_seamcall_sept(TDH_MEM_PAGE_DEMOTE, &in, out);
 }
 
+static inline u64 tdh_mem_page_promote(hpa_t tdr, gpa_t gpa, int level,
+				       struct tdx_module_args *out)
+{
+	struct tdx_module_args in = {
+		.rcx = gpa | level,
+		.rdx = tdr,
+	};
+
+	return tdx_seamcall_sept(TDH_MEM_PAGE_PROMOTE, &in, out);
+}
+
 static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
 				struct tdx_module_args *out)
 {
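The promote flow implemented above follows block -> TLB track/flush ->
promote, and a mixed accepted/pending result is deliberately non-fatal: the
range is unblocked again and the guest keeps running on 4KB mappings until
it accepts the rest. A sketch of that control flow under stubbed calls;
block_range(), promote_range(), and unblock_range() stand in for the real
SEAMCALL wrappers:

#include <errno.h>
#include <stdbool.h>

/* Stubbed stand-ins for tdh_mem_range_block()/tdh_mem_page_promote()/
 * tdh_mem_range_unblock(); the real ones are SEAMCALLs. */
static int block_range(void) { return 0; }
static int promote_range(bool all_accepted) { return all_accepted ? 0 : -EAGAIN; }
static int unblock_range(void) { return 0; }

static int merge_into_large_page(bool all_children_accepted)
{
	int ret;

	if (block_range())
		return -EIO;

	ret = promote_range(all_children_accepted);
	if (ret == -EAGAIN) {
		/* Mixed accepted/pending children: undo the block and keep
		 * the 4KB mappings until the guest accepts the rest. */
		unblock_range();
		return -EAGAIN;
	}
	return ret;
}

int main(void)
{
	return merge_into_large_page(false) == -EAGAIN ? 0 : 1;
}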
From patchwork Tue Jan 23 00:22:27 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526614
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini ,
    erdemaktas@google.com, Sean Christopherson , Sagi Shahar , Kai Huang ,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com
Subject: [PATCH v7 12/13] KVM: x86/mmu: Make kvm fault handler aware of large
 page of private memslot
Date: Mon, 22 Jan 2024 16:22:27 -0800
Message-Id: <9b2b24606fc5c80402c8565e2213dec3b6a20cc7.1705965958.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

struct kvm_page_fault.req_level is the page level that takes the faulted-in
page size into account. For now it is calculated only for conventional kvm
memslots by host_pfn_mapping_level(), which traverses the host page table.
However, host_pfn_mapping_level() cannot be used for a private kvm memslot
because private pages of a private kvm memslot aren't mapped into the user
virtual address space. Instead, the page order is given when getting the
pfn. Remember it in struct kvm_page_fault and use it.
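Converting the page order handed back with the pfn into a maximum mapping
level is a comparison against the per-level page counts;
kvm_max_level_for_order() in the series does this. A stand-alone model of
the conversion, where the constants only mirror x86's 4KB/2MB/1GB levels:

#include <stdio.h>

enum pg_level { PG_LEVEL_4K = 1, PG_LEVEL_2M, PG_LEVEL_1G };

/* log2 of pages per huge page at a given level: 4KB=0, 2MB=9, 1GB=18. */
#define PAGE_ORDER(level) (((level) - 1) * 9)

/* A backing page of order >= 18 can map 1GB, order >= 9 can map 2MB. */
static enum pg_level max_level_for_order(int order)
{
	if (order >= PAGE_ORDER(PG_LEVEL_1G))
		return PG_LEVEL_1G;
	if (order >= PAGE_ORDER(PG_LEVEL_2M))
		return PG_LEVEL_2M;
	return PG_LEVEL_4K;
}

int main(void)
{
	/* e.g. guest memory backed by a 2MB page reports order 9 */
	printf("order 9 -> level %d\n", max_level_for_order(9));
	printf("order 0 -> level %d\n", max_level_for_order(0));
	return 0;
}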
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/mmu.c          | 34 +++++++++++++++++----------------
 arch/x86/kvm/mmu/mmu_internal.h | 12 +++++++++++-
 arch/x86/kvm/mmu/tdp_mmu.c      |  2 +-
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a9e7a3d2d362..c7c816c969a9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3154,10 +3154,10 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 
 static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+				       gfn_t gfn, int max_level, int host_level,
+				       bool is_private)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3166,24 +3166,23 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
-		return max_level;
-
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
+	if (!is_private) {
+		WARN_ON_ONCE(host_level != PG_LEVEL_NONE);
+		host_level = host_pfn_mapping_level(kvm, gfn, slot);
+	}
+	WARN_ON_ONCE(host_level == PG_LEVEL_NONE);
 	return min(host_level, max_level);
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level)
+			      int max_level, bool faultin_private)
 {
-	bool is_private = kvm_slot_can_be_private(slot) &&
-		kvm_mem_is_private(kvm, gfn);
-
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level,
+					   PG_LEVEL_NONE, faultin_private);
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3208,7 +3207,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 */
 	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
 						       fault->gfn, fault->max_level,
-						       fault->is_private);
+						       fault->host_level,
+						       kvm_is_faultin_private(fault));
 	if (fault->req_level == PG_LEVEL_4K ||
 	    fault->huge_page_disallowed)
 		return;
@@ -4332,6 +4332,7 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 				   struct kvm_page_fault *fault)
 {
 	int max_order, r;
+	u8 max_level;
 
 	if (!kvm_slot_can_be_private(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
@@ -4345,8 +4346,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
 		return r;
 	}
 
-	fault->max_level = min(kvm_max_level_for_order(max_order),
-			       fault->max_level);
+	max_level = kvm_max_level_for_order(max_order);
+	fault->host_level = max_level;
+	fault->max_level = min(max_level, fault->max_level);
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
 
 	return RET_PF_CONTINUE;
@@ -4396,7 +4398,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		return -EFAULT;
 	}
 
-	if (fault->is_private)
+	if (kvm_is_faultin_private(fault))
 		return kvm_faultin_pfn_private(vcpu, fault);
 
 	async = false;
@@ -6805,7 +6807,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 */
 		if (sp->role.direct &&
 		    sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn,
-							       PG_LEVEL_NUM)) {
+							       PG_LEVEL_NUM, false)) {
 			kvm_zap_one_rmap_spte(kvm, rmap_head, sptep);
 
 			if (kvm_available_flush_remote_tlbs_range())

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index cc0a95e554b5..1e6bf3875779 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -358,6 +358,9 @@ struct kvm_page_fault {
 	 * is changing its own translation in the guest page tables.
 	 */
 	bool write_fault_to_shadow_pgtable;
+
+	/* valid only for private memslot && private gfn */
+	enum pg_level host_level;
 };
 
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
@@ -452,7 +455,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level);
+			      int max_level, bool faultin_private);
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
 
@@ -470,4 +473,11 @@ static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t g
 }
 #endif
 
+static inline bool kvm_is_faultin_private(const struct kvm_page_fault *fault)
+{
+	if (IS_ENABLED(CONFIG_KVM_GENERIC_PRIVATE_MEM))
+		return fault->is_private && kvm_slot_can_be_private(fault->slot);
+	return false;
+}
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bd9ec77e7933..42c8cf1abdf8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -2183,7 +2183,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
 			continue;
 
 		max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot,
-							      iter.gfn, PG_LEVEL_NUM);
+							      iter.gfn, PG_LEVEL_NUM, false);
 		if (max_mapping_level < iter.level)
 			continue;

From patchwork Tue Jan 23 00:22:28 2024
X-Patchwork-Submitter: Isaku Yamahata
X-Patchwork-Id: 13526615
From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini ,
    erdemaktas@google.com, Sean Christopherson , Sagi Shahar , Kai Huang ,
    chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Xiaoyao Li
Subject: [PATCH v7 13/13] KVM: TDX: Allow 2MB large page for TD GUEST
Date: Mon, 22 Jan 2024 16:22:28 -0800
Message-Id: <1f8f8f8a9450cd83e2a38abec26b0725b6d1ded4.1705965958.git.isaku.yamahata@intel.com>

From: Xiaoyao Li

Now that everything is in place to support 2MB pages for TD guests, and
because the TDX module's TDH.MEM.PAGE.AUG supports both 4KB and 2MB pages,
set struct kvm_arch.tdp_max_page_level to the 2MB page level.

Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/mmu/tdp_mmu.c | 9 ++-------
 arch/x86/kvm/vmx/tdx.c     | 4 ++--
 2 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 42c8cf1abdf8..feb499dc381e 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1544,14 +1544,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 
 		sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
 
-		if (is_shadow_present_pte(iter.old_spte)) {
-			/*
-			 * TODO: large page support.
-			 * Doesn't support large page for TDX now
-			 */
-			KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm);
+		if (is_shadow_present_pte(iter.old_spte))
 			r = tdp_mmu_split_huge_page(kvm, &iter, sp, true);
-		} else
+		else
 			r = tdp_mmu_link_sp(kvm, &iter, sp, true);
 
 		/*

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index f26caa496d1b..7ef1d3536f0e 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -636,8 +636,8 @@ int tdx_vm_init(struct kvm *kvm)
 	 */
 	kvm_mmu_set_mmio_spte_value(kvm, 0);
 
-	/* TODO: Enable 2mb and 1gb large page support. */
-	kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
+	/* TDH.MEM.PAGE.AUG supports up to 2MB page. */
+	kvm->arch.tdp_max_page_level = PG_LEVEL_2M;
 
 	/*
 	 * This function initializes only KVM software construct. It doesn't