From patchwork Thu May 4 08:47:48 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13230937
From: Binbin Wu
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests v4 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
Date: Thu, 4 May 2023 16:47:48 +0800
Message-Id: <20230504084751.968-2-binbin.wu@linux.intel.com>
In-Reply-To: <20230504084751.968-1-binbin.wu@linux.intel.com>
References: <20230504084751.968-1-binbin.wu@linux.intel.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

If LAM is supported, VM entry allows CR3.LAM_U48 (bit 62) and
CR3.LAM_U57 (bit 61) to be set in the CR3 field. Change the expected
test results when setting CR3.LAM_U48 or CR3.LAM_U57 in the vmlaunch
tests when LAM is supported.

Signed-off-by: Binbin Wu
Reviewed-by: Chao Gao
---
 lib/x86/processor.h | 3 +++
 x86/vmx_tests.c     | 6 +++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6555056..901df98 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -55,6 +55,8 @@
 #define X86_CR0_PG	BIT(X86_CR0_PG_BIT)

 #define X86_CR3_PCID_MASK	GENMASK(11, 0)
+#define X86_CR3_LAM_U57_BIT	(61)
+#define X86_CR3_LAM_U48_BIT	(62)

 #define X86_CR4_VME_BIT	(0)
 #define X86_CR4_VME	BIT(X86_CR4_VME_BIT)
@@ -249,6 +251,7 @@ static inline bool is_intel(void)
 #define X86_FEATURE_FLUSH_L1D		(CPUID(0x7, 0, EDX, 28))
 #define X86_FEATURE_ARCH_CAPABILITIES	(CPUID(0x7, 0, EDX, 29))
 #define X86_FEATURE_PKS			(CPUID(0x7, 0, ECX, 31))
+#define X86_FEATURE_LAM			(CPUID(0x7, 1, EAX, 26))

 /*
  * Extended Leafs, a.k.a. AMD defined

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 7952ccb..217befe 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -6998,7 +6998,11 @@ static void test_host_ctl_regs(void)
 		cr3 = cr3_saved | (1ul << i);
 		vmcs_write(HOST_CR3, cr3);
 		report_prefix_pushf("HOST_CR3 %lx", cr3);
-		test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+		if (this_cpu_has(X86_FEATURE_LAM) &&
+		    ((i == X86_CR3_LAM_U57_BIT) || (i == X86_CR3_LAM_U48_BIT)))
+			test_vmx_vmlaunch(0);
+		else
+			test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
 		report_prefix_pop();
 	}

From patchwork Thu May 4 08:47:49 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13230938
From: Binbin Wu
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests v4 2/4] x86: Add test case for LAM_SUP
Date: Thu, 4 May 2023 16:47:49 +0800
Message-Id: <20230504084751.968-3-binbin.wu@linux.intel.com>
In-Reply-To: <20230504084751.968-1-binbin.wu@linux.intel.com>
References: <20230504084751.968-1-binbin.wu@linux.intel.com>

From: Robert Hoo

This unit test covers:
1. CR4.LAM_SUP toggles.
2. Memory & MMIO access with a supervisor-mode address containing LAM
   metadata.
3. The INVLPG memory operand doesn't contain LAM metadata; if the
   address is in non-canonical form, the INVLPG is the same as a NOP
   (no #GP).
4. The INVPCID memory operand (descriptor pointer) can contain LAM
   metadata, but the address in the descriptor itself must be canonical.

In x86/unittests.cfg, add 2 test cases/guest configurations, with and
without LAM.
LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368,
Chapter 7 LINEAR ADDRESS MASKING (LAM)

Signed-off-by: Robert Hoo
Co-developed-by: Binbin Wu
Signed-off-by: Binbin Wu
Reviewed-by: Chao Gao
---
 lib/x86/processor.h |  10 ++
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 243 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 4 files changed, 264 insertions(+)
 create mode 100644 x86/lam.c

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 901df98..6803c9c 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -8,6 +8,14 @@
 #include

 #define NONCANONICAL	0xaaaaaaaaaaaaaaaaull
+#define LAM57_MASK	GENMASK_ULL(62, 57)
+#define LAM48_MASK	GENMASK_ULL(62, 48)
+
+/* Set metadata with non-canonical pattern in mask bits of a linear address */
+static inline u64 set_la_non_canonical(u64 src, u64 mask)
+{
+	return (src & ~mask) | (NONCANONICAL & mask);
+}

 #ifdef __x86_64__
 # define R "r"
@@ -107,6 +115,8 @@
 #define X86_CR4_CET	BIT(X86_CR4_CET_BIT)
 #define X86_CR4_PKS_BIT	(24)
 #define X86_CR4_PKS	BIT(X86_CR4_PKS_BIT)
+#define X86_CR4_LAM_SUP_BIT	(28)
+#define X86_CR4_LAM_SUP	BIT(X86_CR4_LAM_SUP_BIT)

 #define X86_EFLAGS_CF_BIT	(0)
 #define X86_EFLAGS_CF	BIT(X86_EFLAGS_CF_BIT)

diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f483dea..fa11eb3 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -34,6 +34,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/lam.$(exe)

 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)

diff --git a/x86/lam.c b/x86/lam.c
new file mode 100644
index 0000000..4e631c5
--- /dev/null
+++ b/x86/lam.c
@@ -0,0 +1,243 @@
+/*
+ * Intel LAM unit test
+ *
+ * Copyright (C) 2023 Intel
+ *
+ * Author: Robert Hoo
+ *         Binbin Wu
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or
+ * later.
+ */
+
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+#include "vm.h"
+#include "asm/io.h"
+#include "ioram.h"
+
+#define FLAGS_LAM_ACTIVE	BIT_ULL(0)
+#define FLAGS_LA57		BIT_ULL(1)
+
+struct invpcid_desc {
+	u64 pcid : 12;
+	u64 rsv  : 52;
+	u64 addr;
+};
+
+static inline bool is_la57(void)
+{
+	return !!(read_cr4() & X86_CR4_LA57);
+}
+
+static inline bool lam_sup_active(void)
+{
+	return !!(read_cr4() & X86_CR4_LAM_SUP);
+}
+
+static void cr4_set_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
+}
+
+static void cr4_clear_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
+}
+
+static void test_cr4_lam_set_clear(bool has_lam)
+{
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+	report((fault != has_lam) && (lam_sup_active() == has_lam),
+	       "Set CR4.LAM_SUP");
+
+	fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
+	report(!fault, "Clear CR4.LAM_SUP");
+}
+
+/* Refer to emulator.c */
+static void do_mov(void *mem)
+{
+	unsigned long t1, t2;
+
+	t1 = 0x123456789abcdefull & -1ul;
+	asm volatile("mov %[t1], (%[mem])\n\t"
+		     "mov (%[mem]), %[t2]"
+		     : [t2]"=r"(t2)
+		     : [t1]"r"(t1), [mem]"r"(mem)
+		     : "memory");
+	report(t1 == t2, "Mov result check");
+}
+
+static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
+{
+	bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+	u64 lam_mask = arg2;
+	u64 *ptr = (u64 *)arg3;
+	bool is_mmio = !!arg4;
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+	report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
+
+	ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+	report(fault != lam_active, "Test tagged addr (%s)",
+	       is_mmio ? "MMIO" : "Memory");
+
+	return 0;
+}
+
+static void do_invlpg(void *mem)
+{
+	invlpg(mem);
+}
+
+static void do_invlpg_fep(void *mem)
+{
+	asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
+}
+
+/* INVLPG with a tagged address is the same as a NOP, no #GP expected. */
+static void test_invlpg(u64 lam_mask, void *va, bool fep)
+{
+	bool fault;
+	u64 *ptr;
+
+	ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
+	if (fep)
+		fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
+	else
+		fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
+
+	report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
+}
+
+static void do_invpcid(void *desc)
+{
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
+
+	asm volatile("invpcid %0, %1" :
+		     : "m" (*desc_ptr), "r" (0UL)
+		     : "memory");
+}
+
+/* LAM doesn't apply to the target address in the descriptor of INVPCID */
+static void test_invpcid(u64 flags, u64 lam_mask, void *data)
+{
+	/*
+	 * Reuse the memory address for the descriptor, since stack memory
+	 * addresses in KUT don't follow the kernel address space partitions.
+	 */
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
+	bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_PCID) ||
+	    !this_cpu_has(X86_FEATURE_INVPCID)) {
+		report_skip("INVPCID not supported");
+		return;
+	}
+
+	memset(desc_ptr, 0, sizeof(struct invpcid_desc));
+	desc_ptr->addr = (u64)data;
+
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(!fault, "INVPCID: untagged pointer + untagged addr");
+
+	desc_ptr->addr = set_la_non_canonical(desc_ptr->addr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: untagged pointer + tagged addr");
+
+	desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+							       lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: tagged pointer + tagged addr");
+
+	desc_ptr = (struct invpcid_desc *)data;
+	desc_ptr->addr = (u64)data;
+	desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+							       lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault != lam_active, "INVPCID: tagged pointer + untagged addr");
+}
+
+static void test_lam_sup(bool has_lam, bool fep_available)
+{
+	void *vaddr, *vaddr_mmio;
+	phys_addr_t paddr;
+	u64 lam_mask = LAM48_MASK;
+	u64 flags = 0;
+	bool fault;
+
+	/*
+	 * KUT initializes vfree_top to 0 for x86_64, and each virtual address
+	 * allocation decreases the size from vfree_top. It's guaranteed that
+	 * the return value of alloc_vpage() is considered a canonical,
+	 * kernel-mode address, since only a small amount of the virtual
+	 * address range is allocated in this test.
+	 */
+	vaddr = alloc_vpage();
+	vaddr_mmio = alloc_vpage();
+	paddr = virt_to_phys(alloc_page());
+	install_page(current_page_table(), paddr, vaddr);
+	install_page(current_page_table(), IORAM_BASE_PHYS, vaddr_mmio);
+
+	test_cr4_lam_set_clear(has_lam);
+
+	/* Set for the following LAM_SUP tests */
+	if (has_lam) {
+		fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+		report(!fault && lam_sup_active(), "Set CR4.LAM_SUP");
+	}
+
+	if (lam_sup_active())
+		flags |= FLAGS_LAM_ACTIVE;
+
+	if (is_la57()) {
+		flags |= FLAGS_LA57;
+		lam_mask = LAM57_MASK;
+	}
+
+	/* Test for normal memory */
+	test_ptr(flags, lam_mask, (u64)vaddr, false);
+	/* Test for MMIO to trigger instruction emulation */
+	test_ptr(flags, lam_mask, (u64)vaddr_mmio, true);
+	test_invpcid(flags, lam_mask, vaddr);
+	test_invlpg(lam_mask, vaddr, false);
+	if (fep_available)
+		test_invlpg(lam_mask, vaddr, true);
+}
+
+int main(int ac, char **av)
+{
+	bool has_lam;
+	bool fep_available = is_fep_available();
+
+	setup_vm();
+
+	has_lam = this_cpu_has(X86_FEATURE_LAM);
+	if (!has_lam)
+		report_info("This CPU doesn't support the LAM feature\n");
+	else
+		report_info("This CPU supports the LAM feature\n");
+
+	if (!fep_available)
+		report_skip("Skipping tests that rely on forced emulation, "
+			    "use kvm.force_emulation_prefix=1 to enable\n");
+
+	test_lam_sup(has_lam, fep_available);
+
+	return report_summary();
+}

diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index f4e7b25..b8a628e 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -492,3 +492,13 @@ file = cet.flat
 arch = x86_64
 smp = 2
 extra_params = -enable-kvm -m 2048 -cpu host
+
+[intel-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host
+
+[intel-no-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host,-lam

From patchwork Thu May 4 08:47:50 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13230939
From: Binbin Wu
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests v4 3/4] x86: Add test cases for LAM_{U48,U57}
Date: Thu, 4 May 2023 16:47:50 +0800
Message-Id: <20230504084751.968-4-binbin.wu@linux.intel.com>
In-Reply-To: <20230504084751.968-1-binbin.wu@linux.intel.com>
References: <20230504084751.968-1-binbin.wu@linux.intel.com>

This unit test covers:
1. CR3 LAM bits toggles.
2. Memory/MMIO access with a user-mode address containing LAM metadata.

Signed-off-by: Binbin Wu
Reviewed-by: Chao Gao
---
 lib/x86/processor.h |  2 ++
 x86/lam.c           | 76 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6803c9c..d45566a 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -64,7 +64,9 @@ static inline u64 set_la_non_canonical(u64 src, u64 mask)

 #define X86_CR3_PCID_MASK	GENMASK(11, 0)
 #define X86_CR3_LAM_U57_BIT	(61)
+#define X86_CR3_LAM_U57	BIT_ULL(X86_CR3_LAM_U57_BIT)
 #define X86_CR3_LAM_U48_BIT	(62)
+#define X86_CR3_LAM_U48	BIT_ULL(X86_CR3_LAM_U48_BIT)

 #define X86_CR4_VME_BIT	(0)
 #define X86_CR4_VME	BIT(X86_CR4_VME_BIT)

diff --git a/x86/lam.c b/x86/lam.c
index 4e631c5..2920747 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -18,6 +18,9 @@
 #include "vm.h"
 #include "asm/io.h"
 #include "ioram.h"
+#include "usermode.h"
+
+#define CR3_LAM_BITS_MASK	(X86_CR3_LAM_U48 | X86_CR3_LAM_U57)

 #define FLAGS_LAM_ACTIVE	BIT_ULL(0)
 #define FLAGS_LA57		BIT_ULL(1)
@@ -38,6 +41,16 @@ static inline bool lam_sup_active(void)
 	return !!(read_cr4() & X86_CR4_LAM_SUP);
 }

+static inline bool lam_u48_active(void)
+{
+	return (read_cr3() & CR3_LAM_BITS_MASK) == X86_CR3_LAM_U48;
+}
+
+static inline bool lam_u57_active(void)
+{
+	return !!(read_cr3() & X86_CR3_LAM_U57);
+}
+
 static void cr4_set_lam_sup(void *data)
 {
 	unsigned long cr4;
@@ -83,6 +96,7 @@ static void do_mov(void *mem)
 static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 {
 	bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+	bool la_57 = !!(arg1 & FLAGS_LA57);
 	u64 lam_mask = arg2;
 	u64 *ptr = (u64 *)arg3;
 	bool is_mmio = !!arg4;
@@ -96,6 +110,17 @@ static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 	report(fault != lam_active, "Test tagged addr (%s)",
 	       is_mmio ? "MMIO" : "Memory");

+	/*
+	 * This test case is only triggered when LAM_U57 is active and 4-level
+	 * paging is used. In that case, a #GP is triggered if bits 56:47 of
+	 * the address are not all 0.
+	 */
+	if (lam_active && (lam_mask == LAM57_MASK) && !la_57) {
+		ptr = (u64 *)set_la_non_canonical((u64)ptr, LAM48_MASK);
+		fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+		report(fault, "Test non-LAM-canonical addr (%s)",
+		       is_mmio ? "MMIO" : "Memory");
+	}
+
 	return 0;
 }

@@ -220,6 +245,56 @@ static void test_lam_sup(bool has_lam, bool fep_available)
 		test_invlpg(lam_mask, vaddr, true);
 }

+static void test_lam_user_mode(bool has_lam, u64 lam_mask, u64 mem, u64 mmio)
+{
+	unsigned r;
+	bool raised_vector;
+	unsigned long cr3 = read_cr3() & ~CR3_LAM_BITS_MASK;
+	u64 flags = 0;
+
+	if (is_la57())
+		flags |= FLAGS_LA57;
+
+	if (has_lam) {
+		if (lam_mask == LAM48_MASK) {
+			r = write_cr3_safe(cr3 | X86_CR3_LAM_U48);
+			report((r == 0) && lam_u48_active(), "Set LAM_U48");
+		} else {
+			r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
+			report((r == 0) && lam_u57_active(), "Set LAM_U57");
+		}
+	}
+	if (lam_u48_active() || lam_u57_active())
+		flags |= FLAGS_LAM_ACTIVE;
+
+	run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mem,
+		    false, &raised_vector);
+	run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mmio,
+		    true, &raised_vector);
+}
+
+static void test_lam_user(bool has_lam)
+{
+	phys_addr_t paddr;
+
+	/*
+	 * The physical address of AREA_NORMAL fits within 36 bits, so with
+	 * identity mapping the linear address will be considered a user-mode
+	 * address from the point of view of LAM, and the metadata bits are
+	 * not used as address bits for both LAM48 and LAM57.
+	 */
+	paddr = virt_to_phys(alloc_pages_flags(0, AREA_NORMAL));
+	_Static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL,
+		       "Identity mapping range check");
+
+	/*
+	 * Physical memory & MMIO have already been identity-mapped in
+	 * setup_mmu().
+	 */
+	test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
+	test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
+}
+
 int main(int ac, char **av)
 {
 	bool has_lam;
@@ -238,6 +313,7 @@ int main(int ac, char **av)
 			    "use kvm.force_emulation_prefix=1 to enable\n");

 	test_lam_sup(has_lam, fep_available);
+	test_lam_user(has_lam);

 	return report_summary();
 }

From patchwork Thu May 4 08:47:51 2023
X-Patchwork-Submitter: Binbin Wu
X-Patchwork-Id: 13230940
From: Binbin Wu
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM
Date: Thu, 4 May 2023 16:47:51 +0800
Message-Id: <20230504084751.968-5-binbin.wu@linux.intel.com>
In-Reply-To: <20230504084751.968-1-binbin.wu@linux.intel.com>
References: <20230504084751.968-1-binbin.wu@linux.intel.com>

When LAM is on, the linear address of the INVVPID operand can contain
metadata, and the linear address in the INVVPID descriptor can contain
metadata.

The added cases use a tagged descriptor address and/or a tagged target
invalidation address to make sure the behaviors are as expected when
LAM is on. The INVVPID cases also serve as common test cases for VMX
instruction VM exits.
Signed-off-by: Binbin Wu
Reviewed-by: Chao Gao
---
 x86/vmx_tests.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 217befe..678c9ec 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3225,6 +3225,54 @@ static void invvpid_test_not_in_vmx_operation(void)
 	TEST_ASSERT(!vmx_on());
 }

+/* LAM applies to the target address inside the descriptor of INVVPID */
+static void invvpid_test_lam(void)
+{
+	void *vaddr;
+	struct invvpid_operand *operand;
+	u64 lam_mask = LAM48_MASK;
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_LAM)) {
+		report_skip("LAM is not supported, skip INVVPID with LAM");
+		return;
+	}
+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
+
+	if (this_cpu_has(X86_FEATURE_LA57) && (read_cr4() & X86_CR4_LA57))
+		lam_mask = LAM57_MASK;
+
+	vaddr = alloc_vpage();
+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+	/*
+	 * Since stack memory addresses in KUT don't follow the kernel address
+	 * space partition rule, reuse the memory address for the descriptor
+	 * and the target address in the descriptor of INVVPID.
+	 */
+	operand = (struct invvpid_operand *)vaddr;
+	operand->vpid = 0xffff;
+	operand->gla = (u64)vaddr;
+
+	operand = (struct invvpid_operand *)vaddr;
+	operand->gla = set_la_non_canonical(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
+
+	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+								 lam_mask);
+	operand->gla = (u64)vaddr;
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
+
+	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+								 lam_mask);
+	operand->gla = set_la_non_canonical(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
+
+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3274,8 +3322,10 @@ static void invvpid_test(void)
 	/*
 	 * The gla operand is only validated for single-address INVVPID.
 	 */
-	if (types & (1u << INVVPID_ADDR))
+	if (types & (1u << INVVPID_ADDR)) {
 		try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
+		invvpid_test_lam();
+	}

 	invvpid_test_gp();
 	invvpid_test_ss();