From patchwork Tue May 30 02:43:53 2023
From: Binbin Wu <binbin.wu@linux.intel.com>
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [PATCH v5 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
Date: Tue, 30 May 2023 10:43:53 +0800
Message-Id: <20230530024356.24870-2-binbin.wu@linux.intel.com>
In-Reply-To: <20230530024356.24870-1-binbin.wu@linux.intel.com>
References: <20230530024356.24870-1-binbin.wu@linux.intel.com>

If Linear Address Masking (LAM) is supported, VM entry allows CR3.LAM_U48
(bit 62) and CR3.LAM_U57 (bit 61) to be set in the CR3 field. Change the
test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57 in the
vmlaunch tests when LAM is supported.
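For orientation, here is a minimal sketch of what the new expectation amounts
to. It is illustrative only and not part of the patch; cr3_saved, vmcs_write(),
HOST_CR3 and test_vmx_vmlaunch() are the existing kvm-unit-tests helpers that
also appear in the hunk below.

/*
 * Sketch only: with LAM enumerated (CPUID.(EAX=7,ECX=1):EAX[26]), VM
 * entry accepts CR3.LAM_U57 (bit 61) and CR3.LAM_U48 (bit 62) in
 * HOST_CR3, so vmlaunch is expected to succeed (error code 0) instead
 * of failing the host-state checks.
 */
static void sketch_host_cr3_lam_bits(unsigned long cr3_saved)
{
        vmcs_write(HOST_CR3, cr3_saved | (1ul << 61));  /* CR3.LAM_U57 */
        test_vmx_vmlaunch(0);

        vmcs_write(HOST_CR3, cr3_saved | (1ul << 62));  /* CR3.LAM_U48 */
        test_vmx_vmlaunch(0);
}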
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h | 3 +++
 x86/vmx_tests.c     | 6 +++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6555056..901df98 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -55,6 +55,8 @@
 #define X86_CR0_PG BIT(X86_CR0_PG_BIT)
 
 #define X86_CR3_PCID_MASK GENMASK(11, 0)
+#define X86_CR3_LAM_U57_BIT (61)
+#define X86_CR3_LAM_U48_BIT (62)
 
 #define X86_CR4_VME_BIT (0)
 #define X86_CR4_VME BIT(X86_CR4_VME_BIT)
@@ -249,6 +251,7 @@ static inline bool is_intel(void)
 #define X86_FEATURE_FLUSH_L1D (CPUID(0x7, 0, EDX, 28))
 #define X86_FEATURE_ARCH_CAPABILITIES (CPUID(0x7, 0, EDX, 29))
 #define X86_FEATURE_PKS (CPUID(0x7, 0, ECX, 31))
+#define X86_FEATURE_LAM (CPUID(0x7, 1, EAX, 26))
 
 /*
  * Extended Leafs, a.k.a. AMD defined
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 7952ccb..217befe 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -6998,7 +6998,11 @@ static void test_host_ctl_regs(void)
         cr3 = cr3_saved | (1ul << i);
         vmcs_write(HOST_CR3, cr3);
         report_prefix_pushf("HOST_CR3 %lx", cr3);
-        test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+        if (this_cpu_has(X86_FEATURE_LAM) &&
+            ((i == X86_CR3_LAM_U57_BIT) || (i == X86_CR3_LAM_U48_BIT)))
+            test_vmx_vmlaunch(0);
+        else
+            test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
         report_prefix_pop();
     }

From patchwork Tue May 30 02:43:54 2023
From: Binbin Wu <binbin.wu@linux.intel.com>
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [PATCH v5 2/4] x86: Add test case for LAM_SUP
Date: Tue, 30 May 2023 10:43:54 +0800
Message-Id: <20230530024356.24870-3-binbin.wu@linux.intel.com>
In-Reply-To: <20230530024356.24870-1-binbin.wu@linux.intel.com>
References: <20230530024356.24870-1-binbin.wu@linux.intel.com>

From: Robert Hoo <robert.hu@linux.intel.com>

This unit test covers:
1. CR4.LAM_SUP toggles.
2. Memory & MMIO access with a supervisor-mode address carrying LAM metadata.
3. The INVLPG memory operand doesn't contain LAM metadata: if the address is
   in non-canonical form, the INVLPG is the same as a NOP (no #GP).
4. The INVPCID memory operand (descriptor pointer) may contain LAM metadata;
   however, the address in the descriptor should be canonical.

In x86/unittests.cfg, add two test cases/guest configurations, with and
without LAM.

LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368,
Chapter LINEAR ADDRESS MASKING (LAM)
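Before the diff, a worked example of the tagging helper this patch adds may
help. The code matches the lib/x86/processor.h hunk below; the sample source
address is an assumption chosen for illustration.

/*
 * set_la_non_canonical() (added below) plants the NONCANONICAL bit
 * pattern into the metadata field of a linear address. With 4-level
 * paging, the LAM48 metadata field is bits 62:48:
 *
 *   mask                = GENMASK_ULL(62, 48) = 0x7fff000000000000
 *   src                 = 0xffff888000001000 (sample kernel-half address)
 *   src & ~mask         = 0x8000888000001000
 *   NONCANONICAL & mask = 0x2aaa000000000000
 *   result              = 0xaaaa888000001000
 *
 * Without LAM the result is non-canonical, so dereferencing it raises
 * #GP; with CR4.LAM_SUP active, bits 62:48 are masked off before the
 * canonicality check and the access targets 0xffff888000001000.
 */
static inline u64 set_la_non_canonical(u64 src, u64 mask)
{
        return (src & ~mask) | (NONCANONICAL & mask);
}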
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h |  10 ++
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 243 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 4 files changed, 264 insertions(+)
 create mode 100644 x86/lam.c

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 901df98..6803c9c 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -8,6 +8,14 @@
 #include
 
 #define NONCANONICAL 0xaaaaaaaaaaaaaaaaull
+#define LAM57_MASK GENMASK_ULL(62, 57)
+#define LAM48_MASK GENMASK_ULL(62, 48)
+
+/* Set metadata with non-canonical pattern in mask bits of a linear address */
+static inline u64 set_la_non_canonical(u64 src, u64 mask)
+{
+        return (src & ~mask) | (NONCANONICAL & mask);
+}
 
 #ifdef __x86_64__
 # define R "r"
@@ -107,6 +115,8 @@
 #define X86_CR4_CET BIT(X86_CR4_CET_BIT)
 #define X86_CR4_PKS_BIT (24)
 #define X86_CR4_PKS BIT(X86_CR4_PKS_BIT)
+#define X86_CR4_LAM_SUP_BIT (28)
+#define X86_CR4_LAM_SUP BIT(X86_CR4_LAM_SUP_BIT)
 
 #define X86_EFLAGS_CF_BIT (0)
 #define X86_EFLAGS_CF BIT(X86_EFLAGS_CF_BIT)
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f483dea..fa11eb3 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -34,6 +34,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/lam.$(exe)
 
 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)
diff --git a/x86/lam.c b/x86/lam.c
new file mode 100644
index 0000000..4e631c5
--- /dev/null
+++ b/x86/lam.c
@@ -0,0 +1,243 @@
+/*
+ * Intel LAM unit test
+ *
+ * Copyright (C) 2023 Intel
+ *
+ * Author: Robert Hoo
+ *         Binbin Wu
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or
+ * later.
+ */
+
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+#include "vm.h"
+#include "asm/io.h"
+#include "ioram.h"
+
+#define FLAGS_LAM_ACTIVE BIT_ULL(0)
+#define FLAGS_LA57 BIT_ULL(1)
+
+struct invpcid_desc {
+        u64 pcid : 12;
+        u64 rsv : 52;
+        u64 addr;
+};
+
+static inline bool is_la57(void)
+{
+        return !!(read_cr4() & X86_CR4_LA57);
+}
+
+static inline bool lam_sup_active(void)
+{
+        return !!(read_cr4() & X86_CR4_LAM_SUP);
+}
+
+static void cr4_set_lam_sup(void *data)
+{
+        unsigned long cr4;
+
+        cr4 = read_cr4();
+        write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
+}
+
+static void cr4_clear_lam_sup(void *data)
+{
+        unsigned long cr4;
+
+        cr4 = read_cr4();
+        write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
+}
+
+static void test_cr4_lam_set_clear(bool has_lam)
+{
+        bool fault;
+
+        fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+        report((fault != has_lam) && (lam_sup_active() == has_lam),
+               "Set CR4.LAM_SUP");
+
+        fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
+        report(!fault, "Clear CR4.LAM_SUP");
+}
+
+/* Refer to emulator.c */
+static void do_mov(void *mem)
+{
+        unsigned long t1, t2;
+
+        t1 = 0x123456789abcdefull & -1ul;
+        asm volatile("mov %[t1], (%[mem])\n\t"
+                     "mov (%[mem]), %[t2]"
+                     : [t2]"=r"(t2)
+                     : [t1]"r"(t1), [mem]"r"(mem)
+                     : "memory");
+        report(t1 == t2, "Mov result check");
+}
+
+static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
+{
+        bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+        u64 lam_mask = arg2;
+        u64 *ptr = (u64 *)arg3;
+        bool is_mmio = !!arg4;
+        bool fault;
+
+        fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+        report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
+
+        ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
+        fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+        report(fault != lam_active, "Test tagged addr (%s)",
+               is_mmio ? "MMIO" : "Memory");
+
+        return 0;
+}
+
+static void do_invlpg(void *mem)
+{
+        invlpg(mem);
+}
+
+static void do_invlpg_fep(void *mem)
+{
+        asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
+}
+
+/* invlpg with a tagged address is the same as a NOP, no #GP expected. */
+static void test_invlpg(u64 lam_mask, void *va, bool fep)
+{
+        bool fault;
+        u64 *ptr;
+
+        ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
+        if (fep)
+                fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
+        else
+                fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
+
+        report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
+}
+
+static void do_invpcid(void *desc)
+{
+        struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
+
+        asm volatile("invpcid %0, %1" :
+                     : "m" (*desc_ptr), "r" (0UL)
+                     : "memory");
+}
+
+/* LAM doesn't apply to the target address in the descriptor of invpcid */
+static void test_invpcid(u64 flags, u64 lam_mask, void *data)
+{
+        /*
+         * Reuse the memory address for the descriptor, since stack memory
+         * addresses in KUT don't follow the kernel address space partitions.
+         */
+        struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
+        bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
+        bool fault;
+
+        if (!this_cpu_has(X86_FEATURE_PCID) ||
+            !this_cpu_has(X86_FEATURE_INVPCID)) {
+                report_skip("INVPCID not supported");
+                return;
+        }
+
+        memset(desc_ptr, 0, sizeof(struct invpcid_desc));
+        desc_ptr->addr = (u64)data;
+
+        fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+        report(!fault, "INVPCID: untagged pointer + untagged addr");
+
+        desc_ptr->addr = set_la_non_canonical(desc_ptr->addr, lam_mask);
+        fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+        report(fault, "INVPCID: untagged pointer + tagged addr");
+
+        desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+                                                               lam_mask);
+        fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+        report(fault, "INVPCID: tagged pointer + tagged addr");
+
+        desc_ptr = (struct invpcid_desc *)data;
+        desc_ptr->addr = (u64)data;
+        desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+                                                               lam_mask);
+        fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+        report(fault != lam_active, "INVPCID: tagged pointer + untagged addr");
+}
+
+static void test_lam_sup(bool has_lam, bool fep_available)
+{
+        void *vaddr, *vaddr_mmio;
+        phys_addr_t paddr;
+        u64 lam_mask = LAM48_MASK;
+        u64 flags = 0;
+        bool fault;
+
+        /*
+         * KUT initializes vfree_top to 0 for X86_64, and each virtual
+         * address allocation moves downward from vfree_top. The return
+         * value of alloc_vpage() is guaranteed to be a canonical, kernel
+         * mode address, since only a small amount of virtual address space
+         * is allocated in this test.
+         */
+        vaddr = alloc_vpage();
+        vaddr_mmio = alloc_vpage();
+        paddr = virt_to_phys(alloc_page());
+        install_page(current_page_table(), paddr, vaddr);
+        install_page(current_page_table(), IORAM_BASE_PHYS, vaddr_mmio);
+
+        test_cr4_lam_set_clear(has_lam);
+
+        /* Set for the following LAM_SUP tests */
+        if (has_lam) {
+                fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+                report(!fault && lam_sup_active(), "Set CR4.LAM_SUP");
+        }
+
+        if (lam_sup_active())
+                flags |= FLAGS_LAM_ACTIVE;
+
+        if (is_la57()) {
+                flags |= FLAGS_LA57;
+                lam_mask = LAM57_MASK;
+        }
+
+        /* Test for normal memory */
+        test_ptr(flags, lam_mask, (u64)vaddr, false);
+        /* Test for MMIO to trigger instruction emulation */
+        test_ptr(flags, lam_mask, (u64)vaddr_mmio, true);
+        test_invpcid(flags, lam_mask, vaddr);
+        test_invlpg(lam_mask, vaddr, false);
+        if (fep_available)
+                test_invlpg(lam_mask, vaddr, true);
+}
+
+int main(int ac, char **av)
+{
+        bool has_lam;
+        bool fep_available = is_fep_available();
+
+        setup_vm();
+
+        has_lam = this_cpu_has(X86_FEATURE_LAM);
+        if (!has_lam)
+                report_info("This CPU doesn't support the LAM feature\n");
+        else
+                report_info("This CPU supports the LAM feature\n");
+
+        if (!fep_available)
+                report_skip("Skipping tests that require forced emulation, "
+                            "use kvm.force_emulation_prefix=1 to enable\n");
+
+        test_lam_sup(has_lam, fep_available);
+
+        return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index f4e7b25..b8a628e 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -492,3 +492,13 @@ file = cet.flat
 arch = x86_64
 smp = 2
 extra_params = -enable-kvm -m 2048 -cpu host
+
+[intel-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host
+
+[intel-no-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host,-lam
From patchwork Tue May 30 02:43:55 2023
From: Binbin Wu <binbin.wu@linux.intel.com>
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [PATCH v5 3/4] x86: Add test cases for LAM_{U48,U57}
Date: Tue, 30 May 2023 10:43:55 +0800
Message-Id: <20230530024356.24870-4-binbin.wu@linux.intel.com>
In-Reply-To: <20230530024356.24870-1-binbin.wu@linux.intel.com>
References: <20230530024356.24870-1-binbin.wu@linux.intel.com>

This unit test covers:
1. CR3 LAM bit toggles.
2. Memory/MMIO access with a user-mode address containing LAM metadata.
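As a sketch of the CR3 side of this (illustrative only, not part of the patch;
read_cr3(), write_cr3_safe() and BIT_ULL() are existing kvm-unit-tests helpers,
and the sample pointer value is an assumption):

/*
 * Enabling LAM for user pointers is a CR3 write: bit 61 selects
 * LAM_U57 (metadata in bits 62:57) and bit 62 selects LAM_U48
 * (metadata in bits 62:48). If both are set, LAM_U57 takes
 * precedence, which is why the lam_u48_active() helper added below
 * checks that bit 62 is set alone.
 */
static void sketch_enable_lam_u57(void)
{
        unsigned long cr3 = read_cr3() & ~(BIT_ULL(61) | BIT_ULL(62));

        if (write_cr3_safe(cr3 | BIT_ULL(61)) == 0) {
                /*
                 * User-mode accesses may now carry metadata in bits
                 * 62:57, e.g. 0x0600000000401000 dereferences as
                 * 0x0000000000401000.
                 */
        }
}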
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h |  2 ++
 x86/lam.c           | 76 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6803c9c..d45566a 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -64,7 +64,9 @@ static inline u64 set_la_non_canonical(u64 src, u64 mask)
 
 #define X86_CR3_PCID_MASK GENMASK(11, 0)
 #define X86_CR3_LAM_U57_BIT (61)
+#define X86_CR3_LAM_U57 BIT_ULL(X86_CR3_LAM_U57_BIT)
 #define X86_CR3_LAM_U48_BIT (62)
+#define X86_CR3_LAM_U48 BIT_ULL(X86_CR3_LAM_U48_BIT)
 
 #define X86_CR4_VME_BIT (0)
 #define X86_CR4_VME BIT(X86_CR4_VME_BIT)
diff --git a/x86/lam.c b/x86/lam.c
index 4e631c5..2920747 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -18,6 +18,9 @@
 #include "vm.h"
 #include "asm/io.h"
 #include "ioram.h"
+#include "usermode.h"
+
+#define CR3_LAM_BITS_MASK (X86_CR3_LAM_U48 | X86_CR3_LAM_U57)
 
 #define FLAGS_LAM_ACTIVE BIT_ULL(0)
 #define FLAGS_LA57 BIT_ULL(1)
@@ -38,6 +41,16 @@ static inline bool lam_sup_active(void)
         return !!(read_cr4() & X86_CR4_LAM_SUP);
 }
 
+static inline bool lam_u48_active(void)
+{
+        return (read_cr3() & CR3_LAM_BITS_MASK) == X86_CR3_LAM_U48;
+}
+
+static inline bool lam_u57_active(void)
+{
+        return !!(read_cr3() & X86_CR3_LAM_U57);
+}
+
 static void cr4_set_lam_sup(void *data)
 {
         unsigned long cr4;
@@ -83,6 +96,7 @@ static void do_mov(void *mem)
 static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 {
         bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+        bool la_57 = !!(arg1 & FLAGS_LA57);
         u64 lam_mask = arg2;
         u64 *ptr = (u64 *)arg3;
         bool is_mmio = !!arg4;
@@ -96,6 +110,17 @@ static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
         report(fault != lam_active, "Test tagged addr (%s)",
                is_mmio ? "MMIO" : "Memory");
 
+        /*
+         * This case is only triggered when LAM_U57 is active under 4-level
+         * paging; a #GP is then raised if bits 56:47 are not all 0.
+         */
+        if (lam_active && (lam_mask == LAM57_MASK) && !la_57) {
+                ptr = (u64 *)set_la_non_canonical((u64)ptr, LAM48_MASK);
+                fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+                report(fault, "Test non-LAM-canonical addr (%s)",
+                       is_mmio ? "MMIO" : "Memory");
+        }
+
         return 0;
 }
 
@@ -220,6 +245,56 @@ static void test_lam_sup(bool has_lam, bool fep_available)
                 test_invlpg(lam_mask, vaddr, true);
 }
 
+static void test_lam_user_mode(bool has_lam, u64 lam_mask, u64 mem, u64 mmio)
+{
+        unsigned r;
+        bool raised_vector;
+        unsigned long cr3 = read_cr3() & ~CR3_LAM_BITS_MASK;
+        u64 flags = 0;
+
+        if (is_la57())
+                flags |= FLAGS_LA57;
+
+        if (has_lam) {
+                if (lam_mask == LAM48_MASK) {
+                        r = write_cr3_safe(cr3 | X86_CR3_LAM_U48);
+                        report((r == 0) && lam_u48_active(), "Set LAM_U48");
+                } else {
+                        r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
+                        report((r == 0) && lam_u57_active(), "Set LAM_U57");
+                }
+        }
+        if (lam_u48_active() || lam_u57_active())
+                flags |= FLAGS_LAM_ACTIVE;
+
+        run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mem,
+                    false, &raised_vector);
+        run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mmio,
+                    true, &raised_vector);
+}
+
+static void test_lam_user(bool has_lam)
+{
+        phys_addr_t paddr;
+
+        /*
+         * The physical address of AREA_NORMAL fits in 36 bits, so with
+         * identity mapping the linear address is treated as a user-mode
+         * address from LAM's point of view, and the metadata bits are not
+         * part of the address for either LAM48 or LAM57.
+         */
+        paddr = virt_to_phys(alloc_pages_flags(0, AREA_NORMAL));
+        _Static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL,
+                       "Identity mapping range check");
+
+        /*
+         * Physical memory & MMIO have already been identity mapped in
+         * setup_mmu().
+         */
+        test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
+        test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
+}
+
 int main(int ac, char **av)
 {
         bool has_lam;
@@ -238,6 +313,7 @@ int main(int ac, char **av)
                     "use kvm.force_emulation_prefix=1 to enable\n");
 
         test_lam_sup(has_lam, fep_available);
+        test_lam_user(has_lam);
 
         return report_summary();
 }

From patchwork Tue May 30 02:43:56 2023
From: Binbin Wu <binbin.wu@linux.intel.com>
To: kvm@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
    robert.hu@linux.intel.com, binbin.wu@linux.intel.com
Subject: [PATCH v5 4/4] x86: Add test case for INVVPID with LAM
Date: Tue, 30 May 2023 10:43:56 +0800
Message-Id: <20230530024356.24870-5-binbin.wu@linux.intel.com>
In-Reply-To: <20230530024356.24870-1-binbin.wu@linux.intel.com>
References: <20230530024356.24870-1-binbin.wu@linux.intel.com>

LAM applies to the linear address of the INVVPID operand; however, it
doesn't apply to the linear address in the INVVPID descriptor.

The added cases use a tagged operand or a tagged target invalidation
address to make sure the behavior is as expected when LAM is on. The
INVVPID case using a tagged operand also serves as a common test case
for VMX instruction VM exits.
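A sketch of the two linear addresses involved, for reference. This is
commentary, not part of the patch; struct invvpid_operand, ds_invvpid() and
try_invvpid() are the existing kvm-unit-tests definitions used in the diff
below.

/*
 * Single-address INVVPID involves two linear addresses:
 *
 *   struct invvpid_operand {
 *           u64 vpid;
 *           u64 gla;
 *   } *operand;
 *
 * 1. The address of the memory operand itself (operand): LAM applies,
 *    so a tagged (non-canonical) operand pointer must not fault while
 *    CR4.LAM_SUP is set.
 * 2. The invalidation target inside the descriptor (operand->gla):
 *    LAM does not apply, so a non-canonical gla makes INVVPID fail
 *    with VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID in VMX_INST_ERROR.
 */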
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 x86/vmx_tests.c | 46 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 45 insertions(+), 1 deletion(-)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 217befe..3f3f203 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3225,6 +3225,48 @@ static void invvpid_test_not_in_vmx_operation(void)
         TEST_ASSERT(!vmx_on());
 }
 
+/* LAM doesn't apply to the target address inside the descriptor of invvpid */
+static void invvpid_test_lam(void)
+{
+        void *vaddr;
+        struct invvpid_operand *operand;
+        u64 lam_mask = LAM48_MASK;
+        bool fault;
+
+        if (!this_cpu_has(X86_FEATURE_LAM)) {
+                report_skip("LAM is not supported, skip INVVPID with LAM");
+                return;
+        }
+        write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
+
+        if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+                lam_mask = LAM57_MASK;
+
+        vaddr = alloc_vpage();
+        install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+        /*
+         * Since stack memory addresses in KUT don't follow the kernel
+         * address space partition rule, reuse the memory address for both
+         * the descriptor and the target address in the descriptor.
+         */
+        operand = (struct invvpid_operand *)vaddr;
+        operand->vpid = 0xffff;
+        operand->gla = (u64)vaddr;
+        operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+                                                                 lam_mask);
+        fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+        report(!fault, "INVVPID (LAM on): tagged operand");
+
+        /*
+         * LAM doesn't apply to the address inside the descriptor; expect
+         * failure, with VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID set in
+         * VMX_INST_ERROR.
+         */
+        try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
+
+        write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3274,8 +3316,10 @@ static void invvpid_test(void)
         /*
          * The gla operand is only validated for single-address INVVPID.
          */
-        if (types & (1u << INVVPID_ADDR))
+        if (types & (1u << INVVPID_ADDR)) {
                 try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
+                invvpid_test_lam();
+        }
 
         invvpid_test_gp();
         invvpid_test_ss();