Message ID: 20240122085354.9510-5-binbin.wu@linux.intel.com (mailing list archive)
State: New, archived
Series: x86: Add test cases for LAM
On Mon, Jan 22, 2024, Binbin Wu wrote:
> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)

Checking for feature support seems superfluous, e.g. LA57 should never be set if
it's unsupported.  Then you can do

	lam_mask = is_la57_enabled() ? LAM57_MASK : LAM48_MASK;

> +		lam_mask = LAM57_MASK;
> +
> +	vaddr = alloc_vpage();
> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
> +	/*
> +	 * Since the stack memory address in KUT doesn't follow kernel address
> +	 * space partition rule, reuse the memory address for descriptor and
> +	 * the target address in the descriptor of invvpid.
> +	 */
> +	operand = (struct invvpid_operand *)vaddr;

Why bother backing the virtual address?  MOV needs a valid translation, but
INVVPID does not (ditto for INVLPG and INVPCID, though it might be simpler and
easier to just use the allocated address for those).

> +	operand->vpid = 0xffff;
> +	operand->gla = (u64)vaddr;
> +	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
> +								 lam_mask);
> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
> +	report(!fault, "INVVPID (LAM on): tagged operand");
On 6/6/2024 2:57 AM, Sean Christopherson wrote:
> On Mon, Jan 22, 2024, Binbin Wu wrote:
>> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
> Checking for feature support seems superfluous, e.g. LA57 should never be set if
> it's unsupported.  Then you can do
>
> 	lam_mask = is_la57_enabled() ? LAM57_MASK : LAM48_MASK;

OK, will drop the feature check.

>
>> +		lam_mask = LAM57_MASK;
>> +
>> +	vaddr = alloc_vpage();
>> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>> +	/*
>> +	 * Since the stack memory address in KUT doesn't follow kernel address
>> +	 * space partition rule, reuse the memory address for descriptor and
>> +	 * the target address in the descriptor of invvpid.
>> +	 */
>> +	operand = (struct invvpid_operand *)vaddr;
> Why bother backing the virtual address?  MOV needs a valid translation, but
> INVVPID does not (ditto for INVLPG and INVPCID, though it might be simpler and
> easier to just use the allocated address for those).

OK, will remove the unnecessary code.

>
>> +	operand->vpid = 0xffff;
>> +	operand->gla = (u64)vaddr;
>> +	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
>> +								 lam_mask);
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): tagged operand");
On 6/18/2024 1:55 PM, Binbin Wu wrote:
>
> On 6/6/2024 2:57 AM, Sean Christopherson wrote:
>> On Mon, Jan 22, 2024, Binbin Wu wrote:
>>> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>> Checking for feature support seems superfluous, e.g. LA57 should never
>> be set if it's unsupported.  Then you can do
>>
>> 	lam_mask = is_la57_enabled() ? LAM57_MASK : LAM48_MASK;
> OK, will drop the feature check.
>
>>
>>> +		lam_mask = LAM57_MASK;
>>> +
>>> +	vaddr = alloc_vpage();
>>> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>>> +	/*
>>> +	 * Since the stack memory address in KUT doesn't follow kernel address
>>> +	 * space partition rule, reuse the memory address for descriptor and
>>> +	 * the target address in the descriptor of invvpid.
>>> +	 */
>>> +	operand = (struct invvpid_operand *)vaddr;
>> Why bother backing the virtual address?  MOV needs a valid translation,
>> but INVVPID does not (ditto for INVLPG and INVPCID, though it might be
>> simpler and easier to just use the allocated address for those).
>
> OK, will remove the unnecessary code.

Sorry, the backing is still needed here.  The target address inside the
invvpid descriptor (operand->gla) doesn't need a valid translation, but the
invvpid descriptor itself does.

>
>>
>>> +	operand->vpid = 0xffff;
>>> +	operand->gla = (u64)vaddr;
>>> +	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
>>> +								 lam_mask);
>>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +	report(!fault, "INVVPID (LAM on): tagged operand");
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 63080361..a855bdcb 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3220,6 +3220,48 @@ static void invvpid_test_not_in_vmx_operation(void)
 	TEST_ASSERT(!vmx_on());
 }
 
+/* LAM doesn't apply to the linear address inside the descriptor of invvpid */
+static void invvpid_test_lam(void)
+{
+	void *vaddr;
+	struct invvpid_operand *operand;
+	u64 lam_mask = LAM48_MASK;
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_LAM)) {
+		report_skip("LAM is not supported, skip INVVPID with LAM");
+		return;
+	}
+	write_cr4(read_cr4() | X86_CR4_LAM_SUP);
+
+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+		lam_mask = LAM57_MASK;
+
+	vaddr = alloc_vpage();
+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+	/*
+	 * Since the stack memory address in KUT doesn't follow kernel address
+	 * space partition rule, reuse the memory address for descriptor and
+	 * the target address in the descriptor of invvpid.
+	 */
+	operand = (struct invvpid_operand *)vaddr;
+	operand->vpid = 0xffff;
+	operand->gla = (u64)vaddr;
+	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+								 lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged operand");
+
+	/*
+	 * Verify that LAM doesn't apply to the address inside the descriptor
+	 * even when LAM is enabled. i.e., the address in the descriptor should
+	 * be canonical.
+	 */
+	try_invvpid(INVVPID_ADDR, 0xffff, (u64)operand);
+
+	write_cr4(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3269,8 +3311,10 @@ static void invvpid_test(void)
 	/*
 	 * The gla operand is only validated for single-address INVVPID.
 	 */
-	if (types & (1u << INVVPID_ADDR))
+	if (types & (1u << INVVPID_ADDR)) {
 		try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
+		invvpid_test_lam();
+	}
 
 	invvpid_test_gp();
 	invvpid_test_ss();