Message ID: 20230412075134.21240-5-binbin.wu@linux.intel.com
State: New, archived
Series: x86: Add test cases for LAM
On Wed, Apr 12, 2023 at 03:51:34PM +0800, Binbin Wu wrote:
>When LAM is on, the linear address of INVVPID operand can contain
>metadata, and the linear address in the INVVPID descriptor can
>contain metadata.
>
>The added cases use tagged descriptor address and/or tagged target
>invalidation address to make sure the behaviors are as expected when
>LAM is on.
>Also, INVVPID cases can be used as the common test cases for VMX
>instruction VMExits.
>
>Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

with a few cosmetic comments below:

>---
> x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 60 insertions(+)
>
>diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>index 5ee1264..381ca1c 100644
>--- a/x86/vmx_tests.c
>+++ b/x86/vmx_tests.c
>@@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
> 	TEST_ASSERT(!vmx_on());
> }
>
>+#define LAM57_MASK	GENMASK_ULL(62, 57)
>+#define LAM48_MASK	GENMASK_ULL(62, 48)
>+
>+static inline u64 set_metadata(u64 src, u64 metadata_mask)
>+{
>+	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
>+}

Can you move the duplicate definitions and functions to a header file?

>+
>+/* LAM applies to the target address inside the descriptor of invvpid */
>+static void invvpid_test_lam(void)
>+{
>+	void *vaddr;
>+	struct invvpid_operand *operand;
>+	u64 lam_mask = LAM48_MASK;
>+	bool fault;
>+
>+	if (!this_cpu_has(X86_FEATURE_LAM)) {
>+		report_skip("LAM is not supported, skip INVVPID with LAM");
>+		return;
>+	}

...

>+
>+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>+		lam_mask = LAM57_MASK;
>+
>+	vaddr = alloc_vpage();
>+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>+	/*
>+	 * Since the stack memory address in KUT doesn't follow kernel address
>+	 * space partition rule, reuse the memory address for descriptor and
>+	 * the target address in the descriptor of invvpid.
>+	 */
>+	operand = (struct invvpid_operand *)vaddr;
>+	operand->vpid = 0xffff;
>+	operand->gla = (u64)vaddr;
>+
>+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
>+	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
>+		report_skip("Failed to enable LAM_SUP");
>+		return;
>+	}

It might be better to enable LAM_SUP right after above check for the LAM CPUID
bit. And no need to verify the result because there is a dedicated test case
already in patch 2.

>+
>+	operand = (struct invvpid_operand *)vaddr;
>+	operand->gla = set_metadata(operand->gla, lam_mask);
>+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>+	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
>+
>+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>+	operand->gla = (u64)vaddr;
>+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
>+
>+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>+	operand->gla = set_metadata(operand->gla, lam_mask);
>+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
>+
>+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
>+}
>+
> /*
>  * This does not test real-address mode, virtual-8086 mode, protected mode,
>  * or CPL > 0.
>@@ -3282,6 +3341,7 @@ static void invvpid_test(void)
> 	invvpid_test_pf();
> 	invvpid_test_compatibility_mode();
> 	invvpid_test_not_in_vmx_operation();
>+	invvpid_test_lam();

operand->gla is checked only in INVVPID_ADDR mode. So, the lam test should be
moved under "if (types & (1u << INVVPID_ADDR))" a few lines above.

> }
>
> /*
>--
>2.25.1
>
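A minimal sketch of the relocation suggested above (illustrative only; the
surrounding body of invvpid_test() is abbreviated and assumed, not taken from
the tree):

	if (types & (1u << INVVPID_ADDR)) {
		/* ... existing single-address INVVPID cases ... */

		/*
		 * Run the LAM cases here, since only the INVVPID_ADDR type
		 * consumes operand->gla.
		 */
		invvpid_test_lam();
	}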
On 4/21/2023 1:38 PM, Chao Gao wrote:
> On Wed, Apr 12, 2023 at 03:51:34PM +0800, Binbin Wu wrote:
>> When LAM is on, the linear address of INVVPID operand can contain
>> metadata, and the linear address in the INVVPID descriptor can
>> contain metadata.
>>
>> The added cases use tagged descriptor address and/or tagged target
>> invalidation address to make sure the behaviors are as expected when
>> LAM is on.
>> Also, INVVPID cases can be used as the common test cases for VMX
>> instruction VMExits.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
>
> with a few cosmetic comments below:
>
>> ---
>>  x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 60 insertions(+)
>>
>> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>> index 5ee1264..381ca1c 100644
>> --- a/x86/vmx_tests.c
>> +++ b/x86/vmx_tests.c
>> @@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
>>  	TEST_ASSERT(!vmx_on());
>>  }
>>
>> +#define LAM57_MASK	GENMASK_ULL(62, 57)
>> +#define LAM48_MASK	GENMASK_ULL(62, 48)
>> +
>> +static inline u64 set_metadata(u64 src, u64 metadata_mask)
>> +{
>> +	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
>> +}
> Can you move the duplicate definitions and functions to a header file?

Then add a new header file lam.h?
Didn't find a suitable existing header file to add these definitions.

>
>> +
>> +/* LAM applies to the target address inside the descriptor of invvpid */
>> +static void invvpid_test_lam(void)
>> +{
>> +	void *vaddr;
>> +	struct invvpid_operand *operand;
>> +	u64 lam_mask = LAM48_MASK;
>> +	bool fault;
>> +
>> +	if (!this_cpu_has(X86_FEATURE_LAM)) {
>> +		report_skip("LAM is not supported, skip INVVPID with LAM");
>> +		return;
>> +	}
> ...
>
>> +
>> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>> +		lam_mask = LAM57_MASK;
>> +
>> +	vaddr = alloc_vpage();
>> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>> +	/*
>> +	 * Since the stack memory address in KUT doesn't follow kernel address
>> +	 * space partition rule, reuse the memory address for descriptor and
>> +	 * the target address in the descriptor of invvpid.
>> +	 */
>> +	operand = (struct invvpid_operand *)vaddr;
>> +	operand->vpid = 0xffff;
>> +	operand->gla = (u64)vaddr;
>> +
>> +	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
>> +	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
>> +		report_skip("Failed to enable LAM_SUP");
>> +		return;
>> +	}
> It might be better to enable LAM_SUP right after above check for the LAM CPUID
> bit. And no need to verify the result because there is a dedicated test case
> already in patch 2.

OK.

>> +
>> +	operand = (struct invvpid_operand *)vaddr;
>> +	operand->gla = set_metadata(operand->gla, lam_mask);
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
>> +
>> +	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>> +	operand->gla = (u64)vaddr;
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
>> +
>> +	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>> +	operand->gla = set_metadata(operand->gla, lam_mask);
>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>> +	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
>> +
>> +	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
>> +}
>> +
>>  /*
>>   * This does not test real-address mode, virtual-8086 mode, protected mode,
>>   * or CPL > 0.
>> @@ -3282,6 +3341,7 @@ static void invvpid_test(void)
>>  	invvpid_test_pf();
>>  	invvpid_test_compatibility_mode();
>>  	invvpid_test_not_in_vmx_operation();
>> +	invvpid_test_lam();
> operand->gla is checked only in INVVPID_ADDR mode. So, the lam test should be
> moved under "if (types & (1u << INVVPID_ADDR))" a few lines above.

Yes, will update it.

>
>>  }
>>
>>  /*
>> --
>> 2.25.1
>>
On 4/23/2023 11:41 AM, Binbin Wu wrote:
>
> On 4/21/2023 1:38 PM, Chao Gao wrote:
>> On Wed, Apr 12, 2023 at 03:51:34PM +0800, Binbin Wu wrote:
>>> When LAM is on, the linear address of INVVPID operand can contain
>>> metadata, and the linear address in the INVVPID descriptor can
>>> contain metadata.
>>>
>>> The added cases use tagged descriptor address and/or tagged target
>>> invalidation address to make sure the behaviors are as expected when
>>> LAM is on.
>>> Also, INVVPID cases can be used as the common test cases for VMX
>>> instruction VMExits.
>>>
>>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> Reviewed-by: Chao Gao <chao.gao@intel.com>
>>
>> with a few cosmetic comments below:
>>
>>> ---
>>>  x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 60 insertions(+)
>>>
>>> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
>>> index 5ee1264..381ca1c 100644
>>> --- a/x86/vmx_tests.c
>>> +++ b/x86/vmx_tests.c
>>> @@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
>>>  	TEST_ASSERT(!vmx_on());
>>>  }
>>>
>>> +#define LAM57_MASK	GENMASK_ULL(62, 57)
>>> +#define LAM48_MASK	GENMASK_ULL(62, 48)
>>> +
>>> +static inline u64 set_metadata(u64 src, u64 metadata_mask)
>>> +{
>>> +	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
>>> +}
>> Can you move the duplicate definitions and functions to a header file?
>
> Then add a new header file lam.h?
> Didn't find a suitable existing header file to add these definitions.

I am about to move the definitions to lib/x86/processor.h and rename
set_metadata() to set_la_non_canonical() to be more generic.

index e00a32b..236b537 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -8,6 +8,14 @@
 #include <stdint.h>
 
 #define NONCANONICAL	0xaaaaaaaaaaaaaaaaull
+#define LAM57_MASK	GENMASK_ULL(62, 57)
+#define LAM48_MASK	GENMASK_ULL(62, 48)
+
+/* Set metadata with non-canonical pattern in mask bits of a linear address */
+static inline u64 set_la_non_canonical(u64 src, u64 mask)
+{
+	return (src & ~mask) | (NONCANONICAL & mask);
+}

>
>
>>
>>> +
>>> +/* LAM applies to the target address inside the descriptor of invvpid */
>>> +static void invvpid_test_lam(void)
>>> +{
>>> +	void *vaddr;
>>> +	struct invvpid_operand *operand;
>>> +	u64 lam_mask = LAM48_MASK;
>>> +	bool fault;
>>> +
>>> +	if (!this_cpu_has(X86_FEATURE_LAM)) {
>>> +		report_skip("LAM is not supported, skip INVVPID with LAM");
>>> +		return;
>>> +	}
>> ...
>>
>>> +
>>> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
>>> +		lam_mask = LAM57_MASK;
>>> +
>>> +	vaddr = alloc_vpage();
>>> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
>>> +	/*
>>> +	 * Since the stack memory address in KUT doesn't follow kernel address
>>> +	 * space partition rule, reuse the memory address for descriptor and
>>> +	 * the target address in the descriptor of invvpid.
>>> +	 */
>>> +	operand = (struct invvpid_operand *)vaddr;
>>> +	operand->vpid = 0xffff;
>>> +	operand->gla = (u64)vaddr;
>>> +
>>> +	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
>>> +	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
>>> +		report_skip("Failed to enable LAM_SUP");
>>> +		return;
>>> +	}
>> It might be better to enable LAM_SUP right after above check for the
>> LAM CPUID bit. And no need to verify the result because there is a
>> dedicated test case already in patch 2.
>
> OK.
>
>>> +
>>> +	operand = (struct invvpid_operand *)vaddr;
>>> +	operand->gla = set_metadata(operand->gla, lam_mask);
>>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
>>> +
>>> +	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>>> +	operand->gla = (u64)vaddr;
>>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
>>> +
>>> +	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
>>> +	operand->gla = set_metadata(operand->gla, lam_mask);
>>> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
>>> +	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
>>> +
>>> +	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
>>> +}
>>> +
>>>  /*
>>>   * This does not test real-address mode, virtual-8086 mode, protected mode,
>>>   * or CPL > 0.
>>> @@ -3282,6 +3341,7 @@ static void invvpid_test(void)
>>>  	invvpid_test_pf();
>>>  	invvpid_test_compatibility_mode();
>>>  	invvpid_test_not_in_vmx_operation();
>>> +	invvpid_test_lam();
>> operand->gla is checked only in INVVPID_ADDR mode. So, the lam test
>> should be moved under "if (types & (1u << INVVPID_ADDR))" a few lines
>> above.
>
> Yes, will update it.
>
>
>>
>>>  }
>>>
>>>  /*
>>> --
>>> 2.25.1
>>>
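As a rough standalone illustration of the bit arithmetic behind
set_la_non_canonical() (hosted C rather than kvm-unit-tests code; the example
address and the open-coded GENMASK_ULL(62, 48) value are assumptions for the
demo, not taken from the patch):

#include <stdio.h>
#include <stdint.h>

#define NONCANONICAL	0xaaaaaaaaaaaaaaaaull
/* Open-coded equivalent of GENMASK_ULL(62, 48): bits 62..48 set. */
#define LAM48_MASK	(((1ull << 15) - 1) << 48)

/* Same arithmetic as the helper proposed above. */
static inline uint64_t set_la_non_canonical(uint64_t src, uint64_t mask)
{
	return (src & ~mask) | (NONCANONICAL & mask);
}

int main(void)
{
	uint64_t va = 0xffff888000001000ull;	/* arbitrary canonical address */

	/*
	 * Prints 0xaaaa888000001000: bits 62:48 now carry the non-canonical
	 * pattern, while bit 63 and bits 47:0 are unchanged.
	 */
	printf("%#llx\n",
	       (unsigned long long)set_la_non_canonical(va, LAM48_MASK));
	return 0;
}

With LAM48 active for supervisor pointers, the hardware masks bits 62:48
before the canonicality check, so the tagged value still resolves to the same
linear address.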
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 5ee1264..381ca1c 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3225,6 +3225,65 @@ static void invvpid_test_not_in_vmx_operation(void)
 	TEST_ASSERT(!vmx_on());
 }
 
+#define LAM57_MASK	GENMASK_ULL(62, 57)
+#define LAM48_MASK	GENMASK_ULL(62, 48)
+
+static inline u64 set_metadata(u64 src, u64 metadata_mask)
+{
+	return (src & ~metadata_mask) | (NONCANONICAL & metadata_mask);
+}
+
+/* LAM applies to the target address inside the descriptor of invvpid */
+static void invvpid_test_lam(void)
+{
+	void *vaddr;
+	struct invvpid_operand *operand;
+	u64 lam_mask = LAM48_MASK;
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_LAM)) {
+		report_skip("LAM is not supported, skip INVVPID with LAM");
+		return;
+	}
+
+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+		lam_mask = LAM57_MASK;
+
+	vaddr = alloc_vpage();
+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+	/*
+	 * Since the stack memory address in KUT doesn't follow kernel address
+	 * space partition rule, reuse the memory address for descriptor and
+	 * the target address in the descriptor of invvpid.
+	 */
+	operand = (struct invvpid_operand *)vaddr;
+	operand->vpid = 0xffff;
+	operand->gla = (u64)vaddr;
+
+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
+	if (!(read_cr4() & X86_CR4_LAM_SUP)) {
+		report_skip("Failed to enable LAM_SUP");
+		return;
+	}
+
+	operand = (struct invvpid_operand *)vaddr;
+	operand->gla = set_metadata(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
+
+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
+	operand->gla = (u64)vaddr;
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
+
+	operand = (struct invvpid_operand *)set_metadata((u64)operand, lam_mask);
+	operand->gla = set_metadata(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
+
+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3282,6 +3341,7 @@ static void invvpid_test(void)
 	invvpid_test_pf();
 	invvpid_test_compatibility_mode();
 	invvpid_test_not_in_vmx_operation();
+	invvpid_test_lam();
 }
 
 /*
When LAM is on, the linear address of INVVPID operand can contain
metadata, and the linear address in the INVVPID descriptor can
contain metadata.

The added cases use tagged descriptor address and/or tagged target
invalidation address to make sure the behaviors are as expected when
LAM is on.
Also, INVVPID cases can be used as the common test cases for VMX
instruction VMExits.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 x86/vmx_tests.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)