[RFC] KVM: SVM: reduce guest MAXPHYADDR by one in case C-bit is a physical bit

Message ID 20211015150524.2030966-1-vkuznets@redhat.com (mailing list archive)

Commit Message

Vitaly Kuznetsov Oct. 15, 2021, 3:05 p.m. UTC
Several selftests (memslot_modification_stress_test, kvm_page_table_test,
dirty_log_perf_test, ...) which rely on vm_get_max_gfn() started to fail
since commit ef4c9f4f65462 ("KVM: selftests: Fix 32-bit truncation of
vm_get_max_gfn()") on AMD EPYC 7401P:

 ./tools/testing/selftests/kvm/demand_paging_test
 Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
 guest physical test memory offset: 0xffffbffff000
 Finished creating vCPUs and starting uffd threads
 Started all vCPUs
 ==== Test Assertion Failure ====
   demand_paging_test.c:63: false
   pid=47131 tid=47134 errno=0 - Success
      1	0x000000000040281b: vcpu_worker at demand_paging_test.c:63
      2	0x00007fb36716e431: ?? ??:0
      3	0x00007fb36709c912: ?? ??:0
   Invalid guest sync status: exit_reason=SHUTDOWN

The commit, however, seems to be correct; it just revealed an already
present issue. AMD CPUs which support SEV may have a reduced physical
address space, e.g. on AMD EPYC 7401P I see:

 Address sizes:  43 bits physical, 48 bits virtual

The guest physical address space, however, is not reduced, as stated in
commit e39f00f60ebd ("KVM: x86: Use kernel's x86_phys_bits to handle
reduced MAXPHYADDR"). This seems to be almost correct; the APM, however,
has one more clause (15.34.6):

  Note that because guest physical addresses are always translated through
  the nested page tables, the size of the guest physical address space is
  not impacted by any physical address space reduction indicated in CPUID
  8000_001F[EBX]. If the C-bit is a physical address bit however, the guest
  physical address space is effectively reduced by 1 bit.

Implement the reduction.

Fixes: e39f00f60ebd ("KVM: x86: Use kernel's x86_phys_bits to handle reduced MAXPHYADDR")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
- RFC: I may have misdiagnosed the problem, as I didn't dig into where
 exactly the guest crashes.
---
 arch/x86/kvm/cpuid.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
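
For context, the C-bit location and the physical address space reduction
referenced above are both reported by CPUID Fn8000_001F[EBX] (bits 5:0 and
11:6, respectively). A minimal user-space sketch, purely illustrative and
not part of the patch, for dumping those fields on an AMD host could look
like:

  /*
   * Illustrative sketch only: decode the CPUID Fn8000_001F fields the
   * patch consults (EAX bit 1 = SEV supported, EBX[5:0] = C-bit position,
   * EBX[11:6] = physical address bit reduction).
   */
  #include <cpuid.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
                  return 1;       /* leaf not available on this CPU */

          printf("SEV supported:           %u\n", (eax >> 1) & 1);
          printf("C-bit position:          %u\n", ebx & 0x3f);
          printf("Phys addr bit reduction: %u\n", (ebx >> 6) & 0x3f);
          return 0;
  }

Run on the host, this makes it easy to check whether EBX[5:0] falls below
the raw MAXPHYADDR, which is the condition the hunk below keys off.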

Comments

Sean Christopherson Oct. 15, 2021, 3:24 p.m. UTC | #1
On Fri, Oct 15, 2021, Vitaly Kuznetsov wrote:
> Several selftests (memslot_modification_stress_test, kvm_page_table_test,
> dirty_log_perf_test,.. ) which rely on vm_get_max_gfn() started to fail
> since commit ef4c9f4f65462 ("KVM: selftests: Fix 32-bit truncation of
> vm_get_max_gfn()") on AMD EPYC 7401P:
> 
>  ./tools/testing/selftests/kvm/demand_paging_test
>  Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
>  guest physical test memory offset: 0xffffbffff000

This looks a lot like the signature I remember from the original bug[1].  I assume
you're hitting the magic HyperTransport region[2].  I thought that was fixed, but
the hack-a-fix for selftests never got applied[3].

[1] https://lore.kernel.org/lkml/20210623230552.4027702-4-seanjc@google.com/
[2] https://lkml.kernel.org/r/7e3a90c0-75a1-b8fe-dbcf-bda16502ace9@amd.com
[3] https://lkml.kernel.org/r/20210805105423.412878-1-pbonzini@redhat.com

>  Finished creating vCPUs and starting uffd threads
>  Started all vCPUs
>  ==== Test Assertion Failure ====
>    demand_paging_test.c:63: false
>    pid=47131 tid=47134 errno=0 - Success
>       1	0x000000000040281b: vcpu_worker at demand_paging_test.c:63
>       2	0x00007fb36716e431: ?? ??:0
>       3	0x00007fb36709c912: ?? ??:0
>    Invalid guest sync status: exit_reason=SHUTDOWN
> 
> The commit, however, seems to be correct, it just revealed an already
> present issue. AMD CPUs which support SEV may have a reduced physical
> address space, e.g. on AMD EPYC 7401P I see:
> 
>  Address sizes:  43 bits physical, 48 bits virtual
> 
> The guest physical address space, however, is not reduced as stated in
> commit e39f00f60ebd ("KVM: x86: Use kernel's x86_phys_bits to handle
> reduced MAXPHYADDR"). This seems to be almost correct, however, APM has one
> more clause (15.34.6):
> 
>   Note that because guest physical addresses are always translated through
>   the nested page tables, the size of the guest physical address space is
>   not impacted by any physical address space reduction indicated in CPUID
>   8000_001F[EBX]. If the C-bit is a physical address bit however, the guest
>   physical address space is effectively reduced by 1 bit.
> 
> Implement the reduction.
> 
> Fixes: e39f00f60ebd (KVM: x86: Use kernel's x86_phys_bits to handle reduced MAXPHYADDR)
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
> - RFC: I may have misdiagnosed the problem as I didn't dig to where exactly
>  the guest crashes.
> ---
>  arch/x86/kvm/cpuid.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 751aa85a3001..04ae280a0b66 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -923,13 +923,20 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>  		 *
>  		 * If TDP is enabled but an explicit guest MAXPHYADDR is not
>  		 * provided, use the raw bare metal MAXPHYADDR as reductions to
> -		 * the HPAs do not affect GPAs.
> +		 * the HPAs do not affect GPAs. The value, however, has to be
> +		 * reduced by 1 in case C-bit is a physical bit (APM section
> +		 * 15.34.6).
>  		 */
> -		if (!tdp_enabled)
> +		if (!tdp_enabled) {
>  			g_phys_as = boot_cpu_data.x86_phys_bits;
> -		else if (!g_phys_as)
> +		} else if (!g_phys_as) {
>  			g_phys_as = phys_as;
>  
> +			if (kvm_cpu_cap_has(X86_FEATURE_SEV) &&
> +			    (cpuid_ebx(0x8000001f) & 0x3f) < g_phys_as)
> +				g_phys_as -= 1;

This is incorrect; non-SEV guests do not see a reduced address space.  See Tom's
explanation[*]

[*] https://lkml.kernel.org/r/324a95ee-b962-acdf-9bd7-b8b23b9fb991@amd.com

> +		}
> +
>  		entry->eax = g_phys_as | (virt_as << 8);
>  		entry->edx = 0;
>  		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
> -- 
> 2.31.1
>
Maxim Levitsky Oct. 17, 2021, 7:54 a.m. UTC | #2
On Fri, 2021-10-15 at 15:24 +0000, Sean Christopherson wrote:
> On Fri, Oct 15, 2021, Vitaly Kuznetsov wrote:
> > Several selftests (memslot_modification_stress_test, kvm_page_table_test,
> > dirty_log_perf_test,.. ) which rely on vm_get_max_gfn() started to fail
> > since commit ef4c9f4f65462 ("KVM: selftests: Fix 32-bit truncation of
> > vm_get_max_gfn()") on AMD EPYC 7401P:
> > 
> >  ./tools/testing/selftests/kvm/demand_paging_test
> >  Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
> >  guest physical test memory offset: 0xffffbffff000
> 
> This look a lot like the signature I remember from the original bug[1].  I assume
> you're hitting the magic HyperTransport region[2].  I thought that was fixed, but
> the hack-a-fix for selftests never got applied[3].

Hi Vitaly and everyone!

You are the 3rd person to suffer from this issue :-( Sean Christopherson was first, I was second.

I reported this, then I think we found out that it is not the HyperTransport region after all,
and I think that the whole thing got lost in 'trying to get answers from AMD'.

https://lore.kernel.org/lkml/ac72b77c-f633-923b-8019-69347db706be@redhat.com/


I'll say, a hack to reduce it by 1 bit is still better than failing tests,
at least until AMD explains to us what is going on.

Sorry that you had to debug this.

Best regards,
	Maxim Levitsky 


> 
> [1] https://lore.kernel.org/lkml/20210623230552.4027702-4-seanjc@google.com/
> [2] https://lkml.kernel.org/r/7e3a90c0-75a1-b8fe-dbcf-bda16502ace9@amd.com
> [3] https://lkml.kernel.org/r/20210805105423.412878-1-pbonzini@redhat.com
> 
> >  Finished creating vCPUs and starting uffd threads
> >  Started all vCPUs
> >  ==== Test Assertion Failure ====
> >    demand_paging_test.c:63: false
> >    pid=47131 tid=47134 errno=0 - Success
> >       1	0x000000000040281b: vcpu_worker at demand_paging_test.c:63
> >       2	0x00007fb36716e431: ?? ??:0
> >       3	0x00007fb36709c912: ?? ??:0
> >    Invalid guest sync status: exit_reason=SHUTDOWN
> > 
> > The commit, however, seems to be correct, it just revealed an already
> > present issue. AMD CPUs which support SEV may have a reduced physical
> > address space, e.g. on AMD EPYC 7401P I see:
> > 
> >  Address sizes:  43 bits physical, 48 bits virtual
> > 
> > The guest physical address space, however, is not reduced as stated in
> > commit e39f00f60ebd ("KVM: x86: Use kernel's x86_phys_bits to handle
> > reduced MAXPHYADDR"). This seems to be almost correct, however, APM has one
> > more clause (15.34.6):
> > 
> >   Note that because guest physical addresses are always translated through
> >   the nested page tables, the size of the guest physical address space is
> >   not impacted by any physical address space reduction indicated in CPUID
> >   8000_001F[EBX]. If the C-bit is a physical address bit however, the guest
> >   physical address space is effectively reduced by 1 bit.
> > 
> > Implement the reduction.
> > 
> > Fixes: e39f00f60ebd (KVM: x86: Use kernel's x86_phys_bits to handle reduced MAXPHYADDR)
> > Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> > ---
> > - RFC: I may have misdiagnosed the problem as I didn't dig to where exactly
> >  the guest crashes.
> > ---
> >  arch/x86/kvm/cpuid.c | 13 ++++++++++---
> >  1 file changed, 10 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index 751aa85a3001..04ae280a0b66 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -923,13 +923,20 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
> >  		 *
> >  		 * If TDP is enabled but an explicit guest MAXPHYADDR is not
> >  		 * provided, use the raw bare metal MAXPHYADDR as reductions to
> > -		 * the HPAs do not affect GPAs.
> > +		 * the HPAs do not affect GPAs. The value, however, has to be
> > +		 * reduced by 1 in case C-bit is a physical bit (APM section
> > +		 * 15.34.6).
> >  		 */
> > -		if (!tdp_enabled)
> > +		if (!tdp_enabled) {
> >  			g_phys_as = boot_cpu_data.x86_phys_bits;
> > -		else if (!g_phys_as)
> > +		} else if (!g_phys_as) {
> >  			g_phys_as = phys_as;
> >  
> > +			if (kvm_cpu_cap_has(X86_FEATURE_SEV) &&
> > +			    (cpuid_ebx(0x8000001f) & 0x3f) < g_phys_as)
> > +				g_phys_as -= 1;
> 
> This is incorrect, non-SEV guests do not see a reduced address space.  See Tom's
> explanation[*]
> 
> [*] https://lkml.kernel.org/r/324a95ee-b962-acdf-9bd7-b8b23b9fb991@amd.com
> 
> > +		}
> > +
> >  		entry->eax = g_phys_as | (virt_as << 8);
> >  		entry->edx = 0;
> >  		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
> > -- 
> > 2.31.1
> >
Vitaly Kuznetsov Oct. 18, 2021, 7:39 a.m. UTC | #3
Sean Christopherson <seanjc@google.com> writes:

> On Fri, Oct 15, 2021, Vitaly Kuznetsov wrote:
>> Several selftests (memslot_modification_stress_test, kvm_page_table_test,
>> dirty_log_perf_test,.. ) which rely on vm_get_max_gfn() started to fail
>> since commit ef4c9f4f65462 ("KVM: selftests: Fix 32-bit truncation of
>> vm_get_max_gfn()") on AMD EPYC 7401P:
>> 
>>  ./tools/testing/selftests/kvm/demand_paging_test
>>  Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
>>  guest physical test memory offset: 0xffffbffff000
>
> This look a lot like the signature I remember from the original bug[1].  I assume
> you're hitting the magic HyperTransport region[2].  I thought that was fixed, but
> the hack-a-fix for selftests never got applied[3].
>
> [1] https://lore.kernel.org/lkml/20210623230552.4027702-4-seanjc@google.com/

Hey,

it seems I'm only three months late to the party!

> [2] https://lkml.kernel.org/r/7e3a90c0-75a1-b8fe-dbcf-bda16502ace9@amd.com
> [3] https://lkml.kernel.org/r/20210805105423.412878-1-pbonzini@redhat.com
>

This patch helps indeed, thanks! Paolo, any particular reason you
haven't queued it yet?

>>  Finished creating vCPUs and starting uffd threads
>>  Started all vCPUs
>>  ==== Test Assertion Failure ====
>>    demand_paging_test.c:63: false
>>    pid=47131 tid=47134 errno=0 - Success
>>       1	0x000000000040281b: vcpu_worker at demand_paging_test.c:63
>>       2	0x00007fb36716e431: ?? ??:0
>>       3	0x00007fb36709c912: ?? ??:0
>>    Invalid guest sync status: exit_reason=SHUTDOWN
>> 
>> The commit, however, seems to be correct, it just revealed an already
>> present issue. AMD CPUs which support SEV may have a reduced physical
>> address space, e.g. on AMD EPYC 7401P I see:
>> 
>>  Address sizes:  43 bits physical, 48 bits virtual
>> 
>> The guest physical address space, however, is not reduced as stated in
>> commit e39f00f60ebd ("KVM: x86: Use kernel's x86_phys_bits to handle
>> reduced MAXPHYADDR"). This seems to be almost correct, however, APM has one
>> more clause (15.34.6):
>> 
>>   Note that because guest physical addresses are always translated through
>>   the nested page tables, the size of the guest physical address space is
>>   not impacted by any physical address space reduction indicated in CPUID
>>   8000_001F[EBX]. If the C-bit is a physical address bit however, the guest
>>   physical address space is effectively reduced by 1 bit.
>> 
>> Implement the reduction.
>> 
>> Fixes: e39f00f60ebd (KVM: x86: Use kernel's x86_phys_bits to handle reduced MAXPHYADDR)
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>> - RFC: I may have misdiagnosed the problem as I didn't dig to where exactly
>>  the guest crashes.
>> ---
>>  arch/x86/kvm/cpuid.c | 13 ++++++++++---
>>  1 file changed, 10 insertions(+), 3 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>> index 751aa85a3001..04ae280a0b66 100644
>> --- a/arch/x86/kvm/cpuid.c
>> +++ b/arch/x86/kvm/cpuid.c
>> @@ -923,13 +923,20 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
>>  		 *
>>  		 * If TDP is enabled but an explicit guest MAXPHYADDR is not
>>  		 * provided, use the raw bare metal MAXPHYADDR as reductions to
>> -		 * the HPAs do not affect GPAs.
>> +		 * the HPAs do not affect GPAs. The value, however, has to be
>> +		 * reduced by 1 in case C-bit is a physical bit (APM section
>> +		 * 15.34.6).
>>  		 */
>> -		if (!tdp_enabled)
>> +		if (!tdp_enabled) {
>>  			g_phys_as = boot_cpu_data.x86_phys_bits;
>> -		else if (!g_phys_as)
>> +		} else if (!g_phys_as) {
>>  			g_phys_as = phys_as;
>>  
>> +			if (kvm_cpu_cap_has(X86_FEATURE_SEV) &&
>> +			    (cpuid_ebx(0x8000001f) & 0x3f) < g_phys_as)
>> +				g_phys_as -= 1;
>
> This is incorrect, non-SEV guests do not see a reduced address space.  See Tom's
> explanation[*]
>
> [*] https://lkml.kernel.org/r/324a95ee-b962-acdf-9bd7-b8b23b9fb991@amd.com
>

I see, thanks for the pointer.

>> +		}
>> +
>>  		entry->eax = g_phys_as | (virt_as << 8);
>>  		entry->edx = 0;
>>  		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
>> -- 
>> 2.31.1
>> 
>
Vitaly Kuznetsov Oct. 18, 2021, 7:42 a.m. UTC | #4
Maxim Levitsky <mlevitsk@redhat.com> writes:

> On Fri, 2021-10-15 at 15:24 +0000, Sean Christopherson wrote:
>> On Fri, Oct 15, 2021, Vitaly Kuznetsov wrote:
>> > Several selftests (memslot_modification_stress_test, kvm_page_table_test,
>> > dirty_log_perf_test,.. ) which rely on vm_get_max_gfn() started to fail
>> > since commit ef4c9f4f65462 ("KVM: selftests: Fix 32-bit truncation of
>> > vm_get_max_gfn()") on AMD EPYC 7401P:
>> > 
>> >  ./tools/testing/selftests/kvm/demand_paging_test
>> >  Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
>> >  guest physical test memory offset: 0xffffbffff000
>> 
>> This look a lot like the signature I remember from the original bug[1].  I assume
>> you're hitting the magic HyperTransport region[2].  I thought that was fixed, but
>> the hack-a-fix for selftests never got applied[3].
>
> Hi Vitaly and everyone!
>
> You are the 3rd person to suffer from this issue :-( Sean Christopherson was first, I was second.
>
> I reported this, then I think we found out that it is not the HyperTransport region after all,
> and I think that the whole thing lost in 'trying to get answers from AMD'.
>
> https://lore.kernel.org/lkml/ac72b77c-f633-923b-8019-69347db706be@redhat.com/
>
>
> I'll say, a hack to reduce it by 1 bit is still better that failing tests,
> at least until AMD explains to us, about what is going on.
>
> Sorry that you had to debug this.

I didn't spend too much time on this; that's the reason for 'RFC' :-) I
agree we need at least a short-term solution, as permanently failing
tests may start masking newly introduced issues.
Vitaly Kuznetsov Oct. 18, 2021, 11:23 a.m. UTC | #5
Vitaly Kuznetsov <vkuznets@redhat.com> writes:

> Sean Christopherson <seanjc@google.com> writes:
>
>> On Fri, Oct 15, 2021, Vitaly Kuznetsov wrote:
>>> Several selftests (memslot_modification_stress_test, kvm_page_table_test,
>>> dirty_log_perf_test,.. ) which rely on vm_get_max_gfn() started to fail
>>> since commit ef4c9f4f65462 ("KVM: selftests: Fix 32-bit truncation of
>>> vm_get_max_gfn()") on AMD EPYC 7401P:
>>> 
>>>  ./tools/testing/selftests/kvm/demand_paging_test
>>>  Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
>>>  guest physical test memory offset: 0xffffbffff000
>>
>> This look a lot like the signature I remember from the original bug[1].  I assume
>> you're hitting the magic HyperTransport region[2].  I thought that was fixed, but
>> the hack-a-fix for selftests never got applied[3].
>>
>> [1] https://lore.kernel.org/lkml/20210623230552.4027702-4-seanjc@google.com/
>
> Hey,
>
> it seems I'm only three months late to the party!
>
>> [2] https://lkml.kernel.org/r/7e3a90c0-75a1-b8fe-dbcf-bda16502ace9@amd.com
>> [3] https://lkml.kernel.org/r/20210805105423.412878-1-pbonzini@redhat.com
>>
>
> This patch helps indeed

FWIW, 'access_tracking_perf_test' remains broken even after the patch is
applied:

# ./access_tracking_perf_test 
Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
guest physical test memory offset: 0xfffcbffff000

Populating memory             : 3.858448918s
Writing to populated memory   : 0.937319626s
Reading from populated memory : 0.003073207s
==== Test Assertion Failure ====
  lib/kvm_util.c:1382: false
  pid=6422 tid=6425 errno=4 - Interrupted system call
     1	0x000000000040667d: addr_gpa2hva at kvm_util.c:1382
     2	 (inlined by) addr_gpa2hva at kvm_util.c:1376
     3	 (inlined by) addr_gva2hva at kvm_util.c:2245
     4	0x0000000000402909: lookup_pfn at access_tracking_perf_test.c:98
     5	 (inlined by) mark_vcpu_memory_idle at access_tracking_perf_test.c:152
     6	 (inlined by) vcpu_thread_main at access_tracking_perf_test.c:232
     7	0x00007fd02d1cb431: ?? ??:0
     8	0x00007fd02d0f9912: ?? ??:0
  No vm physical memory at 0xfcbffff000

(and my cpuid hack reducing guest physical address space by half doesn't
seem to help either)
Paolo Bonzini Oct. 18, 2021, 11:44 a.m. UTC | #6
On 17/10/21 09:54, Maxim Levitsky wrote:
> 
> I'll say, a hack to reduce it by 1 bit is still better that failing 
> tests, at least until AMD explains to us, about what is going on.

What's going on is documented in the thread at
https://yhbt.net/lore/all/4f46f3ab-60e4-3118-1438-10a1e17cd900@suse.com/:

> That doesn't really follow what Andrew gave us, namely:
> 
> 1) On parts with <40 bits, its fully hidden from software
>
> 2) Before Fam17h, it was always 12G just below 1T, even if there was
> more RAM above this location
>
> 3) On Fam17h and later, it is variable based on SME, and is either
> just below 2^48 (no encryption) or 2^43 (encryption)

If you can use this information to implement the fix, that'd be very
nice.  I didn't apply the hackish fix because I wanted to test it on an
SME-enabled box.

Paolo
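
As a closing aside: a purely hypothetical sketch of how a selftest-side
guard could be derived from the quoted description. This is not the
hack-a-fix from [3]; the helper name and the 12G region size assumed for
Fam17h+ parts are assumptions, not something the thread states.

  /*
   * Sketch only: compute the highest GPA a selftest should touch so that
   * test memory stays below the reserved region described above.  Parts
   * with MAXPHYADDR < 40 bits hide the region entirely and need no clamp
   * (not handled here).  The 12G size is only documented for pre-Fam17h
   * parts; reusing it for Fam17h+ is an assumption.
   */
  #include <stdbool.h>
  #include <stdint.h>

  #define RESERVED_REGION_SIZE    (12ULL << 30)   /* 12G */

  static uint64_t amd_max_usable_gpa(unsigned int family, bool mem_encrypt)
  {
          uint64_t region_end;

          if (family < 0x17)
                  region_end = 1ULL << 40;        /* just below 1T */
          else
                  region_end = mem_encrypt ? (1ULL << 43) : (1ULL << 48);

          return region_end - RESERVED_REGION_SIZE - 1;
  }

A test could then clamp the value it gets from vm_get_max_gfn() to
amd_max_usable_gpa() shifted right by the page shift on AMD hosts; whether
that matches the actual hardware behaviour is exactly the open question in
this thread.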

Patch

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 751aa85a3001..04ae280a0b66 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -923,13 +923,20 @@  static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		 *
 		 * If TDP is enabled but an explicit guest MAXPHYADDR is not
 		 * provided, use the raw bare metal MAXPHYADDR as reductions to
-		 * the HPAs do not affect GPAs.
+		 * the HPAs do not affect GPAs. The value, however, has to be
+		 * reduced by 1 in case C-bit is a physical bit (APM section
+		 * 15.34.6).
 		 */
-		if (!tdp_enabled)
+		if (!tdp_enabled) {
 			g_phys_as = boot_cpu_data.x86_phys_bits;
-		else if (!g_phys_as)
+		} else if (!g_phys_as) {
 			g_phys_as = phys_as;
 
+			if (kvm_cpu_cap_has(X86_FEATURE_SEV) &&
+			    (cpuid_ebx(0x8000001f) & 0x3f) < g_phys_as)
+				g_phys_as -= 1;
+		}
+
 		entry->eax = g_phys_as | (virt_as << 8);
 		entry->edx = 0;
 		cpuid_entry_override(entry, CPUID_8000_0008_EBX);