Message ID: 20200108001210.12913-1-sean.j.christopherson@intel.com
State: New, archived
Series: KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM
On 08/01/20 01:12, Sean Christopherson wrote:
> Remove the bogus 64-bit only condition from the check that disables MMIO
> spte optimization when the system supports the max PA, i.e. doesn't have
> any reserved PA bits.  32-bit KVM always uses PAE paging for the shadow
> MMU, and per Intel's SDM:
>
>   PAE paging translates 32-bit linear addresses to 52-bit physical
>   addresses.
>
> The kernel's restrictions on max physical addresses are limits on how
> much memory the kernel can reasonably use, not what physical addresses
> are supported by hardware.
>
> Fixes: ce88decffd17 ("KVM: MMU: mmio page fault support")
> Cc: stable@vger.kernel.org
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7269130ea5e2..d9c07343d979 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6191,7 +6191,7 @@ static void kvm_set_mmio_spte_mask(void)
>  	 * If reserved bit is not supported, clear the present bit to disable
>  	 * mmio page fault.
>  	 */
> -	if (IS_ENABLED(CONFIG_X86_64) && shadow_phys_bits == 52)
> +	if (shadow_phys_bits == 52)
>  		mask &= ~1ull;
>
>  	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);

Queued, thanks.

Paolo