[v2,61/66] KVM: x86: Don't propagate MMU lpage support to memslot.disallow_lpage

Message ID 20200302235709.27467-62-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series KVM: x86: Introduce KVM cpu caps

Commit Message

Sean Christopherson March 2, 2020, 11:57 p.m. UTC
Stop propagating MMU large page support into a memslot's disallow_lpage
now that the MMU's max_page_level handles the scenario where VMX's EPT is
enabled and EPT doesn't support 2M pages.

No functional change intended.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 3 ---
 arch/x86/kvm/x86.c     | 6 ++----
 2 files changed, 2 insertions(+), 7 deletions(-)
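
For reference, the MMU-side handling that makes this removal safe: an
earlier patch in the series has VMX compute the largest page level EPT
can map and hand it to the MMU once, at hardware setup, instead of
toggling a global large-page flag. A rough sketch of that flow, using
the helper names from the eventually-merged series (the exact v2 API
may differ, and vmx_set_ept_page_level() is an illustrative name):

	/*
	 * Sketch only: mirrors the merged series, not necessarily this
	 * v2 posting. The MMU clamps all mappings to the reported level,
	 * so EPT-without-2M no longer needs memslot.disallow_lpage.
	 */
	static __init void vmx_set_ept_page_level(void)
	{
		int ept_lpage_level;

		if (!enable_ept)
			ept_lpage_level = 0;			/* shadow paging: MMU picks its own max */
		else if (cpu_has_vmx_ept_1g_page())
			ept_lpage_level = PT_PDPE_LEVEL;	/* 1GB EPT pages */
		else if (cpu_has_vmx_ept_2m_page())
			ept_lpage_level = PT_DIRECTORY_LEVEL;	/* 2MB EPT pages */
		else
			ept_lpage_level = PT_PAGE_TABLE_LEVEL;	/* 4KB only */

		kvm_configure_mmu(enable_ept, ept_lpage_level);
	}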

Comments

Paolo Bonzini March 3, 2020, 3:31 p.m. UTC | #1
On 03/03/20 00:57, Sean Christopherson wrote:
> Stop propagating MMU large page support into a memslot's disallow_lpage
> now that the MMU's max_page_level handles the scenario where VMX's EPT is
> enabled and EPT doesn't support 2M pages.
> 
> No functional change intended.
> 
> [...]
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 4fdf5b04f148..cc9b543d210b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9863,11 +9863,9 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
>  		ugfn = slot->userspace_addr >> PAGE_SHIFT;
>  		/*
>  		 * If the gfn and userspace address are not aligned wrt each
> -		 * other, or if explicitly asked to, disable large page
> -		 * support for this slot
> +		 * other, disable large page support for this slot.
>  		 */
> -		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
> -		    !kvm_largepages_enabled()) {
> +		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
>  			unsigned long j;
>  
>  			for (j = 0; j < lpages; ++j)
> 

This should technically go in the next patch.

Paolo
Sean Christopherson March 3, 2020, 4 p.m. UTC | #2
On Tue, Mar 03, 2020 at 04:31:15PM +0100, Paolo Bonzini wrote:
> On 03/03/20 00:57, Sean Christopherson wrote:
> > [...]
> 
> This should technically go in the next patch.

Hmm, yeah, I agree.  IIRC I split it this way so that the next patch didn't
touch any arch code, but removing only the call to kvm_disable_largepages()
would be cleaner.

Patch

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f8eb081b63fe..1fbe54dc3263 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7698,9 +7698,6 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_tpr_shadow())
 		kvm_x86_ops->update_cr8_intercept = NULL;
 
-	if (enable_ept && !cpu_has_vmx_ept_2m_page())
-		kvm_disable_largepages();
-
 #if IS_ENABLED(CONFIG_HYPERV)
 	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
 	    && enable_ept) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4fdf5b04f148..cc9b543d210b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9863,11 +9863,9 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 		ugfn = slot->userspace_addr >> PAGE_SHIFT;
 		/*
 		 * If the gfn and userspace address are not aligned wrt each
-		 * other, or if explicitly asked to, disable large page
-		 * support for this slot
+		 * other, disable large page support for this slot.
 		 */
-		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
-		    !kvm_largepages_enabled()) {
+		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
 			unsigned long j;
 
 			for (j = 0; j < lpages; ++j)
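
The check simplified above is purely about relative alignment: a huge
guest mapping is only possible when the guest frame number and the host
userspace frame number agree in all bits below the huge page size, i.e.
when they are congruent modulo KVM_PAGES_PER_HPAGE(level). A standalone
illustration of the math (plain userspace C with hypothetical names,
not kernel code; 2MB huge pages assumed, so 512 4KB pages per huge page):

	#include <stdio.h>

	/* 2MB huge page = 512 4KB pages: the low 9 bits of the guest and
	 * host frame numbers must match for a 2MB mapping to line up. */
	#define PAGES_PER_2M 512UL

	static int hugepage_misaligned(unsigned long base_gfn, unsigned long ugfn)
	{
		/* Mirrors the kernel check: any differing low bit rules out 2MB. */
		return ((base_gfn ^ ugfn) & (PAGES_PER_2M - 1)) != 0;
	}

	int main(void)
	{
		/* gfn 0x200 and ugfn 0x400 are both 2MB-aligned: large pages OK. */
		printf("%d\n", hugepage_misaligned(0x200, 0x400));	/* prints 0 */
		/* gfn 0x200 vs ugfn 0x401 differ in the low 9 bits: 4KB only. */
		printf("%d\n", hugepage_misaligned(0x200, 0x401));	/* prints 1 */
		return 0;
	}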