
[v2] kvm: mmu: don't do memslot overflow check

Message ID 1429064694-3072-1-git-send-email-wanpeng.li@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Wanpeng Li April 15, 2015, 2:24 a.m. UTC
As Andres pointed out:

| I don't understand the value of this check here. Are we looking for a
| broken memslot? Shouldn't this be a BUG_ON? Is this the place to care
| about these things? npages is capped to KVM_MEM_MAX_NR_PAGES, i.e.
| 2^31. A 64 bit overflow would be caused by a gigantic gfn_start which
| would be trouble in many other ways.

This patch drops the memslot overflow check to simplify the code.
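
To illustrate the arithmetic, here is a minimal user-space sketch (not
kernel code; the 2^31 cap on npages is the KVM_MEM_MAX_NR_PAGES limit
quoted above, and the example base gfns are made up). A 64-bit gfn_end
can only wrap around when base_gfn lies within 2^31 of 2^64, which is
not a valid guest frame number in the first place:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t npages_max = 1ULL << 31;		 /* cap quoted above */
	uint64_t sane_base  = 1ULL << 40;		 /* ordinary guest gfn */
	uint64_t huge_base  = UINT64_MAX - (1ULL << 30); /* absurd gfn near 2^64 */

	/* gfn_end = base_gfn + npages - 1; it wraps only for the absurd base */
	printf("sane: end >= start? %d\n",
	       sane_base + npages_max - 1 >= sane_base); /* prints 1 */
	printf("huge: end >= start? %d\n",
	       huge_base + npages_max - 1 >= huge_base); /* prints 0: overflow */
	return 0;
}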

Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
v1 -> v2:
 * Fix Andres's name
 * Add Andres's Reviewed-by 

 arch/x86/kvm/mmu.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

Comments

Paolo Bonzini April 15, 2015, 3:01 p.m. UTC | #1
On 15/04/2015 04:24, Wanpeng Li wrote:
> As Andres pointed out:
> 
> | I don't understand the value of this check here. Are we looking for a
> | broken memslot? Shouldn't this be a BUG_ON? Is this the place to care
> | about these things? npages is capped to KVM_MEM_MAX_NR_PAGES, i.e.
> | 2^31. A 64 bit overflow would be caused by a gigantic gfn_start which
> | would be trouble in many other ways.
> 
> This patch drops the memslot overflow check to simplify the code.
> 
> Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> v1 -> v2:
>  * Fix Andres's name
>  * Add Andres's Reviewed-by 
> 
>  arch/x86/kvm/mmu.c | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2a0d77e..9265fda 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4505,19 +4505,12 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  	bool flush = false;
>  	unsigned long *rmapp;
>  	unsigned long last_index, index;
> -	gfn_t gfn_start, gfn_end;
>  
>  	spin_lock(&kvm->mmu_lock);
>  
> -	gfn_start = memslot->base_gfn;
> -	gfn_end = memslot->base_gfn + memslot->npages - 1;
> -
> -	if (gfn_start >= gfn_end)
> -		goto out;
> -
>  	rmapp = memslot->arch.rmap[0];
> -	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
> -					PT_PAGE_TABLE_LEVEL);
> +	last_index = gfn_to_index(memslot->base_gfn + memslot->npages - 1,
> +				memslot->base_gfn, PT_PAGE_TABLE_LEVEL);
>  
>  	for (index = 0; index <= last_index; ++index, ++rmapp) {
>  		if (*rmapp)
> @@ -4535,7 +4528,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
>  	if (flush)
>  		kvm_flush_remote_tlbs(kvm);
>  
> -out:
>  	spin_unlock(&kvm->mmu_lock);
>  }
>  
> 

Thanks, queued for 4.1.

Paolo

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2a0d77e..9265fda 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4505,19 +4505,12 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	bool flush = false;
 	unsigned long *rmapp;
 	unsigned long last_index, index;
-	gfn_t gfn_start, gfn_end;
 
 	spin_lock(&kvm->mmu_lock);
 
-	gfn_start = memslot->base_gfn;
-	gfn_end = memslot->base_gfn + memslot->npages - 1;
-
-	if (gfn_start >= gfn_end)
-		goto out;
-
 	rmapp = memslot->arch.rmap[0];
-	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
-					PT_PAGE_TABLE_LEVEL);
+	last_index = gfn_to_index(memslot->base_gfn + memslot->npages - 1,
+				memslot->base_gfn, PT_PAGE_TABLE_LEVEL);
 
 	for (index = 0; index <= last_index; ++index, ++rmapp) {
 		if (*rmapp)
@@ -4535,7 +4528,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	if (flush)
 		kvm_flush_remote_tlbs(kvm);
 
-out:
 	spin_unlock(&kvm->mmu_lock);
 }
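
For reference, gfn_to_index() shifts by zero at PT_PAGE_TABLE_LEVEL
(KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) is 0), so the new last_index
is simply npages - 1 for rmap[0]. A self-contained user-space sketch of
that 4 KiB case (gfn_to_index_4k and the example memslot values are
illustrative stand-ins, not the kernel helper itself):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

/* 4 KiB-level stand-in: the real helper also shifts by the huge-page order */
static gfn_t gfn_to_index_4k(gfn_t gfn, gfn_t base_gfn)
{
	return gfn - base_gfn;
}

int main(void)
{
	gfn_t base_gfn = 0x100000;	/* example memslot base */
	gfn_t npages   = 512;		/* example memslot size */

	/* matches the new last_index computation in the patch */
	gfn_t last_index = gfn_to_index_4k(base_gfn + npages - 1, base_gfn);

	printf("last_index = %llu\n", (unsigned long long)last_index); /* 511 */
	return 0;
}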