KVM: x86: remove check on rmap in for_each_slot_rmap_range()

Message ID 20180926065407.27518-1-richard.weiyang@gmail.com (mailing list archive)
State New, archived
Series KVM: x86: remove check on rmap in for_each_slot_rmap_range()

Commit Message

Wei Yang Sept. 26, 2018, 6:54 a.m. UTC
In the loop for_each_slot_rmap_range(), slot_rmap_walk_okay() already
checks that the rmap is non-NULL before the loop body runs.

This patch removes the duplicate check on rmap in the loop body.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 arch/x86/kvm/mmu.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Sean Christopherson Sept. 26, 2018, 1:49 p.m. UTC | #1
On Wed, Sep 26, 2018 at 02:54:07PM +0800, kvm-owner@vger.kernel.org wrote:
> In loop for_each_slot_rmap_range(), slot_rmap_walk_okay() will check the
> rmap before continue the loop body.
> 
> This patch removes the duplicate check on rmap in the loop body.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Paolo Bonzini Oct. 1, 2018, 2:33 p.m. UTC | #2
On 26/09/2018 08:54, Wei Yang wrote:
> In loop for_each_slot_rmap_range(), slot_rmap_walk_okay() will check the
> rmap before continue the loop body.
> 
> This patch removes the duplicate check on rmap in the loop body.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  arch/x86/kvm/mmu.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 9ef1438be5f5..371d200ffd4a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -5456,8 +5456,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  
>  	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
>  			end_gfn, &iterator) {
> -		if (iterator.rmap)
> -			flush |= fn(kvm, iterator.rmap);
> +		flush |= fn(kvm, iterator.rmap);
>  
>  		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>  			if (flush && lock_flush_tlb) {
> 

Queued, thanks.

Paolo
Wei Yang Dec. 30, 2018, 8:23 a.m. UTC | #3
On Mon, Oct 01, 2018 at 04:33:45PM +0200, Paolo Bonzini wrote:
>On 26/09/2018 08:54, Wei Yang wrote:
>> In loop for_each_slot_rmap_range(), slot_rmap_walk_okay() will check the
>> rmap before continue the loop body.
>> 
>> This patch removes the duplicate check on rmap in the loop body.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  arch/x86/kvm/mmu.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 9ef1438be5f5..371d200ffd4a 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -5456,8 +5456,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
>>  
>>  	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
>>  			end_gfn, &iterator) {
>> -		if (iterator.rmap)
>> -			flush |= fn(kvm, iterator.rmap);
>> +		flush |= fn(kvm, iterator.rmap);
>>  
>>  		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>>  			if (flush && lock_flush_tlb) {
>> 
>
>Queued, thanks.

Paolo,

I don't see this in upstream. Was it missed?

>
>Paolo
Wei Yang Feb. 5, 2019, 12:30 p.m. UTC | #4
On Mon, Oct 01, 2018 at 04:33:45PM +0200, Paolo Bonzini wrote:
>On 26/09/2018 08:54, Wei Yang wrote:
>> In loop for_each_slot_rmap_range(), slot_rmap_walk_okay() will check the
>> rmap before continue the loop body.
>> 
>> This patch removes the duplicate check on rmap in the loop body.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  arch/x86/kvm/mmu.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 9ef1438be5f5..371d200ffd4a 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -5456,8 +5456,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
>>  
>>  	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
>>  			end_gfn, &iterator) {
>> -		if (iterator.rmap)
>> -			flush |= fn(kvm, iterator.rmap);
>> +		flush |= fn(kvm, iterator.rmap);
>>  
>>  		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>>  			if (flush && lock_flush_tlb) {
>> 
>
>Queued, thanks.
>
>Paolo

Hi, Paolo

Is this one queued?

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9ef1438be5f5..371d200ffd4a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5456,8 +5456,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 
 	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
 			end_gfn, &iterator) {
-		if (iterator.rmap)
-			flush |= fn(kvm, iterator.rmap);
+		flush |= fn(kvm, iterator.rmap);
 
 		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
 			if (flush && lock_flush_tlb) {