diff mbox series

KVM: x86: pvec always has more than 1 sp

Message ID 20180916080716.7270-1-richard.weiyang@gmail.com (mailing list archive)
State New, archived
Series KVM: x86: pvec always has more than 1 sp

Commit Message

Wei Yang Sept. 16, 2018, 8:07 a.m. UTC
Here is the code flow related to mmu_pages_first():

    mmu_unsync_walk()
        mmu_pages_add()
        __mmu_unsync_walk()
    for_each_sp()
        mmu_pages_first()

Every time mmu_pages_first() is invoked, pvec has already been prepared by
mmu_unsync_walk(), which inserts at least one sp into pvec.

This patch removes the check on pvec->nr, since that case cannot occur.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 arch/x86/kvm/mmu.c | 3 ---
 1 file changed, 3 deletions(-)

Comments

Wei Yang Oct. 10, 2018, 1:19 p.m. UTC | #1
Would someone like to take a look at this one?

On Sun, Sep 16, 2018 at 04:07:16PM +0800, Wei Yang wrote:
>Here is the code flow related to mmu_pages_first():
>
>    mmu_unsync_walk()
>        mmu_pages_add()
>        __mmu_unsync_walk()
>    for_each_sp()
>        mmu_pages_first()
>
>Every time mmu_pages_first() is invoked, pvec has already been prepared by
>mmu_unsync_walk(), which inserts at least one sp into pvec.
>
>This patch removes the check on pvec->nr, since that case cannot occur.
>
>Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>---
> arch/x86/kvm/mmu.c | 3 ---
> 1 file changed, 3 deletions(-)
>
>diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>index 899c029cff0d..0caaaa25e88b 100644
>--- a/arch/x86/kvm/mmu.c
>+++ b/arch/x86/kvm/mmu.c
>@@ -2267,9 +2267,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
> 	struct kvm_mmu_page *sp;
> 	int level;
> 
>-	if (pvec->nr == 0)
>-		return 0;
>-
> 	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
> 
> 	sp = pvec->page[0].sp;
>-- 
>2.15.1
Paolo Bonzini Oct. 15, 2018, 5:15 p.m. UTC | #2
On 16/09/2018 10:07, Wei Yang wrote:
> Here is the code flow related to mmu_pages_first():
> 
>     mmu_unsync_walk()
>         mmu_pages_add()
>        __mmu_unsync_walk()
>     for_each_sp()
>         mmu_pages_first()
> 
> Every time mmu_pages_first() is invoked, pvec has already been prepared by
> mmu_unsync_walk(), which inserts at least one sp into pvec.
> 
> This patch removes the check on pvec->nr, since that case cannot occur.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  arch/x86/kvm/mmu.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 899c029cff0d..0caaaa25e88b 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2267,9 +2267,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
>  	struct kvm_mmu_page *sp;
>  	int level;
>  
> -	if (pvec->nr == 0)
> -		return 0;
> -
>  	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
>  
>  	sp = pvec->page[0].sp;
> 

It's harmless and it may not be the case in the future for other uses of
for_each_sp, so I'd rather leave it here (anyway I'd change it to a
WARN_ON, not remove it altogether).

Paolo
Wei Yang Oct. 16, 2018, 3:38 a.m. UTC | #3
On Mon, Oct 15, 2018 at 07:15:44PM +0200, Paolo Bonzini wrote:
>On 16/09/2018 10:07, Wei Yang wrote:
>> Here is the code flow related to mmu_pages_first():
>> 
>>     mmu_unsync_walk()
>>         mmu_pages_add()
>>        __mmu_unsync_walk()
>>     for_each_sp()
>>         mmu_pages_first()
>> 
>> Every time mmu_pages_first() is invoked, pvec has already been prepared by
>> mmu_unsync_walk(), which inserts at least one sp into pvec.
>> 
>> This patch removes the check on pvec->nr, since that case cannot occur.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  arch/x86/kvm/mmu.c | 3 ---
>>  1 file changed, 3 deletions(-)
>> 
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 899c029cff0d..0caaaa25e88b 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2267,9 +2267,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
>>  	struct kvm_mmu_page *sp;
>>  	int level;
>>  
>> -	if (pvec->nr == 0)
>> -		return 0;
>> -
>>  	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
>>  
>>  	sp = pvec->page[0].sp;
>> 
>
>It's harmless and it may not be the case in the future for other uses of
>for_each_sp, so I'd rather leave it here (anyway I'd change it to a
>WARN_ON, not remove it altogether).
>

Reasonable, thanks for your comment.

>Paolo

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 899c029cff0d..0caaaa25e88b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2267,9 +2267,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
 	struct kvm_mmu_page *sp;
 	int level;
 
-	if (pvec->nr == 0)
-		return 0;
-
 	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
 
 	sp = pvec->page[0].sp;