[v2] KVM: x86/mmu: fix counting of rmap entries in pte_list_add

Message ID 1600837138-21110-1-git-send-email-lirongqing@baidu.com (mailing list archive)
State New, archived
Series: [v2] KVM: x86/mmu: fix counting of rmap entries in pte_list_add

Commit Message

Li RongQing Sept. 23, 2020, 4:58 a.m. UTC
counting of rmap entries was missed when desc->sptes is full
and desc->more is NULL

and merging two PTE_LIST_EXT-1 check as one, to avoids the
extra comparison to give slightly optimization

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---

Comments

Sean Christopherson Sept. 25, 2020, 4:43 p.m. UTC | #1
On Wed, Sep 23, 2020 at 12:58:58PM +0800, Li RongQing wrote:
> counting of rmap entries was missed when desc->sptes is full
> and desc->more is NULL
> 
> and merging two PTE_LIST_EXT-1 check as one, to avoids the
> extra comparison to give slightly optimization

Please write complete sentences, and use proper capitalization and punctuation.
It's not a big deal for short changelogs, but it's crucial for readability of
larger changelogs.

E.g.

  Fix an off-by-one style bug in pte_list_add() where it failed to account
  the last full set of SPTEs, i.e. when desc->sptes is full and desc->more
  is NULL.

  Merge the two "PTE_LIST_EXT-1" checks as part of the fix to avoid an
  extra comparison.

> Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>

No need to give me credit, I just nitpicked the code, identifying the bug
and the fix was all you. :-)

Thanks for the fix!

> Signed-off-by: Li RongQing <lirongqing@baidu.com>

Paolo,

Although it's a bug fix, I don't think this needs a Fixes / Cc:stable.  The bug
only results in rmap recycling being delayed by one rmap.  Stable kernels can
probably live with an off-by-one bug given that RMAP_RECYCLE_THRESHOLD is
completely arbitrary. :-)

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

> [...]
Li RongQing Sept. 26, 2020, 6:34 a.m. UTC | #2
> -----Original Message-----
> From: Sean Christopherson [mailto:sean.j.christopherson@intel.com]
> Sent: Saturday, September 26, 2020 12:44 AM
> To: Li,Rongqing <lirongqing@baidu.com>
> Cc: kvm@vger.kernel.org; x86@kernel.org
> Subject: Re: [PATCH][v2] KVM: x86/mmu: fix counting of rmap entries in
> pte_list_add
> 
> [...]
> 
> Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 

Thank you very much, I will send v3.

-Li

Patch

Changes from v1: merged the two PTE_LIST_EXT-1 checks into one

 arch/x86/kvm/mmu/mmu.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a5d0207e7189..c4068be6bb3f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1273,12 +1273,14 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
 	} else {
 		rmap_printk("pte_list_add: %p %llx many->many\n", spte, *spte);
 		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-		while (desc->sptes[PTE_LIST_EXT-1] && desc->more) {
-			desc = desc->more;
+		while (desc->sptes[PTE_LIST_EXT-1]) {
 			count += PTE_LIST_EXT;
-		}
-		if (desc->sptes[PTE_LIST_EXT-1]) {
-			desc->more = mmu_alloc_pte_list_desc(vcpu);
+
+			if (!desc->more) {
+				desc->more = mmu_alloc_pte_list_desc(vcpu);
+				desc = desc->more;
+				break;
+			}
 			desc = desc->more;
 		}
 		for (i = 0; desc->sptes[i]; ++i)
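
For reference, here is a minimal, self-contained sketch of the two counting loops. It is not the real pte_list_add(): PTE_LIST_EXT is shrunk to 3, the sptes are plain integers rather than u64 pointers, and mmu_alloc_pte_list_desc() is stubbed with calloc(). It runs both loops over a single full desc whose ->more is NULL and shows the PTE_LIST_EXT entries the old loop fails to count.

#include <stdio.h>
#include <stdlib.h>

#define PTE_LIST_EXT 3

struct pte_list_desc {
	unsigned long sptes[PTE_LIST_EXT];
	struct pte_list_desc *more;
};

/* Stand-in for mmu_alloc_pte_list_desc(); returns a zeroed desc. */
static struct pte_list_desc *alloc_desc(void)
{
	return calloc(1, sizeof(struct pte_list_desc));
}

/*
 * Old logic: the while loop stops before the last desc when its ->more is
 * NULL, so that desc's PTE_LIST_EXT entries never reach 'count'.
 */
static int count_old(struct pte_list_desc *desc)
{
	int count = 0, i;

	while (desc->sptes[PTE_LIST_EXT - 1] && desc->more) {
		desc = desc->more;
		count += PTE_LIST_EXT;
	}
	if (desc->sptes[PTE_LIST_EXT - 1]) {
		desc->more = alloc_desc();
		desc = desc->more;
	}
	for (i = 0; desc->sptes[i]; ++i)
		count++;
	return count;
}

/* New logic: every full desc is counted before moving on (or allocating). */
static int count_new(struct pte_list_desc *desc)
{
	int count = 0, i;

	while (desc->sptes[PTE_LIST_EXT - 1]) {
		count += PTE_LIST_EXT;

		if (!desc->more) {
			desc->more = alloc_desc();
			desc = desc->more;
			break;
		}
		desc = desc->more;
	}
	for (i = 0; desc->sptes[i]; ++i)
		count++;
	return count;
}

int main(void)
{
	/* A single, completely full desc with no ->more chained on. */
	struct pte_list_desc a = { .sptes = { 1, 2, 3 }, .more = NULL };
	struct pte_list_desc b = { .sptes = { 1, 2, 3 }, .more = NULL };

	printf("old loop counts %d entries\n", count_old(&a)); /* prints 0 */
	printf("new loop counts %d entries\n", count_new(&b)); /* prints 3 */

	free(a.more);
	free(b.more);
	return 0;
}

With a single full desc the old loop reports 0 while the new loop reports PTE_LIST_EXT, which is the missed accounting (and the delayed rmap recycling) that the patch fixes.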