Message ID | 20210616155032.1117176-1-mlevitsk@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: x86: fix 32 bit build
On Wed, Jun 16, 2021, Maxim Levitsky wrote:
> Now that kvm->stat.nx_lpage_splits is 64 bit, use DIV_ROUND_UP_ULL
> when doing division.

I went the "cast to an unsigned long" route.  I prefer the cast approach because
to_zap is also an unsigned long, i.e. using DIV_ROUND_UP_ULL() could look like a
truncation bug.  In practice, nx_lpage_splits can't be more than an unsigned long
so it's largely a moot point, I just like the more explicit "this is doing
something odd".

https://lkml.kernel.org/r/20210615162905.2132937-1-seanjc@google.com

> Fixes: 7ee093d4f3f5 ("KVM: switch per-VM stats to u64")
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 720ceb0a1f5c..97372225f183 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -6054,7 +6054,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
>  	write_lock(&kvm->mmu_lock);
>
>  	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
> -	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
> +	to_zap = ratio ? DIV_ROUND_UP_ULL(kvm->stat.nx_lpage_splits, ratio) : 0;
>  	for ( ; to_zap; --to_zap) {
>  		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
>  			break;
> --
> 2.26.3
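Concretely, the "cast to an unsigned long" approach Sean describes would look
something like the following. This is only a sketch for illustration; his
actual patch is the one at the lkml link above:

	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
	/*
	 * Casting to unsigned long keeps the division in the native
	 * word size, so 32-bit builds don't need the 64-bit libgcc
	 * division helper (__udivdi3) that a plain u64 '/' would pull
	 * in.  The truncation is harmless in practice: as noted above,
	 * nx_lpage_splits can't realistically exceed an unsigned long,
	 * and the explicit cast flags the oddity for the reader.
	 */
	to_zap = ratio ? DIV_ROUND_UP((unsigned long)kvm->stat.nx_lpage_splits,
				      ratio) : 0;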
On Wed, 2021-06-16 at 15:59 +0000, Sean Christopherson wrote:
> On Wed, Jun 16, 2021, Maxim Levitsky wrote:
> > Now that kvm->stat.nx_lpage_splits is 64 bit, use DIV_ROUND_UP_ULL
> > when doing division.
>
> I went the "cast to an unsigned long" route.  I prefer the cast approach because
> to_zap is also an unsigned long, i.e. using DIV_ROUND_UP_ULL() could look like a
> truncation bug.  In practice, nx_lpage_splits can't be more than an unsigned long
> so it's largely a moot point, I just like the more explicit "this is doing
> something odd".
>
> https://lkml.kernel.org/r/20210615162905.2132937-1-seanjc@google.com
>
> > Fixes: 7ee093d4f3f5 ("KVM: switch per-VM stats to u64")
> > Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 720ceb0a1f5c..97372225f183 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6054,7 +6054,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
> >  	write_lock(&kvm->mmu_lock);
> >
> >  	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
> > -	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
> > +	to_zap = ratio ? DIV_ROUND_UP_ULL(kvm->stat.nx_lpage_splits, ratio) : 0;
> >  	for ( ; to_zap; --to_zap) {
> >  		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
> >  			break;
> > --
> > 2.26.3

Cool, makes sense.  I didn't notice your patch (I did look at the list, but
since the subject didn't mention the build breakage I missed it).  I just
wanted to send this patch to save someone else the time of figuring it out.

Thanks,
Best regards,
	Maxim Levitsky
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 720ceb0a1f5c..97372225f183 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6054,7 +6054,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 	write_lock(&kvm->mmu_lock);
 
 	ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
-	to_zap = ratio ? DIV_ROUND_UP(kvm->stat.nx_lpage_splits, ratio) : 0;
+	to_zap = ratio ? DIV_ROUND_UP_ULL(kvm->stat.nx_lpage_splits, ratio) : 0;
 	for ( ; to_zap; --to_zap) {
 		if (list_empty(&kvm->arch.lpage_disallowed_mmu_pages))
 			break;
Now that kvm->stat.nx_lpage_splits is 64 bit, use DIV_ROUND_UP_ULL
when doing division.

Fixes: 7ee093d4f3f5 ("KVM: switch per-VM stats to u64")
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
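For background on why the original DIV_ROUND_UP() breaks the 32-bit build:
DIV_ROUND_UP() open-codes a plain C division, and dividing a u64 on 32-bit
x86 makes gcc emit a call to the libgcc helper __udivdi3(), which the kernel
deliberately does not link against (precisely to force 64-bit divisions
through do_div() and friends), so the build fails at link time.
DIV_ROUND_UP_ULL() routes the division through do_div() instead.  Roughly,
per include/linux/math.h (paraphrased from memory, so check the tree):

/* Plain DIV_ROUND_UP() expands to an open-coded C division: */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
/*
 * With a 64-bit "n", the '/' above becomes a __udivdi3() call on
 * 32-bit x86, which the kernel doesn't provide => link error.
 */

/* The _ULL variants divide via do_div(), which 32-bit arches implement: */
#define DIV_ROUND_DOWN_ULL(ll, d) \
	({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })

#define DIV_ROUND_UP_ULL(ll, d) \
	DIV_ROUND_DOWN_ULL((unsigned long long)(ll) + (d) - 1, (d))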