| Message ID | 20201027214300.1342-2-sean.j.christopherson@intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: x86/mmu: Add macro for hugepage GFN mask |
On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Add a helper to compute the GFN mask given a hugepage level; KVM is
> accumulating quite a few users with the addition of the TDP MMU.
>
> Note, gcc is clever enough to use a single NEG instruction instead of
> SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
> round_gfn_for_level()'s direct two's complement trickery.

As far as I can tell this patch has no functional changes intended.
Please correct me if that's not correct.

>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu/mmu.c          | 2 +-
>  arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
>  arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
>  4 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d44858b69353..6ea046415f29 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -119,6 +119,7 @@
>  #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
>  #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
>  #define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)
> +#define KVM_HPAGE_GFN_MASK(x)	(~(KVM_PAGES_PER_HPAGE(x) - 1))

NIT: I know x follows the convention on adjacent macros, but this
would be clearer to me if x was changed to level. (Probably for all
the macros in this block)

>
>  static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 17587f496ec7..3bfc7ee44e51 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2886,7 +2886,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  			disallowed_hugepage_adjust(*it.sptep, gfn, it.level,
>  						   &pfn, &level);
>
> -		base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
> +		base_gfn = gfn & KVM_HPAGE_GFN_MASK(it.level);
>  		if (it.level == level)
>  			break;
>
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 50e268eb8e1a..76ee36f2afd2 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
>  			disallowed_hugepage_adjust(*it.sptep, gw->gfn, it.level,
>  						   &pfn, &level);
>
> -		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
> +		base_gfn = gw->gfn & KVM_HPAGE_GFN_MASK(it.level);
>  		if (it.level == level)
>  			break;
>
> @@ -751,7 +751,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
>  			      bool *write_fault_to_shadow_pgtable)
>  {
>  	int level;
> -	gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
> +	gfn_t mask = KVM_HPAGE_GFN_MASK(walker->level);
>  	bool self_changed = false;
>
>  	if (!(walker->pte_access & ACC_WRITE_MASK ||
> diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
> index 87b7e16911db..c6e914c96641 100644
> --- a/arch/x86/kvm/mmu/tdp_iter.c
> +++ b/arch/x86/kvm/mmu/tdp_iter.c
> @@ -17,7 +17,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
>
>  static gfn_t round_gfn_for_level(gfn_t gfn, int level)
>  {
> -	return gfn & -KVM_PAGES_PER_HPAGE(level);
> +	return gfn & KVM_HPAGE_GFN_MASK(level);
>  }
>
>  /*
> --
> 2.28.0
>
On Tue, Oct 27, 2020 at 03:17:40PM -0700, Ben Gardon wrote:
> On Tue, Oct 27, 2020 at 2:43 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > Add a helper to compute the GFN mask given a hugepage level; KVM is
> > accumulating quite a few users with the addition of the TDP MMU.
> >
> > Note, gcc is clever enough to use a single NEG instruction instead of
> > SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
> > round_gfn_for_level()'s direct two's complement trickery.
>
> As far as I can tell this patch has no functional changes intended.
> Please correct me if that's not correct.

Correct. :-)

> >
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
>
> Reviewed-by: Ben Gardon <bgardon@google.com>
>
> > ---
> >  arch/x86/include/asm/kvm_host.h | 1 +
> >  arch/x86/kvm/mmu/mmu.c          | 2 +-
> >  arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
> >  arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
> >  4 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index d44858b69353..6ea046415f29 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -119,6 +119,7 @@
> >  #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
> >  #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
> >  #define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)
> > +#define KVM_HPAGE_GFN_MASK(x)	(~(KVM_PAGES_PER_HPAGE(x) - 1))
>
> NIT: I know x follows the convention on adjacent macros, but this
> would be clearer to me if x was changed to level. (Probably for all
> the macros in this block)

Agreed.  I'll spin a v2 and opportunistically change them all to "level"
in this patch.  I'll also add "No functional change intended™." to
patches 1 and 3.

> >  static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
> >  {
Add a helper to compute the GFN mask given a hugepage level; KVM is
accumulating quite a few users with the addition of the TDP MMU.

Note, gcc is clever enough to use a single NEG instruction instead of
SUB+NOT, i.e. use the more common "~(level - 1)" pattern instead of
round_gfn_for_level()'s direct two's complement trickery.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  | 4 ++--
 arch/x86/kvm/mmu/tdp_iter.c     | 2 +-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d44858b69353..6ea046415f29 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -119,6 +119,7 @@
 #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
 #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
 #define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)
+#define KVM_HPAGE_GFN_MASK(x)	(~(KVM_PAGES_PER_HPAGE(x) - 1))
 
 static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 17587f496ec7..3bfc7ee44e51 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2886,7 +2886,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			disallowed_hugepage_adjust(*it.sptep, gfn, it.level,
 						   &pfn, &level);
 
-		base_gfn = gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+		base_gfn = gfn & KVM_HPAGE_GFN_MASK(it.level);
 		if (it.level == level)
 			break;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 50e268eb8e1a..76ee36f2afd2 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -698,7 +698,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 			disallowed_hugepage_adjust(*it.sptep, gw->gfn, it.level,
 						   &pfn, &level);
 
-		base_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+		base_gfn = gw->gfn & KVM_HPAGE_GFN_MASK(it.level);
 		if (it.level == level)
 			break;
 
@@ -751,7 +751,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 			      bool *write_fault_to_shadow_pgtable)
 {
 	int level;
-	gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
+	gfn_t mask = KVM_HPAGE_GFN_MASK(walker->level);
 	bool self_changed = false;
 
 	if (!(walker->pte_access & ACC_WRITE_MASK ||
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index 87b7e16911db..c6e914c96641 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -17,7 +17,7 @@ static void tdp_iter_refresh_sptep(struct tdp_iter *iter)
 
 static gfn_t round_gfn_for_level(gfn_t gfn, int level)
 {
-	return gfn & -KVM_PAGES_PER_HPAGE(level);
+	return gfn & KVM_HPAGE_GFN_MASK(level);
 }
 
 /*