| Message ID | 20090609213312.750051328@amt.cnet (mailing list archive) |
|---|---|
| State | New, archived |
Marcelo Tosatti wrote:
> This way there is no need to add explicit checks in every
> for_each_shadow_entry user.
>
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>
> Index: kvm/arch/x86/kvm/mmu.c
> ===================================================================
> --- kvm.orig/arch/x86/kvm/mmu.c
> +++ kvm/arch/x86/kvm/mmu.c
> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>  {
>  	if (iterator->level < PT_PAGE_TABLE_LEVEL)
>  		return false;
> +
> +	if (iterator->level == PT_PAGE_TABLE_LEVEL)
> +		if (is_large_pte(*iterator->sptep))
> +			return false;

s/==/>/?
Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> This way there is no need to add explicit checks in every
>> for_each_shadow_entry user.
>>
>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>
>> Index: kvm/arch/x86/kvm/mmu.c
>> ===================================================================
>> --- kvm.orig/arch/x86/kvm/mmu.c
>> +++ kvm/arch/x86/kvm/mmu.c
>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>  {
>>  	if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>  		return false;
>> +
>> +	if (iterator->level == PT_PAGE_TABLE_LEVEL)
>> +		if (is_large_pte(*iterator->sptep))
>> +			return false;
>>
> s/==/>/?

Ah, it's actually fine. But changing == to >= will make it 1GBpage-ready.
On Wed, Jun 10, 2009 at 12:21:05PM +0300, Avi Kivity wrote:
> Avi Kivity wrote:
>> Marcelo Tosatti wrote:
>>> This way there is no need to add explicit checks in every
>>> for_each_shadow_entry user.
>>>
>>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
>>>
>>> Index: kvm/arch/x86/kvm/mmu.c
>>> ===================================================================
>>> --- kvm.orig/arch/x86/kvm/mmu.c
>>> +++ kvm/arch/x86/kvm/mmu.c
>>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>>  {
>>>  	if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>>  		return false;
>>> +
>>> +	if (iterator->level == PT_PAGE_TABLE_LEVEL)
>>> +		if (is_large_pte(*iterator->sptep))
>>> +			return false;
>>>
>> s/==/>/?
>
> Ah, it's actually fine. But changing == to >= will make it 1GBpage-ready.

Humpf, better check level explicitly before interpreting bit 7, so lets
skip this for 1GB pages.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Marcelo Tosatti wrote:
>>>> @@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
>>>>  {
>>>>  	if (iterator->level < PT_PAGE_TABLE_LEVEL)
>>>>  		return false;
>>>> +
>>>> +	if (iterator->level == PT_PAGE_TABLE_LEVEL)
>>>> +		if (is_large_pte(*iterator->sptep))
>>>> +			return false;
>>>>
>>> s/==/>/?
>>>
>> Ah, it's actually fine. But changing == to >= will make it 1GBpage-ready.
>>
> Humpf, better check level explicitly before interpreting bit 7, so lets
> skip this for 1GB pages.
>

Okay. But I'm rewriting shadow_walk_* afterwards.
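For context on the "check level explicitly before interpreting bit 7" point: on x86, bit 7 of a paging-structure entry is the page-size (PS) flag only at the PDE and PDPTE levels; at the 4K PTE level the same bit is the PAT bit. Below is a minimal standalone sketch of a level-aware large-page test. The kernel's actual `is_large_pte()` takes only the PTE value; the extra `level` parameter, the constants, and their values here are hypothetical, chosen to match the level numbering used in the thread (1 = PTE, 2 = PDE, 3 = PDPTE).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical level constants mirroring the thread's numbering:
 * level 1 = 4K PTE, level 2 = 2M PDE, level 3 = 1G PDPTE. */
#define PT_PAGE_TABLE_LEVEL 1
#define PT_DIRECTORY_LEVEL  2
#define PT_PDPE_LEVEL       3

/* Bit 7: PS flag at PDE/PDPTE levels, but the PAT bit at the PTE level. */
#define PT_PAGE_SIZE_MASK   (1ULL << 7)

/* Sketch of a level-aware check: only interpret bit 7 as "large page"
 * at levels where the hardware defines it as PS. */
static bool is_large_pte_at_level(uint64_t pte, int level)
{
    if (level == PT_DIRECTORY_LEVEL || level == PT_PDPE_LEVEL)
        return (pte & PT_PAGE_SIZE_MASK) != 0;
    /* At the PTE level, bit 7 is PAT, not a page-size indicator. */
    return false;
}
```

This is why an unconditional `is_large_pte()` on an arbitrary-level entry is unsafe, and why the patch tests `iterator->level` before looking at the bit.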
Index: kvm/arch/x86/kvm/mmu.c
===================================================================
--- kvm.orig/arch/x86/kvm/mmu.c
+++ kvm/arch/x86/kvm/mmu.c
@@ -1273,6 +1273,11 @@ static bool shadow_walk_okay(struct kvm_
 {
 	if (iterator->level < PT_PAGE_TABLE_LEVEL)
 		return false;
+
+	if (iterator->level == PT_PAGE_TABLE_LEVEL)
+		if (is_large_pte(*iterator->sptep))
+			return false;
+
 	iterator->index = SHADOW_PT_INDEX(iterator->addr, iterator->level);
 	iterator->sptep	= ((u64 *)__va(iterator->shadow_addr)) + iterator->index;
 	return true;
This way there is no need to add explicit checks in every
for_each_shadow_entry user.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>