Message ID | 5049908A.7070501@linux.vnet.ibm.com (mailing list archive) |
---|---|
State | New, archived |
On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
> We can not directly call kvm_release_pfn_clean to release the pfn
> since we can meet noslot pfn which is used to cache mmio info into
> spte
>
> Introduce mmu_release_pfn_clean to do this kind of thing
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> ---
>  arch/x86/kvm/mmu.c         | 19 ++++++++++++++-----
>  arch/x86/kvm/paging_tmpl.h |  4 ++--
>  2 files changed, 16 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 399c177..3c10bca 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2432,6 +2432,16 @@ done:
>  	return ret;
>  }
>
> +/*
> + * The primary user is page fault path which call it to properly
> + * release noslot_pfn.
> + */
> +static void mmu_release_pfn_clean(pfn_t pfn)
> +{
> +	if (!is_error_pfn(pfn))
> +		kvm_release_pfn_clean(pfn);
> +}
> +

Too many APIs, each slightly different. How do I know which one to call?

Please change kvm_release_pfn_*() instead, calling some arch hook (or
even #ifdef CONFIG_KVM_HAS_FAST_MMIO) to check for the special case.
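As a purely illustrative sketch of the direction Avi suggests here (fold the special case into the common helper rather than adding another MMU-local wrapper): CONFIG_KVM_HAS_FAST_MMIO is only a name he floats in passing, not an existing config symbol, and the body below is an approximation of the kvm_main.c helper of that era, not actual kernel code.

/*
 * Illustrative only: CONFIG_KVM_HAS_FAST_MMIO does not exist, it is the
 * hypothetical symbol mentioned above.  is_error_pfn(), kvm_is_mmio_pfn(),
 * pfn_to_page() and put_page() are the existing helpers the real
 * kvm_release_pfn_clean() already relies on.
 */
void kvm_release_pfn_clean(pfn_t pfn)
{
#ifdef CONFIG_KVM_HAS_FAST_MMIO
	/*
	 * A noslot/error pfn has no struct page behind it, so it must
	 * never reach put_page() below.
	 */
	if (is_error_pfn(pfn))
		return;
#endif
	if (!kvm_is_mmio_pfn(pfn))
		put_page(pfn_to_page(pfn));
}

With the check centralized like this, callers such as the x86 MMU fault paths could keep calling kvm_release_pfn_clean() unconditionally, which is the "one API" behaviour being asked for; the trade-off Xiao raises below is that every other caller also pays for the extra check.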
On 09/10/2012 04:22 PM, Avi Kivity wrote:
> On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
>> We can not directly call kvm_release_pfn_clean to release the pfn
>> since we can meet noslot pfn which is used to cache mmio info into
>> spte
>>
>> Introduce mmu_release_pfn_clean to do this kind of thing
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>> ---
>>  arch/x86/kvm/mmu.c         | 19 ++++++++++++++-----
>>  arch/x86/kvm/paging_tmpl.h |  4 ++--
>>  2 files changed, 16 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 399c177..3c10bca 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2432,6 +2432,16 @@ done:
>>  	return ret;
>>  }
>>
>> +/*
>> + * The primary user is page fault path which call it to properly
>> + * release noslot_pfn.
>> + */
>> +static void mmu_release_pfn_clean(pfn_t pfn)
>> +{
>> +	if (!is_error_pfn(pfn))
>> +		kvm_release_pfn_clean(pfn);
>> +}
>> +
>
> Too many APIs, each slightly different. How do I know which one to call?

It is only used in mmu and it is a static function.

>
> Please change kvm_release_pfn_*() instead, calling some arch hook (or
> even #ifdef CONFIG_KVM_HAS_FAST_MMIO) to check for the special case.

We only need to call it on page fault path. If we change the common API
other x86 components have to suffer from it.
On 09/10/2012 11:37 AM, Xiao Guangrong wrote:
> On 09/10/2012 04:22 PM, Avi Kivity wrote:
>> On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
>>> We can not directly call kvm_release_pfn_clean to release the pfn
>>> since we can meet noslot pfn which is used to cache mmio info into
>>> spte
>>>
>>> Introduce mmu_release_pfn_clean to do this kind of thing
>>>
>>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>>> ---
>>>  arch/x86/kvm/mmu.c         | 19 ++++++++++++++-----
>>>  arch/x86/kvm/paging_tmpl.h |  4 ++--
>>>  2 files changed, 16 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 399c177..3c10bca 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -2432,6 +2432,16 @@ done:
>>>  	return ret;
>>>  }
>>>
>>> +/*
>>> + * The primary user is page fault path which call it to properly
>>> + * release noslot_pfn.
>>> + */
>>> +static void mmu_release_pfn_clean(pfn_t pfn)
>>> +{
>>> +	if (!is_error_pfn(pfn))
>>> +		kvm_release_pfn_clean(pfn);
>>> +}
>>> +
>>
>> Too many APIs, each slightly different. How do I know which one to call?
>
> It is only used in mmu and it is a static function.

Still, how do I know which one to call? The name tells me nothing.
When I read the code, how do I know if a call is correct or not?

>
>>
>> Please change kvm_release_pfn_*() instead, calling some arch hook (or
>> even #ifdef CONFIG_KVM_HAS_FAST_MMIO) to check for the special case.
>
> We only need to call it on page fault path. If we change the common API
> other x86 components have to suffer from it.

This way, I have to suffer from it.

btw, what about another approach, to avoid those paths completely?
Avoid calling __direct_map() with error_pfn, and jump to a label after
kvm_release_pfn_clean() in page_fault(), etc?
On 09/10/2012 05:02 PM, Avi Kivity wrote:
> On 09/10/2012 11:37 AM, Xiao Guangrong wrote:
>> On 09/10/2012 04:22 PM, Avi Kivity wrote:
>>> On 09/07/2012 09:13 AM, Xiao Guangrong wrote:
>>>> We can not directly call kvm_release_pfn_clean to release the pfn
>>>> since we can meet noslot pfn which is used to cache mmio info into
>>>> spte
>>>>
>>>> Introduce mmu_release_pfn_clean to do this kind of thing
>>>>
>>>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>>>> ---
>>>>  arch/x86/kvm/mmu.c         | 19 ++++++++++++++-----
>>>>  arch/x86/kvm/paging_tmpl.h |  4 ++--
>>>>  2 files changed, 16 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>>> index 399c177..3c10bca 100644
>>>> --- a/arch/x86/kvm/mmu.c
>>>> +++ b/arch/x86/kvm/mmu.c
>>>> @@ -2432,6 +2432,16 @@ done:
>>>>  	return ret;
>>>>  }
>>>>
>>>> +/*
>>>> + * The primary user is page fault path which call it to properly
>>>> + * release noslot_pfn.
>>>> + */
>>>> +static void mmu_release_pfn_clean(pfn_t pfn)
>>>> +{
>>>> +	if (!is_error_pfn(pfn))
>>>> +		kvm_release_pfn_clean(pfn);
>>>> +}
>>>> +
>>>
>>> Too many APIs, each slightly different. How do I know which one to call?
>>
>> It is only used in mmu and it is a static function.
>
> Still, how do I know which one to call? The name tells me nothing.
> When I read the code, how do I know if a call is correct or not?
>
>>
>>>
>>> Please change kvm_release_pfn_*() instead, calling some arch hook (or
>>> even #ifdef CONFIG_KVM_HAS_FAST_MMIO) to check for the special case.
>>
>> We only need to call it on page fault path. If we change the common API
>> other x86 components have to suffer from it.
>
> This way, I have to suffer from it.

Sorry. :(

>
> btw, what about another approach, to avoid those paths completely?
> Avoid calling __direct_map() with error_pfn, and jump to a label after
> kvm_release_pfn_clean() in page_fault(), etc?

I will try it.
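For completeness, a rough sketch of the alternative Avi floats above: deal with the noslot/error pfn before __direct_map() is ever reached, so the tail of the fault path can keep using plain kvm_release_pfn_clean(). This is not the follow-up patch Xiao later posted; handle_error_pfn() is a hypothetical stand-in for whatever would cache the mmio info into the spte or report the failure, and details such as the large-page and mmu_notifier handling of the real nonpaging_map() are omitted.

/*
 * Sketch only, modelled loosely on nonpaging_map().  handle_error_pfn()
 * is hypothetical; try_async_pf(), __direct_map(), is_error_pfn() and
 * kvm_release_pfn_clean() are the existing helpers of that era.
 */
static int handle_error_pfn(struct kvm_vcpu *vcpu, gva_t gva, gfn_t gfn,
			    pfn_t pfn);

static int nonpaging_map_sketch(struct kvm_vcpu *vcpu, gva_t v,
				u32 error_code, gfn_t gfn, bool prefault)
{
	bool write = error_code & PFERR_WRITE_MASK;
	bool map_writable;
	int level = PT_PAGE_TABLE_LEVEL; /* via mapping_level() in the real code */
	pfn_t pfn;
	int r;

	if (try_async_pf(vcpu, prefault, gfn, v, &pfn, write, &map_writable))
		return 0;

	/*
	 * A noslot/error pfn carries no page reference, so there is nothing
	 * to release: handle it up front and never let it reach
	 * __direct_map() or the release path below.
	 */
	if (is_error_pfn(pfn))
		return handle_error_pfn(vcpu, v, gfn, pfn);

	spin_lock(&vcpu->kvm->mmu_lock);
	r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
			 prefault);
	spin_unlock(&vcpu->kvm->mmu_lock);

	/* pfn is guaranteed to be backed by a real page here. */
	kvm_release_pfn_clean(pfn);
	return r;
}

The payoff of this shape is that mmu_release_pfn_clean() becomes unnecessary: no path that can see an error pfn ever calls a release function, so the common API never needs to know about the special case.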
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 399c177..3c10bca 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2432,6 +2432,16 @@ done:
 	return ret;
 }
 
+/*
+ * The primary user is page fault path which call it to properly
+ * release noslot_pfn.
+ */
+static void mmu_release_pfn_clean(pfn_t pfn)
+{
+	if (!is_error_pfn(pfn))
+		kvm_release_pfn_clean(pfn);
+}
+
 static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			 unsigned pt_access, unsigned pte_access,
 			 int user_fault, int write_fault,
@@ -2497,8 +2507,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		}
 	}
 
-	if (!is_error_pfn(pfn))
-		kvm_release_pfn_clean(pfn);
+	mmu_release_pfn_clean(pfn);
 }
 
 static void nonpaging_new_cr3(struct kvm_vcpu *vcpu)
@@ -2618,7 +2627,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
 					      1, ACC_ALL, iterator.sptep);
 			if (!sp) {
 				pgprintk("nonpaging_map: ENOMEM\n");
-				kvm_release_pfn_clean(pfn);
+				mmu_release_pfn_clean(pfn);
 				return -ENOMEM;
 			}
 
@@ -2882,7 +2891,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 
 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	mmu_release_pfn_clean(pfn);
 	return 0;
 }
 
@@ -3350,7 +3359,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 
 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	mmu_release_pfn_clean(pfn);
 	return 0;
 }
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index bf8c42b..f075259 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -544,7 +544,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 out_gpte_changed:
 	if (sp)
 		kvm_mmu_put_page(sp, it.sptep);
-	kvm_release_pfn_clean(pfn);
+	mmu_release_pfn_clean(pfn);
 
 	return NULL;
 }
@@ -645,7 +645,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 
 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	mmu_release_pfn_clean(pfn);
 	return 0;
 }
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache mmio info into
spte

Introduce mmu_release_pfn_clean to do this kind of thing

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/kvm/mmu.c         | 19 ++++++++++++++-----
 arch/x86/kvm/paging_tmpl.h |  4 ++--
 2 files changed, 16 insertions(+), 7 deletions(-)