Message ID | 505C0FCF.2070308@linux.vnet.ibm.com (mailing list archive) |
---|---|
State | New, archived |
On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
> We can not directly call kvm_release_pfn_clean to release the pfn
> since we can meet noslot pfn which is used to cache mmio info into
> spte
>
Wouldn't it be better to move the check into kvm_release_pfn_clean()?

> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> ---
>  arch/x86/kvm/mmu.c         | 6 ++++--
>  arch/x86/kvm/paging_tmpl.h | 6 ++++--
>  2 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index aa0b469..0f56169 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2877,7 +2877,8 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
>
>  out_unlock:
>  	spin_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(pfn);
> +	if (likely(!is_noslot_pfn(pfn)))
> +		kvm_release_pfn_clean(pfn);
>  	return 0;
>  }
>
> @@ -3345,7 +3346,8 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
>
>  out_unlock:
>  	spin_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(pfn);
> +	if (likely(!is_noslot_pfn(pfn)))
> +		kvm_release_pfn_clean(pfn);
>  	return 0;
>  }
>
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index bf8c42b..9ce6bc0 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -544,7 +544,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
>  out_gpte_changed:
>  	if (sp)
>  		kvm_mmu_put_page(sp, it.sptep);
> -	kvm_release_pfn_clean(pfn);
> +	if (likely(!is_noslot_pfn(pfn)))
> +		kvm_release_pfn_clean(pfn);
>  	return NULL;
>  }
>
> @@ -645,7 +646,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>
>  out_unlock:
>  	spin_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(pfn);
> +	if (likely(!is_noslot_pfn(pfn)))
> +		kvm_release_pfn_clean(pfn);
>  	return 0;
>  }
>
> --
> 1.7.7.6

--
Gleb.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 09/23/2012 05:13 PM, Gleb Natapov wrote:
> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
>> We can not directly call kvm_release_pfn_clean to release the pfn
>> since we can meet noslot pfn which is used to cache mmio info into
>> spte
>>
> Wouldn't it be better to move the check into kvm_release_pfn_clean()?

I think there is no reason for us to prefer adding this branch in
the common code. :)
On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
> > On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
> >> We can not directly call kvm_release_pfn_clean to release the pfn
> >> since we can meet noslot pfn which is used to cache mmio info into
> >> spte
> >>
> > Wouldn't it be better to move the check into kvm_release_pfn_clean()?
>
> I think there is no reason for us to prefer adding this branch in
> the common code. :)

Is the function performance critical? Is the function called without the
check on a hot path? The function already contains the much heavier
kvm_is_mmio_pfn() check. If most/all invocations require the check before
the call, it's better to move it inside.

--
Gleb.
On 09/24/2012 07:24 PM, Gleb Natapov wrote:
> On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
>> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
>>> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
>>>> We can not directly call kvm_release_pfn_clean to release the pfn
>>>> since we can meet noslot pfn which is used to cache mmio info into
>>>> spte
>>>>
>>> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
>>
>> I think there is no reason for us to prefer adding this branch in
>> the common code. :)
>
> Is the function performance critical? Is the function called without the
> check on a hot path? The function already contains the much heavier
> kvm_is_mmio_pfn() check. If most/all invocations require the check before
> the call, it's better to move it inside.

It is not that most/all callers need this check - it is only needed on
the x86 mmu page-fault/prefetch paths.
On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
> On 09/24/2012 07:24 PM, Gleb Natapov wrote:
> > On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
> >> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
> >>> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
> >>>> We can not directly call kvm_release_pfn_clean to release the pfn
> >>>> since we can meet noslot pfn which is used to cache mmio info into
> >>>> spte
> >>>>
> >>> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
> >>
> >> I think there is no reason for us to prefer adding this branch in
> >> the common code. :)
> >
> > Is the function performance critical? Is the function called without the
> > check on a hot path? The function already contains the much heavier
> > kvm_is_mmio_pfn() check. If most/all invocations require the check before
> > the call, it's better to move it inside.
>
> It is not that most/all callers need this check - it is only needed on
> the x86 mmu page-fault/prefetch paths.

At least on x86 there are 7 calls to kvm_release_pfn_clean(), 5 of them
are guarded by is_noslot_pfn() (after this patch) and one by the even
stronger is_error_pfn(). I guess when/if other architectures add MMIO MMU
caching they will need to guard kvm_release_pfn_clean() with
is_noslot_pfn() too in most cases. I am not insisting, but as this patch
shows it is easy to miss the check before calling the function.

--
Gleb.
On 09/24/2012 08:04 PM, Gleb Natapov wrote:
> On Mon, Sep 24, 2012 at 07:49:37PM +0800, Xiao Guangrong wrote:
>> On 09/24/2012 07:24 PM, Gleb Natapov wrote:
>>> On Mon, Sep 24, 2012 at 12:59:32PM +0800, Xiao Guangrong wrote:
>>>> On 09/23/2012 05:13 PM, Gleb Natapov wrote:
>>>>> On Fri, Sep 21, 2012 at 02:57:19PM +0800, Xiao Guangrong wrote:
>>>>>> We can not directly call kvm_release_pfn_clean to release the pfn
>>>>>> since we can meet noslot pfn which is used to cache mmio info into
>>>>>> spte
>>>>>>
>>>>> Wouldn't it be better to move the check into kvm_release_pfn_clean()?
>>>>
>>>> I think there is no reason for us to prefer adding this branch in
>>>> the common code. :)
>>>
>>> Is the function performance critical? Is the function called without the
>>> check on a hot path? The function already contains the much heavier
>>> kvm_is_mmio_pfn() check. If most/all invocations require the check before
>>> the call, it's better to move it inside.
>>
>> It is not that most/all callers need this check - it is only needed on
>> the x86 mmu page-fault/prefetch paths.
> At least on x86 there are 7 calls to kvm_release_pfn_clean(), 5 of them
> are guarded by is_noslot_pfn() (after this patch)

3 places after the whole patchset (there are some cleanups after this
patch).

> and one by the even stronger is_error_pfn().

This one is:

|	if (!is_error_pfn(pfn)) {
|		kvm_release_pfn_clean(pfn);
|		return true;
|	}
|
|	return false;

We can change it to:

|	if (is_error_pfn(pfn))
|		return false;
|
|	kvm_release_pfn_clean(pfn);
|	return true;

> I guess when/if other architectures add MMIO MMU caching they will need
> to guard kvm_release_pfn_clean() with is_noslot_pfn() too in most cases.
> I am not insisting, but as this patch shows it is easy to miss the check
> before calling the function.

Sounds reasonable. I will consider it if Avi/Marcelo have no objection
to it.
On 09/24/2012 02:32 PM, Xiao Guangrong wrote:
> 3 places after the whole patchset (there are some cleanups after this
> patch).
>
>> and one by the even stronger is_error_pfn().
>
> This one is:
>
> |	if (!is_error_pfn(pfn)) {
> |		kvm_release_pfn_clean(pfn);
> |		return true;
> |	}
> |
> |	return false;
>
> We can change it to:
>
> |	if (is_error_pfn(pfn))
> |		return false;
> |
> |	kvm_release_pfn_clean(pfn);
> |	return true;
>
>> I guess when/if other architectures add MMIO MMU caching they will need
>> to guard kvm_release_pfn_clean() with is_noslot_pfn() too in most cases.
>> I am not insisting, but as this patch shows it is easy to miss the check
>> before calling the function.
>
> Sounds reasonable. I will consider it if Avi/Marcelo have no objection
> to it.

I think it's a good idea. Looks like we traded the unscalable error
pages for these branches; I think it's a reasonable tradeoff.
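The direction agreed on above — letting kvm_release_pfn_clean() itself skip no-slot pfns so callers need no guard — can be sketched as a small userspace model. This is illustrative only: the pfn encoding (KVM_PFN_ERR_MASK, KVM_PFN_NOSLOT), the pages_released counter, and fault_path_cleanup() are stand-ins invented for the sketch, not the real kernel definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the kernel's pfn encoding; the real definitions differ. */
typedef uint64_t kvm_pfn_t;

#define KVM_PFN_ERR_MASK (1ULL << 62)              /* assumed error bit */
#define KVM_PFN_NOSLOT   (KVM_PFN_ERR_MASK | 1ULL) /* assumed no-slot value */

static bool is_error_pfn(kvm_pfn_t pfn)
{
    return pfn & KVM_PFN_ERR_MASK;
}

static bool is_noslot_pfn(kvm_pfn_t pfn)
{
    return pfn == KVM_PFN_NOSLOT;
}

/* Stand-in for the real page refcount drop. */
static int pages_released;

/* Gleb's proposal: the release function itself ignores no-slot pfns,
 * so page-fault paths can call it unconditionally. */
static void kvm_release_pfn_clean(kvm_pfn_t pfn)
{
    if (is_noslot_pfn(pfn))
        return;
    pages_released++;
}

/* A hypothetical caller then needs no guard at its out_unlock label. */
static int fault_path_cleanup(kvm_pfn_t pfn)
{
    kvm_release_pfn_clean(pfn);
    return 0;
}
```

The alternative the patch actually takes is the mirror image: each of the five page-fault/prefetch call sites carries `if (likely(!is_noslot_pfn(pfn)))` and the common function stays branch-free.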
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index aa0b469..0f56169 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2877,7 +2877,8 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,

 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	if (likely(!is_noslot_pfn(pfn)))
+		kvm_release_pfn_clean(pfn);
 	return 0;
 }

@@ -3345,7 +3346,8 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,

 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	if (likely(!is_noslot_pfn(pfn)))
+		kvm_release_pfn_clean(pfn);
 	return 0;
 }

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index bf8c42b..9ce6bc0 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -544,7 +544,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 out_gpte_changed:
 	if (sp)
 		kvm_mmu_put_page(sp, it.sptep);
-	kvm_release_pfn_clean(pfn);
+	if (likely(!is_noslot_pfn(pfn)))
+		kvm_release_pfn_clean(pfn);
 	return NULL;
 }

@@ -645,7 +646,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,

 out_unlock:
 	spin_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(pfn);
+	if (likely(!is_noslot_pfn(pfn)))
+		kvm_release_pfn_clean(pfn);
 	return 0;
 }
We can not directly call kvm_release_pfn_clean to release the pfn
since we can meet noslot pfn which is used to cache mmio info into
spte

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/kvm/mmu.c         | 6 ++++--
 arch/x86/kvm/paging_tmpl.h | 6 ++++--
 2 files changed, 8 insertions(+), 4 deletions(-)