Message ID | 50530337.2080208@linux.vnet.ibm.com (mailing list archive)
---|---
State | New, archived
On Fri, Sep 14, 2012 at 06:13:11PM +0800, Xiao Guangrong wrote:
> On 09/14/2012 05:59 PM, Xiao Guangrong wrote:
>
> > +		return FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true);
>
> Sorry, this was wrong. Update this patch.
>
> [PATCH v2 5/5] KVM: MMU: introduce FNAME(prefetch_gpte)
>
> The only difference between FNAME(update_pte) and FNAME(pte_prefetch)
> is that the former is allowed to prefetch a gfn from a dirty logged
> slot, so introduce a common function to prefetch the spte.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>

IMHO, for the human reader, the meaning of the two functions is
different and therefore the separation is justified. Moreover, they
already share common code via FNAME(prefetch_invalid_gpte) (which, BTW,
is a confusing name, because the function does not prefetch the gpte,
it validates it).
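For context on the naming complaint, a simplified sketch of what
FNAME(prefetch_invalid_gpte) did at the time follows (paraphrased from
the paging_tmpl.h of that era, not verbatim source; exact checks may
differ slightly). It returns true when the gpte is unusable, and as a
side effect drops the existing spte:

    /*
     * Sketch, not verbatim kernel source: validate a gpte before
     * prefetching; on failure, drop the corresponding spte.
     */
    static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
                                             struct kvm_mmu_page *sp,
                                             u64 *spte, u64 gpte)
    {
            if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
                    goto no_present;        /* reserved bits set */

            if (!is_present_gpte(gpte))
                    goto no_present;        /* gpte not present */

            if (!(gpte & PT_ACCESSED_MASK))
                    goto no_present;        /* never accessed by the guest */

            return false;

    no_present:
            drop_spte(vcpu->kvm, spte);     /* invalid gpte: drop the spte */
            return true;
    }

So the function both validates the gpte and modifies the spte, which is
what the naming argument below turns on.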
On 09/15/2012 11:31 PM, Marcelo Tosatti wrote:
> On Fri, Sep 14, 2012 at 06:13:11PM +0800, Xiao Guangrong wrote:
>> On 09/14/2012 05:59 PM, Xiao Guangrong wrote:
>>
>>> +		return FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true);
>>
>> Sorry, this was wrong. Update this patch.
>>
>> [PATCH v2 5/5] KVM: MMU: introduce FNAME(prefetch_gpte)
>>
>> The only difference between FNAME(update_pte) and FNAME(pte_prefetch)
>> is that the former is allowed to prefetch a gfn from a dirty logged
>> slot, so introduce a common function to prefetch the spte.
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
>
> IMHO, for the human reader, the meaning of the two functions is
> different and therefore the separation is justified. Moreover, they
> already

Actually, these two functions do the same thing: both prefetch a spte
based on a gpte. The only difference is that, in the case of
update_pte, there is a high chance that the spte will be accessed soon
(which is why it is allowed to prefetch a dirty logged gfn).

> share common code via FNAME(prefetch_invalid_gpte) (which, BTW,
> is a confusing name, because the function does not prefetch the gpte,
> it validates it).

I think the name is not too bad: the function does not just validate
the gpte, it also drops the spte if the gpte is invalid, which is
likewise behaviour that modifies the spte based on the gpte. ;)
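The dirty-log gating Xiao refers to is enforced when the prefetch path
resolves a gfn to a pfn. A simplified sketch of the memslot check
behind pte_prefetch_gfn_to_pfn() in mmu.c of that era follows (the
shape is paraphrased, not verbatim source): when the caller passes
no_dirty_log = true, a slot with dirty logging enabled is refused, so
the prefetch never installs a writable spte that would escape dirty
tracking.

    /*
     * Sketch, not verbatim kernel source: refuse a memslot that has
     * dirty logging enabled when the caller asks for no_dirty_log.
     */
    static struct kvm_memory_slot *
    gfn_to_memslot_dirty_bitmap(struct kvm_vcpu *vcpu, gfn_t gfn,
                                bool no_dirty_log)
    {
            struct kvm_memory_slot *slot = gfn_to_memslot(vcpu->kvm, gfn);

            if (!slot || slot->flags & KVM_MEMSLOT_INVALID ||
                (no_dirty_log && slot->dirty_bitmap))   /* dirty logging on */
                    slot = NULL;                        /* refuse the slot */

            return slot;
    }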
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 1a738c5..32facf7 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -356,35 +356,45 @@ no_present:
 	return true;
 }
 
-static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			      u64 *spte, const void *pte)
+static void
+FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
 {
-	pt_element_t gpte;
 	unsigned pte_access;
+	gfn_t gfn;
 	pfn_t pfn;
 
-	gpte = *(const pt_element_t *)pte;
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return;
 
 	pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
+
+	gfn = gpte_to_gfn(gpte);
 	pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte, true);
-	pfn = gfn_to_pfn_atomic(vcpu->kvm, gpte_to_gfn(gpte));
+	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
+			no_dirty_log && (pte_access & ACC_WRITE_MASK));
 	if (mmu_invalid_pfn(pfn))
 		return;
 
 	/*
 	 * we call mmu_set_spte() with host_writable = true because that
-	 * vcpu->arch.update_pte.pfn was fetched from get_user_pages(write = 1).
+	 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 	 */
 	mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
-		     NULL, PT_PAGE_TABLE_LEVEL,
-		     gpte_to_gfn(gpte), pfn, true, true);
+		     NULL, PT_PAGE_TABLE_LEVEL, gfn, pfn, true, true);
 
 	if (!is_error_pfn(pfn))
 		kvm_release_pfn_clean(pfn);
 }
 
+static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+			      u64 *spte, const void *pte)
+{
+	pt_element_t gpte = *(const pt_element_t *)pte;
+
+	return FNAME(prefetch_gpte)(vcpu, sp, spte, gpte, false);
+}
+
 static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 				int level)
 {
@@ -428,35 +438,13 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 	spte = sp->spt + i;
 
 	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
-		pt_element_t gpte;
-		unsigned pte_access;
-		gfn_t gfn;
-		pfn_t pfn;
-
 		if (spte == sptep)
 			continue;
 
 		if (is_shadow_present_pte(*spte))
 			continue;
 
-		gpte = gptep[i];
-
-		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
-			continue;
-
-		pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte,
-								  true);
-		gfn = gpte_to_gfn(gpte);
-		pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
-				      pte_access & ACC_WRITE_MASK);
-		if (mmu_invalid_pfn(pfn))
-			break;
-
-		mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
-			     NULL, PT_PAGE_TABLE_LEVEL, gfn,
-			     pfn, true, true);
-		if (!is_error_pfn(pfn))
-			kvm_release_pfn_clean(pfn);
+		FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true);
 	}
 }
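Two details of the refactoring, both visible in the diff above, are
worth spelling out. First, no_dirty_log is ANDed with the access bits
(no_dirty_log && (pte_access & ACC_WRITE_MASK)), so a dirty-logged gfn
is only refused when the prefetch would install a writable spte;
read-only prefetches still go through. Second, the flag differs only at
the two call sites, condensed here from the diff:

    /* Condensed from the diff above: only the flag differs per caller. */
    FNAME(prefetch_gpte)(vcpu, sp, spte, gpte, false);     /* update_pte: dirty-logged gfns allowed    */
    FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true);  /* pte_prefetch: writable dirty-logged skip */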