From patchwork Fri Sep 14 10:13:11 2012
From: Xiao Guangrong
X-Patchwork-Id: 1456721
Message-ID: <50530337.2080208@linux.vnet.ibm.com>
Date: Fri, 14 Sep 2012 18:13:11 +0800
To: Xiao Guangrong
CC: Avi Kivity, Marcelo Tosatti, LKML, KVM
Subject: Re: [PATCH v2 5/5] KVM: MMU: introduce FNAME(prefetch_gpte)
References: <5052FF61.3070600@linux.vnet.ibm.com> <5053000E.3050303@linux.vnet.ibm.com>
In-Reply-To: <5053000E.3050303@linux.vnet.ibm.com>
List-ID: kvm@vger.kernel.org

On 09/14/2012 05:59 PM, Xiao Guangrong wrote:
> +		return FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true);

Sorry, this was wrong: returning here exits FNAME(pte_prefetch) after the
first gpte instead of letting the loop prefetch all PTE_PREFETCH_NUM
entries. Updated patch below.
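For reference, a minimal user-space sketch of the control-flow bug; the
names here (prefetch_one, broken_prefetch, ...) are illustrative
stand-ins, not the real KVM symbols:

#include <stdio.h>

#define PTE_PREFETCH_NUM 8

static void prefetch_one(int i)
{
	printf("prefetch entry %d\n", i);
}

/*
 * Mirrors the buggy "return FNAME(prefetch_gpte)(...)": the function
 * returns after the first iteration, so entries 1..7 are never visited.
 */
static void broken_prefetch(void)
{
	int i;

	for (i = 0; i < PTE_PREFETCH_NUM; i++) {
		prefetch_one(i);
		return;
	}
}

/* The fixed form: a plain call, so the loop visits every entry. */
static void fixed_prefetch(void)
{
	int i;

	for (i = 0; i < PTE_PREFETCH_NUM; i++)
		prefetch_one(i);
}

int main(void)
{
	puts("broken:");
	broken_prefetch();
	puts("fixed:");
	fixed_prefetch();
	return 0;
}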
[PATCH v2 5/5] KVM: MMU: introduce FNAME(prefetch_gpte)

The only difference between FNAME(update_pte) and FNAME(pte_prefetch)
is that the former is allowed to prefetch a gfn from a dirty-logged
slot, so introduce a common function to prefetch the spte

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/paging_tmpl.h |   50 ++++++++++++++++---------------------------
 1 files changed, 19 insertions(+), 31 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 1a738c5..32facf7 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -356,35 +356,45 @@ no_present:
 	return true;
 }

-static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			      u64 *spte, const void *pte)
+static void
+FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+		     u64 *spte, pt_element_t gpte, bool no_dirty_log)
 {
-	pt_element_t gpte;
 	unsigned pte_access;
+	gfn_t gfn;
 	pfn_t pfn;

-	gpte = *(const pt_element_t *)pte;
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return;

 	pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
+
+	gfn = gpte_to_gfn(gpte);
 	pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte, true);
-	pfn = gfn_to_pfn_atomic(vcpu->kvm, gpte_to_gfn(gpte));
+	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
+			no_dirty_log && (pte_access & ACC_WRITE_MASK));
 	if (mmu_invalid_pfn(pfn))
 		return;

 	/*
 	 * we call mmu_set_spte() with host_writable = true because that
-	 * vcpu->arch.update_pte.pfn was fetched from get_user_pages(write = 1).
+	 * pte_prefetch_gfn_to_pfn always gets a writable pfn.
 	 */
 	mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
-		     NULL, PT_PAGE_TABLE_LEVEL,
-		     gpte_to_gfn(gpte), pfn, true, true);
+		     NULL, PT_PAGE_TABLE_LEVEL, gfn, pfn, true, true);

 	if (!is_error_pfn(pfn))
 		kvm_release_pfn_clean(pfn);
 }

+static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+			      u64 *spte, const void *pte)
+{
+	pt_element_t gpte = *(const pt_element_t *)pte;
+
+	return FNAME(prefetch_gpte)(vcpu, sp, spte, gpte, false);
+}
+
 static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
 				struct guest_walker *gw, int level)
 {
@@ -428,35 +438,13 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 	spte = sp->spt + i;

 	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
-		pt_element_t gpte;
-		unsigned pte_access;
-		gfn_t gfn;
-		pfn_t pfn;
-
 		if (spte == sptep)
 			continue;

 		if (is_shadow_present_pte(*spte))
 			continue;

-		gpte = gptep[i];
-
-		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
-			continue;
-
-		pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte,
-								  true);
-		gfn = gpte_to_gfn(gpte);
-		pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
-				      pte_access & ACC_WRITE_MASK);
-		if (mmu_invalid_pfn(pfn))
-			break;
-
-		mmu_set_spte(vcpu, spte, sp->role.access, pte_access, 0, 0,
-			     NULL, PT_PAGE_TABLE_LEVEL, gfn,
-			     pfn, true, true);
-		if (!is_error_pfn(pfn))
-			kvm_release_pfn_clean(pfn);
+		FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true);
 	}
 }
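The shape of the refactoring, reduced to a standalone user-space sketch:
one common helper takes a no_dirty_log flag, the update path passes
false, and the prefetch path passes true so that writable gfns subject
to dirty logging are skipped rather than prefetched. The names and the
flag semantics below are hypothetical simplifications, not the real KVM
API:

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for FNAME(prefetch_gpte): the only behavioral
 * difference between the two callers is the no_dirty_log flag.
 */
static void prefetch_gpte(unsigned long gpte, bool writable,
			  bool no_dirty_log)
{
	if (no_dirty_log && writable) {
		printf("skip writable gpte %#lx (dirty-log check)\n", gpte);
		return;
	}
	printf("map gpte %#lx\n", gpte);
}

/* Update path: may touch dirty-logged slots, so no_dirty_log = false. */
static void update_pte(unsigned long gpte, bool writable)
{
	prefetch_gpte(gpte, writable, false);
}

/*
 * Prefetch path: must not prefetch writable gfns from dirty-logged
 * slots, so no_dirty_log = true.
 */
static void pte_prefetch(const unsigned long *gptes, const bool *writable,
			 int n)
{
	int i;

	for (i = 0; i < n; i++)
		prefetch_gpte(gptes[i], writable[i], true);
}

int main(void)
{
	unsigned long gptes[] = { 0x1000, 0x2000, 0x3000 };
	bool writable[] = { true, false, true };

	update_pte(0x1000, true);
	pte_prefetch(gptes, writable, 3);
	return 0;
}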