From patchwork Wed Jan 23 10:06:01 2013
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 2023381
Message-ID: <50FFB609.9000205@linux.vnet.ibm.com>
Date: Wed, 23 Jan 2013 18:06:01 +0800
From: Xiao Guangrong
To: Xiao Guangrong
Cc: Marcelo Tosatti, Avi Kivity, Gleb Natapov, LKML, KVM
Subject: [PATCH v2 04/12] KVM: MMU: simplify set_spte
References: <50FFB5A1.5090708@linux.vnet.ibm.com>
In-Reply-To: <50FFB5A1.5090708@linux.vnet.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

Logically, set_spte() can be divided into two parts: the first adjusts
pte_access, and the second sets the spte according to the adjusted
pte_access. Restructuring the function along these lines makes the code
more readable.
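To spell out the pattern: finish every adjustment of the access mask first,
then derive all pte bits from the final mask in one place. A minimal
stand-alone C sketch of that pattern follows; the names and bit values in it
are invented for the illustration and are not KVM's definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy access bits and pte bits, made up for this illustration only. */
#define ACC_WRITE      (1u << 0)
#define ACC_USER       (1u << 1)
#define ACC_EXEC       (1u << 2)

#define PTE_PRESENT    (1ull << 0)
#define PTE_WRITABLE   (1ull << 1)
#define PTE_USER       (1ull << 2)
#define PTE_NX         (1ull << 63)

static uint64_t build_pte(unsigned int acc, uint64_t pfn,
                          bool host_writable, bool need_write_protect)
{
        uint64_t pte = PTE_PRESENT;

        /* Part 1: adjust the access bits; no pte bit is touched yet. */
        if (!host_writable || need_write_protect)
                acc &= ~ACC_WRITE;

        /* Part 2: derive every pte bit from the final access bits. */
        if (acc & ACC_WRITE)
                pte |= PTE_WRITABLE;
        if (acc & ACC_USER)
                pte |= PTE_USER;
        if (!(acc & ACC_EXEC))
                pte |= PTE_NX;

        return pte | (pfn << 12);
}

int main(void)
{
        uint64_t pte = build_pte(ACC_WRITE | ACC_USER | ACC_EXEC,
                                 0x1234, true, false);

        printf("pte = %#llx\n", (unsigned long long)pte);
        return 0;
}

With the adjustment and construction phases separated like this, clearing an
access bit can never leave a stale bit behind in the pte word, which parallels
the removal of the late "spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE)" in
the patch below.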
Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c | 51 ++++++++++++++++++++++++++-------------------------
 1 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a999755..af8bcb2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2336,32 +2336,13 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		return 0;
 
 	spte = PT_PRESENT_MASK;
-	if (!speculative)
-		spte |= shadow_accessed_mask;
-
-	if (pte_access & ACC_EXEC_MASK)
-		spte |= shadow_x_mask;
-	else
-		spte |= shadow_nx_mask;
-
-	if (pte_access & ACC_USER_MASK)
-		spte |= shadow_user_mask;
-
-	if (level > PT_PAGE_TABLE_LEVEL)
-		spte |= PT_PAGE_SIZE_MASK;
-	if (tdp_enabled)
-		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
-			kvm_is_mmio_pfn(pfn));
 
 	if (host_writable)
 		spte |= SPTE_HOST_WRITEABLE;
 	else
 		pte_access &= ~ACC_WRITE_MASK;
 
-	spte |= (u64)pfn << PAGE_SHIFT;
-
 	if (pte_access & ACC_WRITE_MASK) {
-
 		/*
 		 * Other vcpu creates new sp in the window between
 		 * mapping_level() and acquiring mmu-lock. We can
@@ -2369,11 +2350,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * be fixed if guest refault.
 		 */
 		if (level > PT_PAGE_TABLE_LEVEL &&
-		      has_wrprotected_page(vcpu->kvm, gfn, level))
+		    has_wrprotected_page(vcpu->kvm, gfn, level))
 			goto done;
 
-		spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;
-
 		/*
 		 * Optimization: for pte sync, if spte was writable the hash
 		 * lookup is unnecessary (and expensive). Write protection
@@ -2381,21 +2360,43 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * Same reasoning can be applied to dirty page accounting.
 		 */
 		if (!can_unsync && is_writable_pte(*sptep))
-			goto set_pte;
+			goto out_access_adjust;
 
 		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
 			ret = 1;
 			pte_access &= ~ACC_WRITE_MASK;
-			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
 		}
 	}
 
+out_access_adjust:
+	if (!speculative)
+		spte |= shadow_accessed_mask;
+
+	if (pte_access & ACC_EXEC_MASK)
+		spte |= shadow_x_mask;
+	else
+		spte |= shadow_nx_mask;
+
+	if (pte_access & ACC_USER_MASK)
+		spte |= shadow_user_mask;
+	if (pte_access & ACC_WRITE_MASK)
+		spte |= PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE;
+
+	if (level > PT_PAGE_TABLE_LEVEL)
+		spte |= PT_PAGE_SIZE_MASK;
+
+	if (tdp_enabled)
+		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
+			kvm_is_mmio_pfn(pfn));
+
+	spte |= (u64)pfn << PAGE_SHIFT;
+
+	if (is_writable_pte(spte))
 		mark_page_dirty(vcpu->kvm, gfn);
 
-set_pte:
 	if (mmu_spte_update(sptep, spte))
 		kvm_flush_remote_tlbs(vcpu->kvm);
 done: