From patchwork Sun May 19 04:52:21 2013
X-Patchwork-Submitter: "Nakajima, Jun"
X-Patchwork-Id: 2589761
From: Jun Nakajima
To: kvm@vger.kernel.org
Cc: Gleb Natapov, Paolo Bonzini
Subject: [PATCH v3 02/13] nEPT: Move gpte_access() and prefetch_invalid_gpte() to paging_tmpl.h
Date: Sat, 18 May 2013 21:52:21 -0700
Message-Id: <1368939152-11406-2-git-send-email-jun.nakajima@intel.com>
In-Reply-To: <1368939152-11406-1-git-send-email-jun.nakajima@intel.com>
References: <1368939152-11406-1-git-send-email-jun.nakajima@intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Nadav Har'El

In preparation for nested EPT, move gpte_access() and prefetch_invalid_gpte()
from mmu.c to paging_tmpl.h.
Signed-off-by: Nadav Har'El
Signed-off-by: Jun Nakajima
Signed-off-by: Xinhao Xu
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu.c         | 30 ------------------------------
 arch/x86/kvm/paging_tmpl.h | 40 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 004cc87..117233f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2488,26 +2488,6 @@ static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 	return gfn_to_pfn_memslot_atomic(slot, gfn);
 }
 
-static bool prefetch_invalid_gpte(struct kvm_vcpu *vcpu,
-				  struct kvm_mmu_page *sp, u64 *spte,
-				  u64 gpte)
-{
-	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
-		goto no_present;
-
-	if (!is_present_gpte(gpte))
-		goto no_present;
-
-	if (!(gpte & PT_ACCESSED_MASK))
-		goto no_present;
-
-	return false;
-
-no_present:
-	drop_spte(vcpu->kvm, spte);
-	return true;
-}
-
 static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 				    struct kvm_mmu_page *sp,
 				    u64 *start, u64 *end)
@@ -3408,16 +3388,6 @@ static bool sync_mmio_spte(u64 *sptep, gfn_t gfn, unsigned access,
 	return false;
 }
 
-static inline unsigned gpte_access(struct kvm_vcpu *vcpu, u64 gpte)
-{
-	unsigned access;
-
-	access = (gpte & (PT_WRITABLE_MASK | PT_USER_MASK)) | ACC_EXEC_MASK;
-	access &= ~(gpte >> PT64_NX_SHIFT);
-
-	return access;
-}
-
 static inline bool is_last_gpte(struct kvm_mmu *mmu, unsigned level, unsigned gpte)
 {
 	unsigned index;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index da20860..df34d4a 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -103,6 +103,36 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	return (ret != orig_pte);
 }
 
+static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
+				  struct kvm_mmu_page *sp, u64 *spte,
+				  u64 gpte)
+{
+	if (is_rsvd_bits_set(&vcpu->arch.mmu, gpte, PT_PAGE_TABLE_LEVEL))
+		goto no_present;
+
+	if (!is_present_gpte(gpte))
+		goto no_present;
+
+	if (!(gpte & PT_ACCESSED_MASK))
+		goto no_present;
+
+	return false;
+
+no_present:
+	drop_spte(vcpu->kvm, spte);
+	return true;
+}
+
+static inline unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, u64 gpte)
+{
+	unsigned access;
+
+	access = (gpte & (PT_WRITABLE_MASK | PT_USER_MASK)) | ACC_EXEC_MASK;
+	access &= ~(gpte >> PT64_NX_SHIFT);
+
+	return access;
+}
+
 static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
 					     struct kvm_mmu *mmu,
 					     struct guest_walker *walker,
@@ -225,7 +255,7 @@ retry_walk:
 		}
 
 		accessed_dirty &= pte;
-		pte_access = pt_access & gpte_access(vcpu, pte);
+		pte_access = pt_access & FNAME(gpte_access)(vcpu, pte);
 
 		walker->ptes[walker->level - 1] = pte;
 	} while (!is_last_gpte(mmu, walker->level, pte));
@@ -309,13 +339,13 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	gfn_t gfn;
 	pfn_t pfn;
 
-	if (prefetch_invalid_gpte(vcpu, sp, spte, gpte))
+	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
 
 	pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
 
 	gfn = gpte_to_gfn(gpte);
-	pte_access = sp->role.access & gpte_access(vcpu, gpte);
+	pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte);
 	protect_clean_gpte(&pte_access, gpte);
 	pfn = pte_prefetch_gfn_to_pfn(vcpu, gfn,
 			no_dirty_log && (pte_access & ACC_WRITE_MASK));
@@ -782,14 +812,14 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 					  sizeof(pt_element_t)))
 			return -EINVAL;
 
-		if (prefetch_invalid_gpte(vcpu, sp, &sp->spt[i], gpte)) {
+		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
 			vcpu->kvm->tlbs_dirty++;
 			continue;
 		}
 
 		gfn = gpte_to_gfn(gpte);
 		pte_access = sp->role.access;
-		pte_access &= gpte_access(vcpu, gpte);
+		pte_access &= FNAME(gpte_access)(vcpu, gpte);
 		protect_clean_gpte(&pte_access, gpte);
 
 		if (sync_mmio_spte(&sp->spt[i], gfn, pte_access, &nr_present))
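A note on the mechanical FNAME() renames above: paging_tmpl.h is a template
that mmu.c includes once per guest paging mode, with FNAME(name) defined per
mode (e.g. paging##64_##name when PTTYPE == 64), so every helper defined
there is stamped out as a per-mode copy such as paging64_gpte_access() and
paging32_gpte_access(). Moving gpte_access() and prefetch_invalid_gpte() into
the template is what later lets the nested-EPT instantiation added by the
rest of this series supply its own versions. The following is a minimal,
self-contained sketch of that instantiate-by-macro pattern; the TEMPLATE_BODY
macro and the helper body are simplified stand-ins, not kernel code.

/*
 * Illustrative sketch only -- not kernel code.  mmu.c achieves the same
 * effect by defining PTTYPE and FNAME() and then including paging_tmpl.h
 * once per guest paging mode.
 */
#include <stdio.h>

/* the "template" body; in the kernel this lives in paging_tmpl.h */
#define TEMPLATE_BODY						\
	static unsigned FNAME(gpte_access)(unsigned long long gpte)	\
	{							\
		/* simplified stand-in for the real access computation */ \
		return (unsigned)(gpte & 0x7);			\
	}

/* first instantiation: 64-bit guest page tables */
#define FNAME(name) paging64_##name
TEMPLATE_BODY
#undef FNAME

/* second instantiation: 32-bit guest page tables */
#define FNAME(name) paging32_##name
TEMPLATE_BODY
#undef FNAME

int main(void)
{
	/* both copies exist side by side, each with its own name */
	printf("%u %u\n", paging64_gpte_access(0x5ULL),
	       paging32_gpte_access(0x3ULL));
	return 0;
}

Compiled as plain C this prints one value from each instantiation; the kernel
relies on the same preprocessor rescanning to generate the paging64_*,
paging32_* and, once the nested EPT patches are applied, EPT-specific
variants of these helpers.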