From patchwork Tue Jun 14 17:03:43 2011
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 879572
Date: Wed, 15 Jun 2011 02:03:43 +0900
From: Takuya Yoshikawa
To: avi@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp, mingo@elte.hu
Subject: [PATCH 3/3] KVM: MMU: Use helpers to clean up walk_addr_generic()
Message-Id: <20110615020343.991f0b86.takuya.yoshikawa@gmail.com>
In-Reply-To: <20110615020003.15722a29.takuya.yoshikawa@gmail.com>
References: <20110615020003.15722a29.takuya.yoshikawa@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Takuya Yoshikawa

Introduce two new helpers: set_accessed_bit() and is_last_gpte().
These names were suggested by Ingo and Avi.

Cc: Ingo Molnar
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/paging_tmpl.h |   57 ++++++++++++++++++++++++++++++++-----------
 1 files changed, 42 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 92fe275..d655a4b6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -113,6 +113,43 @@ static unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, pt_element_t gpte)
 	return access;
 }
 
+static int FNAME(set_accessed_bit)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				   gfn_t table_gfn, unsigned index,
+				   pt_element_t __user *ptep_user,
+				   pt_element_t *ptep)
+{
+	int ret;
+
+	trace_kvm_mmu_set_accessed_bit(table_gfn, index, sizeof(*ptep));
+	ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+				  *ptep, *ptep|PT_ACCESSED_MASK);
+	if (unlikely(ret))
+		return ret;
+
+	mark_page_dirty(vcpu->kvm, table_gfn);
+	*ptep |= PT_ACCESSED_MASK;
+
+	return 0;
+}
+
+static bool FNAME(is_last_gpte)(struct guest_walker *walker,
+				struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				pt_element_t gpte)
+{
+	if (walker->level == PT_PAGE_TABLE_LEVEL)
+		return true;
+
+	if ((walker->level == PT_DIRECTORY_LEVEL) && is_large_pte(gpte) &&
+	    (PTTYPE == 64 || is_pse(vcpu)))
+		return true;
+
+	if ((walker->level == PT_PDPE_LEVEL) && is_large_pte(gpte) &&
+	    (mmu->root_level == PT64_ROOT_LEVEL))
+		return true;
+
+	return false;
+}
+
 /*
  * Fetch a guest pte for a guest virtual address
  */
@@ -214,31 +251,21 @@ retry_walk:
 		if (!eperm && unlikely(!(pte & PT_ACCESSED_MASK))) {
 			int ret;
 
-			trace_kvm_mmu_set_accessed_bit(table_gfn, index,
-						       sizeof(pte));
-			ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
-						  pte, pte|PT_ACCESSED_MASK);
-			if (unlikely(ret < 0)) {
+
+			ret = FNAME(set_accessed_bit)(vcpu, mmu, table_gfn,
+						      index, ptep_user, &pte);
+			if (ret < 0) {
 				errcode |= PFERR_PRESENT_MASK;
 				goto error;
 			} else if (ret)
 				goto retry_walk;
-
-			mark_page_dirty(vcpu->kvm, table_gfn);
-			pte |= PT_ACCESSED_MASK;
 		}
 
 		pte_access = pt_access & FNAME(gpte_access)(vcpu, pte);
 
 		walker->ptes[walker->level - 1] = pte;
 
-		if ((walker->level == PT_PAGE_TABLE_LEVEL) ||
-		    ((walker->level == PT_DIRECTORY_LEVEL) &&
-		     is_large_pte(pte) &&
-		     (PTTYPE == 64 || is_pse(vcpu))) ||
-		    ((walker->level == PT_PDPE_LEVEL) &&
-		     is_large_pte(pte) &&
-		     mmu->root_level == PT64_ROOT_LEVEL)) {
+		if (FNAME(is_last_gpte)(walker, vcpu, mmu, pte)) {
 			int lvl = walker->level;
 			gpa_t real_gpa;
 			gfn_t gfn;