From patchwork Sun May 1 05:33:07 2011
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 744892
Date: Sun, 1 May 2011 14:33:07 +0900
From: Takuya Yoshikawa
To: avi@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp
Subject: [PATCH 1/1 v2] KVM: MMU: Use ptep_user for cmpxchg_gpte()
Message-Id: <20110501143307.1bcfd375.takuya.yoshikawa@gmail.com>
In-Reply-To: <20110501143026.9eb3c875.takuya.yoshikawa@gmail.com>
References: <20110501143026.9eb3c875.takuya.yoshikawa@gmail.com>

From: Takuya Yoshikawa

The address of the gpte was already calculated and stored in ptep_user
before entering cmpxchg_gpte().  This patch makes cmpxchg_gpte() use
that address, making it clear that we are operating on the same address
used during walk_addr_generic().

Note that the unlikely annotations are used to show that the conditions
are unusual, not for performance.
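As an aside for readers following the change: the essence is that the
walker computes the gpte's address once and the cmpxchg helper reuses
that pointer instead of re-deriving it from table_gfn.  Below is a
minimal userspace sketch of that pattern, not kernel code: gpte_t,
ACCESSED_MASK, and this toy cmpxchg_gpte() are illustrative stand-ins,
and __sync_val_compare_and_swap() plays the role of the patch's
CMPXCHG(&table[index], orig_pte, new_pte).

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gpte_t;
#define ACCESSED_MASK	(1ULL << 5)

/* Returns nonzero if the entry changed under us, 0 if we set the bit. */
static int cmpxchg_gpte(gpte_t *ptep, gpte_t orig, gpte_t new)
{
	/* __sync_val_compare_and_swap() returns the value found at *ptep. */
	return __sync_val_compare_and_swap(ptep, orig, new) != orig;
}

int main(void)
{
	gpte_t table[512] = { [7] = 0x1000 };
	unsigned index = 7;

	/* The walk computes the entry's address once... */
	gpte_t *ptep = &table[index];
	gpte_t pte = *ptep;

	/* ...and the helper reuses that address, as the patch does with
	 * ptep_user instead of translating table_gfn a second time. */
	if (cmpxchg_gpte(ptep, pte, pte | ACCESSED_MASK))
		printf("raced: restart the walk\n");
	else
		printf("gpte now %#llx\n", (unsigned long long)*ptep);

	return 0;
}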
Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/paging_tmpl.h |   26 ++++++++++++--------------
 1 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 52450a6..f9d9af1 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -79,21 +79,19 @@ static gfn_t gpte_to_gfn_lvl(pt_element_t gpte, int lvl)
 }
 
 static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			       gfn_t table_gfn, unsigned index,
-			       pt_element_t orig_pte, pt_element_t new_pte)
+			       pt_element_t __user *ptep_user, unsigned index,
+			       pt_element_t orig_pte, pt_element_t new_pte)
 {
+	int npages;
 	pt_element_t ret;
 	pt_element_t *table;
 	struct page *page;
-	gpa_t gpa;
 
-	gpa = mmu->translate_gpa(vcpu, table_gfn << PAGE_SHIFT,
-				 PFERR_USER_MASK|PFERR_WRITE_MASK);
-	if (gpa == UNMAPPED_GVA)
+	npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page);
+	/* Check if the user is doing something meaningless. */
+	if (unlikely(npages != 1))
 		return -EFAULT;
 
-	page = gfn_to_page(vcpu->kvm, gpa_to_gfn(gpa));
-
 	table = kmap_atomic(page, KM_USER0);
 	ret = CMPXCHG(&table[index], orig_pte, new_pte);
 	kunmap_atomic(table, KM_USER0);
@@ -234,9 +232,9 @@ walk:
 				int ret;
 				trace_kvm_mmu_set_accessed_bit(table_gfn, index,
 							       sizeof(pte));
-				ret = FNAME(cmpxchg_gpte)(vcpu, mmu, table_gfn,
-					    index, pte, pte|PT_ACCESSED_MASK);
-				if (ret < 0) {
+				ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+							  pte, pte|PT_ACCESSED_MASK);
+				if (unlikely(ret < 0)) {
 					present = false;
 					break;
 				} else if (ret)
@@ -293,9 +291,9 @@ walk:
 			int ret;
 			trace_kvm_mmu_set_dirty_bit(table_gfn, index,
 						    sizeof(pte));
-			ret = FNAME(cmpxchg_gpte)(vcpu, mmu, table_gfn, index, pte,
-						  pte|PT_DIRTY_MASK);
-			if (ret < 0) {
+			ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+						  pte, pte|PT_DIRTY_MASK);
+			if (unlikely(ret < 0)) {
 				present = false;
 				goto error;
 			} else if (ret)
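On the unlikely() annotations: in the kernel they expand to
__builtin_expect(!!(x), 0), a GCC branch-prediction hint, and as the
changelog says they are used here mainly to document that the condition
is abnormal.  A standalone sketch of the same construct, where
pin_page() is a made-up stand-in for the "did we pin exactly one page?"
check, compilable with GCC or Clang:

#include <stdio.h>

/* The kernel's definition, reproduced for illustration. */
#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Made-up stand-in for the pinning check in the patch. */
static int pin_page(int npages)
{
	if (unlikely(npages != 1))	/* the abnormal path */
		return -1;
	return 0;			/* the common path */
}

int main(void)
{
	printf("%d %d\n", pin_page(1), pin_page(0));	/* prints: 0 -1 */
	return 0;
}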