From patchwork Thu Apr 30 11:36:02 2015
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 6302301
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, bsd@redhat.com, guangrong.xiao@linux.intel.com,
	Yang Zhang, wanpeng.li@linux.intel.com
Subject: [PATCH 03/13] KVM: export __gfn_to_pfn_memslot, drop gfn_to_pfn_async
Date: Thu, 30 Apr 2015 13:36:02 +0200
Message-Id: <1430393772-27208-4-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1430393772-27208-1-git-send-email-pbonzini@redhat.com>
References: <1430393772-27208-1-git-send-email-pbonzini@redhat.com>

gfn_to_pfn_async is used in just one place, and because of x86-specific
treatment that place will need to look at the memory slot.  Hence inline
it into try_async_pf and export __gfn_to_pfn_memslot.  The patch also
switches the subsequent gfn_to_pfn_prot call to __gfn_to_pfn_memslot.
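For illustration, the calling convention that try_async_pf relies on
after this change, written out as a standalone sketch.  This is not code
from the patch, and example_fault_in is an invented name; it only
restates the contract of the now-exported __gfn_to_pfn_memslot: a
non-NULL async out-parameter lets the function return early when
faulting the page in would block, while a NULL async forces a
synchronous fault-in.

/*
 * Sketch only, not part of the patch; example_fault_in is an
 * invented name.  It mirrors the convention try_async_pf uses
 * after this change.
 */
static pfn_t example_fault_in(struct kvm_memory_slot *slot, gfn_t gfn,
			      bool write_fault, bool *writable)
{
	bool async = false;
	pfn_t pfn;

	/*
	 * With a non-NULL async pointer, __gfn_to_pfn_memslot may
	 * set *async instead of blocking on a page that has to be
	 * faulted in.
	 */
	pfn = __gfn_to_pfn_memslot(slot, gfn, false /* atomic */, &async,
				   write_fault, writable);
	if (!async)
		return pfn;	/* the mapping was resolved synchronously */

	/*
	 * The caller would try to set up an async page fault here;
	 * if that is not possible, retry with async == NULL to
	 * block until the page is available.
	 */
	return __gfn_to_pfn_memslot(slot, gfn, false, NULL,
				    write_fault, writable);
}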
For now this is just a small optimization, but having a memslot
argument will also be useful when implementing SMRAM (which will need
an x86-specific function for gfn-to-memslot conversion; a sketch of
such a caller follows the patch).  Finally, remove the now-unused
async argument of __gfn_to_pfn.

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu.c       |  9 +++++----
 include/linux/kvm_host.h |  4 ++--
 virt/kvm/kvm_main.c      | 26 ++++++++------------------
 3 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 209fe1477465..371109546382 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3475,10 +3475,12 @@ static bool can_do_async_pf(struct kvm_vcpu *vcpu)
 static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 			 gva_t gva, pfn_t *pfn, bool write, bool *writable)
 {
+	struct kvm_memory_slot *slot;
 	bool async;
 
-	*pfn = gfn_to_pfn_async(vcpu->kvm, gfn, &async, write, writable);
-
+	slot = gfn_to_memslot(vcpu->kvm, gfn);
+	async = false;
+	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
 	if (!async)
 		return false; /* *pfn has correct page already */
 
@@ -3492,8 +3494,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 		return true;
 	}
 
-	*pfn = gfn_to_pfn_prot(vcpu->kvm, gfn, write, writable);
-
+	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable);
 	return false;
 }
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ad45054309a0..647ad05b05af 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -538,13 +538,13 @@ void kvm_release_page_dirty(struct page *page);
 void kvm_set_page_accessed(struct page *page);
 
 pfn_t gfn_to_pfn_atomic(struct kvm *kvm, gfn_t gfn);
-pfn_t gfn_to_pfn_async(struct kvm *kvm, gfn_t gfn, bool *async,
-		       bool write_fault, bool *writable);
 pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn);
 pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 		      bool *writable);
 pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn);
+pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic,
+			   bool *async, bool write_fault, bool *writable);
 
 void kvm_release_pfn_clean(pfn_t pfn);
 void kvm_set_pfn_dirty(pfn_t pfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b6d415156283..3382de0302a0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1355,9 +1355,8 @@ exit:
 	return pfn;
 }
 
-static pfn_t
-__gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic,
-		     bool *async, bool write_fault, bool *writable)
+pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic,
+			   bool *async, bool write_fault, bool *writable)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
 
@@ -1376,44 +1375,35 @@ __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic,
 
 	return hva_to_pfn(addr, atomic, async, write_fault, writable);
 }
+EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
-static pfn_t __gfn_to_pfn(struct kvm *kvm, gfn_t gfn, bool atomic, bool *async,
+static pfn_t __gfn_to_pfn(struct kvm *kvm, gfn_t gfn, bool atomic,
 			  bool write_fault, bool *writable)
 {
 	struct kvm_memory_slot *slot;
 
-	if (async)
-		*async = false;
-
 	slot = gfn_to_memslot(kvm, gfn);
 
-	return __gfn_to_pfn_memslot(slot, gfn, atomic, async, write_fault,
+	return __gfn_to_pfn_memslot(slot, gfn, atomic, NULL, write_fault,
 				    writable);
 }
 
 pfn_t gfn_to_pfn_atomic(struct kvm *kvm, gfn_t gfn)
 {
-	return __gfn_to_pfn(kvm, gfn, true, NULL, true, NULL);
+	return __gfn_to_pfn(kvm, gfn, true, true, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_atomic);
 
-pfn_t gfn_to_pfn_async(struct kvm *kvm, gfn_t gfn, bool *async,
-		       bool write_fault, bool *writable)
-{
-	return __gfn_to_pfn(kvm, gfn, false, async, write_fault, writable);
-}
-EXPORT_SYMBOL_GPL(gfn_to_pfn_async);
-
 pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
-	return __gfn_to_pfn(kvm, gfn, false, NULL, true, NULL);
+	return __gfn_to_pfn(kvm, gfn, false, true, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn);
 
 pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 		      bool *writable)
 {
-	return __gfn_to_pfn(kvm, gfn, false, NULL, write_fault, writable);
+	return __gfn_to_pfn(kvm, gfn, false, write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
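As a postscript to the SMRAM remark in the commit message, here is a
hedged sketch of the kind of x86-specific caller the export enables.
Both names below (x86_gfn_to_memslot_example, example_vcpu_gfn_to_pfn)
are invented for illustration and are not part of this series; an
SMRAM-aware lookup would pick a different memslot set while the VCPU is
in SMM.

static struct kvm_memory_slot *
x86_gfn_to_memslot_example(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	/*
	 * Invented helper: an SMRAM-aware version would consult vcpu
	 * state (e.g. whether the VCPU is in SMM) to choose among
	 * memslot sets; here it just does the generic lookup.
	 */
	return gfn_to_memslot(vcpu->kvm, gfn);
}

static pfn_t example_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
				     bool write_fault, bool *writable)
{
	struct kvm_memory_slot *slot = x86_gfn_to_memslot_example(vcpu, gfn);

	/* Synchronous fault-in: atomic == false, async == NULL. */
	return __gfn_to_pfn_memslot(slot, gfn, false, NULL,
				    write_fault, writable);
}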