From patchwork Sun Jul 29 08:16:28 2012
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 1252091
Message-ID: <5014F15C.60400@linux.vnet.ibm.com>
Date: Sun, 29 Jul 2012 16:16:28 +0800
From: Xiao Guangrong
To: Xiao Guangrong
CC: Avi Kivity, Marcelo Tosatti, LKML, KVM
Subject: [PATCH 7/9] KVM: define kvm_bad_page statically
References: <5014F053.8020305@linux.vnet.ibm.com>
In-Reply-To: <5014F053.8020305@linux.vnet.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

Defining kvm_bad_page statically eliminates the overhead of a function
call and cleans up the code.

Signed-off-by: Xiao Guangrong
---
 include/linux/kvm_host.h |    9 +++++++--
 virt/kvm/async_pf.c      |    2 +-
 virt/kvm/kvm_main.c      |   13 +------------
 3 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 387ecc5..08a9da9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -68,6 +68,13 @@ static inline int is_invalid_pfn(pfn_t pfn)
 	return !is_noslot_pfn(pfn) && is_error_pfn(pfn);
 }
 
+#define kvm_bad_page (ERR_PTR(-ENOENT))
+
+static inline int is_error_page(struct page *page)
+{
+	return IS_ERR(page);
+}
+
 /*
  * vcpu->requests bit members
  */
@@ -410,7 +417,6 @@ id_to_memslot(struct kvm_memslots *slots, int id)
 	return slot;
 }
 
-int is_error_page(struct page *page);
 int kvm_is_error_hva(unsigned long addr);
 int kvm_set_memory_region(struct kvm *kvm,
 			  struct kvm_userspace_memory_region *mem,
@@ -437,7 +443,6 @@ void kvm_arch_flush_shadow(struct kvm *kvm);
 int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn, struct page **pages,
 			    int nr_pages);
 
-struct page *get_bad_page(void);
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
 void kvm_release_page_clean(struct page *page);
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 7972278..aa38af6 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -203,7 +203,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu)
 	if (!work)
 		return -ENOMEM;
 
-	work->page = get_bad_page();
+	work->page = kvm_bad_page;
 	INIT_LIST_HEAD(&work->queue); /* for list_del to work */
 
 	spin_lock(&vcpu->async_pf.lock);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 51aaba4..f09f48a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -931,17 +931,6 @@ void kvm_disable_largepages(void)
 }
 EXPORT_SYMBOL_GPL(kvm_disable_largepages);
 
-int is_error_page(struct page *page)
-{
-	return IS_ERR(page);
-}
-EXPORT_SYMBOL_GPL(is_error_page);
-
-struct page *get_bad_page(void)
-{
-	return ERR_PTR(-ENOENT);
-}
-
 static inline unsigned long bad_hva(void)
 {
 	return PAGE_OFFSET;
@@ -1188,7 +1177,7 @@ static struct page *kvm_pfn_to_page(pfn_t pfn)
 	WARN_ON(kvm_is_mmio_pfn(pfn));
 
 	if (is_error_pfn(pfn) || kvm_is_mmio_pfn(pfn))
-		return get_bad_page();
+		return kvm_bad_page;
 
 	return pfn_to_page(pfn);
 }
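
For reference, kvm_bad_page relies on the kernel's ERR_PTR()/IS_ERR() idiom
from include/linux/err.h: an errno value is encoded in the topmost addresses
of the pointer range, so a plain struct page * return value can carry either
a real page pointer or an error. Below is a minimal standalone userspace
sketch of that idiom, not kernel code; the simplified ERR_PTR/IS_ERR/PTR_ERR
macros, the bad_page define and the lookup() helper are illustrative
re-implementations written only for this example.

/*
 * Userspace sketch of the ERR_PTR()/IS_ERR() idiom that kvm_bad_page uses.
 * The macros below are simplified re-implementations of the kernel ones in
 * include/linux/err.h; lookup() is a hypothetical helper for demonstration.
 */
#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO	4095

/* Encode a negative errno value in a pointer. */
#define ERR_PTR(err)	((void *)(long)(err))
/* A pointer in the topmost MAX_ERRNO addresses is an error value. */
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)
/* Decode the errno value back out of an error pointer. */
#define PTR_ERR(ptr)	((long)(ptr))

#define bad_page	(ERR_PTR(-ENOENT))	/* analogue of kvm_bad_page */

struct page { int id; };

static struct page good = { .id = 42 };

/* Return the real page for idx 0, the error-encoded pointer otherwise. */
static struct page *lookup(int idx)
{
	return idx == 0 ? &good : bad_page;
}

int main(void)
{
	for (int idx = 0; idx < 2; idx++) {
		struct page *page = lookup(idx);

		if (IS_ERR(page))	/* the check is_error_page() performs */
			printf("idx %d: error %ld\n", idx, PTR_ERR(page));
		else
			printf("idx %d: page id %d\n", idx, page->id);
	}
	return 0;
}

Because the error is encoded directly in the pointer value, is_error_page()
can become the one-line static inline IS_ERR() check added to kvm_host.h
above, and the exported is_error_page()/get_bad_page() helpers removed from
kvm_main.c are no longer needed.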