From patchwork Thu Aug 31 09:36:02 2017
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9931661
From: Paul Durrant
Date: Thu, 31 Aug 2017 10:36:02 +0100
Message-ID: <20170831093605.2757-10-paul.durrant@citrix.com>
In-Reply-To: <20170831093605.2757-1-paul.durrant@citrix.com>
References: <20170831093605.2757-1-paul.durrant@citrix.com>
Cc: Andrew Cooper, Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v3 09/12] x86/hvm/ioreq: simplify code and use consistent naming

This patch re-works much of the ioreq server initialization and teardown
code:

- The hvm_map/unmap_ioreq_gfn() functions are expanded to call through
  to hvm_alloc/free_ioreq_gfn() rather than expecting them to be called
  separately by outer functions.

- Several functions now test the validity of the hvm_ioreq_page gfn value
  to determine whether they need to act. This means they can be safely
  called for the bufioreq page even when it is not used.

- hvm_add/remove_ioreq_gfn() simply return in the case of the default
  IOREQ server, so callers no longer need to test before calling.

- hvm_ioreq_server_setup_pages() is renamed to hvm_ioreq_server_map_pages()
  to mirror the existing hvm_ioreq_server_unmap_pages().

All of this significantly shortens the code. (A condensed illustration of
the resulting map/unmap call pattern follows the diffstat below.)

Signed-off-by: Paul Durrant
Reviewed-by: Roger Pau Monné
---
Cc: Jan Beulich
Cc: Andrew Cooper

v3:
- Rebased on top of 's->is_default' to 'IS_DEFAULT(s)' changes.
- Minor updates in response to review comments from Roger.
---
 xen/arch/x86/hvm/ioreq.c | 188 ++++++++++++++++++-----------------------------
 1 file changed, 72 insertions(+), 116 deletions(-)
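
For reviewers' reference, here is a condensed, standalone sketch of the
ownership change described above: mapping now picks and records the gfn
itself, and unmapping tests gfn validity itself, so teardown is always
safe to call. All names and types below are simplified stand-ins invented
for illustration; the real hypervisor code is in the diff that follows.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define INVALID_GFN (~0UL)

/* Simplified stand-in for struct hvm_ioreq_page. */
struct ioreq_page {
    unsigned long gfn;            /* INVALID_GFN <=> nothing mapped */
    bool mapped;
};

/* Trivial allocator stand-in: a two-page pool. */
static unsigned long pool_next = 0x100;

static unsigned long alloc_gfn(void)
{
    return pool_next <= 0x101 ? pool_next++ : INVALID_GFN;
}

/*
 * Map calls through to the allocator itself (as hvm_map_ioreq_gfn()
 * now calls hvm_alloc_ioreq_gfn()) instead of taking a gfn argument.
 */
static int map_page(struct ioreq_page *iorp)
{
    iorp->gfn = alloc_gfn();
    if ( iorp->gfn == INVALID_GFN )
        return -1;                /* -ENOMEM in the real code */

    iorp->mapped = true;
    return 0;
}

/*
 * Unmap tests gfn validity itself, so it is safe to call
 * unconditionally and repeatedly (as hvm_unmap_ioreq_gfn() now is).
 */
static void unmap_page(struct ioreq_page *iorp)
{
    if ( iorp->gfn == INVALID_GFN )
        return;

    iorp->mapped = false;
    iorp->gfn = INVALID_GFN;
}

int main(void)
{
    struct ioreq_page ioreq = { .gfn = INVALID_GFN };
    struct ioreq_page bufioreq = { .gfn = INVALID_GFN };

    map_page(&ioreq);             /* bufioreq left unmapped on purpose */

    unmap_page(&bufioreq);        /* never mapped: still safe */
    unmap_page(&ioreq);
    unmap_page(&ioreq);           /* idempotent */

    assert(!ioreq.mapped && !bufioreq.mapped);
    printf("unconditional teardown OK\n");
    return 0;
}

This is why hvm_ioreq_server_unmap_pages() collapses to two unconditional
hvm_unmap_ioreq_gfn() calls in the patch.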

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 0e92763384..fac82ae934 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -189,63 +189,78 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_gfn(struct domain *d, unsigned long *gfn)
+#define IS_DEFAULT(s) \
+    (s == s->domain->arch.hvm_domain.ioreq_server.server[DEFAULT_IOSERVID])
+
+static unsigned long hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
 {
+    struct domain *d = s->domain;
     unsigned int i;
-    int rc;
 
-    rc = -ENOMEM;
+    ASSERT(!IS_DEFAULT(s));
+
     for ( i = 0; i < sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8; i++ )
     {
         if ( test_and_clear_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask) )
-        {
-            *gfn = d->arch.hvm_domain.ioreq_gfn.base + i;
-            rc = 0;
-            break;
-        }
+            return d->arch.hvm_domain.ioreq_gfn.base + i;
     }
 
-    return rc;
+    return gfn_x(INVALID_GFN);
 }
 
-static void hvm_free_ioreq_gfn(struct domain *d, unsigned long gfn)
+static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s,
+                               unsigned long gfn)
 {
+    struct domain *d = s->domain;
     unsigned int i = gfn - d->arch.hvm_domain.ioreq_gfn.base;
 
-    if ( gfn != gfn_x(INVALID_GFN) )
-        set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
+    ASSERT(!IS_DEFAULT(s));
+    ASSERT(gfn != gfn_x(INVALID_GFN));
+
+    set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
 }
 
-static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
+    if ( iorp->gfn == gfn_x(INVALID_GFN) )
+        return;
+
     destroy_ring_for_helper(&iorp->va, iorp->page);
+    iorp->page = NULL;
+
+    if ( !IS_DEFAULT(s) )
+        hvm_free_ioreq_gfn(s, iorp->gfn);
+
+    iorp->gfn = gfn_x(INVALID_GFN);
 }
 
-static int hvm_map_ioreq_page(
-    struct hvm_ioreq_server *s, bool buf, unsigned long gfn)
+static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct domain *d = s->domain;
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-    void *va;
     int rc;
 
-    if ( (rc = prepare_ring_for_helper(d, gfn, &page, &va)) )
-        return rc;
-
-    if ( (iorp->va != NULL) || d->is_dying )
-    {
-        destroy_ring_for_helper(&va, page);
+    if ( d->is_dying )
         return -EINVAL;
-    }
 
-    iorp->va = va;
-    iorp->page = page;
-    iorp->gfn = gfn;
+    if ( IS_DEFAULT(s) )
+        iorp->gfn = buf ?
+                    d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] :
+                    d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    else
+        iorp->gfn = hvm_alloc_ioreq_gfn(s);
+
+    if ( iorp->gfn == gfn_x(INVALID_GFN) )
+        return -ENOMEM;
 
-    return 0;
+    rc = prepare_ring_for_helper(d, iorp->gfn, &iorp->page, &iorp->va);
+
+    if ( rc )
+        hvm_unmap_ioreq_gfn(s, buf);
+
+    return rc;
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
@@ -264,8 +279,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         if ( !s )
             continue;
 
-        if ( (s->ioreq.va && s->ioreq.page == page) ||
-             (s->bufioreq.va && s->bufioreq.page == page) )
+        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
             break;
@@ -277,20 +291,30 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_remove_ioreq_gfn(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( IS_DEFAULT(s) || iorp->gfn == gfn_x(INVALID_GFN) )
+        return;
+
     if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
                                    _mfn(page_to_mfn(iorp->page)), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( IS_DEFAULT(s) || iorp->gfn == gfn_x(INVALID_GFN) )
+        return 0;
+
     clear_page(iorp->va);
 
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
@@ -314,9 +338,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-#define IS_DEFAULT(s) \
-    (s == s->domain->arch.hvm_domain.ioreq_server.server[DEFAULT_IOSERVID])
-
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      struct vcpu *v)
 {
@@ -428,78 +449,25 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
 }
 
 static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
-                                      unsigned long ioreq_gfn,
-                                      unsigned long bufioreq_gfn)
+                                      bool handle_bufioreq)
 {
     int rc;
 
-    rc = hvm_map_ioreq_page(s, false, ioreq_gfn);
-    if ( rc )
-        return rc;
-
-    if ( bufioreq_gfn != gfn_x(INVALID_GFN) )
-        rc = hvm_map_ioreq_page(s, true, bufioreq_gfn);
-
-    if ( rc )
-        hvm_unmap_ioreq_page(s, false);
-
-    return rc;
-}
-
-static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
-                                        bool handle_bufioreq)
-{
-    struct domain *d = s->domain;
-    unsigned long ioreq_gfn = gfn_x(INVALID_GFN);
-    unsigned long bufioreq_gfn = gfn_x(INVALID_GFN);
-    int rc;
-
-    if ( IS_DEFAULT(s) )
-    {
-        /*
-         * The default ioreq server must handle buffered ioreqs, for
-         * backwards compatibility.
-         */
-        ASSERT(handle_bufioreq);
-        return hvm_ioreq_server_map_pages(s,
-                   d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
-                   d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
-    }
-
-    rc = hvm_alloc_ioreq_gfn(d, &ioreq_gfn);
+    rc = hvm_map_ioreq_gfn(s, false);
 
     if ( !rc && handle_bufioreq )
-        rc = hvm_alloc_ioreq_gfn(d, &bufioreq_gfn);
-
-    if ( !rc )
-        rc = hvm_ioreq_server_map_pages(s, ioreq_gfn, bufioreq_gfn);
+        rc = hvm_map_ioreq_gfn(s, true);
 
     if ( rc )
-    {
-        hvm_free_ioreq_gfn(d, ioreq_gfn);
-        hvm_free_ioreq_gfn(d, bufioreq_gfn);
-    }
+        hvm_unmap_ioreq_gfn(s, false);
 
     return rc;
 }
 
 static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
-    bool handle_bufioreq = !!s->bufioreq.va;
-
-    if ( handle_bufioreq )
-        hvm_unmap_ioreq_page(s, true);
-
-    hvm_unmap_ioreq_page(s, false);
-
-    if ( !IS_DEFAULT(s) )
-    {
-        if ( handle_bufioreq )
-            hvm_free_ioreq_gfn(d, s->bufioreq.gfn);
-
-        hvm_free_ioreq_gfn(d, s->ioreq.gfn);
-    }
+    hvm_unmap_ioreq_gfn(s, true);
+    hvm_unmap_ioreq_gfn(s, false);
 }
 
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
@@ -557,22 +525,15 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
     struct hvm_ioreq_vcpu *sv;
-    bool handle_bufioreq = !!s->bufioreq.va;
 
     spin_lock(&s->lock);
 
     if ( s->enabled )
         goto done;
 
-    if ( !IS_DEFAULT(s) )
-    {
-        hvm_remove_ioreq_gfn(d, &s->ioreq);
-
-        if ( handle_bufioreq )
-            hvm_remove_ioreq_gfn(d, &s->bufioreq);
-    }
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
 
     s->enabled = true;
 
@@ -587,21 +548,13 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
-    bool handle_bufioreq = !!s->bufioreq.va;
-
     spin_lock(&s->lock);
 
     if ( !s->enabled )
        goto done;
 
-    if ( !IS_DEFAULT(s) )
-    {
-        if ( handle_bufioreq )
-            hvm_add_ioreq_gfn(d, &s->bufioreq);
-
-        hvm_add_ioreq_gfn(d, &s->ioreq);
-    }
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
 
     s->enabled = false;
 
@@ -623,6 +576,9 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
+    s->ioreq.gfn = gfn_x(INVALID_GFN);
+    s->bufioreq.gfn = gfn_x(INVALID_GFN);
+
     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
@@ -630,7 +586,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         s->bufioreq_atomic = true;
 
-    rc = hvm_ioreq_server_setup_pages(
+    rc = hvm_ioreq_server_map_pages(
         s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
     if ( rc )
         goto fail_map;
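
As a footnote on the allocator change: hvm_alloc_ioreq_gfn() now reports
exhaustion by returning gfn_x(INVALID_GFN) rather than via an rc/out-parameter
pair, which is what lets hvm_map_ioreq_gfn() handle the default server's fixed
PFN params and dynamically allocated gfns through one code path. Below is a
minimal userspace analogue of the bitmap scan; the mask, base and function
names are invented for illustration, and the real code uses the atomic
test_and_clear_bit()/set_bit() on d->arch.hvm_domain.ioreq_gfn.mask.

#include <limits.h>
#include <stdio.h>

#define GFN_BASE    0xfe000UL
#define INVALID_GFN (~0UL)

/* One bit per allocatable gfn; a set bit means "free". */
static unsigned long gfn_mask = 0x3UL;   /* pretend two pages are available */

static unsigned long alloc_gfn(void)
{
    unsigned int i;

    for ( i = 0; i < sizeof(gfn_mask) * CHAR_BIT; i++ )
    {
        if ( gfn_mask & (1UL << i) )     /* non-atomic test_and_clear_bit() */
        {
            gfn_mask &= ~(1UL << i);
            return GFN_BASE + i;
        }
    }

    return INVALID_GFN;                  /* pool exhausted */
}

static void free_gfn(unsigned long gfn)
{
    gfn_mask |= 1UL << (gfn - GFN_BASE); /* non-atomic set_bit() */
}

int main(void)
{
    unsigned long a = alloc_gfn();
    unsigned long b = alloc_gfn();
    unsigned long c = alloc_gfn();       /* INVALID_GFN: only two bits set */

    printf("a=%#lx b=%#lx c=%#lx\n", a, b, c);

    free_gfn(a);
    printf("reused=%#lx\n", alloc_gfn()); /* gets a's slot back */
    return 0;
}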