From patchwork Tue Sep 5 11:37:13 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9938491
From: Paul Durrant
Date: Tue, 5 Sep 2017 12:37:13 +0100
Message-ID: <20170905113716.3960-10-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170905113716.3960-1-paul.durrant@citrix.com>
References: <20170905113716.3960-1-paul.durrant@citrix.com>
Cc: Andrew Cooper, Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v4 09/12] x86/hvm/ioreq: simplify code and use consistent naming
List-Id: Xen developer discussion
This patch re-works much of the ioreq server initialization and teardown
code:

- The hvm_map/unmap_ioreq_gfn() functions are expanded to call through
  to hvm_alloc/free_ioreq_gfn() rather than expecting them to be called
  separately by outer functions.
- Several functions now test the validity of the hvm_ioreq_page gfn
  value to determine whether they need to act. This means they can be
  safely called for the bufioreq page even when it is not used.
- hvm_add/remove_ioreq_gfn() simply return in the case of the default
  IOREQ server, so callers no longer need to test before calling.
- hvm_ioreq_server_setup_pages() is renamed to
  hvm_ioreq_server_map_pages() to mirror the existing
  hvm_ioreq_server_unmap_pages().

All of this significantly shortens the code.

Signed-off-by: Paul Durrant
Reviewed-by: Roger Pau Monné
Reviewed-by: Wei Liu
---
Cc: Jan Beulich
Cc: Andrew Cooper

v3:
- Rebased on top of 's->is_default' to 'IS_DEFAULT(s)' changes.
- Minor updates in response to review comments from Roger.
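As background for reviewers: the gfn pool that hvm_alloc/free_ioreq_gfn()
manage is a simple bitmap allocator over a fixed range of guest frames.
A rough, self-contained sketch of that pattern (plain C with hypothetical
names and a plain mask instead of Xen's test_and_clear_bit(); not the
hypervisor code itself):

```c
#include <assert.h>
#include <limits.h>

/* Simplified stand-ins for the per-domain ioreq gfn pool. */
#define IOREQ_GFN_BASE  0xf0000UL
#define INVALID_GFN     ULONG_MAX

static unsigned long gfn_mask = 0xffUL; /* one bit per free page in the pool */

/* Analogous to hvm_alloc_ioreq_gfn(): claim the first free bit. */
static unsigned long alloc_ioreq_gfn(void)
{
    unsigned int i;

    for ( i = 0; i < sizeof(gfn_mask) * 8; i++ )
    {
        if ( gfn_mask & (1UL << i) )
        {
            gfn_mask &= ~(1UL << i);        /* cf. test_and_clear_bit() */
            return IOREQ_GFN_BASE + i;
        }
    }

    return INVALID_GFN; /* pool exhausted; caller maps this to -ENOMEM */
}

/* Analogous to hvm_free_ioreq_gfn(): return the gfn's bit to the pool. */
static void free_ioreq_gfn(unsigned long gfn)
{
    assert(gfn != INVALID_GFN);
    gfn_mask |= 1UL << (gfn - IOREQ_GFN_BASE); /* cf. set_bit() */
}
```

Returning INVALID_GFN on exhaustion (rather than passing an out-parameter
and an rc, as the old code did) is what lets hvm_map_ioreq_gfn() below
treat default and non-default servers uniformly.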
---
 xen/arch/x86/hvm/ioreq.c | 183 ++++++++++++++++++-----------------------------
 1 file changed, 69 insertions(+), 114 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 287572bd1f..de04ea815b 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -201,63 +201,75 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_gfn(struct domain *d, unsigned long *gfn)
+static unsigned long hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
 {
+    struct domain *d = s->domain;
     unsigned int i;
-    int rc;
 
-    rc = -ENOMEM;
+    ASSERT(!IS_DEFAULT(s));
+
     for ( i = 0; i < sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8; i++ )
     {
         if ( test_and_clear_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask) )
-        {
-            *gfn = d->arch.hvm_domain.ioreq_gfn.base + i;
-            rc = 0;
-            break;
-        }
+            return d->arch.hvm_domain.ioreq_gfn.base + i;
     }
 
-    return rc;
+    return gfn_x(INVALID_GFN);
 }
 
-static void hvm_free_ioreq_gfn(struct domain *d, unsigned long gfn)
+static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s,
+                               unsigned long gfn)
 {
+    struct domain *d = s->domain;
     unsigned int i = gfn - d->arch.hvm_domain.ioreq_gfn.base;
 
-    if ( gfn != gfn_x(INVALID_GFN) )
-        set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
+    ASSERT(!IS_DEFAULT(s));
+    ASSERT(gfn != gfn_x(INVALID_GFN));
+
+    set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
 }
 
-static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
+    if ( iorp->gfn == gfn_x(INVALID_GFN) )
+        return;
+
     destroy_ring_for_helper(&iorp->va, iorp->page);
+    iorp->page = NULL;
+
+    if ( !IS_DEFAULT(s) )
+        hvm_free_ioreq_gfn(s, iorp->gfn);
+
+    iorp->gfn = gfn_x(INVALID_GFN);
 }
 
-static int hvm_map_ioreq_page(
-    struct hvm_ioreq_server *s, bool buf, unsigned long gfn)
+static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct domain *d = s->domain;
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-    void *va;
     int rc;
 
-    if ( (rc = prepare_ring_for_helper(d, gfn, &page, &va)) )
-        return rc;
-
-    if ( (iorp->va != NULL) || d->is_dying )
-    {
-        destroy_ring_for_helper(&va, page);
+    if ( d->is_dying )
         return -EINVAL;
-    }
 
-    iorp->va = va;
-    iorp->page = page;
-    iorp->gfn = gfn;
+    if ( IS_DEFAULT(s) )
+        iorp->gfn = buf ?
+                    d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] :
+                    d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    else
+        iorp->gfn = hvm_alloc_ioreq_gfn(s);
+
+    if ( iorp->gfn == gfn_x(INVALID_GFN) )
+        return -ENOMEM;
 
-    return 0;
+    rc = prepare_ring_for_helper(d, iorp->gfn, &iorp->page, &iorp->va);
+
+    if ( rc )
+        hvm_unmap_ioreq_gfn(s, buf);
+
+    return rc;
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
@@ -273,8 +285,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         if ( !s )
             continue;
 
-        if ( (s->ioreq.va && s->ioreq.page == page) ||
-             (s->bufioreq.va && s->bufioreq.page == page) )
+        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
             break;
@@ -286,20 +297,30 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_remove_ioreq_gfn(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( IS_DEFAULT(s) || iorp->gfn == gfn_x(INVALID_GFN) )
+        return;
+
     if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
                                    _mfn(page_to_mfn(iorp->page)), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( IS_DEFAULT(s) || iorp->gfn == gfn_x(INVALID_GFN) )
+        return 0;
+
     clear_page(iorp->va);
 
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
@@ -323,7 +344,6 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
-
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      struct vcpu *v)
 {
@@ -435,78 +455,25 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
 }
 
 static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
-                                      unsigned long ioreq_gfn,
-                                      unsigned long bufioreq_gfn)
+                                      bool handle_bufioreq)
 {
     int rc;
 
-    rc = hvm_map_ioreq_page(s, false, ioreq_gfn);
-    if ( rc )
-        return rc;
-
-    if ( bufioreq_gfn != gfn_x(INVALID_GFN) )
-        rc = hvm_map_ioreq_page(s, true, bufioreq_gfn);
-
-    if ( rc )
-        hvm_unmap_ioreq_page(s, false);
-
-    return rc;
-}
-
-static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
-                                        bool handle_bufioreq)
-{
-    struct domain *d = s->domain;
-    unsigned long ioreq_gfn = gfn_x(INVALID_GFN);
-    unsigned long bufioreq_gfn = gfn_x(INVALID_GFN);
-    int rc;
-
-    if ( IS_DEFAULT(s) )
-    {
-        /*
-         * The default ioreq server must handle buffered ioreqs, for
-         * backwards compatibility.
-         */
-        ASSERT(handle_bufioreq);
-        return hvm_ioreq_server_map_pages(s,
-                   d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
-                   d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
-    }
-
-    rc = hvm_alloc_ioreq_gfn(d, &ioreq_gfn);
+    rc = hvm_map_ioreq_gfn(s, false);
 
     if ( !rc && handle_bufioreq )
-        rc = hvm_alloc_ioreq_gfn(d, &bufioreq_gfn);
-
-    if ( !rc )
-        rc = hvm_ioreq_server_map_pages(s, ioreq_gfn, bufioreq_gfn);
+        rc = hvm_map_ioreq_gfn(s, true);
 
     if ( rc )
-    {
-        hvm_free_ioreq_gfn(d, ioreq_gfn);
-        hvm_free_ioreq_gfn(d, bufioreq_gfn);
-    }
+        hvm_unmap_ioreq_gfn(s, false);
 
     return rc;
 }
 
 static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
-    bool handle_bufioreq = !!s->bufioreq.va;
-
-    if ( handle_bufioreq )
-        hvm_unmap_ioreq_page(s, true);
-
-    hvm_unmap_ioreq_page(s, false);
-
-    if ( !IS_DEFAULT(s) )
-    {
-        if ( handle_bufioreq )
-            hvm_free_ioreq_gfn(d, s->bufioreq.gfn);
-
-        hvm_free_ioreq_gfn(d, s->ioreq.gfn);
-    }
+    hvm_unmap_ioreq_gfn(s, true);
+    hvm_unmap_ioreq_gfn(s, false);
 }
 
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
@@ -564,22 +531,15 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 
 static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
     struct hvm_ioreq_vcpu *sv;
-    bool handle_bufioreq = !!s->bufioreq.va;
 
     spin_lock(&s->lock);
 
     if ( s->enabled )
         goto done;
 
-    if ( !IS_DEFAULT(s) )
-    {
-        hvm_remove_ioreq_gfn(d, &s->ioreq);
-
-        if ( handle_bufioreq )
-            hvm_remove_ioreq_gfn(d, &s->bufioreq);
-    }
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
 
     s->enabled = true;
 
@@ -594,21 +554,13 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
-    bool handle_bufioreq = !!s->bufioreq.va;
-
     spin_lock(&s->lock);
 
     if ( !s->enabled )
         goto done;
 
-    if ( !IS_DEFAULT(s) )
-    {
-        if ( handle_bufioreq )
-            hvm_add_ioreq_gfn(d, &s->bufioreq);
-
-        hvm_add_ioreq_gfn(d, &s->ioreq);
-    }
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
 
     s->enabled = false;
 
@@ -630,6 +582,9 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
+    s->ioreq.gfn = gfn_x(INVALID_GFN);
+    s->bufioreq.gfn = gfn_x(INVALID_GFN);
+
    rc = hvm_ioreq_server_alloc_rangesets(s, id);
    if ( rc )
        return rc;
@@ -637,7 +592,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         s->bufioreq_atomic = true;
 
-    rc = hvm_ioreq_server_setup_pages(
+    rc = hvm_ioreq_server_map_pages(
         s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
     if ( rc )
         goto fail_map;
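For reviewers who want to see the teardown idiom in isolation: the
INVALID_GFN sentinel set in hvm_ioreq_server_init() is what makes
hvm_unmap_ioreq_gfn() safe to call unconditionally for both pages, mapped
or not. A minimal sketch of that idiom (hypothetical names, plain C, not
the hypervisor code):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define INVALID_GFN ULONG_MAX

struct ioreq_page {
    unsigned long gfn;  /* INVALID_GFN when the page is not mapped */
    bool mapped;
};

static int unmap_count; /* counts real teardowns, for illustration */

/* Safe to call whether or not the page was ever mapped. */
static void unmap_ioreq_gfn(struct ioreq_page *iorp)
{
    if ( iorp->gfn == INVALID_GFN )
        return;                  /* nothing to do: never mapped */

    iorp->mapped = false;        /* stands in for destroy_ring_for_helper() */
    iorp->gfn = INVALID_GFN;     /* mark free so a second call is a no-op */
    unmap_count++;
}
```

This is why hvm_ioreq_server_unmap_pages() in the patch collapses to two
unconditional calls: the bufioreq page, when unused, still carries the
sentinel and the unmap simply returns.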