From patchwork Mon Oct 30 17:48:20 2017
From: Paul Durrant
X-Patchwork-Id: 10032987
Message-ID: <20171030174829.4518-3-paul.durrant@citrix.com>
In-Reply-To: <20171030174829.4518-1-paul.durrant@citrix.com>
References: <20171030174829.4518-1-paul.durrant@citrix.com>
Date: Mon, 30 Oct 2017 17:48:20 +0000
Cc: Andrew Cooper, Paul Durrant
Subject: [Xen-devel] [PATCH v13 02/11] x86/hvm/ioreq: simplify code and use consistent naming
List-Id: Xen developer discussion
This patch re-works much of the ioreq server initialization and teardown
code:

- The hvm_map/unmap_ioreq_gfn() functions are expanded to call through
  to hvm_alloc/free_ioreq_gfn() rather than expecting them to be called
  separately by outer functions.
- Several functions now test the validity of the hvm_ioreq_page gfn
  value to determine whether they need to act. This means they can be
  safely called for the bufioreq page even when it is not used.
- hvm_add/remove_ioreq_gfn() simply return in the case of the default
  IOREQ server so callers no longer need to test before calling.
- hvm_ioreq_server_setup_pages() is renamed to
  hvm_ioreq_server_map_pages() to mirror the existing
  hvm_ioreq_server_unmap_pages().

All of this significantly shortens the code.

Signed-off-by: Paul Durrant
Reviewed-by: Roger Pau Monné
Reviewed-by: Wei Liu
Acked-by: Jan Beulich
---
Cc: Andrew Cooper

v3:
 - Rebased on top of 's->is_default' to 'IS_DEFAULT(s)' changes.
 - Minor updates in response to review comments from Roger.
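For context on the first bullet: hvm_alloc_ioreq_gfn() scans the per-domain ioreq_gfn.mask bitmap for a set bit, clears it, and returns base plus the bit index, with gfn_x(INVALID_GFN) signalling exhaustion; hvm_free_ioreq_gfn() sets the bit again. A minimal standalone sketch of that allocation pattern follows — the INVALID_GFN macro and the bit helpers here are simplified, non-atomic stand-ins for illustration, not Xen's implementations:

```c
#include <limits.h>
#include <stdbool.h>

/* Stand-in for gfn_x(INVALID_GFN); illustration only. */
#define INVALID_GFN (~0UL)

/* Simplified, non-atomic versions of the bitmap helpers. */
static bool test_and_clear_bit(unsigned int nr, unsigned long *mask)
{
    unsigned long bit = 1UL << nr;
    bool was_set = (*mask & bit) != 0;

    *mask &= ~bit;
    return was_set;
}

static void set_bit(unsigned int nr, unsigned long *mask)
{
    *mask |= 1UL << nr;
}

/* Scan the pool for a free slot: a set bit means "available". */
static unsigned long alloc_gfn(unsigned long *mask, unsigned long base)
{
    unsigned int i;

    for ( i = 0; i < sizeof(*mask) * CHAR_BIT; i++ )
        if ( test_and_clear_bit(i, mask) )
            return base + i;

    return INVALID_GFN; /* pool exhausted */
}

/* Return a gfn to the pool by setting its bit again. */
static void free_gfn(unsigned long *mask, unsigned long base,
                     unsigned long gfn)
{
    set_bit(gfn - base, mask);
}
```

Returning the sentinel instead of an -ENOMEM/out-parameter pair is what lets the patch drop the rc plumbing from the callers; Xen's real helpers use atomic bit operations since the mask is shared state.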
---
 xen/arch/x86/hvm/ioreq.c | 182 ++++++++++++++++----------------
 1 file changed, 69 insertions(+), 113 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index da31918bb1..c21fa9f280 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -210,63 +210,75 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static int hvm_alloc_ioreq_gfn(struct domain *d, unsigned long *gfn)
+static unsigned long hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
 {
+    struct domain *d = s->domain;
     unsigned int i;
-    int rc;
 
-    rc = -ENOMEM;
+    ASSERT(!IS_DEFAULT(s));
+
     for ( i = 0; i < sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8; i++ )
     {
         if ( test_and_clear_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask) )
-        {
-            *gfn = d->arch.hvm_domain.ioreq_gfn.base + i;
-            rc = 0;
-            break;
-        }
+            return d->arch.hvm_domain.ioreq_gfn.base + i;
     }
 
-    return rc;
+    return gfn_x(INVALID_GFN);
 }
 
-static void hvm_free_ioreq_gfn(struct domain *d, unsigned long gfn)
+static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s,
+                               unsigned long gfn)
 {
+    struct domain *d = s->domain;
     unsigned int i = gfn - d->arch.hvm_domain.ioreq_gfn.base;
 
-    if ( gfn != gfn_x(INVALID_GFN) )
-        set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
+    ASSERT(!IS_DEFAULT(s));
+    ASSERT(gfn != gfn_x(INVALID_GFN));
+
+    set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
 }
 
-static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
+    if ( iorp->gfn == gfn_x(INVALID_GFN) )
+        return;
+
     destroy_ring_for_helper(&iorp->va, iorp->page);
+    iorp->page = NULL;
+
+    if ( !IS_DEFAULT(s) )
+        hvm_free_ioreq_gfn(s, iorp->gfn);
+
+    iorp->gfn = gfn_x(INVALID_GFN);
 }
 
-static int hvm_map_ioreq_page(
-    struct hvm_ioreq_server *s, bool buf, unsigned long gfn)
+static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct domain *d = s->domain;
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-    void *va;
     int rc;
 
-    if ( (rc = prepare_ring_for_helper(d, gfn, &page, &va)) )
-        return rc;
-
-    if ( (iorp->va != NULL) || d->is_dying )
-    {
-        destroy_ring_for_helper(&va, page);
+    if ( d->is_dying )
         return -EINVAL;
-    }
 
-    iorp->va = va;
-    iorp->page = page;
-    iorp->gfn = gfn;
+    if ( IS_DEFAULT(s) )
+        iorp->gfn = buf ?
+                    d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] :
+                    d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN];
+    else
+        iorp->gfn = hvm_alloc_ioreq_gfn(s);
 
-    return 0;
+    if ( iorp->gfn == gfn_x(INVALID_GFN) )
+        return -ENOMEM;
+
+    rc = prepare_ring_for_helper(d, iorp->gfn, &iorp->page, &iorp->va);
+
+    if ( rc )
+        hvm_unmap_ioreq_gfn(s, buf);
+
+    return rc;
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
@@ -279,8 +291,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        if ( (s->ioreq.va && s->ioreq.page == page) ||
-             (s->bufioreq.va && s->bufioreq.page == page) )
+        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
             break;
@@ -292,20 +303,30 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_remove_ioreq_gfn(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( IS_DEFAULT(s) || iorp->gfn == gfn_x(INVALID_GFN) )
+        return;
+
     if ( guest_physmap_remove_page(d, _gfn(iorp->gfn),
                                    _mfn(page_to_mfn(iorp->page)), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(
-    struct domain *d, struct hvm_ioreq_page *iorp)
+static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 {
+    struct domain *d = s->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( IS_DEFAULT(s) || iorp->gfn == gfn_x(INVALID_GFN) )
+        return 0;
+
     clear_page(iorp->va);
 
     rc = guest_physmap_add_page(d, _gfn(iorp->gfn),
@@ -440,78 +461,25 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
 }
 
 static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
-                                      unsigned long ioreq_gfn,
-                                      unsigned long bufioreq_gfn)
+                                      bool handle_bufioreq)
 {
     int rc;
 
-    rc = hvm_map_ioreq_page(s, false, ioreq_gfn);
-    if ( rc )
-        return rc;
-
-    if ( bufioreq_gfn != gfn_x(INVALID_GFN) )
-        rc = hvm_map_ioreq_page(s, true, bufioreq_gfn);
-
-    if ( rc )
-        hvm_unmap_ioreq_page(s, false);
-
-    return rc;
-}
-
-static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
-                                        bool handle_bufioreq)
-{
-    struct domain *d = s->domain;
-    unsigned long ioreq_gfn = gfn_x(INVALID_GFN);
-    unsigned long bufioreq_gfn = gfn_x(INVALID_GFN);
-    int rc;
-
-    if ( IS_DEFAULT(s) )
-    {
-        /*
-         * The default ioreq server must handle buffered ioreqs, for
-         * backwards compatibility.
-         */
-        ASSERT(handle_bufioreq);
-        return hvm_ioreq_server_map_pages(s,
-                   d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN],
-                   d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN]);
-    }
-
-    rc = hvm_alloc_ioreq_gfn(d, &ioreq_gfn);
+    rc = hvm_map_ioreq_gfn(s, false);
 
     if ( !rc && handle_bufioreq )
-        rc = hvm_alloc_ioreq_gfn(d, &bufioreq_gfn);
-
-    if ( !rc )
-        rc = hvm_ioreq_server_map_pages(s, ioreq_gfn, bufioreq_gfn);
+        rc = hvm_map_ioreq_gfn(s, true);
 
     if ( rc )
-    {
-        hvm_free_ioreq_gfn(d, ioreq_gfn);
-        hvm_free_ioreq_gfn(d, bufioreq_gfn);
-    }
+        hvm_unmap_ioreq_gfn(s, false);
 
     return rc;
 }
 
 static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
-    bool handle_bufioreq = !!s->bufioreq.va;
-
-    if ( handle_bufioreq )
-        hvm_unmap_ioreq_page(s, true);
-
-    hvm_unmap_ioreq_page(s, false);
-
-    if ( !IS_DEFAULT(s) )
-    {
-        if ( handle_bufioreq )
-            hvm_free_ioreq_gfn(d, s->bufioreq.gfn);
-
-        hvm_free_ioreq_gfn(d, s->ioreq.gfn);
-    }
+    hvm_unmap_ioreq_gfn(s, true);
+    hvm_unmap_ioreq_gfn(s, false);
 }
 
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
@@ -571,22 +539,15 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
     struct hvm_ioreq_vcpu *sv;
-    bool handle_bufioreq = !!s->bufioreq.va;
 
     spin_lock(&s->lock);
 
     if ( s->enabled )
         goto done;
 
-    if ( !IS_DEFAULT(s) )
-    {
-        hvm_remove_ioreq_gfn(d, &s->ioreq);
-
-        if ( handle_bufioreq )
-            hvm_remove_ioreq_gfn(d, &s->bufioreq);
-    }
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
 
     s->enabled = true;
 
@@ -601,21 +562,13 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 
 static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
-    struct domain *d = s->domain;
-    bool handle_bufioreq = !!s->bufioreq.va;
-
     spin_lock(&s->lock);
 
     if ( !s->enabled )
         goto done;
 
-    if ( !IS_DEFAULT(s) )
-    {
-        if ( handle_bufioreq )
-            hvm_add_ioreq_gfn(d, &s->bufioreq);
-
-        hvm_add_ioreq_gfn(d, &s->ioreq);
-    }
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
 
     s->enabled = false;
 
@@ -637,6 +590,9 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
+    s->ioreq.gfn = gfn_x(INVALID_GFN);
+    s->bufioreq.gfn = gfn_x(INVALID_GFN);
+
     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
@@ -644,7 +600,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         s->bufioreq_atomic = true;
 
-    rc = hvm_ioreq_server_setup_pages(
+    rc = hvm_ioreq_server_map_pages(
         s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
     if ( rc )
         goto fail_map;
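The teardown idiom the patch relies on — initializing each page's gfn to the invalid sentinel and testing it on entry, so hvm_unmap_ioreq_gfn() is safe to call unconditionally (e.g. for an unused bufioreq page) and more than once — can be sketched in isolation. The struct and helper below are simplified stand-ins for illustration, not the Xen types:

```c
#include <stddef.h>

#define INVALID_GFN (~0UL) /* stand-in for gfn_x(INVALID_GFN) */

/* Reduced version of struct hvm_ioreq_page, illustration only. */
struct ioreq_page {
    unsigned long gfn;
    void *va;
};

/*
 * Safe to call whether or not the page was ever mapped: the sentinel
 * check makes a second call a no-op, which is why callers no longer
 * need their own "is the bufioreq page in use?" tests.
 */
static void unmap_page(struct ioreq_page *iorp)
{
    if ( iorp->gfn == INVALID_GFN )
        return;

    iorp->va = NULL;         /* stand-in for destroy_ring_for_helper() */
    iorp->gfn = INVALID_GFN; /* restore the sentinel */
}
```

Making teardown idempotent this way is what allows hvm_ioreq_server_unmap_pages() to shrink to two unconditional calls in the patch above.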