From patchwork Tue Aug 22 14:51:02 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9915465
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Tue, 22 Aug 2017 15:51:02 +0100
Message-ID: <20170822145107.6877-9-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170822145107.6877-1-paul.durrant@citrix.com>
References: <20170822145107.6877-1-paul.durrant@citrix.com>
Cc: Andrew Cooper, Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v2 REPOST 08/12] x86/hvm/ioreq: move is_default into struct hvm_ioreq_server

Legacy emulators use the 'default' IOREQ server, which has slightly
different semantics from other, explicitly created, IOREQ servers. Because
of this, most of the initialization and teardown code needs to know whether
the server in question is the default one. This is currently achieved by
passing an is_default boolean argument to the functions concerned. That
argument can be avoided by instead adding an is_default field to struct
hvm_ioreq_server, which is already passed to all of the relevant functions.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
 xen/arch/x86/hvm/ioreq.c         | 80 ++++++++++++++++++----------------------
 xen/include/asm-x86/hvm/domain.h |  1 +
 2 files changed, 36 insertions(+), 45 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 5e01e1a6d2..5737082238 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -302,7 +302,7 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
 }
 
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     bool is_default, struct vcpu *v)
+                                     struct vcpu *v)
 {
     struct hvm_ioreq_vcpu *sv;
     int rc;
@@ -331,7 +331,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
             goto fail3;
 
         s->bufioreq_evtchn = rc;
-        if ( is_default )
+        if ( s->is_default )
             d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
                 s->bufioreq_evtchn;
     }
@@ -431,7 +431,6 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
 }
 
 static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
-                                        bool is_default,
                                         bool handle_bufioreq)
 {
     struct domain *d = s->domain;
@@ -439,7 +438,7 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
     unsigned long bufioreq_gfn = gfn_x(INVALID_GFN);
     int rc;
 
-    if ( is_default )
+    if ( s->is_default )
     {
         /*
          * The default ioreq server must handle buffered ioreqs, for
@@ -468,8 +467,7 @@ static int hvm_ioreq_server_setup_pages(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
-                                         bool is_default)
+static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->domain;
     bool handle_bufioreq = !!s->bufioreq.va;
@@ -479,7 +477,7 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
 
     hvm_unmap_ioreq_page(s, false);
 
-    if ( !is_default )
+    if ( !s->is_default )
     {
         if ( handle_bufioreq )
             hvm_free_ioreq_gfn(d, s->bufioreq.gfn);
@@ -488,25 +486,23 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s,
     }
 }
 
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
-                                            bool is_default)
+static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
 {
     unsigned int i;
 
-    if ( is_default )
+    if ( s->is_default )
         return;
 
     for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            bool is_default)
+static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s)
 {
     unsigned int i;
     int rc;
 
-    if ( is_default )
+    if ( s->is_default )
         goto done;
 
     for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
@@ -537,13 +533,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return 0;
 
  fail:
-    hvm_ioreq_server_free_rangesets(s, false);
+    hvm_ioreq_server_free_rangesets(s);
 
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
-                                    bool is_default)
+static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->domain;
     struct hvm_ioreq_vcpu *sv;
@@ -554,7 +549,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
     if ( s->enabled )
         goto done;
 
-    if ( !is_default )
+    if ( !s->is_default )
     {
         hvm_remove_ioreq_gfn(d, &s->ioreq);
 
@@ -573,8 +568,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
-                                     bool is_default)
+static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 {
     struct domain *d = s->domain;
     bool handle_bufioreq = !!s->bufioreq.va;
@@ -584,7 +578,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s,
     if ( !s->enabled )
         goto done;
 
-    if ( !is_default )
+    if ( !s->is_default )
     {
         if ( handle_bufioreq )
             hvm_add_ioreq_gfn(d, &s->bufioreq);
@@ -600,8 +594,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
                                  struct domain *d, domid_t domid,
-                                 bool is_default, int bufioreq_handling,
-                                 ioservid_t id)
+                                 int bufioreq_handling, ioservid_t id)
 {
     struct vcpu *v;
     int rc;
@@ -614,7 +607,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     INIT_LIST_HEAD(&s->ioreq_vcpu_list);
     spin_lock_init(&s->bufioreq_lock);
 
-    rc = hvm_ioreq_server_alloc_rangesets(s, is_default);
+    rc = hvm_ioreq_server_alloc_rangesets(s);
     if ( rc )
         return rc;
 
@@ -622,13 +615,13 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
         s->bufioreq_atomic = true;
 
     rc = hvm_ioreq_server_setup_pages(
-        s, is_default, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
+        s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
     if ( rc )
         goto fail_map;
 
     for_each_vcpu ( d, v )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, is_default, v);
+        rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail_add;
     }
@@ -637,21 +630,20 @@
  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s, is_default);
+    hvm_ioreq_server_unmap_pages(s);
 
  fail_map:
-    hvm_ioreq_server_free_rangesets(s, is_default);
+    hvm_ioreq_server_free_rangesets(s);
 
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s,
-                                    bool is_default)
+static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
 
-    hvm_ioreq_server_unmap_pages(s, is_default);
-    hvm_ioreq_server_free_rangesets(s, is_default);
+    hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_rangesets(s);
 }
 
 static ioservid_t next_ioservid(struct domain *d)
@@ -695,6 +687,8 @@ int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     if ( !s )
         goto fail1;
 
+    s->is_default = is_default;
+
     domain_pause(d);
     spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
@@ -702,7 +696,7 @@ int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
         goto fail2;
 
-    rc = hvm_ioreq_server_init(s, d, domid, is_default, bufioreq_handling,
+    rc = hvm_ioreq_server_init(s, d, domid, bufioreq_handling,
                                next_ioservid(d));
     if ( rc )
         goto fail3;
@@ -713,7 +707,7 @@ int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     if ( is_default )
     {
         d->arch.hvm_domain.default_ioreq_server = s;
-        hvm_ioreq_server_enable(s, true);
+        hvm_ioreq_server_enable(s);
     }
 
     if ( id )
@@ -756,11 +750,11 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
         p2m_set_ioreq_server(d, 0, s);
 
-        hvm_ioreq_server_disable(s, false);
+        hvm_ioreq_server_disable(s);
 
         list_del(&s->list_entry);
 
-        hvm_ioreq_server_deinit(s, false);
+        hvm_ioreq_server_deinit(s);
 
         domain_unpause(d);
 
@@ -992,9 +986,9 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         domain_pause(d);
 
         if ( enabled )
-            hvm_ioreq_server_enable(s, false);
+            hvm_ioreq_server_enable(s);
         else
-            hvm_ioreq_server_disable(s, false);
+            hvm_ioreq_server_disable(s);
 
         domain_unpause(d);
 
@@ -1017,9 +1011,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
                           &d->arch.hvm_domain.ioreq_server.list,
                           list_entry )
     {
-        bool is_default = (s == d->arch.hvm_domain.default_ioreq_server);
-
-        rc = hvm_ioreq_server_add_vcpu(s, is_default, v);
+        rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
     }
@@ -1066,16 +1058,14 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
                                &d->arch.hvm_domain.ioreq_server.list,
                                list_entry )
     {
-        bool is_default = (s == d->arch.hvm_domain.default_ioreq_server);
-
-        hvm_ioreq_server_disable(s, is_default);
+        hvm_ioreq_server_disable(s);
 
-        if ( is_default )
+        if ( s->is_default )
             d->arch.hvm_domain.default_ioreq_server = NULL;
 
         list_del(&s->list_entry);
 
-        hvm_ioreq_server_deinit(s, is_default);
+        hvm_ioreq_server_deinit(s);
 
         xfree(s);
     }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 7f128c05ff..16344d173b 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -71,6 +71,7 @@ struct hvm_ioreq_server {
     struct rangeset        *range[NR_IO_RANGE_TYPES];
     bool                   enabled;
     bool                   bufioreq_atomic;
+    bool                   is_default;
 };
 
 /*
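
[Editor's note, not part of the patch: for readers unfamiliar with the Xen code touched above, the short standalone C sketch below illustrates the general pattern the patch applies. All names here (struct server, server_create(), and so on) are illustrative stand-ins, not Xen APIs: a property that every init/teardown helper needs is recorded once in the state structure at creation time, instead of being threaded through each helper as an extra boolean argument.]

/*
 * Standalone illustration only (not Xen code): the flag lives in the
 * state structure, so the helpers read it from there rather than
 * taking it as a parameter.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct server {            /* stand-in for struct hvm_ioreq_server */
    bool is_default;       /* set once at creation, read by the helpers */
    bool enabled;
};

/* Before the refactoring, these would take (struct server *s, bool is_default). */
static void server_enable(struct server *s)
{
    if ( !s->is_default )
        printf("claiming per-server resources\n");
    s->enabled = true;
}

static void server_disable(struct server *s)
{
    if ( !s->is_default )
        printf("releasing per-server resources\n");
    s->enabled = false;
}

static struct server *server_create(bool is_default)
{
    struct server *s = calloc(1, sizeof(*s));

    if ( !s )
        return NULL;

    s->is_default = is_default;   /* recorded once, as the patch does */
    server_enable(s);
    return s;
}

int main(void)
{
    struct server *def = server_create(true);
    struct server *other = server_create(false);

    server_disable(other);
    server_disable(def);
    free(other);
    free(def);
    return 0;
}

Keeping the flag next to the rest of the per-server state shortens every call site and removes the possibility of a caller passing a value that disagrees with how the server was actually created, which is the motivation stated in the commit message.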