From patchwork Fri Aug 11 14:21:42 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9896173
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Date: Fri, 11 Aug 2017 15:21:42 +0100
Message-ID: <20170811142143.35787-12-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170811142143.35787-1-paul.durrant@citrix.com>
References: <20170811142143.35787-1-paul.durrant@citrix.com>
MIME-Version: 1.0
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
 Ian Jackson, Tim Deegan, Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v2 11/12] x86/hvm/ioreq: defer mapping gfns
 until they are actually requested
List-Id: Xen developer discussion
A subsequent patch will introduce a new scheme to allow an emulator to
map IOREQ server pages directly from Xen rather than from the guest P2M.

This patch lays the groundwork for that change by deferring mapping of
gfns until their values are requested by an emulator. To that end, the
pad field of the xen_dm_op_get_ioreq_server_info structure is re-purposed
as a flags field, and a new flag, XEN_DMOP_no_gfns, is defined which
modifies the behaviour of XEN_DMOP_get_ioreq_server_info to allow the
caller to avoid requesting the gfn values.

Signed-off-by: Paul Durrant
---
Cc: Ian Jackson
Cc: Wei Liu
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
---
 tools/libs/devicemodel/core.c                   |  8 +++++
 tools/libs/devicemodel/include/xendevicemodel.h |  6 ++--
 xen/arch/x86/hvm/dm.c                           |  9 +++--
 xen/arch/x86/hvm/ioreq.c                        | 44 ++++++++++++++-----------
 xen/include/asm-x86/hvm/domain.h                |  2 +-
 xen/include/public/hvm/dm_op.h                  | 32 ++++++++++--------
 6 files changed, 61 insertions(+), 40 deletions(-)

diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index fcb260d29b..907c894e77 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -188,6 +188,14 @@ int xendevicemodel_get_ioreq_server_info(
 
     data->id = id;
 
+    /*
+     * If the caller is not requesting gfn values then instruct the
+     * hypercall not to retrieve them as this may cause them to be
+     * mapped.
+     */
+    if (!ioreq_gfn && !bufioreq_gfn)
+        data->flags = XEN_DMOP_no_gfns;
+
     rc = xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
     if (rc)
         return rc;
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/libs/devicemodel/include/xendevicemodel.h
index 13216db04a..da6b253cfd 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -61,11 +61,11 @@ int xendevicemodel_create_ioreq_server(
 * @parm domid the domain id to be serviced
 * @parm id the IOREQ Server id.
 * @parm ioreq_gfn pointer to a xen_pfn_t to receive the synchronous ioreq
- *                 gfn
+ *                 gmfn. (May be NULL if not required)
 * @parm bufioreq_gfn pointer to a xen_pfn_t to receive the buffered ioreq
- *                    gfn
+ *                    gmfn. (May be NULL if not required)
 * @parm bufioreq_port pointer to a evtchn_port_t to receive the buffered
- *                     ioreq event channel
+ *                     ioreq event channel. (May be NULL if not required)
 * @return 0 on success, -1 on failure.
 */
 int xendevicemodel_get_ioreq_server_info(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 87ef4b6ca9..c020f0c99f 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -418,16 +418,19 @@ static int dm_op(const struct dmop_args *op_args)
     {
         struct xen_dm_op_get_ioreq_server_info *data =
             &op.u.get_ioreq_server_info;
+        const uint16_t valid_flags = XEN_DMOP_no_gfns;
 
         const_op = false;
 
         rc = -EINVAL;
-        if ( data->pad )
+        if ( data->flags & ~valid_flags )
             break;
 
         rc = hvm_get_ioreq_server_info(d, data->id,
-                                       &data->ioreq_gfn,
-                                       &data->bufioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : &data->ioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : &data->bufioreq_gfn,
                                        &data->bufioreq_port);
         break;
     }
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3a9aaf1f5d..795c198f95 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -344,7 +344,8 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
 
     sv->ioreq_evtchn = rc;
 
-    if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
+    if ( v->vcpu_id == 0 &&
+         (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
     {
         struct domain *d = s->domain;
 
@@ -395,7 +396,8 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
 
         list_del(&sv->list_entry);
 
-        if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
+        if ( v->vcpu_id == 0 &&
+             (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
             free_xen_event_channel(v->domain, s->bufioreq_evtchn);
 
         free_xen_event_channel(v->domain, sv->ioreq_evtchn);
 
@@ -422,7 +424,8 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
 
         list_del(&sv->list_entry);
 
-        if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
+        if ( v->vcpu_id == 0 &&
+             (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
 
         free_xen_event_channel(v->domain, sv->ioreq_evtchn);
 
@@ -433,14 +436,13 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
-                                      bool handle_bufioreq)
+static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc = -ENOMEM;
 
     rc = hvm_map_ioreq_gfn(s, false);
 
-    if ( !rc && handle_bufioreq )
+    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
         rc = hvm_map_ioreq_gfn(s, true);
 
     if ( rc )
@@ -568,13 +570,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     if ( rc )
         return rc;
 
-    if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        s->bufioreq_atomic = true;
-
-    rc = hvm_ioreq_server_map_pages(
-        s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
-    if ( rc )
-        goto fail_map;
+    s->bufioreq_handling = bufioreq_handling;
 
     for_each_vcpu ( d, v )
     {
@@ -589,9 +585,6 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
- fail_map:
-    hvm_ioreq_server_free_rangesets(s);
-
     return rc;
 }
 
@@ -747,11 +740,21 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         if ( s->id != id )
             continue;
 
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+        if ( ioreq_gfn || bufioreq_gfn )
+        {
+            rc = hvm_ioreq_server_map_pages(s);
+            if ( rc )
+                break;
+        }
+
+        if ( ioreq_gfn )
+            *ioreq_gfn = gfn_x(s->ioreq.gfn);
 
-        if ( s->bufioreq.va != NULL )
+        if ( s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF )
         {
-            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
+            if ( bufioreq_gfn )
+                *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
+
             *bufioreq_port = s->bufioreq_evtchn;
         }
 
@@ -1278,7 +1281,8 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg->ptrs.write_pointer += qw ? 2 : 1;
 
     /* Canonicalize read/write pointers to prevent their overflow. */
-    while ( s->bufioreq_atomic && qw++ < IOREQ_BUFFER_SLOT_NUM &&
+    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
+            qw++ < IOREQ_BUFFER_SLOT_NUM &&
             pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
     {
         union bufioreq_pointers old = pg->ptrs, new;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 7b93d10209..b8bcd559a5 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -70,7 +70,7 @@ struct hvm_ioreq_server {
     evtchn_port_t bufioreq_evtchn;
     struct rangeset *range[NR_IO_RANGE_TYPES];
     bool enabled;
-    bool bufioreq_atomic;
+    int bufioreq_handling;
     bool is_default;
 };
 
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 6bbab5fca3..9677bd74e7 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -79,28 +79,34 @@ struct xen_dm_op_create_ioreq_server {
 * XEN_DMOP_get_ioreq_server_info: Get all the information necessary to
 *                                 access IOREQ Server <id>.
 *
- * The emulator needs to map the synchronous ioreq structures and buffered
- * ioreq ring (if it exists) that Xen uses to request emulation. These are
- * hosted in the target domain's gmfns <ioreq_gfn> and <bufioreq_gfn>
- * respectively. In addition, if the IOREQ Server is handling buffered
- * emulation requests, the emulator needs to bind to event channel
- * <bufioreq_port> to listen for them. (The event channels used for
- * synchronous emulation requests are specified in the per-CPU ioreq
- * structures in <ioreq_gfn>).
- * If the IOREQ Server is not handling buffered emulation requests then the
- * values handed back in <bufioreq_gfn> and <bufioreq_port> will both be 0.
+ * If the IOREQ Server is handling buffered emulation requests, the
+ * emulator needs to bind to event channel <bufioreq_port> to listen for
+ * them. (The event channels used for synchronous emulation requests are
+ * specified in the per-CPU ioreq structures).
+ * In addition, if the XENMEM_acquire_resource memory op cannot be used,
+ * the emulator will need to map the synchronous ioreq structures and
+ * buffered ioreq ring (if it exists) from guest memory. If <flags> does
+ * not contain XEN_DMOP_no_gfns then these pages will be made available and
+ * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
+ * respectively. (If the IOREQ Server is not handling buffered emulation
+ * only <ioreq_gfn> will be valid).
 */
 #define XEN_DMOP_get_ioreq_server_info 2
 
 struct xen_dm_op_get_ioreq_server_info {
     /* IN - server id */
     ioservid_t id;
-    uint16_t pad;
+    /* IN - flags */
+    uint16_t flags;
+
+#define _XEN_DMOP_no_gfns 0
+#define XEN_DMOP_no_gfns (1u << _XEN_DMOP_no_gfns)
+
     /* OUT - buffered ioreq port */
     evtchn_port_t bufioreq_port;
-    /* OUT - sync ioreq gfn */
+    /* OUT - sync ioreq gfn (see block comment above) */
     uint64_aligned_t ioreq_gfn;
-    /* OUT - buffered ioreq gfn */
+    /* OUT - buffered ioreq gfn (see block comment above) */
     uint64_aligned_t bufioreq_gfn;
 };