From patchwork Tue Jul 25 21:21:59 2017
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 9863823
From: Stefano Stabellini
To: xen-devel@lists.xen.org
Date: Tue, 25 Jul 2017 14:21:59 -0700
Message-Id: <1501017730-12797-2-git-send-email-sstabellini@kernel.org>
In-Reply-To: <1501017730-12797-1-git-send-email-sstabellini@kernel.org>
References: <1501017730-12797-1-git-send-email-sstabellini@kernel.org>
Cc: jgross@suse.com, Stefano Stabellini, boris.ostrovsky@oracle.com,
    sstabellini@kernel.org, linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH v2 02/13] xen/pvcalls: connect to the backend

Implement the probe function for the pvcalls frontend. Read the
supported versions, max-page-order and function-calls nodes from
xenstore.

Introduce a data structure named pvcalls_bedata. It contains pointers
to the command ring, the event channel, a list of active sockets and a
list of passive sockets. List accesses are protected by a spinlock.

Introduce a waitqueue to allow waiting for a response on commands sent
to the backend. Introduce an array of struct xen_pvcalls_response to
store command responses.

Only one frontend<->backend connection is supported at any given time
for a guest. Store the active frontend device in a static pointer.

Introduce a stub function for the event handler.

Signed-off-by: Stefano Stabellini
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-front.c | 153 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 153 insertions(+)

diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
index a8d38c2..5e0b265 100644
--- a/drivers/xen/pvcalls-front.c
+++ b/drivers/xen/pvcalls-front.c
@@ -20,6 +20,29 @@
 #include <xen/xenbus.h>
 #include <xen/interface/io/pvcalls.h>
 
+#define PVCALLS_INVALID_ID (UINT_MAX)
+#define RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
+#define PVCALLS_NR_REQ_PER_RING __CONST_RING_SIZE(xen_pvcalls, XEN_PAGE_SIZE)
+
+struct pvcalls_bedata {
+        struct xen_pvcalls_front_ring ring;
+        grant_ref_t ref;
+        int irq;
+
+        struct list_head socket_mappings;
+        struct list_head socketpass_mappings;
+        spinlock_t pvcallss_lock;
+
+        wait_queue_head_t inflight_req;
+        struct xen_pvcalls_response rsp[PVCALLS_NR_REQ_PER_RING];
+};
+struct xenbus_device *pvcalls_front_dev;
+
+static irqreturn_t pvcalls_front_event_handler(int irq, void *dev_id)
+{
+        return IRQ_HANDLED;
+}
+
 static const struct xenbus_device_id pvcalls_front_ids[] = {
         { "pvcalls" },
         { "" }
@@ -33,12 +56,142 @@ static int pvcalls_front_remove(struct xenbus_device *dev)
 static int pvcalls_front_probe(struct xenbus_device *dev,
                           const struct xenbus_device_id *id)
 {
+        int ret = -EFAULT, evtchn, ref = -1, i;
+        unsigned int max_page_order, function_calls, len;
+        char *versions;
+        grant_ref_t gref_head = 0;
+        struct xenbus_transaction xbt;
+        struct pvcalls_bedata *bedata = NULL;
+        struct xen_pvcalls_sring *sring;
+
+        if (pvcalls_front_dev != NULL) {
+                dev_err(&dev->dev, "only one PV Calls connection supported\n");
+                return -EINVAL;
+        }
+
+        versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
+        if (!len)
+                return -EINVAL;
+        if (strcmp(versions, "1")) {
+                kfree(versions);
+                return -EINVAL;
+        }
+        kfree(versions);
+        ret = xenbus_scanf(XBT_NIL, dev->otherend,
+                           "max-page-order", "%u", &max_page_order);
+        if (ret <= 0)
+                return -ENODEV;
+        if (max_page_order < RING_ORDER)
+                return -ENODEV;
+        ret = xenbus_scanf(XBT_NIL, dev->otherend,
+                           "function-calls", "%u", &function_calls);
+        if (ret <= 0 || function_calls != 1)
+                return -ENODEV;
+        pr_info("%s max-page-order is %u\n", __func__, max_page_order);
+
+        bedata = kzalloc(sizeof(struct pvcalls_bedata), GFP_KERNEL);
+        if (!bedata)
+                return -ENOMEM;
+
+        init_waitqueue_head(&bedata->inflight_req);
+        for (i = 0; i < PVCALLS_NR_REQ_PER_RING; i++)
+                bedata->rsp[i].req_id = PVCALLS_INVALID_ID;
+
+        sring = (struct xen_pvcalls_sring *) __get_free_page(GFP_KERNEL |
+                                                             __GFP_ZERO);
+        if (!sring)
+                goto error;
+        SHARED_RING_INIT(sring);
+        FRONT_RING_INIT(&bedata->ring, sring, XEN_PAGE_SIZE);
+
+        ret = xenbus_alloc_evtchn(dev, &evtchn);
+        if (ret)
+                goto error;
+
+        bedata->irq = bind_evtchn_to_irqhandler(evtchn,
+                                                pvcalls_front_event_handler,
+                                                0, "pvcalls-frontend", dev);
+        if (bedata->irq < 0) {
+                ret = bedata->irq;
+                goto error;
+        }
+
+        ret = gnttab_alloc_grant_references(1, &gref_head);
+        if (ret < 0)
+                goto error;
+        bedata->ref = ref = gnttab_claim_grant_reference(&gref_head);
+        if (ref < 0)
+                goto error;
+        gnttab_grant_foreign_access_ref(ref, dev->otherend_id,
+                                        virt_to_gfn((void *)sring), 0);
+
+ again:
+        ret = xenbus_transaction_start(&xbt);
+        if (ret) {
+                xenbus_dev_fatal(dev, ret, "starting transaction");
+                goto error;
+        }
+        ret = xenbus_printf(xbt, dev->nodename, "version", "%u", 1);
+        if (ret)
+                goto error_xenbus;
+        ret = xenbus_printf(xbt, dev->nodename, "ring-ref", "%d", ref);
+        if (ret)
+                goto error_xenbus;
+        ret = xenbus_printf(xbt, dev->nodename, "port", "%u",
+                            evtchn);
+        if (ret)
+                goto error_xenbus;
+        ret = xenbus_transaction_end(xbt, 0);
+        if (ret) {
+                if (ret == -EAGAIN)
+                        goto again;
+                xenbus_dev_fatal(dev, ret, "completing transaction");
+                goto error;
+        }
+
+        INIT_LIST_HEAD(&bedata->socket_mappings);
+        INIT_LIST_HEAD(&bedata->socketpass_mappings);
+        spin_lock_init(&bedata->pvcallss_lock);
+        dev_set_drvdata(&dev->dev, bedata);
+        pvcalls_front_dev = dev;
+        xenbus_switch_state(dev, XenbusStateInitialised);
+
         return 0;
+
+ error_xenbus:
+        xenbus_transaction_end(xbt, 1);
+        xenbus_dev_fatal(dev, ret, "writing xenstore");
+ error:
+        pvcalls_front_remove(dev);
+        return ret;
 }
 
 static void pvcalls_front_changed(struct xenbus_device *dev,
                             enum xenbus_state backend_state)
 {
+        switch (backend_state) {
+        case XenbusStateReconfiguring:
+        case XenbusStateReconfigured:
+        case XenbusStateInitialising:
+        case XenbusStateInitialised:
+        case XenbusStateUnknown:
+                break;
+
+        case XenbusStateInitWait:
+                break;
+
+        case XenbusStateConnected:
+                xenbus_switch_state(dev, XenbusStateConnected);
+                break;
+
+        case XenbusStateClosed:
+                if (dev->state == XenbusStateClosed)
+                        break;
+                /* Missed the backend's CLOSING state -- fallthrough */
+        case XenbusStateClosing:
+                xenbus_frontend_closed(dev);
+                break;
+        }
 }
 
 static struct xenbus_driver pvcalls_front_driver = {
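
A note on the xenstore handshake in the probe above: on success, the
transaction leaves three nodes under the frontend directory
(dev->nodename), which the backend reads to map the shared ring and
bind the event channel. The values shown here are placeholders:

    version  = "1"                      (protocol version chosen by the frontend)
    ring-ref = "<grant ref of sring>"   (written with "%d" from the claimed grant reference)
    port     = "<event channel port>"   (the channel allocated by xenbus_alloc_evtchn)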
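For context on the inflight_req waitqueue and the rsp[] array, here is
a minimal sketch of how a later patch in the series could wait for the
response matching a given request id once the event handler is fleshed
out. The helper name and logic below are hypothetical and not part of
this patch:

static int pvcalls_front_wait_for_response(struct pvcalls_bedata *bedata,
                                           int req_id,
                                           struct xen_pvcalls_response *rsp)
{
        /*
         * Hypothetical helper: sleep until the event handler (a stub in
         * this patch) has copied the backend's reply for req_id into
         * bedata->rsp[req_id] and woken up inflight_req.
         */
        wait_event(bedata->inflight_req,
                   READ_ONCE(bedata->rsp[req_id].req_id) == req_id);

        *rsp = bedata->rsp[req_id];
        /* Release the slot so the request id can be reused. */
        bedata->rsp[req_id].req_id = PVCALLS_INVALID_ID;
        return 0;
}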