From patchwork Wed Mar 22 19:03:46 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stefano Stabellini
X-Patchwork-Id: 9639891
From: Stefano Stabellini
To: xen-devel@lists.xenproject.org
Date: Wed, 22 Mar 2017 12:03:46 -0700
Message-Id: <1490209429-5542-4-git-send-email-sstabellini@kernel.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1490209429-5542-1-git-send-email-sstabellini@kernel.org>
References: <1490209429-5542-1-git-send-email-sstabellini@kernel.org>
Cc: jgross@suse.com, Latchesar Ionkov, sstabellini@kernel.org,
    Eric Van Hensbergen, linux-kernel@vger.kernel.org, groug@kaod.org,
    Stefano Stabellini, v9fs-developer@lists.sourceforge.net,
    Ron Minnich, boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v6 4/7] xen/9pfs: connect to the backend

Implement functions to handle the xenbus handshake. Upon connection,
allocate the rings according to the protocol specification.

Initialize a work_struct and a wait_queue. The work_struct will be used
to schedule work upon receiving an event channel notification from the
backend. The wait_queue will be used to wait when the ring is full and
we need to send a new request.

Signed-off-by: Stefano Stabellini
CC: groug@kaod.org
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: Eric Van Hensbergen
CC: Ron Minnich
CC: Latchesar Ionkov
CC: v9fs-developer@lists.sourceforge.net
Reviewed-by: Juergen Gross
---
 net/9p/trans_xen.c | 273 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 273 insertions(+)

diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 3d07260..481dae4 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -37,10 +37,46 @@
 #include
 #include
+#include
+#include
 #include
 #include
 #include
 
+#define XEN_9PFS_NUM_RINGS 2
+#define XEN_9PFS_RING_ORDER 6
+#define XEN_9PFS_RING_SIZE XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)
+
+/* One per ring, more than one per 9pfs share */
+struct xen_9pfs_dataring {
+        struct xen_9pfs_front_priv *priv;
+
+        struct xen_9pfs_data_intf *intf;
+        grant_ref_t ref;
+        int evtchn;
+        int irq;
+        /* protect a ring from concurrent accesses */
+        spinlock_t lock;
+
+        struct xen_9pfs_data data;
+        wait_queue_head_t wq;
+        struct work_struct work;
+};
+
+/* One per 9pfs share */
+struct xen_9pfs_front_priv {
+        struct list_head list;
+        struct xenbus_device *dev;
+        char *tag;
+        struct p9_client *client;
+
+        int num_rings;
+        struct xen_9pfs_dataring *rings;
+};
+
+static LIST_HEAD(xen_9pfs_devs);
+static DEFINE_RWLOCK(xen_9pfs_lock);
+
 static int p9_xen_cancel(struct p9_client *client, struct p9_req_t *req)
 {
         return 0;
@@ -60,6 +96,25 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
         return 0;
 }
 
+static void p9_xen_response(struct work_struct *work)
+{
+}
+
+static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
+{
+        struct xen_9pfs_dataring *ring = r;
+
+        if (!ring || !ring->priv->client) {
+                /* ignore spurious interrupt */
+                return IRQ_HANDLED;
+        }
+
+        wake_up_interruptible(&ring->wq);
+        schedule_work(&ring->work);
+
+        return IRQ_HANDLED;
+}
+
 static struct p9_trans_module p9_xen_trans = {
         .name = "xen",
         .maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
@@ -76,25 +131,243 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
         { "" }
 };
 
+static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+{
+        int i, j;
+
+        write_lock(&xen_9pfs_lock);
+        list_del(&priv->list);
+        write_unlock(&xen_9pfs_lock);
+
+        for (i = 0; i < priv->num_rings; i++) {
+                if (!priv->rings[i].intf)
+                        break;
+                if (priv->rings[i].irq > 0)
+                        unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
+                if (priv->rings[i].data.in) {
+                        for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++) {
+                                grant_ref_t ref;
+
+                                ref = priv->rings[i].intf->ref[j];
+                                gnttab_end_foreign_access(ref, 0, 0);
+                        }
+                        free_pages((unsigned long)priv->rings[i].data.in,
+                                   XEN_9PFS_RING_ORDER -
+                                   (PAGE_SHIFT - XEN_PAGE_SHIFT));
+                }
+                gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
+                free_page((unsigned long)priv->rings[i].intf);
+        }
+        kfree(priv->rings);
+        kfree(priv->tag);
+        kfree(priv);
+}
+
 static int xen_9pfs_front_remove(struct xenbus_device *dev)
 {
+        struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
+
+        dev_set_drvdata(&dev->dev, NULL);
+        xen_9pfs_front_free(priv);
         return 0;
 }
 
+static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
+                                         struct xen_9pfs_dataring *ring)
+{
+        int i = 0;
+        int ret = -ENOMEM;
+        void *bytes = NULL;
+
+        init_waitqueue_head(&ring->wq);
+        spin_lock_init(&ring->lock);
+        INIT_WORK(&ring->work, p9_xen_response);
+
+        ring->intf = (struct xen_9pfs_data_intf *)get_zeroed_page(GFP_KERNEL);
+        if (!ring->intf)
+                return ret;
+        ret = gnttab_grant_foreign_access(dev->otherend_id,
+                                          virt_to_gfn(ring->intf), 0);
+        if (ret < 0)
+                goto out;
+        ring->ref = ret;
+        bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+                        XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+        if (!bytes) {
+                ret = -ENOMEM;
+                goto out;
+        }
+        for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
+                ret = gnttab_grant_foreign_access(
+                                dev->otherend_id, virt_to_gfn(bytes) + i, 0);
+                if (ret < 0)
+                        goto out;
+                ring->intf->ref[i] = ret;
+        }
+        ring->intf->ring_order = XEN_9PFS_RING_ORDER;
+        ring->data.in = bytes;
+        ring->data.out = bytes + XEN_9PFS_RING_SIZE;
+
+        ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
+        if (ret)
+                goto out;
+        ring->irq = bind_evtchn_to_irqhandler(ring->evtchn,
+                                              xen_9pfs_front_event_handler,
+                                              0, "xen_9pfs-frontend", ring);
+        if (ring->irq >= 0)
+                return 0;
+
+        xenbus_free_evtchn(dev, ring->evtchn);
+        ret = ring->irq;
+out:
+        if (bytes) {
+                for (i--; i >= 0; i--)
+                        gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
+                free_pages((unsigned long)bytes,
+                           XEN_9PFS_RING_ORDER -
+                           (PAGE_SHIFT - XEN_PAGE_SHIFT));
+        }
+        gnttab_end_foreign_access(ring->ref, 0, 0);
+        free_page((unsigned long)ring->intf);
+        return ret;
+}
+
 static int xen_9pfs_front_probe(struct xenbus_device *dev,
                                 const struct xenbus_device_id *id)
 {
+        int ret, i;
+        struct xenbus_transaction xbt;
+        struct xen_9pfs_front_priv *priv = NULL;
+        char *versions;
+        unsigned int max_rings, max_ring_order, len;
+
+        versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
+        if (!len)
+                return -EINVAL;
+        if (strcmp(versions, "1")) {
+                kfree(versions);
+                return -EINVAL;
+        }
+        kfree(versions);
+        max_rings = xenbus_read_unsigned(dev->otherend, "max-rings", 0);
+        if (max_rings < XEN_9PFS_NUM_RINGS)
+                return -EINVAL;
+        max_ring_order = xenbus_read_unsigned(dev->otherend,
+                                              "max-ring-page-order", 0);
+        if (max_ring_order < XEN_9PFS_RING_ORDER)
+                return -EINVAL;
+
+        priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+        if (!priv)
+                return -ENOMEM;
+
+        priv->dev = dev;
+        priv->num_rings = XEN_9PFS_NUM_RINGS;
+        priv->rings = kcalloc(priv->num_rings, sizeof(*priv->rings),
+                              GFP_KERNEL);
+        if (!priv->rings) {
+                kfree(priv);
+                return -ENOMEM;
+        }
+
+        for (i = 0; i < priv->num_rings; i++) {
+                priv->rings[i].priv = priv;
+                ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
+                if (ret < 0)
+                        goto error;
+        }
+
+ again:
+        ret = xenbus_transaction_start(&xbt);
+        if (ret) {
+                xenbus_dev_fatal(dev, ret, "starting transaction");
+                goto error;
+        }
+        ret = xenbus_printf(xbt, dev->nodename, "version", "%u", 1);
+        if (ret)
+                goto error_xenbus;
+        ret = xenbus_printf(xbt, dev->nodename, "num-rings", "%u",
+                            priv->num_rings);
+        if (ret)
+                goto error_xenbus;
+        for (i = 0; i < priv->num_rings; i++) {
+                char str[16];
+
+                BUILD_BUG_ON(XEN_9PFS_NUM_RINGS > 9);
+                sprintf(str, "ring-ref%u", i);
+                ret = xenbus_printf(xbt, dev->nodename, str, "%d",
+                                    priv->rings[i].ref);
+                if (ret)
+                        goto error_xenbus;
+
+                sprintf(str, "event-channel-%u", i);
+                ret = xenbus_printf(xbt, dev->nodename, str, "%u",
+                                    priv->rings[i].evtchn);
+                if (ret)
+                        goto error_xenbus;
+        }
+        priv->tag = xenbus_read(xbt, dev->nodename, "tag", NULL);
+        if (!priv->tag) {
+                ret = -EINVAL;
+                goto error_xenbus;
+        }
+        ret = xenbus_transaction_end(xbt, 0);
+        if (ret) {
+                if (ret == -EAGAIN)
+                        goto again;
+                xenbus_dev_fatal(dev, ret, "completing transaction");
+                goto error;
+        }
+
+        write_lock(&xen_9pfs_lock);
+        list_add_tail(&priv->list, &xen_9pfs_devs);
+        write_unlock(&xen_9pfs_lock);
+        dev_set_drvdata(&dev->dev, priv);
+        xenbus_switch_state(dev, XenbusStateInitialised);
         return 0;
+
+ error_xenbus:
+        xenbus_transaction_end(xbt, 1);
+        xenbus_dev_fatal(dev, ret, "writing xenstore");
+ error:
+        dev_set_drvdata(&dev->dev, NULL);
+        xen_9pfs_front_free(priv);
+        return ret;
 }
 
 static int xen_9pfs_front_resume(struct xenbus_device *dev)
 {
+        dev_warn(&dev->dev, "suspend/resume unsupported\n");
         return 0;
 }
 
 static void xen_9pfs_front_changed(struct xenbus_device *dev,
                                    enum xenbus_state backend_state)
 {
+        switch (backend_state) {
+        case XenbusStateReconfiguring:
+        case XenbusStateReconfigured:
+        case XenbusStateInitialising:
+        case XenbusStateInitialised:
+        case XenbusStateUnknown:
+                break;
+
+        case XenbusStateInitWait:
+                break;
+
+        case XenbusStateConnected:
+                xenbus_switch_state(dev, XenbusStateConnected);
+                break;
+
+        case XenbusStateClosed:
+                if (dev->state == XenbusStateClosed)
+                        break;
+                /* Missed the backend's CLOSING state -- fallthrough */
+        case XenbusStateClosing:
+                xenbus_frontend_closed(dev);
+                break;
+        }
 }
 
 static struct xenbus_driver xen_9pfs_front_driver = {
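
For readers who want to check the ring geometry the patch negotiates, the
standalone sketch below works out the sizes implied by XEN_9PFS_RING_ORDER
and XEN_9PFS_RING_SIZE. It is not part of the patch; it assumes that
XEN_PAGE_SHIFT is 12 and that XEN_FLEX_RING_SIZE(order) expands to
1UL << (order + XEN_PAGE_SHIFT - 1), which should be verified against the
Xen flex-ring definitions in your tree (include/xen/interface/io/ring.h).

/*
 * Sketch only: compute the per-ring sizes used by the xen/9pfs frontend,
 * under the assumptions stated above.
 */
#include <stdio.h>

#define XEN_PAGE_SHIFT            12
#define XEN_FLEX_RING_SIZE(order) (1UL << ((order) + XEN_PAGE_SHIFT - 1))

#define XEN_9PFS_NUM_RINGS  2
#define XEN_9PFS_RING_ORDER 6
#define XEN_9PFS_RING_SIZE  XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)

int main(void)
{
        unsigned long grants = 1UL << XEN_9PFS_RING_ORDER;   /* one grant per Xen page */
        unsigned long area = grants << XEN_PAGE_SHIFT;       /* shared data area per ring */

        printf("rings per device:     %d\n", XEN_9PFS_NUM_RINGS);
        printf("grant refs per ring:  %lu\n", grants);                  /* 64 */
        printf("data area per ring:   %lu bytes\n", area);              /* 262144 */
        printf("in  ring: bytes [0, %lu)\n", XEN_9PFS_RING_SIZE);       /* first half */
        printf("out ring: bytes [%lu, %lu)\n", XEN_9PFS_RING_SIZE, area);
        printf("p9_xen_trans.maxsize: %lu bytes\n",
               1UL << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT));          /* 262144 */
        return 0;
}

This matches what xen_9pfs_front_alloc_dataring() sets up: each of the 64
data pages gets its own grant reference in ring->intf->ref[], data.in and
data.out split the area in half, and the interface page plus event channel
are advertised to the backend through the ring-ref<i> and event-channel-<i>
xenstore keys written in xen_9pfs_front_probe().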