From patchwork Wed Jul 5 21:50:51 2017
X-Patchwork-Submitter: Stefano Stabellini <sstabellini@kernel.org>
X-Patchwork-Id: 9827207
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xen.org
Date: Wed, 5 Jul 2017 14:50:51 -0700
Message-Id: <1499291458-30231-11-git-send-email-sstabellini@kernel.org>
In-Reply-To: <1499291458-30231-1-git-send-email-sstabellini@kernel.org>
References: <1499291458-30231-1-git-send-email-sstabellini@kernel.org>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
    boris.ostrovsky@oracle.com, linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH v7 11/18] xen/pvcalls: implement accept command

Implement the accept command by calling inet_accept. To avoid blocking
in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which gets
scheduled on sk_data_ready (for a passive socket, this means that there
are connections to accept).

Use the reqcopy field to store the request. Accept the new socket from
the delayed work function, create a new sock_mapping for it, map the
indexes page and data ring, and reply to the other end. Allocate an
ioworker for the socket.

Only support one outstanding blocking accept request per socket at any
given time.

Add a field to sock_mapping to remember the passive socket from which
an active socket was created.

Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 113 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index c9f773f..f0fdcd4 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -62,6 +62,7 @@ struct pvcalls_ioworker {
 struct sock_mapping {
 	struct list_head list;
 	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *sockpass;
 	struct socket *sock;
 	uint64_t id;
 	grant_ref_t ref;
@@ -282,10 +283,83 @@ static int pvcalls_back_release(struct xenbus_device *dev,
 
 static void __pvcalls_back_accept(struct work_struct *work)
 {
+	struct sockpass_mapping *mappass = container_of(
+		work, struct sockpass_mapping, register_work);
+	struct sock_mapping *map;
+	struct pvcalls_ioworker *iow;
+	struct pvcalls_fedata *fedata;
+	struct socket *sock;
+	struct xen_pvcalls_response *rsp;
+	struct xen_pvcalls_request *req;
+	int notify;
+	int ret = -EINVAL;
+	unsigned long flags;
+
+	fedata = mappass->fedata;
+	/*
+	 * __pvcalls_back_accept can race against pvcalls_back_accept.
+	 * We only need to check the value of "cmd" on read. It could be
+	 * done atomically, but to simplify the code on the write side, we
+	 * use a spinlock.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	req = &mappass->reqcopy;
+	if (req->cmd != PVCALLS_ACCEPT) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	sock = sock_alloc();
+	if (sock == NULL)
+		goto out_error;
+	sock->type = mappass->sock->type;
+	sock->ops = mappass->sock->ops;
+
+	ret = inet_accept(mappass->sock, sock, O_NONBLOCK, true);
+	if (ret == -EAGAIN) {
+		sock_release(sock);
+		goto out_error;
+	}
+
+	map = pvcalls_new_active_socket(fedata,
+					req->u.accept.id_new,
+					req->u.accept.ref,
+					req->u.accept.evtchn,
+					sock);
+	if (!map) {
+		ret = -EFAULT;
+		sock_release(sock);
+		goto out_error;
+	}
+
+	map->sockpass = mappass;
+	iow = &map->ioworker;
+	atomic_inc(&map->read);
+	atomic_inc(&map->io);
+	queue_work(iow->wq, &iow->register_work);
+
+out_error:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&fedata->ring, notify);
+	if (notify)
+		notify_remote_via_irq(fedata->irq);
+
+	mappass->reqcopy.cmd = 0;
 }
 
 static void pvcalls_pass_sk_data_ready(struct sock *sock)
 {
+	struct sockpass_mapping *mappass = sock->sk_user_data;
+
+	if (mappass == NULL)
+		return;
+
+	queue_work(mappass->wq, &mappass->register_work);
 }
 
 static int pvcalls_back_bind(struct xenbus_device *dev,
@@ -383,6 +457,45 @@ static int pvcalls_back_listen(struct xenbus_device *dev,
 
 static int pvcalls_back_accept(struct xenbus_device *dev,
 			       struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *mappass;
+	int ret = -EINVAL;
+	struct xen_pvcalls_response *rsp;
+	unsigned long flags;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	mappass = radix_tree_lookup(&fedata->socketpass_mappings,
+		req->u.accept.id);
+	up(&fedata->socket_lock);
+	if (mappass == NULL)
+		goto out_error;
+
+	/*
+	 * Limitation of the current implementation: only support one
+	 * concurrent accept or poll call on one socket.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd != 0) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		ret = -EINTR;
+		goto out_error;
+	}
+
+	mappass->reqcopy = *req;
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+	queue_work(mappass->wq, &mappass->register_work);
+
+	/* Tell the caller we don't need to send back a notification yet */
+	return -1;
+
+out_error:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
 	return 0;
 }
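
For readers unfamiliar with the deferred-accept pattern above, here is a
minimal user-space sketch (not part of the patch; all names such as
request_accept, worker, and the pending flag are illustrative). A
non-blocking listening socket stands in for the passive socket, a worker
thread plays the role of the workqueue item woken by sk_data_ready, and
a single "pending" slot guarded by a mutex mirrors reqcopy/copy_lock and
the one-outstanding-accept-per-socket limitation.

/*
 * Minimal user-space analogue of the deferred accept pattern used by
 * pvcalls_back_accept()/__pvcalls_back_accept(). NOT part of the patch;
 * all names are illustrative. Build with: gcc -pthread sketch.c
 */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t copy_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_cv = PTHREAD_COND_INITIALIZER;
static int pending;		/* mirrors mappass->reqcopy.cmd != 0 */
static int listen_fd = -1;

/* Plays the role of __pvcalls_back_accept(): runs off the main path. */
static void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_mutex_lock(&copy_lock);
		while (!pending)
			pthread_cond_wait(&work_cv, &copy_lock);
		pthread_mutex_unlock(&copy_lock);

		/* The listening fd is non-blocking, so accept() never
		 * sleeps, like inet_accept(O_NONBLOCK) in the patch. */
		int fd = accept(listen_fd, NULL, NULL);
		if (fd >= 0) {
			printf("accepted fd %d\n", fd);
			close(fd);	/* a real backend would hand it off */
		} else if (errno != EAGAIN && errno != EWOULDBLOCK) {
			perror("accept");
		}

		pthread_mutex_lock(&copy_lock);
		pending = 0;		/* mirrors mappass->reqcopy.cmd = 0 */
		pthread_mutex_unlock(&copy_lock);
	}
	return NULL;
}

/* Plays the role of pvcalls_back_accept(): claim the single slot. */
static int request_accept(void)
{
	int ret = 0;

	pthread_mutex_lock(&copy_lock);
	if (pending) {
		ret = -EINTR;		/* one outstanding accept per socket */
	} else {
		pending = 1;
		pthread_cond_signal(&work_cv);
	}
	pthread_mutex_unlock(&copy_lock);
	return ret;
}

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(9999),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	pthread_t tid;

	/* SOCK_NONBLOCK keeps accept() from sleeping, as O_NONBLOCK
	 * does for inet_accept() in the patch. */
	listen_fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
	if (listen_fd < 0 ||
	    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(listen_fd, 8) < 0) {
		perror("listen socket");
		return 1;
	}
	pthread_create(&tid, NULL, worker, NULL);

	/* Stand-in for the ring: pretend an accept request arrives once
	 * per second; the real code is woken by sk_data_ready instead. */
	for (;;) {
		if (request_accept() == -EINTR)
			fprintf(stderr, "accept already pending\n");
		sleep(1);
	}
}

As in the patch, a second request_accept() while one is pending fails
with -EINTR rather than queueing, and the worker clears the slot only
after it has finished handling the previous request.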