From patchwork Fri Nov 10 19:34:58 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joao Martins <joao.m.martins@oracle.com>
X-Patchwork-Id: 10053649
From: Joao Martins <joao.m.martins@oracle.com>
To: netdev@vger.kernel.org
Cc: xen-devel@lists.xenproject.org, Joao Martins <joao.m.martins@oracle.com>,
    Paul Durrant, Wei Liu
Date: Fri, 10 Nov 2017 19:34:58 +0000
Message-Id: <20171110193458.14204-1-joao.m.martins@oracle.com>
Subject: [Xen-devel] [PATCH net-next v1] xen-netback: make copy batch size
 configurable

Commit eb1723a29b9a ("xen-netback: refactor guest rx") refactored Rx
handling and, as a result, decreased the maximum number of grant copy
ops per batch from 4352 to 64. Before that commit, the rx_queue was
drained (while there were enough slots in the ring to put packets), then
all pages were copied and the responses written to the ring in one go.
After the refactor we do almost the same, except that the last two steps
happen every COPY_BATCH_SIZE (64) copies.

For big packets, a batch of 64 means copying 3 packets in the best case
(17 copies each) and only 1 packet in the worst case (34 copies, i.e.
when the head and all frags cross the 4k grant boundary), which can be
the case when packets come from a local backend process.

Instead of hard-coding the batch to 64 grant copies, let the user select
its value (keeping the current one as default) by introducing a
`copy_batch_size` module parameter. This allows users to select larger
batches (i.e. for better throughput with big packets), as was possible
prior to the above-mentioned commit.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
 drivers/net/xen-netback/common.h    |  6 ++++--
 drivers/net/xen-netback/interface.c | 25 ++++++++++++++++++++++++-
 drivers/net/xen-netback/netback.c   |  5 +++++
 drivers/net/xen-netback/rx.c        |  5 ++++-
 4 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index a46a1e94505d..a5fe36e098a7 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -129,8 +129,9 @@ struct xenvif_stats {
 #define COPY_BATCH_SIZE 64
 
 struct xenvif_copy_state {
-	struct gnttab_copy op[COPY_BATCH_SIZE];
-	RING_IDX idx[COPY_BATCH_SIZE];
+	struct gnttab_copy *op;
+	RING_IDX *idx;
+	unsigned int size;
 	unsigned int num;
 	struct sk_buff_head *completed;
 };
@@ -381,6 +382,7 @@ extern unsigned int rx_drain_timeout_msecs;
 extern unsigned int rx_stall_timeout_msecs;
 extern unsigned int xenvif_max_queues;
 extern unsigned int xenvif_hash_cache_size;
+extern unsigned int xenvif_copy_batch_size;
 
 #ifdef CONFIG_DEBUG_FS
 extern struct dentry *xen_netback_dbg_root;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index d6dff347f896..a558868a883f 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -516,7 +516,20 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 int xenvif_init_queue(struct xenvif_queue *queue)
 {
+	int size = xenvif_copy_batch_size;
 	int err, i;
+	void *addr;
+
+	addr = vzalloc(size * sizeof(struct gnttab_copy));
+	if (!addr)
+		goto err;
+	queue->rx_copy.op = addr;
+
+	addr = vzalloc(size * sizeof(RING_IDX));
+	if (!addr)
+		goto err;
+	queue->rx_copy.idx = addr;
+	queue->rx_copy.size = size;
 
 	queue->credit_bytes = queue->remaining_credit = ~0UL;
 	queue->credit_usec = 0UL;
@@ -544,7 +557,7 @@ int xenvif_init_queue(struct xenvif_queue *queue)
 				 queue->mmap_pages);
 	if (err) {
 		netdev_err(queue->vif->dev, "Could not reserve mmap_pages\n");
-		return -ENOMEM;
+		goto err;
 	}
 
 	for (i = 0; i < MAX_PENDING_REQS; i++) {
@@ -556,6 +569,13 @@ int xenvif_init_queue(struct xenvif_queue *queue)
 	}
 
 	return 0;
+
+err:
+	if (queue->rx_copy.op)
+		vfree(queue->rx_copy.op);
+	if (queue->rx_copy.idx)
+		vfree(queue->rx_copy.idx);
+	return -ENOMEM;
 }
 
 void xenvif_carrier_on(struct xenvif *vif)
@@ -788,6 +808,9 @@ void xenvif_disconnect_ctrl(struct xenvif *vif)
  */
 void xenvif_deinit_queue(struct xenvif_queue *queue)
 {
+	vfree(queue->rx_copy.op);
+	vfree(queue->rx_copy.idx);
+	queue->rx_copy.size = 0;
 	gnttab_free_pages(MAX_PENDING_REQS, queue->mmap_pages);
 }
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index a27daa23c9dc..3a5e1d7ac2f4 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -96,6 +96,11 @@ unsigned int xenvif_hash_cache_size = XENVIF_HASH_CACHE_SIZE_DEFAULT;
 module_param_named(hash_cache_size, xenvif_hash_cache_size, uint, 0644);
 MODULE_PARM_DESC(hash_cache_size, "Number of flows in the hash cache");
 
+/* This is the maximum batch of grant copies on Rx */
+unsigned int xenvif_copy_batch_size = COPY_BATCH_SIZE;
+module_param_named(copy_batch_size, xenvif_copy_batch_size, uint, 0644);
+MODULE_PARM_DESC(copy_batch_size, "Maximum batch of grant copies on Rx");
+
 static void xenvif_idx_release(struct xenvif_queue *queue,
 			       u16 pending_idx, u8 status);
 
diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index b1cf7c6f407a..793a85f61f9d 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -168,11 +168,14 @@ static void xenvif_rx_copy_add(struct xenvif_queue *queue,
 			       struct xen_netif_rx_request *req,
 			       unsigned int offset, void *data, size_t len)
 {
+	unsigned int batch_size;
 	struct gnttab_copy *op;
 	struct page *page;
 	struct xen_page_foreign *foreign;
 
-	if (queue->rx_copy.num == COPY_BATCH_SIZE)
+	batch_size = min(xenvif_copy_batch_size, queue->rx_copy.size);
+
+	if (queue->rx_copy.num == batch_size)
 		xenvif_rx_copy_flush(queue);
 
 	op = &queue->rx_copy.op[queue->rx_copy.num];