From patchwork Mon Nov 13 16:34:02 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 10056405
Date: Mon, 13 Nov 2017 16:34:02 +0000
From: Joao Martins
To: Paul Durrant
Cc: netdev@vger.kernel.org, Wei Liu, xen-devel@lists.xenproject.org
Message-ID: <20171113163401.eia4pdyimysfg4h6@paddy>
References: <20171110193458.14204-1-joao.m.martins@oracle.com>
 <8c18502b-11ff-be33-a584-a8bdf8960292@oracle.com>
 <40fc53458a524c64af50b48e43bfd251@AMSPEX02CL03.citrite.net>
In-Reply-To: <40fc53458a524c64af50b48e43bfd251@AMSPEX02CL03.citrite.net>
Subject: Re: [Xen-devel] [PATCH net-next v1] xen-netback: make copy batch
 size configurable
List-Id: Xen developer discussion

On Mon, Nov 13, 2017 at 11:58:03AM +0000, Paul Durrant wrote:
> On Mon, Nov 13, 2017 at 11:54:00AM +0000, Joao Martins wrote:
> > On 11/13/2017 10:33 AM, Paul Durrant wrote:
> > > On 11/10/2017 19:35 PM, Joao Martins wrote:

[snip]

> > >> diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
> > >> index b1cf7c6f407a..793a85f61f9d 100644
> > >> --- a/drivers/net/xen-netback/rx.c
> > >> +++ b/drivers/net/xen-netback/rx.c
> > >> @@ -168,11 +168,14 @@ static void xenvif_rx_copy_add(struct xenvif_queue *queue,
> > >>  			       struct xen_netif_rx_request *req,
> > >>  			       unsigned int offset, void *data, size_t len)
> > >>  {
> > >> +	unsigned int batch_size;
> > >>  	struct gnttab_copy *op;
> > >>  	struct page *page;
> > >>  	struct xen_page_foreign *foreign;
> > >>
> > >> -	if (queue->rx_copy.num == COPY_BATCH_SIZE)
> > >> +	batch_size = min(xenvif_copy_batch_size, queue->rx_copy.size);
> > >
> > > Surely queue->rx_copy.size and xenvif_copy_batch_size are always
> > > identical? Why do you need this statement (and hence stack variable)?
> > >
> > This statement was to allow it to be changed dynamically; it would
> > affect all newly created guests, or running guests if the value
> > happened to be smaller than the one initially allocated. But I suppose
> > I should make the behaviour more consistent with the other params we
> > have right now and just look at the initially allocated one,
> > `queue->rx_copy.batch_size`?
>
> Yes, that would certainly be consistent but I can see value in
> allowing it to be dynamically tuned, so perhaps adding some re-allocation
> code to allow the batch to be grown as well as shrunk might be nice.

With shrinking we potentially risk losing data, so we need to gate the
reallocation on `rx_copy.num` not exceeding the newly requested batch size.
In the worst case the guestrx_thread simply keeps using the initially
allocated value. Anyhow, something like the below scissored diff (on top of
your comments):

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index a165a4123396..8e4eaf3a507d 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -359,6 +359,7 @@ irqreturn_t xenvif_ctrl_irq_fn(int irq, void *data);
 
 void xenvif_rx_action(struct xenvif_queue *queue);
 void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb);
+int xenvif_rx_copy_realloc(struct xenvif_queue *queue, unsigned int size);
 
 void xenvif_carrier_on(struct xenvif *vif);
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 1892bf9327e4..14613b5fcccb 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -516,20 +516,13 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 int xenvif_init_queue(struct xenvif_queue *queue)
 {
-	unsigned int size = xenvif_copy_batch_size;
 	int err, i;
-	void *addr;
-
-	addr = vzalloc(size * sizeof(struct gnttab_copy));
-	if (!addr)
-		goto err;
-	queue->rx_copy.op = addr;
 
-	addr = vzalloc(size * sizeof(RING_IDX));
-	if (!addr)
+	err = xenvif_rx_copy_realloc(queue, xenvif_copy_batch_size);
+	if (err) {
+		netdev_err(queue->vif->dev, "Could not alloc rx_copy\n");
 		goto err;
-	queue->rx_copy.idx = addr;
-	queue->rx_copy.batch_size = size;
+	}
 
 	queue->credit_bytes = queue->remaining_credit = ~0UL;
 	queue->credit_usec = 0UL;
diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index be3946cdaaf6..f54bfe72188c 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -130,6 +130,51 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
 	}
 }
 
+int xenvif_rx_copy_realloc(struct xenvif_queue *queue, unsigned int size)
+{
+	void *op = NULL, *idx = NULL;
+
+	/* No reallocation if new size doesn't fit ongoing requests */
+	if (!size || queue->rx_copy.num > size)
+		return -EINVAL;
+
+	op = vzalloc(size * sizeof(struct gnttab_copy));
+	if (!op)
+		goto err;
+
+	idx = vzalloc(size * sizeof(RING_IDX));
+	if (!idx)
+		goto err;
+
+	/* Ongoing requests need copying */
+	if (queue->rx_copy.num) {
+		unsigned int tmp;
+
+		tmp = queue->rx_copy.num * sizeof(struct gnttab_copy);
+		memcpy(op, queue->rx_copy.op, tmp);
+
+		tmp = queue->rx_copy.num * sizeof(RING_IDX);
+		memcpy(idx, queue->rx_copy.idx, tmp);
+	}
+
+	if (queue->rx_copy.op || queue->rx_copy.idx) {
+		vfree(queue->rx_copy.op);
+		vfree(queue->rx_copy.idx);
+	}
+
+	queue->rx_copy.op = op;
+	queue->rx_copy.idx = idx;
+	queue->rx_copy.batch_size = size;
+	netdev_dbg(queue->vif->dev, "Reallocated rx_copy for batch size %u\n",
+		   size);
+	return 0;
+
+err:
+	vfree(op);
+	vfree(idx);
+	return -ENOMEM;
+}
+
 static void xenvif_rx_copy_flush(struct xenvif_queue *queue)
 {
 	unsigned int i;
@@ -168,14 +213,14 @@ static void xenvif_rx_copy_add(struct xenvif_queue *queue,
 			       struct xen_netif_rx_request *req,
 			       unsigned int offset, void *data, size_t len)
 {
-	unsigned int batch_size;
 	struct gnttab_copy *op;
 	struct page *page;
 	struct xen_page_foreign *foreign;
 
-	batch_size = min(xenvif_copy_batch_size, queue->rx_copy.batch_size);
+	if (unlikely(xenvif_copy_batch_size != queue->rx_copy.batch_size))
+		xenvif_rx_copy_realloc(queue, xenvif_copy_batch_size);
 
-	if (queue->rx_copy.num == batch_size)
+	if (queue->rx_copy.num == queue->rx_copy.batch_size)
 		xenvif_rx_copy_flush(queue);
 
 	op = &queue->rx_copy.op[queue->rx_copy.num];