From patchwork Mon Oct 3 07:31:11 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9360155
From: Paul Durrant <paul.durrant@citrix.com>
Date: Mon, 3 Oct 2016 08:31:11 +0100
Message-ID: <1475479872-23717-7-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1475479872-23717-1-git-send-email-paul.durrant@citrix.com>
References: <1475479872-23717-1-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
Cc: Paul Durrant, Wei Liu, David Vrabel
Subject: [Xen-devel] [PATCH net-next 6/7] xen-netback: batch copies for multiple to-guest rx packets
List-Id: Xen developer discussion
From: David Vrabel

Instead of flushing the copy ops when a packet is complete, complete
packets when their copy ops are done.  This improves performance by
reducing the number of grant copy hypercalls.

Latency is still limited by the relatively small size of the copy
batch.

Signed-off-by: David Vrabel
[re-based]
Signed-off-by: Paul Durrant
---
Cc: Wei Liu
---
 drivers/net/xen-netback/common.h |  1 +
 drivers/net/xen-netback/rx.c     | 27 +++++++++++++++++----------
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index adef482..5d40603 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -132,6 +132,7 @@ struct xenvif_copy_state {
 	struct gnttab_copy op[COPY_BATCH_SIZE];
 	RING_IDX idx[COPY_BATCH_SIZE];
 	unsigned int num;
+	struct sk_buff_head *completed;
 };
 
 struct xenvif_queue { /* Per-queue data for xenvif */
diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index ae822b8..8c8c5b5 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -133,6 +133,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
 static void xenvif_rx_copy_flush(struct xenvif_queue *queue)
 {
 	unsigned int i;
+	int notify;
 
 	gnttab_batch_copy(queue->rx_copy.op, queue->rx_copy.num);
 
@@ -154,6 +155,13 @@ static void xenvif_rx_copy_flush(struct xenvif_queue *queue)
 	}
 
 	queue->rx_copy.num = 0;
+
+	/* Push responses for all completed packets. */
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, notify);
+	if (notify)
+		notify_remote_via_irq(queue->rx_irq);
+
+	__skb_queue_purge(queue->rx_copy.completed);
 }
 
 static void xenvif_rx_copy_add(struct xenvif_queue *queue,
@@ -279,18 +287,10 @@ static void xenvif_rx_next_skb(struct xenvif_queue *queue,
 static void xenvif_rx_complete(struct xenvif_queue *queue,
 			       struct xenvif_pkt_state *pkt)
 {
-	int notify;
-
-	/* Complete any outstanding copy ops for this skb. */
-	xenvif_rx_copy_flush(queue);
-
-	/* Push responses and notify. */
+	/* All responses are ready to be pushed. */
 	queue->rx.rsp_prod_pvt = queue->rx.req_cons;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, notify);
-	if (notify)
-		notify_remote_via_irq(queue->rx_irq);
 
-	dev_kfree_skb(pkt->skb);
+	__skb_queue_tail(queue->rx_copy.completed, pkt->skb);
 }
 
 static void xenvif_rx_next_chunk(struct xenvif_queue *queue,
@@ -429,13 +429,20 @@ void xenvif_rx_skb(struct xenvif_queue *queue)
 
 void xenvif_rx_action(struct xenvif_queue *queue)
 {
+	struct sk_buff_head completed_skbs;
 	unsigned int work_done = 0;
 
+	__skb_queue_head_init(&completed_skbs);
+	queue->rx_copy.completed = &completed_skbs;
+
 	while (xenvif_rx_ring_slots_available(queue) &&
 	       work_done < RX_BATCH_SIZE) {
 		xenvif_rx_skb(queue);
 		work_done++;
 	}
+
+	/* Flush any pending copies and complete all skbs. */
+	xenvif_rx_copy_flush(queue);
 }
 
 static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)