From patchwork Tue Oct 4 09:29:17 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9361525
V4Bd4nnG/axQ8yUkzh//CcziM0p4CHx9VQ7WL0QUM3rNzC2isT6qbPYIHoFJU7OfMICsUtC4u CLF8wQc7glbp+eyjyBkW8WkrJZSMoWMDKtYtQoTi0qSy3SNTTWSyrKTM8oyU3MzNE1NDDTy00 tLk5MT81JTCrWS87P3cQIDCoGINjB+GVZwCFGSQ4mJVHeytefwoX4kvJTKjMSizPii0pzUosP McpwcChJ8G6v/BwuJFiUmp5akZaZAwxvmLQEB4+SCO+jCqA0b3FBYm5xZjpE6hSjLseHydfXM gmx5OXnpUqJ8+4BmSEAUpRRmgc3AhZrlxhlpYR5GYGOEuIpSC3KzSxBlX/FKM7BqCTM+xNkCk 9mXgncpldARzABHRG45QPIESWJCCmpBsYulhtd6zVnb4j9X9JaxjZ9pv1T7xe//unFJinKVi+ w1ToryOthndH6reBvk0OG8VoflXPGq7p26nWIlMXLP40QmFx/tvVE8+0LYsw8119KRFxpNNTd dn1XI8cUQz/eqsMfvr0RXJ7R0ry9JLh8zfutRb5nevOr637+mvdAwm3e1+Vt4hv3JCixFGckG moxFxUnAgDv++GAsAIAAA== X-Env-Sender: prvs=0787cd24c=Paul.Durrant@citrix.com X-Msg-Ref: server-8.tower-27.messagelabs.com!1475574542!52842213!2 X-Originating-IP: [66.165.176.63] X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n, received_headers: No Received headers X-StarScan-Received: X-StarScan-Version: 8.84; banners=-,-,- X-VirusChecked: Checked Received: (qmail 26803 invoked from network); 4 Oct 2016 09:49:04 -0000 Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63) by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP; 4 Oct 2016 09:49:04 -0000 X-IronPort-AV: E=Sophos;i="5.31,442,1473120000"; d="scan'208";a="390530430" From: Paul Durrant To: , Date: Tue, 4 Oct 2016 10:29:17 +0100 Message-ID: <1475573358-32414-7-git-send-email-paul.durrant@citrix.com> X-Mailer: git-send-email 2.1.4 In-Reply-To: <1475573358-32414-1-git-send-email-paul.durrant@citrix.com> References: <1475573358-32414-1-git-send-email-paul.durrant@citrix.com> MIME-Version: 1.0 X-DLP: MIA2 Cc: Paul Durrant , Wei Liu , David Vrabel Subject: [Xen-devel] [PATCH v2 net-next 6/7] xen-netback: batch copies for multiple to-guest rx packets X-BeenThere: xen-devel@lists.xen.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: 
From: David Vrabel

Instead of flushing the copy ops when a packet is complete, complete
packets when their copy ops are done. This improves performance by
reducing the number of grant copy hypercalls.

Latency is still limited by the relatively small size of the copy
batch.

Signed-off-by: David Vrabel
[re-based]
Signed-off-by: Paul Durrant
---
Cc: Wei Liu
---
 drivers/net/xen-netback/common.h |  1 +
 drivers/net/xen-netback/rx.c     | 27 +++++++++++++++++----------
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 7d12a38..cf68149 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -132,6 +132,7 @@ struct xenvif_copy_state {
 	struct gnttab_copy op[COPY_BATCH_SIZE];
 	RING_IDX idx[COPY_BATCH_SIZE];
 	unsigned int num;
+	struct sk_buff_head *completed;
 };
 
 struct xenvif_queue { /* Per-queue data for xenvif */
diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index ae822b8..8c8c5b5 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -133,6 +133,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
 static void xenvif_rx_copy_flush(struct xenvif_queue *queue)
 {
 	unsigned int i;
+	int notify;
 
 	gnttab_batch_copy(queue->rx_copy.op, queue->rx_copy.num);
 
@@ -154,6 +155,13 @@ static void xenvif_rx_copy_flush(struct xenvif_queue *queue)
 	}
 
 	queue->rx_copy.num = 0;
+
+	/* Push responses for all completed packets. */
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, notify);
+	if (notify)
+		notify_remote_via_irq(queue->rx_irq);
+
+	__skb_queue_purge(queue->rx_copy.completed);
 }
 
 static void xenvif_rx_copy_add(struct xenvif_queue *queue,
@@ -279,18 +287,10 @@ static void xenvif_rx_next_skb(struct xenvif_queue *queue,
 static void xenvif_rx_complete(struct xenvif_queue *queue,
 			       struct xenvif_pkt_state *pkt)
 {
-	int notify;
-
-	/* Complete any outstanding copy ops for this skb. */
-	xenvif_rx_copy_flush(queue);
-
-	/* Push responses and notify. */
+	/* All responses are ready to be pushed. */
 	queue->rx.rsp_prod_pvt = queue->rx.req_cons;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, notify);
-	if (notify)
-		notify_remote_via_irq(queue->rx_irq);
 
-	dev_kfree_skb(pkt->skb);
+	__skb_queue_tail(queue->rx_copy.completed, pkt->skb);
 }
 
 static void xenvif_rx_next_chunk(struct xenvif_queue *queue,
@@ -429,13 +429,20 @@ void xenvif_rx_skb(struct xenvif_queue *queue)
 
 void xenvif_rx_action(struct xenvif_queue *queue)
 {
+	struct sk_buff_head completed_skbs;
 	unsigned int work_done = 0;
 
+	__skb_queue_head_init(&completed_skbs);
+	queue->rx_copy.completed = &completed_skbs;
+
 	while (xenvif_rx_ring_slots_available(queue) &&
 	       work_done < RX_BATCH_SIZE) {
 		xenvif_rx_skb(queue);
 		work_done++;
 	}
+
+	/* Flush any pending copies and complete all skbs. */
+	xenvif_rx_copy_flush(queue);
 }
 
 static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)