From patchwork Thu May 5 11:19:29 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9023701
From: Paul Durrant
To: ,
Date: Thu, 5 May 2016 12:19:29 +0100
Message-ID: <1462447170-1815-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1462447170-1815-1-git-send-email-paul.durrant@citrix.com>
References: <1462447170-1815-1-git-send-email-paul.durrant@citrix.com>
Cc: Paul Durrant , Wei Liu
Subject: [Xen-devel] [PATCH net-next 3/4] xen-netback: pass hash value to the frontend

My recent patch to include/xen/interface/io/netif.h defines a new extra
info type that can be used to pass hash values between backend and guest
frontend.

This patch adds code to xen-netback to pass hash values calculated for
guest receive-side packets (i.e. netback transmit side) to the frontend.

Signed-off-by: Paul Durrant
Cc: Wei Liu
Acked-by: Wei Liu
---
 drivers/net/xen-netback/interface.c | 13 ++++++-
 drivers/net/xen-netback/netback.c   | 78 +++++++++++++++++++++++++++++++------
 2 files changed, 77 insertions(+), 14 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index e54b475..b2d945f 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -158,8 +158,17 @@ static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
 	struct xenvif *vif = netdev_priv(dev);
 	unsigned int size = vif->hash.size;
 
-	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
-		return fallback(dev, skb) % dev->real_num_tx_queues;
+	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE) {
+		u16 index = fallback(dev, skb) % dev->real_num_tx_queues;
+
+		/* Make sure there is no hash information in the socket
+		 * buffer otherwise it would be incorrectly forwarded
+		 * to the frontend.
+		 */
+		skb_clear_hash(skb);
+
+		return index;
+	}
 
 	xenvif_set_skb_hash(vif, skb);
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 6509d11..7c72510 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -168,6 +168,8 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
 	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
 	if (skb_is_gso(skb))
 		needed++;
+	if (skb->sw_hash)
+		needed++;
 
 	do {
 		prod = queue->rx.sring->req_prod;
@@ -285,6 +287,8 @@ struct gop_frag_copy {
 	struct xenvif_rx_meta *meta;
 	int head;
 	int gso_type;
+	int protocol;
+	int hash_present;
 
 	struct page *page;
 };
@@ -331,8 +335,15 @@ static void xenvif_setup_copy_gop(unsigned long gfn,
 	npo->copy_off += *len;
 	info->meta->size += *len;
 
+	if (!info->head)
+		return;
+
 	/* Leave a gap for the GSO descriptor. */
-	if (info->head && ((1 << info->gso_type) & queue->vif->gso_mask))
+	if ((1 << info->gso_type) & queue->vif->gso_mask)
+		queue->rx.req_cons++;
+
+	/* Leave a gap for the hash extra segment. */
+	if (info->hash_present)
 		queue->rx.req_cons++;
 
 	info->head = 0; /* There must be something in this buffer now */
@@ -367,6 +378,11 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
 		.npo = npo,
 		.head = *head,
 		.gso_type = XEN_NETIF_GSO_TYPE_NONE,
+		/* xenvif_set_skb_hash() will have either set a s/w
+		 * hash or cleared the hash depending on
+		 * whether the frontend wants a hash for this skb.
+		 */
+		.hash_present = skb->sw_hash,
 	};
 	unsigned long bytes;
 
@@ -555,6 +571,7 @@ void xenvif_kick_thread(struct xenvif_queue *queue)
 
 static void xenvif_rx_action(struct xenvif_queue *queue)
 {
+	struct xenvif *vif = queue->vif;
 	s8 status;
 	u16 flags;
 	struct xen_netif_rx_response *resp;
@@ -590,9 +607,10 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
+		struct xen_netif_extra_info *extra = NULL;
 
 		if ((1 << queue->meta[npo.meta_cons].gso_type) &
-		    queue->vif->gso_prefix_mask) {
+		    vif->gso_prefix_mask) {
 			resp = RING_GET_RESPONSE(&queue->rx,
 						 queue->rx.rsp_prod_pvt++);
 
@@ -610,7 +628,7 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 		queue->stats.tx_bytes += skb->len;
 		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(queue->vif,
+		status = xenvif_check_gop(vif,
 					  XENVIF_RX_CB(skb)->meta_slots_used,
 					  &npo);
 
@@ -632,21 +650,57 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 					flags);
 
 		if ((1 << queue->meta[npo.meta_cons].gso_type) &
-		    queue->vif->gso_mask) {
-			struct xen_netif_extra_info *gso =
-				(struct xen_netif_extra_info *)
+		    vif->gso_mask) {
+			extra = (struct xen_netif_extra_info *)
 				RING_GET_RESPONSE(&queue->rx,
 						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
-			gso->u.gso.pad = 0;
-			gso->u.gso.features = 0;
+			extra->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			extra->u.gso.size = queue->meta[npo.meta_cons].gso_size;
+			extra->u.gso.pad = 0;
+			extra->u.gso.features = 0;
+
+			extra->type = XEN_NETIF_EXTRA_TYPE_GSO;
+			extra->flags = 0;
+		}
+
+		if (skb->sw_hash) {
+			/* Since the skb got here via xenvif_select_queue()
+			 * we know that the hash has been re-calculated
+			 * according to a configuration set by the frontend
+			 * and therefore we know that it is legitimate to
+			 * pass it to the frontend.
+			 */
+			if (resp->flags & XEN_NETRXF_extra_info)
+				extra->flags |= XEN_NETIF_EXTRA_FLAG_MORE;
+			else
+				resp->flags |= XEN_NETRXF_extra_info;
+
+			extra = (struct xen_netif_extra_info *)
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
-			gso->type = XEN_NETIF_EXTRA_TYPE_GSO;
-			gso->flags = 0;
+			extra->u.hash.algorithm =
+				XEN_NETIF_CTRL_HASH_ALGORITHM_TOEPLITZ;
+
+			if (skb->l4_hash)
+				extra->u.hash.type =
+					skb->protocol == htons(ETH_P_IP) ?
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP :
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP;
+			else
+				extra->u.hash.type =
+					skb->protocol == htons(ETH_P_IP) ?
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV4 :
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV6;
+
+			*(uint32_t *)extra->u.hash.value =
+				skb_get_hash_raw(skb);
+
+			extra->type = XEN_NETIF_EXTRA_TYPE_HASH;
+			extra->flags = 0;
 		}
 
 		xenvif_add_frag_responses(queue, status,
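
For reference, the hash extra segment emitted above is consumed on the
guest side. The following is only a minimal sketch of how a frontend
receive path could map that segment back onto an skb; it is not part of
this patch, the function name xennet_set_skb_hash and its caller are
hypothetical, and it assumes the xen_netif_extra_info layout introduced
by the earlier include/xen/interface/io/netif.h change:

#include <linux/skbuff.h>
#include <xen/interface/io/netif.h>

static void xennet_set_skb_hash(struct sk_buff *skb,
				const struct xen_netif_extra_info *extra)
{
	u32 hash;

	if (extra->type != XEN_NETIF_EXTRA_TYPE_HASH)
		return;

	/* The backend only ever reports a Toeplitz hash. */
	if (extra->u.hash.algorithm != XEN_NETIF_CTRL_HASH_ALGORITHM_TOEPLITZ)
		return;

	/* The 32-bit hash value lives in the value[] bytes, mirroring the
	 * *(uint32_t *) write in xenvif_rx_action() above.
	 */
	hash = *(const u32 *)extra->u.hash.value;

	switch (extra->u.hash.type) {
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP:
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP:
		/* L4 hash: the TCP ports were included in the calculation. */
		skb_set_hash(skb, hash, PKT_HASH_TYPE_L4);
		break;
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV4:
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV6:
		/* L3 hash: addresses only. */
		skb_set_hash(skb, hash, PKT_HASH_TYPE_L3);
		break;
	default:
		break;
	}
}

Recording the hash with skb_set_hash() lets the guest's own RPS/RFS code
reuse the backend's Toeplitz result instead of recomputing a hash.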