From patchwork Wed May 11 15:16:29 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9071941
From: Paul Durrant
Date: Wed, 11 May 2016 16:16:29 +0100
Message-ID: <1462979790-2824-4-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1462979790-2824-1-git-send-email-paul.durrant@citrix.com>
References: <1462979790-2824-1-git-send-email-paul.durrant@citrix.com>
Cc: Paul Durrant
Subject: [Xen-devel] [PATCH net-next v2 3/4] xen-netback: pass hash value to the frontend
List-Id: Xen developer discussion
My recent patch to include/xen/interface/io/netif.h defines a new extra
info type that can be used to pass hash values between backend and guest
frontend.

This patch adds code to xen-netback to pass hash values calculated for
guest receive-side packets (i.e. netback transmit side) to the frontend.

Signed-off-by: Paul Durrant
Acked-by: Wei Liu
---
 drivers/net/xen-netback/interface.c | 13 ++++++-
 drivers/net/xen-netback/netback.c   | 78 +++++++++++++++++++++++++++++++------
 2 files changed, 77 insertions(+), 14 deletions(-)
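For illustration only (not part of this patch): the netback.c hunks below
all follow one slot-accounting rule, namely that an skb needs one RX ring
slot per XEN_PAGE_SIZE of data, plus one extra-info slot if a GSO
descriptor is sent and one more if a hash extra segment is sent. A minimal
sketch of that rule as a standalone helper, assuming the same headers
netback.c already pulls in, could look like this (xenvif_rx_slots_for_skb()
is a hypothetical name used only here):

#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <xen/page.h>

/* Sketch of the per-skb RX slot budget; it mirrors the checks added to
 * xenvif_rx_ring_slots_available() and the "leave a gap" logic added to
 * xenvif_setup_copy_gop() below.
 */
static unsigned int xenvif_rx_slots_for_skb(const struct sk_buff *skb)
{
	/* One response slot per XEN_PAGE_SIZE of packet data. */
	unsigned int needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);

	if (skb_is_gso(skb))
		needed++;	/* GSO extra-info slot */

	if (skb->sw_hash)
		needed++;	/* hash extra-info slot added by this patch */

	return needed;
}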
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 483080f..dcca498 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -158,8 +158,17 @@ static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
 	struct xenvif *vif = netdev_priv(dev);
 	unsigned int size = vif->hash.size;
 
-	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
-		return fallback(dev, skb) % dev->real_num_tx_queues;
+	if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE) {
+		u16 index = fallback(dev, skb) % dev->real_num_tx_queues;
+
+		/* Make sure there is no hash information in the socket
+		 * buffer otherwise it would be incorrectly forwarded
+		 * to the frontend.
+		 */
+		skb_clear_hash(skb);
+
+		return index;
+	}
 
 	xenvif_set_skb_hash(vif, skb);
 
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 6509d11..7c72510 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -168,6 +168,8 @@ static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
 	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
 	if (skb_is_gso(skb))
 		needed++;
+	if (skb->sw_hash)
+		needed++;
 
 	do {
 		prod = queue->rx.sring->req_prod;
@@ -285,6 +287,8 @@ struct gop_frag_copy {
 	struct xenvif_rx_meta *meta;
 	int head;
 	int gso_type;
+	int protocol;
+	int hash_present;
 
 	struct page *page;
 };
@@ -331,8 +335,15 @@ static void xenvif_setup_copy_gop(unsigned long gfn,
 	npo->copy_off += *len;
 	info->meta->size += *len;
 
+	if (!info->head)
+		return;
+
 	/* Leave a gap for the GSO descriptor. */
-	if (info->head && ((1 << info->gso_type) & queue->vif->gso_mask))
+	if ((1 << info->gso_type) & queue->vif->gso_mask)
+		queue->rx.req_cons++;
+
+	/* Leave a gap for the hash extra segment. */
+	if (info->hash_present)
 		queue->rx.req_cons++;
 
 	info->head = 0; /* There must be something in this buffer now */
@@ -367,6 +378,11 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
 		.npo = npo,
 		.head = *head,
 		.gso_type = XEN_NETIF_GSO_TYPE_NONE,
+		/* xenvif_set_skb_hash() will have either set a s/w
+		 * hash or cleared the hash depending on
+		 * whether the frontend wants a hash for this skb.
+		 */
+		.hash_present = skb->sw_hash,
 	};
 	unsigned long bytes;
 
@@ -555,6 +571,7 @@ void xenvif_kick_thread(struct xenvif_queue *queue)
 
 static void xenvif_rx_action(struct xenvif_queue *queue)
 {
+	struct xenvif *vif = queue->vif;
 	s8 status;
 	u16 flags;
 	struct xen_netif_rx_response *resp;
@@ -590,9 +607,10 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
+		struct xen_netif_extra_info *extra = NULL;
 
 		if ((1 << queue->meta[npo.meta_cons].gso_type) &
-		    queue->vif->gso_prefix_mask) {
+		    vif->gso_prefix_mask) {
 			resp = RING_GET_RESPONSE(&queue->rx,
 						 queue->rx.rsp_prod_pvt++);
 
@@ -610,7 +628,7 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 		queue->stats.tx_bytes += skb->len;
 		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(queue->vif,
+		status = xenvif_check_gop(vif,
 					  XENVIF_RX_CB(skb)->meta_slots_used,
 					  &npo);
 
@@ -632,21 +650,57 @@ static void xenvif_rx_action(struct xenvif_queue *queue)
 					flags);
 
 		if ((1 << queue->meta[npo.meta_cons].gso_type) &
-		    queue->vif->gso_mask) {
-			struct xen_netif_extra_info *gso =
-				(struct xen_netif_extra_info *)
+		    vif->gso_mask) {
+			extra = (struct xen_netif_extra_info *)
 				RING_GET_RESPONSE(&queue->rx,
 						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
-			gso->u.gso.pad = 0;
-			gso->u.gso.features = 0;
+			extra->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			extra->u.gso.size = queue->meta[npo.meta_cons].gso_size;
+			extra->u.gso.pad = 0;
+			extra->u.gso.features = 0;
+
+			extra->type = XEN_NETIF_EXTRA_TYPE_GSO;
+			extra->flags = 0;
+		}
+
+		if (skb->sw_hash) {
+			/* Since the skb got here via xenvif_select_queue()
+			 * we know that the hash has been re-calculated
+			 * according to a configuration set by the frontend
+			 * and therefore we know that it is legitimate to
+			 * pass it to the frontend.
+			 */
+			if (resp->flags & XEN_NETRXF_extra_info)
+				extra->flags |= XEN_NETIF_EXTRA_FLAG_MORE;
+			else
+				resp->flags |= XEN_NETRXF_extra_info;
+
+			extra = (struct xen_netif_extra_info *)
+				RING_GET_RESPONSE(&queue->rx,
						  queue->rx.rsp_prod_pvt++);
 
-			gso->type = XEN_NETIF_EXTRA_TYPE_GSO;
-			gso->flags = 0;
+			extra->u.hash.algorithm =
+				XEN_NETIF_CTRL_HASH_ALGORITHM_TOEPLITZ;
+
+			if (skb->l4_hash)
+				extra->u.hash.type =
+					skb->protocol == htons(ETH_P_IP) ?
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP :
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP;
+			else
+				extra->u.hash.type =
+					skb->protocol == htons(ETH_P_IP) ?
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV4 :
+					_XEN_NETIF_CTRL_HASH_TYPE_IPV6;
+
+			*(uint32_t *)extra->u.hash.value =
+				skb_get_hash_raw(skb);
+
+			extra->type = XEN_NETIF_EXTRA_TYPE_HASH;
+			extra->flags = 0;
 		}
 
 		xenvif_add_frag_responses(queue, status,
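For completeness, a sketch (not part of this series) of how a guest
frontend might consume the XEN_NETIF_EXTRA_TYPE_HASH segment emitted above
and feed the value back into the stack. xennet_set_rx_hash() is a
hypothetical name, and the extra segment is assumed to have already been
copied out of the RX ring; the constant and field names come from the
netif.h patch referenced in the commit message:

#include <linux/skbuff.h>
#include <linux/string.h>
#include <xen/interface/io/netif.h>

/* Sketch only: map the hash type reported by the backend onto the
 * kernel's packet hash types and install the value on the skb.
 */
static void xennet_set_rx_hash(struct sk_buff *skb,
			       const struct xen_netif_extra_info *extra)
{
	u32 hash;

	if (extra->u.hash.algorithm != XEN_NETIF_CTRL_HASH_ALGORITHM_TOEPLITZ)
		return;

	/* The backend stores the raw 32-bit hash via a uint32_t cast. */
	memcpy(&hash, extra->u.hash.value, sizeof(hash));

	switch (extra->u.hash.type) {
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP:
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP:
		skb_set_hash(skb, hash, PKT_HASH_TYPE_L4);
		break;
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV4:
	case _XEN_NETIF_CTRL_HASH_TYPE_IPV6:
		skb_set_hash(skb, hash, PKT_HASH_TYPE_L3);
		break;
	default:
		break;
	}
}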