From patchwork Thu Jul  9 20:42:30 2015
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 6759561
From: Julien Grall
Subject: [PATCH v2 18/20] net/xen-netback: Make it running on 64KB page granularity
Date: Thu, 9 Jul 2015 21:42:30 +0100
Message-ID: <1436474552-31789-19-git-send-email-julien.grall@citrix.com>
In-Reply-To: <1436474552-31789-1-git-send-email-julien.grall@citrix.com>
References: <1436474552-31789-1-git-send-email-julien.grall@citrix.com>
Cc: Wei Liu, ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Julien Grall,
    linux-arm-kernel@lists.infradead.org

The PV network protocol uses 4KB page granularity. The goal of this
patch is to allow a Linux kernel using 64KB page granularity to work as
a network backend on an unmodified Xen.

It's only necessary to adapt the ring size and break the skb data into
small chunks of 4KB. The rest of the code relies on the grant table
code.

Signed-off-by: Julien Grall
Cc: Ian Campbell
Cc: Wei Liu
Cc: netdev@vger.kernel.org

---
Improvements such as support for 64KB grants are not taken into
consideration in this patch because the requirement is to run a Linux
kernel using 64KB pages on an unmodified Xen.
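As a rough illustration of the splitting, here is a stand-alone
user-space sketch (not part of this patch): it only mimics what
gnttab_foreach_grant does, walking a buffer that may live in a 64KB
Linux page in 4KB grant-sized steps and invoking a callback per chunk.
The helper and names below are made up for the example.

#include <stdio.h>
#include <stddef.h>

#define XEN_PAGE_SHIFT	12
#define XEN_PAGE_SIZE	(1UL << XEN_PAGE_SHIFT)	/* 4KB grant granularity */

typedef void (*chunk_fn)(size_t offset, size_t len, void *data);

/* Call fn() once per 4KB-aligned chunk covering [offset, offset + size) */
static void foreach_grant_chunk(size_t offset, size_t size,
				chunk_fn fn, void *data)
{
	while (size > 0) {
		/* Bytes left before the next 4KB grant boundary */
		size_t len = XEN_PAGE_SIZE - (offset & (XEN_PAGE_SIZE - 1));

		if (len > size)
			len = size;

		fn(offset, len, data);
		offset += len;
		size -= len;
	}
}

static void print_chunk(size_t offset, size_t len, void *data)
{
	(void)data;
	printf("chunk at offset %zu, len %zu\n", offset, len);
}

int main(void)
{
	/* 20000 bytes starting at offset 100 of a 64KB page end up as
	 * several 4KB pieces, each small enough for one grant copy op.
	 */
	foreach_grant_chunk(100, 20000, print_chunk, NULL);
	return 0;
}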
Changes in v2:
    - Correctly set MAX_GRANT_COPY_OPS and XEN_NETBK_RX_SLOTS_MAX
      (the worst case is worked through in the sketch after the diff)
    - Don't use XEN_PAGE_SIZE in handle_frag_list as we coalesce
      fragments into a new skb
    - Use gnttab_foreach_grant to split a Linux page into grants
---
 drivers/net/xen-netback/common.h  |  15 +++--
 drivers/net/xen-netback/netback.c | 138 +++++++++++++++++++++++---------------
 2 files changed, 93 insertions(+), 60 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 8a495b3..bb68211 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include <xen/page.h>
 #include
 
 typedef unsigned int pending_ring_idx_t;
@@ -64,8 +65,8 @@ struct pending_tx_info {
 	struct ubuf_info callback_struct;
 };
 
-#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, PAGE_SIZE)
-#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
+#define XEN_NETIF_TX_RING_SIZE __CONST_RING_SIZE(xen_netif_tx, XEN_PAGE_SIZE)
+#define XEN_NETIF_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, XEN_PAGE_SIZE)
 
 struct xenvif_rx_meta {
 	int id;
@@ -80,16 +81,18 @@ struct xenvif_rx_meta {
 /* Discriminate from any valid pending_idx value. */
 #define INVALID_PENDING_IDX 0xFFFF
 
-#define MAX_BUFFER_OFFSET PAGE_SIZE
+#define MAX_BUFFER_OFFSET XEN_PAGE_SIZE
 
 #define MAX_PENDING_REQS XEN_NETIF_TX_RING_SIZE
 
+#define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1)
+
 /* It's possible for an skb to have a maximal number of frags
  * but still be less than MAX_BUFFER_OFFSET in size. Thus the
- * worst-case number of copy operations is MAX_SKB_FRAGS per
+ * worst-case number of copy operations is MAX_XEN_SKB_FRAGS per
  * ring slot.
  */
-#define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
+#define MAX_GRANT_COPY_OPS (MAX_XEN_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
 #define NETBACK_INVALID_HANDLE -1
 
@@ -203,7 +206,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 /* Maximum number of Rx slots a to-guest packet may use, including the
  * slot needed for GSO meta-data.
  */
-#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
+#define XEN_NETBK_RX_SLOTS_MAX ((MAX_XEN_SKB_FRAGS + 1))
 
 enum state_bit_shift {
 	/* This bit marks that the vif is connected */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 3f77030..828085b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -263,6 +263,65 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 	return meta;
 }
 
+struct gop_frag_copy
+{
+	struct xenvif_queue *queue;
+	struct netrx_pending_operations *npo;
+	struct xenvif_rx_meta *meta;
+	int head;
+	int gso_type;
+
+	struct page *page;
+};
+
+static void xenvif_gop_frag_copy_grant(unsigned long mfn, unsigned int offset,
+				       unsigned int *len, void *data)
+{
+	struct gop_frag_copy *info = data;
+	struct gnttab_copy *copy_gop;
+	struct xen_page_foreign *foreign;
+	/* Convenient aliases */
+	struct xenvif_queue *queue = info->queue;
+	struct netrx_pending_operations *npo = info->npo;
+	struct page *page = info->page;
+
+	BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
+
+	if (npo->copy_off == MAX_BUFFER_OFFSET)
+		info->meta = get_next_rx_buffer(queue, npo);
+
+	if (npo->copy_off + *len > MAX_BUFFER_OFFSET)
+		*len = MAX_BUFFER_OFFSET - npo->copy_off;
+
+	copy_gop = npo->copy + npo->copy_prod++;
+	copy_gop->flags = GNTCOPY_dest_gref;
+	copy_gop->len = *len;
+
+	foreign = xen_page_foreign(page);
+	if (foreign) {
+		copy_gop->source.domid = foreign->domid;
+		copy_gop->source.u.ref = foreign->gref;
+		copy_gop->flags |= GNTCOPY_source_gref;
+	} else {
+		copy_gop->source.domid = DOMID_SELF;
+		copy_gop->source.u.gmfn = mfn;
+	}
+	copy_gop->source.offset = offset;
+
+	copy_gop->dest.domid = queue->vif->domid;
+	copy_gop->dest.offset = npo->copy_off;
+	copy_gop->dest.u.ref = npo->copy_gref;
+
+	npo->copy_off += *len;
+	info->meta->size += *len;
+
+	/* Leave a gap for the GSO descriptor. */
+	if (info->head && ((1 << info->gso_type) & queue->vif->gso_mask))
+		queue->rx.req_cons++;
+
+	info->head = 0; /* There must be something in this buffer now */
+}
+
 /*
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
@@ -272,83 +331,54 @@ static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
 {
-	struct gnttab_copy *copy_gop;
-	struct xenvif_rx_meta *meta;
+	struct gop_frag_copy info =
+	{
+		.queue = queue,
+		.npo = npo,
+		.head = *head,
+		.gso_type = XEN_NETIF_GSO_TYPE_NONE,
+	};
 	unsigned long bytes;
-	int gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
 	if (skb_is_gso(skb)) {
 		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
-			gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
+			info.gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
 		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
-			gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
+			info.gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
 	}
 
 	/* Data must not cross a page boundary.
 	 */
 	BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
 
-	meta = npo->meta + npo->meta_prod - 1;
+	info.meta = npo->meta + npo->meta_prod - 1;
 
 	/* Skip unused frames from start of page */
 	page += offset >> PAGE_SHIFT;
 	offset &= ~PAGE_MASK;
 
 	while (size > 0) {
-		struct xen_page_foreign *foreign;
-
 		BUG_ON(offset >= PAGE_SIZE);
 		BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
-
-		if (npo->copy_off == MAX_BUFFER_OFFSET)
-			meta = get_next_rx_buffer(queue, npo);
 
 		bytes = PAGE_SIZE - offset;
 		if (bytes > size)
 			bytes = size;
 
-		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
-			bytes = MAX_BUFFER_OFFSET - npo->copy_off;
-
-		copy_gop = npo->copy + npo->copy_prod++;
-		copy_gop->flags = GNTCOPY_dest_gref;
-		copy_gop->len = bytes;
-
-		foreign = xen_page_foreign(page);
-		if (foreign) {
-			copy_gop->source.domid = foreign->domid;
-			copy_gop->source.u.ref = foreign->gref;
-			copy_gop->flags |= GNTCOPY_source_gref;
-		} else {
-			copy_gop->source.domid = DOMID_SELF;
-			copy_gop->source.u.gmfn =
-				virt_to_mfn(page_address(page));
-		}
-		copy_gop->source.offset = offset;
-
-		copy_gop->dest.domid = queue->vif->domid;
-		copy_gop->dest.offset = npo->copy_off;
-		copy_gop->dest.u.ref = npo->copy_gref;
-
-		npo->copy_off += bytes;
-		meta->size += bytes;
-
-		offset += bytes;
+		info.page = page;
+		gnttab_foreach_grant(page, offset, bytes,
+				     xenvif_gop_frag_copy_grant,
+				     &info);
 		size -= bytes;
+		offset = 0;
 
-		/* Next frame */
-		if (offset == PAGE_SIZE && size) {
+		/* Next page */
+		if (size) {
 			BUG_ON(!PageCompound(page));
 			page++;
-			offset = 0;
 		}
-
-		/* Leave a gap for the GSO descriptor. */
-		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
-			queue->rx.req_cons++;
-
-		*head = 0; /* There must be something in this buffer now. */
-
 	}
+
+	*head = info.head;
 }
 
 /*
@@ -747,7 +777,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
 		first->size -= txp->size;
 		slots++;
 
-		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
+		if (unlikely((txp->offset + txp->size) > XEN_PAGE_SIZE)) {
 			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %u, size: %u\n",
 				   txp->offset, txp->size);
 			xenvif_fatal_tx_err(queue->vif);
@@ -1241,11 +1271,11 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
-		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
+		if (unlikely((txreq.offset + txreq.size) > XEN_PAGE_SIZE)) {
 			netdev_err(queue->vif->dev,
 				   "txreq.offset: %u, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
-				   (unsigned long)(txreq.offset&~PAGE_MASK) + txreq.size);
+				   (unsigned long)(txreq.offset&~XEN_PAGE_MASK) + txreq.size);
 			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
@@ -1287,7 +1317,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 			virt_to_mfn(skb->data);
 		queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
 		queue->tx_copy_ops[*copy_ops].dest.offset =
-			offset_in_page(skb->data);
+			offset_in_page(skb->data) & ~XEN_PAGE_MASK;
 
 		queue->tx_copy_ops[*copy_ops].len = data_len;
 		queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
@@ -1780,7 +1810,7 @@ int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, XEN_PAGE_SIZE);
 
 	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     &rx_ring_ref, 1, &addr);
@@ -1788,7 +1818,7 @@ int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
 
 	return 0;
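For reference, the worst case behind the MAX_GRANT_COPY_OPS change
works out as follows with 4KB grants. This is a stand-alone sketch,
not part of the patch; the 256-entry RX ring size is assumed only for
the example.

#include <stdio.h>

#define XEN_PAGE_SIZE		4096UL		/* grant granularity */
/* Same formula as the patch: a 64KB payload plus one extra slot */
#define MAX_XEN_SKB_FRAGS	(65536 / XEN_PAGE_SIZE + 1)

int main(void)
{
	/* Assumed RX ring size for the example (not taken from the patch) */
	unsigned long rx_ring_size = 256;

	printf("MAX_XEN_SKB_FRAGS  = %lu\n", MAX_XEN_SKB_FRAGS);	/* 17 */
	printf("MAX_GRANT_COPY_OPS = %lu\n",
	       MAX_XEN_SKB_FRAGS * rx_ring_size);			/* 4352 */
	return 0;
}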