From patchwork Fri Dec 11 12:43:02 2009
X-Patchwork-Submitter: Shirley Ma
X-Patchwork-Id: 66643
Subject: [PATCH v2 2/4] Defer skb allocation -- new skb_set calls & chain pages in virtio_net
From: Shirley Ma
To: Rusty Russell
Cc: "Michael S. Tsirkin", Avi Kivity, netdev@vger.kernel.org,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Anthony Liguori
In-Reply-To: <1260534506.30371.6.camel@localhost.localdomain>
References: <1258697359.7416.14.camel@localhost.localdomain>
 <200911231123.18898.rusty@rustcorp.com.au>
 <20091208122134.GA17286@redhat.com>
 <1260534506.30371.6.camel@localhost.localdomain>
Date: Fri, 11 Dec 2009 04:43:02 -0800
Message-Id: <1260535382.30371.20.camel@localhost.localdomain>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bb5eb7b..100b4b9 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -80,29 +80,25 @@ static inline struct skb_vnet_hdr *skb_vnet_hdr(struct sk_buff *skb)
 	return (struct skb_vnet_hdr *)skb->cb;
 }
 
-static void give_a_page(struct virtnet_info *vi, struct page *page)
+static void give_pages(struct virtnet_info *vi, struct page *page)
 {
-	page->private = (unsigned long)vi->pages;
-	vi->pages = page;
-}
+	struct page *end;
 
-static void trim_pages(struct virtnet_info *vi, struct sk_buff *skb)
-{
-	unsigned int i;
-
-	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
-		give_a_page(vi, skb_shinfo(skb)->frags[i].page);
-	skb_shinfo(skb)->nr_frags = 0;
-	skb->data_len = 0;
+	/* Find end of list, sew whole thing into vi->pages. */
+	for (end = page; end->private; end = (struct page *)end->private);
+	end->private = (unsigned long)vi->pages;
+	vi->pages = page;
 }
 
 static struct page *get_a_page(struct virtnet_info *vi, gfp_t gfp_mask)
 {
 	struct page *p = vi->pages;
 
-	if (p)
+	if (p) {
 		vi->pages = (struct page *)p->private;
-	else
+		/* use private to chain pages for big packets */
+		p->private = 0;
+	} else
 		p = alloc_page(gfp_mask);
 	return p;
 }
@@ -128,6 +124,84 @@ static void skb_xmit_done(struct virtqueue *svq)
 	netif_wake_queue(vi->dev);
 }
 
+static int skb_set_frag(struct sk_buff *skb, struct page *page,
+			int offset, int len)
+{
+	int i = skb_shinfo(skb)->nr_frags;
+	skb_frag_t *f;
+
+	f = &skb_shinfo(skb)->frags[i];
+	f->page = page;
+	f->page_offset = offset;
+
+	if (len > PAGE_SIZE - f->page_offset)
+		f->size = PAGE_SIZE - f->page_offset;
+	else
+		f->size = len;
+
+	skb_shinfo(skb)->nr_frags++;
+	skb->data_len += f->size;
+	skb->len += f->size;
+
+	len -= f->size;
+	return len;
+}
+
+static struct sk_buff *skb_goodcopy(struct virtnet_info *vi, struct page **page,
+				    unsigned int *len)
+{
+	struct sk_buff *skb;
+	struct skb_vnet_hdr *hdr;
+	int copy, hdr_len, offset;
+	char *p;
+
+	p = page_address(*page);
+
+	skb = netdev_alloc_skb(vi->dev, GOOD_COPY_LEN + NET_IP_ALIGN);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, NET_IP_ALIGN);
+	hdr = skb_vnet_hdr(skb);
+
+	if (vi->mergeable_rx_bufs) {
+		hdr_len = sizeof(hdr->mhdr);
+		offset = hdr_len;
+	} else {
+		/* share one page between virtio_net header and data */
+		struct padded_vnet_hdr {
+			struct virtio_net_hdr hdr;
+			/* This padding makes our data 16 byte aligned */
+			char padding[6];
+		};
+		hdr_len = sizeof(hdr->hdr);
+		offset = sizeof(struct padded_vnet_hdr);
+	}
+
+	memcpy(hdr, p, hdr_len);
+
+	*len -= hdr_len;
+	p += offset;
+
+	copy = *len;
+	if (copy > skb_tailroom(skb))
+		copy = skb_tailroom(skb);
+	memcpy(skb_put(skb, copy), p, copy);
+
+	*len -= copy;
+	offset += copy;
+
+	if (*len) {
+		*len = skb_set_frag(skb, *page, offset, *len);
+		*page = (struct page *)(*page)->private;
+	} else {
+		give_pages(vi, *page);
+		*page = NULL;
+	}
+
+	return skb;
+}
+
 static void receive_skb(struct net_device *dev, struct sk_buff *skb,
 			unsigned len)
 {
@@ -162,7 +237,7 @@ static void receive_skb(struct net_device *dev, struct sk_buff *skb,
 		len -= copy;
 
 		if (!len) {
-			give_a_page(vi, skb_shinfo(skb)->frags[0].page);
+			give_pages(vi, skb_shinfo(skb)->frags[0].page);
 			skb_shinfo(skb)->nr_frags--;
 		} else {
 			skb_shinfo(skb)->frags[0].page_offset +=
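
A note on the page chaining the patch introduces: page->private is reused
as a next pointer, so vi->pages becomes a singly linked free list.
give_pages() walks to the end of the chain it is handed and sews the old
list head onto it, while get_a_page() pops one page and zeroes its link so
a stale pointer never escapes into a big-packet chain. Below is a minimal
userspace C model of that discipline; struct page here is a toy stand-in
with only the fields the example needs, and pool_give_pages() /
pool_get_page() are made-up names, not kernel API.

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for struct page: only 'private' matters for the chaining. */
struct page {
	unsigned long private;	/* doubles as the next pointer, as in the patch */
	int id;			/* demo output only */
};

/* Splice an already-linked chain onto the free list, like give_pages():
 * walk to the end of the incoming chain, then sew in the old head. */
static void pool_give_pages(struct page **pool, struct page *page)
{
	struct page *end;

	for (end = page; end->private; end = (struct page *)end->private)
		;
	end->private = (unsigned long)*pool;
	*pool = page;
}

/* Pop one page, clearing 'private' so no stale link leaks out --
 * this mirrors the "p->private = 0" line added to get_a_page(). */
static struct page *pool_get_page(struct page **pool)
{
	struct page *p = *pool;

	if (p) {
		*pool = (struct page *)p->private;
		p->private = 0;
	} else {
		p = calloc(1, sizeof(*p));	/* stands in for alloc_page() */
	}
	return p;
}

int main(void)
{
	struct page *pool = NULL;
	struct page a = { 0, 1 }, b = { 0, 2 };

	a.private = (unsigned long)&b;	/* hand back a two-page chain */
	pool_give_pages(&pool, &a);

	printf("got page %d\n", pool_get_page(&pool)->id);	/* prints 1 */
	printf("got page %d\n", pool_get_page(&pool)->id);	/* prints 2 */
	return 0;
}

Storing the next pointer in an unsigned long matches the type of the
kernel's page->private; NULL naturally encodes as 0, which is exactly what
terminates the walk in pool_give_pages().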
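
skb_set_frag() is also worth a gloss: a fragment may not cross its page, so
its size is clamped to PAGE_SIZE - offset and the unconsumed byte count is
returned, which lets a caller loop over the page chain until the return
value reaches zero. A self-checking sketch of the same arithmetic, assuming
4 KiB pages (fill_frag() is a hypothetical name for illustration):

#include <assert.h>

#define PAGE_SIZE 4096	/* assumed 4 KiB pages for this example */

/* Mirror of skb_set_frag()'s length math: clamp the fragment to the
 * space left in the page after 'offset', report what spills over. */
static int fill_frag(int offset, int len, int *frag_size)
{
	if (len > PAGE_SIZE - offset)
		*frag_size = PAGE_SIZE - offset;
	else
		*frag_size = len;
	return len - *frag_size;	/* remainder, as skb_set_frag() returns */
}

int main(void)
{
	int size;

	/* 6000 bytes starting 16 bytes into a page: 4080 fit, 1920 spill */
	assert(fill_frag(16, 6000, &size) == 1920 && size == 4080);
	/* a short tail fits entirely, so nothing is left to place */
	assert(fill_frag(0, 100, &size) == 0 && size == 100);
	return 0;
}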
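
Finally, the non-mergeable branch of skb_goodcopy() shares one page between
the virtio_net header and the packet data: only hdr_len bytes of header are
copied (10 bytes for struct virtio_net_hdr), but the data pointer advances
by sizeof(struct padded_vnet_hdr), i.e. 16, so the payload lands 16-byte
aligned. A userspace check of that layout; the fixed-width mirror of the
kernel struct below is an assumption for illustration, not the uapi header:

#include <assert.h>
#include <stdint.h>

/* Userspace mirror of the layout used in the patch: the header proper is
 * 10 bytes, and 6 bytes of padding push the packet data to offset 16. */
struct virtio_net_hdr {
	uint8_t flags;
	uint8_t gso_type;
	uint16_t hdr_len;
	uint16_t gso_size;
	uint16_t csum_start;
	uint16_t csum_offset;
};

struct padded_vnet_hdr {
	struct virtio_net_hdr hdr;
	char padding[6];	/* makes the data 16-byte aligned */
};

int main(void)
{
	char page[4096];	/* stands in for the receive page */
	const char *data = page + sizeof(struct padded_vnet_hdr);

	assert(sizeof(struct virtio_net_hdr) == 10);
	assert(sizeof(struct padded_vnet_hdr) == 16);
	/* skb_goodcopy() copies only hdr_len bytes of header but advances
	 * its data pointer by the padded size, as the patch does. */
	assert(data - page == 16);
	return 0;
}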