From patchwork Wed Apr 20 19:47:57 2011
X-Patchwork-Submitter: Shirley Ma
X-Patchwork-Id: 723651
Subject: [PATCH V3 3/8] Add userspace buffers support in skb
From: Shirley Ma
To: David Miller
Cc: mst@redhat.com, Eric Dumazet, Avi Kivity, Arnd Bergmann,
    netdev@vger.kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <1303328216.19336.18.camel@localhost.localdomain>
References: <1303328216.19336.18.camel@localhost.localdomain>
Date: Wed, 20 Apr 2011 12:47:57 -0700
Message-ID: <1303328877.19336.28.camel@localhost.localdomain>
X-Mailing-List: kvm@vger.kernel.org

This patch adds userspace buffer support to the skb. A new struct
skb_ubuf_info is needed to keep the userspace buffer argument and index,
and a callback is used to notify userspace that it may release the
buffers once the lower device has finished DMA (i.e. once the last
reference to that skb is gone). A hypothetical usage sketch follows the
diff below.

Signed-off-by: Shirley Ma
---
 include/linux/skbuff.h |   14 ++++++++++++++
 net/core/skbuff.c      |   15 ++++++++++++++-
 2 files changed, 28 insertions(+), 1 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d0ae90a..47a187b 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -189,6 +189,16 @@ enum {
 	SKBTX_DRV_NEEDS_SK_REF = 1 << 3,
 };
 
+/* The callback notifies userspace to release buffers when skb DMA is done in
+ * lower device, the desc is used to track userspace buffer index.
+ */
+struct skb_ubuf_info {
+	/* support buffers allocation from userspace */
+	void (*callback)(struct sk_buff *);
+	void *arg;
+	size_t desc;
+};
+
 /* This data is invariant across clones and lives at
  * the end of the header data, ie. at skb->end.
 */
@@ -211,6 +221,10 @@ struct skb_shared_info {
 	/* Intermediate layers must ensure that destructor_arg
 	 * remains valid until skb destructor */
 	void *		destructor_arg;
+
+	/* DMA mapping from/to userspace buffers */
+	struct skb_ubuf_info	ubuf;
+
 	/* must be last field, see pskb_expand_head() */
 	skb_frag_t	frags[MAX_SKB_FRAGS];
 };
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7ebeed0..822c07d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -210,6 +210,8 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 	shinfo = skb_shinfo(skb);
 	memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
 	atomic_set(&shinfo->dataref, 1);
+	shinfo->ubuf.callback = NULL;
+	shinfo->ubuf.arg = NULL;
 	kmemcheck_annotate_variable(shinfo->destructor_arg);
 
 	if (fclone) {
@@ -327,7 +329,15 @@ static void skb_release_data(struct sk_buff *skb)
 			for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
 				put_page(skb_shinfo(skb)->frags[i].page);
 		}
-
+		/*
+		 * if skb buf is from userspace, we need to notify the caller
+		 * the lower device DMA has done;
+		 */
+		if (skb_shinfo(skb)->ubuf.callback) {
+			skb_shinfo(skb)->ubuf.callback(skb);
+			skb_shinfo(skb)->ubuf.callback = NULL;
+			skb_shinfo(skb)->ubuf.arg = NULL;
+		}
 		if (skb_has_frag_list(skb))
 			skb_drop_fraglist(skb);
 
@@ -480,6 +490,9 @@ bool skb_recycle_check(struct sk_buff *skb, int skb_size)
 	if (irqs_disabled())
 		return false;
 
+	if (shinfo->ubuf.callback)
+		return false;
+
 	if (skb_is_nonlinear(skb) || skb->fclone != SKB_FCLONE_UNAVAILABLE)
 		return false;
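
For reference, here is a minimal, hypothetical sketch of how a zero-copy
sender could use the new field (this is not part of the patch; the
my_zc_* names are illustrative only). The idea is that the sender pins
the user pages into the skb frags, fills in skb_shinfo(skb)->ubuf, and
the registered callback is then invoked from skb_release_data() once the
lower device has finished DMA:

#include <linux/skbuff.h>

/* Hypothetical bookkeeping for one zero-copy transmit request. */
struct my_zc_request {
	size_t desc;	/* index userspace uses to identify its buffers */
	/* ... references to the pinned userspace pages ... */
};

/* Completion callback: called from skb_release_data() after DMA is done. */
static void my_zc_complete(struct sk_buff *skb)
{
	struct my_zc_request *req = skb_shinfo(skb)->ubuf.arg;

	pr_debug("zero-copy skb done, desc %zu\n", req->desc);
	/* unpin the userspace pages and signal completion to the sender here */
}

/* Attach the userspace-buffer info before handing the skb to the device. */
static void my_zc_attach(struct sk_buff *skb, struct my_zc_request *req)
{
	skb_shinfo(skb)->ubuf.callback = my_zc_complete;
	skb_shinfo(skb)->ubuf.arg = req;
	skb_shinfo(skb)->ubuf.desc = req->desc;
}

Note that skb_release_data() clears ubuf.callback and ubuf.arg right
after invoking the callback, so the completion fires at most once per
skb.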