From patchwork Tue Feb  9 20:48:41 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12079179
X-Patchwork-Delegate: kuba@kernel.org
Date: Tue, 09 Feb 2021 20:48:41 +0000
To: "David S. Miller", Jakub Kicinski
From: Alexander Lobakin
Cc: Jonathan Lemon, Eric Dumazet, Dmitry Vyukov, Willem de Bruijn,
 Alexander Lobakin, Randy Dunlap, Kevin Hao, Pablo Neira Ayuso,
 Jakub Sitnicki, Marco Elver, Dexuan Cui, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Taehee Yoo,
 Cong Wang, Björn Töpel, Miaohe Lin, Guillaume Nault, Yonghong Song,
 zhudi, Michal Kubecek, Marcelo Ricardo Leitner,
 Dmitry Safonov <0x7f454c46@gmail.com>, Yang Yingliang,
 linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Reply-To: Alexander Lobakin
Subject: [v3 net-next 07/10] skbuff: move NAPI cache declarations upper in the file
Message-ID: <20210209204533.327360-8-alobakin@pm.me>
In-Reply-To: <20210209204533.327360-1-alobakin@pm.me>
References: <20210209204533.327360-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

NAPI cache structures will be used for allocating skbuff_heads, so move
their declarations higher up in the file.
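[Editorial note, not part of the patch: for context, a minimal sketch of how the per-CPU napi_alloc_cache being moved here is reached in practice. Drivers call napi_alloc_frag() from their NAPI poll path, which lands in __napi_alloc_frag_align() and serves the request from the current CPU's cache. The helper name example_rx_buf_alloc and the build_skb()-style sizing below are assumptions made purely for illustration.]

/* Hypothetical driver helper, for illustration only: shows the typical
 * path into the per-CPU napi_alloc_cache. Must be called from softirq /
 * NAPI context, since napi_alloc_frag() uses the NAPI-only cache.
 */
#include <linux/skbuff.h>

static void *example_rx_buf_alloc(unsigned int len)
{
	/* Room for packet data plus the trailing skb_shared_info, as
	 * drivers commonly size build_skb()-based RX buffers.
	 */
	unsigned int truesize = SKB_DATA_ALIGN(len) +
				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

	return napi_alloc_frag(truesize);
}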
Signed-off-by: Alexander Lobakin
---
 net/core/skbuff.c | 90 +++++++++++++++++++++++------------------------
 1 file changed, 45 insertions(+), 45 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4be2bb969535..860a9d4f752f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -119,6 +119,51 @@ static void skb_under_panic(struct sk_buff *skb, unsigned int sz, void *addr)
 	skb_panic(skb, sz, addr, __func__);
 }
 
+#define NAPI_SKB_CACHE_SIZE	64
+
+struct napi_alloc_cache {
+	struct page_frag_cache page;
+	unsigned int skb_count;
+	void *skb_cache[NAPI_SKB_CACHE_SIZE];
+};
+
+static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
+static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);
+
+static void *__alloc_frag_align(unsigned int fragsz, gfp_t gfp_mask,
+				unsigned int align_mask)
+{
+	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+
+	return page_frag_alloc_align(&nc->page, fragsz, gfp_mask, align_mask);
+}
+
+void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+{
+	fragsz = SKB_DATA_ALIGN(fragsz);
+
+	return __alloc_frag_align(fragsz, GFP_ATOMIC, align_mask);
+}
+EXPORT_SYMBOL(__napi_alloc_frag_align);
+
+void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
+{
+	struct page_frag_cache *nc;
+	void *data;
+
+	fragsz = SKB_DATA_ALIGN(fragsz);
+	if (in_irq() || irqs_disabled()) {
+		nc = this_cpu_ptr(&netdev_alloc_cache);
+		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
+	} else {
+		local_bh_disable();
+		data = __alloc_frag_align(fragsz, GFP_ATOMIC, align_mask);
+		local_bh_enable();
+	}
+	return data;
+}
+EXPORT_SYMBOL(__netdev_alloc_frag_align);
+
 /* Caller must provide SKB that is memset cleared */
 static void __build_skb_around(struct sk_buff *skb, void *data,
 			       unsigned int frag_size)
@@ -220,51 +265,6 @@ struct sk_buff *build_skb_around(struct sk_buff *skb,
 }
 EXPORT_SYMBOL(build_skb_around);
 
-#define NAPI_SKB_CACHE_SIZE	64
-
-struct napi_alloc_cache {
-	struct page_frag_cache page;
-	unsigned int skb_count;
-	void *skb_cache[NAPI_SKB_CACHE_SIZE];
-};
-
-static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
-static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);
-
-static void *__alloc_frag_align(unsigned int fragsz, gfp_t gfp_mask,
-				unsigned int align_mask)
-{
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
-
-	return page_frag_alloc_align(&nc->page, fragsz, gfp_mask, align_mask);
-}
-
-void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
-{
-	fragsz = SKB_DATA_ALIGN(fragsz);
-
-	return __alloc_frag_align(fragsz, GFP_ATOMIC, align_mask);
-}
-EXPORT_SYMBOL(__napi_alloc_frag_align);
-
-void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
-{
-	struct page_frag_cache *nc;
-	void *data;
-
-	fragsz = SKB_DATA_ALIGN(fragsz);
-	if (in_irq() || irqs_disabled()) {
-		nc = this_cpu_ptr(&netdev_alloc_cache);
-		data = page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align_mask);
-	} else {
-		local_bh_disable();
-		data = __alloc_frag_align(fragsz, GFP_ATOMIC, align_mask);
-		local_bh_enable();
-	}
-	return data;
-}
-EXPORT_SYMBOL(__netdev_alloc_frag_align);
-
 /*
  * kmalloc_reserve is a wrapper around kmalloc_node_track_caller that tells
  * the caller if emergency pfmemalloc reserves are being used. If it is and