Message ID | 20231215171020.687342-5-bigeasy@linutronix.de (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | Netdev Maintainers |
Series | locking: Introduce nested-BH locking. |
Hi Sebastian,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Sebastian-Andrzej-Siewior/locking-local_lock-Introduce-guard-definition-for-local_lock/20231216-011911
base:   net-next/main
patch link:    https://lore.kernel.org/r/20231215171020.687342-5-bigeasy%40linutronix.de
patch subject: [PATCH net-next 04/24] net: Use nested-BH locking for napi_alloc_cache.
config: x86_64-randconfig-121-20231216 (https://download.01.org/0day-ci/archive/20231216/202312161210.q8xdLxsl-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231216/202312161210.q8xdLxsl-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312161210.q8xdLxsl-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> net/core/skbuff.c:302:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct local_lock_t [usertype] *l @@ got struct local_lock_t [noderef] __percpu * @@
   net/core/skbuff.c:302:38: sparse:     expected struct local_lock_t [usertype] *l
   net/core/skbuff.c:302:38: sparse:     got struct local_lock_t [noderef] __percpu *
   net/core/skbuff.c:331:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct local_lock_t [usertype] *l @@ got struct local_lock_t [noderef] __percpu * @@
   net/core/skbuff.c:331:38: sparse:     expected struct local_lock_t [usertype] *l
   net/core/skbuff.c:331:38: sparse:     got struct local_lock_t [noderef] __percpu *
   net/core/skbuff.c:734:17: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct local_lock_t [usertype] *l @@ got struct local_lock_t [noderef] __percpu * @@
   net/core/skbuff.c:734:17: sparse:     expected struct local_lock_t [usertype] *l
   net/core/skbuff.c:734:17: sparse:     got struct local_lock_t [noderef] __percpu *
   net/core/skbuff.c:806:9: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct local_lock_t [usertype] *l @@ got struct local_lock_t [noderef] __percpu * @@
   net/core/skbuff.c:806:9: sparse:     expected struct local_lock_t [usertype] *l
   net/core/skbuff.c:806:9: sparse:     got struct local_lock_t [noderef] __percpu *
   net/core/skbuff.c:1317:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct local_lock_t [usertype] *l @@ got struct local_lock_t [noderef] __percpu * @@
   net/core/skbuff.c:1317:38: sparse:     expected struct local_lock_t [usertype] *l
   net/core/skbuff.c:1317:38: sparse:     got struct local_lock_t [noderef] __percpu *
   net/core/skbuff.c: note: in included file (through include/linux/mmzone.h, include/linux/gfp.h, include/linux/umh.h, include/linux/kmod.h, ...):
   include/linux/page-flags.h:242:46: sparse: sparse: self-comparison always evaluates to false
   net/core/skbuff.c: note: in included file (through include/linux/mmzone.h, include/linux/gfp.h, include/linux/umh.h, include/linux/kmod.h, ...):
>> include/linux/local_lock.h:71:1: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got struct local_lock_t [usertype] * @@
   include/linux/local_lock.h:71:1: sparse:     expected void const [noderef] __percpu *__vpp_verify
   include/linux/local_lock.h:71:1: sparse:     got struct local_lock_t [usertype] *
>> include/linux/local_lock.h:71:1: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got struct local_lock_t [usertype] * @@
   include/linux/local_lock.h:71:1: sparse:     expected void const [noderef] __percpu *__vpp_verify
   include/linux/local_lock.h:71:1: sparse:     got struct local_lock_t [usertype] *
>> include/linux/local_lock.h:71:1: sparse: sparse: incorrect type in initializer (different address spaces) @@ expected void const [noderef] __percpu *__vpp_verify @@ got struct local_lock_t [usertype] * @@
   include/linux/local_lock.h:71:1: sparse:     expected void const [noderef] __percpu *__vpp_verify
   include/linux/local_lock.h:71:1: sparse:     got struct local_lock_t [usertype] *
   net/core/skbuff.c: note: in included file (through include/net/net_namespace.h, include/linux/inet.h):
   include/linux/skbuff.h:2715:28: sparse: sparse: self-comparison always evaluates to false
   net/core/skbuff.c: note: in included file (through include/linux/skbuff.h, include/net/net_namespace.h, include/linux/inet.h):
   include/net/checksum.h:33:39: sparse: sparse: incorrect type in argument 3 (different base types) @@ expected restricted __wsum [usertype] sum @@ got unsigned int @@
   include/net/checksum.h:33:39: sparse:     expected restricted __wsum [usertype] sum
   include/net/checksum.h:33:39: sparse:     got unsigned int

vim +302 net/core/skbuff.c

   296	
   297	void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
   298	{
   299		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
   300	
   301		fragsz = SKB_DATA_ALIGN(fragsz);
 > 302		guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock);
   303	
   304		return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
   305	}
   306	EXPORT_SYMBOL(__napi_alloc_frag_align);
   307	
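The warning flagged at net/core/skbuff.c:302 stems from the address-space
annotations sparse tracks for per-CPU data: napi_alloc_cache is declared with
DEFINE_PER_CPU, so &napi_alloc_cache.bh_lock has type "local_lock_t [noderef]
__percpu *", while the guard at this point in the series expects a plain
"local_lock_t *". A minimal standalone sketch of the same mismatch, where
demo_lock, takes_plain_pointer() and demo() are made-up names for illustration
and not kernel code:

	#include <linux/percpu.h>
	#include <linux/local_lock.h>

	static DEFINE_PER_CPU(local_lock_t, demo_lock) = INIT_LOCAL_LOCK(demo_lock);

	static void takes_plain_pointer(local_lock_t *l)
	{
		/* the body is irrelevant; only the parameter type matters to sparse */
	}

	static void demo(void)
	{
		/* &demo_lock is "local_lock_t [noderef] __percpu *": sparse warns */
		takes_plain_pointer(&demo_lock);

		/* this_cpu_ptr() yields a plain pointer for this CPU: no warning */
		takes_plain_pointer(this_cpu_ptr(&demo_lock));
	}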
On 2023-12-16 12:43:23 [+0800], kernel test robot wrote:
> Hi Sebastian,

Hi,

> >> net/core/skbuff.c:302:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct local_lock_t [usertype] *l @@ got struct local_lock_t [noderef] __percpu * @@

I updated the guard definition for that. Thanks for the report.

Sebastian
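The updated guard definition itself is not quoted in this thread. One
plausible shape for it (a sketch, not necessarily the exact change that was
applied) is to make the guard accept the __percpu pointer and have the
lock/unlock helpers resolve it with this_cpu_ptr() internally, so callers can
keep passing &napi_alloc_cache.bh_lock directly:

	/* Sketch only: the guard stores the __percpu pointer, and
	 * local_lock_nested_bh() / local_unlock_nested_bh() are assumed
	 * to call this_cpu_ptr() on it before acquiring/releasing.
	 */
	DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu *,
		     local_lock_nested_bh(_T),
		     local_unlock_nested_bh(_T))

With that, the pointer keeps its __percpu annotation until the per-CPU access
actually happens, which is what sparse's address-space check wants to see.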
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index de9397e45718a..9c3073dcc80f1 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -265,6 +265,7 @@ static void *page_frag_alloc_1k(struct page_frag_1k *nc, gfp_t gfp_mask)
 #endif
 
 struct napi_alloc_cache {
+	local_lock_t bh_lock;
 	struct page_frag_cache page;
 	struct page_frag_1k page_small;
 	unsigned int skb_count;
@@ -272,7 +273,9 @@ struct napi_alloc_cache {
 };
 
 static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
-static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);
+static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache) = {
+	.bh_lock = INIT_LOCAL_LOCK(bh_lock),
+};
 
 /* Double check that napi_get_frags() allocates skbs with
  * skb->head being backed by slab, not a page fragment.
@@ -296,6 +299,7 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
+	guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock);
 
 	return page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align_mask);
 }
@@ -324,6 +328,7 @@ static struct sk_buff *napi_skb_cache_get(void)
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 	struct sk_buff *skb;
 
+	guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock);
 	if (unlikely(!nc->skb_count)) {
 		nc->skb_count = kmem_cache_alloc_bulk(skbuff_cache,
 						      GFP_ATOMIC,
@@ -726,9 +731,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 		pfmemalloc = nc->pfmemalloc;
 	} else {
 		local_bh_disable();
-		nc = this_cpu_ptr(&napi_alloc_cache.page);
-		data = page_frag_alloc(nc, len, gfp_mask);
-		pfmemalloc = nc->pfmemalloc;
+		scoped_guard(local_lock_nested_bh, &napi_alloc_cache.bh_lock) {
+			nc = this_cpu_ptr(&napi_alloc_cache.page);
+			data = page_frag_alloc(nc, len, gfp_mask);
+			pfmemalloc = nc->pfmemalloc;
+		}
 		local_bh_enable();
 	}
 
@@ -793,31 +800,32 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 		goto skb_success;
 	}
 
-	nc = this_cpu_ptr(&napi_alloc_cache);
-
 	if (sk_memalloc_socks())
 		gfp_mask |= __GFP_MEMALLOC;
 
-	if (NAPI_HAS_SMALL_PAGE_FRAG && len <= SKB_WITH_OVERHEAD(1024)) {
-		/* we are artificially inflating the allocation size, but
-		 * that is not as bad as it may look like, as:
-		 * - 'len' less than GRO_MAX_HEAD makes little sense
-		 * - On most systems, larger 'len' values lead to fragment
-		 *   size above 512 bytes
-		 * - kmalloc would use the kmalloc-1k slab for such values
-		 * - Builds with smaller GRO_MAX_HEAD will very likely do
-		 *   little networking, as that implies no WiFi and no
-		 *   tunnels support, and 32 bits arches.
-		 */
-		len = SZ_1K;
+	scoped_guard(local_lock_nested_bh, &napi_alloc_cache.bh_lock) {
+		nc = this_cpu_ptr(&napi_alloc_cache);
+		if (NAPI_HAS_SMALL_PAGE_FRAG && len <= SKB_WITH_OVERHEAD(1024)) {
+			/* we are artificially inflating the allocation size, but
+			 * that is not as bad as it may look like, as:
+			 * - 'len' less than GRO_MAX_HEAD makes little sense
+			 * - On most systems, larger 'len' values lead to fragment
+			 *   size above 512 bytes
+			 * - kmalloc would use the kmalloc-1k slab for such values
+			 * - Builds with smaller GRO_MAX_HEAD will very likely do
+			 *   little networking, as that implies no WiFi and no
+			 *   tunnels support, and 32 bits arches.
+			 */
+			len = SZ_1K;
 
-		data = page_frag_alloc_1k(&nc->page_small, gfp_mask);
-		pfmemalloc = NAPI_SMALL_PAGE_PFMEMALLOC(nc->page_small);
-	} else {
-		len = SKB_HEAD_ALIGN(len);
+			data = page_frag_alloc_1k(&nc->page_small, gfp_mask);
+			pfmemalloc = NAPI_SMALL_PAGE_PFMEMALLOC(nc->page_small);
+		} else {
+			len = SKB_HEAD_ALIGN(len);
 
-		data = page_frag_alloc(&nc->page, len, gfp_mask);
-		pfmemalloc = nc->page.pfmemalloc;
+			data = page_frag_alloc(&nc->page, len, gfp_mask);
+			pfmemalloc = nc->page.pfmemalloc;
+		}
 	}
 
 	if (unlikely(!data))
@@ -1306,6 +1314,7 @@ static void napi_skb_cache_put(struct sk_buff *skb)
 	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
 	u32 i;
 
+	guard(local_lock_nested_bh)(&napi_alloc_cache.bh_lock);
 	kasan_poison_object_data(skbuff_cache, skb);
 	nc->skb_cache[nc->skb_count++] = skb;
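For readers unfamiliar with the scope-based helpers from
include/linux/cleanup.h used above: guard() acquires the lock and releases it
automatically at the end of the enclosing scope, while scoped_guard() confines
it to the braced block that follows. Open-coded, the __netdev_alloc_skb() hunk
is roughly equivalent to the following (a sketch of the intended semantics,
not the compiler-generated expansion):

	local_bh_disable();
	local_lock_nested_bh(&napi_alloc_cache.bh_lock);
	nc = this_cpu_ptr(&napi_alloc_cache.page);
	data = page_frag_alloc(nc, len, gfp_mask);
	pfmemalloc = nc->pfmemalloc;
	local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
	local_bh_enable();

The guard forms have the advantage that early returns and error paths inside
the protected scope cannot leak the lock.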
napi_alloc_cache is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT this
data structure requires explicit locking.

Add a local_lock_t to the data structure and use local_lock_nested_bh() for
locking. This change adds only lockdep coverage and does not alter the
functional behaviour for !PREEMPT_RT.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 net/core/skbuff.c | 57 +++++++++++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 24 deletions(-)
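The last sentence of the changelog is the key design point: on !PREEMPT_RT
kernels, local_bh_disable() already ensures that at most one context per CPU
touches the cache, so the new lock only has to feed lockdep; on PREEMPT_RT,
where BH-disabled code is preemptible, the local_lock_t supplies the actual
serialization. Conceptually (a simplified sketch; the real definitions live
in include/linux/local_lock_internal.h and differ in detail):

	#ifdef CONFIG_PREEMPT_RT
	/* local_lock_t is backed by a per-CPU spinlock_t: taking it really
	 * serializes softirq users that can preempt each other on RT.
	 */
	#define local_lock_nested_bh(lock)	spin_lock(this_cpu_ptr(lock))
	#define local_unlock_nested_bh(lock)	spin_unlock(this_cpu_ptr(lock))
	#else
	/* BH is already disabled here, which is what actually serializes the
	 * per-CPU users; only lockdep bookkeeping is added.
	 */
	#define local_lock_nested_bh(lock)			\
		do {						\
			lockdep_assert_in_softirq();		\
			local_lock_acquire(this_cpu_ptr(lock));	\
		} while (0)
	#define local_unlock_nested_bh(lock)			\
		local_lock_release(this_cpu_ptr(lock))
	#endif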