From patchwork Mon Feb 6 17:31:00 2023
X-Patchwork-Id: 13130431
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski, Paolo Abeni
Cc: netdev@vger.kernel.org, eric.dumazet@gmail.com, Soheil Hassas Yeganeh,
 Eric Dumazet
Date: Mon, 6 Feb 2023 17:31:00 +0000
Message-ID: <20230206173103.2617121-2-edumazet@google.com>
In-Reply-To: <20230206173103.2617121-1-edumazet@google.com>
References: <20230206173103.2617121-1-edumazet@google.com>
Subject: [PATCH v2 net-next 1/4] net: add SKB_HEAD_ALIGN() helper

We have many places using this expression:

   SKB_DATA_ALIGN(sizeof(struct skb_shared_info))

Using the new SKB_HEAD_ALIGN() helper will allow us to clean them up.

Signed-off-by: Eric Dumazet
Acked-by: Soheil Hassas Yeganeh
---
 include/linux/skbuff.h |  8 ++++++++
 net/core/skbuff.c      | 18 ++++++------------
 2 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 1fa95b916342e77601803ba1056f2d2b0646517b..c3df3b55da976dba2f5ba72bfa692329479d6750 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -255,6 +255,14 @@
 #define SKB_DATA_ALIGN(X)	ALIGN(X, SMP_CACHE_BYTES)
 #define SKB_WITH_OVERHEAD(X)	\
 	((X) - SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+
+/* For X bytes available in skb->head, what is the minimal
+ * allocation needed, knowing struct skb_shared_info needs
+ * to be aligned.
+ */
+#define SKB_HEAD_ALIGN(X) (SKB_DATA_ALIGN(X) + \
+			   SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+
 #define SKB_MAX_ORDER(X, ORDER) \
 	SKB_WITH_OVERHEAD((PAGE_SIZE << (ORDER)) - (X))
 #define SKB_MAX_HEAD(X)		(SKB_MAX_ORDER((X), 0))
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 624e9e4ec116e2a619e49b3d8d8be7ece2ee41cc..4abfc3ba6898d89f4df97bf5f069b291dd5e420f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -558,8 +558,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
	 * aligned memory blocks, unless SLUB/SLAB debug is enabled.
	 * Both skb->head and skb_shared_info are cache line aligned.
	 */
-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
	osize = kmalloc_size_roundup(size);
	data = kmalloc_reserve(osize, gfp_mask, node, &pfmemalloc);
	if (unlikely(!data))
@@ -632,8 +631,7 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
		goto skb_success;
	}

-	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	len = SKB_DATA_ALIGN(len);
+	len = SKB_HEAD_ALIGN(len);

	if (sk_memalloc_socks())
		gfp_mask |= __GFP_MEMALLOC;
@@ -732,8 +730,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
		data = page_frag_alloc_1k(&nc->page_small, gfp_mask);
		pfmemalloc = NAPI_SMALL_PAGE_PFMEMALLOC(nc->page_small);
	} else {
-		len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		len = SKB_DATA_ALIGN(len);
+		len = SKB_HEAD_ALIGN(len);

		data = page_frag_alloc(&nc->page, len, gfp_mask);
		pfmemalloc = nc->page.pfmemalloc;
@@ -1938,8 +1935,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
	if (skb_pfmemalloc(skb))
		gfp_mask |= __GFP_MEMALLOC;

-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
	size = kmalloc_size_roundup(size);
	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
	if (!data)
@@ -6289,8 +6285,7 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
	if (skb_pfmemalloc(skb))
		gfp_mask |= __GFP_MEMALLOC;

-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
	size = kmalloc_size_roundup(size);
	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
	if (!data)
@@ -6408,8 +6403,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
	if (skb_pfmemalloc(skb))
		gfp_mask |= __GFP_MEMALLOC;

-	size = SKB_DATA_ALIGN(size);
-	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	size = SKB_HEAD_ALIGN(size);
	size = kmalloc_size_roundup(size);
	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
	if (!data)
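As an aside for readers, the arithmetic above is easy to check in
isolation. Below is a stand-alone user-space sketch of what
SKB_HEAD_ALIGN() computes; the 64-byte cache line and 320-byte
struct skb_shared_info are assumed values for the example, not
constants from any particular kernel build.

/* User-space sketch of the SKB_HEAD_ALIGN() arithmetic.
 * SMP_CACHE_BYTES and the skb_shared_info size are assumptions
 * made for this example; real values depend on the kernel build.
 */
#include <stdio.h>

#define SMP_CACHE_BYTES   64   /* assumed cache line size */
#define SHINFO_SIZE       320  /* assumed sizeof(struct skb_shared_info) */

#define ALIGN(x, a)       (((x) + (a) - 1) & ~((a) - 1))
#define SKB_DATA_ALIGN(X) ALIGN(X, SMP_CACHE_BYTES)
#define SKB_HEAD_ALIGN(X) (SKB_DATA_ALIGN(X) + SKB_DATA_ALIGN(SHINFO_SIZE))

int main(void)
{
	/* 1000 payload bytes round up to 1024, plus 320 for the
	 * (already aligned) shared info: 1344 bytes total.
	 */
	printf("SKB_HEAD_ALIGN(1000) = %u\n", SKB_HEAD_ALIGN(1000u));
	return 0;
}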
From patchwork Mon Feb 6 17:31:01 2023
X-Patchwork-Id: 13130430
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski, Paolo Abeni
Cc: netdev@vger.kernel.org, eric.dumazet@gmail.com, Soheil Hassas Yeganeh,
 Eric Dumazet
Date: Mon, 6 Feb 2023 17:31:01 +0000
Message-ID: <20230206173103.2617121-3-edumazet@google.com>
In-Reply-To: <20230206173103.2617121-1-edumazet@google.com>
References: <20230206173103.2617121-1-edumazet@google.com>
Subject: [PATCH v2 net-next 2/4] net: remove osize variable in __alloc_skb()
This is a cleanup patch, preparing the following change in the series.

Signed-off-by: Eric Dumazet
Acked-by: Soheil Hassas Yeganeh
---
 net/core/skbuff.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4abfc3ba6898d89f4df97bf5f069b291dd5e420f..333f793f9cdba9946e0bd014e9a0f18bae20771d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -533,7 +533,6 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
 {
	struct kmem_cache *cache;
	struct sk_buff *skb;
-	unsigned int osize;
	bool pfmemalloc;
	u8 *data;

@@ -559,16 +558,15 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
	 * Both skb->head and skb_shared_info are cache line aligned.
	 */
	size = SKB_HEAD_ALIGN(size);
-	osize = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(osize, gfp_mask, node, &pfmemalloc);
+	size = kmalloc_size_roundup(size);
+	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
	if (unlikely(!data))
		goto nodata;
	/* kmalloc_size_roundup() might give us more room than requested.
	 * Put skb_shared_info exactly at the end of allocated zone,
	 * to allow max possible filling before reallocation.
	 */
-	size = SKB_WITH_OVERHEAD(osize);
-	prefetchw(data + size);
+	prefetchw(data + SKB_WITH_OVERHEAD(size));

	/*
	 * Only clear those fields we need to clear, not those that we will
@@ -576,7 +574,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
	 * the tail pointer in struct sk_buff!
	 */
	memset(skb, 0, offsetof(struct sk_buff, tail));
-	__build_skb_around(skb, data, osize);
+	__build_skb_around(skb, data, size);
	skb->pfmemalloc = pfmemalloc;

	if (flags & SKB_ALLOC_FCLONE) {
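To see why the separate variable is no longer needed:
kmalloc_size_roundup() reports the size the allocator will really hand
back, so the usable head size can be re-derived from a single variable
whenever needed. A user-space sketch follows; the power-of-two roundup
model and the 320-byte shinfo overhead are assumptions for the example,
not the real slab behavior.

/* Sketch of the "round up, then re-derive the head room" pattern.
 * kmalloc_size_roundup() is modeled as next-power-of-two, which is
 * only an approximation of real slab size classes.
 */
#include <stdio.h>

#define SHINFO_SIZE 320u /* assumed skb_shared_info overhead */

static unsigned int kmalloc_size_roundup_model(unsigned int size)
{
	unsigned int n = 32;

	while (n < size)
		n <<= 1;
	return n;
}

int main(void)
{
	unsigned int size = 1344; /* SKB_HEAD_ALIGN(1000) from patch 1 */

	size = kmalloc_size_roundup_model(size); /* 2048 in this model */
	/* The 704 bytes of slack become extra tailroom, because
	 * skb_shared_info is placed at the very end of the buffer.
	 */
	printf("allocated %u, usable head %u\n", size, size - SHINFO_SIZE);
	return 0;
}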
From patchwork Mon Feb 6 17:31:02 2023
X-Patchwork-Id: 13130434
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski, Paolo Abeni
Cc: netdev@vger.kernel.org, eric.dumazet@gmail.com, Soheil Hassas Yeganeh,
 Eric Dumazet
Date: Mon, 6 Feb 2023 17:31:02 +0000
Message-ID: <20230206173103.2617121-4-edumazet@google.com>
In-Reply-To: <20230206173103.2617121-1-edumazet@google.com>
References: <20230206173103.2617121-1-edumazet@google.com>
Subject: [PATCH v2 net-next 3/4] net: factorize code in kmalloc_reserve()

All kmalloc_reserve() callers have to make the same computation;
we can factorize it, to prepare the following patch in the series.

Signed-off-by: Eric Dumazet
Acked-by: Soheil Hassas Yeganeh
---
 net/core/skbuff.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 333f793f9cdba9946e0bd014e9a0f18bae20771d..c1232837cd0cb3befce0262fb8fda20272a26d45 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -478,17 +478,20 @@ EXPORT_SYMBOL(napi_build_skb);
  * may be used. Otherwise, the packet data may be discarded until enough
  * memory is free
  */
-static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
+static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
			     bool *pfmemalloc)
 {
-	void *obj;
	bool ret_pfmemalloc = false;
+	unsigned int obj_size;
+	void *obj;

+	obj_size = SKB_HEAD_ALIGN(*size);
+	*size = obj_size = kmalloc_size_roundup(obj_size);
	/*
	 * Try a regular allocation, when that fails and we're not entitled
	 * to the reserves, fail.
	 */
-	obj = kmalloc_node_track_caller(size,
+	obj = kmalloc_node_track_caller(obj_size,
					flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
					node);
	if (obj || !(gfp_pfmemalloc_allowed(flags)))
@@ -496,7 +499,7 @@ static void *kmalloc_reserve(size_t size, gfp_t flags, int node,

	/* Try again but now we are using pfmemalloc reserves */
	ret_pfmemalloc = true;
-	obj = kmalloc_node_track_caller(size, flags, node);
+	obj = kmalloc_node_track_caller(obj_size, flags, node);

 out:
	if (pfmemalloc)
@@ -557,9 +560,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
	 * aligned memory blocks, unless SLUB/SLAB debug is enabled.
	 * Both skb->head and skb_shared_info are cache line aligned.
	 */
-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
+	data = kmalloc_reserve(&size, gfp_mask, node, &pfmemalloc);
	if (unlikely(!data))
		goto nodata;
	/* kmalloc_size_roundup() might give us more room than requested.
@@ -1933,9 +1934,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
	if (skb_pfmemalloc(skb))
		gfp_mask |= __GFP_MEMALLOC;

-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(&size, gfp_mask, NUMA_NO_NODE, NULL);
	if (!data)
		goto nodata;
	size = SKB_WITH_OVERHEAD(size);
@@ -6283,9 +6282,7 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
	if (skb_pfmemalloc(skb))
		gfp_mask |= __GFP_MEMALLOC;

-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(&size, gfp_mask, NUMA_NO_NODE, NULL);
	if (!data)
		return -ENOMEM;
	size = SKB_WITH_OVERHEAD(size);
@@ -6401,9 +6398,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
	if (skb_pfmemalloc(skb))
		gfp_mask |= __GFP_MEMALLOC;

-	size = SKB_HEAD_ALIGN(size);
-	size = kmalloc_size_roundup(size);
-	data = kmalloc_reserve(size, gfp_mask, NUMA_NO_NODE, NULL);
+	data = kmalloc_reserve(&size, gfp_mask, NUMA_NO_NODE, NULL);
	if (!data)
		return -ENOMEM;
	size = SKB_WITH_OVERHEAD(size);
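The shape of the refactored interface: the callee owns the rounding and
reports the actual allocation size back through an in/out parameter. A
user-space model follows; alloc_head() is a hypothetical stand-in for
kmalloc_reserve(), not the kernel function itself.

/* Model of an allocator that rounds the request up and reports the
 * real size back, so callers stop duplicating the computation.
 * alloc_head() is hypothetical; it only mimics the calling convention.
 */
#include <stdio.h>
#include <stdlib.h>

static void *alloc_head(unsigned int *size)
{
	/* Stand-in for SKB_HEAD_ALIGN() + kmalloc_size_roundup():
	 * round up to a multiple of 64 bytes.
	 */
	unsigned int obj_size = (*size + 63u) & ~63u;

	*size = obj_size; /* report the size actually allocated */
	return malloc(obj_size);
}

int main(void)
{
	unsigned int size = 1000;
	void *head = alloc_head(&size);

	if (!head)
		return 1;
	/* The caller now works with the real size, e.g. to place
	 * skb_shared_info at the end of the buffer.
	 */
	printf("requested 1000, got %u\n", size);
	free(head);
	return 0;
}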
From patchwork Mon Feb 6 17:31:03 2023
X-Patchwork-Id: 13130433
From: Eric Dumazet
To: "David S. Miller", Jakub Kicinski, Paolo Abeni
Cc: netdev@vger.kernel.org, eric.dumazet@gmail.com, Soheil Hassas Yeganeh,
 Eric Dumazet
Date: Mon, 6 Feb 2023 17:31:03 +0000
Message-ID: <20230206173103.2617121-5-edumazet@google.com>
In-Reply-To: <20230206173103.2617121-1-edumazet@google.com>
References: <20230206173103.2617121-1-edumazet@google.com>
Subject: [PATCH v2 net-next 4/4] net: add dedicated kmem_cache for typical/small skb->head

Recent removal of ksize() in alloc_skb() increased performance
because we no longer read the associated struct page.

We have an equivalent cost at kfree_skb() time: kfree(skb->head)
has to access a struct page, often cold in cpu caches, to get
the owning struct kmem_cache.

Considering that many allocations are small (at least for TCP ones),
we can have our own kmem_cache to avoid the cache line miss.
This also saves memory because these small heads are no longer
padded to 1024 bytes.

CONFIG_SLUB=y
$ grep skbuff_small_head /proc/slabinfo
skbuff_small_head   2907   2907    640   51    8 : tunables    0    0    0 : slabdata     57     57      0

CONFIG_SLAB=y
$ grep skbuff_small_head /proc/slabinfo
skbuff_small_head    607    624    640    6    1 : tunables   54   27    8 : slabdata    104    104      5

Notes:

- After Kees Cook patches and this one, we might be able to revert
  commit dbae2b062824 ("net: skb: introduce and use a single page
  frag cache") because GRO_MAX_HEAD is also small.

- This patch is a NOP for CONFIG_SLOB=y builds.
Signed-off-by: Eric Dumazet
Cc: Paolo Abeni
Acked-by: Soheil Hassas Yeganeh
---
 net/core/skbuff.c | 72 +++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 67 insertions(+), 5 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c1232837cd0cb3befce0262fb8fda20272a26d45..bdb1e015e32b9386139e9ad73acd6efb3c357118 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -89,6 +89,34 @@ static struct kmem_cache *skbuff_fclone_cache __ro_after_init;
 #ifdef CONFIG_SKB_EXTENSIONS
 static struct kmem_cache *skbuff_ext_cache __ro_after_init;
 #endif
+
+/* skb_small_head_cache and related code is only supported
+ * for CONFIG_SLAB and CONFIG_SLUB.
+ * As soon as SLOB is removed from the kernel, we can clean up this.
+ */
+#if !defined(CONFIG_SLOB)
+# define HAVE_SKB_SMALL_HEAD_CACHE 1
+#endif
+
+#ifdef HAVE_SKB_SMALL_HEAD_CACHE
+static struct kmem_cache *skb_small_head_cache __ro_after_init;
+
+#define SKB_SMALL_HEAD_SIZE SKB_HEAD_ALIGN(MAX_TCP_HEADER)
+
+/* We want SKB_SMALL_HEAD_CACHE_SIZE to not be a power of two.
+ * This should ensure that SKB_SMALL_HEAD_HEADROOM is a unique
+ * size, and we can differentiate heads from skb_small_head_cache
+ * vs system slabs by looking at their size (skb_end_offset()).
+ */
+#define SKB_SMALL_HEAD_CACHE_SIZE				\
+	(is_power_of_2(SKB_SMALL_HEAD_SIZE) ?			\
+		(SKB_SMALL_HEAD_SIZE + L1_CACHE_BYTES) :	\
+		SKB_SMALL_HEAD_SIZE)
+
+#define SKB_SMALL_HEAD_HEADROOM					\
+	SKB_WITH_OVERHEAD(SKB_SMALL_HEAD_CACHE_SIZE)
+#endif /* HAVE_SKB_SMALL_HEAD_CACHE */
+
 int sysctl_max_skb_frags __read_mostly = MAX_SKB_FRAGS;
 EXPORT_SYMBOL(sysctl_max_skb_frags);

@@ -486,6 +514,23 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
	void *obj;

	obj_size = SKB_HEAD_ALIGN(*size);
+#ifdef HAVE_SKB_SMALL_HEAD_CACHE
+	if (obj_size <= SKB_SMALL_HEAD_CACHE_SIZE &&
+	    !(flags & KMALLOC_NOT_NORMAL_BITS)) {
+
+		/* skb_small_head_cache has non power of two size,
+		 * likely forcing SLUB to use order-3 pages.
+		 * We deliberately attempt a NOMEMALLOC allocation only.
+		 */
+		obj = kmem_cache_alloc_node(skb_small_head_cache,
+					    flags | __GFP_NOMEMALLOC | __GFP_NOWARN,
+					    node);
+		if (obj) {
+			*size = SKB_SMALL_HEAD_CACHE_SIZE;
+			goto out;
+		}
+	}
+#endif
	*size = obj_size = kmalloc_size_roundup(obj_size);
	/*
	 * Try a regular allocation, when that fails and we're not entitled
@@ -805,6 +850,16 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data)
	return page_pool_return_skb_page(virt_to_page(data));
 }

+static void skb_kfree_head(void *head, unsigned int end_offset)
+{
+#ifdef HAVE_SKB_SMALL_HEAD_CACHE
+	if (end_offset == SKB_SMALL_HEAD_HEADROOM)
+		kmem_cache_free(skb_small_head_cache, head);
+	else
+#endif
+		kfree(head);
+}
+
 static void skb_free_head(struct sk_buff *skb)
 {
	unsigned char *head = skb->head;
@@ -814,7 +869,7 @@ static void skb_free_head(struct sk_buff *skb)
			return;
		skb_free_frag(head);
	} else {
-		kfree(head);
+		skb_kfree_head(head, skb_end_offset(skb));
	}
 }

@@ -1997,7 +2052,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
	return 0;

 nofrags:
-	kfree(data);
+	skb_kfree_head(data, size);
 nodata:
	return -ENOMEM;
 }

@@ -4634,6 +4689,13 @@ void __init skb_init(void)
						0,
						SLAB_HWCACHE_ALIGN|SLAB_PANIC,
						NULL);
+#ifdef HAVE_SKB_SMALL_HEAD_CACHE
+	skb_small_head_cache = kmem_cache_create("skbuff_small_head",
+						 SKB_SMALL_HEAD_CACHE_SIZE,
+						 0,
+						 SLAB_HWCACHE_ALIGN | SLAB_PANIC,
+						 NULL);
+#endif
	skb_extensions_init();
 }

@@ -6298,7 +6360,7 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
	if (skb_cloned(skb)) {
		/* drop the old head gracefully */
		if (skb_orphan_frags(skb, gfp_mask)) {
-			kfree(data);
+			skb_kfree_head(data, size);
			return -ENOMEM;
		}
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
@@ -6406,7 +6468,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
	memcpy((struct skb_shared_info *)(data + size),
	       skb_shinfo(skb), offsetof(struct skb_shared_info, frags[0]));
	if (skb_orphan_frags(skb, gfp_mask)) {
-		kfree(data);
+		skb_kfree_head(data, size);
		return -ENOMEM;
	}
	shinfo = (struct skb_shared_info *)(data + size);
@@ -6442,7 +6504,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
		/* skb_frag_unref() is not needed here as shinfo->nr_frags = 0. */
		if (skb_has_frag_list(skb))
			kfree_skb_list(skb_shinfo(skb)->frag_list);
-		kfree(data);
+		skb_kfree_head(data, size);
		return -ENOMEM;
	}
	skb_release_data(skb, SKB_CONSUMED);
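The free path relies on SKB_SMALL_HEAD_HEADROOM being a size no
kmalloc-backed head can have, so a head's origin can be recovered from
skb_end_offset() alone. A user-space model of that discrimination
follows; the 640/320-byte figures mirror the slabinfo output above but
are otherwise illustrative.

/* Model of skb_kfree_head(): a head is routed back to the dedicated
 * cache purely by its usable size, with no extra per-skb state.
 * Numbers are illustrative; the kernel derives them from macros.
 */
#include <stdio.h>

#define SMALL_HEAD_CACHE_SIZE 640u /* matches the slabinfo output above */
#define SHINFO_SIZE           320u /* assumed skb_shared_info overhead */
#define SMALL_HEAD_HEADROOM   (SMALL_HEAD_CACHE_SIZE - SHINFO_SIZE)

static void head_free(void *head, unsigned int end_offset)
{
	if (end_offset == SMALL_HEAD_HEADROOM)
		printf("%p -> kmem_cache_free(skbuff_small_head)\n", head);
	else
		printf("%p -> kfree() to the generic slabs\n", head);
}

int main(void)
{
	char small[SMALL_HEAD_CACHE_SIZE], big[2048];

	head_free(small, SMALL_HEAD_HEADROOM); /* dedicated cache */
	head_free(big, 2048u - SHINFO_SIZE);   /* kmalloc-2048 head */
	return 0;
}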