From patchwork Thu Jan 14 23:54:55 2021
From: Alexander Lobakin <alobakin@pm.me>
Date: Thu, 14 Jan 2021 23:54:55 +0000
To: "David S. Miller", Jakub Kicinski
Cc: Willem de Bruijn, Miaohe Lin, Eric Dumazet, Alexander Lobakin,
 Guillaume Nault, Yunsheng Lin, Florian Westphal, Steffen Klassert,
 Dongseok Yi, Yadu Kishore, Al Viro, Marco Elver, Alexander Duyck,
 "Michael S. Tsirkin", netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Reply-To: Alexander Lobakin <alobakin@pm.me>
Subject: [PATCH net] skbuff: back tiny skbs with kmalloc() in __netdev_alloc_skb() too
Message-ID: <20210114235423.232737-1-alobakin@pm.me>

Commit 3226b158e67c ("net: avoid 32 x truesize under-estimation for
tiny skbs") ensured that skbs with a data size lower than 1025 bytes
are kmalloc'ed, to avoid excessive page frag cache fragmentation and
memory consumption.
However, the same issue can still be triggered manually via
__netdev_alloc_skb(), whose size check was not updated. Mirror the
condition from __napi_alloc_skb() to prevent that.

Fixes: 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny skbs")
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index c1a6f262636a..785daff48030 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -437,7 +437,11 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 
 	len += NET_SKB_PAD;
 
-	if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
+	/* If requested length is either too small or too big,
+	 * we use kmalloc() for skb->head allocation.
+	 */
+	if (len <= SKB_WITH_OVERHEAD(1024) ||
+	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
 		if (!skb)
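
For reference, below is a minimal userspace sketch of the allocation-strategy
choice that the hunk above mirrors from __napi_alloc_skb(). It is an
illustration only, not kernel code: SHINFO_OVERHEAD is an assumed stand-in for
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)), whose real value depends on
the architecture and config, and the real function adds NET_SKB_PAD to len
before this check.

/* Sketch of the kmalloc()-vs-page-frag decision; constants are assumed. */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE		4096u
#define SHINFO_OVERHEAD		 320u	/* assumed stand-in, arch-dependent */
#define SKB_WITH_OVERHEAD(x)	((x) - SHINFO_OVERHEAD)

/* True when __netdev_alloc_skb() falls back to kmalloc()-backed
 * __alloc_skb() instead of serving skb->head from the page frag cache.
 */
static bool uses_kmalloc(unsigned int len, bool reclaim_or_dma)
{
	return len <= SKB_WITH_OVERHEAD(1024) ||
	       len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
	       reclaim_or_dma;
}

int main(void)
{
	/* Tiny request: page-frag backed before the patch, kmalloc'ed
	 * after, so truesize no longer under-reports the pinned memory.
	 */
	printf("len  128 -> kmalloc: %d\n", uses_kmalloc(128, false));
	/* Mid-size request: still served from the per-CPU page frag cache. */
	printf("len 2000 -> kmalloc: %d\n", uses_kmalloc(2000, false));
	/* Oversized request: always kmalloc'ed, as before this patch. */
	printf("len 8000 -> kmalloc: %d\n", uses_kmalloc(8000, false));
	return 0;
}

The lower bound matches __napi_alloc_skb(): a head that fits a kmalloc-1024
slab (payload plus shared-info overhead) is cheaper to kmalloc than to carve
from a page fragment that can pin an entire page while truesize suggests only
a fraction of it is in use.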