From patchwork Mon Jan 11 18:27:21 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12011375
Date: Mon, 11 Jan 2021 18:27:21 +0000
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Eric Dumazet, Edward Cree, Jonathan Lemon, Willem de Bruijn,
    Miaohe Lin, Alexander Lobakin, Steffen Klassert, Guillaume Nault,
    Yadu Kishore, Al Viro, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org
Reply-To: Alexander Lobakin
Subject: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
Message-ID: <20210111182655.12159-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

Inspired by the logic of cpu_map_kthread_run() and _kfree_skb_defer().

Currently, all kinds of skb allocation allocate skbuff_heads one by one
via kmem_cache_alloc(). On the other hand, we have a percpu
napi_alloc_cache that stores skbuff_heads queued up for freeing and
flushes them in bulks. We can use this struct to cache and bulk not
only freeing, but also allocation of new skbuff_heads, as well as to
reuse cached-to-free heads instead of allocating new ones. As accessing
napi_alloc_cache implies NAPI softirq context, do this only for
__napi_alloc_skb() and its derivatives (napi_alloc_skb() and
napi_get_frags()). They have roughly 69 call sites, which is quite a
number.

iperf3 showed a nice bump from 910 to 935 Mbit/s while performing UDP
VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be way bigger
on more powerful hosts and NICs handling tens of Mpps.

Patches 1-2 are preparation steps, while 3-5 do the real work.

Alexander Lobakin (5):
  skbuff: rename fields of struct napi_alloc_cache to be more intuitive
  skbuff: open-code __build_skb() inside __napi_alloc_skb()
  skbuff: reuse skbuff_heads from flush_skb_cache if available
  skbuff: allocate skbuff_heads by bulks instead of one by one
  skbuff: refill skb_cache early from deferred-to-consume entries

 net/core/skbuff.c | 62 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 48 insertions(+), 14 deletions(-)
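The caching-and-bulking idea above can be illustrated with a minimal
userspace sketch. This is an assumption-laden model, not the kernel code:
the struct layout, the `napi_skb_cache_get()`/`napi_skb_cache_put()`
names, and the constants are hypothetical stand-ins, malloc() stands in
for the slab allocator, and the real cache is percpu and softirq-only.
It only shows the shape of the technique: allocation pops a reusable
head from the cache, refilling it in bulk when empty; freeing pushes the
head back instead of returning it to the allocator one by one.

```c
#include <stddef.h>
#include <stdlib.h>

#define NAPI_SKB_CACHE_SIZE 64   /* hypothetical cache capacity */
#define BULK_ALLOC_COUNT    16   /* heads fetched per bulk refill */

struct skbuff_head { char data[192]; };  /* stand-in for struct sk_buff */

struct napi_alloc_cache {
	struct skbuff_head *skb_cache[NAPI_SKB_CACHE_SIZE];
	size_t skb_count;
};

static struct napi_alloc_cache cache;  /* percpu in the real kernel */

/* Allocate one head, reusing a cached one when available. */
struct skbuff_head *napi_skb_cache_get(void)
{
	if (cache.skb_count == 0) {
		/* Bulk refill: one allocator round-trip yields many heads,
		 * modeling kmem_cache_alloc_bulk(). */
		for (size_t i = 0; i < BULK_ALLOC_COUNT; i++) {
			struct skbuff_head *h = malloc(sizeof(*h));

			if (!h)
				break;
			cache.skb_cache[cache.skb_count++] = h;
		}
		if (cache.skb_count == 0)
			return NULL;
	}
	return cache.skb_cache[--cache.skb_count];
}

/* Return a head to the cache; hand it back to the allocator
 * only when the cache is already full. */
void napi_skb_cache_put(struct skbuff_head *h)
{
	if (cache.skb_count < NAPI_SKB_CACHE_SIZE)
		cache.skb_cache[cache.skb_count++] = h;
	else
		free(h);
}
```

A freed head that lands in the cache is handed straight back by the next
allocation, skipping the allocator entirely; that round-trip saving is
what the reuse patches in this series target.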