From patchwork Mon Jan 11 18:29:44 2021
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12011385
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 11 Jan 2021 18:29:44 +0000
To: "David S. Miller", Jakub Kicinski
From: Alexander Lobakin
Cc: Eric Dumazet, Edward Cree, Jonathan Lemon, Willem de Bruijn,
 Miaohe Lin, Alexander Lobakin, Steffen Klassert, Guillaume Nault,
 Yadu Kishore, Al Viro, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 5/5] skbuff: refill skb_cache early from
 deferred-to-consume entries
Message-ID: <20210111182801.12609-5-alobakin@pm.me>
In-Reply-To: <20210111182801.12609-1-alobakin@pm.me>
References: <20210111182655.12159-1-alobakin@pm.me>
 <20210111182801.12609-1-alobakin@pm.me>
X-Mailing-List: netdev@vger.kernel.org

Instead of unconditionally queueing ready-to-consume skbuff_heads
onto flush_skb_cache, feed them back into skb_cache whenever it is
not already full. This greatly reduces the frequency of
kmem_cache_alloc_bulk() calls.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/core/skbuff.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 57a7307689f3..ba0d5611635e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -904,6 +904,11 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
+	if (nc->skb_count < NAPI_SKB_CACHE_SIZE) {
+		nc->skb_cache[nc->skb_count++] = skb;
+		return;
+	}
+
 	/* record skb to CPU local list */
 	nc->flush_skb_cache[nc->flush_skb_count++] = skb;
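
For readers without the rest of the series at hand, below is a
standalone userspace model of the two-tier recycling this hunk
introduces. The struct layout, the NAPI_SKB_CACHE_SIZE value and the
test harness are assumptions made purely for illustration; they are
not the kernel definitions from this series.

/*
 * Userspace model of the skbuff_head recycling added by this patch.
 * NOTE: struct napi_alloc_cache, NAPI_SKB_CACHE_SIZE and main() are
 * illustrative assumptions, not the kernel source.
 */
#include <stdio.h>

#define NAPI_SKB_CACHE_SIZE 64		/* assumed cache depth */

struct sk_buff { int id; };

struct napi_alloc_cache {
	unsigned int skb_count;
	struct sk_buff *skb_cache[NAPI_SKB_CACHE_SIZE];
	unsigned int flush_skb_count;
	struct sk_buff *flush_skb_cache[NAPI_SKB_CACHE_SIZE];
};

static struct napi_alloc_cache nc;

/* Mirrors the patched _kfree_skb_defer(): refill skb_cache first and
 * only spill into flush_skb_cache once the allocation cache is full.
 * (In the kernel, flush_skb_cache is bulk-freed elsewhere when it
 * fills up; this model omits that path.) */
static void kfree_skb_defer_model(struct sk_buff *skb)
{
	if (nc.skb_count < NAPI_SKB_CACHE_SIZE) {
		nc.skb_cache[nc.skb_count++] = skb;
		return;
	}

	nc.flush_skb_cache[nc.flush_skb_count++] = skb;
}

int main(void)
{
	struct sk_buff bufs[80];
	int i;

	for (i = 0; i < 80; i++)
		kfree_skb_defer_model(&bufs[i]);

	/* The first 64 heads are parked for reuse by future allocations;
	 * only the 16 overflow heads would go back to the slab. */
	printf("recycled: %u, queued for flush: %u\n",
	       nc.skb_count, nc.flush_skb_count);
	return 0;
}

The ordering is the point of the patch: a deferred head reaches
flush_skb_cache only after skb_cache is full, so in steady state most
heads are recycled in place and kmem_cache_alloc_bulk() is needed far
less often.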