From patchwork Mon Jun 7 19:02:36 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12304487
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v8 1/5] mm: add a signature in struct page
Date: Mon, 7 Jun 2021 21:02:36 +0200
Message-Id: <20210607190240.36900-2-mcroce@linux.microsoft.com>
In-Reply-To: <20210607190240.36900-1-mcroce@linux.microsoft.com>

From: Matteo Croce

This is needed by the page_pool to avoid recycling a page not allocated
via page_pool.

The page->signature field is aliased to page->lru.next and
page->compound_head, but it can't be set by mistake because the
signature value is a bad pointer, and can't trigger a false positive
in PageTail() because the last bit is 0.

Co-developed-by: Matthew Wilcox (Oracle)
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Matteo Croce
---
 include/linux/mm.h       | 11 ++++++-----
 include/linux/mm_types.h |  7 +++++++
 include/linux/poison.h   |  3 +++
 net/core/page_pool.c     |  6 ++++++
 4 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c274f75efcf9..a0434e8c2617 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,10 +1668,11 @@ struct address_space *page_mapping(struct page *page);
 static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
-	 * Page index cannot be this large so this must be
-	 * a pfmemalloc page.
+	 * lru.next has bit 1 set if the page is allocated from the
+	 * pfmemalloc reserves. Callers may simply overwrite it if
+	 * they do not need to preserve that information.
 	 */
-	return page->index == -1UL;
+	return (uintptr_t)page->lru.next & BIT(1);
 }
 
 /*
@@ -1680,12 +1681,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
  */
 static inline void set_page_pfmemalloc(struct page *page)
 {
-	page->index = -1UL;
+	page->lru.next = (void *)BIT(1);
 }
 
 static inline void clear_page_pfmemalloc(struct page *page)
 {
-	page->index = 0;
+	page->lru.next = NULL;
 }

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5aacc1c10a45..ed6862eacb52 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -96,6 +96,13 @@ struct page {
 			unsigned long private;
 		};
 		struct {	/* page_pool used by netstack */
+			/**
+			 * @pp_magic: magic value to avoid recycling non
+			 * page_pool allocated pages.
+			 */
+			unsigned long pp_magic;
+			struct page_pool *pp;
+			unsigned long _pp_mapping_pad;
 			/**
 			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
diff --git a/include/linux/poison.h b/include/linux/poison.h
index aff1c9250c82..d62ef5a6b4e9 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -78,4 +78,7 @@
 /********** security/ **********/
 #define KEY_DESTROY		0xbd
 
+/********** net/core/page_pool.c **********/
+#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
+
 #endif

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3c4c4c7a0402..e1321bc9d316 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -17,6 +17,7 @@
 #include <linux/dma-direction.h>
 #include <linux/dma-mapping.h>
 #include <linux/page-flags.h>
 #include <linux/mm.h> /* for __put_page() */
+#include <linux/poison.h>
 
 #include <trace/events/page_pool.h>
@@ -221,6 +222,8 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 		return NULL;
 	}
 
+	page->pp_magic |= PP_SIGNATURE;
+
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
@@ -263,6 +266,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 			put_page(page);
 			continue;
 		}
+		page->pp_magic |= PP_SIGNATURE;
 		pool->alloc.cache[pool->alloc.count++] = page;
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
@@ -341,6 +345,8 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 			     DMA_ATTR_SKIP_CPU_SYNC);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
+	page->pp_magic = 0;
+
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
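
A note on why the aliasing in patch 1 is safe: PP_SIGNATURE is chosen so
it can never look like valid page metadata. The user-space sketch below
is illustrative only and is not part of the series; the
POISON_POINTER_DELTA value is an assumption taken from x86_64
(0xdead000000000000) and is architecture-dependent in the kernel.

/* Standalone sketch: PP_SIGNATURE keeps bit 0 clear, so PageTail()
 * (which tests bit 0 of the word aliased by compound_head) can never
 * report a false tail page, and the poison delta puts the value in a
 * non-dereferenceable range, so it is never a valid lru.next pointer.
 */
#include <stdio.h>

#define POISON_POINTER_DELTA	0xdead000000000000UL	/* x86_64; arch-dependent */
#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)

int main(void)
{
	unsigned long sig = PP_SIGNATURE;

	printf("signature:              0x%lx\n", sig);
	printf("bit 0 (tail-page flag): %lu\n", sig & 1UL);        /* 0 */
	printf("bit 1 (pfmemalloc bit): %lu\n", (sig >> 1) & 1UL); /* 0 */
	return 0;
}

Bit 1 is left clear as well: patch 1 moves the pfmemalloc marker into
bit 1 of this same word, and keeping the signature clear of it is what
lets page_pool OR the signature in (page->pp_magic |= PP_SIGNATURE)
without clobbering that flag.
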
From patchwork Mon Jun 7 19:02:37 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12304489
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v8 2/5] skbuff: add a parameter to __skb_frag_unref
Date: Mon, 7 Jun 2021 21:02:37 +0200
Message-Id: <20210607190240.36900-3-mcroce@linux.microsoft.com>
In-Reply-To: <20210607190240.36900-1-mcroce@linux.microsoft.com>

From: Matteo Croce

This is a prerequisite patch: the next one enables recycling of skbs and
fragments. Add an extra argument to __skb_frag_unref() to handle
recycling, and update the current users of the function accordingly.
Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/sky2.c        | 2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c | 2 +-
 include/linux/skbuff.h                     | 8 +++++---
 net/core/skbuff.c                          | 4 ++--
 net/tls/tls_device.c                       | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 324c280cc22c..8b8bff59c8fe 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2503,7 +2503,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag);
+			__skb_frag_unref(frag, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index e35e4d7ef4d1..cea62b8f554c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index dbf820a50a39..7fcfea7e7b21 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3081,10 +3081,12 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: recycle the page if allocated via page_pool
  *
- * Releases a reference on the paged fragment @frag.
+ * Releases a reference on the paged fragment @frag
+ * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
 	put_page(skb_frag_page(frag));
 }
@@ -3098,7 +3100,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3ad22870298c..12b7e90dd2b5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -664,7 +664,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i]);
+		__skb_frag_unref(&shinfo->frags[i], false);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -3495,7 +3495,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom);
+		__skb_frag_unref(fragfrom, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 76a6f8c2eec4..ad11db2c4f63 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -127,7 +127,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i]);
+		__skb_frag_unref(&record->frags[i], false);
 	kfree(record);
 }
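
For call sites outside this series the conversion is purely mechanical.
The snippet below is a hypothetical example (the function is invented
for illustration; only the __skb_frag_unref() signature comes from this
patch) — passing false preserves the old put_page() behavior:

/* Hypothetical out-of-tree caller updated for the new signature. */
static void example_drop_all_frags(struct sk_buff *skb)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i;

	for (i = 0; i < shinfo->nr_frags; i++)
		/* was: __skb_frag_unref(&shinfo->frags[i]); */
		__skb_frag_unref(&shinfo->frags[i], false);
	shinfo->nr_frags = 0;
}
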
From patchwork Mon Jun 7 19:02:38 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12304491
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v8 3/5] page_pool: Allow drivers to hint on SKB recycling
Date: Mon, 7 Jun 2021 21:02:38 +0200
Message-Id: <20210607190240.36900-4-mcroce@linux.microsoft.com>
In-Reply-To: <20210607190240.36900-1-mcroce@linux.microsoft.com>

From: Ilias Apalodimas

Up to now several high speed NICs have custom mechanisms for recycling
the allocated memory they use for their payloads. Our page_pool API
already has recycling capabilities that are always used when we are
running in 'XDP mode'. So let's tweak the API and the kernel network
stack slightly and allow the recycling to happen even during the
standard operation. The API doesn't take into account 'split page'
policies used by those drivers currently, but can be extended once we
have users for that.

The idea is to be able to intercept the packet on skb_release_data().
If it's a buffer coming from our page_pool API, recycle it back to the
pool for further usage; otherwise just release the packet entirely.

To achieve that we introduce a bit in struct sk_buff (pp_recycle:1) and
a field in struct page (page->pp) to store the page_pool pointer.
Storing the information in page->pp allows us to recycle both SKBs and
their fragments. We could have skipped the skb bit entirely, since
identical information can be derived from struct page. However, in an
effort to affect the free path as little as possible, reading a single
bit in the skb, which is already in cache, is better than trying to
derive identical information from the data stored in the page.

The driver or page_pool has to take care of the sync operations on its
own during the buffer recycling, since the buffer is, after opting in
to the recycling, never unmapped.

Since the gain on the drivers depends on the architecture, we are not
enabling recycling by default if the page_pool API is used by a driver.
In order to enable recycling, the driver must call skb_mark_for_recycle()
to store the information we need for recycling in page->pp and set the
recycling bit, or page_pool_store_mem_info() for a fragment.
Co-developed-by: Jesper Dangaard Brouer
Signed-off-by: Jesper Dangaard Brouer
Co-developed-by: Matteo Croce
Signed-off-by: Matteo Croce
Signed-off-by: Ilias Apalodimas
Reported-by: kernel test robot
---
 include/linux/skbuff.h  | 33 ++++++++++++++++++++++++++++++---
 include/net/page_pool.h |  9 +++++++++
 net/core/page_pool.c    | 22 ++++++++++++++++++++++
 net/core/skbuff.c       | 20 ++++++++++++++++----
 4 files changed, 77 insertions(+), 7 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7fcfea7e7b21..b2db9cd9a73f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,7 @@
 #include <linux/in6.h>
 #include <linux/if_packet.h>
 #include <net/flow.h>
+#include <net/page_pool.h>
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 #include <linux/netfilter/nf_conntrack_common.h>
 #endif
@@ -667,6 +668,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@head_frag: skb was allocated from page fragments,
  *		not allocated by kmalloc() or vmalloc().
  *	@pfmemalloc: skbuff was allocated from PFMEMALLOC reserves
+ *	@pp_recycle: mark the packet for recycling instead of freeing (implies
+ *		page_pool support on driver)
  *	@active_extensions: active extensions (skb_ext_id types)
  *	@ndisc_nodetype: router type (from link layer)
  *	@ooo_okay: allow the mapping of a socket to a queue to be changed
@@ -791,10 +794,12 @@ struct sk_buff {
 				fclone:2,
 				peeked:1,
 				head_frag:1,
-				pfmemalloc:1;
+				pfmemalloc:1,
+				pp_recycle:1; /* page_pool recycle indicator */
 #ifdef CONFIG_SKB_EXTENSIONS
 	__u8			active_extensions;
 #endif
+
 	/* fields enclosed in headers_start/headers_end are copied
 	 * using a single memcpy() in __copy_skb_header()
 	 */
@@ -3088,7 +3093,13 @@ static inline void skb_frag_ref(struct sk_buff *skb, int f)
  */
 static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
 {
-	put_page(skb_frag_page(frag));
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && page_pool_return_skb_page(page))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3100,7 +3111,7 @@ static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_unref(struct sk_buff *skb, int f)
 {
-	__skb_frag_unref(&skb_shinfo(skb)->frags[f], false);
+	__skb_frag_unref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 /**
@@ -4699,5 +4710,21 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb)
 #endif
 }
 
+#ifdef CONFIG_PAGE_POOL
+static inline void skb_mark_for_recycle(struct sk_buff *skb, struct page *page,
+					struct page_pool *pp)
+{
+	skb->pp_recycle = 1;
+	page_pool_store_mem_info(page, pp);
+}
+#endif
+
+static inline bool skb_pp_recycle(struct sk_buff *skb, void *data)
+{
+	if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
+		return false;
+	return page_pool_return_skb_page(virt_to_page(data));
+}
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b4b6de909c93..3dd62dd73027 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -146,6 +146,8 @@ inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
 	return pool->p.dma_dir;
 }
 
+bool page_pool_return_skb_page(struct page *page);
+
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 #ifdef CONFIG_PAGE_POOL
@@ -251,4 +253,11 @@ static inline void page_pool_ring_unlock(struct page_pool *pool)
 	spin_unlock_bh(&pool->ring.producer_lock);
 }
 
+/* Store mem_info on struct page and use it while recycling skb frags */
+static inline
+void page_pool_store_mem_info(struct page *page, struct page_pool *pp)
+{
+	page->pp = pp;
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e1321bc9d316..5e4eb45b139c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -628,3 +628,25 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+bool page_pool_return_skb_page(struct page *page)
+{
+	struct page_pool *pp;
+
+	page = compound_head(page);
+	if (unlikely(page->pp_magic != PP_SIGNATURE))
+		return false;
+
+	pp = page->pp;
+
+	/* Driver set this to memory recycling info. Reset it on recycle.
+	 * This will *not* work for NIC using a split-page memory model.
+	 * The page will be returned to the pool here regardless of the
+	 * 'flipped' fragment being in use or not.
+	 */
+	page->pp = NULL;
+	page_pool_put_full_page(pp, page, false);
+
+	return true;
+}
+EXPORT_SYMBOL(page_pool_return_skb_page);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 12b7e90dd2b5..a0b1d4847efe 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -70,6 +70,7 @@
 #include <net/xfrm.h>
 #include <net/mpls.h>
 #include <net/mptcp.h>
+#include <net/page_pool.h>
 
 #include <linux/uaccess.h>
 #include <trace/events/skb.h>
@@ -645,10 +646,13 @@ static void skb_free_head(struct sk_buff *skb)
 {
 	unsigned char *head = skb->head;
 
-	if (skb->head_frag)
+	if (skb->head_frag) {
+		if (skb_pp_recycle(skb, head))
+			return;
 		skb_free_frag(head);
-	else
+	} else {
 		kfree(head);
+	}
 }
 
 static void skb_release_data(struct sk_buff *skb)
@@ -664,7 +668,7 @@ static void skb_release_data(struct sk_buff *skb)
 	skb_zcopy_clear(skb, true);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
 
 	if (shinfo->frag_list)
 		kfree_skb_list(shinfo->frag_list);
@@ -1046,6 +1050,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)
 	n->nohdr = 0;
 	n->peeked = 0;
 	C(pfmemalloc);
+	C(pp_recycle);
 	n->destructor = NULL;
 	C(tail);
 	C(end);
@@ -3495,7 +3500,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, false);
+		__skb_frag_unref(fragfrom, skb->pp_recycle);
 	}
 
 	/* Reposition in the original skb */
@@ -5285,6 +5290,13 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	if (skb_cloned(to))
 		return false;
 
+	/* The page pool signature of struct page will eventually figure out
+	 * which pages can be recycled or not but for now let's prohibit slab
+	 * allocated and page_pool allocated SKBs from being coalesced.
+	 */
+	if (to->pp_recycle != from->pp_recycle)
+		return false;
+
 	if (len <= skb_tailroom(to)) {
 		if (len)
 			BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len));
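
From a driver author's point of view, the opt-in added by this patch
looks like the sketch below. It is a minimal, hypothetical RX path, not
code from this series: example_rxq, its pool member and
example_build_rx_skb are invented names; skb_mark_for_recycle() and
page_pool_store_mem_info() are the real entry points introduced above.

/* Hypothetical page_pool-backed RX path showing the recycling opt-in. */
struct example_rxq {
	struct page_pool *pool;
	/* ... */
};

static struct sk_buff *example_build_rx_skb(struct example_rxq *rxq,
					    void *data, int len,
					    struct page *frag_page,
					    int frag_off, int frag_len)
{
	struct sk_buff *skb = build_skb(data, PAGE_SIZE);

	if (!skb)
		return NULL;
	skb_put(skb, len);

	/* Opt in: sets skb->pp_recycle and records page->pp for the
	 * linear area, so the free path can hand it back to the pool.
	 */
	skb_mark_for_recycle(skb, virt_to_page(data), rxq->pool);

	if (frag_len) {
		skb_add_rx_frag(skb, 0, frag_page, frag_off, frag_len,
				PAGE_SIZE);
		/* Fragments only need the page -> pool association;
		 * the skb bit was already set above.
		 */
		page_pool_store_mem_info(frag_page, rxq->pool);
	}
	return skb;
}

Note the asymmetry the commit message describes: after opting in, the
buffer is never unmapped, so any DMA sync becomes the driver's or the
pool's responsibility.
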
From patchwork Mon Jun 7 19:02:39 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12304493
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v8 4/5] mvpp2: recycle buffers
Date: Mon, 7 Jun 2021 21:02:39 +0200
Message-Id: <20210607190240.36900-5-mcroce@linux.microsoft.com>
In-Reply-To: <20210607190240.36900-1-mcroce@linux.microsoft.com>

From: Matteo Croce

Use the new recycling API for page_pool.
In a drop-rate test, the packet rate is almost doubled,
from 1110 Kpps to 2128 Kpps.

perf top on a stock system shows:

Overhead  Shared Object  Symbol
  34.88%  [kernel]       [k] page_pool_release_page
   8.06%  [kernel]       [k] free_unref_page
   6.42%  [mvpp2]        [k] mvpp2_rx
   6.07%  [kernel]       [k] eth_type_trans
   5.18%  [kernel]       [k] __netif_receive_skb_core
   4.95%  [kernel]       [k] build_skb
   4.88%  [kernel]       [k] kmem_cache_free
   3.97%  [kernel]       [k] kmem_cache_alloc
   3.45%  [kernel]       [k] dev_gro_receive
   2.73%  [kernel]       [k] page_frag_free
   2.07%  [kernel]       [k] __alloc_pages_bulk
   1.99%  [kernel]       [k] arch_local_irq_save
   1.84%  [kernel]       [k] skb_release_data
   1.20%  [kernel]       [k] netif_receive_skb_list_internal

With packet rate stable at 1100 Kpps:

tx: 0 bps 0 pps rx: 532.7 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.6 Mbps 1110 Kpps
tx: 0 bps 0 pps rx: 532.4 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 532.1 Mbps 1109 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps
tx: 0 bps 0 pps rx: 531.9 Mbps 1108 Kpps

And this is the same output with recycling enabled:

Overhead  Shared Object  Symbol
  12.91%  [kernel]       [k] eth_type_trans
  12.54%  [mvpp2]        [k] mvpp2_rx
   9.67%  [kernel]       [k] build_skb
   9.63%  [kernel]       [k] __netif_receive_skb_core
   8.44%  [kernel]       [k] page_pool_put_page
   8.07%  [kernel]       [k] kmem_cache_free
   7.79%  [kernel]       [k] kmem_cache_alloc
   6.86%  [kernel]       [k] dev_gro_receive
   3.19%  [kernel]       [k] skb_release_data
   2.41%  [kernel]       [k] netif_receive_skb_list_internal
   2.18%  [kernel]       [k] page_pool_refill_alloc_cache
   1.76%  [kernel]       [k] napi_gro_receive
   1.61%  [kernel]       [k] kfree_skb
   1.20%  [kernel]       [k] dma_sync_single_for_device
   1.16%  [mvpp2]        [k] mvpp2_poll
   1.12%  [mvpp2]        [k] mvpp2_read

With packet rate above 2100 Kpps:

tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2127 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1021 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2128 Kpps
tx: 0 bps 0 pps rx: 1022 Mbps 2129 Kpps

The major performance increase is explained by the fact that the most
CPU-consuming functions (page_pool_release_page, page_frag_free and
free_unref_page) are no longer called on a per-packet basis.
The test was done by sending to the MACCHIATObin 64-byte Ethernet
frames with an invalid ethertype, so the packets are dropped early in
the RX path.

Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index d4fb620f53f3..b1d186abcc6c 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3997,7 +3997,7 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 		}
 
 		if (pp)
-			page_pool_release_page(pp, virt_to_page(data));
+			skb_mark_for_recycle(skb, virt_to_page(data), pp);
 		else
 			dma_unmap_single_attrs(dev->dev.parent, dma_addr,
 					       bm_pool->buf_size, DMA_FROM_DEVICE,
From patchwork Mon Jun 7 19:02:40 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12304495
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v8 5/5] mvneta: recycle buffers
Date: Mon, 7 Jun 2021 21:02:40 +0200
Message-Id: <20210607190240.36900-6-mcroce@linux.microsoft.com>
In-Reply-To: <20210607190240.36900-1-mcroce@linux.microsoft.com>

From: Matteo Croce

Use the new recycling API for page_pool.
In a drop-rate test, the packet rate increased by 10%,
from 296 Kpps to 326 Kpps.

perf top on a stock system shows:

Overhead  Shared Object  Symbol
  23.66%  [kernel]       [k] __pi___inval_dcache_area
  22.85%  [mvneta]       [k] mvneta_rx_swbm
   7.54%  [kernel]       [k] kmem_cache_alloc
   6.49%  [kernel]       [k] eth_type_trans
   3.94%  [kernel]       [k] dev_gro_receive
   3.91%  [kernel]       [k] __netif_receive_skb_core
   3.91%  [kernel]       [k] kmem_cache_free
   3.76%  [kernel]       [k] page_pool_release_page
   3.56%  [kernel]       [k] free_unref_page
   2.40%  [kernel]       [k] build_skb
   1.49%  [kernel]       [k] skb_release_data
   1.45%  [kernel]       [k] __alloc_pages_bulk
   1.30%  [kernel]       [k] page_frag_free

And this is the same output with recycling enabled:

Overhead  Shared Object  Symbol
  26.41%  [kernel]       [k] __pi___inval_dcache_area
  25.00%  [mvneta]       [k] mvneta_rx_swbm
   8.14%  [kernel]       [k] kmem_cache_alloc
   6.84%  [kernel]       [k] eth_type_trans
   4.44%  [kernel]       [k] __netif_receive_skb_core
   4.38%  [kernel]       [k] kmem_cache_free
   4.16%  [kernel]       [k] dev_gro_receive
   3.21%  [kernel]       [k] page_pool_put_page
   2.41%  [kernel]       [k] build_skb
   1.82%  [kernel]       [k] skb_release_data
   1.61%  [kernel]       [k] napi_gro_receive
   1.25%  [kernel]       [k] page_pool_refill_alloc_cache
   1.16%  [kernel]       [k] __netif_receive_skb_list_core

We can see that page_pool_release_page(), free_unref_page() and
__alloc_pages_bulk() are no longer at the top of the list when
receiving traffic.

The test was done with mausezahn on the TX side with 64-byte raw
Ethernet frames.
Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..c15ce06427d0 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, rxq->page_pool, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
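
Taken together, the free-path decision this series introduces can be
condensed as follows. This is an illustrative paraphrase of
skb_pp_recycle() and page_pool_return_skb_page() from patch 3, not a
new function; it shows how the skb bit (patch 3) and the page signature
(patch 1) must both agree before a page re-enters a pool.

/* Condensed paraphrase of the recycle decision (illustrative only). */
static bool example_try_recycle(struct sk_buff *skb, struct page *page)
{
	struct page_pool *pp;

	if (!skb->pp_recycle)			/* driver never opted in */
		return false;

	page = compound_head(page);
	if (page->pp_magic != PP_SIGNATURE)	/* not a page_pool page */
		return false;

	pp = page->pp;
	page->pp = NULL;			/* consume the recycling hint */
	page_pool_put_full_page(pp, page, false);
	return true;
}

Either check failing falls back to the ordinary put_page()/kfree path,
which is what keeps stray pages from ever entering a pool.
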