From patchwork Wed Mar 10 10:46:18 2021
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 12127713
From: Mel Gorman
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig, LKML,
    Linux-Net, Linux-MM, Linux-NFS, Mel Gorman
Subject: [PATCH 5/5] net: page_pool: use alloc_pages_bulk in refill code path
Date: Wed, 10 Mar 2021 10:46:18 +0000
Message-Id: <20210310104618.22750-6-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210310104618.22750-1-mgorman@techsingularity.net>
References: <20210310104618.22750-1-mgorman@techsingularity.net>
MIME-Version: 1.0
From: Jesper Dangaard Brouer

There are cases where the page_pool needs to refill with pages from the
page allocator. Some workloads cause the page_pool to release pages
instead of recycling them. For these workloads it can improve performance
to bulk allocate pages from the page allocator to refill the alloc cache.

For an XDP-redirect workload with the 100G mlx5 driver (which uses
page_pool), redirecting xdp_frame packets into a veth that does XDP_PASS
to create an SKB from the xdp_frame, the page cannot be returned to the
page_pool. In this case, we saw[1] an improvement of 18.8% from using
the alloc_pages_bulk API (3,677,958 pps -> 4,368,926 pps).

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool06_alloc_pages_bulk.org

Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Mel Gorman
Reviewed-by: Ilias Apalodimas
---
 net/core/page_pool.c | 65 ++++++++++++++++++++++++++++----------------
 1 file changed, 41 insertions(+), 24 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 40e1b2beaa6c..ec51bd9454e2 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -208,44 +208,61 @@ noinline
 static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 						 gfp_t _gfp)
 {
+	const int bulk = PP_ALLOC_CACHE_REFILL;
+	struct page *page, *next, *first_page;
 	unsigned int pp_flags = pool->p.flags;
-	struct page *page;
+	unsigned int pp_order = pool->p.order;
+	int pp_nid = pool->p.nid;
+	LIST_HEAD(page_list);
 	gfp_t gfp = _gfp;
 
-	/* We could always set __GFP_COMP, and avoid this branch, as
-	 * prep_new_page() can handle order-0 with __GFP_COMP.
-	 */
-	if (pool->p.order)
+	/* Don't support bulk alloc for high-order pages */
+	if (unlikely(pp_order)) {
 		gfp |= __GFP_COMP;
+		first_page = alloc_pages_node(pp_nid, gfp, pp_order);
+		if (unlikely(!first_page))
+			return NULL;
+		goto out;
+	}
 
-	/* FUTURE development:
-	 *
-	 * Current slow-path essentially falls back to single page
-	 * allocations, which doesn't improve performance. This code
-	 * need bulk allocation support from the page allocator code.
-	 */
-
-	/* Cache was empty, do real allocation */
-#ifdef CONFIG_NUMA
-	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
-#else
-	page = alloc_pages(gfp, pool->p.order);
-#endif
-	if (!page)
+	if (unlikely(!__alloc_pages_bulk_nodemask(gfp, pp_nid, NULL,
+						  bulk, &page_list)))
 		return NULL;
 
-	if ((pp_flags & PP_FLAG_DMA_MAP) &&
-	    unlikely(!page_pool_dma_map(pool, page))) {
-		put_page(page);
+	/* First page is extracted and returned to caller */
+	first_page = list_first_entry(&page_list, struct page, lru);
+	list_del(&first_page->lru);
+
+	/* Remaining pages store in alloc.cache */
+	list_for_each_entry_safe(page, next, &page_list, lru) {
+		list_del(&page->lru);
+		if ((pp_flags & PP_FLAG_DMA_MAP) &&
+		    unlikely(!page_pool_dma_map(pool, page))) {
+			put_page(page);
+			continue;
+		}
+		if (likely(pool->alloc.count < PP_ALLOC_CACHE_SIZE)) {
+			pool->alloc.cache[pool->alloc.count++] = page;
+			pool->pages_state_hold_cnt++;
+			trace_page_pool_state_hold(pool, page,
+						   pool->pages_state_hold_cnt);
+		} else {
+			put_page(page);
+		}
+	}
+out:
+	if (pp_flags & PP_FLAG_DMA_MAP &&
+	    unlikely(!page_pool_dma_map(pool, first_page))) {
+		put_page(first_page);
 		return NULL;
 	}
 
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
-	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
+	trace_page_pool_state_hold(pool, first_page, pool->pages_state_hold_cnt);
 
 	/* When page just alloc'ed is should/must have refcnt 1. */
-	return page;
+	return first_page;
 }
 
 /* For using page_pool replace: alloc_pages() API calls, but provide
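
As an illustrative aside (not part of the patch), the refill pattern the hunk
implements is: bulk allocate a batch, hand the first page straight to the
caller, and park the remainder in the alloc cache. Below is a minimal
userspace C sketch of that pattern under simplified assumptions; every name
in it (fake_bulk_alloc, pool_cache, pool_alloc_slow, REFILL_BULK) is
hypothetical and stands in for the real page allocator and page_pool
structures, so it is a model of the idea rather than the kernel code itself.

/*
 * Userspace model of the bulk-refill slow path: allocate a batch in one
 * call, return the first buffer, cache the rest for later fast-path use.
 * All names are hypothetical; this is not the kernel API.
 */
#include <stdio.h>
#include <stdlib.h>

#define CACHE_SIZE  128
#define REFILL_BULK  64

struct pool_cache {
	void *slot[CACHE_SIZE];
	int count;
};

/* Stand-in for the bulk page allocator: fills 'out' with up to 'n'
 * buffers and returns how many were actually allocated. */
static int fake_bulk_alloc(void **out, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		out[i] = malloc(4096);
		if (!out[i])
			break;
	}
	return i;
}

/* Slow path: the cache is empty, so refill it with one bulk call and
 * hand the first buffer back to the caller. */
static void *pool_alloc_slow(struct pool_cache *cache)
{
	void *batch[REFILL_BULK];
	int got, i;

	got = fake_bulk_alloc(batch, REFILL_BULK);
	if (!got)
		return NULL;

	/* Remaining buffers are stored in the cache for later requests. */
	for (i = 1; i < got; i++) {
		if (cache->count < CACHE_SIZE)
			cache->slot[cache->count++] = batch[i];
		else
			free(batch[i]);	/* cache full: release it */
	}

	return batch[0];	/* first buffer goes to the caller */
}

int main(void)
{
	struct pool_cache cache = { .count = 0 };
	void *buf = pool_alloc_slow(&cache);

	printf("got %p, %d buffers cached for later\n", buf, cache.count);
	return 0;
}

The point of the pattern, both in the sketch and in the patch, is that the
cost of entering the allocator's slow path is paid once per batch instead of
once per page, which is where the reported 18.8% pps improvement comes from.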