From patchwork Thu Jan 5 21:46:08 2023
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 01/24] netmem: Create new type
Date: Thu, 5 Jan 2023 21:46:08 +0000
Message-Id: <20230105214631.3939268-2-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

As part of simplifying struct page, create a new netmem type which
mirrors the page_pool members in struct page.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Acked-by: Ilias Apalodimas
---
 Documentation/networking/page_pool.rst |  5 +++
 include/net/page_pool.h                | 46 ++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 5db8c263b0c6..2c3c81473b97 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -221,3 +221,8 @@ Driver unload
     /* Driver unload */
     page_pool_put_full_page(page_pool, page, false);
     xdp_rxq_info_unreg(&xdp_rxq);
+
+Functions and structures
+========================
+
+.. kernel-doc:: include/net/page_pool.h
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 813c93499f20..cbea4df54918 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -50,6 +50,52 @@
 				  PP_FLAG_DMA_SYNC_DEV |\
 				  PP_FLAG_PAGE_FRAG)
 
+/**
+ * struct netmem - A memory allocation from a &struct page_pool.
+ * @flags: The same as the page flags.  Do not use directly.
+ * @pp_magic: Magic value to avoid recycling non page_pool allocated pages.
+ * @pp: The page pool this netmem was allocated from.
+ * @dma_addr: Call netmem_get_dma_addr() to read this value.
+ * @dma_addr_upper: Might need to be 64-bit on 32-bit architectures.
+ * @pp_frag_count: For frag page support, not supported in 32-bit
+ *   architectures with 64-bit DMA.
+ * @_mapcount: Do not access this member directly.
+ * @_refcount: Do not access this member directly.  Read it using
+ *   netmem_ref_count() and manipulate it with netmem_get() and netmem_put().
+ *
+ * This struct overlays struct page for now.  Do not modify without a
+ * good understanding of the issues.
+ */
+struct netmem {
+	unsigned long flags;
+	unsigned long pp_magic;
+	struct page_pool *pp;
+	/* private: no need to document this padding */
+	unsigned long _pp_mapping_pad;	/* aliases with folio->mapping */
+	/* public: */
+	unsigned long dma_addr;
+	union {
+		unsigned long dma_addr_upper;
+		atomic_long_t pp_frag_count;
+	};
+	atomic_t _mapcount;
+	atomic_t _refcount;
+};
+
+#define NETMEM_MATCH(pg, nm)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct netmem, nm))
+NETMEM_MATCH(flags, flags);
+NETMEM_MATCH(lru, pp_magic);
+NETMEM_MATCH(pp, pp);
+NETMEM_MATCH(mapping, _pp_mapping_pad);
+NETMEM_MATCH(dma_addr, dma_addr);
+NETMEM_MATCH(dma_addr_upper, dma_addr_upper);
+NETMEM_MATCH(pp_frag_count, pp_frag_count);
+NETMEM_MATCH(_mapcount, _mapcount);
+NETMEM_MATCH(_refcount, _refcount);
+#undef NETMEM_MATCH
+static_assert(sizeof(struct netmem) <= sizeof(struct page));
+
 /*
  * Fast allocation side cache array/stack
  *
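A short illustration of what the build-time checks buy (editorial note,
not part of the patch): each NETMEM_MATCH() invocation expands to a
static_assert that pins a netmem field to the offset of the struct page
field it overlays, so a future reshuffle of struct page fails to compile
instead of silently corrupting page_pool state.  For example:

	NETMEM_MATCH(lru, pp_magic);
	/* expands to: */
	static_assert(offsetof(struct page, lru) ==
		      offsetof(struct netmem, pp_magic));

The final static_assert additionally guarantees the overlay never grows
beyond the struct page it aliases.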
From patchwork Thu Jan 5 21:46:09 2023

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 02/24] netmem: Add utility functions
Date: Thu, 5 Jan 2023 21:46:09 +0000
Message-Id: <20230105214631.3939268-3-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

netmem_page() is defined this way to preserve constness.  page_netmem()
doesn't call compound_head() because netmem users always use the head
page; it does include a debugging assert to check that it's true.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
---
 include/net/page_pool.h | 59 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index cbea4df54918..84b4ea8af015 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -96,6 +96,65 @@ NETMEM_MATCH(_refcount, _refcount);
 #undef NETMEM_MATCH
 static_assert(sizeof(struct netmem) <= sizeof(struct page));
 
+#define netmem_page(nmem) (_Generic((*nmem),				\
+	const struct netmem:	(const struct page *)nmem,		\
+	struct netmem:		(struct page *)nmem))
+
+static inline struct netmem *page_netmem(struct page *page)
+{
+	VM_BUG_ON_PAGE(PageTail(page), page);
+	return (struct netmem *)page;
+}
+
+static inline unsigned long netmem_pfn(const struct netmem *nmem)
+{
+	return page_to_pfn(netmem_page(nmem));
+}
+
+static inline unsigned long netmem_nid(const struct netmem *nmem)
+{
+	return page_to_nid(netmem_page(nmem));
+}
+
+static inline struct netmem *virt_to_netmem(const void *x)
+{
+	return page_netmem(virt_to_head_page(x));
+}
+
+static inline void *netmem_to_virt(const struct netmem *nmem)
+{
+	return page_to_virt(netmem_page(nmem));
+}
+
+static inline void *netmem_address(const struct netmem *nmem)
+{
+	return page_address(netmem_page(nmem));
+}
+
+static inline int netmem_ref_count(const struct netmem *nmem)
+{
+	return page_ref_count(netmem_page(nmem));
+}
+
+static inline void netmem_get(struct netmem *nmem)
+{
+	struct folio *folio = (struct folio *)nmem;
+
+	folio_get(folio);
+}
+
+static inline void netmem_put(struct netmem *nmem)
+{
+	struct folio *folio = (struct folio *)nmem;
+
+	folio_put(folio);
+}
+
+static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
+{
+	return nmem->pp_magic & BIT(1);
+}
+
 /*
  * Fast allocation side cache array/stack
  *
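For illustration, a hypothetical caller of the helpers above (the
function and variable names here are made up, not taken from the
series); it shows that page_netmem() only accepts head pages and that
netmem_page() preserves constness via _Generic:

	/* Hypothetical usage sketch, assuming the helpers added above. */
	static void netmem_example(struct page *page)
	{
		struct netmem *nmem = page_netmem(page);  /* asserts !PageTail(page) */
		const struct netmem *cnmem = nmem;

		/* _Generic selects the const branch, so no qualifier is lost */
		const struct page *cpage = netmem_page(cnmem);

		pr_debug("pfn=%lx va=%p refs=%d\n", netmem_pfn(nmem),
			 netmem_address(nmem), netmem_ref_count(nmem));
		(void)cpage;
	}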
From patchwork Thu Jan 5 21:46:10 2023

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 03/24] page_pool: Add netmem_set_dma_addr() and netmem_get_dma_addr()
Date: Thu, 5 Jan 2023 21:46:10 +0000
Message-Id: <20230105214631.3939268-4-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
Turn page_pool_set_dma_addr() and page_pool_get_dma_addr() into
wrappers.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 84b4ea8af015..196b585763d9 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -449,21 +449,31 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem)
 {
-	dma_addr_t ret = page->dma_addr;
+	dma_addr_t ret = nmem->dma_addr;
 
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
-		ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16;
+		ret |= (dma_addr_t)nmem->dma_addr_upper << 16 << 16;
 
 	return ret;
 }
 
-static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+{
+	return netmem_get_dma_addr(page_netmem(page));
+}
+
+static inline void netmem_set_dma_addr(struct netmem *nmem, dma_addr_t addr)
 {
-	page->dma_addr = addr;
+	nmem->dma_addr = addr;
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
-		page->dma_addr_upper = upper_32_bits(addr);
+		nmem->dma_addr_upper = upper_32_bits(addr);
+}
+
+static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+	netmem_set_dma_addr(page_netmem(page), addr);
 }
 
 static inline bool is_page_pool_compiled_in(void)
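A worked example of the split storage (hypothetical address, shown for
illustration only): on a 32-bit kernel with a 64-bit dma_addr_t,
PAGE_POOL_DMA_USE_PP_FRAG_COUNT is true and the address is kept in two
halves:

	/*
	 * netmem_set_dma_addr(nmem, 0x1234567000ULL);
	 *	nmem->dma_addr       = 0x34567000;  // truncated to unsigned long
	 *	nmem->dma_addr_upper = 0x12;        // upper_32_bits(addr)
	 *
	 * netmem_get_dma_addr(nmem)
	 *	== 0x34567000 | ((dma_addr_t)0x12 << 16 << 16)
	 *	== 0x1234567000
	 */

The "<< 16 << 16" is presumably written that way so the expression stays
well-defined even when dma_addr_t is only 32 bits wide and this branch
is compiled but never taken.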
From patchwork Thu Jan 5 21:46:11 2023

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 04/24] page_pool: Convert page_pool_release_page() to page_pool_release_netmem()
Date: Thu, 5 Jan 2023 21:46:11 +0000
Message-Id: <20230105214631.3939268-5-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
Also convert page_pool_clear_pp_info() and trace_page_pool_state_release()
to take a netmem.  Include a wrapper for page_pool_release_page() to
avoid converting all callers.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h          | 14 ++++++++++----
 include/trace/events/page_pool.h | 14 +++++++-------
 net/core/page_pool.c             | 18 +++++++++---------
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 196b585763d9..480baa22bc50 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -18,7 +18,7 @@
  *
  * API keeps track of in-flight pages, in-order to let API user know
  * when it is safe to dealloactor page_pool object.  Thus, API users
- * must make sure to call page_pool_release_page() when a page is
+ * must make sure to call page_pool_release_netmem() when a page is
  * "leaving" the page_pool.  Or call page_pool_put_page() where
  * appropiate.  For maintaining correct accounting.
  *
@@ -354,7 +354,7 @@ struct xdp_mem_info;
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
 			   struct xdp_mem_info *mem);
-void page_pool_release_page(struct page_pool *pool, struct page *page);
+void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem);
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			     int count);
 #else
@@ -367,8 +367,8 @@ static inline void page_pool_use_xdp_mem(struct page_pool *pool,
 					 struct xdp_mem_info *mem)
 {
 }
-static inline void page_pool_release_page(struct page_pool *pool,
-					  struct page *page)
+static inline void page_pool_release_netmem(struct page_pool *pool,
+					    struct netmem *nmem)
 {
 }
@@ -378,6 +378,12 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 #endif
 
+static inline void page_pool_release_page(struct page_pool *pool,
+					  struct page *page)
+{
+	page_pool_release_netmem(pool, page_netmem(page));
+}
+
 void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index ca534501158b..113aad0c9e5b 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -42,26 +42,26 @@ TRACE_EVENT(page_pool_release,
 TRACE_EVENT(page_pool_state_release,
 
 	TP_PROTO(const struct page_pool *pool,
-		 const struct page *page, u32 release),
+		 const struct netmem *nmem, u32 release),
 
-	TP_ARGS(pool, page, release),
+	TP_ARGS(pool, nmem, release),
 
 	TP_STRUCT__entry(
 		__field(const struct page_pool *, pool)
-		__field(const struct page *, page)
+		__field(const struct netmem *, nmem)
 		__field(u32, release)
 		__field(unsigned long, pfn)
 	),
 
 	TP_fast_assign(
 		__entry->pool = pool;
-		__entry->page = page;
+		__entry->nmem = nmem;
 		__entry->release = release;
-		__entry->pfn = page_to_pfn(page);
+		__entry->pfn = netmem_pfn(nmem);
 	),
 
-	TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
-		  __entry->pool, __entry->page, __entry->pfn, __entry->release)
+	TP_printk("page_pool=%p nmem=%p pfn=0x%lx release=%u",
+		  __entry->pool, __entry->nmem, __entry->pfn, __entry->release)
 );
 
 TRACE_EVENT(page_pool_state_hold,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9b203d8660e4..437241aba5a7 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -336,10 +336,10 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 		pool->p.init_callback(page, pool->p.init_arg);
 }
 
-static void page_pool_clear_pp_info(struct page *page)
+static void page_pool_clear_pp_info(struct netmem *nmem)
 {
-	page->pp_magic = 0;
-	page->pp = NULL;
+	nmem->pp_magic = 0;
+	nmem->pp = NULL;
 }
 
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
@@ -467,7 +467,7 @@ static s32 page_pool_inflight(struct page_pool *pool)
  * a regular page (that will eventually be returned to the normal
  * page-allocator via put_page).
  */
-void page_pool_release_page(struct page_pool *pool, struct page *page)
+void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem)
 {
 	dma_addr_t dma;
 	int count;
@@ -478,23 +478,23 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 		 */
 		goto skip_dma_unmap;
 
-	dma = page_pool_get_dma_addr(page);
+	dma = netmem_get_dma_addr(nmem);
 
 	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC);
-	page_pool_set_dma_addr(page, 0);
+	netmem_set_dma_addr(nmem, 0);
 skip_dma_unmap:
-	page_pool_clear_pp_info(page);
+	page_pool_clear_pp_info(nmem);
 
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
 	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
-	trace_page_pool_state_release(pool, page, count);
+	trace_page_pool_state_release(pool, nmem, count);
 }
-EXPORT_SYMBOL(page_pool_release_page);
+EXPORT_SYMBOL(page_pool_release_netmem);
 
 /* Return a page to the page allocator, cleaning up our state */
 static void page_pool_return_page(struct page_pool *pool, struct page *page)
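The inline wrapper keeps the old entry point alive, so a driver that has
not been converted yet continues to compile and now funnels into the
netmem version.  A hypothetical (made-up) driver teardown path, shown
only to illustrate that existing callers need no change:

	static void example_unmap_rx(struct page_pool *pp, struct page *page)
	{
		/* Same call as before this patch; internally it now does
		 * page_pool_release_netmem(pp, page_netmem(page)).
		 */
		page_pool_release_page(pp, page);
	}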
From patchwork Thu Jan 5 21:46:12 2023

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 05/24] page_pool: Start using netmem in allocation path.
Date: Thu, 5 Jan 2023 21:46:12 +0000
Message-Id: <20230105214631.3939268-6-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Convert __page_pool_alloc_page_order() and __page_pool_alloc_pages_slow()
to use netmem internally.  This removes a couple of calls to
compound_head() that are hidden inside put_page().  Convert
trace_page_pool_state_hold(), page_pool_dma_map() and
page_pool_set_pp_info() to take a netmem argument.

Saves 83 bytes of text in __page_pool_alloc_page_order() and 98 in
__page_pool_alloc_pages_slow() for a total of 181 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/trace/events/page_pool.h | 14 +++++------
 net/core/page_pool.c             | 42 +++++++++++++++++---------------
 2 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index 113aad0c9e5b..d1237a7ce481 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -67,26 +67,26 @@ TRACE_EVENT(page_pool_state_release,
 TRACE_EVENT(page_pool_state_hold,
 
 	TP_PROTO(const struct page_pool *pool,
-		 const struct page *page, u32 hold),
+		 const struct netmem *nmem, u32 hold),
 
-	TP_ARGS(pool, page, hold),
+	TP_ARGS(pool, nmem, hold),
 
 	TP_STRUCT__entry(
 		__field(const struct page_pool *, pool)
-		__field(const struct page *, page)
+		__field(const struct netmem *, nmem)
 		__field(u32, hold)
 		__field(unsigned long, pfn)
 	),
 
 	TP_fast_assign(
 		__entry->pool = pool;
-		__entry->page = page;
+		__entry->nmem = nmem;
 		__entry->hold = hold;
-		__entry->pfn = page_to_pfn(page);
+		__entry->pfn = netmem_pfn(nmem);
 	),
 
-	TP_printk("page_pool=%p page=%p pfn=0x%lx hold=%u",
-		  __entry->pool, __entry->page, __entry->pfn, __entry->hold)
+	TP_printk("page_pool=%p netmem=%p pfn=0x%lx hold=%u",
+		  __entry->pool, __entry->nmem, __entry->pfn, __entry->hold)
 );
 
 TRACE_EVENT(page_pool_update_nid,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 437241aba5a7..4e985502c569 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -304,8 +304,9 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					  pool->p.dma_dir);
 }
 
-static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem)
 {
+	struct page *page = netmem_page(nmem);
 	dma_addr_t dma;
 
 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
@@ -328,12 +329,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 }
 
 static void page_pool_set_pp_info(struct page_pool *pool,
-				  struct page *page)
+				  struct netmem *nmem)
 {
-	page->pp = pool;
-	page->pp_magic |= PP_SIGNATURE;
+	nmem->pp = pool;
+	nmem->pp_magic |= PP_SIGNATURE;
 	if (pool->p.init_callback)
-		pool->p.init_callback(page, pool->p.init_arg);
+		pool->p.init_callback(netmem_page(nmem), pool->p.init_arg);
 }
 
 static void page_pool_clear_pp_info(struct netmem *nmem)
@@ -345,26 +346,26 @@ static void page_pool_clear_pp_info(struct netmem *nmem)
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 						 gfp_t gfp)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	gfp |= __GFP_COMP;
-	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
-	if (unlikely(!page))
+	nmem = page_netmem(alloc_pages_node(pool->p.nid, gfp, pool->p.order));
+	if (unlikely(!nmem))
 		return NULL;
 
 	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
-	    unlikely(!page_pool_dma_map(pool, page))) {
-		put_page(page);
+	    unlikely(!page_pool_dma_map(pool, nmem))) {
+		netmem_put(nmem);
 		return NULL;
 	}
 
 	alloc_stat_inc(pool, slow_high_order);
-	page_pool_set_pp_info(pool, page);
+	page_pool_set_pp_info(pool, nmem);
 
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
-	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
-	return page;
+	trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt);
+	return netmem_page(nmem);
 }
 
 /* slow path */
@@ -398,18 +399,18 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	 * page element have not been (possibly) DMA mapped.
 	 */
 	for (i = 0; i < nr_pages; i++) {
-		page = pool->alloc.cache[i];
+		struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
 		if ((pp_flags & PP_FLAG_DMA_MAP) &&
-		    unlikely(!page_pool_dma_map(pool, page))) {
-			put_page(page);
+		    unlikely(!page_pool_dma_map(pool, nmem))) {
+			netmem_put(nmem);
 			continue;
 		}
 
-		page_pool_set_pp_info(pool, page);
-		pool->alloc.cache[pool->alloc.count++] = page;
+		page_pool_set_pp_info(pool, nmem);
+		pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
 		/* Track how many pages are held 'in-flight' */
 		pool->pages_state_hold_cnt++;
-		trace_page_pool_state_hold(pool, page,
+		trace_page_pool_state_hold(pool, nmem,
 					   pool->pages_state_hold_cnt);
 	}
 
@@ -421,7 +422,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 		page = NULL;
 	}
 
-	/* When page just alloc'ed is should/must have refcnt 1. */
+	/* When page just allocated it should have refcnt 1 (but may have
+	 * speculative references) */
 	return page;
 }
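Where the saving comes from, sketched for illustration (simplified; the
exact call chain depends on the kernel version): put_page() has to find
the head page before dropping the reference, while a netmem is a head
page by construction, so netmem_put() can cast straight to a folio:

	/*
	 * put_page(page)                      netmem_put(nmem)
	 *   -> folio_put(page_folio(page))      -> folio_put((struct folio *)nmem)
	 *        ^ compound_head() resolved          ^ no compound_head() needed:
	 *          at run time                         a netmem is always a head page
	 */

The same reasoning underlies the text-size savings quoted in the
following patches of the series.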
From patchwork Thu Jan 5 21:46:13 2023

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 06/24] page_pool: Convert page_pool_return_page() to page_pool_return_netmem()
Date: Thu, 5 Jan 2023 21:46:13 +0000
Message-Id: <20230105214631.3939268-7-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Removes a call to compound_head(), saving 464 bytes of kernel text
as page_pool_return_page() is inlined seven times.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 net/core/page_pool.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 4e985502c569..b606952773a6 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -220,7 +220,13 @@ struct page_pool *page_pool_create(const struct page_pool_params *params)
 }
 EXPORT_SYMBOL(page_pool_create);
 
-static void page_pool_return_page(struct page_pool *pool, struct page *page);
+static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm);
+
+static inline
+void page_pool_return_page(struct page_pool *pool, struct page *page)
+{
+	page_pool_return_netmem(pool, page_netmem(page));
+}
 
 noinline
 static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
@@ -499,11 +505,11 @@ void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem)
 EXPORT_SYMBOL(page_pool_release_netmem);
 
 /* Return a page to the page allocator, cleaning up our state */
-static void page_pool_return_page(struct page_pool *pool, struct page *page)
+static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem)
 {
-	page_pool_release_page(pool, page);
+	page_pool_release_netmem(pool, nmem);
 
-	put_page(page);
+	netmem_put(nmem);
 	/* An optimization would be to call __free_pages(page, pool->p.order)
 	 * knowing page is not part of page-cache (thus avoiding a
 	 * __page_cache_release() call.
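The forward declaration plus inline shim seen above is the conversion
pattern used throughout the series: the core helper moves to the netmem
type while a trivial wrapper under the old name keeps the remaining
page-based callers working until they are converted.  A generic sketch
with made-up names:

	static void do_thing_netmem(struct netmem *nmem);	/* new core helper */

	static inline void do_thing_page(struct page *page)	/* legacy name */
	{
		do_thing_netmem(page_netmem(page));
	}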
From patchwork Thu Jan 5 21:46:14 2023

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 07/24] page_pool: Convert __page_pool_put_page() to __page_pool_put_netmem()
Date: Thu, 5 Jan 2023 21:46:14 +0000
Message-Id: <20230105214631.3939268-8-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Removes the call to compound_head() hidden in put_page() which saves
169 bytes of kernel text as __page_pool_put_page() is inlined twice.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 net/core/page_pool.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b606952773a6..8f3f7cc5a2d5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -558,8 +558,8 @@ static bool page_pool_recycle_in_cache(struct page *page,
  * If the page refcnt != 1, then the page will be returned to memory
  * subsystem.
  */
-static __always_inline struct page *
-__page_pool_put_page(struct page_pool *pool, struct page *page,
+static __always_inline struct netmem *
+__page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem,
 		     unsigned int dma_sync_size, bool allow_direct)
 {
 	/* This allocator is optimized for the XDP mode that uses
@@ -571,19 +571,20 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	 * page is NOT reusable when allocated when system is under
 	 * some pressure. (page_is_pfmemalloc)
 	 */
-	if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
-		/* Read barrier done in page_ref_count / READ_ONCE */
+	if (likely(netmem_ref_count(nmem) == 1 &&
+		   !netmem_is_pfmemalloc(nmem))) {
+		/* Read barrier done in netmem_ref_count / READ_ONCE */
 
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-			page_pool_dma_sync_for_device(pool, page,
+			page_pool_dma_sync_for_device(pool, netmem_page(nmem),
 						      dma_sync_size);
 
 		if (allow_direct && in_serving_softirq() &&
-		    page_pool_recycle_in_cache(page, pool))
+		    page_pool_recycle_in_cache(netmem_page(nmem), pool))
 			return NULL;
 
 		/* Page found as candidate for recycling */
-		return page;
+		return nmem;
 	}
 	/* Fallback/non-XDP mode: API user have elevated refcnt.
 	 *
@@ -599,13 +600,21 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	 * will be invoking put_page.
*/ recycle_stat_inc(pool, released_refcnt); - /* Do not replace this with page_pool_return_page() */ - page_pool_release_page(pool, page); - put_page(page); + /* Do not replace this with page_pool_return_netmem() */ + page_pool_release_netmem(pool, nmem); + netmem_put(nmem); return NULL; } +static __always_inline struct page * +__page_pool_put_page(struct page_pool *pool, struct page *page, + unsigned int dma_sync_size, bool allow_direct) +{ + return netmem_page(__page_pool_put_netmem(pool, page_netmem(page), + dma_sync_size, allow_direct)); +} + void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, unsigned int dma_sync_size, bool allow_direct) { From patchwork Thu Jan 5 21:46:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090548 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9CBBC54EBC for ; Thu, 5 Jan 2023 21:46:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A5FC28E0005; Thu, 5 Jan 2023 16:46:35 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 71B308E000D; Thu, 5 Jan 2023 16:46:35 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E22A58E0007; Thu, 5 Jan 2023 16:46:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id B59238E0008 for ; Thu, 5 Jan 2023 16:46:34 -0500 (EST) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 6BF7CA0B9A for ; Thu, 5 Jan 2023 21:46:34 +0000 (UTC) X-FDA: 80322079908.07.9DCFF61 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf26.hostedemail.com (Postfix) with ESMTP id 5796A140013 for ; Thu, 5 Jan 2023 21:46:31 +0000 (UTC) Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DSlYKKAK; dmarc=none; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955192; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=w47n6MmGiQALv1tQiyKVrkQG4HjPnS/MkMXrWJAcNmM=; b=hj8dakpREWVHbtORVgA2hLS87gVpPNWMOWQ0rwN0NNY2lfhhZmWKE3flXlmUY/lr+JtH4q rPPdOkDGv/0SORAWAGjWbBTWmNNCGMGI9b4dg63fnZcjVUIuh6y8RRN+m9784FmUX59Cow xoqEutGsrjrdUkr2nwEpSTB1YXy9x8I= ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DSlYKKAK; dmarc=none; spf=none (imf26.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1672955192; a=rsa-sha256; cv=none; b=diE6dPc8NQNBN8El+yite+gUbYP53q9I2nvBPpl/IAsmqRrASjc8cZ+w4w7sBTkLj0965c qRFTU+rLghSMj5p2yjk7oLHqIKjMITuPmsEbQfeqHaM/RDf1bd8Q0ivey0WcNyJ2lDpEB4 iSR1mZUfmPOx8tIMnFEtKNwjgZTdqjc= DKIM-Signature: v=1; 
a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=w47n6MmGiQALv1tQiyKVrkQG4HjPnS/MkMXrWJAcNmM=; b=DSlYKKAKDmrsrfzfoGNdgfJJwk oqHPnyWdTlLYbuCqzlxSE7XFl150cU0bRikDE8+65rxX254tq4RHLKNQ/EzcVyjAa96FGqHUXTVWm sYD3vELTBXNQtKBMPh67irhZhe4ri77E/KVRJD7QTD0BpQ4wh9vo55wfOoI+UacOT4AVUiun4AVgk U9DykfxmvUmear3AHzxtLhOF18tL6R5sSpaNkP0bhu8eq/IVxqVtjvOZpU/EAt1cnHlbxIY4jHnww GgcRPMv6Ndyx1K7AZVreBU0bpHgAZ/f+JhjxuXouyp3H7JJZXgJ3PmpqJrkfXJx78X3rWcBXveSKt X9P1dH0w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4H-00GWn9-T3; Thu, 05 Jan 2023 21:46:33 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 08/24] page_pool: Convert pp_alloc_cache to contain netmem Date: Thu, 5 Jan 2023 21:46:15 +0000 Message-Id: <20230105214631.3939268-9-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 5796A140013 X-Stat-Signature: sg48apmmxthqmkjyequinujffo19a5u9 X-HE-Tag: 1672955191-162982 X-HE-Meta: U2FsdGVkX1+ibwUl17c10a+qXKx8/XsFuN5qHNy0FxvSdbnM/XY/VyRUGn3TnGlOKSJz4ZN/mqWDUj+PYePADv7YlDnG4q+v6IZhm3vM5xfRaWF6caeiqFGQEaOZXazOA8X1GCJ3zEg0zlkRUtwRf4wuKAv/HgkTpcgLh3BIobv8PF6ONbKZsSH8Zb5BBMBrDrqhMLbjZNsUVBoETZ4w+N45ZYtNf0vkS1F5L4f+PXncsJv6KhkhSfsuL1SD3abN/BP/7XEGjjIDpXRfo3ODKAa0aOO8I4Xu+YbFaJWrxVM/DbEJMrRP9B9BrHltbyWB9x0EVFzzvwah4X0FyAVceS3DfUytg0YVq0bCZrNUEadM63jAuCYg3+4LMIvJ/yLtybDL0lK+TKg4NiPo9N5P6Z2qF0/S4xbG/6wM2NvWsJifKqmkwxAL9ZynCUggPb+aY0N9HgYJTNyKyytr/2pm4Mz3kAt39XVvf3z2hkjV82exmwkjklbQA9G7zv2H076/QaROTVihpmoPTh2Tiufh4LyCt5PBlLhMYCzNfRqMkO3SR6l3isKcJCS4QelgJyWJhPerHCFM7KiRDmbqV9Fc9HFdkGfcAADls/UrvfPDU+4OfjbX6pJwdyDMtF/E/uvDmwIen8s78hSAWmmOQTzaulG7MQAw0JCDbYL/11PUWZCkIQ1+SlHsOJ+IXbwDRvbnMPCIiftl/VwgJTH+k6nuYhoqmJKBCohRleAb4aIl3GuxVEPdvZlA2DxY5t+1VJ30uDlNJmjSBZ36e0PaYdSoDgfNWx8lLmlyfWpsdUlRQBpEDo0opQi7oL9xfMxgFRM6EjWW4ikesi8M75nWZQlIptFT134nkB2DHxd2tkrIRBVmUx2Gms1CcKE1/wNstaNAxbbdmDdLIeTlBh9uOABMHmw7C4BPjrjbLIZqHA9OnpduYjpkf2lraA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Change the type here from page to netmem. It works out well to convert page_pool_refill_alloc_cache() to return a netmem instead of a page as part of this commit. 
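
For illustration only (editor's sketch, not part of the patch): with pp_alloc_cache
holding struct netmem pointers, page-oriented code converts at the boundary using the
netmem_page()/page_netmem() helpers introduced earlier in this series. The example_
function name below is made up.

#include <net/page_pool.h>

/* Sketch: consume one entry from the (now netmem-typed) alloc cache
 * and hand it back as a page for a legacy caller.
 */
static struct page *example_take_from_cache(struct page_pool *pool)
{
	struct netmem *nmem;

	if (!pool->alloc.count)
		return NULL;

	/* pp_alloc_cache.cache[] is struct netmem * after this patch. */
	nmem = pool->alloc.cache[--pool->alloc.count];

	/* Legacy page users convert at the edge. */
	return netmem_page(nmem);
}
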
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas --- include/net/page_pool.h | 2 +- net/core/page_pool.c | 52 ++++++++++++++++++++--------------------- 2 files changed, 27 insertions(+), 27 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 480baa22bc50..63aa530922de 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -173,7 +173,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem) #define PP_ALLOC_CACHE_REFILL 64 struct pp_alloc_cache { u32 count; - struct page *cache[PP_ALLOC_CACHE_SIZE]; + struct netmem *cache[PP_ALLOC_CACHE_SIZE]; }; struct page_pool_params { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 8f3f7cc5a2d5..c54217ce6b77 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page) } noinline -static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) +static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool) { struct ptr_ring *r = &pool->ring; - struct page *page; + struct netmem *nmem; int pref_nid; /* preferred NUMA node */ /* Quicker fallback, avoid locks when ring is empty */ @@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) /* Refill alloc array, but only if NUMA match */ do { - page = __ptr_ring_consume(r); - if (unlikely(!page)) + nmem = __ptr_ring_consume(r); + if (unlikely(!nmem)) break; - if (likely(page_to_nid(page) == pref_nid)) { - pool->alloc.cache[pool->alloc.count++] = page; + if (likely(netmem_nid(nmem) == pref_nid)) { + pool->alloc.cache[pool->alloc.count++] = nmem; } else { /* NUMA mismatch; * (1) release 1 page to page-allocator and * (2) break out to fallthrough to alloc_pages_node. * This limit stress on page buddy alloactor. */ - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); alloc_stat_inc(pool, waive); - page = NULL; + nmem = NULL; break; } } while (pool->alloc.count < PP_ALLOC_CACHE_REFILL); /* Return last page */ if (likely(pool->alloc.count > 0)) { - page = pool->alloc.cache[--pool->alloc.count]; + nmem = pool->alloc.cache[--pool->alloc.count]; alloc_stat_inc(pool, refill); } - return page; + return nmem; } /* fast path */ static struct page *__page_pool_get_cached(struct page_pool *pool) { - struct page *page; + struct netmem *nmem; /* Caller MUST guarantee safe non-concurrent access, e.g. 
softirq */ if (likely(pool->alloc.count)) { /* Fast-path */ - page = pool->alloc.cache[--pool->alloc.count]; + nmem = pool->alloc.cache[--pool->alloc.count]; alloc_stat_inc(pool, fast); } else { - page = page_pool_refill_alloc_cache(pool); + nmem = page_pool_refill_alloc_cache(pool); } - return page; + return netmem_page(nmem); } static void page_pool_dma_sync_for_device(struct page_pool *pool, @@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* Unnecessary as alloc cache is empty, but guarantees zero count */ if (unlikely(pool->alloc.count > 0)) - return pool->alloc.cache[--pool->alloc.count]; + return netmem_page(pool->alloc.cache[--pool->alloc.count]); /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */ memset(&pool->alloc.cache, 0, sizeof(void *) * bulk); nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk, - pool->alloc.cache); + (struct page **)pool->alloc.cache); if (unlikely(!nr_pages)) return NULL; @@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, * page element have not been (possibly) DMA mapped. */ for (i = 0; i < nr_pages; i++) { - struct netmem *nmem = page_netmem(pool->alloc.cache[i]); + struct netmem *nmem = pool->alloc.cache[i]; if ((pp_flags & PP_FLAG_DMA_MAP) && unlikely(!page_pool_dma_map(pool, nmem))) { netmem_put(nmem); @@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, } page_pool_set_pp_info(pool, nmem); - pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem); + pool->alloc.cache[pool->alloc.count++] = nmem; /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; trace_page_pool_state_hold(pool, nmem, @@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* Return last page */ if (likely(pool->alloc.count > 0)) { - page = pool->alloc.cache[--pool->alloc.count]; + page = netmem_page(pool->alloc.cache[--pool->alloc.count]); alloc_stat_inc(pool, slow); } else { page = NULL; @@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page, } /* Caller MUST have verified/know (page_ref_count(page) == 1) */ - pool->alloc.cache[pool->alloc.count++] = page; + pool->alloc.cache[pool->alloc.count++] = page_netmem(page); recycle_stat_inc(pool, cached); return true; } @@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool) static void page_pool_empty_alloc_cache_once(struct page_pool *pool) { - struct page *page; + struct netmem *nmem; if (pool->destroy_cnt) return; @@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool) * call concurrently. */ while (pool->alloc.count) { - page = pool->alloc.cache[--pool->alloc.count]; - page_pool_return_page(pool, page); + nmem = pool->alloc.cache[--pool->alloc.count]; + page_pool_return_netmem(pool, nmem); } } @@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy); /* Caller must provide appropriate safe context, e.g. NAPI. 
*/ void page_pool_update_nid(struct page_pool *pool, int new_nid) { - struct page *page; + struct netmem *nmem; trace_page_pool_update_nid(pool, new_nid); pool->p.nid = new_nid; /* Flush pool alloc cache, as refill will check NUMA node */ while (pool->alloc.count) { - page = pool->alloc.cache[--pool->alloc.count]; - page_pool_return_page(pool, page); + nmem = pool->alloc.cache[--pool->alloc.count]; + page_pool_return_netmem(pool, nmem); } } EXPORT_SYMBOL(page_pool_update_nid); From patchwork Thu Jan 5 21:46:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49077C3DA7A for ; Thu, 5 Jan 2023 21:47:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 08A54940008; Thu, 5 Jan 2023 16:46:42 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id EDF93940007; Thu, 5 Jan 2023 16:46:41 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D8224940008; Thu, 5 Jan 2023 16:46:41 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id BE5F4940007 for ; Thu, 5 Jan 2023 16:46:41 -0500 (EST) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 9C904140930 for ; Thu, 5 Jan 2023 21:46:41 +0000 (UTC) X-FDA: 80322080202.14.37A7DFE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id EE27240008 for ; Thu, 5 Jan 2023 21:46:39 +0000 (UTC) Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="YDs+Z/ZM"; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955200; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=m2s/8OmWFp4OOxmIx+VnG8M6Dr4CWRucDCtkklQeocI=; b=5fuqH7hUJay6Eh2I2pnJUsKca7JXhty65e4u6n44by/zttvUdN3l0Z1KJKytq4w1OjFaRM Zj0cz50QJr0/tD6SiQdbUa/BwfH3eerlKm72kGYIRKfrVi2FJ64h9M2PGLTk32QgL7vs8O JM7nRYGa7j1/eWJzsrUlcwu81tnllJM= ARC-Authentication-Results: i=1; imf04.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="YDs+Z/ZM"; spf=none (imf04.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1672955200; a=rsa-sha256; cv=none; b=O8EfB46fjeFIXzb+SMgWS/q7ugrT6vrBcIbkoZ+H0FbP15weUfPFOY9ts7lwxdEgwnU+ZA oHP3A+ekeKwPSeAaTZohmgONHme7q3+yLDcOQtxtCs1c+Z/v6HDYvxg+Wx1lPbtygGNEla E5iGhJ0G/Wy9UMmn+9mt17as6q7xUNs= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=m2s/8OmWFp4OOxmIx+VnG8M6Dr4CWRucDCtkklQeocI=; b=YDs+Z/ZMgVKmuPMKqkqEJsty1B A1FgDypFwF+s8OqORQz2gfE0byyC6ttEJOSy/A3Brovfffu+CCpHUC2z9ntVMzE1kI2kv4kqlRrTh Y1l0H6sdJzcxuY+NldXWw/oOkzkIrS2qHdRDbK6UORmUKiAmDCCHJcrO+qvQ0eDjZqBaBBll20qea eCv0L+rQKZt3bLtTztxVMECB6tO55V4mWnQDPLzwsGevmrTXaBNBH4rYFtYtlEimLEQgTTnrh0jw1 rFEQL5ZTwGzcOSa3O+IVFSOuR2tow2xdkB9HuW17LZaslvuFrzB/qVMqTBmSy8VYYFuYq6QUUUzaE BfaV415A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4H-00GWnB-Vh; Thu, 05 Jan 2023 21:46:34 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 09/24] page_pool: Convert page_pool_defrag_page() to page_pool_defrag_netmem() Date: Thu, 5 Jan 2023 21:46:16 +0000 Message-Id: <20230105214631.3939268-10-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: EE27240008 X-Stat-Signature: 1iq8nt1hhym6jbd6zzi8npsszryhru98 X-Rspam-User: X-HE-Tag: 1672955199-980715 X-HE-Meta: U2FsdGVkX19xCgbfO2xFXB61r54qcsrS+6fSvFvZWpts/7cesMlZN4ryrOEcpVSc7Nux+aKNON/jvrI5NsbNPysIh1bZ9lNXBNHWIT+ATE4bNoA1pg40ffdyV4xIZaXUREnVFk/avj7oSCRXOw5hrF8KYgkygIiZlu3sPUsc9681Y8IhO+rT+5Qr0yWK3MPHfBt2pGPZAA7G1NVKUZevtHs1FZWBKaeaPtVVMolPs1YWImr5q+fKpxELFnre9LkaAGCxy+Ebvzgx9xuRpB0IIfDL9juZFwmMiUS9ResiUiPhHGuNaKy8BwQOSAsKRTKyX4QFRxblsa+wjB7W6Wo0W4glWbqIqrr+/Lx3LCghzlFAWWUxmCnXS0bIXGP1ButR6SduvGHvd/rpS1ueUoNFg1IHfjqrmEUlXc3b55JDgRAIl94ZA7M1gXP92BAKbbHl00Yk4hINzWQD2NGiYP8ZDxEYbClMJAN5pibqBvCUuplpaugexwnarqrzGzRU5zZSFy5i0BnXlT9CyCpE8jZ0MKoiyzNyhvrKJWqc+E/Aaaq2l4tAJ1i0ZMtKVCrgHTBJ/uq4jDCXuTf2ZBGOEhsbpL+r8KP7peGh6ZxsGuYVVnOopHVKmz/gspkSfxed0hK4Pset1Nq/IqRIVPBYiUQtsi2bfvT6YYzGmtliQy45OSDeuVJVLyF6s+gQUHE5p77zUY3xJkmVY4wwwQJwT8KxZ88KJSZMNrvclXhuQYDGHFIwfL2boRZ+lnNe6PXyCa/E05dM4f3GM4bOfAvreEHnSaO/+AWzsZHSqL+Nu2BBCupqup07+ZLF1eVEz98hPgbYJJD9PydwfRw1w+jIyctcDtTS35tPiAh5lSHgXyztihrsJ6BSAcJQUOstvxYckFkamKyc7IGgaGc6xePJHddnIdijzxe3OmaYGWQ0JcvcjfBpFMxTB5DUrg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add a page_pool_defrag_page() wrapper. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas --- include/net/page_pool.h | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 63aa530922de..8fe494166427 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -393,7 +393,7 @@ static inline void page_pool_fragment_page(struct page *page, long nr) atomic_long_set(&page->pp_frag_count, nr); } -static inline long page_pool_defrag_page(struct page *page, long nr) +static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr) { long ret; @@ -406,14 +406,19 @@ static inline long page_pool_defrag_page(struct page *page, long nr) * especially when dealing with a page that may be partitioned * into only 2 or 3 pieces. 
*/ - if (atomic_long_read(&page->pp_frag_count) == nr) + if (atomic_long_read(&nmem->pp_frag_count) == nr) return 0; - ret = atomic_long_sub_return(nr, &page->pp_frag_count); + ret = atomic_long_sub_return(nr, &nmem->pp_frag_count); WARN_ON(ret < 0); return ret; } +static inline long page_pool_defrag_page(struct page *page, long nr) +{ + return page_pool_defrag_netmem(page_netmem(page), nr); +} + static inline bool page_pool_is_last_frag(struct page_pool *pool, struct page *page) { From patchwork Thu Jan 5 21:46:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090564 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E4E7C54EBC for ; Thu, 5 Jan 2023 21:47:07 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0FF3894000B; Thu, 5 Jan 2023 16:46:55 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 08BD4940007; Thu, 5 Jan 2023 16:46:54 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DF58F94000B; Thu, 5 Jan 2023 16:46:54 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id C8569940007 for ; Thu, 5 Jan 2023 16:46:54 -0500 (EST) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id AD313A0B47 for ; Thu, 5 Jan 2023 21:46:54 +0000 (UTC) X-FDA: 80322080748.07.C452618 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf01.hostedemail.com (Postfix) with ESMTP id 39F9840004 for ; Thu, 5 Jan 2023 21:46:52 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DIPsPoas; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955213; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=PdoQKhJ3LtOgA1Y51KMPKfFIcW0uZc7eBAEFhZ/WWAE=; b=i0jDVxqYP0f98sQjZUgl4ijceovZCeHvWTt2ODKSRfZFjua/YGbhg2kNuEbEBGLfEOkLWM KTWcNhGmpIARCvaGaESH9gLL/6fr/kdohnMBRCvJCxh9HF2lWaNa2+4ww3zZ6hVt1Eg9bB qP1WI7VLwxCfAYrbLz/nmaRcvIqnbio= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=DIPsPoas; spf=none (imf01.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1672955213; a=rsa-sha256; cv=none; b=CWO0gQOMsXANZxvp3RuKhAGgFHWBomCd43uyp2BgQs5iHmZD1Fa24tPXRsOHlXANBAFkd+ DIRD4Orx4nRA4gs42zew8JJ5BNqg+u9n4elF7G+QKtYhVkEEfyR8agSpt6g8g6Vr214zSo +Om7IY8v7OezfUXjHgLV9qtL2S7KqUk= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: 
References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=PdoQKhJ3LtOgA1Y51KMPKfFIcW0uZc7eBAEFhZ/WWAE=; b=DIPsPoas28aIXNv1jols4g81YO Z9VmoAsDNbFfHhpE8STvqmcx6Wnz2FY8d2Wab+eI2Zo8SSmNDh29Trf2wxQ273Q9OsGcW99diVYyh 7z9pTa6dk4qHnP32YGwVSAn8D3WriPCTIVPIMHyAiB9ETD7/aGkpkj3zTUHfsk5xbY3du6kzYLa6m u8uHMPFKZknCmkCk1NL5UzqlueHRX005EzOgJDzfuMNipSLPvULIS6BA4QvRg0H9Xtkcc1cK9M3tp sGP3XahwIQJTLN069G8PTsS/scpstQ5oMNazYNhdNgGJZwh1YN9xe+ni/WP+Tk8alsjMtZIYhMpXu KJI/lEow==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4I-00GWnD-2C; Thu, 05 Jan 2023 21:46:34 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 10/24] page_pool: Convert page_pool_put_defragged_page() to netmem Date: Thu, 5 Jan 2023 21:46:17 +0000 Message-Id: <20230105214631.3939268-11-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 39F9840004 X-Stat-Signature: 7m4xp9s3xxqhiuw4iyq8ak6hqsc5yj73 X-HE-Tag: 1672955212-58750 X-HE-Meta: U2FsdGVkX1/ja0Htn59XavKcBDrAmdNgoa86r5XnnH7pBe+bWr3ib3+tjx+qTdLgs3cBpgJXCLCqCHo/qaaGmss7Yt/2OkpEvShJVlwKdeAYS1/9pO0GrcUEthJUNOuc5vfkk8LvMFNY29TAseU3FiprWw7PE6FSU96egQ1eRV7gD220IhCx+QW5zAvSfIsWMA8LcRI6CN/kU3K3GZDzkkGqcwiMbWOxe413F1KJXbmEX/Yj6vrRB1PyI5YheHJIt8MbUDUUw+63WQuLsQT+hRS/mjn4f12gmj0GJ1KHvNk1yDRn7Qod3i1enC37kEvvoeJKCKt70CKacAaYtiHQT2KdangsbJmn3M6DVTgT9DqBwUXRKvLkNpXJXCHid2ECMVUZLFALMkXJPKY4R5V4JkBz2GJ7+tyb7nAYv8VyDpqe68ZTn242zvRkT5U7ACXCbix1Qpmvjq5rTRz9Nhssr3BnZJatI3tMgF4yV1AvIlaMXmP85H5eb7dwc8ZWxaA4ZJPxMQqrOwdKoCzg2VEa089Vlj70DX2VBVaeBzhDrhRk350qnrCQAGaAaE6MFNLHOD258E4Hu7ySGQVIqMduOxN1AksBhpS0TT2i2YFNpBe2q7PQ1Qh6lbtWoU/FEnjjzhEaJU894OBdTVwYokS5bSm6hJdk8p4f4YQdw2c8o60Fk8lzY5gG8Kr5b2uJ/6Kyff5j6k2vQ+Wm0yU63P71UYJDtzqdHBMSKDxGIFIyIHFzruSXQlbtO0oUMWFfxo1FWj8Yix7Qacr5D1ZlA9OQPC15ZXGhwToQmeR9oCT0D6iFk0Kz3STriMsoxaTKOAafNiqe2I1kCXodV/nyx0/fLNJCtyvnzFhnSWy/A01hNKdnQdENl+idc1kIsEKRZFV0q+lu9nbm+fDGnR4JXoqzriinPR/e1qyz/Xdg5X3pxRWLsi6rlgl1hw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Also convert page_pool_is_last_frag(), page_pool_put_page(), page_pool_recycle_in_ring() and use netmem in page_pool_put_page_bulk(). 
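
Illustrative sketch (editorial, not part of the patch), assuming NAPI/softirq context:
drivers keep calling the page-based entry point, which after this patch is a thin
wrapper that converts to netmem before reaching the defrag/recycle paths. The example_
name is made up.

#include <net/page_pool.h>

static void example_rx_recycle(struct page_pool *pool, struct page *page)
{
	/* Unchanged driver call.  -1 means "sync the whole max_len area";
	 * true allows the lockless per-CPU cache (softirq context assumed).
	 * Internally this now becomes roughly
	 *   page_pool_put_netmem(pool, page_netmem(page), -1, true);
	 * and, for the last frag user,
	 *   page_pool_put_defragged_netmem(pool, nmem, -1, true);
	 */
	page_pool_put_page(pool, page, -1, true);
}
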
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer --- include/net/page_pool.h | 23 ++++++++++++++++------- net/core/page_pool.c | 29 +++++++++++++++-------------- 2 files changed, 31 insertions(+), 21 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 8fe494166427..8b826da3b8b0 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -384,7 +384,7 @@ static inline void page_pool_release_page(struct page_pool *pool, page_pool_release_netmem(pool, page_netmem(page)); } -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, +void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct); @@ -420,15 +420,15 @@ static inline long page_pool_defrag_page(struct page *page, long nr) } static inline bool page_pool_is_last_frag(struct page_pool *pool, - struct page *page) + struct netmem *nmem) { /* If fragments aren't enabled or count is 0 we were the last user */ return !(pool->p.flags & PP_FLAG_PAGE_FRAG) || - (page_pool_defrag_page(page, 1) == 0); + (page_pool_defrag_netmem(nmem, 1) == 0); } -static inline void page_pool_put_page(struct page_pool *pool, - struct page *page, +static inline void page_pool_put_netmem(struct page_pool *pool, + struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct) { @@ -436,13 +436,22 @@ static inline void page_pool_put_page(struct page_pool *pool, * allow registering MEM_TYPE_PAGE_POOL, but shield linker. */ #ifdef CONFIG_PAGE_POOL - if (!page_pool_is_last_frag(pool, page)) + if (!page_pool_is_last_frag(pool, nmem)) return; - page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct); + page_pool_put_defragged_netmem(pool, nmem, dma_sync_size, allow_direct); #endif } +static inline void page_pool_put_page(struct page_pool *pool, + struct page *page, + unsigned int dma_sync_size, + bool allow_direct) +{ + page_pool_put_netmem(pool, page_netmem(page), dma_sync_size, + allow_direct); +} + /* Same as above but will try to sync the entire area pool->max_len */ static inline void page_pool_put_full_page(struct page_pool *pool, struct page *page, bool allow_direct) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c54217ce6b77..e727a74504c2 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -516,14 +516,15 @@ static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem) */ } -static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page) +static bool page_pool_recycle_in_ring(struct page_pool *pool, + struct netmem *nmem) { int ret; /* BH protection not needed if current is serving softirq */ if (in_serving_softirq()) - ret = ptr_ring_produce(&pool->ring, page); + ret = ptr_ring_produce(&pool->ring, nmem); else - ret = ptr_ring_produce_bh(&pool->ring, page); + ret = ptr_ring_produce_bh(&pool->ring, nmem); if (!ret) { recycle_stat_inc(pool, ring); @@ -615,17 +616,17 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, dma_sync_size, allow_direct)); } -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, +void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct) { - page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct); - if (page && !page_pool_recycle_in_ring(pool, page)) { + nmem = __page_pool_put_netmem(pool, nmem, dma_sync_size, allow_direct); + if (nmem && !page_pool_recycle_in_ring(pool, nmem)) { /* Cache full, 
fallback to free pages */ recycle_stat_inc(pool, ring_full); - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); } } -EXPORT_SYMBOL(page_pool_put_defragged_page); +EXPORT_SYMBOL(page_pool_put_defragged_netmem); /* Caller must not use data area after call, as this function overwrites it */ void page_pool_put_page_bulk(struct page_pool *pool, void **data, @@ -634,16 +635,16 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, int i, bulk_len = 0; for (i = 0; i < count; i++) { - struct page *page = virt_to_head_page(data[i]); + struct netmem *nmem = virt_to_netmem(data[i]); /* It is not the last user for the page frag case */ - if (!page_pool_is_last_frag(pool, page)) + if (!page_pool_is_last_frag(pool, nmem)) continue; - page = __page_pool_put_page(pool, page, -1, false); + nmem = __page_pool_put_netmem(pool, nmem, -1, false); /* Approved for bulk recycling in ptr_ring cache */ - if (page) - data[bulk_len++] = page; + if (nmem) + data[bulk_len++] = nmem; } if (unlikely(!bulk_len)) @@ -669,7 +670,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, * since put_page() with refcnt == 1 can be an expensive operation */ for (; i < bulk_len; i++) - page_pool_return_page(pool, data[i]); + page_pool_return_netmem(pool, data[i]); } EXPORT_SYMBOL(page_pool_put_page_bulk); From patchwork Thu Jan 5 21:46:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090543 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD357C3DA7A for ; Thu, 5 Jan 2023 21:46:35 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id BB8BA8E000A; Thu, 5 Jan 2023 16:46:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id AC0EE8E0001; Thu, 5 Jan 2023 16:46:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 67B158E0007; Thu, 5 Jan 2023 16:46:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 529038E0003 for ; Thu, 5 Jan 2023 16:46:34 -0500 (EST) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 25BDA1C565D for ; Thu, 5 Jan 2023 21:46:34 +0000 (UTC) X-FDA: 80322079908.10.2F623B1 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf14.hostedemail.com (Postfix) with ESMTP id 1AB9910000E for ; Thu, 5 Jan 2023 21:46:30 +0000 (UTC) Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=AZ1Y1AH+; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955192; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=EeQYJMHII3fp3OKocboX4WAkN9RPKcuLop98P7fQXtk=; b=Il89F9VYq7FG2Hhldjr6f1axeCo5md0iLTqwXQUQqfzmJuBNNxtLLqBYRT1Z2gWVobKYf8 
INFPIxNake3we5G90vMOOGfz19JCoFTUwnQ7fU8t+GzIICTB1dfxK6/RUmc9dTEBl5odyn UR1LCZ4tJuAVGq6gyErANmtp3TbJEUI= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=AZ1Y1AH+; spf=none (imf14.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1672955192; a=rsa-sha256; cv=none; b=24V8EQytEwaE6wCvTrrQneaSRRaLOAykCMUK1bUnHCzwna708VqvP8cB7mXJSNyKnDX2MV ODeSxpR25eoHepzQrADbq5w3z2M717mPyzx8KOAPjvtJ7kILldHDoE0Dff8J7nabU/hQWD pDrsWTJpp4+v/YcWdM50rOKGhkJSEUQ= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=EeQYJMHII3fp3OKocboX4WAkN9RPKcuLop98P7fQXtk=; b=AZ1Y1AH+/uF+RM6PRqrmUXQoZc M/LZru31cC6BIHXqNZwO7eZ6SIgOirGp4tBiqTTCVW9LpBMEyi1jD/mCN0J2NvDkJGFY0gZYUd9Ep 9SRGENxcY01USdBXt1gp0sV0sSByQ6IGKsSWxF+mGU/FTFnRTww45hNq3IOBzI4UfHAVaNZDodTKq IdGnOcfx+8q37FpU4C0YPbg6Shd3hgacIC4xmkNSush4JxHf2jppdhn/Gil00V4EIGLmVu8qBmNTr fXJUVufSPWdp35WAOuEM8NlndUTHdwrtYmtgG6t3lvkBo/igKrOstwLXz/M4zEcSZ+Flj58mCNjwi Zt+j5CCw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4I-00GWnF-4u; Thu, 05 Jan 2023 21:46:34 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 11/24] page_pool: Convert page_pool_empty_ring() to use netmem Date: Thu, 5 Jan 2023 21:46:18 +0000 Message-Id: <20230105214631.3939268-12-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 1AB9910000E X-Stat-Signature: xijg4i63xgedngnfzye5xt8c9mwzurfh X-Rspam-User: X-HE-Tag: 1672955190-248859 X-HE-Meta: U2FsdGVkX1+T4avRV2eo0cUkUSWJ9Hf6YLU8bJGMmxCGwddiz9aQEA6NYkCZ8aZuc0S2PNHFbCXs7CqradYCZluez6QJ0nC6Onl3B/61b/ZNjyUt/ZPFmELTMG30uDFIZ6H9twxPLXHU02RYaKBgmTBxH1BVqFriKbgeLVaJly2ZJjFmlECaLBV4JazUDr1u0kk/fHBtZQso/XBlBYbz4yX+TeWvEbqq2xuylLeijiIto7Tsyudzo6i/3/Z8Mhepf4uSsG0hgYPYLWqgGNgp6gU95uZ/TD5FcLy8uHnTfoi5NSTtypZiGizHyJo3HT/s1dLcmTWXmjowCAK//7biKdbrZSrAVWk3o5vJxlMxayqubT+JlRnhRPtTJ34Bwh1yUYfrZ7guWxzT9EHJ5+/06S4nwPC/RQaWmroA6dC9C8PGYis0ygBZookPrd0fqkx1+RlrNozJ8qTDgdqblTUq3Rtd0mD4kaQxGjc86cpHsTRpRyxRl1DDuXy0cobYuaGfT8q148DUM9Y30V1wAlpJB+umLGq6FjzHWCBUUP5ThF+Uvty8K+bnHXxZLb5QTeim3f6M2xb/CXA5Xvdm05GUki/Hsx7fkyYp44hkZHvkaXbPf+lDVwB24BRrCR2YGMeZG8oXmTHWruHVAuaZcFCemzNo058KlW/cc/nIexjJ7m08YUafusz5GLahAximSXV5RE55zADqbmdfSf3+Obn5s6xm4+zNpksCSbKjJzsfZKQhDl0vkd35BTHs2CmNuMbAOl8CtHzUY8WOe7DjX8sV50HT+mEUAGiwHc1XpZhyRo2VSaFyOzrtu5Rbm1VdD8L7MED01uL+1WYHHAFXcVPPpd9hdBFfyEEPbmvdKvsk7p/THHnewF/rNv6aHyAiLmK8LO8Z0YloTauhgHSIOvPt0CIAnpy7iMP+hfNVBwlQHzuTm24RwSPodA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Retrieve a netmem from the ptr_ring instead of a page. 
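
Illustrative sketch (editorial, not part of the patch) of the invariant the loop
checks, expressed with the netmem helper; the example_ name is made up.

#include <net/page_pool.h>

/* A netmem parked in the pool's ptr_ring must be owned solely by the
 * pool; any other refcount is the violation the pr_crit() reports.
 */
static bool example_netmem_idle(struct netmem *nmem)
{
	return netmem_ref_count(nmem) == 1;
}
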
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas --- net/core/page_pool.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index e727a74504c2..0212244e07e7 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -755,16 +755,16 @@ EXPORT_SYMBOL(page_pool_alloc_frag); static void page_pool_empty_ring(struct page_pool *pool) { - struct page *page; + struct netmem *nmem; /* Empty recycle ring */ - while ((page = ptr_ring_consume_bh(&pool->ring))) { + while ((nmem = ptr_ring_consume_bh(&pool->ring)) != NULL) { /* Verify the refcnt invariant of cached pages */ - if (!(page_ref_count(page) == 1)) + if (netmem_ref_count(nmem) != 1) pr_crit("%s() page_pool refcnt %d violation\n", - __func__, page_ref_count(page)); + __func__, netmem_ref_count(nmem)); - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); } } From patchwork Thu Jan 5 21:46:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090553 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 451CDC3DA7A for ; Thu, 5 Jan 2023 21:46:50 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B6689900002; Thu, 5 Jan 2023 16:46:36 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A2AD1900004; Thu, 5 Jan 2023 16:46:36 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7B9A0900007; Thu, 5 Jan 2023 16:46:36 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 37421900003 for ; Thu, 5 Jan 2023 16:46:36 -0500 (EST) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 12AEDC0B3C for ; Thu, 5 Jan 2023 21:46:36 +0000 (UTC) X-FDA: 80322079992.29.42B5CDE Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf28.hostedemail.com (Postfix) with ESMTP id 6D971C000A for ; Thu, 5 Jan 2023 21:46:34 +0000 (UTC) Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rw7xh7p0; dmarc=none; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955194; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=E09t+hkiAU+x+XiPYonyux6Fp0qlpaeqrtoR4FA5S+c=; b=AkyUcpU3q+6Txz0ARLRLafBdq22zCldAVsuuXj+PMJzBeCJyFtHvNpgKbtLCVwiuNbwD6L CPy8hL8zzPooUOjEo6tPaYP8gMuy/s7NTX0e1w7peG9OtQGWRDaHtSpG/NOkQ+6DwSgrf9 7QoHV5zBqrMxGE/LOLnSzkXu2ygS1nY= ARC-Authentication-Results: i=1; imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=rw7xh7p0; dmarc=none; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org 
ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1672955194; a=rsa-sha256; cv=none; b=XXIwEZ6bDpP4f0ESOdP+3p/j5Fo/dSkYMuRNmjrwDAFEhuSfF0yR0LEQIe0Boxc6taj2mr zQK9djgi5ZFgu0WgJV2Vx868p3zUUWZ941yTRBocQmro0L3hwSu2bIJnF4g+frMX79ph7w xob05nW+GAh9P8FcEO4wm0SeoXU4nBg= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=E09t+hkiAU+x+XiPYonyux6Fp0qlpaeqrtoR4FA5S+c=; b=rw7xh7p0TjYLAaPOaEupk3tOPt NDSRcqv5GjpfzNYdVhFTxq01vywHFyZYSzkoEF4YFw08ADYsHl8+fs9gQ/QJ/PYSr5HT9YosjQOMq Zuesj3Ju4EdAPiQfgvy7Hyj6QJZascVjhebd9Po7QRaZE0auC1255M/LE5cng9YKIATtrI2UsEBcx yLQjVzblxp7/5vli/L/bE8opuJR9N7W4LOddMgh1Y8pSeu4MxgUvagJ5npC0J+r7oVXdrKMPwwHRX Q+wpviR08yhoayANvaUgG1YTP8h9GI/gB/Kq6SzxX3PLbVU0s1pMe3qi++l4zV/ht5ga0PAsp9uJS QYO4GNkw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4I-00GWnV-A2; Thu, 05 Jan 2023 21:46:34 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 12/24] page_pool: Convert page_pool_alloc_pages() to page_pool_alloc_netmem() Date: Thu, 5 Jan 2023 21:46:19 +0000 Message-Id: <20230105214631.3939268-13-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 6D971C000A X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: x9cbfihcagmz5sap1tj15cx15ki67wwx X-HE-Tag: 1672955194-756947 X-HE-Meta: U2FsdGVkX1+RJ1BeDeNZ+BUN9xsxYLVUghi+iCB8B8tEkOpvxdCXRZ1T4Q9nbjSOTA8Dx1NxLGksJuFEpIAQo9mBcYOHvQcHMnhc10OFAa3GpWVf7eHbv9bhkALRtqcCsCPemjsJ0T0qDPmqkf+IqoQyp1zJY185ufhciFIzb4nR/YqKnlJhUzd5BPMdyE+SapRR8iiS7c51I3GcsVzlLiGhZ8FO2SeD5cUVI6YaWFhOBQsvV51IUxVkUS7RSkMRIbFLN5w8rCs9pzpGyoRY8dBgLWRi3dwVJSJl2hg0Oftv7pQ9w27oIModjBERcwhkb75omAqwVHUSrNwDUGHNLyzf97KgybXP9B1AM5w95zYA44WDPTib+JjEnz7FYH7jP16scBr/WGaUsEoe7OkXrXEJfxHDerdUbRTOodUMjaQWFUGXvH20s+TcYxybRLXR5WoPi5QiXR4aF1+YX4DZWvYxbA7hjZtFdtPUfF+w0OzbsxLf7+B4UpH5EzOP7dE/4lWbaWwCR0UvP8H5p/6lN0mY4e1YO0CTVzkXbA/OLa/rttbQ1M+hPxFBW2qOybrb9HmYlkzapBIKrGTyrG+K+YvJlJG7uD7irQCvPLCRHx9gzeUgeV4cGNjglEmeEqVkg1T5Hc55rDDOVniTjxz0D2qav304ypOa0EMJ8c5oOdsZ21wh2DRYoEBflhy+yWFwa2/EA367twuM7aaaKpXs+HXaSvlZfQDpALDQc0hhutlXnloBVU7hSTjMHWCxHdthGYHeac4Ux0LtsUOBSg7haghC7HjKDszWSfRji7JUaWZEp4yv2Ttp92LurmfHFSMkY0GYLZFd+6zedZ11objErnBDpFE6uok0uvghTxTYP1VNjlWA4Wi2q20niT/5Qo3MWyKri0j4HdfIemKwm4+p3qhedwvcUJIaDS3o4xZspFOADWOKer9kbg== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add wrappers for page_pool_alloc_pages() and page_pool_dev_alloc_netmem(). Also convert __page_pool_alloc_pages_slow() to __page_pool_alloc_netmem_slow() and __page_pool_alloc_page_order() to __page_pool_alloc_netmem(). __page_pool_get_cached() now returns a netmem. 
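
Illustrative sketch (editorial, not part of the patch): after this change a driver can
allocate in either representation, since the page API is now a wrapper around the
netmem one. The example_ function name is made up and error handling is simplified.

#include <linux/errno.h>
#include <net/page_pool.h>

static int example_rx_refill(struct page_pool *pool)
{
	struct netmem *nmem;
	struct page *page;

	/* New-style caller: work with netmem directly. */
	nmem = page_pool_dev_alloc_netmem(pool);
	if (!nmem)
		return -ENOMEM;

	/* Old-style caller: still gets a page via the wrapper. */
	page = page_pool_dev_alloc_pages(pool);
	if (!page) {
		page_pool_put_full_page(pool, netmem_page(nmem), false);
		return -ENOMEM;
	}

	/* ... post both buffers to the hardware ring here ... */
	return 0;
}
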
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas --- include/net/page_pool.h | 13 ++++++++++++- net/core/page_pool.c | 39 +++++++++++++++++++-------------------- 2 files changed, 31 insertions(+), 21 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 8b826da3b8b0..fbb653c9f1da 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -314,7 +314,18 @@ struct page_pool { u64 destroy_cnt; }; -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); +struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp); + +static inline struct netmem *page_pool_dev_alloc_netmem(struct page_pool *pool) +{ + return page_pool_alloc_netmem(pool, GFP_ATOMIC | __GFP_NOWARN); +} + +static inline +struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +{ + return netmem_page(page_pool_alloc_netmem(pool, gfp)); +} static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 0212244e07e7..c7ea487acbaa 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -282,7 +282,7 @@ static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool) } /* fast path */ -static struct page *__page_pool_get_cached(struct page_pool *pool) +static struct netmem *__page_pool_get_cached(struct page_pool *pool) { struct netmem *nmem; @@ -295,7 +295,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool) nmem = page_pool_refill_alloc_cache(pool); } - return netmem_page(nmem); + return nmem; } static void page_pool_dma_sync_for_device(struct page_pool *pool, @@ -349,8 +349,8 @@ static void page_pool_clear_pp_info(struct netmem *nmem) nmem->pp = NULL; } -static struct page *__page_pool_alloc_page_order(struct page_pool *pool, - gfp_t gfp) +static +struct netmem *__page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp) { struct netmem *nmem; @@ -371,27 +371,27 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool, /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt); - return netmem_page(nmem); + return nmem; } /* slow path */ noinline -static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, +static struct netmem *__page_pool_alloc_netmem_slow(struct page_pool *pool, gfp_t gfp) { const int bulk = PP_ALLOC_CACHE_REFILL; unsigned int pp_flags = pool->p.flags; unsigned int pp_order = pool->p.order; - struct page *page; + struct netmem *nmem; int i, nr_pages; /* Don't support bulk alloc for high-order pages */ if (unlikely(pp_order)) - return __page_pool_alloc_page_order(pool, gfp); + return __page_pool_alloc_netmem(pool, gfp); /* Unnecessary as alloc cache is empty, but guarantees zero count */ if (unlikely(pool->alloc.count > 0)) - return netmem_page(pool->alloc.cache[--pool->alloc.count]); + return pool->alloc.cache[--pool->alloc.count]; /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */ memset(&pool->alloc.cache, 0, sizeof(void *) * bulk); @@ -422,34 +422,33 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* Return last page */ if (likely(pool->alloc.count > 0)) { - page = netmem_page(pool->alloc.cache[--pool->alloc.count]); + nmem = pool->alloc.cache[--pool->alloc.count]; alloc_stat_inc(pool, slow); } else { - page = NULL; + nmem = NULL; } /* When page just allocated it should have refcnt 1 (but may have * 
speculative references) */ - return page; + return nmem; } /* For using page_pool replace: alloc_pages() API calls, but provide * synchronization guarantee for allocation side. */ -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp) { - struct page *page; + struct netmem *nmem; /* Fast-path: Get a page from cache */ - page = __page_pool_get_cached(pool); - if (page) - return page; + nmem = __page_pool_get_cached(pool); + if (nmem) + return nmem; /* Slow-path: cache empty, do real allocation */ - page = __page_pool_alloc_pages_slow(pool, gfp); - return page; + return __page_pool_alloc_netmem_slow(pool, gfp); } -EXPORT_SYMBOL(page_pool_alloc_pages); +EXPORT_SYMBOL(page_pool_alloc_netmem); /* Calculate distance between two u32 values, valid if distance is below 2^(31) * https://en.wikipedia.org/wiki/Serial_number_arithmetic#General_Solution From patchwork Thu Jan 5 21:46:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090552 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BAF7C4708E for ; Thu, 5 Jan 2023 21:46:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 91AA2900005; Thu, 5 Jan 2023 16:46:36 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 878EC900002; Thu, 5 Jan 2023 16:46:36 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 48C22900005; Thu, 5 Jan 2023 16:46:36 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 1E32F8E000D for ; Thu, 5 Jan 2023 16:46:36 -0500 (EST) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id C650480AF3 for ; Thu, 5 Jan 2023 21:46:35 +0000 (UTC) X-FDA: 80322079950.30.F969FDB Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf11.hostedemail.com (Postfix) with ESMTP id 982D64000A for ; Thu, 5 Jan 2023 21:46:33 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=jSzlPKYI; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955194; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=tQ2IN+yjoBC8VwOYtQMP1zCz05U5Yk6dI9U+AmVRVpU=; b=Zo41iLkeh9JD5AjV4QWRkO3tIqUfZ9fVjI3kB4OdngdZ1QBbaWV+eJ43qH7hte1kK+ctsJ LifLw0tldpsYbFXk6kqcBWjH3WKPW2wUv5CwhsYGbPQHms4RTV6wCZ7gNQIII7PhSG5HXq 8TmrUxjqwFT2gOcrjdu+9OuBE0Ihrmo= ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=jSzlPKYI; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; 
s=arc-20220608; d=hostedemail.com; t=1672955194; a=rsa-sha256; cv=none; b=5J5FXOuhUu6HwHYxiF4gX+Dibn8hElhyK1bt2CFChNSR8nb0mp++3bQ5IJVPDkfyjinppX ONJYrfI/zojfBs0mdqcd+siAe2VRKz4XQ1ZXzkgPOkUYAJ1QYMfmQoAP7AzNsSjcREXrS6 eepzpNMoEF1xJw+xYX8yaRJRYBA2+ZE= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=tQ2IN+yjoBC8VwOYtQMP1zCz05U5Yk6dI9U+AmVRVpU=; b=jSzlPKYIoE5G9cBGexbxWVV70t jSr7sOsBSHp5Gi2i+PC3wn8iKft9qDHakVpzkazy5jnTtQQ+utGN29Lf9F/o5fbpY+81Iptajl0uw XH9uqZpS5o5a7gVGfF7xkcBE114snPqhBZw7lJJ+vjQWGtS+DM/WFLZRHWaVEhTfqp4k1K2nk9oPZ ba/29YreaPgRgnoXMLe15P74J2ATIfLOfU8DD0TK/hky/Dm76oyuWI5jKPah1EOCUQfHh1CVagGgH j/HhjHXpNEevtXtoOA5tokNRdqivSZXWDyPfZSv2cSxXC42O+VqKk1jKheUEmTWCpgvHHEVUrRQB9 fJIdWNFg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4I-00GWnb-EP; Thu, 05 Jan 2023 21:46:34 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 13/24] page_pool: Convert page_pool_dma_sync_for_device() to take a netmem Date: Thu, 5 Jan 2023 21:46:20 +0000 Message-Id: <20230105214631.3939268-14-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 982D64000A X-Rspam-User: X-Stat-Signature: fsw9htptnphpycfhw7w55mqhunpbwxx8 X-HE-Tag: 1672955193-792462 X-HE-Meta: U2FsdGVkX1+j0p8yQez82JhbQXilPOLJDXzJzNbEG9u8cWnw8emAf/FoTKH+ewcxikrHj9w/DdHBbveEy+9uYLDP+FvsbSwm8piEMQODMkIkmd+5BALjvDf1QA/XP0WfIQAP2LWXmWS8DnunBevKmK5rzVCIzsH1tvslful2XPgvQuFPCiGT6XoKVffo4J4L4yxk4TZis3+S8l2xOwJNSh7AkBQaz4fumPzK6Vgbl2WJ+KPOeV4y7N7y30m3zpLAa8vFicHdrjIp3VYU8hXy+pzaWxkJVKrOMgSv9YzSc0TbpRmTb2Ik7nCWUiy+EEHe6lCZT4/UF04m7UzaFZ/FA5h1r7WiBbGgGCly7KxxeWNmsLYv9Cz8PYRSGj3I3JUh56RpXuNGLRkA1DaGKKFI2FfbUh4EEg5YMOAHL47pyRB1t6FZ6r2XzcNmWS9qY5nUKJL//Yzp6YAwsLs48kVvgKnDT5HiJUD8ahjUNLjayjiYIJi7tWn/eaGdPSGffVRIObk21E/oKx+Nkaku9M4ZkNA0Yl88TfeWVKDBHmvYQ4BwpE5hft8KLw3cbn8zA3oNkWzkIMYqdgyfGxnIoL/8gpRy1DY7OJGNBYaF8bGLKoHQ6PhHA1QDP6f4teomQlz9fKa5aRc/8L4Ji/iusHVDHlRaLpQC3K75aFS/a0zPM0r65LkjY6e1f6V0UzHaw0cTTnQIiz2bTJdayvkIAMVlyt75Vvf0i2fRFvDYObuihRMoS3v2nbZ5kjId1ygM+vriTrB5hWpLEc4ciEsa8Y/BDzQilGkDlcGzRXeIKMKF/d8Vf5Al9ahyq7WQeMnM1z34ZX/t821T0P/igcU0IyJL+1u53lcu0BiHzQn73d8iZaucYKf42CCDdGVp/rgKVwjtN3E3lGDFll/FnV0xQEj6oV9/Pb4dpyaVIDJGRAnhbrYLQU/IKM4Hhw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Change all callers. 
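
Illustrative sketch (editorial, not part of the patch) of roughly what the
netmem-based sync amounts to, using netmem_get_dma_addr() from patch 01; the
offset/length handling is simplified and the example_ name is made up.

#include <linux/dma-mapping.h>
#include <linux/minmax.h>
#include <net/page_pool.h>

static void example_sync_for_device(struct page_pool *pool,
				    struct netmem *nmem,
				    unsigned int dma_sync_size)
{
	dma_addr_t dma_addr = netmem_get_dma_addr(nmem);

	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
					 pool->p.offset,
					 min(dma_sync_size, pool->p.max_len),
					 pool->p.dma_dir);
}
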
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas --- net/core/page_pool.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c7ea487acbaa..3fa03baa80ee 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -299,10 +299,10 @@ static struct netmem *__page_pool_get_cached(struct page_pool *pool) } static void page_pool_dma_sync_for_device(struct page_pool *pool, - struct page *page, + struct netmem *nmem, unsigned int dma_sync_size) { - dma_addr_t dma_addr = page_pool_get_dma_addr(page); + dma_addr_t dma_addr = netmem_get_dma_addr(nmem); dma_sync_size = min(dma_sync_size, pool->p.max_len); dma_sync_single_range_for_device(pool->p.dev, dma_addr, @@ -329,7 +329,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem) page_pool_set_dma_addr(page, dma); if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, page, pool->p.max_len); + page_pool_dma_sync_for_device(pool, nmem, pool->p.max_len); return true; } @@ -576,7 +576,7 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem, /* Read barrier done in netmem_ref_count / READ_ONCE */ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, netmem_page(nmem), + page_pool_dma_sync_for_device(pool, nmem, dma_sync_size); if (allow_direct && in_serving_softirq() && @@ -676,6 +676,7 @@ EXPORT_SYMBOL(page_pool_put_page_bulk); static struct page *page_pool_drain_frag(struct page_pool *pool, struct page *page) { + struct netmem *nmem = page_netmem(page); long drain_count = BIAS_MAX - pool->frag_users; /* Some user is still using the page frag */ @@ -684,7 +685,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) { if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, page, -1); + page_pool_dma_sync_for_device(pool, nmem, -1); return page; } From patchwork Thu Jan 5 21:46:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13090558 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5C79C54EBF for ; Thu, 5 Jan 2023 21:46:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 19CF290000A; Thu, 5 Jan 2023 16:46:38 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 105F690000B; Thu, 5 Jan 2023 16:46:38 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E1EB190000A; Thu, 5 Jan 2023 16:46:37 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id B3783900009 for ; Thu, 5 Jan 2023 16:46:37 -0500 (EST) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 9605FA0C1B for ; Thu, 5 Jan 2023 21:46:37 +0000 (UTC) X-FDA: 80322080034.15.64C1D67 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf11.hostedemail.com (Postfix) with ESMTP id 02E874000A for ; Thu, 5 Jan 2023 21:46:35 +0000 (UTC) Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=infradead.org 
header.s=casper.20170209 header.b="OK6S/uxI"; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1672955196; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=H0WxbnPFe1KwSIjWT4wmoPQAd35t/aj4S8nLWveXfxg=; b=1/h5fltzfFXlZp91gYou63zmq3tERwSLRFvo4neogjeFkBt0ac+VM6fZzPvA92beN8exfF On6C/vlSDp8F8EPMpMWrBY0cBzRvv45m+SSJ+OJa1tc+ByISk6EvHdgBOI1X0i9wcDf7em E+MTt25QD/a/W23Uy5LhGR1JvuMVLjY= ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b="OK6S/uxI"; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1672955196; a=rsa-sha256; cv=none; b=QW++/F31CnPC+NGf25MJ01lTFB3SFZJea1ajfYGRV8NYCj2D4WAC/pQfigMVKCDaVfzFOO oFX3VME5a2nKAcEFFiSNaq8QZThelTbLJi+qf5P3CRaNSjiwa3mPRuxiwbLACI3eGUZjjj wMQdXqqnv539qp9AV09HK8bK3N2SNjU= DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=H0WxbnPFe1KwSIjWT4wmoPQAd35t/aj4S8nLWveXfxg=; b=OK6S/uxIVk+67IWWhtmPrP3Q2Z WyvKA4Bp/2LYMKs1FgePJYKdVwXc7MsaG2jV52fI4uQFIZd/s4+Vorl1DgyWTf/B8Keq3Km2Ys/Oj snwvtgYaWZME5DwCGucVWI9hgKqSKmnrD4PeGm0UYQ58jxMF9qCw5PtlPTopEWoUqVk7LFLl0NwCr U77/CCd+eJ3vwfW7uoEs4BzhQJIBSInd1EGihYp0d/GOjsbL9TBcufAfBmbPPOMHrmWiC7jSGJ1+O OnoQaRy42+GwDfjrXkyoT4aT4kyVd8NfZ2DYCtc2scUId6QD4f3deEgxRMIsvVfB343ZPmkcQ7Zvc VPXTdU1g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pDY4I-00GWnh-IT; Thu, 05 Jan 2023 21:46:34 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v2 14/24] page_pool: Convert page_pool_recycle_in_cache() to netmem Date: Thu, 5 Jan 2023 21:46:21 +0000 Message-Id: <20230105214631.3939268-15-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230105214631.3939268-1-willy@infradead.org> References: <20230105214631.3939268-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 02E874000A X-Rspam-User: X-Stat-Signature: edp1qjcacfh5kq55ccjngbbtbwigwoqs X-HE-Tag: 1672955195-687020 X-HE-Meta: 
Removes a few casts.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 net/core/page_pool.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3fa03baa80ee..b925a4dcb09b 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -538,7 +538,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool,
  *
  * Caller must provide appropriate safe context.
  */
-static bool page_pool_recycle_in_cache(struct page *page,
+static bool page_pool_recycle_in_cache(struct netmem *nmem,
 				       struct page_pool *pool)
 {
 	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
@@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
 	}
 
 	/* Caller MUST have verified/know (page_ref_count(page) == 1) */
-	pool->alloc.cache[pool->alloc.count++] = page_netmem(page);
+	pool->alloc.cache[pool->alloc.count++] = nmem;
 	recycle_stat_inc(pool, cached);
 	return true;
 }
@@ -580,7 +580,7 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem,
 					      dma_sync_size);
 
 	if (allow_direct && in_serving_softirq() &&
-	    page_pool_recycle_in_cache(netmem_page(nmem), pool))
+	    page_pool_recycle_in_cache(nmem, pool))
 		return NULL;
 
 	/* Page found as candidate for recycling */

From patchwork Thu Jan 5 21:46:22 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090555
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 15/24] page_pool: Remove __page_pool_put_page()
Date: Thu, 5 Jan 2023 21:46:22 +0000
Message-Id: <20230105214631.3939268-16-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
This wrapper is no longer used.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 net/core/page_pool.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b925a4dcb09b..c495e3a16e83 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -607,14 +607,6 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem,
 	return NULL;
 }
 
-static __always_inline struct page *
-__page_pool_put_page(struct page_pool *pool, struct page *page,
-		     unsigned int dma_sync_size, bool allow_direct)
-{
-	return netmem_page(__page_pool_put_netmem(pool, page_netmem(page),
-					dma_sync_size, allow_direct));
-}
-
 void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				    unsigned int dma_sync_size, bool allow_direct)
 {

From patchwork Thu Jan 5 21:46:23 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090546
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 16/24] page_pool: Use netmem in page_pool_drain_frag()
Date: Thu, 5 Jan 2023 21:46:23 +0000
Message-Id: <20230105214631.3939268-17-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
We're not quite ready to change the API of page_pool_drain_frag(), but
we can remove the use of several wrappers by using the netmem throughout.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 net/core/page_pool.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c495e3a16e83..cd469a9970e7 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -672,17 +672,17 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
-	if (likely(page_pool_defrag_page(page, drain_count)))
+	if (likely(page_pool_defrag_netmem(nmem, drain_count)))
 		return NULL;
 
-	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
+	if (netmem_ref_count(nmem) == 1 && !netmem_is_pfmemalloc(nmem)) {
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 			page_pool_dma_sync_for_device(pool, nmem, -1);
 
 		return page;
 	}
 
-	page_pool_return_page(pool, page);
+	page_pool_return_netmem(pool, nmem);
 	return NULL;
 }

From patchwork Thu Jan 5 21:46:24 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090545
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 17/24] page_pool: Convert page_pool_return_skb_page() to use netmem
Date: Thu, 5 Jan 2023 21:46:24 +0000
Message-Id: <20230105214631.3939268-18-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
This function accesses the page_pool members of struct page directly,
so it needs to become netmem.  Add page_pool_put_full_netmem() and
page_pool_recycle_netmem().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 14 +++++++++++-
 net/core/page_pool.c    | 13 ++++++-------
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index fbb653c9f1da..126c04315929 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -464,10 +464,16 @@ static inline void page_pool_put_page(struct page_pool *pool,
 }
 
 /* Same as above but will try to sync the entire area pool->max_len */
+static inline void page_pool_put_full_netmem(struct page_pool *pool,
+		struct netmem *nmem, bool allow_direct)
+{
+	page_pool_put_netmem(pool, nmem, -1, allow_direct);
+}
+
 static inline void page_pool_put_full_page(struct page_pool *pool,
 					   struct page *page, bool allow_direct)
 {
-	page_pool_put_page(pool, page, -1, allow_direct);
+	page_pool_put_full_netmem(pool, page_netmem(page), allow_direct);
 }
 
 /* Same as above but the caller must guarantee safe context. e.g NAPI */
@@ -477,6 +483,12 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	page_pool_put_full_page(pool, page, true);
 }
 
+static inline void page_pool_recycle_netmem(struct page_pool *pool,
+					    struct netmem *nmem)
+{
+	page_pool_put_full_netmem(pool, nmem, true);
+}
+
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index cd469a9970e7..ddf9f2bb85f7 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -886,28 +886,27 @@ EXPORT_SYMBOL(page_pool_update_nid);
 
 bool page_pool_return_skb_page(struct page *page)
 {
+	struct netmem *nmem = page_netmem(compound_head(page));
 	struct page_pool *pp;
 
-	page = compound_head(page);
-
-	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
+	/* nmem->pp_magic is OR'ed with PP_SIGNATURE after the allocation
 	 * in order to preserve any existing bits, such as bit 0 for the
 	 * head page of compound page and bit 1 for pfmemalloc page, so
 	 * mask those bits for freeing side when doing below checking,
-	 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
+	 * and netmem_is_pfmemalloc() is checked in __page_pool_put_netmem()
	 * to avoid recycling the pfmemalloc page.
 	 */
-	if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
+	if (unlikely((nmem->pp_magic & ~0x3UL) != PP_SIGNATURE))
 		return false;
 
-	pp = page->pp;
+	pp = nmem->pp;
 
 	/* Driver set this to memory recycling info. Reset it on recycle.
 	 * This will *not* work for NIC using a split-page memory model.
 	 * The page will be returned to the pool here regardless of the
 	 * 'flipped' fragment being in use or not.
 	 */
-	page_pool_put_full_page(pp, page, false);
+	page_pool_put_full_netmem(pp, nmem, false);
 
 	return true;
 }
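A short sketch of how a driver might use the two helpers added above; they
mirror page_pool_put_full_page() and page_pool_recycle_direct() for netmem.
The "myvnic" ring is the same made-up structure as in the earlier sketch;
only the helpers themselves come from this patch.

static void myvnic_free_rx_slot(struct myvnic_rx_ring *ring, unsigned int i)
{
	struct netmem *nmem = ring->slots[i];

	if (!nmem)
		return;
	/* Teardown path, not NAPI: direct recycling is not allowed */
	page_pool_put_full_netmem(ring->page_pool, nmem, false);
	ring->slots[i] = NULL;
}

static void myvnic_napi_drop(struct myvnic_rx_ring *ring, struct netmem *nmem)
{
	/* Caller guarantees softirq (NAPI) context */
	page_pool_recycle_netmem(ring->page_pool, nmem);
}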
From patchwork Thu Jan 5 21:46:25 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090550
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 18/24] page_pool: Convert frag_page to frag_nmem
Date: Thu, 5 Jan 2023 21:46:25 +0000
Message-Id: <20230105214631.3939268-19-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Remove page_pool_defrag_page() and page_pool_return_page() as they have
no more callers.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 17 ++++++---------
 net/core/page_pool.c    | 47 ++++++++++++++++++-----------------------
 2 files changed, 26 insertions(+), 38 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 126c04315929..a9dae4b5f2f7 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -262,7 +262,7 @@ struct page_pool {
 	u32 pages_state_hold_cnt;
 	unsigned int frag_offset;
-	struct page *frag_page;
+	struct netmem *frag_nmem;
 	long frag_users;
 
 #ifdef CONFIG_PAGE_POOL_STATS
@@ -334,8 +334,8 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 	return page_pool_alloc_pages(pool, gfp);
 }
 
-struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset,
-				  unsigned int size, gfp_t gfp);
+struct netmem *page_pool_alloc_frag(struct page_pool *pool,
+		unsigned int *offset, unsigned int size, gfp_t gfp);
 
 static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 						    unsigned int *offset,
@@ -343,7 +343,7 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 {
 	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
 
-	return page_pool_alloc_frag(pool, offset, size, gfp);
+	return netmem_page(page_pool_alloc_frag(pool, offset, size, gfp));
 }
 
 /* get the stored dma direction. A driver might decide to treat this locally and
@@ -399,9 +399,9 @@ void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				    unsigned int dma_sync_size,
 				    bool allow_direct);
 
-static inline void page_pool_fragment_page(struct page *page, long nr)
+static inline void page_pool_fragment_netmem(struct netmem *nmem, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	atomic_long_set(&nmem->pp_frag_count, nr);
 }
 
 static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr)
@@ -425,11 +425,6 @@ static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr)
 	return ret;
 }
 
-static inline long page_pool_defrag_page(struct page *page, long nr)
-{
-	return page_pool_defrag_netmem(page_netmem(page), nr);
-}
-
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
 					  struct netmem *nmem)
 {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ddf9f2bb85f7..5624cdae1f4e 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -222,12 +222,6 @@ EXPORT_SYMBOL(page_pool_create);
 
 static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm);
 
-static inline
-void page_pool_return_page(struct page_pool *pool, struct page *page)
-{
-	page_pool_return_netmem(pool, page_netmem(page));
-}
-
 noinline
 static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 {
@@ -665,10 +659,9 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
 
-static struct page *page_pool_drain_frag(struct page_pool *pool,
-					 struct page *page)
+static struct netmem *page_pool_drain_frag(struct page_pool *pool,
+					   struct netmem *nmem)
 {
-	struct netmem *nmem = page_netmem(page);
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
@@ -679,7 +672,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 			page_pool_dma_sync_for_device(pool, nmem, -1);
 
-		return page;
+		return nmem;
 	}
 
 	page_pool_return_netmem(pool, nmem);
@@ -689,22 +682,22 @@ static void
 page_pool_free_frag(struct page_pool *pool)
 {
 	long drain_count = BIAS_MAX - pool->frag_users;
-	struct page *page = pool->frag_page;
+	struct netmem *nmem = pool->frag_nmem;
 
-	pool->frag_page = NULL;
+	pool->frag_nmem = NULL;
 
-	if (!page || page_pool_defrag_page(page, drain_count))
+	if (!nmem || page_pool_defrag_netmem(nmem, drain_count))
 		return;
 
-	page_pool_return_page(pool, page);
+	page_pool_return_netmem(pool, nmem);
 }
 
-struct page *page_pool_alloc_frag(struct page_pool *pool,
+struct netmem *page_pool_alloc_frag(struct page_pool *pool,
 				  unsigned int *offset,
 				  unsigned int size, gfp_t gfp)
 {
 	unsigned int max_size = PAGE_SIZE << pool->p.order;
-	struct page *page = pool->frag_page;
+	struct netmem *nmem = pool->frag_nmem;
 
 	if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
 		    size > max_size))
@@ -713,35 +706,35 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
 	size = ALIGN(size, dma_get_cache_alignment());
 	*offset = pool->frag_offset;
 
-	if (page && *offset + size > max_size) {
-		page = page_pool_drain_frag(pool, page);
-		if (page) {
+	if (nmem && *offset + size > max_size) {
+		nmem = page_pool_drain_frag(pool, nmem);
+		if (nmem) {
 			alloc_stat_inc(pool, fast);
 			goto frag_reset;
 		}
 	}
 
-	if (!page) {
-		page = page_pool_alloc_pages(pool, gfp);
-		if (unlikely(!page)) {
-			pool->frag_page = NULL;
+	if (!nmem) {
+		nmem = page_pool_alloc_netmem(pool, gfp);
+		if (unlikely(!nmem)) {
+			pool->frag_nmem = NULL;
 			return NULL;
 		}
 
-		pool->frag_page = page;
+		pool->frag_nmem = nmem;
 
 frag_reset:
 		pool->frag_users = 1;
 		*offset = 0;
 		pool->frag_offset = size;
-		page_pool_fragment_page(page, BIAS_MAX);
-		return page;
+		page_pool_fragment_netmem(nmem, BIAS_MAX);
+		return nmem;
 	}
 
 	pool->frag_users++;
 	pool->frag_offset = *offset + size;
 	alloc_stat_inc(pool, fast);
-	return page;
+	return nmem;
 }
 EXPORT_SYMBOL(page_pool_alloc_frag);
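Caller-side sketch, not from the patch: drivers that want a struct page keep
calling page_pool_dev_alloc_frag(), which now unwraps the netmem via
netmem_page(); a netmem-aware caller can use page_pool_alloc_frag() directly.
The helper below is hypothetical and assumes netmem_to_virt() and
netmem_get_dma_addr() from earlier in the series.

static void *myvnic_get_frag(struct page_pool *pool, unsigned int len,
			     dma_addr_t *dma)
{
	unsigned int offset;
	struct netmem *nmem;

	nmem = page_pool_alloc_frag(pool, &offset, len,
				    GFP_ATOMIC | __GFP_NOWARN);
	if (!nmem)
		return NULL;

	/* Hand back both the CPU pointer and the DMA address of the frag */
	*dma = netmem_get_dma_addr(nmem) + offset;
	return netmem_to_virt(nmem) + offset;
}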
From patchwork Thu Jan 5 21:46:26 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090549
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 19/24] xdp: Convert to netmem
Date: Thu, 5 Jan 2023 21:46:26 +0000
Message-Id: <20230105214631.3939268-20-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
We dereference the 'pp' member of struct page, so we must use a netmem
here.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 net/core/xdp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 844c9d99dc0e..7520c3b27356 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -375,17 +375,18 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
 void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 		  struct xdp_buff *xdp)
 {
+	struct netmem *nmem;
 	struct page *page;
 
 	switch (mem->type) {
 	case MEM_TYPE_PAGE_POOL:
-		page = virt_to_head_page(data);
+		nmem = virt_to_netmem(data);
 		if (napi_direct && xdp_return_frame_no_direct())
 			napi_direct = false;
-		/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
+		/* No need to check ((nmem->pp_magic & ~0x3UL) == PP_SIGNATURE)
 		 * as mem->type knows this a page_pool page
 		 */
-		page_pool_put_full_page(page->pp, page, napi_direct);
+		page_pool_put_full_netmem(nmem->pp, nmem, napi_direct);
 		break;
 	case MEM_TYPE_PAGE_SHARED:
 		page_frag_free(data);

From patchwork Thu Jan 5 21:46:27 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090547
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 20/24] mm: Remove page pool members from struct page
Date: Thu, 5 Jan 2023 21:46:27 +0000
Message-Id: <20230105214631.3939268-21-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

These are now split out into their own netmem struct.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/linux/mm_types.h | 22 ----------------------
 include/net/page_pool.h  |  4 ----
 2 files changed, 26 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 603b615f1bf3..90d91088a9d5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -116,28 +116,6 @@ struct page {
 			 */
 			unsigned long private;
 		};
-		struct {	/* page_pool used by netstack */
-			/**
-			 * @pp_magic: magic value to avoid recycling non
-			 * page_pool allocated pages.
-			 */
-			unsigned long pp_magic;
-			struct page_pool *pp;
-			unsigned long _pp_mapping_pad;
-			unsigned long dma_addr;
-			union {
-				/**
-				 * dma_addr_upper: might require a 64-bit
-				 * value on 32-bit architectures.
-				 */
-				unsigned long dma_addr_upper;
-				/**
-				 * For frag page support, not supported in
-				 * 32-bit architectures with 64-bit DMA.
-				 */
-				atomic_long_t pp_frag_count;
-			};
-		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index a9dae4b5f2f7..c607d67c96dc 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -86,11 +86,7 @@ struct netmem {
 	static_assert(offsetof(struct page, pg) == offsetof(struct netmem, nm))
 NETMEM_MATCH(flags, flags);
 NETMEM_MATCH(lru, pp_magic);
-NETMEM_MATCH(pp, pp);
 NETMEM_MATCH(mapping, _pp_mapping_pad);
-NETMEM_MATCH(dma_addr, dma_addr);
-NETMEM_MATCH(dma_addr_upper, dma_addr_upper);
-NETMEM_MATCH(pp_frag_count, pp_frag_count);
 NETMEM_MATCH(_mapcount, _mapcount);
 NETMEM_MATCH(_refcount, _refcount);
 #undef NETMEM_MATCH
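For readers unfamiliar with the trick the remaining NETMEM_MATCH() lines rely
on, here is a stand-alone, simplified illustration in plain C11; it is not the
kernel macro. Two structs that overlay the same memory keep their shared
members pinned to identical offsets at compile time, which is why the members
removed from struct page above no longer need a matching assertion.

#include <assert.h>
#include <stddef.h>

struct base {				/* stand-in for struct page */
	unsigned long flags;
	void *owner;
	unsigned long addr;
};

struct view {				/* stand-in for struct netmem */
	unsigned long flags;
	void *pool;
	unsigned long dma_addr;
};

#define MATCH(b, v) \
	static_assert(offsetof(struct base, b) == offsetof(struct view, v), \
		      #b " and " #v " must stay at the same offset")

MATCH(flags, flags);
MATCH(owner, pool);
MATCH(addr, dma_addr);

int main(void)
{
	return 0;
}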
From patchwork Thu Jan 5 21:46:28 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090557
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 21/24] page_pool: Pass a netmem to init_callback()
Date: Thu, 5 Jan 2023 21:46:28 +0000
Message-Id: <20230105214631.3939268-22-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>
Convert the only user of init_callback.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Jesper Dangaard Brouer
Reviewed-by: Ilias Apalodimas
---
 include/net/page_pool.h | 2 +-
 net/bpf/test_run.c      | 4 ++--
 net/core/page_pool.c    | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index c607d67c96dc..d2f98b9dce13 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -181,7 +181,7 @@ struct page_pool_params {
 	enum dma_data_direction dma_dir; /* DMA mapping direction */
 	unsigned int	max_len; /* max DMA sync memory size */
 	unsigned int	offset;  /* DMA addr offset */
-	void (*init_callback)(struct page *page, void *arg);
+	void (*init_callback)(struct netmem *nmem, void *arg);
 	void *init_arg;
 };
 
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 2723623429ac..bd3c64e69f6e 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -116,9 +116,9 @@ struct xdp_test_data {
 #define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
 #define TEST_XDP_MAX_BATCH 256
 
-static void xdp_test_run_init_page(struct page *page, void *arg)
+static void xdp_test_run_init_page(struct netmem *nmem, void *arg)
 {
-	struct xdp_page_head *head = phys_to_virt(page_to_phys(page));
+	struct xdp_page_head *head = netmem_to_virt(nmem);
 	struct xdp_buff *new_ctx, *orig_ctx;
 	u32 headroom = XDP_PACKET_HEADROOM;
 	struct xdp_test_data *xdp = arg;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 5624cdae1f4e..a1e404a7397f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -334,7 +334,7 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 	nmem->pp = pool;
 	nmem->pp_magic |= PP_SIGNATURE;
 	if (pool->p.init_callback)
-		pool->p.init_callback(netmem_page(nmem), pool->p.init_arg);
+		pool->p.init_callback(nmem, pool->p.init_arg);
 }
 
 static void page_pool_clear_pp_info(struct netmem *nmem)
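A sketch of what another init_callback user would look like after this change;
only the new callback signature, netmem_to_virt() and the existing
page_pool_params/page_pool_create() interface are assumed, and the "myvnic"
pieces are made up.

static void myvnic_init_netmem(struct netmem *nmem, void *arg)
{
	struct myvnic_rx_ring *ring = arg;	/* hypothetical */

	/* Pre-fill the buffer headroom once, at allocation time */
	memset(netmem_to_virt(nmem), 0, ring->headroom);
}

static struct page_pool *myvnic_create_pool(struct myvnic_rx_ring *ring)
{
	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= ring->size,
		.nid		= NUMA_NO_NODE,
		.dev		= ring->dev,	/* hypothetical struct device */
		.dma_dir	= DMA_FROM_DEVICE,
		.init_callback	= myvnic_init_netmem,
		.init_arg	= ring,
	};

	return page_pool_create(&pp);
}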
From patchwork Thu Jan 5 21:46:29 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090544
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 22/24] net: Add support for netmem in skb_frag
Date: Thu, 5 Jan 2023 21:46:29 +0000
Message-Id: <20230105214631.3939268-23-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Allow drivers to add netmem to skbs & retrieve them again.

If the VM_BUG_ON triggers, we can add a call to compound_head() either
in this function or in page_netmem().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/skbuff.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 4c8492401a10..4b04240385cc 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3346,6 +3346,12 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
 	return frag->bv_page;
 }

+static inline struct netmem *skb_frag_netmem(const skb_frag_t *frag)
+{
+	VM_BUG_ON_PAGE(PageTail(frag->bv_page), frag->bv_page);
+	return page_netmem(frag->bv_page);
+}
+
 /**
  * __skb_frag_ref - take an addition reference on a paged fragment.
  * @frag: the paged fragment
@@ -3454,6 +3460,11 @@ static inline void __skb_frag_set_page(skb_frag_t *frag, struct page *page)
 	frag->bv_page = page;
 }

+static inline void __skb_frag_set_netmem(skb_frag_t *frag, struct netmem *nmem)
+{
+	__skb_frag_set_page(frag, netmem_page(nmem));
+}
+
 /**
  * skb_frag_set_page - sets the page contained in a paged fragment of an skb
  * @skb: the buffer
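
To make the intended usage concrete, here is a hypothetical sketch (not from the patch) of a driver storing a netmem in an skb fragment and later reading its DMA address back; my_rx_add_frag() and my_frag_dma_addr() are invented names, and netmem_get_dma_addr() is assumed from the first patch of this series.

#include <linux/skbuff.h>
#include <net/page_pool.h>

static void my_rx_add_frag(struct sk_buff *skb, struct netmem *nmem,
			   unsigned int offset, unsigned int len)
{
	struct skb_shared_info *sinfo = skb_shinfo(skb);
	skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];

	__skb_frag_set_netmem(frag, nmem);	/* store the netmem in the frag */
	skb_frag_off_set(frag, offset);
	skb_frag_size_set(frag, len);
	sinfo->nr_frags++;
}

static dma_addr_t my_frag_dma_addr(const skb_frag_t *frag)
{
	/* pull the netmem (not the page) back out and ask it for its DMA address */
	return netmem_get_dma_addr(skb_frag_netmem(frag)) + skb_frag_off(frag);
}
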
From patchwork Thu Jan 5 21:46:30 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090551
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 23/24] mvneta: Convert to netmem
Date: Thu, 5 Jan 2023 21:46:30 +0000
Message-Id: <20230105214631.3939268-24-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Use the netmem APIs instead of the page APIs. Improves type-safety.

Signed-off-by: Matthew Wilcox (Oracle)
---
 drivers/net/ethernet/marvell/mvneta.c | 48 +++++++++++++--------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index f8925cac61e4..6177d2ffd33c 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1931,15 +1931,15 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 			    gfp_t gfp_mask)
 {
 	dma_addr_t phys_addr;
-	struct page *page;
+	struct netmem *nmem;

-	page = page_pool_alloc_pages(rxq->page_pool,
+	nmem = page_pool_alloc_netmem(rxq->page_pool,
 				     gfp_mask | __GFP_NOWARN);
-	if (!page)
+	if (!nmem)
 		return -ENOMEM;

-	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
-	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
+	phys_addr = netmem_get_dma_addr(nmem) + pp->rx_offset_correction;
+	mvneta_rx_desc_fill(rx_desc, phys_addr, nmem, rxq);

 	return 0;
 }
@@ -2006,7 +2006,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 		if (!data || !(rx_desc->buf_phys_addr))
 			continue;

-		page_pool_put_full_page(rxq->page_pool, data, false);
+		page_pool_put_full_netmem(rxq->page_pool, data, false);
 	}
 	if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
 		xdp_rxq_info_unreg(&rxq->xdp_rxq);
@@ -2072,11 +2072,11 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		goto out;

 	for (i = 0; i < sinfo->nr_frags; i++)
-		page_pool_put_full_page(rxq->page_pool,
-					skb_frag_page(&sinfo->frags[i]), true);
+		page_pool_put_full_netmem(rxq->page_pool,
+					  skb_frag_netmem(&sinfo->frags[i]), true);

 out:
-	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
+	page_pool_put_netmem(rxq->page_pool, virt_to_netmem(xdp->data),
 			   sync_len, true);
 }
@@ -2088,7 +2088,6 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 	struct device *dev = pp->dev->dev.parent;
 	struct mvneta_tx_desc *tx_desc;
 	int i, num_frames = 1;
-	struct page *page;

 	if (unlikely(xdp_frame_has_frags(xdpf)))
 		num_frames += sinfo->nr_frags;
@@ -2123,9 +2122,10 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 		buf->type = MVNETA_TYPE_XDP_NDO;
 	} else {
-		page = unlikely(frag) ? skb_frag_page(frag)
-				      : virt_to_page(xdpf->data);
-		dma_addr = page_pool_get_dma_addr(page);
+		struct netmem *nmem = unlikely(frag) ?
+					skb_frag_netmem(frag) :
+					virt_to_netmem(xdpf->data);
+		dma_addr = netmem_get_dma_addr(nmem);
 		if (unlikely(frag))
 			dma_addr += skb_frag_off(frag);
 		else
@@ -2308,9 +2308,9 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
 		     struct xdp_buff *xdp, int *size,
-		     struct page *page)
+		     struct netmem *nmem)
 {
-	unsigned char *data = page_address(page);
+	unsigned char *data = netmem_to_virt(nmem);
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
@@ -2343,7 +2343,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
 			    struct xdp_buff *xdp, int *size,
-			    struct page *page)
+			    struct netmem *nmem)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	struct net_device *dev = pp->dev;
@@ -2371,16 +2371,16 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 		skb_frag_off_set(frag, pp->rx_offset_correction);
 		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		__skb_frag_set_netmem(frag, nmem);

 		if (!xdp_buff_has_frags(xdp)) {
 			sinfo->xdp_frags_size = *size;
 			xdp_buff_set_frags_flag(xdp);
 		}
-		if (page_is_pfmemalloc(page))
+		if (netmem_is_pfmemalloc(nmem))
 			xdp_buff_set_frag_pfmemalloc(xdp);
 	} else {
-		page_pool_put_full_page(rxq->page_pool, page, true);
+		page_pool_put_full_netmem(rxq->page_pool, nmem, true);
 	}
 	*size -= len;
 }
@@ -2440,10 +2440,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		u32 rx_status, index;
 		struct sk_buff *skb;
-		struct page *page;
+		struct netmem *nmem;

 		index = rx_desc - rxq->descs;
-		page = (struct page *)rxq->buf_virt_addr[index];
+		nmem = rxq->buf_virt_addr[index];
 		rx_status = rx_desc->status;
 		rx_proc++;
@@ -2461,17 +2461,17 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			desc_status = rx_status;

 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
-					     &size, page);
+					     &size, nmem);
 		} else {
 			if (unlikely(!xdp_buf.data_hard_start)) {
 				rx_desc->buf_phys_addr = 0;
-				page_pool_put_full_page(rxq->page_pool, page,
+				page_pool_put_full_netmem(rxq->page_pool, nmem,
 							true);
 				goto next;
 			}

 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    &size, page);
+						    &size, nmem);
 		}

 		/* Middle or Last descriptor */
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
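
The mvneta conversion repeats one pattern throughout: allocate a netmem instead of a page, take the DMA address from the netmem, and return buffers through the netmem variants of the page_pool put helpers. A stripped-down, hypothetical sketch of that pattern (my_rx_queue, my_refill_one() and my_drop_one() are invented names; it assumes the pool was created with PP_FLAG_DMA_MAP so the pool owns the DMA mapping):

#include <net/page_pool.h>

struct my_rx_queue {
	struct page_pool *page_pool;
};

static int my_refill_one(struct my_rx_queue *rxq, dma_addr_t *phys_addr)
{
	struct netmem *nmem;

	/* page_pool_alloc_netmem() stands in for page_pool_alloc_pages() */
	nmem = page_pool_alloc_netmem(rxq->page_pool, GFP_ATOMIC | __GFP_NOWARN);
	if (!nmem)
		return -ENOMEM;

	/* the DMA address now comes straight from the netmem */
	*phys_addr = netmem_get_dma_addr(nmem);
	return 0;
}

static void my_drop_one(struct my_rx_queue *rxq, struct netmem *nmem)
{
	/* page_pool_put_full_netmem() stands in for page_pool_put_full_page() */
	page_pool_put_full_netmem(rxq->page_pool, nmem, false);
}
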
From patchwork Thu Jan 5 21:46:31 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13090556
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 24/24] mlx5: Convert to netmem
Date: Thu, 5 Jan 2023 21:46:31 +0000
Message-Id: <20230105214631.3939268-25-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Use the netmem APIs instead of the page_pool APIs. Possibly we should
add a netmem equivalent of skb_add_rx_frag(), but that can happen later.

Saves one call to compound_head() in the call to put_page() in
mlx5e_page_release_dynamic() which saves 58 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Tariq Toukan
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  10 +-
 .../net/ethernet/mellanox/mlx5/core/en/txrx.h |   4 +-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  24 ++--
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |   2 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  12 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 130 +++++++++---------
 6 files changed, 94 insertions(+), 88 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 2d77fb8a8a01..35bff3b0d9f6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -467,7 +467,7 @@ struct mlx5e_txqsq {
 } ____cacheline_aligned_in_smp;

 union mlx5e_alloc_unit {
-	struct page *page;
+	struct netmem *nmem;
 	struct xdp_buff *xsk;
 };
@@ -501,7 +501,7 @@ struct mlx5e_xdp_info {
 		} frame;
 		struct {
 			struct mlx5e_rq *rq;
-			struct page *page;
+			struct netmem *nmem;
 		} page;
 	};
 };
@@ -619,7 +619,7 @@ struct mlx5e_mpw_info {
 struct mlx5e_page_cache {
 	u32 head;
 	u32 tail;
-	struct page *page_cache[MLX5E_CACHE_SIZE];
+	struct netmem *page_cache[MLX5E_CACHE_SIZE];
 };

 struct mlx5e_rq;
@@ -657,13 +657,13 @@ struct mlx5e_rq_frags_info {

 struct mlx5e_dma_info {
 	dma_addr_t addr;
-	struct page *page;
+	struct netmem *nmem;
 };

 struct mlx5e_shampo_hd {
 	u32 mkey;
 	struct mlx5e_dma_info *info;
-	struct page *last_page;
+	struct netmem *last_nmem;
 	u16 hd_per_wq;
 	u16 hd_per_wqe;
 	unsigned long *bitmap;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 853f312cd757..688d3ea9aa36 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -65,8 +65,8 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget);
 int mlx5e_poll_ico_cq(struct mlx5e_cq *cq);

 /* RX */
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page);
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle);
+void mlx5e_nmem_dma_unmap(struct mlx5e_rq *rq, struct netmem *nmem);
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct netmem *nmem, bool recycle);
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq));
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq));
 int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 20507ef2f956..878e4e9f0f8b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -32,6 +32,7 @@
 #include
 #include
+#include "en/txrx.h"
 #include "en/xdp.h"
 #include "en/params.h"
@@ -57,7 +58,7 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)

 static inline bool
 mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
-		    struct page *page, struct xdp_buff *xdp)
+		    struct netmem *nmem, struct xdp_buff *xdp)
 {
 	struct skb_shared_info *sinfo = NULL;
 	struct mlx5e_xmit_data xdptxd;
@@ -116,7 +117,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
 	xdpi.page.rq = rq;

-	dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf);
+	dma_addr = netmem_get_dma_addr(nmem) + (xdpf->data - (void *)xdpf);
 	dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_BIDIRECTIONAL);

 	if (unlikely(xdp_frame_has_frags(xdpf))) {
@@ -127,7 +128,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 			dma_addr_t addr;
 			u32 len;

-			addr = page_pool_get_dma_addr(skb_frag_page(frag)) +
+			addr = netmem_get_dma_addr(skb_frag_netmem(frag)) +
 			       skb_frag_off(frag);
 			len = skb_frag_size(frag);
 			dma_sync_single_for_device(sq->pdev, addr, len,
@@ -141,14 +142,14 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 					  mlx5e_xmit_xdp_frame, sq, &xdptxd, sinfo, 0)))
 		return false;

-	xdpi.page.page = page;
+	xdpi.page.nmem = nmem;
 	mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);

 	if (unlikely(xdp_frame_has_frags(xdpf))) {
 		for (i = 0; i < sinfo->nr_frags; i++) {
 			skb_frag_t *frag = &sinfo->frags[i];

-			xdpi.page.page = skb_frag_page(frag);
+			xdpi.page.nmem = skb_frag_netmem(frag);
 			mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 		}
 	}
@@ -157,7 +158,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 }

 /* returns true if packet was consumed by xdp */
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct netmem *nmem,
 		      struct bpf_prog *prog, struct xdp_buff *xdp)
 {
 	u32 act;
@@ -168,19 +169,19 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 	case XDP_PASS:
 		return false;
 	case XDP_TX:
-		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, page, xdp)))
+		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, nmem, xdp)))
 			goto xdp_abort;
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
 		return true;
 	case XDP_REDIRECT:
-		/* When XDP enabled then page-refcnt==1 here */
+		/* When XDP enabled then nmem->refcnt==1 here */
 		err = xdp_do_redirect(rq->netdev, xdp, prog);
 		if (unlikely(err))
 			goto xdp_abort;
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
 		__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
 		if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL)
-			mlx5e_page_dma_unmap(rq, page);
+			mlx5e_nmem_dma_unmap(rq, nmem);
 		rq->stats->xdp_redirect++;
 		return true;
 	default:
@@ -445,7 +446,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 			skb_frag_t *frag = &sinfo->frags[i];
 			dma_addr_t addr;

-			addr = page_pool_get_dma_addr(skb_frag_page(frag)) +
+			addr = netmem_get_dma_addr(skb_frag_netmem(frag)) +
 				skb_frag_off(frag);
 			dseg++;
@@ -495,7 +496,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			break;
 		case MLX5E_XDP_XMIT_MODE_PAGE:
 			/* XDP_TX from the regular RQ */
-			mlx5e_page_release_dynamic(xdpi.page.rq, xdpi.page.page, recycle);
+			mlx5e_page_release_dynamic(xdpi.page.rq,
+						   xdpi.page.nmem, recycle);
 			break;
 		case MLX5E_XDP_XMIT_MODE_XSK:
 			/* AF_XDP send */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index bc2d9034af5b..5bc875f131a2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -46,7 +46,7 @@ struct mlx5e_xsk_param;

 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct netmem *nmem,
 		      struct bpf_prog *prog, struct xdp_buff *xdp);
 void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index cff5f2e29e1e..7c2a1ecd730b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -555,16 +555,18 @@ static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work)

 static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
 {
-	rq->wqe_overflow.page = alloc_page(GFP_KERNEL);
-	if (!rq->wqe_overflow.page)
+	struct page *page = alloc_page(GFP_KERNEL);
+	if (!page)
 		return -ENOMEM;

-	rq->wqe_overflow.addr = dma_map_page(rq->pdev, rq->wqe_overflow.page, 0,
+	rq->wqe_overflow.addr = dma_map_page(rq->pdev, page, 0,
 					     PAGE_SIZE, rq->buff.map_dir);
 	if (dma_mapping_error(rq->pdev, rq->wqe_overflow.addr)) {
-		__free_page(rq->wqe_overflow.page);
+		__free_page(page);
 		return -ENOMEM;
 	}
+
+	rq->wqe_overflow.nmem = page_netmem(page);
 	return 0;
 }
@@ -572,7 +574,7 @@ static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
 {
 	dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, PAGE_SIZE,
 		       rq->buff.map_dir);
-	__free_page(rq->wqe_overflow.page);
+	__free_page(netmem_page(rq->wqe_overflow.nmem));
 }

 static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index c8820ab22169..11c1bf3f485d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -274,7 +274,7 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,
 	return mlx5e_decompress_cqes_cont(rq, wq, 1, budget_rem);
 }

-static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page)
+static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct netmem *nmem)
 {
 	struct mlx5e_page_cache *cache = &rq->page_cache;
 	u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1);
@@ -285,12 +285,12 @@ static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page)
 		return false;
 	}

-	if (!dev_page_is_reusable(page)) {
+	if (!dev_page_is_reusable(netmem_page(nmem))) {
 		stats->cache_waive++;
 		return false;
 	}

-	cache->page_cache[cache->tail] = page;
+	cache->page_cache[cache->tail] = nmem;
 	cache->tail = tail_next;
 	return true;
 }
@@ -306,16 +306,16 @@ static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq, union mlx5e_alloc_uni
 		return false;
 	}

-	if (page_ref_count(cache->page_cache[cache->head]) != 1) {
+	if (netmem_ref_count(cache->page_cache[cache->head]) != 1) {
 		stats->cache_busy++;
 		return false;
 	}

-	au->page = cache->page_cache[cache->head];
+	au->nmem = cache->page_cache[cache->head];
 	cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);
 	stats->cache_reuse++;

-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	/* Non-XSK always uses PAGE_SIZE. */
 	dma_sync_single_for_device(rq->pdev, addr, PAGE_SIZE, rq->buff.map_dir);
 	return true;
@@ -328,43 +328,45 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq, union mlx5e_alloc_u
 	if (mlx5e_rx_cache_get(rq, au))
 		return 0;

-	au->page = page_pool_dev_alloc_pages(rq->page_pool);
-	if (unlikely(!au->page))
+	au->nmem = page_pool_dev_alloc_netmem(rq->page_pool);
+	if (unlikely(!au->nmem))
 		return -ENOMEM;

 	/* Non-XSK always uses PAGE_SIZE. */
-	addr = dma_map_page(rq->pdev, au->page, 0, PAGE_SIZE, rq->buff.map_dir);
+	addr = dma_map_page(rq->pdev, netmem_page(au->nmem), 0, PAGE_SIZE,
+			    rq->buff.map_dir);
 	if (unlikely(dma_mapping_error(rq->pdev, addr))) {
-		page_pool_recycle_direct(rq->page_pool, au->page);
-		au->page = NULL;
+		page_pool_recycle_netmem(rq->page_pool, au->nmem);
+		au->nmem = NULL;
 		return -ENOMEM;
 	}

-	page_pool_set_dma_addr(au->page, addr);
+	netmem_set_dma_addr(au->nmem, addr);
 	return 0;
 }

-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page)
+void mlx5e_nmem_dma_unmap(struct mlx5e_rq *rq, struct netmem *nmem)
 {
-	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+	dma_addr_t dma_addr = netmem_get_dma_addr(nmem);

 	dma_unmap_page_attrs(rq->pdev, dma_addr, PAGE_SIZE, rq->buff.map_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC);
-	page_pool_set_dma_addr(page, 0);
+	netmem_set_dma_addr(nmem, 0);
 }

-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle)
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct netmem *nmem,
+				bool recycle)
 {
 	if (likely(recycle)) {
-		if (mlx5e_rx_cache_put(rq, page))
+		if (mlx5e_rx_cache_put(rq, nmem))
 			return;

-		mlx5e_page_dma_unmap(rq, page);
-		page_pool_recycle_direct(rq->page_pool, page);
+		mlx5e_nmem_dma_unmap(rq, nmem);
+		page_pool_recycle_netmem(rq->page_pool, nmem);
 	} else {
-		mlx5e_page_dma_unmap(rq, page);
-		page_pool_release_page(rq->page_pool, page);
-		put_page(page);
+		mlx5e_nmem_dma_unmap(rq, nmem);
+		page_pool_release_netmem(rq->page_pool, nmem);
+		netmem_put(nmem);
 	}
 }
@@ -389,7 +391,7 @@ static inline void mlx5e_put_rx_frag(struct mlx5e_rq *rq,
 				     bool recycle)
 {
 	if (frag->last_in_page)
-		mlx5e_page_release_dynamic(rq, frag->au->page, recycle);
+		mlx5e_page_release_dynamic(rq, frag->au->nmem, recycle);
 }

 static inline struct mlx5e_wqe_frag_info *get_frag(struct mlx5e_rq *rq, u16 ix)
@@ -413,7 +415,7 @@ static int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe_cyc *wqe,
 			goto free_frags;

 		headroom = i == 0 ? rq->buff.headroom : 0;
-		addr = page_pool_get_dma_addr(frag->au->page);
+		addr = netmem_get_dma_addr(frag->au->nmem);
 		wqe->data[i].addr = cpu_to_be64(addr + frag->offset + headroom);
 	}
@@ -475,21 +477,21 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb,
 		   union mlx5e_alloc_unit *au, u32 frag_offset, u32 len,
 		   unsigned int truesize)
 {
-	dma_addr_t addr = page_pool_get_dma_addr(au->page);
+	dma_addr_t addr = netmem_get_dma_addr(au->nmem);

 	dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len,
 				rq->buff.map_dir);
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-			au->page, frag_offset, len, truesize);
+			netmem_page(au->nmem), frag_offset, len, truesize);
 }

 static inline void
 mlx5e_copy_skb_header(struct mlx5e_rq *rq, struct sk_buff *skb,
-		      struct page *page, dma_addr_t addr,
+		      struct netmem *nmem, dma_addr_t addr,
 		      int offset_from, int dma_offset, u32 headlen)
 {
-	const void *from = page_address(page) + offset_from;
+	const void *from = netmem_address(nmem) + offset_from;
 	/* Aligning len to sizeof(long) optimizes memcpy performance */
 	unsigned int len = ALIGN(headlen, sizeof(long));
@@ -522,7 +524,7 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle
 	} else {
 		for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
 			if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
-				mlx5e_page_release_dynamic(rq, alloc_units[i].page, recycle);
+				mlx5e_page_release_dynamic(rq, alloc_units[i].nmem, recycle);
 	}
 }
@@ -586,7 +588,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
 	u16 entries, pi, header_offset, err, wqe_bbs, new_entries;
 	u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey;
-	struct page *page = shampo->last_page;
+	struct netmem *nmem = shampo->last_nmem;
 	u64 addr = shampo->last_addr;
 	struct mlx5e_dma_info *dma_info;
 	struct mlx5e_umr_wqe *umr_wqe;
@@ -613,11 +615,11 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 			err = mlx5e_page_alloc_pool(rq, &au);
 			if (unlikely(err))
 				goto err_unmap;
-			page = dma_info->page = au.page;
-			addr = dma_info->addr = page_pool_get_dma_addr(au.page);
+			nmem = dma_info->nmem = au.nmem;
+			addr = dma_info->addr = netmem_get_dma_addr(au.nmem);
 		} else {
 			dma_info->addr = addr + header_offset;
-			dma_info->page = page;
+			dma_info->nmem = nmem;
 		}

 update_klm:
@@ -635,7 +637,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 	};

 	shampo->pi = (shampo->pi + new_entries) & (shampo->hd_per_wq - 1);
-	shampo->last_page = page;
+	shampo->last_nmem = nmem;
 	shampo->last_addr = addr;
 	sq->pc += wqe_bbs;
 	sq->doorbell_cseg = &umr_wqe->ctrl;
@@ -647,7 +649,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 		dma_info = &shampo->info[--index];
 		if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) {
 			dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE);
-			mlx5e_page_release_dynamic(rq, dma_info->page, true);
+			mlx5e_page_release_dynamic(rq, dma_info->nmem, true);
 		}
 	}
 	rq->stats->buff_alloc_err++;
@@ -721,7 +723,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		err = mlx5e_page_alloc_pool(rq, au);
 		if (unlikely(err))
 			goto err_unmap;
-		addr = page_pool_get_dma_addr(au->page);
+		addr = netmem_get_dma_addr(au->nmem);
 		umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
 			.ptag = cpu_to_be64(addr | MLX5_EN_WR),
 		};
@@ -763,7 +765,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 err_unmap:
 	while (--i >= 0) {
 		au--;
-		mlx5e_page_release_dynamic(rq, au->page, true);
+		mlx5e_page_release_dynamic(rq, au->nmem, true);
 	}

 err:
@@ -782,7 +784,7 @@ void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close
 {
 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
 	int hd_per_wq = shampo->hd_per_wq;
-	struct page *deleted_page = NULL;
+	struct netmem *deleted_nmem = NULL;
 	struct mlx5e_dma_info *hd_info;
 	int i, index = start;
@@ -795,9 +797,9 @@ void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close
 		hd_info = &shampo->info[index];
 		hd_info->addr = ALIGN_DOWN(hd_info->addr, PAGE_SIZE);
-		if (hd_info->page != deleted_page) {
-			deleted_page = hd_info->page;
-			mlx5e_page_release_dynamic(rq, hd_info->page, false);
+		if (hd_info->nmem != deleted_nmem) {
+			deleted_nmem = hd_info->nmem;
+			mlx5e_page_release_dynamic(rq, hd_info->nmem, false);
 		}
 	}
@@ -1136,7 +1138,7 @@ static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index)
 	struct mlx5e_dma_info *last_head = &rq->mpwqe.shampo->info[header_index];
 	u16 head_offset = (last_head->addr & (PAGE_SIZE - 1)) + rq->buff.headroom;

-	return page_address(last_head->page) + head_offset;
+	return netmem_address(last_head->nmem) + head_offset;
 }

 static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4)
@@ -1595,11 +1597,11 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	dma_addr_t addr;
 	u32 frag_size;

-	va = page_address(au->page) + wi->offset;
+	va = netmem_address(au->nmem) + wi->offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);

-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
 				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);
@@ -1610,7 +1612,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp))
+		if (mlx5e_xdp_handle(rq, au->nmem, prog, &xdp))
 			return NULL; /* page/packet was consumed by XDP */

 		rx_headroom = xdp.data - xdp.data_hard_start;
@@ -1623,7 +1625,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		return NULL;

 	/* queue up for recycling/reuse */
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);

 	return skb;
 }
@@ -1645,10 +1647,10 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	u32 truesize;
 	void *va;

-	va = page_address(au->page) + wi->offset;
+	va = netmem_address(au->nmem) + wi->offset;
 	frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);

-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
 				      rq->buff.frame0_sz, rq->buff.map_dir);
 	net_prefetchw(va); /* xdp_frame data area */
@@ -1669,7 +1671,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);

-		addr = page_pool_get_dma_addr(au->page);
+		addr = netmem_get_dma_addr(au->nmem);
 		dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
 					frag_consumed_bytes, rq->buff.map_dir);
@@ -1683,11 +1685,11 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 		}

 		frag = &sinfo->frags[sinfo->nr_frags++];
-		__skb_frag_set_page(frag, au->page);
+		__skb_frag_set_netmem(frag, au->nmem);
 		skb_frag_off_set(frag, wi->offset);
 		skb_frag_size_set(frag, frag_consumed_bytes);

-		if (page_is_pfmemalloc(au->page))
+		if (netmem_is_pfmemalloc(au->nmem))
 			xdp_buff_set_frag_pfmemalloc(&xdp);

 		sinfo->xdp_frags_size += frag_consumed_bytes;
@@ -1701,7 +1703,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	au = head_wi->au;

 	prog = rcu_dereference(rq->xdp_prog);
-	if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+	if (prog && mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) {
 		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			int i;
@@ -1718,7 +1720,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
 	if (unlikely(!skb))
 		return NULL;

-	page_ref_inc(au->page);
+	netmem_get(au->nmem);

 	if (unlikely(xdp_buff_has_frags(&xdp))) {
 		int i;
@@ -1967,8 +1969,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w
 	mlx5e_fill_skb_data(skb, rq, au, byte_cnt, frag_offset);
 	/* copy header */
-	addr = page_pool_get_dma_addr(head_au->page);
-	mlx5e_copy_skb_header(rq, skb, head_au->page, addr,
+	addr = netmem_get_dma_addr(head_au->nmem);
+	mlx5e_copy_skb_header(rq, skb, head_au->nmem, addr,
 			      head_offset, head_offset, headlen);
 	/* skb linear part was allocated with headlen and aligned to long */
 	skb->tail += headlen;
@@ -1996,11 +1998,11 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		return NULL;
 	}

-	va = page_address(au->page) + head_offset;
+	va = netmem_address(au->nmem) + head_offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);

-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset,
 				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);
@@ -2011,7 +2013,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+		if (mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 			return NULL; /* page/packet was consumed by XDP */
@@ -2027,7 +2029,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		return NULL;

 	/* queue up for recycling/reuse */
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);

 	return skb;
 }
@@ -2044,7 +2046,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 	void *hdr, *data;
 	u32 frag_size;

-	hdr = page_address(head->page) + head_offset;
+	hdr = netmem_address(head->nmem) + head_offset;
 	data = hdr + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size);
@@ -2059,7 +2061,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 			return NULL;

 		/* queue up for recycling/reuse */
-		page_ref_inc(head->page);
+		netmem_get(head->nmem);
 	} else {
 		/* allocate SKB and copy header for large header */
@@ -2072,7 +2074,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		}

 		prefetchw(skb->data);
-		mlx5e_copy_skb_header(rq, skb, head->page, head->addr,
+		mlx5e_copy_skb_header(rq, skb, head->nmem, head->addr,
 				      head_offset + rx_headroom, rx_headroom, head_size);
 		/* skb linear part was allocated with headlen and aligned to long */
@@ -2124,7 +2126,7 @@ mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index)
 	if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) {
 		shampo->info[header_index].addr = ALIGN_DOWN(addr, PAGE_SIZE);
-		mlx5e_page_release_dynamic(rq, shampo->info[header_index].page, true);
+		mlx5e_page_release_dynamic(rq, shampo->info[header_index].nmem, true);
 	}
 	bitmap_clear(shampo->bitmap, header_index, 1);
 }
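
For readers tracking the release path in this last patch: the recycle-vs-release split is unchanged, it is merely expressed with netmem helpers, and what used to be put_page() becomes netmem_put(). A condensed, hypothetical sketch of that split (my_release() is an invented name; the mlx5 per-ring page cache and error handling are omitted):

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

static void my_release(struct device *dev, struct page_pool *pool,
		       struct netmem *nmem, bool recycle)
{
	/* this driver maps its buffers itself, so it also unmaps them itself */
	dma_unmap_page_attrs(dev, netmem_get_dma_addr(nmem), PAGE_SIZE,
			     DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
	netmem_set_dma_addr(nmem, 0);

	if (recycle) {
		/* hand the buffer back to the pool's fast path */
		page_pool_recycle_netmem(pool, nmem);
	} else {
		/* detach from the pool, then drop the reference (was put_page()) */
		page_pool_release_netmem(pool, nmem);
		netmem_put(nmem);
	}
}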