From patchwork Wed Nov 30 22:07:40 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 01/24] netmem: Create new type
Date: Wed, 30 Nov 2022 22:07:40 +0000
Message-Id: <20221130220803.3657490-2-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

As part of simplifying struct page, create a new netmem type which
mirrors the page_pool members in struct page.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 813c93499f20..af6ff8c302a0 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -50,6 +50,47 @@
 				PP_FLAG_DMA_SYNC_DEV |\
 				PP_FLAG_PAGE_FRAG)
 
+/* page_pool used by netstack */
+struct netmem {
+	unsigned long flags;		/* Page flags */
+	/**
+	 * @pp_magic: magic value to avoid recycling non
+	 * page_pool allocated pages.
+	 */
+	unsigned long pp_magic;
+	struct page_pool *pp;
+	unsigned long _pp_mapping_pad;
+	unsigned long dma_addr;
+	union {
+		/**
+		 * dma_addr_upper: might require a 64-bit
+		 * value on 32-bit architectures.
+		 */
+		unsigned long dma_addr_upper;
+		/**
+		 * For frag page support, not supported in
+		 * 32-bit architectures with 64-bit DMA.
+		 */
+		atomic_long_t pp_frag_count;
+	};
+	atomic_t _mapcount;
+	atomic_t _refcount;
+};
+
+#define NETMEM_MATCH(pg, nm)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct netmem, nm))
+NETMEM_MATCH(flags, flags);
+NETMEM_MATCH(lru, pp_magic);
+NETMEM_MATCH(pp, pp);
+NETMEM_MATCH(mapping, _pp_mapping_pad);
+NETMEM_MATCH(dma_addr, dma_addr);
+NETMEM_MATCH(dma_addr_upper, dma_addr_upper);
+NETMEM_MATCH(pp_frag_count, pp_frag_count);
+NETMEM_MATCH(_mapcount, _mapcount);
+NETMEM_MATCH(_refcount, _refcount);
+#undef NETMEM_MATCH
+static_assert(sizeof(struct netmem) <= sizeof(struct page));
+
 /*
  * Fast allocation side cache array/stack
  *

From patchwork Wed Nov 30 22:07:41 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 02/24] netmem: Add utility functions
Date: Wed, 30 Nov 2022 22:07:41 +0000
Message-Id: <20221130220803.3657490-3-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

netmem_page() is defined this way to preserve constness.  page_netmem()
doesn't call compound_head() because netmem users always use the head
page; it does include a debugging assert to check that it's true.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 42 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index af6ff8c302a0..0ce20b95290b 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -91,6 +91,48 @@ NETMEM_MATCH(_refcount, _refcount);
 #undef NETMEM_MATCH
 static_assert(sizeof(struct netmem) <= sizeof(struct page));
 
+#define netmem_page(nmem) (_Generic((*nmem),				\
+	const struct netmem:	(const struct page *)nmem,		\
+	struct netmem:		(struct page *)nmem))
+
+static inline struct netmem *page_netmem(struct page *page)
+{
+	VM_BUG_ON_PAGE(PageTail(page), page);
+	return (struct netmem *)page;
+}
+
+static inline unsigned long netmem_pfn(const struct netmem *nmem)
+{
+	return page_to_pfn(netmem_page(nmem));
+}
+
+static inline unsigned long netmem_nid(const struct netmem *nmem)
+{
+	return page_to_nid(netmem_page(nmem));
+}
+
+static inline struct netmem *virt_to_netmem(const void *x)
+{
+	return page_netmem(virt_to_head_page(x));
+}
+
+static inline int netmem_ref_count(const struct netmem *nmem)
+{
+	return page_ref_count(netmem_page(nmem));
+}
+
+static inline void netmem_put(struct netmem *nmem)
+{
+	struct folio *folio = (struct folio *)nmem;
+
+	return folio_put(folio);
+}
+
+static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
+{
+	return nmem->pp_magic & BIT(1);
+}
+
 /*
  * Fast allocation side cache array/stack
  *

From patchwork Wed Nov 30 22:07:42 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 03/24] page_pool: Add netmem_set_dma_addr() and netmem_get_dma_addr()
Date: Wed, 30 Nov 2022 22:07:42 +0000
Message-Id: <20221130220803.3657490-4-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

Turn page_pool_set_dma_addr() and page_pool_get_dma_addr() into wrappers.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 0ce20b95290b..a68746a5b99c 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -427,21 +427,31 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem)
 {
-	dma_addr_t ret = page->dma_addr;
+	dma_addr_t ret = nmem->dma_addr;
 
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
-		ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16;
+		ret |= (dma_addr_t)nmem->dma_addr_upper << 16 << 16;
 
 	return ret;
 }
 
-static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+{
+	return netmem_get_dma_addr(page_netmem(page));
+}
+
+static inline void netmem_set_dma_addr(struct netmem *nmem, dma_addr_t addr)
 {
-	page->dma_addr = addr;
+	nmem->dma_addr = addr;
 	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
-		page->dma_addr_upper = upper_32_bits(addr);
+		nmem->dma_addr_upper = upper_32_bits(addr);
+}
+
+static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+	netmem_set_dma_addr(page_netmem(page), addr);
 }
 
 static inline bool is_page_pool_compiled_in(void)

From patchwork Wed Nov 30 22:07:43 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 04/24] page_pool: Convert page_pool_release_page() to page_pool_release_netmem()
Date: Wed, 30 Nov 2022 22:07:43 +0000
Message-Id: <20221130220803.3657490-5-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

Also convert page_pool_clear_pp_info() and trace_page_pool_state_release()
to take a netmem.  Include a wrapper for page_pool_release_page() to avoid
converting all callers.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h          | 14 ++++++++++----
 include/trace/events/page_pool.h | 14 +++++++-------
 net/core/page_pool.c             | 18 +++++++++---------
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index a68746a5b99c..453797f9cb90 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -18,7 +18,7 @@
  *
  * API keeps track of in-flight pages, in-order to let API user know
  * when it is safe to dealloactor page_pool object. Thus, API users
- * must make sure to call page_pool_release_page() when a page is
+ * must make sure to call page_pool_release_netmem() when a page is
  * "leaving" the page_pool. Or call page_pool_put_page() where
  * appropiate. For maintaining correct accounting.
  *
@@ -332,7 +332,7 @@ struct xdp_mem_info;
 void page_pool_destroy(struct page_pool *pool);
 void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
 			   struct xdp_mem_info *mem);
-void page_pool_release_page(struct page_pool *pool, struct page *page);
+void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem);
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			     int count);
 #else
@@ -345,8 +345,8 @@
 static inline void page_pool_use_xdp_mem(struct page_pool *pool,
 					 struct xdp_mem_info *mem)
 {
 }
-static inline void page_pool_release_page(struct page_pool *pool,
-					  struct page *page)
+static inline void page_pool_release_netmem(struct page_pool *pool,
+					    struct netmem *nmem)
 {
 }
@@ -356,6 +356,12 @@
 }
 #endif
 
+static inline void page_pool_release_page(struct page_pool *pool,
+					  struct page *page)
+{
+	page_pool_release_netmem(pool, page_netmem(page));
+}
+
 void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);

diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index ca534501158b..113aad0c9e5b 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -42,26 +42,26 @@ TRACE_EVENT(page_pool_release,
 TRACE_EVENT(page_pool_state_release,
 
 	TP_PROTO(const struct page_pool *pool,
-		 const struct page *page, u32 release),
+		 const struct netmem *nmem, u32 release),
 
-	TP_ARGS(pool, page, release),
+	TP_ARGS(pool, nmem, release),
 
 	TP_STRUCT__entry(
 		__field(const struct page_pool *,	pool)
-		__field(const struct page *,		page)
+		__field(const struct netmem *,		nmem)
 		__field(u32,				release)
 		__field(unsigned long,			pfn)
 	),
 
 	TP_fast_assign(
 		__entry->pool = pool;
-		__entry->page = page;
+		__entry->nmem = nmem;
 		__entry->release = release;
-		__entry->pfn = page_to_pfn(page);
+		__entry->pfn = netmem_pfn(nmem);
 	),
 
-	TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
-		  __entry->pool, __entry->page, __entry->pfn, __entry->release)
+	TP_printk("page_pool=%p nmem=%p pfn=0x%lx release=%u",
+		  __entry->pool, __entry->nmem, __entry->pfn, __entry->release)
 );
 
 TRACE_EVENT(page_pool_state_hold,

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9b203d8660e4..437241aba5a7 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -336,10 +336,10 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 		pool->p.init_callback(page, pool->p.init_arg);
 }
 
-static void page_pool_clear_pp_info(struct page *page)
+static void page_pool_clear_pp_info(struct netmem *nmem)
 {
-	page->pp_magic = 0;
-	page->pp = NULL;
+	nmem->pp_magic = 0;
+	nmem->pp = NULL;
 }
 
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
@@ -467,7 +467,7 @@ static s32 page_pool_inflight(struct page_pool *pool)
  * a regular page (that will eventually be returned to the normal
  * page-allocator via put_page).
  */
-void page_pool_release_page(struct page_pool *pool, struct page *page)
+void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem)
 {
 	dma_addr_t dma;
 	int count;
@@ -478,23 +478,23 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 	 */
 		goto skip_dma_unmap;
 
-	dma = page_pool_get_dma_addr(page);
+	dma = netmem_get_dma_addr(nmem);
 
 	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC);
-	page_pool_set_dma_addr(page, 0);
+	netmem_set_dma_addr(nmem, 0);
 skip_dma_unmap:
-	page_pool_clear_pp_info(page);
+	page_pool_clear_pp_info(nmem);
 
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
 	 */
 	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
-	trace_page_pool_state_release(pool, page, count);
+	trace_page_pool_state_release(pool, nmem, count);
 }
-EXPORT_SYMBOL(page_pool_release_page);
+EXPORT_SYMBOL(page_pool_release_netmem);
 
 /* Return a page to the page allocator, cleaning up our state */
 static void page_pool_return_page(struct page_pool *pool, struct page *page)

From patchwork Wed Nov 30 22:07:44 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 05/24] page_pool: Start using netmem in allocation path.
Date: Wed, 30 Nov 2022 22:07:44 +0000
Message-Id: <20221130220803.3657490-6-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

Convert __page_pool_alloc_page_order() and __page_pool_alloc_pages_slow()
to use netmem internally.  This removes a couple of calls to
compound_head() that are hidden inside put_page().  Convert
trace_page_pool_state_hold(), page_pool_dma_map() and
page_pool_set_pp_info() to take a netmem argument.

Saves 83 bytes of text in __page_pool_alloc_page_order() and 98 in
__page_pool_alloc_pages_slow() for a total of 181 bytes.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/trace/events/page_pool.h | 14 +++++------
 net/core/page_pool.c             | 42 +++++++++++++++++---------------
 2 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
index 113aad0c9e5b..d1237a7ce481 100644
--- a/include/trace/events/page_pool.h
+++ b/include/trace/events/page_pool.h
@@ -67,26 +67,26 @@ TRACE_EVENT(page_pool_state_release,
 TRACE_EVENT(page_pool_state_hold,

	TP_PROTO(const struct page_pool *pool,
-		 const struct page *page, u32 hold),
+		 const struct netmem *nmem, u32 hold),

-	TP_ARGS(pool, page, hold),
+	TP_ARGS(pool, nmem, hold),

	TP_STRUCT__entry(
		__field(const struct page_pool *, pool)
-		__field(const struct page *, page)
+		__field(const struct netmem *, nmem)
		__field(u32, hold)
		__field(unsigned long, pfn)
	),

	TP_fast_assign(
		__entry->pool = pool;
-		__entry->page = page;
+		__entry->nmem = nmem;
		__entry->hold = hold;
-		__entry->pfn = page_to_pfn(page);
+		__entry->pfn = netmem_pfn(nmem);
	),

-	TP_printk("page_pool=%p page=%p pfn=0x%lx hold=%u",
-		  __entry->pool, __entry->page, __entry->pfn, __entry->hold)
+	TP_printk("page_pool=%p netmem=%p pfn=0x%lx hold=%u",
+		  __entry->pool, __entry->nmem, __entry->pfn, __entry->hold)
 );

 TRACE_EVENT(page_pool_update_nid,
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 437241aba5a7..4e985502c569 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -304,8 +304,9 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
					 pool->p.dma_dir);
 }

-static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
+static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem)
 {
+	struct page *page = netmem_page(nmem);
	dma_addr_t dma;

	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
@@ -328,12 +329,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 }

 static void page_pool_set_pp_info(struct page_pool *pool,
-				  struct page *page)
+				  struct netmem *nmem)
 {
-	page->pp = pool;
-	page->pp_magic |= PP_SIGNATURE;
+	nmem->pp = pool;
+	nmem->pp_magic |= PP_SIGNATURE;
	if (pool->p.init_callback)
-		pool->p.init_callback(page, pool->p.init_arg);
+		pool->p.init_callback(netmem_page(nmem), pool->p.init_arg);
 }

 static void page_pool_clear_pp_info(struct netmem *nmem)
@@ -345,26 +346,26 @@ static void page_pool_clear_pp_info(struct netmem *nmem)
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
						 gfp_t gfp)
 {
-	struct page *page;
+	struct netmem *nmem;

	gfp |= __GFP_COMP;
-	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
-	if (unlikely(!page))
+	nmem = page_netmem(alloc_pages_node(pool->p.nid, gfp, pool->p.order));
+	if (unlikely(!nmem))
		return NULL;

	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
-	    unlikely(!page_pool_dma_map(pool, page))) {
-		put_page(page);
+	    unlikely(!page_pool_dma_map(pool, nmem))) {
+		netmem_put(nmem);
		return NULL;
	}

	alloc_stat_inc(pool, slow_high_order);
-	page_pool_set_pp_info(pool, page);
+	page_pool_set_pp_info(pool, nmem);

	/* Track how many pages are held 'in-flight' */
	pool->pages_state_hold_cnt++;
-	trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt);
-	return page;
+	trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt);
+	return netmem_page(nmem);
 }

 /* slow path */
@@ -398,18 +399,18 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
	 * page element have not been (possibly) DMA mapped.
	 */
	for (i = 0; i < nr_pages; i++) {
-		page = pool->alloc.cache[i];
+		struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
		if ((pp_flags & PP_FLAG_DMA_MAP) &&
-		    unlikely(!page_pool_dma_map(pool, page))) {
-			put_page(page);
+		    unlikely(!page_pool_dma_map(pool, nmem))) {
+			netmem_put(nmem);
			continue;
		}

-		page_pool_set_pp_info(pool, page);
-		pool->alloc.cache[pool->alloc.count++] = page;
+		page_pool_set_pp_info(pool, nmem);
+		pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
		/* Track how many pages are held 'in-flight' */
		pool->pages_state_hold_cnt++;
-		trace_page_pool_state_hold(pool, page,
+		trace_page_pool_state_hold(pool, nmem,
					   pool->pages_state_hold_cnt);
	}

@@ -421,7 +422,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
		page = NULL;
	}

-	/* When page just alloc'ed is should/must have refcnt 1. */
+	/* When page just allocated it should have refcnt 1 (but may have
+	 * speculative references) */
	return page;
 }

From patchwork Wed Nov 30 22:07:45 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 06/24] page_pool: Convert page_pool_return_page() to
 page_pool_return_netmem()
Date: Wed, 30 Nov 2022 22:07:45 +0000
Message-Id: <20221130220803.3657490-7-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Removes a call to compound_head(), saving 464 bytes of kernel text as
page_pool_return_page() is inlined seven times.
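The conversion pattern used here can be modeled as follows (a userspace sketch; the struct layout, counter, and `_model` helper names are invented for illustration): the out-of-line worker takes the head-only netmem type, and the legacy page-based entry point becomes a trivial inline shim, so the head resolution happens exactly once at the type boundary rather than inside every put_page() on the path.

```c
#include <stddef.h>

/* Userspace model, not kernel code: counts how many head-page
 * lookups a call path performs. */
struct page {
	struct page *head;	/* self for head pages */
	int refcount;
};

struct netmem;

static int head_lookups;

static struct page *compound_head(struct page *page)
{
	head_lookups++;
	return page->head;
}

static struct netmem *page_netmem(struct page *page)
{
	return (struct netmem *)compound_head(page);
}

/* Out-of-line worker: everything past this point is head-only,
 * so no further compound_head() calls are needed. */
static void page_pool_return_netmem_model(struct netmem *nmem)
{
	((struct page *)nmem)->refcount--;
}

/* Legacy entry point reduced to an inline shim: the only head
 * lookup on the whole path happens here, at the type boundary. */
static inline void page_pool_return_page_model(struct page *page)
{
	page_pool_return_netmem_model(page_netmem(page));
}
```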
Signed-off-by: Matthew Wilcox (Oracle)
---
 net/core/page_pool.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 4e985502c569..b606952773a6 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -220,7 +220,13 @@ struct page_pool *page_pool_create(const struct page_pool_params *params)
 }
 EXPORT_SYMBOL(page_pool_create);

-static void page_pool_return_page(struct page_pool *pool, struct page *page);
+static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm);
+
+static inline
+void page_pool_return_page(struct page_pool *pool, struct page *page)
+{
+	page_pool_return_netmem(pool, page_netmem(page));
+}

 noinline
 static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
@@ -499,11 +505,11 @@ void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem)
 EXPORT_SYMBOL(page_pool_release_netmem);

 /* Return a page to the page allocator, cleaning up our state */
-static void page_pool_return_page(struct page_pool *pool, struct page *page)
+static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem)
 {
-	page_pool_release_page(pool, page);
+	page_pool_release_netmem(pool, nmem);

-	put_page(page);
+	netmem_put(nmem);
	/* An optimization would be to call __free_pages(page, pool->p.order)
	 * knowing page is not part of page-cache (thus avoiding a
	 * __page_cache_release() call).
From patchwork Wed Nov 30 22:07:46 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 07/24] page_pool: Convert __page_pool_put_page() to
 __page_pool_put_netmem()
Date: Wed, 30 Nov 2022 22:07:46 +0000
Message-Id: <20221130220803.3657490-8-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

Removes the call to compound_head() hidden in put_page() which saves
169 bytes of kernel text as __page_pool_put_page() is inlined twice.

Signed-off-by: Matthew Wilcox (Oracle)
---
 net/core/page_pool.c | 29 +++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b606952773a6..8f3f7cc5a2d5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -558,8 +558,8 @@ static bool page_pool_recycle_in_cache(struct page *page,
 * If the page refcnt != 1, then the page will be returned to memory
 * subsystem.
 */
-static __always_inline struct page *
-__page_pool_put_page(struct page_pool *pool, struct page *page,
+static __always_inline struct netmem *
+__page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem,
		     unsigned int dma_sync_size, bool allow_direct)
 {
	/* This allocator is optimized for the XDP mode that uses
@@ -571,19 +571,20 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
	 * page is NOT reusable when allocated when system is under
	 * some pressure. (page_is_pfmemalloc)
	 */
-	if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
-		/* Read barrier done in page_ref_count / READ_ONCE */
+	if (likely(netmem_ref_count(nmem) == 1 &&
+		   !netmem_is_pfmemalloc(nmem))) {
+		/* Read barrier done in netmem_ref_count / READ_ONCE */

		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-			page_pool_dma_sync_for_device(pool, page,
+			page_pool_dma_sync_for_device(pool, netmem_page(nmem),
						      dma_sync_size);

		if (allow_direct && in_serving_softirq() &&
-		    page_pool_recycle_in_cache(page, pool))
+		    page_pool_recycle_in_cache(netmem_page(nmem), pool))
			return NULL;

		/* Page found as candidate for recycling */
-		return page;
+		return nmem;
	}

	/* Fallback/non-XDP mode: API user have elevated refcnt.
	 *
@@ -599,13 +600,21 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
	 * will be invoking put_page.
	 */
	recycle_stat_inc(pool, released_refcnt);
-	/* Do not replace this with page_pool_return_page() */
-	page_pool_release_page(pool, page);
-	put_page(page);
+	/* Do not replace this with page_pool_return_netmem() */
+	page_pool_release_netmem(pool, nmem);
+	netmem_put(nmem);

	return NULL;
 }

+static __always_inline struct page *
+__page_pool_put_page(struct page_pool *pool, struct page *page,
+		     unsigned int dma_sync_size, bool allow_direct)
+{
+	return netmem_page(__page_pool_put_netmem(pool, page_netmem(page),
+					dma_sync_size, allow_direct));
+}
+
 void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
				  unsigned int dma_sync_size, bool allow_direct)
 {

From patchwork Wed Nov 30 22:07:47 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 08/24] page_pool: Convert pp_alloc_cache to contain netmem
Date: Wed, 30 Nov 2022 22:07:47 +0000
Message-Id: <20221130220803.3657490-9-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Change the type here from page to netmem.  It works out well to
convert page_pool_refill_alloc_cache() to return a netmem instead
of a page as part of this commit.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h |  2 +-
 net/core/page_pool.c    | 52 ++++++++++++++++++---------------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 453797f9cb90..88eb5be77b2c 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -151,7 +151,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
	u32 count;
-	struct page *cache[PP_ALLOC_CACHE_SIZE];
+	struct netmem *cache[PP_ALLOC_CACHE_SIZE];
 };

 struct page_pool_params {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 8f3f7cc5a2d5..c54217ce6b77 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page)
 }

 noinline
-static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
+static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 {
	struct ptr_ring *r = &pool->ring;
-	struct page *page;
+	struct netmem *nmem;
	int pref_nid; /* preferred NUMA node */

	/* Quicker fallback, avoid locks when ring is empty */
@@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
	/* Refill alloc array, but only if NUMA match */
	do {
-		page = __ptr_ring_consume(r);
-		if (unlikely(!page))
+		nmem = __ptr_ring_consume(r);
+		if (unlikely(!nmem))
			break;

-		if (likely(page_to_nid(page) == pref_nid)) {
-			pool->alloc.cache[pool->alloc.count++] = page;
+		if (likely(netmem_nid(nmem) == pref_nid)) {
+			pool->alloc.cache[pool->alloc.count++] = nmem;
		} else {
			/* NUMA mismatch;
			 * (1) release 1 page to page-allocator and
			 * (2) break out to fallthrough to alloc_pages_node.
			 * This limit stress on page buddy alloactor.
			 */
-			page_pool_return_page(pool, page);
+			page_pool_return_netmem(pool, nmem);
			alloc_stat_inc(pool, waive);
-			page = NULL;
+			nmem = NULL;
			break;
		}
	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);

	/* Return last page */
	if (likely(pool->alloc.count > 0)) {
-		page = pool->alloc.cache[--pool->alloc.count];
+		nmem = pool->alloc.cache[--pool->alloc.count];
		alloc_stat_inc(pool, refill);
	}

-	return page;
+	return nmem;
 }

 /* fast path */
 static struct page *__page_pool_get_cached(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;

	/* Caller MUST guarantee safe non-concurrent access, e.g. softirq */
	if (likely(pool->alloc.count)) {
		/* Fast-path */
-		page = pool->alloc.cache[--pool->alloc.count];
+		nmem = pool->alloc.cache[--pool->alloc.count];
		alloc_stat_inc(pool, fast);
	} else {
-		page = page_pool_refill_alloc_cache(pool);
+		nmem = page_pool_refill_alloc_cache(pool);
	}

-	return page;
+	return netmem_page(nmem);
 }

 static void page_pool_dma_sync_for_device(struct page_pool *pool,
@@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,

	/* Unnecessary as alloc cache is empty, but guarantees zero count */
	if (unlikely(pool->alloc.count > 0))
-		return pool->alloc.cache[--pool->alloc.count];
+		return netmem_page(pool->alloc.cache[--pool->alloc.count]);

	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);

	nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk,
-					       pool->alloc.cache);
+					       (struct page **)pool->alloc.cache);
	if (unlikely(!nr_pages))
		return NULL;

@@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
	 * page element have not been (possibly) DMA mapped.
	 */
	for (i = 0; i < nr_pages; i++) {
-		struct netmem *nmem = page_netmem(pool->alloc.cache[i]);
+		struct netmem *nmem = pool->alloc.cache[i];
		if ((pp_flags & PP_FLAG_DMA_MAP) &&
		    unlikely(!page_pool_dma_map(pool, nmem))) {
			netmem_put(nmem);
@@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
		}

		page_pool_set_pp_info(pool, nmem);
-		pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem);
+		pool->alloc.cache[pool->alloc.count++] = nmem;
		/* Track how many pages are held 'in-flight' */
		pool->pages_state_hold_cnt++;
		trace_page_pool_state_hold(pool, nmem,
@@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,

	/* Return last page */
	if (likely(pool->alloc.count > 0)) {
-		page = pool->alloc.cache[--pool->alloc.count];
+		page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
		alloc_stat_inc(pool, slow);
	} else {
		page = NULL;
@@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
	}

	/* Caller MUST have verified/know (page_ref_count(page) == 1) */
-	pool->alloc.cache[pool->alloc.count++] = page;
+	pool->alloc.cache[pool->alloc.count++] = page_netmem(page);
	recycle_stat_inc(pool, cached);
	return true;
 }
@@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool)

 static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;

	if (pool->destroy_cnt)
		return;
@@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
	 * call concurrently.
	 */
	while (pool->alloc.count) {
-		page = pool->alloc.cache[--pool->alloc.count];
-		page_pool_return_page(pool, page);
+		nmem = pool->alloc.cache[--pool->alloc.count];
+		page_pool_return_netmem(pool, nmem);
	}
 }

@@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy);
 /* Caller must provide appropriate safe context, e.g. NAPI. */
 void page_pool_update_nid(struct page_pool *pool, int new_nid)
 {
-	struct page *page;
+	struct netmem *nmem;

	trace_page_pool_update_nid(pool, new_nid);
	pool->p.nid = new_nid;

	/* Flush pool alloc cache, as refill will check NUMA node */
	while (pool->alloc.count) {
-		page = pool->alloc.cache[--pool->alloc.count];
-		page_pool_return_page(pool, page);
+		nmem = pool->alloc.cache[--pool->alloc.count];
+		page_pool_return_netmem(pool, nmem);
	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);

From patchwork Wed Nov 30 22:07:48 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 09/24] page_pool: Convert page_pool_defrag_page() to
 page_pool_defrag_netmem()
Date: Wed, 30 Nov 2022 22:07:48 +0000
Message-Id: <20221130220803.3657490-10-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=F5jHy32ogt9755aGfnC84dmJ1NGdPkiedk+cdsHjX5I=; b=MJxH9M3aY8KZcgq+Bs8RuLnk37qDRlWHbftr/vp0IdCSgYPaNSSeI3GU2fVufrtsmmSwga 0zUh3gCW4cqzU5ZoCysDdFTSddQZLA8oZkqkfcxqgfnJLzhKePtthifLgaezdPiu1itJkz Y2f8iGiQdqapnC++cA+UIU/q+QIX584= X-Rspamd-Queue-Id: 77901A0015 X-Rspam-User: Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=oJ5Hn8sn; dmarc=none; spf=none (imf25.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspamd-Server: rspam09 X-Stat-Signature: 1o9w68ri5m864rwy3oae91qeuhpkodqm X-HE-Tag: 1669846104-592188 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add a page_pool_defrag_page() wrapper. Signed-off-by: Matthew Wilcox (Oracle) --- include/net/page_pool.h | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 88eb5be77b2c..bfb77b75f333 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -371,7 +371,7 @@ static inline void page_pool_fragment_page(struct page *page, long nr) atomic_long_set(&page->pp_frag_count, nr); } -static inline long page_pool_defrag_page(struct page *page, long nr) +static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr) { long ret; @@ -384,14 +384,19 @@ static inline long page_pool_defrag_page(struct page *page, long nr) * especially when dealing with a page that may be partitioned * into only 2 or 3 pieces. 
 	 */
-	if (atomic_long_read(&page->pp_frag_count) == nr)
+	if (atomic_long_read(&nmem->pp_frag_count) == nr)
 		return 0;
 
-	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
+	ret = atomic_long_sub_return(nr, &nmem->pp_frag_count);
 	WARN_ON(ret < 0);
 	return ret;
 }
 
+static inline long page_pool_defrag_page(struct page *page, long nr)
+{
+	return page_pool_defrag_netmem(page_netmem(page), nr);
+}
+
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
 					  struct page *page)
 {

From patchwork Wed Nov 30 22:07:49 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 10/24] page_pool: Convert page_pool_put_defragged_page() to netmem
Date: Wed, 30 Nov 2022 22:07:49 +0000
Message-Id: <20221130220803.3657490-11-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Also convert page_pool_is_last_frag(), page_pool_put_page(),
page_pool_recycle_in_ring() and use netmem in page_pool_put_page_bulk().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 23 ++++++++++++++++-------
 net/core/page_pool.c    | 29 +++++++++++++++--------------
 2 files changed, 31 insertions(+), 21 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index bfb77b75f333..db617073025e 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -362,7 +362,7 @@ static inline void page_pool_release_page(struct page_pool *pool,
 	page_pool_release_netmem(pool, page_netmem(page));
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				  unsigned int dma_sync_size,
 				  bool allow_direct);
 
@@ -398,15 +398,15 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 }
 
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
-					  struct page *page)
+					  struct netmem *nmem)
 {
 	/* If fragments aren't enabled or count is 0 we were the last user */
 	return !(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
-	       (page_pool_defrag_page(page, 1) == 0);
+	       (page_pool_defrag_netmem(nmem, 1) == 0);
 }
 
-static inline void page_pool_put_page(struct page_pool *pool,
-				      struct page *page,
+static inline void page_pool_put_netmem(struct page_pool *pool,
+					struct netmem *nmem,
 					unsigned int dma_sync_size,
 					bool allow_direct)
 {
@@ -414,13 +414,22 @@ static inline void page_pool_put_page(struct page_pool *pool,
 	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
 	 */
#ifdef CONFIG_PAGE_POOL
-	if (!page_pool_is_last_frag(pool, page))
+	if (!page_pool_is_last_frag(pool, nmem))
 		return;
 
-	page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
+	page_pool_put_defragged_netmem(pool, nmem, dma_sync_size, allow_direct);
#endif
 }
 
+static inline void page_pool_put_page(struct page_pool *pool,
+				      struct page *page,
+				      unsigned int dma_sync_size,
+				      bool allow_direct)
+{
+	page_pool_put_netmem(pool, page_netmem(page), dma_sync_size,
+			     allow_direct);
+}
+
 /* Same as above but will try to sync the entire area pool->max_len */
 static inline void page_pool_put_full_page(struct page_pool *pool,
 					   struct page *page, bool allow_direct)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c54217ce6b77..e727a74504c2 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -516,14 +516,15 @@ static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem)
 	 */
 }
 
-static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
+static bool page_pool_recycle_in_ring(struct page_pool *pool,
+				      struct netmem *nmem)
 {
 	int ret;
 	/* BH protection not needed if current is serving softirq */
 	if (in_serving_softirq())
-		ret = ptr_ring_produce(&pool->ring, page);
+		ret = ptr_ring_produce(&pool->ring, nmem);
 	else
-		ret = ptr_ring_produce_bh(&pool->ring, page);
+		ret = ptr_ring_produce_bh(&pool->ring, nmem);
 
 	if (!ret) {
 		recycle_stat_inc(pool, ring);
@@ -615,17 +616,17 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 					 dma_sync_size, allow_direct));
 }
 
-void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
+void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				  unsigned int dma_sync_size, bool allow_direct)
 {
-	page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct);
-	if (page && !page_pool_recycle_in_ring(pool, page)) {
+	nmem = __page_pool_put_netmem(pool, nmem, dma_sync_size, allow_direct);
+	if (nmem && !page_pool_recycle_in_ring(pool, nmem)) {
 		/* Cache full, fallback to free pages */
 		recycle_stat_inc(pool, ring_full);
-		page_pool_return_page(pool, page);
+		page_pool_return_netmem(pool, nmem);
 	}
 }
-EXPORT_SYMBOL(page_pool_put_defragged_page);
+EXPORT_SYMBOL(page_pool_put_defragged_netmem);
 
 /* Caller must not use data area after call, as this function overwrites it */
 void page_pool_put_page_bulk(struct page_pool *pool, void **data,
@@ -634,16 +635,16 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	int i, bulk_len = 0;
 
 	for (i = 0; i < count; i++) {
-		struct page *page = virt_to_head_page(data[i]);
+		struct netmem *nmem = virt_to_netmem(data[i]);
 
 		/* It is not the last user for the page frag case */
-		if (!page_pool_is_last_frag(pool, page))
+		if (!page_pool_is_last_frag(pool, nmem))
 			continue;
 
-		page = __page_pool_put_page(pool, page, -1, false);
+		nmem = __page_pool_put_netmem(pool, nmem, -1, false);
 		/* Approved for bulk recycling in ptr_ring cache */
-		if (page)
-			data[bulk_len++] = page;
+		if (nmem)
+			data[bulk_len++] = nmem;
 	}
 
 	if (unlikely(!bulk_len))
@@ -669,7 +670,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	 * since put_page() with refcnt == 1 can be an expensive operation
 	 */
 	for (; i < bulk_len; i++)
-		page_pool_return_page(pool, data[i]);
+		page_pool_return_netmem(pool, data[i]);
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);

From patchwork Wed Nov 30 22:07:50 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 11/24] page_pool: Convert page_pool_empty_ring() to use netmem
Date: Wed, 30 Nov 2022 22:07:50 +0000
Message-Id: <20221130220803.3657490-12-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

Retrieve a netmem from the ptr_ring instead of a page.
Signed-off-by: Matthew Wilcox (Oracle)
---
 net/core/page_pool.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e727a74504c2..7a77e3220205 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -755,16 +755,16 @@ EXPORT_SYMBOL(page_pool_alloc_frag);
 
 static void page_pool_empty_ring(struct page_pool *pool)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	/* Empty recycle ring */
-	while ((page = ptr_ring_consume_bh(&pool->ring))) {
+	while ((nmem = ptr_ring_consume_bh(&pool->ring)) != NULL) {
 		/* Verify the refcnt invariant of cached pages */
-		if (!(page_ref_count(page) == 1))
+		if (!(netmem_ref_count(nmem) == 1))
 			pr_crit("%s() page_pool refcnt %d violation\n",
-				__func__, page_ref_count(page));
+				__func__, netmem_ref_count(nmem));
 
-		page_pool_return_page(pool, page);
+		page_pool_return_netmem(pool, nmem);
 	}
 }

From patchwork Wed Nov 30 22:07:51 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 12/24] page_pool: Convert page_pool_alloc_pages() to page_pool_alloc_netmem()
Date: Wed, 30 Nov 2022 22:07:51 +0000
Message-Id: <20221130220803.3657490-13-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Add a page_pool_alloc_pages() compatibility wrapper. Also convert
__page_pool_alloc_pages_slow() to __page_pool_alloc_netmem_slow() and
__page_pool_alloc_page_order() to __page_pool_alloc_netmem().
__page_pool_get_cached() is converted to return a netmem.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h |  8 +++++++-
 net/core/page_pool.c    | 39 +++++++++++++++++++--------------------
 2 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index db617073025e..4c730591de46 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -292,7 +292,13 @@ struct page_pool {
 	u64 destroy_cnt;
 };
 
-struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
+struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp);
+
+static inline
+struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
+{
+	return netmem_page(page_pool_alloc_netmem(pool, gfp));
+}
 
 static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7a77e3220205..efe9f1471caa 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -282,7 +282,7 @@ static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 }
 
 /* fast path */
-static struct page *__page_pool_get_cached(struct page_pool *pool)
+static struct netmem *__page_pool_get_cached(struct page_pool *pool)
 {
 	struct netmem *nmem;
 
@@ -295,7 +295,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
 		nmem = page_pool_refill_alloc_cache(pool);
 	}
 
-	return netmem_page(nmem);
+	return nmem;
 }
 
 static void page_pool_dma_sync_for_device(struct page_pool *pool,
@@ -349,8 +349,8 @@ static void page_pool_clear_pp_info(struct netmem *nmem)
 	nmem->pp = NULL;
 }
 
-static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
-						 gfp_t gfp)
+static
+struct netmem *__page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
 {
 	struct netmem *nmem;
 
@@ -371,27 +371,27 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt);
-	return netmem_page(nmem);
+	return nmem;
 }
 
 /* slow path */
 noinline
-static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
+static struct netmem *__page_pool_alloc_netmem_slow(struct page_pool *pool,
 						 gfp_t gfp)
 {
 	const int bulk = PP_ALLOC_CACHE_REFILL;
 	unsigned int pp_flags = pool->p.flags;
 	unsigned int pp_order = pool->p.order;
-	struct page *page;
+	struct netmem *nmem;
 	int i, nr_pages;
 
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
-		return __page_pool_alloc_page_order(pool, gfp);
+		return __page_pool_alloc_netmem(pool, gfp);
 
 	/* Unnecessary as alloc cache is empty, but guarantees zero count */
 	if (unlikely(pool->alloc.count > 0))
-		return netmem_page(pool->alloc.cache[--pool->alloc.count]);
+		return pool->alloc.cache[--pool->alloc.count];
 
 	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
 	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
@@ -422,34 +422,33 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
-		page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
+		nmem = pool->alloc.cache[--pool->alloc.count];
 		alloc_stat_inc(pool, slow);
 	} else {
-		page = NULL;
+		nmem = NULL;
 	}
 
 	/* When page just allocated it should have refcnt 1 (but may have
 	 * speculative references)
 	 */
-	return page;
+	return nmem;
 }
 
 /* For using page_pool replace: alloc_pages() API calls, but provide
 * synchronization guarantee for allocation side.
 */
-struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
+struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	/* Fast-path: Get a page from cache */
-	page = __page_pool_get_cached(pool);
-	if (page)
-		return page;
+	nmem = __page_pool_get_cached(pool);
+	if (nmem)
+		return nmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	page = __page_pool_alloc_pages_slow(pool, gfp);
-	return page;
+	return __page_pool_alloc_netmem_slow(pool, gfp);
}
-EXPORT_SYMBOL(page_pool_alloc_pages);
+EXPORT_SYMBOL(page_pool_alloc_netmem);
 
 /* Calculate distance between two u32 values, valid if distance is below 2^(31)
 * https://en.wikipedia.org/wiki/Serial_number_arithmetic#General_Solution

From patchwork Wed Nov 30 22:07:52 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 13/24] page_pool: Convert page_pool_dma_sync_for_device() to take a netmem
Date: Wed, 30 Nov 2022 22:07:52 +0000
Message-Id: <20221130220803.3657490-14-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Change all callers.
Signed-off-by: Matthew Wilcox (Oracle)
---
 net/core/page_pool.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index efe9f1471caa..9ef65b383b40 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -299,10 +299,10 @@ static struct netmem *__page_pool_get_cached(struct page_pool *pool)
 }
 
 static void page_pool_dma_sync_for_device(struct page_pool *pool,
-					  struct page *page,
+					  struct netmem *nmem,
 					  unsigned int dma_sync_size)
 {
-	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+	dma_addr_t dma_addr = netmem_get_dma_addr(nmem);
 
 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
 	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
@@ -329,7 +329,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem)
 	page_pool_set_dma_addr(page, dma);
 
 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+		page_pool_dma_sync_for_device(pool, nmem, pool->p.max_len);
 
 	return true;
 }
@@ -576,7 +576,7 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem,
 
 	/* Read barrier done in netmem_ref_count / READ_ONCE */
 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, netmem_page(nmem),
+		page_pool_dma_sync_for_device(pool, nmem,
 					      dma_sync_size);
 
 	if (allow_direct && in_serving_softirq() &&
@@ -676,6 +676,7 @@ EXPORT_SYMBOL(page_pool_put_page_bulk);
 static struct page *page_pool_drain_frag(struct page_pool *pool,
 					 struct page *page)
 {
+	struct netmem *nmem = page_netmem(page);
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
@@ -684,7 +685,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-			page_pool_dma_sync_for_device(pool, nmem, -1);
+			page_pool_dma_sync_for_device(pool, nmem, -1);
 
 		return page;
 	}

From patchwork Wed Nov 30 22:07:53 2022

From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 14/24] page_pool: Convert page_pool_recycle_in_cache() to netmem
Date: Wed, 30 Nov 2022 22:07:53 +0000
Message-Id: <20221130220803.3657490-15-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
header.b=ZKBe7wFX; dmarc=none; spf=none (imf11.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org X-Rspamd-Server: rspam12 X-Rspam-User: X-Stat-Signature: zst8o795siqe9ge9z1hhph811jpu371k X-HE-Tag: 1669846087-848586 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Removes a few casts. Signed-off-by: Matthew Wilcox (Oracle) --- net/core/page_pool.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 9ef65b383b40..b34d1695698a 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -538,7 +538,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, * * Caller must provide appropriate safe context. */ -static bool page_pool_recycle_in_cache(struct page *page, +static bool page_pool_recycle_in_cache(struct netmem *nmem, struct page_pool *pool) { if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) { @@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page, } /* Caller MUST have verified/know (page_ref_count(page) == 1) */ - pool->alloc.cache[pool->alloc.count++] = page_netmem(page); + pool->alloc.cache[pool->alloc.count++] = nmem; recycle_stat_inc(pool, cached); return true; } @@ -580,7 +580,7 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem, dma_sync_size); if (allow_direct && in_serving_softirq() && - page_pool_recycle_in_cache(netmem_page(nmem), pool)) + page_pool_recycle_in_cache(nmem, pool)) return NULL; /* Page found as candidate for recycling */ From patchwork Wed Nov 30 22:07:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13060520 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19C7DC47088 for ; Wed, 30 Nov 2022 22:08:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C24496B0089; Wed, 30 Nov 2022 17:08:12 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B85856B008A; Wed, 30 Nov 2022 17:08:12 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A00EE6B008C; Wed, 30 Nov 2022 17:08:12 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 776766B0089 for ; Wed, 30 Nov 2022 17:08:12 -0500 (EST) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 4BEA5AB98D for ; Wed, 30 Nov 2022 22:08:12 +0000 (UTC) X-FDA: 80191497624.30.42159C6 Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf04.hostedemail.com (Postfix) with ESMTP id 0263B4000D for ; Wed, 30 Nov 2022 22:08:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=ZJ5hBzYhZmqX7EWE+UyUU89Wo31DpmEICSokzO+fPkY=; b=n8RFCcCMXl5iM1uUVA6zf1zyks PyjgurKWyEzmOKEuRkha9iZg+AkoVAgEaV2bs0NRWEBUoRWl/UJFAj6U8BAWGw/+IQhCW6BEUl4El diuNbQlG5FXcLmez1LuOhq52VbvGsihFf45CxJ48Qv1n7BcdG13dprT3dn9tqLazYgMuOxQ8fBnPs WpHJzZs5YpwsrepKxGNQA8okq7zQIAvQq4WITEQzrLsyUDHFDXut97myVtF6xS1F1rp7HxdL0IlQp Pj3dnG93vIgZJ8mdvNlnM0jCEF/jVasWzxqvCcC5OquwtEsJbAyAYuyFV8gMLggCgHn/Ni8vqZvsk cRBAMgqA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1p0VFN-00FLVi-SC; Wed, 30 Nov 2022 22:08:05 +0000 From: 
From patchwork Wed Nov 30 22:07:54 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 15/24] page_pool: Remove page_pool_defrag_page()
Date: Wed, 30 Nov 2022 22:07:54 +0000
Message-Id: <20221130220803.3657490-16-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
This wrapper is no longer used.

Signed-off-by: Matthew Wilcox (Oracle)
---
 net/core/page_pool.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b34d1695698a..c89a13393a23 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -607,14 +607,6 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem,
 	return NULL;
 }
 
-static __always_inline struct page *
-__page_pool_put_page(struct page_pool *pool, struct page *page,
-		     unsigned int dma_sync_size, bool allow_direct)
-{
-	return netmem_page(__page_pool_put_netmem(pool, page_netmem(page),
-					dma_sync_size, allow_direct));
-}
-
 void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				    unsigned int dma_sync_size, bool allow_direct)
 {
From patchwork Wed Nov 30 22:07:55 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 16/24] page_pool: Use netmem in page_pool_drain_frag()
Date: Wed, 30 Nov 2022 22:07:55 +0000
Message-Id: <20221130220803.3657490-17-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
We're not quite ready to change the API of page_pool_drain_frag(),
but we can remove the use of several wrappers by using the netmem
throughout.
Signed-off-by: Matthew Wilcox (Oracle)
---
 net/core/page_pool.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c89a13393a23..39f09d011a46 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -672,17 +672,17 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
-	if (likely(page_pool_defrag_page(page, drain_count)))
+	if (likely(page_pool_defrag_netmem(nmem, drain_count)))
 		return NULL;
 
-	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
+	if (netmem_ref_count(nmem) == 1 && !netmem_is_pfmemalloc(nmem)) {
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 			page_pool_dma_sync_for_device(pool, nmem, -1);
 
 		return page;
 	}
 
-	page_pool_return_page(pool, page);
+	page_pool_return_netmem(pool, nmem);
 	return NULL;
 }
From patchwork Wed Nov 30 22:07:56 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 17/24] page_pool: Convert page_pool_return_skb_page() to use netmem
Date: Wed, 30 Nov 2022 22:07:56 +0000
Message-Id: <20221130220803.3657490-18-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
This function accesses the pagepool members of struct page directly,
so it needs to become netmem. Add page_pool_put_full_netmem().
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h |  8 +++++++-
 net/core/page_pool.c    | 13 ++++++-------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 4c730591de46..701f94947e8a 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -437,10 +437,16 @@ static inline void page_pool_put_page(struct page_pool *pool,
 }
 
 /* Same as above but will try to sync the entire area pool->max_len */
+static inline void page_pool_put_full_netmem(struct page_pool *pool,
+		struct netmem *nmem, bool allow_direct)
+{
+	page_pool_put_netmem(pool, nmem, -1, allow_direct);
+}
+
 static inline void page_pool_put_full_page(struct page_pool *pool,
 					   struct page *page, bool allow_direct)
 {
-	page_pool_put_page(pool, page, -1, allow_direct);
+	page_pool_put_full_netmem(pool, page_netmem(page), allow_direct);
 }
 
 /* Same as above but the caller must guarantee safe context. e.g NAPI */
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 39f09d011a46..b4540d242081 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -886,28 +886,27 @@ EXPORT_SYMBOL(page_pool_update_nid);
 
 bool page_pool_return_skb_page(struct page *page)
 {
+	struct netmem *nmem = page_netmem(compound_head(page));
 	struct page_pool *pp;
 
-	page = compound_head(page);
-
-	/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
+	/* nmem->pp_magic is OR'ed with PP_SIGNATURE after the allocation
 	 * in order to preserve any existing bits, such as bit 0 for the
 	 * head page of compound page and bit 1 for pfmemalloc page, so
 	 * mask those bits for freeing side when doing below checking,
-	 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
+	 * and netmem_is_pfmemalloc() is checked in __page_pool_put_netmem()
 	 * to avoid recycling the pfmemalloc page.
 	 */
-	if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
+	if (unlikely((nmem->pp_magic & ~0x3UL) != PP_SIGNATURE))
 		return false;
 
-	pp = page->pp;
+	pp = nmem->pp;
 
 	/* Driver set this to memory recycling info. Reset it on recycle.
 	 * This will *not* work for NIC using a split-page memory model.
 	 * The page will be returned to the pool here regardless of the
 	 * 'flipped' fragment being in use or not.
 	 */
-	page_pool_put_full_page(pp, page, false);
+	page_pool_put_full_netmem(pp, nmem, false);
 
 	return true;
 }
From patchwork Wed Nov 30 22:07:57 2022
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 18/24] page_pool: Convert frag_page to frag_nmem
Date: Wed, 30 Nov 2022 22:07:57 +0000
Message-Id: <20221130220803.3657490-19-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Remove page_pool_defrag_page() and page_pool_return_page() as they
have no more callers.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 17 ++++++---------
 net/core/page_pool.c    | 47 ++++++++++++++++++-----------------
 2 files changed, 26 insertions(+), 38 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 701f94947e8a..ce1049a03f2d 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -240,7 +240,7 @@ struct page_pool {
 	u32 pages_state_hold_cnt;
 
 	unsigned int frag_offset;
-	struct page *frag_page;
+	struct netmem *frag_nmem;
 	long frag_users;
 
 #ifdef CONFIG_PAGE_POOL_STATS
@@ -307,8 +307,8 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 	return page_pool_alloc_pages(pool, gfp);
 }
 
-struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset,
-				  unsigned int size, gfp_t gfp);
+struct netmem *page_pool_alloc_frag(struct page_pool *pool,
+		unsigned int *offset, unsigned int size, gfp_t gfp);
 
 static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 						    unsigned int *offset,
@@ -316,7 +316,7 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 {
 	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
 
-	return page_pool_alloc_frag(pool, offset, size, gfp);
+	return netmem_page(page_pool_alloc_frag(pool, offset, size, gfp));
 }
 
 /* get the stored dma direction. A driver might decide to treat this locally and
@@ -372,9 +372,9 @@ void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
 				    unsigned int dma_sync_size,
 				    bool allow_direct);
 
-static inline void page_pool_fragment_page(struct page *page, long nr)
+static inline void page_pool_fragment_netmem(struct netmem *nmem, long nr)
 {
-	atomic_long_set(&page->pp_frag_count, nr);
+	atomic_long_set(&nmem->pp_frag_count, nr);
 }
 
 static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr)
@@ -398,11 +398,6 @@ static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr)
 	return ret;
 }
 
-static inline long page_pool_defrag_page(struct page *page, long nr)
-{
-	return page_pool_defrag_netmem(page_netmem(page), nr);
-}
-
 static inline bool page_pool_is_last_frag(struct page_pool *pool,
 					  struct netmem *nmem)
 {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index b4540d242081..5be78ec93af8 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -222,12 +222,6 @@ EXPORT_SYMBOL(page_pool_create);
 
 static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm);
 
-static inline
-void page_pool_return_page(struct page_pool *pool, struct page *page)
-{
-	page_pool_return_netmem(pool, page_netmem(page));
-}
-
 noinline
 static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 {
@@ -665,10 +659,9 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
 
-static struct page *page_pool_drain_frag(struct page_pool *pool,
-					 struct page *page)
+static struct netmem *page_pool_drain_frag(struct page_pool *pool,
+					   struct netmem *nmem)
 {
-	struct netmem *nmem = page_netmem(page);
 	long drain_count = BIAS_MAX - pool->frag_users;
 
 	/* Some user is still using the page frag */
@@ -679,7 +672,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 			page_pool_dma_sync_for_device(pool, nmem, -1);
 
-		return page;
+		return nmem;
 	}
 
 	page_pool_return_netmem(pool, nmem);
@@ -689,22 +682,22 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 static void page_pool_free_frag(struct page_pool *pool)
 {
 	long drain_count = BIAS_MAX - pool->frag_users;
-	struct page *page = pool->frag_page;
+	struct netmem *nmem = pool->frag_nmem;
 
-	pool->frag_page = NULL;
+	pool->frag_nmem = NULL;
 
-	if (!page || page_pool_defrag_page(page, drain_count))
+	if (!nmem || page_pool_defrag_netmem(nmem, drain_count))
 		return;
 
-	page_pool_return_page(pool, page);
+	page_pool_return_netmem(pool, nmem);
 }
 
-struct page *page_pool_alloc_frag(struct page_pool *pool,
+struct netmem *page_pool_alloc_frag(struct page_pool *pool,
 				  unsigned int *offset,
 				  unsigned int size, gfp_t gfp)
 {
 	unsigned int max_size = PAGE_SIZE << pool->p.order;
-	struct page *page = pool->frag_page;
+	struct netmem *nmem = pool->frag_nmem;
 
 	if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
 		    size > max_size))
@@ -713,35 +706,35 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
 	size = ALIGN(size, dma_get_cache_alignment());
 	*offset = pool->frag_offset;
 
-	if (page && *offset + size > max_size) {
-		page = page_pool_drain_frag(pool, page);
-		if (page) {
+	if (nmem && *offset + size > max_size) {
+		nmem = page_pool_drain_frag(pool, nmem);
+		if (nmem) {
 			alloc_stat_inc(pool, fast);
 			goto frag_reset;
 		}
 	}
 
-	if (!page) {
-		page = page_pool_alloc_pages(pool, gfp);
-		if (unlikely(!page)) {
-			pool->frag_page = NULL;
+	if (!nmem) {
+		nmem = page_pool_alloc_netmem(pool, gfp);
+		if (unlikely(!nmem)) {
+			pool->frag_nmem = NULL;
 			return NULL;
 		}
 
-		pool->frag_page = page;
+		pool->frag_nmem = nmem;
 
 frag_reset:
 		pool->frag_users = 1;
 		*offset = 0;
 		pool->frag_offset = size;
-		page_pool_fragment_page(page, BIAS_MAX);
-		return page;
+		page_pool_fragment_netmem(nmem, BIAS_MAX);
+		return nmem;
 	}
 
 	pool->frag_users++;
 	pool->frag_offset = *offset + size;
 	alloc_stat_inc(pool, fast);
-	return page;
+	return nmem;
 }
 EXPORT_SYMBOL(page_pool_alloc_frag);
NNXYRNPyde4sGEFDn+2S2DRn0vZ6nlB4S+OVzEFaeq5WjyxLJxQ4W2TUAEe/PjGMZvoy4CP2RVNkM +zy0CjkbLQxtwOXcerQ0HR7/G0oTpjQWbeuQi9ZxwlMIsh2IDQFD6Gsg0PBz9LG097G5WRKvOc+AC dTq4CSkA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1p0VFO-00FLWA-6g; Wed, 30 Nov 2022 22:08:06 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 19/24] xdp: Convert to netmem Date: Wed, 30 Nov 2022 22:07:58 +0000 Message-Id: <20221130220803.3657490-20-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20221130220803.3657490-1-willy@infradead.org> References: <20221130220803.3657490-1-willy@infradead.org> MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1669846085; a=rsa-sha256; cv=none; b=rQscAea+K3Fhy9730KwZTB/sPX//7wQy4458mKl77P/dzMHcgQjcsx2KZtBSAUe2kCVRIH hmVwNT5ifdDo1vFQ9wlTDso0Q9VysHjlYGmImCdud4Mw4LfLIhcBN+/2/PJSuKRdF0yNNF kgxoWSLtgOHG0iLpvZWxr5XT4OAp7xo= ARC-Authentication-Results: i=1; imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=CTMRjQjV; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1669846085; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=v8XZwj+qrPdVX2joAsiJ+16x3YDJToWBKe9JDnAu2v0=; b=2vGvdOJ55+irVHYTOLRYq2wD5U72287s8z5xYbS/izem3wjqnYFQuQ7fh81+pk9pUvrSlq uxGxKN3TcRNqhdxkvHbnuoEAGHc/c4284wWGJ8cLHJKguvVguOzu3c68l3nSu9N8/PovPt atKO/8oWovNKdV7EvafOMlkPYGEU66s= X-Stat-Signature: 8qpnkoxxrsz4sodgkm5xg4thwgf34g4u Authentication-Results: 
imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=CTMRjQjV; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 76A58C0008 X-Rspam-User: X-HE-Tag: 1669846085-592683 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We dereference the 'pp' member of struct page, so we must use a netmem here. Signed-off-by: Matthew Wilcox (Oracle) --- net/core/xdp.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/net/core/xdp.c b/net/core/xdp.c index 844c9d99dc0e..7520c3b27356 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -375,17 +375,18 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model); void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, struct xdp_buff *xdp) { + struct netmem *nmem; struct page *page; switch (mem->type) { case MEM_TYPE_PAGE_POOL: - page = virt_to_head_page(data); + nmem = virt_to_netmem(data); if (napi_direct && xdp_return_frame_no_direct()) napi_direct = false; - /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) + /* No need to check ((nmem->pp_magic & ~0x3UL) == PP_SIGNATURE) * as mem->type knows this a page_pool page */ - page_pool_put_full_page(page->pp, page, napi_direct); + page_pool_put_full_netmem(nmem->pp, nmem, napi_direct); break; case MEM_TYPE_PAGE_SHARED: page_frag_free(data); From patchwork Wed Nov 30 22:07:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13060506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 
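An aside on what the commit message is getting at: because page-pool state such as `pp` now lives behind a distinct `struct netmem` type, code can only reach it after an explicit conversion, and passing a bare `struct page *` to a netmem API is a compile error. The following is a standalone userspace sketch of that idea, not the kernel code; the struct layouts and helper names here are illustrative stand-ins.

```c
#include <assert.h>
#include <stddef.h>

/* Mock types standing in for the kernel's struct page / struct netmem.
 * In the real series, struct netmem overlays struct page and the casts
 * are validated by static_asserts on field offsets.
 */
struct page_pool;

struct page {
	unsigned long flags;
	unsigned long private;
};

struct netmem {
	unsigned long flags;
	unsigned long pp_magic;
	struct page_pool *pp;
};

/* Explicit conversion point, analogous to page_netmem()/virt_to_netmem():
 * the only way a page enters the netmem world.
 */
static inline struct netmem *page_netmem(struct page *page)
{
	return (struct netmem *)page;
}

/* Pool-private state is only reachable through the netmem type, so the
 * compiler rejects any caller still holding a plain struct page *.
 */
static inline struct page_pool *netmem_pool(const struct netmem *nmem)
{
	return nmem->pp;
}
```

The payoff is that a mistake like `netmem_pool(&some_page)` fails at build time instead of silently reading a field that only page-pool memory possesses.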
From patchwork Wed Nov 30 22:07:59 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13060506
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 20/24] mm: Remove page pool members from struct page
Date: Wed, 30 Nov 2022 22:07:59 +0000
Message-Id: <20221130220803.3657490-21-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>

These are now split out into their own netmem struct.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm_types.h | 22 ----------------------
 include/net/page_pool.h  |  4 ----
 2 files changed, 26 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 1ad1ef3a1288..6999af135f1d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -113,28 +113,6 @@ struct page {
 			 */
 			unsigned long private;
 		};
-		struct {	/* page_pool used by netstack */
-			/**
-			 * @pp_magic: magic value to avoid recycling non
-			 * page_pool allocated pages.
-			 */
-			unsigned long pp_magic;
-			struct page_pool *pp;
-			unsigned long _pp_mapping_pad;
-			unsigned long dma_addr;
-			union {
-				/**
-				 * dma_addr_upper: might require a 64-bit
-				 * value on 32-bit architectures.
-				 */
-				unsigned long dma_addr_upper;
-				/**
-				 * For frag page support, not supported in
-				 * 32-bit architectures with 64-bit DMA.
-				 */
-				atomic_long_t pp_frag_count;
-			};
-		};
 		struct {	/* Tail pages of compound page */
 			unsigned long compound_head;	/* Bit zero is set */
diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ce1049a03f2d..222eedc39140 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -81,11 +81,7 @@ struct netmem {
 	static_assert(offsetof(struct page, pg) == offsetof(struct netmem, nm))
 NETMEM_MATCH(flags, flags);
 NETMEM_MATCH(lru, pp_magic);
-NETMEM_MATCH(pp, pp);
 NETMEM_MATCH(mapping, _pp_mapping_pad);
-NETMEM_MATCH(dma_addr, dma_addr);
-NETMEM_MATCH(dma_addr_upper, dma_addr_upper);
-NETMEM_MATCH(pp_frag_count, pp_frag_count);
 NETMEM_MATCH(_mapcount, _mapcount);
 NETMEM_MATCH(_refcount, _refcount);
 #undef NETMEM_MATCH
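The NETMEM_MATCH() lines in the hunk above are the safety net that makes removing these fields from struct page tolerable: while netmem still overlays struct page, each corresponding pair of field offsets is pinned together at compile time. Here is a self-contained userspace sketch of that pattern; the two structs and the macro name are stand-ins, not the real struct page / struct netmem.

```c
#include <assert.h>
#include <stddef.h>

/* Two views of the same memory.  If either layout drifts, the build
 * breaks instead of the casts between the views corrupting memory.
 */
struct host_view {
	unsigned long flags;
	unsigned long slot_a;
	unsigned long slot_b;
};

struct overlay_view {
	unsigned long flags;
	unsigned long pp_magic;
	unsigned long mapping_pad;
};

/* Mirrors the NETMEM_MATCH() idiom: one static assertion per field pair. */
#define OVERLAY_MATCH(h, o)						\
	_Static_assert(offsetof(struct host_view, h) ==			\
		       offsetof(struct overlay_view, o),		\
		       "field offsets diverged: " #h " vs " #o)

OVERLAY_MATCH(flags, flags);
OVERLAY_MATCH(slot_a, pp_magic);
OVERLAY_MATCH(slot_b, mapping_pad);
#undef OVERLAY_MATCH
```

Because `_Static_assert` runs at compile time, the check costs nothing at runtime, which is why the series can keep one assertion per overlaid field.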
From patchwork Wed Nov 30 22:08:00 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13060511
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 21/24] netmem_to_virt
Date: Wed, 30 Nov 2022 22:08:00 +0000
Message-Id: <20221130220803.3657490-22-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>

---
 include/net/page_pool.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 222eedc39140..e13e3a8e83d3 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -112,6 +112,11 @@ static inline struct netmem *virt_to_netmem(const void *x)
 	return page_netmem(virt_to_head_page(x));
 }
 
+static inline void *netmem_to_virt(const struct netmem *nmem)
+{
+	return page_to_virt(netmem_page(nmem));
+}
+
 static inline int netmem_ref_count(const struct netmem *nmem)
 {
 	return page_ref_count(netmem_page(nmem));
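This patch completes an inverse pair: virt_to_netmem() maps a buffer address to its owning descriptor, and the new netmem_to_virt() maps the descriptor back to the buffer. The round-trip property is what later patches rely on. The following userspace mock illustrates the pair under simplified assumptions — a static table stands in for the kernel's page machinery, and all names with a `_mock` suffix are inventions for this sketch.

```c
#include <assert.h>

/* Four fixed-size buffers with one descriptor each. */
enum { NBUFS = 4, BUFSZ = 256 };

static unsigned char buffers[NBUFS][BUFSZ];

struct netmem_mock { int index; };

static struct netmem_mock descriptors[NBUFS] = {
	{ 0 }, { 1 }, { 2 }, { 3 }
};

/* Address -> descriptor, cf. virt_to_netmem(): any address inside a
 * buffer resolves to that buffer's descriptor.
 */
static struct netmem_mock *virt_to_netmem_mock(const void *addr)
{
	long off = (const unsigned char *)addr - (const unsigned char *)buffers;

	return &descriptors[off / BUFSZ];
}

/* Descriptor -> buffer start, cf. netmem_to_virt(). */
static void *netmem_to_virt_mock(const struct netmem_mock *nmem)
{
	return buffers[nmem->index];
}
```

Note the asymmetry, which also holds in the kernel pair: the address-to-descriptor direction accepts any interior pointer, while the reverse always yields the start of the buffer.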
From patchwork Wed Nov 30 22:08:01 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13060508
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 22/24] page_pool: Pass a netmem to init_callback()
Date: Wed, 30 Nov 2022 22:08:01 +0000
Message-Id: <20221130220803.3657490-23-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>

Convert the only user of init_callback.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/net/page_pool.h | 2 +-
 net/bpf/test_run.c      | 4 ++--
 net/core/page_pool.c    | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index e13e3a8e83d3..4878fe30f52c 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -164,7 +164,7 @@ struct page_pool_params {
 	enum dma_data_direction dma_dir; /* DMA mapping direction */
 	unsigned int	max_len; /* max DMA sync memory size */
 	unsigned int	offset;  /* DMA addr offset */
-	void (*init_callback)(struct page *page, void *arg);
+	void (*init_callback)(struct netmem *nmem, void *arg);
 	void *init_arg;
 };
 
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 6094ef7cffcd..921b085802af 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -116,9 +116,9 @@ struct xdp_test_data {
 #define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head))
 #define TEST_XDP_MAX_BATCH 256
 
-static void xdp_test_run_init_page(struct page *page, void *arg)
+static void xdp_test_run_init_page(struct netmem *nmem, void *arg)
 {
-	struct xdp_page_head *head = phys_to_virt(page_to_phys(page));
+	struct xdp_page_head *head = netmem_to_virt(nmem);
 	struct xdp_buff *new_ctx, *orig_ctx;
 	u32 headroom = XDP_PACKET_HEADROOM;
 	struct xdp_test_data *xdp = arg;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 5be78ec93af8..bed40515e74c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -334,7 +334,7 @@ static void page_pool_set_pp_info(struct page_pool *pool,
 	nmem->pp = pool;
 	nmem->pp_magic |= PP_SIGNATURE;
 	if (pool->p.init_callback)
-		pool->p.init_callback(netmem_page(nmem), pool->p.init_arg);
+		pool->p.init_callback(nmem, pool->p.init_arg);
 }
 
 static void page_pool_clear_pp_info(struct netmem *nmem)
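The shape of this change — narrowing the type a registered hook receives — is worth spelling out: because `init_callback` is a typed function pointer in the pool's parameter block, changing its signature forces every implementation (here, the single one in test_run.c) to be converted in the same patch or the build fails. A miniature userspace sketch, with mock types and names (everything suffixed `_mock` is invented for illustration):

```c
#include <assert.h>

/* Stand-in for struct netmem; only the field exercised below. */
struct netmem_mock { unsigned long pp_magic; };

/* Stand-in for page_pool_params: the hook's signature is part of the
 * pool's public contract, so the pointer type polices all users.
 */
struct pool_params_mock {
	void (*init_callback)(struct netmem_mock *nmem, void *arg);
	void *init_arg;
};

/* What page_pool_set_pp_info() does with the hook, in miniature: stamp
 * the buffer, then give the owner a chance to initialise it.
 */
static void pool_init_buffer(const struct pool_params_mock *p,
			     struct netmem_mock *nmem)
{
	nmem->pp_magic |= 0x40UL;	/* stand-in for PP_SIGNATURE */
	if (p->init_callback)
		p->init_callback(nmem, p->init_arg);
}

/* A converted user, analogous to xdp_test_run_init_page() above. */
static void count_init(struct netmem_mock *nmem, void *arg)
{
	(void)nmem;
	(*(int *)arg)++;
}
```

An unconverted callback still taking the old type would now fail to compile at the assignment into `init_callback`, which is exactly the enforcement the commit relies on.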
From patchwork Wed Nov 30 22:08:02 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13060510
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 23/24] net: Add support for netmem in skb_frag
Date: Wed, 30 Nov 2022 22:08:02 +0000
Message-Id: <20221130220803.3657490-24-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>

Allow drivers to add netmem to skbs & retrieve them again.  If the
VM_BUG_ON triggers, we can add a call to compound_head() either in this
function or in page_netmem().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/skbuff.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 4e464a27adaf..dabfb392ca01 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3345,6 +3345,12 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
 	return frag->bv_page;
 }
 
+static inline struct netmem *skb_frag_netmem(const skb_frag_t *frag)
+{
+	VM_BUG_ON_PAGE(PageTail(frag->bv_page), frag->bv_page);
+	return page_netmem(frag->bv_page);
+}
+
 /**
  * __skb_frag_ref - take an addition reference on a paged fragment.
  * @frag: the paged fragment
@@ -3453,6 +3459,11 @@ static inline void __skb_frag_set_page(skb_frag_t *frag, struct page *page)
 	frag->bv_page = page;
 }
 
+static inline void __skb_frag_set_netmem(skb_frag_t *frag, struct netmem *nmem)
+{
+	__skb_frag_set_page(frag, netmem_page(nmem));
+}
+
 /**
  * skb_frag_set_page - sets the page contained in a paged fragment of an skb
  * @skb: the buffer
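The accessor added above illustrates a pattern worth naming: validate the invariant (the fragment must not point at a tail page) at the conversion boundary, so a bad caller trips a debug assertion instead of obtaining a descriptor for the wrong unit of memory. A userspace sketch of the same shape, with all types mocked up for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock types; only the fields the check needs. */
struct page_mock { bool is_tail; };
struct netmem_mock { struct page_mock *page; };

/* Stand-in for skb_frag_t, which holds a page pointer. */
typedef struct { struct page_mock *bv_page; } skb_frag_mock;

/* Mirrors skb_frag_netmem(): check the invariant at the type boundary,
 * then convert.  cf. VM_BUG_ON_PAGE(PageTail(frag->bv_page), ...).
 */
static struct netmem_mock frag_netmem(const skb_frag_mock *frag)
{
	assert(!frag->bv_page->is_tail);
	return (struct netmem_mock){ frag->bv_page };
}
```

Putting the assertion inside the accessor, rather than at each call site, means every future caller of the conversion inherits the check for free.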
From patchwork Wed Nov 30 22:08:03 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13060513
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 24/24] mvneta: Convert to netmem
Date: Wed, 30 Nov 2022 22:08:03 +0000
Message-Id: <20221130220803.3657490-25-willy@infradead.org>
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>

Use the netmem APIs instead of the page APIs.  Improves type-safety.

Signed-off-by: Matthew Wilcox (Oracle)
---
 drivers/net/ethernet/marvell/mvneta.c | 48 +++++++++++++--------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index c2cb98d24f5c..5b8cfd50b7e1 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1931,15 +1931,15 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 			    gfp_t gfp_mask)
 {
 	dma_addr_t phys_addr;
-	struct page *page;
+	struct netmem *nmem;
 
-	page = page_pool_alloc_pages(rxq->page_pool,
+	nmem = page_pool_alloc_netmem(rxq->page_pool,
 				     gfp_mask | __GFP_NOWARN);
-	if (!page)
+	if (!nmem)
 		return -ENOMEM;
 
-	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
-	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
+	phys_addr = netmem_get_dma_addr(nmem) + pp->rx_offset_correction;
+	mvneta_rx_desc_fill(rx_desc, phys_addr, nmem, rxq);
 
 	return 0;
 }
@@ -2006,7 +2006,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 		if (!data || !(rx_desc->buf_phys_addr))
 			continue;
 
-		page_pool_put_full_page(rxq->page_pool, data, false);
+		page_pool_put_full_netmem(rxq->page_pool, data, false);
 	}
 	if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
 		xdp_rxq_info_unreg(&rxq->xdp_rxq);
@@ -2072,11 +2072,11 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		goto out;
 
 	for (i = 0; i < sinfo->nr_frags; i++)
-		page_pool_put_full_page(rxq->page_pool,
-					skb_frag_page(&sinfo->frags[i]), true);
+		page_pool_put_full_netmem(rxq->page_pool,
+					  skb_frag_netmem(&sinfo->frags[i]), true);
 
 out:
-	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
+	page_pool_put_netmem(rxq->page_pool, virt_to_netmem(xdp->data),
 			   sync_len, true);
 }
 
@@ -2088,7 +2088,6 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 	struct device *dev = pp->dev->dev.parent;
 	struct mvneta_tx_desc *tx_desc;
 	int i, num_frames = 1;
-	struct page *page;
 
 	if (unlikely(xdp_frame_has_frags(xdpf)))
 		num_frames += sinfo->nr_frags;
@@ -2123,9 +2122,10 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 			buf->type = MVNETA_TYPE_XDP_NDO;
 		} else {
-			page = unlikely(frag) ? skb_frag_page(frag)
-					      : virt_to_page(xdpf->data);
-			dma_addr = page_pool_get_dma_addr(page);
+			struct netmem *nmem = unlikely(frag) ?
+						skb_frag_netmem(frag) :
+						virt_to_netmem(xdpf->data);
+			dma_addr = netmem_get_dma_addr(nmem);
 			if (unlikely(frag))
 				dma_addr += skb_frag_off(frag);
 			else
@@ -2308,9 +2308,9 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
 		     struct xdp_buff *xdp, int *size,
-		     struct page *page)
+		     struct netmem *nmem)
 {
-	unsigned char *data = page_address(page);
+	unsigned char *data = netmem_to_virt(nmem);
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
@@ -2343,7 +2343,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
 			    struct xdp_buff *xdp, int *size,
-			    struct page *page)
+			    struct netmem *nmem)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	struct net_device *dev = pp->dev;
@@ -2371,16 +2371,16 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 		skb_frag_off_set(frag, pp->rx_offset_correction);
 		skb_frag_size_set(frag, data_len);
-		__skb_frag_set_page(frag, page);
+		__skb_frag_set_netmem(frag, nmem);
 
 		if (!xdp_buff_has_frags(xdp)) {
 			sinfo->xdp_frags_size = *size;
 			xdp_buff_set_frags_flag(xdp);
 		}
-		if (page_is_pfmemalloc(page))
+		if (netmem_is_pfmemalloc(nmem))
 			xdp_buff_set_frag_pfmemalloc(xdp);
 	} else {
-		page_pool_put_full_page(rxq->page_pool, page, true);
+		page_pool_put_full_netmem(rxq->page_pool, nmem, true);
 	}
 	*size -= len;
 }
@@ -2440,10 +2440,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		u32 rx_status, index;
 		struct sk_buff *skb;
-		struct page *page;
+		struct netmem *nmem;
 
 		index = rx_desc - rxq->descs;
-		page = (struct page *)rxq->buf_virt_addr[index];
+		nmem = rxq->buf_virt_addr[index];
 		rx_status = rx_desc->status;
 		rx_proc++;
@@ -2461,17 +2461,17 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			desc_status = rx_status;
 
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
-					     &size,
nmem); } else { if (unlikely(!xdp_buf.data_hard_start)) { rx_desc->buf_phys_addr = 0; - page_pool_put_full_page(rxq->page_pool, page, + page_pool_put_full_netmem(rxq->page_pool, nmem, true); goto next; } mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf, - &size, page); + &size, nmem); } /* Middle or Last descriptor */ if (!(rx_status & MVNETA_RXD_LAST_DESC)) From patchwork Tue Dec 6 16:05:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 13066135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C322C352A1 for ; Tue, 6 Dec 2022 16:05:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 82A538E0005; Tue, 6 Dec 2022 11:05:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7D9348E0001; Tue, 6 Dec 2022 11:05:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6A23A8E0005; Tue, 6 Dec 2022 11:05:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 5D11F8E0001 for ; Tue, 6 Dec 2022 11:05:45 -0500 (EST) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 0885BA05E1 for ; Tue, 6 Dec 2022 16:05:45 +0000 (UTC) X-FDA: 80212357050.26.EA3363A Received: from casper.infradead.org (casper.infradead.org [90.155.50.34]) by imf21.hostedemail.com (Postfix) with ESMTP id 335DA1C0012 for ; Tue, 6 Dec 2022 16:05:42 +0000 (UTC) Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=AlPNq+68; dmarc=none; spf=none (imf21.hostedemail.com: domain of 
From patchwork Tue Dec 6 16:05:36 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13066135
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 25/26] netpool: Additional utility functions
Date: Tue, 6 Dec 2022 16:05:36 +0000
Message-Id: <20221206160537.1092343-1-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>

To be folded into earlier commit
---
 include/net/page_pool.h | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 4878fe30f52c..94bad45ed8d0 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -117,16 +117,28 @@ static inline void *netmem_to_virt(const struct netmem *nmem)
 	return page_to_virt(netmem_page(nmem));
 }
 
+static inline void *netmem_address(const struct netmem *nmem)
+{
+	return page_address(netmem_page(nmem));
+}
+
 static inline int netmem_ref_count(const struct netmem *nmem)
 {
 	return page_ref_count(netmem_page(nmem));
 }
 
+static inline void netmem_get(struct netmem *nmem)
+{
+	struct folio *folio = (struct folio *)nmem;
+
+	folio_get(folio);
+}
+
 static inline void netmem_put(struct netmem *nmem)
 {
 	struct folio *folio = (struct folio *)nmem;
 
-	return folio_put(folio);
+	folio_put(folio);
 }
 
 static inline bool netmem_is_pfmemalloc(const struct netmem *nmem)
@@ -295,6 +307,11 @@ struct page_pool {
 
 struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp);
 
+static inline struct netmem *page_pool_dev_alloc_netmem(struct page_pool *pool)
+{
+	return page_pool_alloc_netmem(pool, GFP_ATOMIC | __GFP_NOWARN);
+}
+
 static inline struct page *page_pool_alloc_pages(struct page_pool *pool,
 						 gfp_t gfp)
 {
@@ -452,6 +469,12 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	page_pool_put_full_page(pool, page, true);
 }
 
+static inline void page_pool_recycle_netmem(struct page_pool *pool,
+					    struct netmem *nmem)
+{
+	page_pool_put_full_netmem(pool, nmem, true);
+}
+
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
From patchwork Tue Dec 6 16:05:37 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13066136
From: "Matthew Wilcox (Oracle)"
To: Jesper Dangaard Brouer , Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 26/26] mlx5: Convert to netmem
Date: Tue, 6 Dec 2022 16:05:37 +0000
Message-Id: <20221206160537.1092343-2-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20221130220803.3657490-1-willy@infradead.org>
References: <20221130220803.3657490-1-willy@infradead.org>
Use the netmem APIs instead of the page_pool APIs.  Possibly we should
add a netmem equivalent of skb_add_rx_frag(), but that can happen
later.

Saves one call to compound_head() in the call to put_page() in
mlx5e_page_release_dynamic() which saves 58 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Jesper Dangaard Brouer
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  10 +-
 .../net/ethernet/mellanox/mlx5/core/en/txrx.h |   4 +-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  23 ++--
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |   2 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  12 +-
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 130 +++++++++---------
 6 files changed, 93 insertions(+), 88 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index ff5b302531d5..f334f87273c9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -467,7 +467,7 @@ struct mlx5e_txqsq {
 } ____cacheline_aligned_in_smp;
 
 union mlx5e_alloc_unit {
-	struct page *page;
+	struct netmem *nmem;
 	struct xdp_buff *xsk;
 };
 
@@ -501,7 +501,7 @@ struct mlx5e_xdp_info {
 		} frame;
 		struct {
 			struct mlx5e_rq *rq;
-			struct page *page;
+			struct netmem *nmem;
 		} page;
 	};
 };
@@ -619,7 +619,7 @@ struct mlx5e_mpw_info {
 struct mlx5e_page_cache {
 	u32 head;
 	u32 tail;
-	struct page *page_cache[MLX5E_CACHE_SIZE];
+	struct netmem *page_cache[MLX5E_CACHE_SIZE];
 };
 
 struct mlx5e_rq;
@@ -657,13 +657,13 @@ struct mlx5e_rq_frags_info {
 
 struct mlx5e_dma_info {
 	dma_addr_t addr;
-	struct page *page;
+	struct netmem *nmem;
 };
 
 struct mlx5e_shampo_hd {
 	u32 mkey;
 	struct mlx5e_dma_info *info;
-	struct page *last_page;
+	struct netmem *last_nmem;
 	u16 hd_per_wq;
 	u16 hd_per_wqe;
 	unsigned long *bitmap;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 853f312cd757..aa231d96c52c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -65,8 +65,8 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget);
 int mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
 
 /* RX */
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page);
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle);
+void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct netmem *nmem);
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct netmem *nmem, bool recycle);
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq));
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq));
 int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 20507ef2f956..8e9136381592 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -57,7 +57,7 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk)
 
 static inline bool
 mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
-		    struct page *page, struct xdp_buff *xdp)
+		    struct netmem *nmem, struct xdp_buff *xdp)
 {
 	struct skb_shared_info *sinfo = NULL;
 	struct mlx5e_xmit_data xdptxd;
@@ -116,7 +116,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE;
 	xdpi.page.rq = rq;
 
-	dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf);
+	dma_addr = netmem_get_dma_addr(nmem) + (xdpf->data - (void *)xdpf);
 	dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_BIDIRECTIONAL);
 
 	if (unlikely(xdp_frame_has_frags(xdpf))) {
@@ -127,7 +127,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 			dma_addr_t addr;
 			u32 len;
 
-			addr = page_pool_get_dma_addr(skb_frag_page(frag)) +
+			addr = netmem_get_dma_addr(skb_frag_netmem(frag)) +
 				skb_frag_off(frag);
 			len = skb_frag_size(frag);
 			dma_sync_single_for_device(sq->pdev, addr, len,
@@ -141,14 +141,14 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 			  mlx5e_xmit_xdp_frame, sq, &xdptxd, sinfo, 0)))
 		return false;
 
-	xdpi.page.page = page;
+	xdpi.page.nmem = nmem;
 	mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 
 	if (unlikely(xdp_frame_has_frags(xdpf))) {
 		for (i = 0; i < sinfo->nr_frags; i++) {
 			skb_frag_t *frag = &sinfo->frags[i];
 
-			xdpi.page.page = skb_frag_page(frag);
+			xdpi.page.nmem = skb_frag_netmem(frag);
 			mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi);
 		}
 	}
@@ -157,7 +157,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 }
 
 /* returns true if packet was consumed by xdp */
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct netmem *nmem,
 		      struct bpf_prog *prog, struct xdp_buff *xdp)
 {
 	u32 act;
@@ -168,19 +168,19 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 	case XDP_PASS:
 		return false;
 	case XDP_TX:
-		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, page, xdp)))
+		if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, nmem, xdp)))
 			goto xdp_abort;
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */
 		return true;
 	case XDP_REDIRECT:
-		/* When XDP enabled then page-refcnt==1 here */
+		/* When XDP enabled then nmem->refcnt==1 here */
 		err = xdp_do_redirect(rq->netdev, xdp, prog);
 		if (unlikely(err))
 			goto xdp_abort;
 		__set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags);
 		__set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
 		if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL)
-			mlx5e_page_dma_unmap(rq, page);
+			mlx5e_page_dma_unmap(rq, nmem);
 		rq->stats->xdp_redirect++;
 		return true;
 	default:
@@ -445,7 +445,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd,
 			skb_frag_t *frag = &sinfo->frags[i];
 			dma_addr_t addr;
 
-			addr = page_pool_get_dma_addr(skb_frag_page(frag)) +
+			addr = netmem_get_dma_addr(skb_frag_netmem(frag)) +
 				skb_frag_off(frag);
 
 			dseg++;
@@ -495,7 +495,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq,
 			break;
 		case MLX5E_XDP_XMIT_MODE_PAGE:
 			/* XDP_TX from the regular RQ */
-			mlx5e_page_release_dynamic(xdpi.page.rq, xdpi.page.page, recycle);
+			mlx5e_page_release_dynamic(xdpi.page.rq,
+						   xdpi.page.nmem, recycle);
 			break;
 		case MLX5E_XDP_XMIT_MODE_XSK:
 			/* AF_XDP send */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index bc2d9034af5b..5bc875f131a2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -46,7 +46,7 @@ struct mlx5e_xsk_param;
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
-bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
+bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct netmem *nmem,
 		      struct bpf_prog *prog, struct xdp_buff *xdp);
 void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
 bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 217c8a478977..e6b7a6b263ab 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -555,16 +555,18 @@ static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work)
 
 static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
 {
-	rq->wqe_overflow.page = alloc_page(GFP_KERNEL);
-	if (!rq->wqe_overflow.page)
+	struct page *page = alloc_page(GFP_KERNEL);
+	if (!page)
 		return -ENOMEM;
 
-	rq->wqe_overflow.addr = dma_map_page(rq->pdev, rq->wqe_overflow.page, 0,
+	rq->wqe_overflow.addr = dma_map_page(rq->pdev, page, 0,
 					     PAGE_SIZE, rq->buff.map_dir);
 	if (dma_mapping_error(rq->pdev, rq->wqe_overflow.addr)) {
-		__free_page(rq->wqe_overflow.page);
+		__free_page(page);
 		return -ENOMEM;
 	}
+
+	rq->wqe_overflow.nmem = page_netmem(page);
 	return 0;
 }
 
@@ -572,7 +574,7 @@ static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
 {
 	dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, PAGE_SIZE,
 		       rq->buff.map_dir);
-	__free_page(rq->wqe_overflow.page);
+	__free_page(netmem_page(rq->wqe_overflow.nmem));
 }
 
 static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index b1ea0b995d9c..77dee6138dc8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -274,7 +274,7 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,
 	return mlx5e_decompress_cqes_cont(rq, wq, 1, budget_rem);
 }
 
-static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page)
+static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct netmem *nmem)
 {
 	struct mlx5e_page_cache *cache = &rq->page_cache;
 	u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1);
@@ -285,12 +285,12 @@ static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct netmem *nmem)
 		return false;
 	}
 
-	if (!dev_page_is_reusable(page)) {
+	if (!dev_page_is_reusable(netmem_page(nmem))) {
 		stats->cache_waive++;
 		return false;
 	}
 
-	cache->page_cache[cache->tail] = page;
+	cache->page_cache[cache->tail] = nmem;
 	cache->tail = tail_next;
 	return true;
 }
@@ -306,16 +306,16 @@ static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq, union mlx5e_alloc_unit *au)
 		return false;
 	}
 
-	if (page_ref_count(cache->page_cache[cache->head]) != 1) {
+	if (netmem_ref_count(cache->page_cache[cache->head]) != 1) {
 		stats->cache_busy++;
 		return false;
 	}
 
-	au->page = cache->page_cache[cache->head];
+	au->nmem = cache->page_cache[cache->head];
 	cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1);
 	stats->cache_reuse++;
 
-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	/* Non-XSK always uses PAGE_SIZE. */
 	dma_sync_single_for_device(rq->pdev, addr, PAGE_SIZE, rq->buff.map_dir);
 	return true;
@@ -328,43 +328,45 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq, union mlx5e_alloc_unit *au)
 	if (mlx5e_rx_cache_get(rq, au))
 		return 0;
 
-	au->page = page_pool_dev_alloc_pages(rq->page_pool);
-	if (unlikely(!au->page))
+	au->nmem = page_pool_dev_alloc_netmem(rq->page_pool);
+	if (unlikely(!au->nmem))
 		return -ENOMEM;
 
 	/* Non-XSK always uses PAGE_SIZE. */
-	addr = dma_map_page(rq->pdev, au->page, 0, PAGE_SIZE, rq->buff.map_dir);
+	addr = dma_map_page(rq->pdev, netmem_page(au->nmem), 0, PAGE_SIZE,
+			    rq->buff.map_dir);
 	if (unlikely(dma_mapping_error(rq->pdev, addr))) {
-		page_pool_recycle_direct(rq->page_pool, au->page);
-		au->page = NULL;
+		page_pool_recycle_netmem(rq->page_pool, au->nmem);
+		au->nmem = NULL;
 		return -ENOMEM;
 	}
 
-	page_pool_set_dma_addr(au->page, addr);
+	netmem_set_dma_addr(au->nmem, addr);
 
 	return 0;
 }
 
-void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page)
+void mlx5e_nmem_dma_unmap(struct mlx5e_rq *rq, struct netmem *nmem)
 {
-	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+	dma_addr_t dma_addr = netmem_get_dma_addr(nmem);
 
 	dma_unmap_page_attrs(rq->pdev, dma_addr, PAGE_SIZE, rq->buff.map_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC);
-	page_pool_set_dma_addr(page, 0);
+	netmem_set_dma_addr(nmem, 0);
 }
 
-void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle)
+void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct netmem *nmem,
+				bool recycle)
 {
 	if (likely(recycle)) {
-		if (mlx5e_rx_cache_put(rq, page))
+		if (mlx5e_rx_cache_put(rq, nmem))
 			return;
 
-		mlx5e_page_dma_unmap(rq, page);
-		page_pool_recycle_direct(rq->page_pool, page);
+		mlx5e_nmem_dma_unmap(rq, nmem);
+		page_pool_recycle_netmem(rq->page_pool, nmem);
 	} else {
-		mlx5e_page_dma_unmap(rq, page);
-		page_pool_release_page(rq->page_pool, page);
-		put_page(page);
+		mlx5e_nmem_dma_unmap(rq, nmem);
+		page_pool_release_netmem(rq->page_pool, nmem);
+		netmem_put(nmem);
 	}
 }
 
@@ -389,7 +391,7 @@ static inline void mlx5e_put_rx_frag(struct mlx5e_rq *rq,
 				     bool recycle)
 {
 	if (frag->last_in_page)
-		mlx5e_page_release_dynamic(rq, frag->au->page, recycle);
+		mlx5e_page_release_dynamic(rq, frag->au->nmem, recycle);
 }
 
 static inline struct mlx5e_wqe_frag_info *get_frag(struct mlx5e_rq *rq, u16 ix)
@@ -413,7 +415,7 @@ static int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe_cyc *wqe,
 			goto free_frags;
 
 		headroom = i == 0 ? rq->buff.headroom : 0;
-		addr = page_pool_get_dma_addr(frag->au->page);
+		addr = netmem_get_dma_addr(frag->au->nmem);
 		wqe->data[i].addr = cpu_to_be64(addr + frag->offset + headroom);
 	}
 
@@ -475,21 +477,21 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb,
 		   union mlx5e_alloc_unit *au, u32 frag_offset, u32 len,
 		   unsigned int truesize)
 {
-	dma_addr_t addr = page_pool_get_dma_addr(au->page);
+	dma_addr_t addr = netmem_get_dma_addr(au->nmem);
 
 	dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len,
 				rq->buff.map_dir);
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
-			au->page, frag_offset, len, truesize);
+			netmem_page(au->nmem), frag_offset, len, truesize);
 }
 
 static inline void
 mlx5e_copy_skb_header(struct mlx5e_rq *rq, struct sk_buff *skb,
-		      struct page *page, dma_addr_t addr,
+		      struct netmem *nmem, dma_addr_t addr,
 		      int offset_from, int dma_offset, u32 headlen)
 {
-	const void *from = page_address(page) + offset_from;
+	const void *from = netmem_address(nmem) + offset_from;
 	/* Aligning len to sizeof(long) optimizes memcpy performance */
 	unsigned int len = ALIGN(headlen, sizeof(long));
 
@@ -522,7 +524,7 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle)
 	} else {
 		for (i = 0; i < rq->mpwqe.pages_per_wqe; i++)
 			if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap))
-				mlx5e_page_release_dynamic(rq, alloc_units[i].page, recycle);
+				mlx5e_page_release_dynamic(rq, alloc_units[i].nmem, recycle);
 	}
 }
 
@@ -586,7 +588,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
 	u16 entries, pi, header_offset, err, wqe_bbs, new_entries;
 	u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey;
-	struct page *page = shampo->last_page;
+	struct netmem *nmem = shampo->last_nmem;
 	u64 addr = shampo->last_addr;
 	struct mlx5e_dma_info *dma_info;
 	struct mlx5e_umr_wqe *umr_wqe;
@@ -613,11 +615,11 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 			err = mlx5e_page_alloc_pool(rq, &au);
 			if (unlikely(err))
 				goto err_unmap;
-			page = dma_info->page = au.page;
-			addr = dma_info->addr = page_pool_get_dma_addr(au.page);
+			nmem = dma_info->nmem = au.nmem;
+			addr = dma_info->addr = netmem_get_dma_addr(au.nmem);
 		} else {
 			dma_info->addr = addr + header_offset;
-			dma_info->page = page;
+			dma_info->nmem = nmem;
 		}
 
 update_klm:
@@ -635,7 +637,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 	};
 
 	shampo->pi = (shampo->pi + new_entries) & (shampo->hd_per_wq - 1);
-	shampo->last_page = page;
+	shampo->last_nmem = nmem;
 	shampo->last_addr = addr;
 
 	sq->pc += wqe_bbs;
 	sq->doorbell_cseg = &umr_wqe->ctrl;
@@ -647,7 +649,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 		dma_info = &shampo->info[--index];
 		if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) {
 			dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE);
-			mlx5e_page_release_dynamic(rq, dma_info->page, true);
+			mlx5e_page_release_dynamic(rq, dma_info->nmem, true);
 		}
 	}
 	rq->stats->buff_alloc_err++;
@@ -721,7 +723,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 		err = mlx5e_page_alloc_pool(rq, au);
 		if (unlikely(err))
 			goto err_unmap;
-		addr = page_pool_get_dma_addr(au->page);
+		addr = netmem_get_dma_addr(au->nmem);
 		umr_wqe->inline_mtts[i] = (struct mlx5_mtt) {
 			.ptag = cpu_to_be64(addr | MLX5_EN_WR),
 		};
@@ -752,7 +754,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
 err_unmap:
 	while (--i >= 0) {
 		au--;
-		mlx5e_page_release_dynamic(rq, au->page, true);
+		mlx5e_page_release_dynamic(rq, au->nmem, true);
 	}
 
 err:
@@ -771,7 +773,7 @@ void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close)
 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
 	int hd_per_wq = shampo->hd_per_wq;
-	struct page *deleted_page = NULL;
+	struct netmem *deleted_nmem = NULL;
 	struct mlx5e_dma_info *hd_info;
 	int i, index = start;
 
@@ -784,9 +786,9 @@ void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close)
 		hd_info = &shampo->info[index];
 		hd_info->addr = ALIGN_DOWN(hd_info->addr, PAGE_SIZE);
-		if (hd_info->page != deleted_page) {
-			deleted_page = hd_info->page;
-			mlx5e_page_release_dynamic(rq, hd_info->page, false);
+		if (hd_info->nmem != deleted_nmem) {
+			deleted_nmem = hd_info->nmem;
+			mlx5e_page_release_dynamic(rq, hd_info->nmem, false);
 		}
 	}
 
@@ -1125,7 +1127,7 @@ static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index)
 	struct mlx5e_dma_info *last_head = &rq->mpwqe.shampo->info[header_index];
 	u16 head_offset = (last_head->addr & (PAGE_SIZE - 1)) + rq->buff.headroom;
 
-	return page_address(last_head->page) + head_offset;
+	return netmem_address(last_head->nmem) + head_offset;
 }
 
 static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4)
@@ -1584,11 +1586,11 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	dma_addr_t addr;
 	u32 frag_size;
 
-	va = page_address(au->page) + wi->offset;
+	va = netmem_address(au->nmem) + wi->offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 
-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
 				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);
@@ -1599,7 +1601,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp))
+		if (mlx5e_xdp_handle(rq, au->nmem, prog, &xdp))
 			return NULL; /* page/packet was consumed by XDP */
 
 		rx_headroom = xdp.data - xdp.data_hard_start;
@@ -1612,7 +1614,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		return NULL;
 
 	/* queue up for recycling/reuse */
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);
 
 	return skb;
 }
@@ -1634,10 +1636,10 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	u32 truesize;
 	void *va;
 
-	va = page_address(au->page) + wi->offset;
+	va = netmem_address(au->nmem) + wi->offset;
 
 	frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
 
-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset,
 				      rq->buff.frame0_sz, rq->buff.map_dir);
 	net_prefetchw(va); /* xdp_frame data area */
@@ -1658,7 +1660,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 		frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt);
 
-		addr = page_pool_get_dma_addr(au->page);
+		addr = netmem_get_dma_addr(au->nmem);
 		dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
 					frag_consumed_bytes, rq->buff.map_dir);
 
@@ -1672,11 +1674,11 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 		}
 
 		frag = &sinfo->frags[sinfo->nr_frags++];
-		__skb_frag_set_page(frag, au->page);
+		__skb_frag_set_netmem(frag, au->nmem);
 		skb_frag_off_set(frag, wi->offset);
 		skb_frag_size_set(frag, frag_consumed_bytes);
 
-		if (page_is_pfmemalloc(au->page))
+		if (netmem_is_pfmemalloc(au->nmem))
 			xdp_buff_set_frag_pfmemalloc(&xdp);
 
 		sinfo->xdp_frags_size += frag_consumed_bytes;
@@ -1690,7 +1692,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	au = head_wi->au;
 
 	prog = rcu_dereference(rq->xdp_prog);
-	if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+	if (prog && mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) {
 		if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
 			int i;
 
@@ -1707,7 +1709,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 	if (unlikely(!skb))
 		return NULL;
 
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);
 
 	if (unlikely(xdp_buff_has_frags(&xdp))) {
 		int i;
@@ -1956,8 +1958,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		mlx5e_fill_skb_data(skb, rq, au, byte_cnt, frag_offset);
 
 	/* copy header */
-	addr = page_pool_get_dma_addr(head_au->page);
-	mlx5e_copy_skb_header(rq, skb, head_au->page, addr,
+	addr = netmem_get_dma_addr(head_au->nmem);
+	mlx5e_copy_skb_header(rq, skb, head_au->nmem, addr,
 			      head_offset, head_offset, headlen);
 	/* skb linear part was allocated with headlen and aligned to long */
 	skb->tail += headlen;
@@ -1985,11 +1987,11 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		return NULL;
 	}
 
-	va = page_address(au->page) + head_offset;
+	va = netmem_address(au->nmem) + head_offset;
 	data = va + rx_headroom;
 	frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
 
-	addr = page_pool_get_dma_addr(au->page);
+	addr = netmem_get_dma_addr(au->nmem);
 	dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset,
 				      frag_size, rq->buff.map_dir);
 	net_prefetch(data);
@@ -2000,7 +2002,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
-		if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
+		if (mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) {
 			if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
 				__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
 			return NULL; /* page/packet was consumed by XDP */
@@ -2016,7 +2018,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
 		return NULL;
 
 	/* queue up for recycling/reuse */
-	page_ref_inc(au->page);
+	netmem_get(au->nmem);
netmem_get(au->nmem); return skb; } @@ -2033,7 +2035,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, void *hdr, *data; u32 frag_size; - hdr = page_address(head->page) + head_offset; + hdr = netmem_address(head->nmem) + head_offset; data = hdr + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size); @@ -2048,7 +2050,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, return NULL; /* queue up for recycling/reuse */ - page_ref_inc(head->page); + netmem_get(head->nmem); } else { /* allocate SKB and copy header for large header */ @@ -2061,7 +2063,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, } prefetchw(skb->data); - mlx5e_copy_skb_header(rq, skb, head->page, head->addr, + mlx5e_copy_skb_header(rq, skb, head->nmem, head->addr, head_offset + rx_headroom, rx_headroom, head_size); /* skb linear part was allocated with headlen and aligned to long */ @@ -2113,7 +2115,7 @@ mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index) if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) { shampo->info[header_index].addr = ALIGN_DOWN(addr, PAGE_SIZE); - mlx5e_page_release_dynamic(rq, shampo->info[header_index].page, true); + mlx5e_page_release_dynamic(rq, shampo->info[header_index].nmem, true); } bitmap_clear(shampo->bitmap, header_index, 1); }