From patchwork Tue May 30 15:00:31 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13260411
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Magnus Karlsson, Michal Kubiak,
	Larysa Zaremba, Jesper Dangaard Brouer, Ilias Apalodimas,
	Christoph Hellwig, Paul Menzel, netdev@vger.kernel.org,
	intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 08/12] net: page_pool: add DMA-sync-for-CPU inline helpers
Date: Tue, 30 May 2023 17:00:31 +0200
Message-Id: <20230530150035.1943669-9-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230530150035.1943669-1-aleksander.lobakin@intel.com>
References: <20230530150035.1943669-1-aleksander.lobakin@intel.com>

Each driver is responsible for syncing buffers written by HW for CPU
before accessing them. Almost every PP-enabled driver uses the same
pattern, which could be shorthanded into a static inline to make driver
code a little bit more compact.

Introduce a couple of such functions. The first one takes the actual
size of the data written by HW and is the main one to be used on Rx.
The second does the same, but only if the PP performs DMA
synchronizations at all.
The last one picks max_len from the PP params and is designed for more
extreme cases, when the size is unknown but the buffer still needs to be
synced.

Also constify pointer arguments of page_pool_get_dma_dir() and
page_pool_get_dma_addr() to give a bit more room for optimization, as
both of them are read-only.

Signed-off-by: Alexander Lobakin
---
 include/net/page_pool.h | 64 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index ee895376270e..361c7fdd718c 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -32,7 +32,7 @@
 #include <linux/mm.h> /* Needed by ptr_ring */
 #include <linux/ptr_ring.h>
-#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
 
 #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
 					* map/unmap
@@ -237,8 +237,8 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
 /* get the stored dma direction. A driver might decide to treat this locally and
  * avoid the extra cache line from page_pool to determine the direction
  */
-static
-inline enum dma_data_direction page_pool_get_dma_dir(struct page_pool *pool)
+static inline enum dma_data_direction
+page_pool_get_dma_dir(const struct page_pool *pool)
 {
 	return pool->p.dma_dir;
 }
@@ -361,7 +361,7 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT	\
 		(sizeof(dma_addr_t) > sizeof(unsigned long))
 
-static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+static inline dma_addr_t page_pool_get_dma_addr(const struct page *page)
 {
 	dma_addr_t ret = page->dma_addr;
 
@@ -378,6 +378,62 @@ static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 		page->dma_addr_upper = upper_32_bits(addr);
 }
 
+/**
+ * __page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
+ * Note that this version performs DMA sync unconditionally, even if the
+ * associated PP doesn't perform sync-for-device. Consider the non-underscored
+ * version first if unsure.
+ */
+static inline void __page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+						const struct page *page,
+						u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}
+
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU if needed
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Performs DMA sync for CPU, but *only* when:
+ * 1) page_pool was created with %PP_FLAG_DMA_SYNC_DEV to manage DMA syncs;
+ * 2) AND sync shortcut is not available (IOMMU, swiotlb, non-coherent DMA, ...)
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 dma_sync_size)
+{
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		__page_pool_dma_sync_for_cpu(pool, page, dma_sync_size);
+}
+
+/**
+ * page_pool_dma_sync_full_for_cpu - sync full Rx page for CPU (if needed)
+ * @pool: page_pool which this page belongs to
+ * @page: page to sync
+ *
+ * Performs sync for the entire length exposed to hardware.
+ * Can be used on DMA errors or before freeing the page, when it's unknown
+ * whether the HW touched the buffer.
+ */
+static inline void
+page_pool_dma_sync_full_for_cpu(const struct page_pool *pool,
+				const struct page *page)
+{
+	page_pool_dma_sync_for_cpu(pool, page, pool->p.max_len);
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
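
For illustration only (not part of the patch), an Rx path in a PP-enabled
driver could use the new helper roughly as below, assuming <net/page_pool.h>
is included. The queue structure and its fields (my_rx_queue, ->pp,
->rx_offset) are hypothetical placeholders, not taken from the patch:

	static void *my_rx_get_data(const struct my_rx_queue *rxq,
				    struct page *page, u32 pkt_len)
	{
		/* Sync only the bytes actually written by HW; the helper is
		 * a no-op unless the pool was created with
		 * PP_FLAG_DMA_SYNC_DEV.
		 */
		page_pool_dma_sync_for_cpu(rxq->pp, page, pkt_len);

		/* Packet data starts at the headroom offset the pool was
		 * configured with (rx_offset mirrors pool->p.offset here).
		 */
		return page_address(page) + rxq->rx_offset;
	}

When the amount of data written by HW is unknown, e.g. on a descriptor
error, page_pool_dma_sync_full_for_cpu(rxq->pp, page) could be used instead
to sync the whole pool->p.max_len region.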