From patchwork Wed Apr 24 20:35:53 2024
X-Patchwork-Submitter: Tony Nguyen
X-Patchwork-Id: 13642506
X-Patchwork-Delegate: kuba@kernel.org
From: Tony Nguyen <anthony.l.nguyen@intel.com>
To: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	edumazet@google.com, netdev@vger.kernel.org
Cc: Alexander Lobakin <aleksander.lobakin@intel.com>,
	anthony.l.nguyen@intel.com, hawk@kernel.org, linux-mm@kvack.org,
	przemyslaw.kitszel@intel.com, alexanderduyck@fb.com,
	ilias.apalodimas@linaro.org, linux-kernel@vger.kernel.org,
	linyunsheng@huawei.com,
	nex.sw.ncis.osdt.itp.upstreaming@intel.com, cl@linux.com,
	akpm@linux-foundation.org, vbabka@suse.cz
Subject: [PATCH net-next v11 06/10] page_pool: add DMA-sync-for-CPU inline
	helper
Date: Wed, 24 Apr 2024 13:35:53 -0700
Message-ID: <20240424203559.3420468-7-anthony.l.nguyen@intel.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20240424203559.3420468-1-anthony.l.nguyen@intel.com>
References: <20240424203559.3420468-1-anthony.l.nguyen@intel.com>

From: Alexander Lobakin <aleksander.lobakin@intel.com>

Each driver is responsible for syncing buffers written by HW for the CPU
before accessing them. Almost every PP-enabled driver uses the same
pattern, which could be shorthanded into a static inline to make driver
code a bit more compact.

Introduce a simple helper which performs DMA synchronization for the size
passed from the driver. It can be used even when the pool doesn't manage
DMA-syncs-for-device; just make sure the page has a correct DMA address
set via page_pool_set_dma_addr().

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 include/net/page_pool/helpers.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index c7bb06750e85..873631c79ab1 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -52,6 +52,8 @@
 #ifndef _NET_PAGE_POOL_HELPERS_H
 #define _NET_PAGE_POOL_HELPERS_H
 
+#include <linux/dma-mapping.h>
+
 #include <net/page_pool/types.h>
 
 #ifdef CONFIG_PAGE_POOL_STATS
@@ -395,6 +397,28 @@ static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 	return false;
 }
 
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: &page_pool the @page belongs to
+ * @page: page to sync
+ * @offset: offset from page start to "hard" start if using PP frags
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. Caller must ensure the pool was created with ``PP_FLAG_DMA_MAP``.
+ * Note that this version performs DMA sync unconditionally, even if the
+ * associated PP doesn't perform sync-for-device.
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 offset, u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      offset + pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}
+
 static inline bool page_pool_put(struct page_pool *pool)
 {
 	return refcount_dec_and_test(&pool->user_cnt);
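
For reference, a minimal sketch of how an Rx path might call the new helper
before touching packet data; the ring and descriptor names below (my_ring,
my_rx_desc, rx_offset, rx_pages) are hypothetical, only
page_pool_dma_sync_for_cpu() and its arguments come from this patch:

/* Hypothetical Rx-polling snippet, for illustration only. */
static void my_rx_process(struct my_ring *ring,
			  const struct my_rx_desc *desc)
{
	struct page *page = ring->rx_pages[ring->next_to_clean];
	u32 len = le16_to_cpu(desc->pkt_len);

	/* Make the HW-written data visible to the CPU before reading it.
	 * ring->rx_offset is the driver's headroom within the page; the
	 * pool's own p.offset is added inside the helper.
	 */
	page_pool_dma_sync_for_cpu(ring->pool, page, ring->rx_offset, len);

	/* ... build an skb or xdp_buff from page_address(page) ... */
}

Since the helper syncs unconditionally, a driver whose pool was created
without PP_FLAG_DMA_SYNC_DEV can still use it, as long as the pages carry a
valid DMA address (set by the pool with PP_FLAG_DMA_MAP, or manually via
page_pool_set_dma_addr()).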