From patchwork Fri Aug 20 02:06:35 2021
From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH net-next 2/2] page_pool: optimize the cpu sync operation when DMA mapping
Date: Fri, 20 Aug 2021 10:06:35 +0800
Message-ID: <1629425195-10130-3-git-send-email-linyunsheng@huawei.com>
In-Reply-To: <1629425195-10130-1-git-send-email-linyunsheng@huawei.com>
List-ID: netdev@vger.kernel.org

If DMA_ATTR_SKIP_CPU_SYNC is not set, the CPU sync is already done
inside dma_map_page_attrs(), so set attrs according to pool->p.flags
to avoid calling the DMA sync function a second time. Also mark the
DMA mapping error as the unlikely case while we are at it.
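
For illustration, a minimal sketch of the before/after call pattern
(condensed from the diff below; the local variables dev, size and dir
are shorthand for the corresponding pool->p fields and do not appear
in the patch):

	/* Before: map with the CPU sync skipped, then sync separately
	 * when the pool handles device syncs -- two passes over the
	 * mapping.
	 */
	dma = dma_map_page_attrs(dev, page, 0, size, dir,
				 DMA_ATTR_SKIP_CPU_SYNC);
	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);

	/* After: fold the sync into the mapping itself by clearing
	 * DMA_ATTR_SKIP_CPU_SYNC when the pool wants device syncs.
	 */
	attrs = (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) ?
		0 : DMA_ATTR_SKIP_CPU_SYNC;
	dma = dma_map_page_attrs(dev, page, 0, size, dir, attrs);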
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 net/core/page_pool.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 1a69784..8172045 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -191,8 +191,12 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 
 static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 {
+	unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC;
 	dma_addr_t dma;
 
+	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		attrs = 0;
+
 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
 	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
 	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
@@ -200,15 +204,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	 */
 	dma = dma_map_page_attrs(pool->p.dev, page, 0,
 				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(pool->p.dev, dma))
+				 pool->p.dma_dir, attrs);
+	if (unlikely(dma_mapping_error(pool->p.dev, dma)))
 		return false;
 
 	page_pool_set_dma_addr(page, dma);
 
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
-
 	return true;
 }
 