From patchwork Thu Jun 29 12:02:20 2023
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13296933
From: Yunsheng Lin
CC: Yunsheng Lin, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Matthias Brugger, AngeloGioacchino Del Regno
Subject: [PATCH v5 RFC 0/6] introduce page_pool_alloc() API
Date: Thu, 29 Jun 2023 20:02:20 +0800
Message-ID: <20230629120226.14854-1-linyunsheng@huawei.com>

In [1] & [2] & [3], there are use cases for veth and virtio_net to use the frag support in page pool to reduce memory usage, and they may request different frag sizes depending on the head/tail room needed for xdp_frame/shinfo and on the MTU/packet size.
When the requested frag size is large enough that a single page cannot be split into more than one frag, using the frag support only incurs a performance penalty because of the extra frag count handling. So this patchset provides a page pool API for the driver to allocate memory with the least memory utilization and performance penalty when it does not know the size of the memory it needs beforehand. A minimal driver-side usage sketch is included after the diffstat below.

1. https://patchwork.kernel.org/project/netdevbpf/patch/d3ae6bd3537fbce379382ac6a42f67e22f27ece2.1683896626.git.lorenzo@kernel.org/
2. https://patchwork.kernel.org/project/netdevbpf/patch/20230526054621.18371-3-liangchen.linux@gmail.com/
3. https://github.com/alobakin/linux/tree/iavf-pp-frag

v5 RFC: Add a new page_pool_cache_alloc() API, and other minor changes as discussed in v4. As there seem to be three consumers that might make use of the new API, repost it as RFC and CC the relevant authors to see if the new API fits their needs.
V4: Fix a typo and add a patch to update the document about the frag API. PAGE_POOL_DMA_USE_PP_FRAG_COUNT is not renamed yet, as we may need a different thread to discuss that.
V3: Incorporate changes from the discussion with Alexander, mostly the inline wrapper, splitting the PAGE_POOL_DMA_USE_PP_FRAG_COUNT change into a separate patch, and comment changes.
V2: Add a patch to remove the PP_FLAG_PAGE_FRAG flag and mention the virtio_net use case in the cover letter.
V1: Drop the RFC tag and the page_pool_frag patch.

Yunsheng Lin (6):
  page_pool: frag API support for 32-bit arch with 64-bit DMA
  page_pool: unify frag_count handling in page_pool_is_last_frag()
  page_pool: introduce page_pool[_cache]_alloc() API
  page_pool: remove PP_FLAG_PAGE_FRAG flag
  page_pool: update document about frag API
  net: veth: use newly added page pool API for veth with xdp

 Documentation/networking/page_pool.rst        |  34 +++-
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   |   3 +-
 .../marvell/octeontx2/nic/otx2_common.c       |   2 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  12 +-
 drivers/net/veth.c                            |  24 ++-
 drivers/net/wireless/mediatek/mt76/mac80211.c |   2 +-
 include/net/page_pool.h                       | 179 ++++++++++++++----
 net/core/page_pool.c                          |  30 ++-
 net/core/skbuff.c                             |   2 +-
 9 files changed, 212 insertions(+), 76 deletions(-)
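
To illustrate the intent of the API, here is a minimal driver-side sketch. The example_rx_desc struct and example_rx_refill() function are hypothetical, and the page_pool_alloc() signature (returning a page plus an in/out offset and size) is assumed from the description above; the actual signatures in this series may differ.

#include <linux/gfp.h>
#include <net/page_pool.h>

/* Hypothetical per-descriptor state of an imaginary driver; only here
 * to make the sketch self-contained.
 */
struct example_rx_desc {
	struct page *page;
	unsigned int page_offset;
	unsigned int buf_len;
};

static int example_rx_refill(struct page_pool *pool,
			     struct example_rx_desc *desc,
			     unsigned int truesize)
{
	unsigned int offset, size = truesize;
	struct page *page;

	/* Ask for at least 'truesize' bytes; the pool decides whether to
	 * hand out a frag of a page or a whole page, and reports back the
	 * offset and size actually reserved for this buffer.
	 */
	page = page_pool_alloc(pool, &offset, &size, GFP_ATOMIC);
	if (!page)
		return -ENOMEM;

	desc->page = page;
	desc->page_offset = offset;
	desc->buf_len = size;
	return 0;
}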