From patchwork Thu May 13 16:58:41 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12256013
From: Matteo Croce
To: netdev@vger.kernel.org, linux-mm@kvack.org
Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari, "David S. Miller",
    Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
    Mirko Lindner, Stephen Hemminger, Tariq Toukan, Jesper Dangaard Brouer,
    Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann, John Fastabend,
    Boris Pismenny, Arnd Bergmann, Andrew Morton, "Peter Zijlstra (Intel)",
    Vlastimil Babka, Yu Zhao, Will Deacon, Fenghua Yu, Roman Gushchin,
    Hugh Dickins, Peter Xu, Jason Gunthorpe, Jonathan Lemon,
    Alexander Lobakin, Cong Wang, wenxu, Kevin Hao, Jakub Sitnicki,
    Marco Elver, Willem de Bruijn, Miaohe Lin, Yunsheng Lin,
    Guillaume Nault, linux-kernel@vger.kernel.org,
    linux-rdma@vger.kernel.org, bpf@vger.kernel.org, Matthew Wilcox,
    Eric Dumazet, David Ahern, Lorenzo Bianconi, Saeed Mahameed,
    Andrew Lunn, Paolo Abeni, Sven Auhagen
Subject: [PATCH net-next v5 0/5] page_pool: recycle buffers
Date: Thu, 13 May 2021 18:58:41 +0200
Message-Id: <20210513165846.23722-1-mcroce@linux.microsoft.com>
X-Mailer: git-send-email 2.31.1

From: Matteo Croce

This is a respin of [1].

This patchset shows the plans for allowing page_pool to handle and
maintain DMA map/unmap of the pages it serves to the driver. For this
to work, a return hook in the network core is introduced.
The overall purpose is to simplify drivers by providing a page
allocation API that does recycling, so that each driver doesn't have
to reinvent its own recycling scheme. Using page_pool in a driver does
not require implementing XDP support, but it makes it trivially easy
to do so. Instead of allocating buffers specifically for SKBs, we now
allocate a generic buffer and either wrap it in an SKB (via build_skb)
or create an XDP frame. The recycling code leverages the XDP recycle
APIs.

The Marvell mvpp2 and mvneta drivers are used in this patchset to
demonstrate how to use the API, and were tested on MacchiatoBIN and
EspressoBIN boards respectively.

Please let this go in on a future -rc1, so as to allow enough time for
wider testing.

Note that this series depends on the change "mm: fix struct page
layout on 32-bit systems"[2], which is not yet in master.

v4 -> v5:
- move the signature so it doesn't alias with page->mapping
- use an invalid pointer as magic
- incorporate Matthew Wilcox's changes for pfmemalloc pages
- move the __skb_frag_unref() changes to a preliminary patch
- refactor some cpp directives
- only attempt recycling if skb->head_frag
- clear skb->pp_recycle in pskb_expand_head()

v3 -> v4:
- store a pointer to page_pool instead of xdp_mem_info
- drop a patch which reduces xdp_mem_info size
- do the recycling in the page_pool code instead of xdp_return
- remove some unused header includes
- remove some useless forward declarations

v2 -> v3:
- added missing SOBs
- CCed the MM people

v1 -> v2:
- fix a commit message
- avoid setting pp_recycle multiple times on mvneta
- squash two patches to avoid breaking bisect

[1] https://lore.kernel.org/netdev/154413868810.21735.572808840657728172.stgit@firesoul/
[2] https://lore.kernel.org/linux-mm/20210510153211.1504886-1-willy@infradead.org/

Ilias Apalodimas (1):
  page_pool: Allow drivers to hint on SKB recycling

Matteo Croce (4):
  mm: add a signature in struct page
  skbuff: add a parameter to __skb_frag_unref
  mvpp2: recycle buffers
  mvneta: recycle buffers

 drivers/net/ethernet/marvell/mvneta.c       | 11 +++---
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c | 17 +++++----
 drivers/net/ethernet/marvell/sky2.c         |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c  |  2 +-
 include/linux/mm.h                          | 12 ++++---
 include/linux/mm_types.h                    | 12 +++++++
 include/linux/skbuff.h                      | 34 ++++++++++++++++---
 include/net/page_pool.h                     | 11 ++++++
 net/core/page_pool.c                        | 27 +++++++++++++++
 net/core/skbuff.c                           | 25 +++++++++++---
 net/tls/tls_device.c                        |  2 +-
 11 files changed, 126 insertions(+), 29 deletions(-)
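For readers skimming the cover letter: the pattern the mvpp2/mvneta
patches demonstrate can be sketched as below. This is an illustrative
kernel-context fragment, not a buildable driver; the mydrv_* name is
hypothetical, and the three-argument skb_mark_for_recycle() form shown
here is the one this series proposes (the API may change in review).

```c
#include <linux/skbuff.h>
#include <net/page_pool.h>

/* Hypothetical driver RX path: wrap a generic page_pool buffer in an
 * SKB (via build_skb) instead of allocating an SKB-specific buffer.
 */
static struct sk_buff *mydrv_build_rx_skb(struct page_pool *pp,
					  struct page *page, int len)
{
	struct sk_buff *skb = build_skb(page_address(page), PAGE_SIZE);

	if (!skb)
		return NULL;

	skb_put(skb, len);

	/* Hint from this series: on free, return the head page to the
	 * page_pool (recycling it) rather than to the page allocator.
	 */
	skb_mark_for_recycle(skb, page, pp);

	return skb;
}
```

The same buffer could instead back an XDP frame, which is why the
recycling code leverages the XDP recycle APIs rather than an
SKB-specific path.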