From patchwork Tue Oct 18 19:15:58 2022
X-Patchwork-Submitter: Jonathan Lemon
X-Patchwork-Id: 13010976
From: Jonathan Lemon <jonathan.lemon@gmail.com>
Subject: [RFC PATCH v2 09/13] skbuff: Introduce SKBFL_FIXED_FRAG and skb_fixed()
Date: Tue, 18 Oct 2022 12:15:58 -0700
Message-ID: <20221018191602.2112515-10-jonathan.lemon@gmail.com>
In-Reply-To: <20221018191602.2112515-1-jonathan.lemon@gmail.com>
References: <20221018191602.2112515-1-jonathan.lemon@gmail.com>
X-Mailer: git-send-email 2.30.2
X-Mailing-List: io-uring@vger.kernel.org

When an skb marked as zerocopy travels up the network stack during RX, it
hits skb_orphan_frags_rx(). That check is designed to catch TX zerocopy
data being redirected back up the stack, not new zerocopy fragments coming
up from the driver. Currently, because the skb is marked as zerocopy,
skb_copy_ubufs() is called, defeating the point of zerocopy RX.

Have the driver mark the fragments as fixed, so they are not copied.
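For illustration, a driver handing pinned device buffers up the stack would
set the new flag on the skb it builds. A minimal sketch, assuming only what
this patch adds; the helper name and call site below are hypothetical and
not part of this series:

	/* Hypothetical RX completion path: attach a pinned page as a frag
	 * and mark the skb so skb_orphan_frags_rx() leaves the frags alone.
	 * Assumes <linux/skbuff.h> context; only SKBFL_FIXED_FRAG comes from
	 * this patch.
	 */
	static void drv_rx_add_fixed_frag(struct sk_buff *skb, struct page *page,
					  unsigned int off, unsigned int len)
	{
		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, off, len,
				PAGE_SIZE);
		skb_shinfo(skb)->flags |= SKBFL_FIXED_FRAG;
	}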
Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
---
 include/linux/skbuff.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 9fcf534f2d92..e11e55487c64 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -516,6 +516,9 @@ enum {
 	 * use frags only up until ubuf_info is released
 	 */
 	SKBFL_MANAGED_FRAG_REFS = BIT(4),
+
+	/* don't move or copy the fragment */
+	SKBFL_FIXED_FRAG = BIT(5),
 };
 
 #define SKBFL_ZEROCOPY_FRAG	(SKBFL_ZEROCOPY_ENABLE | SKBFL_SHARED_FRAG)
@@ -1651,6 +1654,11 @@ static inline bool skb_zcopy_managed(const struct sk_buff *skb)
 	return skb_shinfo(skb)->flags & SKBFL_MANAGED_FRAG_REFS;
 }
 
+static inline bool skb_fixed(const struct sk_buff *skb)
+{
+	return skb_shinfo(skb)->flags & SKBFL_FIXED_FRAG;
+}
+
 static inline bool skb_pure_zcopy_same(const struct sk_buff *skb1,
 				       const struct sk_buff *skb2)
 {
@@ -3087,7 +3095,7 @@ static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
 /* Frags must be orphaned, even if refcounted, if skb might loop to rx path */
 static inline int skb_orphan_frags_rx(struct sk_buff *skb, gfp_t gfp_mask)
 {
-	if (likely(!skb_zcopy(skb)))
+	if (likely(!skb_zcopy(skb) || skb_fixed(skb)))
 		return 0;
 	return skb_copy_ubufs(skb, gfp_mask);
 }