From patchwork Thu Jul 7 11:49:32 2022
X-Patchwork-Submitter: Pavel Begunkov
X-Patchwork-Id: 12909366
X-Patchwork-Delegate: kuba@kernel.org
From: Pavel Begunkov
To: io-uring@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Miller" , Jakub Kicinski , Jonathan Lemon , Willem de Bruijn , Jens Axboe , David Ahern , kernel-team@fb.com, Pavel Begunkov Subject: [PATCH net-next v4 01/27] ipv4: avoid partial copy for zc Date: Thu, 7 Jul 2022 12:49:32 +0100 Message-Id: <0eb1cb5746e9ac938a7ba7848b33ccf680d30030.1657194434.git.asml.silence@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Even when zerocopy transmission is requested and possible, __ip_append_data() will still copy a small chunk of data just because it allocated some extra linear space (e.g. 148 bytes). It wastes CPU cycles on copy and iter manipulations and also misalignes potentially aligned data. Avoid such coies. And as a bonus we can allocate smaller skb. Signed-off-by: Pavel Begunkov --- net/ipv4/ip_output.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 00b4bf26fd93..581d1e233260 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -969,7 +969,6 @@ static int __ip_append_data(struct sock *sk, struct inet_sock *inet = inet_sk(sk); struct ubuf_info *uarg = NULL; struct sk_buff *skb; - struct ip_options *opt = cork->opt; int hh_len; int exthdrlen; @@ -977,6 +976,7 @@ static int __ip_append_data(struct sock *sk, int copy; int err; int offset = 0; + bool zc = false; unsigned int maxfraglen, fragheaderlen, maxnonfragsize; int csummode = CHECKSUM_NONE; struct rtable *rt = (struct rtable *)cork->dst; @@ -1025,6 +1025,7 @@ static int __ip_append_data(struct sock *sk, if (rt->dst.dev->features & NETIF_F_SG && csummode == CHECKSUM_PARTIAL) { paged = true; + zc = true; } else { uarg->zerocopy = 0; skb_zcopy_set(skb, uarg, &extra_uref); @@ -1091,9 +1092,12 @@ static int __ip_append_data(struct sock *sk, (fraglen + alloc_extra < SKB_MAX_ALLOC || !(rt->dst.dev->features & NETIF_F_SG))) alloclen = fraglen; - else { + else if (!zc) { alloclen = min_t(int, fraglen, MAX_HEADER); pagedlen = fraglen - alloclen; + } else { + alloclen = fragheaderlen + transhdrlen; + pagedlen = datalen - transhdrlen; } alloclen += alloc_extra;