From patchwork Wed Nov 9 18:02:47 2022
X-Patchwork-Submitter: Daniele Palmas
X-Patchwork-Id: 13037835
X-Patchwork-Delegate: kuba@kernel.org
From: Daniele Palmas
To: David Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet,
	Subash Abhinov Kasiviswanathan, Sean Tranchetti, Jonathan Corbet
Cc: Bjørn Mork, Greg Kroah-Hartman, netdev@vger.kernel.org, Daniele Palmas
Subject: [PATCH net-next 1/3] ethtool: add tx aggregation parameters
Date: Wed, 9 Nov 2022 19:02:47 +0100
Message-Id: <20221109180249.4721-2-dnlplm@gmail.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20221109180249.4721-1-dnlplm@gmail.com>
References: <20221109180249.4721-1-dnlplm@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Add the following ethtool tx aggregation parameters:

ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE
	Maximum size of an aggregated block of frames in tx.

ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES
	Maximum number of frames that can be aggregated into a block.

ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME
	Time in usecs after the first packet arrival in an aggregated
	block for the block to be sent.

Signed-off-by: Daniele Palmas
---
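Note (not part of the patch): the sketch below shows how a driver could
consume the new parameters. The struct fields and the
ETHTOOL_COALESCE_TX_AGGR mask come from this patch and the existing
ethtool_ops API; the foo_* names and the foo_priv layout are made up
for illustration (patch 3/3 does the real wiring for rmnet).

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/time64.h>

/* Hypothetical per-device state, for illustration only */
struct foo_priv {
	u32 tx_aggr_max_size;
	u32 tx_aggr_max_frames;
	u64 tx_aggr_time_nsec;
};

static int foo_set_coalesce(struct net_device *dev,
			    struct ethtool_coalesce *coal,
			    struct kernel_ethtool_coalesce *kernel_coal,
			    struct netlink_ext_ack *extack)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* The three new fields are filled by ethnl_set_coalesce() from the
	 * ETHTOOL_A_COALESCE_TX_* aggregation attributes added by this patch.
	 */
	priv->tx_aggr_max_size = kernel_coal->tx_max_aggr_size;
	priv->tx_aggr_max_frames = kernel_coal->tx_max_aggr_frames;
	priv->tx_aggr_time_nsec = (u64)kernel_coal->tx_usecs_aggr_time * NSEC_PER_USEC;

	return 0;
}

static const struct ethtool_ops foo_ethtool_ops = {
	/* Only the tx aggregation parameters are advertised; the ethtool
	 * core rejects requests carrying any other coalesce attribute.
	 */
	.supported_coalesce_params = ETHTOOL_COALESCE_TX_AGGR,
	.set_coalesce = foo_set_coalesce,
};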
 Documentation/networking/ethtool-netlink.rst |  6 ++++++
 include/linux/ethtool.h                      | 12 ++++++++++-
 include/uapi/linux/ethtool_netlink.h         |  3 +++
 net/ethtool/coalesce.c                       | 22 ++++++++++++++++++--
 4 files changed, 40 insertions(+), 3 deletions(-)

diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst
index d578b8bcd8a4..a6f115867648 100644
--- a/Documentation/networking/ethtool-netlink.rst
+++ b/Documentation/networking/ethtool-netlink.rst
@@ -1001,6 +1001,9 @@ Kernel response contents:
   ``ETHTOOL_A_COALESCE_RATE_SAMPLE_INTERVAL``  u32     rate sampling interval
   ``ETHTOOL_A_COALESCE_USE_CQE_TX``            bool    timer reset mode, Tx
   ``ETHTOOL_A_COALESCE_USE_CQE_RX``            bool    timer reset mode, Rx
+  ``ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE``      u32     max aggr packets size, Tx
+  ``ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES``    u32     max aggr packets, Tx
+  ``ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME``    u32     time (us), aggr pkts, Tx
   ===========================================  ======  =======================

 Attributes are only included in reply if their value is not zero or the
@@ -1052,6 +1055,9 @@ Request contents:
   ``ETHTOOL_A_COALESCE_RATE_SAMPLE_INTERVAL``  u32     rate sampling interval
   ``ETHTOOL_A_COALESCE_USE_CQE_TX``            bool    timer reset mode, Tx
   ``ETHTOOL_A_COALESCE_USE_CQE_RX``            bool    timer reset mode, Rx
+  ``ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE``      u32     max aggr packets size, Tx
+  ``ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES``    u32     max aggr packets, Tx
+  ``ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME``    u32     time (us), aggr pkts, Tx
   ===========================================  ======  =======================

 Request is rejected if it attributes declared as unsupported by driver (i.e.

diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
index 99dc7bfbcd3c..3726db470247 100644
--- a/include/linux/ethtool.h
+++ b/include/linux/ethtool.h
@@ -203,6 +203,9 @@ __ethtool_get_link_ksettings(struct net_device *dev,
 struct kernel_ethtool_coalesce {
 	u8 use_cqe_mode_tx;
 	u8 use_cqe_mode_rx;
+	u32 tx_max_aggr_size;
+	u32 tx_max_aggr_frames;
+	u32 tx_usecs_aggr_time;
 };

 /**
@@ -246,7 +249,10 @@ bool ethtool_convert_link_mode_to_legacy_u32(u32 *legacy_u32,
 #define ETHTOOL_COALESCE_RATE_SAMPLE_INTERVAL	BIT(21)
 #define ETHTOOL_COALESCE_USE_CQE_RX		BIT(22)
 #define ETHTOOL_COALESCE_USE_CQE_TX		BIT(23)
-#define ETHTOOL_COALESCE_ALL_PARAMS		GENMASK(23, 0)
+#define ETHTOOL_COALESCE_TX_MAX_AGGR_SIZE	BIT(24)
+#define ETHTOOL_COALESCE_TX_MAX_AGGR_FRAMES	BIT(25)
+#define ETHTOOL_COALESCE_TX_USECS_AGGR_TIME	BIT(26)
+#define ETHTOOL_COALESCE_ALL_PARAMS		GENMASK(26, 0)

 #define ETHTOOL_COALESCE_USECS						\
	(ETHTOOL_COALESCE_RX_USECS | ETHTOOL_COALESCE_TX_USECS)
@@ -274,6 +280,10 @@ bool ethtool_convert_link_mode_to_legacy_u32(u32 *legacy_u32,
	 ETHTOOL_COALESCE_RATE_SAMPLE_INTERVAL)
 #define ETHTOOL_COALESCE_USE_CQE					\
	(ETHTOOL_COALESCE_USE_CQE_RX | ETHTOOL_COALESCE_USE_CQE_TX)
+#define ETHTOOL_COALESCE_TX_AGGR		\
+	(ETHTOOL_COALESCE_TX_MAX_AGGR_SIZE |	\
+	 ETHTOOL_COALESCE_TX_MAX_AGGR_FRAMES |	\
+	 ETHTOOL_COALESCE_TX_USECS_AGGR_TIME)

 #define ETHTOOL_STAT_NOT_SET	(~0ULL)

diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h
index bb57084ac524..08872c8ea0d6 100644
--- a/include/uapi/linux/ethtool_netlink.h
+++ b/include/uapi/linux/ethtool_netlink.h
@@ -397,6 +397,9 @@ enum {
	ETHTOOL_A_COALESCE_RATE_SAMPLE_INTERVAL,	/* u32 */
	ETHTOOL_A_COALESCE_USE_CQE_MODE_TX,		/* u8 */
	ETHTOOL_A_COALESCE_USE_CQE_MODE_RX,		/* u8 */
+	ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE,		/* u32 */
+	ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES,		/* u32 */
+	ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME,		/* u32 */

	/* add new constants above here */
	__ETHTOOL_A_COALESCE_CNT,

diff --git a/net/ethtool/coalesce.c b/net/ethtool/coalesce.c
index 487bdf345541..014a7a4f73f2 100644
--- a/net/ethtool/coalesce.c
+++ b/net/ethtool/coalesce.c
@@ -105,7 +105,10 @@ static int coalesce_reply_size(const struct ethnl_req_info *req_base,
	       nla_total_size(sizeof(u32)) +	/* _TX_MAX_FRAMES_HIGH */
	       nla_total_size(sizeof(u32)) +	/* _RATE_SAMPLE_INTERVAL */
	       nla_total_size(sizeof(u8)) +	/* _USE_CQE_MODE_TX */
-	       nla_total_size(sizeof(u8));	/* _USE_CQE_MODE_RX */
+	       nla_total_size(sizeof(u8)) +	/* _USE_CQE_MODE_RX */
+	       nla_total_size(sizeof(u32)) +	/* _TX_MAX_AGGR_SIZE */
+	       nla_total_size(sizeof(u32)) +	/* _TX_MAX_AGGR_FRAMES */
+	       nla_total_size(sizeof(u32));	/* _TX_USECS_AGGR_TIME */
 }

 static bool coalesce_put_u32(struct sk_buff *skb, u16 attr_type, u32 val,
@@ -180,7 +183,13 @@ static int coalesce_fill_reply(struct sk_buff *skb,
	    coalesce_put_bool(skb, ETHTOOL_A_COALESCE_USE_CQE_MODE_TX,
			      kcoal->use_cqe_mode_tx, supported) ||
	    coalesce_put_bool(skb, ETHTOOL_A_COALESCE_USE_CQE_MODE_RX,
-			      kcoal->use_cqe_mode_rx, supported))
+			      kcoal->use_cqe_mode_rx, supported) ||
+	    coalesce_put_u32(skb, ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE,
+			     kcoal->tx_max_aggr_size, supported) ||
+	    coalesce_put_u32(skb, ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES,
+			     kcoal->tx_max_aggr_frames, supported) ||
+	    coalesce_put_u32(skb, ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME,
+			     kcoal->tx_usecs_aggr_time, supported))
		return -EMSGSIZE;

	return 0;
@@ -227,6 +236,9 @@ const struct nla_policy ethnl_coalesce_set_policy[] = {
	[ETHTOOL_A_COALESCE_RATE_SAMPLE_INTERVAL] = { .type = NLA_U32 },
	[ETHTOOL_A_COALESCE_USE_CQE_MODE_TX] = NLA_POLICY_MAX(NLA_U8, 1),
	[ETHTOOL_A_COALESCE_USE_CQE_MODE_RX] = NLA_POLICY_MAX(NLA_U8, 1),
+	[ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE] = { .type = NLA_U32 },
+	[ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES] = { .type = NLA_U32 },
+	[ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME] = { .type = NLA_U32 },
 };

 int ethnl_set_coalesce(struct sk_buff *skb, struct genl_info *info)
@@ -321,6 +333,12 @@ int ethnl_set_coalesce(struct sk_buff *skb, struct genl_info *info)
			 tb[ETHTOOL_A_COALESCE_USE_CQE_MODE_TX], &mod);
	ethnl_update_u8(&kernel_coalesce.use_cqe_mode_rx,
			tb[ETHTOOL_A_COALESCE_USE_CQE_MODE_RX], &mod);
+	ethnl_update_u32(&kernel_coalesce.tx_max_aggr_size,
+			 tb[ETHTOOL_A_COALESCE_TX_MAX_AGGR_SIZE], &mod);
+	ethnl_update_u32(&kernel_coalesce.tx_max_aggr_frames,
+			 tb[ETHTOOL_A_COALESCE_TX_MAX_AGGR_FRAMES], &mod);
+	ethnl_update_u32(&kernel_coalesce.tx_usecs_aggr_time,
+			 tb[ETHTOOL_A_COALESCE_TX_USECS_AGGR_TIME], &mod);

	ret = 0;
	if (!mod)
		goto out_ops;
From patchwork Wed Nov 9 18:02:48 2022
X-Patchwork-Submitter: Daniele Palmas
X-Patchwork-Id: 13037836
X-Patchwork-Delegate: kuba@kernel.org
From: Daniele Palmas
To: David Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet,
	Subash Abhinov Kasiviswanathan, Sean Tranchetti, Jonathan Corbet
Cc: Bjørn Mork, Greg Kroah-Hartman, netdev@vger.kernel.org, Daniele Palmas
Subject: [PATCH net-next 2/3] net: qualcomm: rmnet: add tx packets aggregation
Date: Wed, 9 Nov 2022 19:02:48 +0100
Message-Id: <20221109180249.4721-3-dnlplm@gmail.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20221109180249.4721-1-dnlplm@gmail.com>
References: <20221109180249.4721-1-dnlplm@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Bidirectional TCP throughput tests through iperf with low-cat Thread-x
based modems showed performance issues both in tx and rx.

The Windows driver does not show this issue: inspecting USB packets
revealed that the only notable difference is that the driver enables tx
packets aggregation.

Tx packets aggregation is disabled by default and requires the
RMNET_FLAGS_EGRESS_AGGREGATION flag to be set (e.g. through the ip
command).

The maximum number of aggregated packets and the maximum aggregated
size are by default set to reasonably low values in order to support
the majority of modems.

This implementation is based on patches available in Code Aurora
repositories (msm kernel) whose main authors are

Subash Abhinov Kasiviswanathan
Sean Tranchetti

Signed-off-by: Daniele Palmas
---
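Note (not part of the patch): RMNET_FLAGS_EGRESS_AGGREGATION is toggled
through the IFLA_RMNET_FLAGS attribute of the rmnet link (struct
ifla_rmnet_flags from include/uapi/linux/if_link.h); rmnet applies
flags & mask, so the bit has to be set in both fields. A minimal sketch
of the value a netlink-based tool (e.g. what the ip command builds)
would carry inside IFLA_INFO_DATA of the RTM_NEWLINK request:

#include <linux/if_link.h>

/* Enable tx aggregation on link creation/change; illustrative only */
static const struct ifla_rmnet_flags egress_aggr_on = {
	.flags = RMNET_FLAGS_EGRESS_AGGREGATION,
	.mask  = RMNET_FLAGS_EGRESS_AGGREGATION,
};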
 .../ethernet/qualcomm/rmnet/rmnet_config.c    |   5 +
 .../ethernet/qualcomm/rmnet/rmnet_config.h    |  19 ++
 .../ethernet/qualcomm/rmnet/rmnet_handlers.c  |  25 ++-
 .../net/ethernet/qualcomm/rmnet/rmnet_map.h   |   7 +
 .../ethernet/qualcomm/rmnet/rmnet_map_data.c  | 196 ++++++++++++++++++
 include/uapi/linux/if_link.h                  |   1 +
 6 files changed, 251 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
index 27b1663c476e..39d24e07f306 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
@@ -12,6 +12,7 @@
 #include "rmnet_handlers.h"
 #include "rmnet_vnd.h"
 #include "rmnet_private.h"
+#include "rmnet_map.h"

 /* Local Definitions and Declarations */

@@ -39,6 +40,8 @@ static int rmnet_unregister_real_device(struct net_device *real_dev)
	if (port->nr_rmnet_devs)
		return -EINVAL;

+	rmnet_map_tx_aggregate_exit(port);
+
	netdev_rx_handler_unregister(real_dev);

	kfree(port);
@@ -79,6 +82,8 @@ static int rmnet_register_real_device(struct net_device *real_dev,
	for (entry = 0; entry < RMNET_MAX_LOGICAL_EP; entry++)
		INIT_HLIST_HEAD(&port->muxed_ep[entry]);

+	rmnet_map_tx_aggregate_init(port);
+
	netdev_dbg(real_dev, "registered with rmnet\n");
	return 0;
 }

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
index 3d3cba56c516..d341df78e411 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.h
@@ -6,6 +6,7 @@
  */

 #include
+#include
 #include

 #ifndef _RMNET_CONFIG_H_
@@ -19,6 +20,12 @@ struct rmnet_endpoint {
	struct hlist_node hlnode;
 };

+struct rmnet_egress_agg_params {
+	u16 agg_size;
+	u16 agg_count;
+	u64 agg_time_nsec;
+};
+
 /* One instance of this structure is instantiated for each real_dev associated
  * with rmnet.
  */
@@ -30,6 +37,18 @@ struct rmnet_port {
	struct hlist_head muxed_ep[RMNET_MAX_LOGICAL_EP];
	struct net_device *bridge_ep;
	struct net_device *rmnet_dev;
+
+	/* Egress aggregation information */
+	struct rmnet_egress_agg_params egress_agg_params;
+	/* Protect aggregation related elements */
+	spinlock_t agg_lock;
+	struct sk_buff *agg_skb;
+	int agg_state;
+	u8 agg_count;
+	struct timespec64 agg_time;
+	struct timespec64 agg_last;
+	struct hrtimer hrtimer;
+	struct work_struct agg_wq;
 };

 extern struct rtnl_link_ops rmnet_link_ops;

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index a313242a762e..82e2669e3590 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -136,10 +136,15 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,
 {
	int required_headroom, additional_header_len, csum_type = 0;
	struct rmnet_map_header *map_header;
+	bool is_icmp = false;

	additional_header_len = 0;
	required_headroom = sizeof(struct rmnet_map_header);

+	if (port->data_format & RMNET_FLAGS_EGRESS_AGGREGATION &&
+	    rmnet_map_tx_agg_skip(skb))
+		is_icmp = true;
+
	if (port->data_format & RMNET_FLAGS_EGRESS_MAP_CKSUMV4) {
		additional_header_len = sizeof(struct rmnet_map_ul_csum_header);
		csum_type = RMNET_FLAGS_EGRESS_MAP_CKSUMV4;
@@ -164,8 +169,18 @@ static int rmnet_map_egress_handler(struct sk_buff *skb,

	map_header->mux_id = mux_id;

-	skb->protocol = htons(ETH_P_MAP);
+	if (port->data_format & RMNET_FLAGS_EGRESS_AGGREGATION && !is_icmp) {
+		if (skb_is_nonlinear(skb)) {
+			if (unlikely(__skb_linearize(skb)))
+				goto done;
+		}
+
+		rmnet_map_tx_aggregate(skb, port, orig_dev);
+		return -EINPROGRESS;
+	}

+done:
+	skb->protocol = htons(ETH_P_MAP);
	return 0;
 }

@@ -235,6 +250,7 @@ void rmnet_egress_handler(struct sk_buff *skb)
	struct rmnet_port *port;
	struct rmnet_priv *priv;
	u8 mux_id;
+	int err;

	sk_pacing_shift_update(skb->sk, 8);

@@ -247,8 +263,13 @@ void rmnet_egress_handler(struct sk_buff *skb)
	if (!port)
		goto drop;

-	if (rmnet_map_egress_handler(skb, port, mux_id, orig_dev))
+	err = rmnet_map_egress_handler(skb, port, mux_id, orig_dev);
+	if (err == -ENOMEM) {
		goto drop;
+	} else if (err == -EINPROGRESS) {
+		rmnet_vnd_tx_fixup(skb, orig_dev);
+		return;
+	}

	rmnet_vnd_tx_fixup(skb, orig_dev);

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
index 2b033060fc20..6aefc4e1bf47 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map.h
@@ -53,5 +53,12 @@ void rmnet_map_checksum_uplink_packet(struct sk_buff *skb,
				      struct net_device *orig_dev,
				      int csum_type);
 int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, u16 len);
+bool rmnet_map_tx_agg_skip(struct sk_buff *skb);
+void rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port,
+			    struct net_device *orig_dev);
+void rmnet_map_tx_aggregate_init(struct rmnet_port *port);
+void rmnet_map_tx_aggregate_exit(struct rmnet_port *port);
+void rmnet_map_update_ul_agg_config(struct rmnet_port *port, u16 size,
+				    u16 count, u32 time);

 #endif /* _RMNET_MAP_H_ */

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
index ba194698cc14..49eeed4a126b 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_map_data.c
@@ -12,6 +12,7 @@
 #include "rmnet_config.h"
 #include "rmnet_map.h"
#include "rmnet_private.h" +#include "rmnet_vnd.h" #define RMNET_MAP_DEAGGR_SPACING 64 #define RMNET_MAP_DEAGGR_HEADROOM (RMNET_MAP_DEAGGR_SPACING / 2) @@ -518,3 +519,198 @@ int rmnet_map_process_next_hdr_packet(struct sk_buff *skb, return 0; } + +long rmnet_agg_bypass_time __read_mostly = 10000L * NSEC_PER_USEC; + +bool rmnet_map_tx_agg_skip(struct sk_buff *skb) +{ + bool is_icmp = 0; + + if (skb->protocol == htons(ETH_P_IP)) { + struct iphdr *ip4h = ip_hdr(skb); + + if (ip4h->protocol == IPPROTO_ICMP) + is_icmp = true; + } else if (skb->protocol == htons(ETH_P_IPV6)) { + unsigned int icmp_offset = 0; + + if (ipv6_find_hdr(skb, &icmp_offset, IPPROTO_ICMPV6, NULL, NULL) == IPPROTO_ICMPV6) + is_icmp = true; + } + + return is_icmp; +} + +static void reset_aggr_params(struct rmnet_port *port) +{ + port->agg_skb = NULL; + port->agg_count = 0; + port->agg_state = 0; + memset(&port->agg_time, 0, sizeof(struct timespec64)); +} + +static void rmnet_map_flush_tx_packet_work(struct work_struct *work) +{ + struct sk_buff *skb = NULL; + struct rmnet_port *port; + unsigned long flags; + + port = container_of(work, struct rmnet_port, agg_wq); + + spin_lock_irqsave(&port->agg_lock, flags); + if (likely(port->agg_state == -EINPROGRESS)) { + /* Buffer may have already been shipped out */ + if (likely(port->agg_skb)) { + skb = port->agg_skb; + reset_aggr_params(port); + } + port->agg_state = 0; + } + + spin_unlock_irqrestore(&port->agg_lock, flags); + if (skb) + dev_queue_xmit(skb); +} + +enum hrtimer_restart rmnet_map_flush_tx_packet_queue(struct hrtimer *t) +{ + struct rmnet_port *port; + + port = container_of(t, struct rmnet_port, hrtimer); + + schedule_work(&port->agg_wq); + + return HRTIMER_NORESTART; +} + +void rmnet_map_tx_aggregate(struct sk_buff *skb, struct rmnet_port *port, + struct net_device *orig_dev) +{ + struct timespec64 diff, last; + int size = 0; + struct sk_buff *agg_skb; + unsigned long flags; + +new_packet: + spin_lock_irqsave(&port->agg_lock, flags); + memcpy(&last, &port->agg_last, sizeof(struct timespec64)); + ktime_get_real_ts64(&port->agg_last); + + if (!port->agg_skb) { + /* Check to see if we should agg first. If the traffic is very + * sparse, don't aggregate. 
+		 */
+		diff = timespec64_sub(port->agg_last, last);
+		size = port->egress_agg_params.agg_size - skb->len;
+
+		if (size < 0) {
+			struct rmnet_priv *priv;
+
+			/* dropped */
+			dev_kfree_skb_any(skb);
+			spin_unlock_irqrestore(&port->agg_lock, flags);
+			priv = netdev_priv(orig_dev);
+			this_cpu_inc(priv->pcpu_stats->stats.tx_drops);
+
+			return;
+		}
+
+		if (diff.tv_sec > 0 || diff.tv_nsec > rmnet_agg_bypass_time ||
+		    size == 0) {
+			spin_unlock_irqrestore(&port->agg_lock, flags);
+			skb->protocol = htons(ETH_P_MAP);
+			dev_queue_xmit(skb);
+			return;
+		}
+
+		port->agg_skb = skb_copy_expand(skb, 0, size, GFP_ATOMIC);
+		if (!port->agg_skb) {
+			reset_aggr_params(port);
+			spin_unlock_irqrestore(&port->agg_lock, flags);
+			skb->protocol = htons(ETH_P_MAP);
+			dev_queue_xmit(skb);
+			return;
+		}
+		port->agg_skb->protocol = htons(ETH_P_MAP);
+		port->agg_count = 1;
+		ktime_get_real_ts64(&port->agg_time);
+		dev_kfree_skb_any(skb);
+		goto schedule;
+	}
+	diff = timespec64_sub(port->agg_last, port->agg_time);
+	size = port->egress_agg_params.agg_size - port->agg_skb->len;
+
+	if (skb->len > size ||
+	    diff.tv_sec > 0 || diff.tv_nsec > port->egress_agg_params.agg_time_nsec) {
+		agg_skb = port->agg_skb;
+		reset_aggr_params(port);
+		spin_unlock_irqrestore(&port->agg_lock, flags);
+		hrtimer_cancel(&port->hrtimer);
+		dev_queue_xmit(agg_skb);
+		goto new_packet;
+	}
+
+	skb_put_data(port->agg_skb, skb->data, skb->len);
+	port->agg_count++;
+	dev_kfree_skb_any(skb);
+
+	if (port->agg_count == port->egress_agg_params.agg_count ||
+	    port->agg_skb->len == port->egress_agg_params.agg_size) {
+		agg_skb = port->agg_skb;
+		reset_aggr_params(port);
+		spin_unlock_irqrestore(&port->agg_lock, flags);
+		hrtimer_cancel(&port->hrtimer);
+		dev_queue_xmit(agg_skb);
+		return;
+	}
+
+schedule:
+	if (!hrtimer_active(&port->hrtimer) && port->agg_state != -EINPROGRESS) {
+		port->agg_state = -EINPROGRESS;
+		hrtimer_start(&port->hrtimer,
+			      ns_to_ktime(port->egress_agg_params.agg_time_nsec),
+			      HRTIMER_MODE_REL);
+	}
+	spin_unlock_irqrestore(&port->agg_lock, flags);
+}
+
+void rmnet_map_update_ul_agg_config(struct rmnet_port *port, u16 size,
+				    u16 count, u32 time)
+{
+	unsigned long irq_flags;
+
+	spin_lock_irqsave(&port->agg_lock, irq_flags);
+	port->egress_agg_params.agg_size = size;
+	port->egress_agg_params.agg_count = count;
+	port->egress_agg_params.agg_time_nsec = time * NSEC_PER_USEC;
+	spin_unlock_irqrestore(&port->agg_lock, irq_flags);
+}
+
+void rmnet_map_tx_aggregate_init(struct rmnet_port *port)
+{
+	hrtimer_init(&port->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	port->hrtimer.function = rmnet_map_flush_tx_packet_queue;
+	spin_lock_init(&port->agg_lock);
+	rmnet_map_update_ul_agg_config(port, 4096, 16, 800);
+	INIT_WORK(&port->agg_wq, rmnet_map_flush_tx_packet_work);
+}
+
+void rmnet_map_tx_aggregate_exit(struct rmnet_port *port)
+{
+	unsigned long flags;
+
+	hrtimer_cancel(&port->hrtimer);
+	cancel_work_sync(&port->agg_wq);
+
+	spin_lock_irqsave(&port->agg_lock, flags);
+	if (port->agg_state == -EINPROGRESS) {
+		if (port->agg_skb) {
+			kfree_skb(port->agg_skb);
+			reset_aggr_params(port);
+		}
+
+		port->agg_state = 0;
+	}
+
+	spin_unlock_irqrestore(&port->agg_lock, flags);
+}

diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index 5e7a1041df3a..09a30e2b29b1 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -1351,6 +1351,7 @@ enum {
 #define RMNET_FLAGS_EGRESS_MAP_CKSUMV4            (1U << 3)
 #define RMNET_FLAGS_INGRESS_MAP_CKSUMV5           (1U << 4)
 #define RMNET_FLAGS_EGRESS_MAP_CKSUMV5            (1U << 5)
+#define RMNET_FLAGS_EGRESS_AGGREGATION            (1U << 6)

 enum {
	IFLA_RMNET_UNSPEC,

From patchwork Wed Nov 9 18:02:49 2022
X-Patchwork-Submitter: Daniele Palmas
X-Patchwork-Id: 13037837
X-Patchwork-Delegate: kuba@kernel.org
From: Daniele Palmas
To: David Miller, Jakub Kicinski, Paolo Abeni, Eric Dumazet,
	Subash Abhinov Kasiviswanathan, Sean Tranchetti, Jonathan Corbet
Cc: Bjørn Mork, Greg Kroah-Hartman, netdev@vger.kernel.org, Daniele Palmas
Subject: [PATCH net-next 3/3] net: qualcomm: rmnet: add ethtool support for
 configuring tx aggregation
Date: Wed, 9 Nov 2022 19:02:49 +0100
Message-Id: <20221109180249.4721-4-dnlplm@gmail.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20221109180249.4721-1-dnlplm@gmail.com>
References: <20221109180249.4721-1-dnlplm@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

Add support for ETHTOOL_COALESCE_TX_AGGR for configuring the tx
aggregation settings.

Signed-off-by: Daniele Palmas
---
 .../net/ethernet/qualcomm/rmnet/rmnet_vnd.c   | 44 +++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
index 1b2119b1d48a..630cf6737f64 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_vnd.c
@@ -210,7 +210,51 @@ static void rmnet_get_ethtool_stats(struct net_device *dev,
	memcpy(data, st, ARRAY_SIZE(rmnet_gstrings_stats) * sizeof(u64));
 }

+static int rmnet_get_coalesce(struct net_device *dev,
+			      struct ethtool_coalesce *coal,
+			      struct kernel_ethtool_coalesce *kernel_coal,
+			      struct netlink_ext_ack *extack)
+{
+	struct rmnet_priv *priv = netdev_priv(dev);
+	struct rmnet_port *port;
+
+	port = rmnet_get_port_rtnl(priv->real_dev);
+
+	memset(kernel_coal, 0, sizeof(*kernel_coal));
+	kernel_coal->tx_max_aggr_size = port->egress_agg_params.agg_size;
+	kernel_coal->tx_max_aggr_frames = port->egress_agg_params.agg_count;
+	kernel_coal->tx_usecs_aggr_time = port->egress_agg_params.agg_time_nsec / NSEC_PER_USEC;
+
+	return 0;
+}
+
+static int rmnet_set_coalesce(struct net_device *dev,
+			      struct ethtool_coalesce *coal,
+			      struct kernel_ethtool_coalesce *kernel_coal,
+			      struct netlink_ext_ack *extack)
+{
+	struct rmnet_priv *priv = netdev_priv(dev);
+	struct rmnet_port *port;
+
+	port = rmnet_get_port_rtnl(priv->real_dev);
+
+	if (kernel_coal->tx_max_aggr_frames <= 1 || kernel_coal->tx_max_aggr_frames > 64)
+		return -EINVAL;
+
+	if (kernel_coal->tx_max_aggr_size > 32768)
+		return -EINVAL;
+
+	rmnet_map_update_ul_agg_config(port, kernel_coal->tx_max_aggr_size,
+				       kernel_coal->tx_max_aggr_frames,
+				       kernel_coal->tx_usecs_aggr_time);
+
+	return 0;
+}
+
 static const struct ethtool_ops rmnet_ethtool_ops = {
+	.supported_coalesce_params = ETHTOOL_COALESCE_TX_AGGR,
+	.get_coalesce = rmnet_get_coalesce,
+	.set_coalesce = rmnet_set_coalesce,
	.get_ethtool_stats = rmnet_get_ethtool_stats,
	.get_strings = rmnet_get_strings,
	.get_sset_count = rmnet_get_sset_count,
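Note (not part of the series): rmnet_set_coalesce() above rejects
out-of-range values with a bare -EINVAL. A sketch of how the same
bounds could be reported through the extack that set_coalesce already
receives; NL_SET_ERR_MSG_MOD() comes from <linux/netlink.h> and the
helper name below is made up:

#include <linux/ethtool.h>
#include <linux/netlink.h>

static int rmnet_check_tx_aggr_bounds(const struct kernel_ethtool_coalesce *kernel_coal,
				      struct netlink_ext_ack *extack)
{
	/* Same limits as rmnet_set_coalesce(): 2..64 frames, at most 32768 bytes */
	if (kernel_coal->tx_max_aggr_frames <= 1 ||
	    kernel_coal->tx_max_aggr_frames > 64) {
		NL_SET_ERR_MSG_MOD(extack, "tx aggregated frames must be in the 2..64 range");
		return -EINVAL;
	}

	if (kernel_coal->tx_max_aggr_size > 32768) {
		NL_SET_ERR_MSG_MOD(extack, "tx aggregated size must not exceed 32768 bytes");
		return -EINVAL;
	}

	return 0;
}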