From patchwork Tue Jan 19 00:40:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028475 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26548C433DB for ; Tue, 19 Jan 2021 00:41:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EA69223101 for ; Tue, 19 Jan 2021 00:41:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2388708AbhASAll (ORCPT ); Mon, 18 Jan 2021 19:41:41 -0500 Received: from mga09.intel.com ([134.134.136.24]:38527 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730132AbhASAlc (ORCPT ); Mon, 18 Jan 2021 19:41:32 -0500 IronPort-SDR: +qRgIRKrq0XMteCm8Fm+YFhRx6nr8Yh7NYCv/Nu5wvXyKNLYrIPT8Fd0O8wAltc5MLftWL37w3 drYI9s4II3dQ== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011253" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011253" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:50 -0800 IronPort-SDR: Nc+vhipkB8tAh/UT9vfKT3kz+CqmVpb7cEEl/DagI/c+zHe0I/0uXnhYNZdCtn9xfnWXJe3qRh +E/dSRw4V7Cw== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285755" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:49 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 1/8] ethtool: Add support for configuring frame preemption Date: Mon, 18 Jan 2021 16:40:21 -0800 Message-Id: <20210119004028.2809425-2-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Frame preemption (described in IEEE 802.3br-2016) defines the concept of preemptible and express queues. It allows traffic from express queues to "interrupt" traffic from preemptible queues, which are "resumed" after the express traffic has finished transmitting. Frame preemption can only be used when both the local device and the link partner support it. Only parameters for enabling/disabling frame preemption and configuring the minimum fragment size are included here. Expressing which queues are marked as preemptible is left to mqprio/taprio, as having that information there should be easier on the user. 
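For illustration only (none of the following is part of the patch; the "foo" driver, its private structure and its limits are invented, while struct ethtool_fp and the callback signatures come from this series), a driver would typically wire the new operations up along these lines:

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Hypothetical driver-private state; a real driver keeps this wherever it
 * tracks its TSN configuration.
 */
struct foo_priv {
	u8 fp_enabled;
	u32 fp_add_frag_size;
};

static int foo_get_preempt(struct net_device *dev, struct ethtool_fp *fp)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* Report the currently programmed state back to the ethtool core. */
	fp->enabled = priv->fp_enabled;
	fp->add_frag_size = priv->fp_add_frag_size;

	return 0;
}

static int foo_set_preempt(struct net_device *dev, struct ethtool_fp *fp,
			   struct netlink_ext_ack *extack)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* Hypothetical hardware limit on the additional fragment size. */
	if (fp->add_frag_size < 64 || fp->add_frag_size > 256) {
		NL_SET_ERR_MSG(extack, "unsupported add-frag-size");
		return -EINVAL;
	}

	priv->fp_enabled = fp->enabled;
	priv->fp_add_frag_size = fp->add_frag_size;

	/* ... reprogram the preemption/MAC merge hardware here ... */

	return 0;
}

static const struct ethtool_ops foo_ethtool_ops = {
	/* ... other operations ... */
	.get_preempt	= foo_get_preempt,
	.set_preempt	= foo_set_preempt,
};

For reference, the ethtool_frag_size_to_mult() helper added in net/ethtool/common.c maps an add_frag_size of 64-127 bytes to multiplier 0, 128-191 to 1, 192-255 to 2 and 256 or more to 3, matching the (1 + N) * 64 octet encoding of aMACMergeAddFragSize.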
Signed-off-by: Vinicius Costa Gomes --- Documentation/networking/ethtool-netlink.rst | 38 +++++ include/linux/ethtool.h | 39 ++++- include/uapi/linux/ethtool_netlink.h | 17 +++ net/ethtool/Makefile | 2 +- net/ethtool/common.c | 10 ++ net/ethtool/netlink.c | 19 +++ net/ethtool/netlink.h | 4 + net/ethtool/preempt.c | 146 +++++++++++++++++++ 8 files changed, 273 insertions(+), 2 deletions(-) create mode 100644 net/ethtool/preempt.c diff --git a/Documentation/networking/ethtool-netlink.rst b/Documentation/networking/ethtool-netlink.rst index 30b98245979f..fc71d11317c2 100644 --- a/Documentation/networking/ethtool-netlink.rst +++ b/Documentation/networking/ethtool-netlink.rst @@ -1279,6 +1279,44 @@ Kernel response contents: For UDP tunnel table empty ``ETHTOOL_A_TUNNEL_UDP_TABLE_TYPES`` indicates that the table contains static entries, hard-coded by the NIC. +PREEMPT_GET +=========== + +Get information about frame preemption state. + +Request contents: + + ==================================== ====== ========================== + ``ETHTOOL_A_CHANNELS_HEADER`` nested request header + ==================================== ====== ========================== + +Kernel response contents: + + ===================================== ====== ========================== + ``ETHTOOL_A_CHANNELS_HEADER`` nested reply header + ``ETHTOOL_A_PREEMPT_ENABLED`` u8 frame preemption enabled + ``ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE`` u32 Min additional frag size + ===================================== ====== ========================== + +``ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE`` configures the minimum non-final +fragment size that the receiver device supports. + +PREEMPT_SET +============ + +Sets frame preemption parameters. + +Request contents: + + ===================================== ====== ========================== + ``ETHTOOL_A_CHANNELS_HEADER`` nested reply header + ``ETHTOOL_A_PREEMPT_ENABLED`` u8 frame preemption enabled + ``ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE`` u32 Min additional frag size + ===================================== ====== ========================== + +``ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE`` configures the minimum non-final +fragment size that the receiver device supports. + Request translation =================== diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h index e3da25b51ae4..f7692a348e9d 100644 --- a/include/linux/ethtool.h +++ b/include/linux/ethtool.h @@ -263,6 +263,22 @@ struct ethtool_pause_stats { u64 rx_pause_frames; }; +/** + * struct ethtool_fp - Frame Preemption information + * + * @enabled: Enable frame preemption. + + * @add_frag_size: Minimum size for additional (non-final) fragments + * in bytes, for the value defined in the IEEE 802.3-2018 standard see + * ethtool_frag_size_to_mult(). + */ +struct ethtool_fp { + u8 enabled; + u32 add_frag_size; +}; + +struct netlink_ext_ack; + /** * struct ethtool_ops - optional netdev operations * @supported_coalesce_params: supported types of interrupt coalescing. @@ -406,6 +422,8 @@ struct ethtool_pause_stats { * @get_ethtool_phy_stats: Return extended statistics about the PHY device. * This is only useful if the device maintains PHY statistics and * cannot use the standard PHY library helpers. + * @get_preempt: Get the network device Frame Preemption parameters. + * @set_preempt: Set the network device Frame Preemption parameters. * * All operations are optional (i.e. the function pointer may be set * to %NULL) and callers must take this into account. 
Callers must @@ -504,6 +522,10 @@ struct ethtool_ops { struct ethtool_fecparam *); int (*set_fecparam)(struct net_device *, struct ethtool_fecparam *); + int (*get_preempt)(struct net_device *, + struct ethtool_fp *); + int (*set_preempt)(struct net_device *, struct ethtool_fp *, + struct netlink_ext_ack *); void (*get_ethtool_phy_stats)(struct net_device *, struct ethtool_stats *, u64 *); int (*get_phy_tunable)(struct net_device *, @@ -533,7 +555,6 @@ int ethtool_virtdev_set_link_ksettings(struct net_device *dev, const struct ethtool_link_ksettings *cmd, u32 *dev_speed, u8 *dev_duplex); -struct netlink_ext_ack; struct phy_device; struct phy_tdr_config; @@ -566,4 +587,20 @@ struct ethtool_phy_ops { */ void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops); +/** + * Convert from a Frame Preemption Additional Fragment size in bytes + * to a multiplier. The multiplier is defined as: + * "A 2-bit integer value used to indicate the minimum size of + * non-final fragments supported by the receiver on the given port + * associated with the local System. This value is expressed in units + * of 64 octets of additional fragment length." + * Equivalent to `30.14.1.7 aMACMergeAddFragSize` from the IEEE 802.3-2018 + * standard. + * + * @frag_size: minimum non-final fragment size in bytes + * + * Returns the multiplier. + */ +u8 ethtool_frag_size_to_mult(u32 frag_size); + #endif /* _LINUX_ETHTOOL_H */ diff --git a/include/uapi/linux/ethtool_netlink.h b/include/uapi/linux/ethtool_netlink.h index e2bf36e6964b..bd7a722dc13a 100644 --- a/include/uapi/linux/ethtool_netlink.h +++ b/include/uapi/linux/ethtool_netlink.h @@ -42,6 +42,8 @@ enum { ETHTOOL_MSG_CABLE_TEST_ACT, ETHTOOL_MSG_CABLE_TEST_TDR_ACT, ETHTOOL_MSG_TUNNEL_INFO_GET, + ETHTOOL_MSG_PREEMPT_GET, + ETHTOOL_MSG_PREEMPT_SET, /* add new constants above here */ __ETHTOOL_MSG_USER_CNT, @@ -80,6 +82,8 @@ enum { ETHTOOL_MSG_CABLE_TEST_NTF, ETHTOOL_MSG_CABLE_TEST_TDR_NTF, ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY, + ETHTOOL_MSG_PREEMPT_GET_REPLY, + ETHTOOL_MSG_PREEMPT_NTF, /* add new constants above here */ __ETHTOOL_MSG_KERNEL_CNT, @@ -628,6 +632,19 @@ enum { ETHTOOL_A_TUNNEL_INFO_MAX = (__ETHTOOL_A_TUNNEL_INFO_CNT - 1) }; +/* FRAME PREEMPTION */ + +enum { + ETHTOOL_A_PREEMPT_UNSPEC, + ETHTOOL_A_PREEMPT_HEADER, /* nest - _A_HEADER_* */ + ETHTOOL_A_PREEMPT_ENABLED, /* u8 */ + ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE, /* u32 */ + + /* add new constants above here */ + __ETHTOOL_A_PREEMPT_CNT, + ETHTOOL_A_PREEMPT_MAX = (__ETHTOOL_A_PREEMPT_CNT - 1) +}; + /* generic netlink info */ #define ETHTOOL_GENL_NAME "ethtool" #define ETHTOOL_GENL_VERSION 1 diff --git a/net/ethtool/Makefile b/net/ethtool/Makefile index 7a849ff22dad..4e584903e3ef 100644 --- a/net/ethtool/Makefile +++ b/net/ethtool/Makefile @@ -7,4 +7,4 @@ obj-$(CONFIG_ETHTOOL_NETLINK) += ethtool_nl.o ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o \ linkstate.o debug.o wol.o features.o privflags.o rings.o \ channels.o coalesce.o pause.o eee.o tsinfo.o cabletest.o \ - tunnels.o + tunnels.o preempt.o diff --git a/net/ethtool/common.c b/net/ethtool/common.c index 24036e3055a1..6b47ee0565f5 100644 --- a/net/ethtool/common.c +++ b/net/ethtool/common.c @@ -410,3 +410,13 @@ void ethtool_set_ethtool_phy_ops(const struct ethtool_phy_ops *ops) rtnl_unlock(); } EXPORT_SYMBOL_GPL(ethtool_set_ethtool_phy_ops); + +u8 ethtool_frag_size_to_mult(u32 frag_size) +{ + u8 mult = (frag_size / 64) - 1; + + mult = clamp_t(u8, mult, 0, 3); + + return mult; +} +EXPORT_SYMBOL_GPL(ethtool_frag_size_to_mult); diff 
--git a/net/ethtool/netlink.c b/net/ethtool/netlink.c index 50d3c8896f91..bc7d66e3ba38 100644 --- a/net/ethtool/netlink.c +++ b/net/ethtool/netlink.c @@ -245,6 +245,7 @@ ethnl_default_requests[__ETHTOOL_MSG_USER_CNT] = { [ETHTOOL_MSG_PAUSE_GET] = ðnl_pause_request_ops, [ETHTOOL_MSG_EEE_GET] = ðnl_eee_request_ops, [ETHTOOL_MSG_TSINFO_GET] = ðnl_tsinfo_request_ops, + [ETHTOOL_MSG_PREEMPT_GET] = ðnl_preempt_request_ops, }; static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) @@ -551,6 +552,7 @@ ethnl_default_notify_ops[ETHTOOL_MSG_KERNEL_MAX + 1] = { [ETHTOOL_MSG_COALESCE_NTF] = ðnl_coalesce_request_ops, [ETHTOOL_MSG_PAUSE_NTF] = ðnl_pause_request_ops, [ETHTOOL_MSG_EEE_NTF] = ðnl_eee_request_ops, + [ETHTOOL_MSG_PREEMPT_NTF] = ðnl_preempt_request_ops, }; /* default notification handler */ @@ -643,6 +645,7 @@ static const ethnl_notify_handler_t ethnl_notify_handlers[] = { [ETHTOOL_MSG_COALESCE_NTF] = ethnl_default_notify, [ETHTOOL_MSG_PAUSE_NTF] = ethnl_default_notify, [ETHTOOL_MSG_EEE_NTF] = ethnl_default_notify, + [ETHTOOL_MSG_PREEMPT_NTF] = ethnl_default_notify, }; void ethtool_notify(struct net_device *dev, unsigned int cmd, const void *data) @@ -912,6 +915,22 @@ static const struct genl_ops ethtool_genl_ops[] = { .policy = ethnl_tunnel_info_get_policy, .maxattr = ARRAY_SIZE(ethnl_tunnel_info_get_policy) - 1, }, + { + .cmd = ETHTOOL_MSG_PREEMPT_GET, + .doit = ethnl_default_doit, + .start = ethnl_default_start, + .dumpit = ethnl_default_dumpit, + .done = ethnl_default_done, + .policy = ethnl_preempt_get_policy, + .maxattr = ARRAY_SIZE(ethnl_preempt_get_policy) - 1, + }, + { + .cmd = ETHTOOL_MSG_PREEMPT_SET, + .flags = GENL_UNS_ADMIN_PERM, + .doit = ethnl_set_preempt, + .policy = ethnl_preempt_set_policy, + .maxattr = ARRAY_SIZE(ethnl_preempt_set_policy) - 1, + }, }; static const struct genl_multicast_group ethtool_nl_mcgrps[] = { diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h index d8efec516d86..53d2e722fad3 100644 --- a/net/ethtool/netlink.h +++ b/net/ethtool/netlink.h @@ -344,6 +344,7 @@ extern const struct ethnl_request_ops ethnl_coalesce_request_ops; extern const struct ethnl_request_ops ethnl_pause_request_ops; extern const struct ethnl_request_ops ethnl_eee_request_ops; extern const struct ethnl_request_ops ethnl_tsinfo_request_ops; +extern const struct ethnl_request_ops ethnl_preempt_request_ops; extern const struct nla_policy ethnl_header_policy[ETHTOOL_A_HEADER_FLAGS + 1]; extern const struct nla_policy ethnl_header_policy_stats[ETHTOOL_A_HEADER_FLAGS + 1]; @@ -375,6 +376,8 @@ extern const struct nla_policy ethnl_tsinfo_get_policy[ETHTOOL_A_TSINFO_HEADER + extern const struct nla_policy ethnl_cable_test_act_policy[ETHTOOL_A_CABLE_TEST_HEADER + 1]; extern const struct nla_policy ethnl_cable_test_tdr_act_policy[ETHTOOL_A_CABLE_TEST_TDR_CFG + 1]; extern const struct nla_policy ethnl_tunnel_info_get_policy[ETHTOOL_A_TUNNEL_INFO_HEADER + 1]; +extern const struct nla_policy ethnl_preempt_get_policy[ETHTOOL_A_PREEMPT_HEADER + 1]; +extern const struct nla_policy ethnl_preempt_set_policy[ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE + 1]; int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info); @@ -392,5 +395,6 @@ int ethnl_act_cable_test_tdr(struct sk_buff *skb, struct genl_info *info); int ethnl_tunnel_info_doit(struct sk_buff *skb, struct genl_info *info); int ethnl_tunnel_info_start(struct netlink_callback *cb); int ethnl_tunnel_info_dumpit(struct sk_buff *skb, struct netlink_callback 
*cb); +int ethnl_set_preempt(struct sk_buff *skb, struct genl_info *info); #endif /* _NET_ETHTOOL_NETLINK_H */ diff --git a/net/ethtool/preempt.c b/net/ethtool/preempt.c new file mode 100644 index 000000000000..4f96d3c2b1d5 --- /dev/null +++ b/net/ethtool/preempt.c @@ -0,0 +1,146 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "netlink.h" +#include "common.h" + +struct preempt_req_info { + struct ethnl_req_info base; +}; + +struct preempt_reply_data { + struct ethnl_reply_data base; + struct ethtool_fp fp; +}; + +#define PREEMPT_REPDATA(__reply_base) \ + container_of(__reply_base, struct preempt_reply_data, base) + +const struct nla_policy +ethnl_preempt_get_policy[] = { + [ETHTOOL_A_PREEMPT_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy), +}; + +static int preempt_prepare_data(const struct ethnl_req_info *req_base, + struct ethnl_reply_data *reply_base, + struct genl_info *info) +{ + struct preempt_reply_data *data = PREEMPT_REPDATA(reply_base); + struct net_device *dev = reply_base->dev; + int ret; + + if (!dev->ethtool_ops->get_preempt) + return -EOPNOTSUPP; + + ret = ethnl_ops_begin(dev); + if (ret < 0) + return ret; + + ret = dev->ethtool_ops->get_preempt(dev, &data->fp); + ethnl_ops_complete(dev); + + return ret; +} + +static int preempt_reply_size(const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + int len = 0; + + len += nla_total_size(sizeof(u8)); /* _PREEMPT_ENABLED */ + len += nla_total_size(sizeof(u32)); /* _PREEMPT_ADD_FRAG_SIZE */ + + return len; +} + +static int preempt_fill_reply(struct sk_buff *skb, + const struct ethnl_req_info *req_base, + const struct ethnl_reply_data *reply_base) +{ + const struct preempt_reply_data *data = PREEMPT_REPDATA(reply_base); + const struct ethtool_fp *preempt = &data->fp; + + if (nla_put_u8(skb, ETHTOOL_A_PREEMPT_ENABLED, preempt->enabled)) + return -EMSGSIZE; + + if (nla_put_u32(skb, ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE, + preempt->add_frag_size)) + return -EMSGSIZE; + + return 0; +} + +const struct ethnl_request_ops ethnl_preempt_request_ops = { + .request_cmd = ETHTOOL_MSG_PREEMPT_GET, + .reply_cmd = ETHTOOL_MSG_PREEMPT_GET_REPLY, + .hdr_attr = ETHTOOL_A_PREEMPT_HEADER, + .req_info_size = sizeof(struct preempt_req_info), + .reply_data_size = sizeof(struct preempt_reply_data), + + .prepare_data = preempt_prepare_data, + .reply_size = preempt_reply_size, + .fill_reply = preempt_fill_reply, +}; + +const struct nla_policy +ethnl_preempt_set_policy[ETHTOOL_A_PREEMPT_MAX + 1] = { + [ETHTOOL_A_PREEMPT_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy), + [ETHTOOL_A_PREEMPT_ENABLED] = NLA_POLICY_RANGE(NLA_U8, 0, 1), + [ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE] = { .type = NLA_U32 }, +}; + +int ethnl_set_preempt(struct sk_buff *skb, struct genl_info *info) +{ + struct ethnl_req_info req_info = {}; + struct nlattr **tb = info->attrs; + struct ethtool_fp preempt = {}; + struct net_device *dev; + bool mod = false; + int ret; + + ret = ethnl_parse_header_dev_get(&req_info, + tb[ETHTOOL_A_PREEMPT_HEADER], + genl_info_net(info), info->extack, + true); + if (ret < 0) + return ret; + dev = req_info.dev; + ret = -EOPNOTSUPP; + if (!dev->ethtool_ops->get_preempt || + !dev->ethtool_ops->set_preempt) + goto out_dev; + + rtnl_lock(); + ret = ethnl_ops_begin(dev); + if (ret < 0) + goto out_rtnl; + + ret = dev->ethtool_ops->get_preempt(dev, &preempt); + if (ret < 0) { + GENL_SET_ERR_MSG(info, "failed to retrieve frame preemption settings"); + goto out_ops; + } + + ethnl_update_u8(&preempt.enabled, + tb[ETHTOOL_A_PREEMPT_ENABLED], 
&mod); + ethnl_update_u32(&preempt.add_frag_size, + tb[ETHTOOL_A_PREEMPT_ADD_FRAG_SIZE], &mod); + ret = 0; + if (!mod) + goto out_ops; + + ret = dev->ethtool_ops->set_preempt(dev, &preempt, info->extack); + if (ret < 0) { + GENL_SET_ERR_MSG(info, "frame preemption settings update failed"); + goto out_ops; + } + + ethtool_notify(dev, ETHTOOL_MSG_PREEMPT_NTF, NULL); + +out_ops: + ethnl_ops_complete(dev); +out_rtnl: + rtnl_unlock(); +out_dev: + dev_put(dev); + return ret; +} From patchwork Tue Jan 19 00:40:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028473 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0D675C433E6 for ; Tue, 19 Jan 2021 00:41:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D53CE23103 for ; Tue, 19 Jan 2021 00:41:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732515AbhASAlg (ORCPT ); Mon, 18 Jan 2021 19:41:36 -0500 Received: from mga09.intel.com ([134.134.136.24]:38531 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731808AbhASAld (ORCPT ); Mon, 18 Jan 2021 19:41:33 -0500 IronPort-SDR: QAUU+uIPj9acxIMXI8gETbpY6fzvxeTkOkqXTodPnW/flTvmronQNfc4Tg0WVP3nwWUGO6Qyv+ cFSzPQ7/zhjw== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011256" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011256" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:51 -0800 IronPort-SDR: nX5maNVPitem1NJf+z0tUH/5nnjTRizNj4IovAQAPPPxGCDx5sX/5brlOj+HZiJHfr4N5MLyJa 461GOkyP59XQ== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285761" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:50 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 2/8] taprio: Add support for frame preemption offload Date: Mon, 18 Jan 2021 16:40:22 -0800 Message-Id: <20210119004028.2809425-3-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Adds a way to configure which traffic classes are marked as preemptible and which are marked as express. 
Even if frame preemption is not a "real" offload, because it can't be executed purely in software, having this information near where the mapping of traffic classes to queues is specified, makes it, hopefully, easier to use. taprio will receive the information of which traffic classes are marked as express/preemptible, and when offloading frame preemption to the driver will convert the information, so the driver receives which queues are marked as express/preemptible. Signed-off-by: Vinicius Costa Gomes --- include/linux/netdevice.h | 1 + include/net/pkt_sched.h | 4 ++++ include/uapi/linux/pkt_sched.h | 1 + net/sched/sch_taprio.c | 41 ++++++++++++++++++++++++++++++---- 4 files changed, 43 insertions(+), 4 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 5b949076ed23..7388c20c07a8 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -858,6 +858,7 @@ enum tc_setup_type { TC_SETUP_QDISC_ETS, TC_SETUP_QDISC_TBF, TC_SETUP_QDISC_FIFO, + TC_SETUP_PREEMPT, }; /* These structures hold the attributes of bpf state that are being passed diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h index 15b1b30f454e..be5ff1535332 100644 --- a/include/net/pkt_sched.h +++ b/include/net/pkt_sched.h @@ -183,6 +183,10 @@ struct tc_taprio_qopt_offload { struct tc_taprio_sched_entry entries[]; }; +struct tc_preempt_qopt_offload { + u32 preemptible_queues; +}; + /* Reference counting */ struct tc_taprio_qopt_offload *taprio_offload_get(struct tc_taprio_qopt_offload *offload); diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index 9e7c2c607845..9ca9d2e55557 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -1240,6 +1240,7 @@ enum { TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION, /* s64 */ TCA_TAPRIO_ATTR_FLAGS, /* u32 */ TCA_TAPRIO_ATTR_TXTIME_DELAY, /* u32 */ + TCA_TAPRIO_ATTR_PREEMPT_TCS, /* u32 */ __TCA_TAPRIO_ATTR_MAX, }; diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c index 6f775275826a..e4b511a0ee38 100644 --- a/net/sched/sch_taprio.c +++ b/net/sched/sch_taprio.c @@ -64,6 +64,7 @@ struct taprio_sched { struct Qdisc **qdiscs; struct Qdisc *root; u32 flags; + u32 preemptible_tcs; enum tk_offsets tk_offset; int clockid; atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+ @@ -776,6 +777,7 @@ static const struct nla_policy taprio_policy[TCA_TAPRIO_ATTR_MAX + 1] = { [TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION] = { .type = NLA_S64 }, [TCA_TAPRIO_ATTR_FLAGS] = { .type = NLA_U32 }, [TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 }, + [TCA_TAPRIO_ATTR_PREEMPT_TCS] = { .type = NLA_U32 }, }; static int fill_sched_entry(struct taprio_sched *q, struct nlattr **tb, @@ -1268,6 +1270,7 @@ static int taprio_disable_offload(struct net_device *dev, struct netlink_ext_ack *extack) { const struct net_device_ops *ops = dev->netdev_ops; + struct tc_preempt_qopt_offload preempt = { }; struct tc_taprio_qopt_offload *offload; int err; @@ -1286,13 +1289,15 @@ static int taprio_disable_offload(struct net_device *dev, offload->enable = 0; err = ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TAPRIO, offload); - if (err < 0) { + if (err < 0) + NL_SET_ERR_MSG(extack, + "Device failed to disable offload"); + + err = ops->ndo_setup_tc(dev, TC_SETUP_PREEMPT, &preempt); + if (err < 0) NL_SET_ERR_MSG(extack, "Device failed to disable offload"); - goto out; - } -out: taprio_offload_free(offload); return err; @@ -1509,6 +1514,29 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, 
mqprio->prio_tc_map[i]); } + /* It's valid to enable frame preemption without any kind of + * offloading being enabled, so keep it separated. + */ + if (tb[TCA_TAPRIO_ATTR_PREEMPT_TCS]) { + u32 preempt = nla_get_u32(tb[TCA_TAPRIO_ATTR_PREEMPT_TCS]); + struct tc_preempt_qopt_offload qopt = { }; + + if (preempt == U32_MAX) { + NL_SET_ERR_MSG(extack, "At least one queue must be not be preemptible"); + err = -EINVAL; + goto free_sched; + } + + qopt.preemptible_queues = tc_map_to_queue_mask(dev, preempt); + + err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_PREEMPT, + &qopt); + if (err) + goto free_sched; + + q->preemptible_tcs = preempt; + } + if (FULL_OFFLOAD_IS_ENABLED(q->flags)) err = taprio_enable_offload(dev, q, new_admin, extack); else @@ -1665,6 +1693,7 @@ static int taprio_init(struct Qdisc *sch, struct nlattr *opt, */ q->clockid = -1; q->flags = TAPRIO_FLAGS_INVALID; + q->preemptible_tcs = U32_MAX; spin_lock(&taprio_list_lock); list_add(&q->taprio_list, &taprio_list); @@ -1848,6 +1877,10 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb) if (q->flags && nla_put_u32(skb, TCA_TAPRIO_ATTR_FLAGS, q->flags)) goto options_error; + if (q->preemptible_tcs != U32_MAX && + nla_put_u32(skb, TCA_TAPRIO_ATTR_PREEMPT_TCS, q->preemptible_tcs)) + goto options_error; + if (q->txtime_delay && nla_put_u32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay)) goto options_error; From patchwork Tue Jan 19 00:40:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028477 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B1C74C433E0 for ; Tue, 19 Jan 2021 00:41:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 80A3223101 for ; Tue, 19 Jan 2021 00:41:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391315AbhASAl6 (ORCPT ); Mon, 18 Jan 2021 19:41:58 -0500 Received: from mga09.intel.com ([134.134.136.24]:38525 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730132AbhASAlr (ORCPT ); Mon, 18 Jan 2021 19:41:47 -0500 IronPort-SDR: NjuAKNezWHSisiPqBloH/QVjVvJSy7pbzEqV3dBBMj34Za0DBnfsXGB4QFY7w7HCMNZ8TDkfmo jLzFWQprsXuQ== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011257" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011257" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:52 -0800 IronPort-SDR: S4iGKsAaOsLdWc2XRLltfKxhstNgKlMmycZLklh/yQrzoqvEcILovXMSZpFzANSLRnKJxuWOWn th2Ew85gOvgg== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285766" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:51 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, 
jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 3/8] igc: Set the RX packet buffer size for TSN mode Date: Mon, 18 Jan 2021 16:40:23 -0800 Message-Id: <20210119004028.2809425-4-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org In preparation for supporting frame preemption, when entering TSN mode set the receive packet buffer to 16KB for the Express MAC, 16KB for the Preemptible MAC and 2KB for the BMC, according to the datasheet section 7.1.3.2. Signed-off-by: Vinicius Costa Gomes --- drivers/net/ethernet/intel/igc/igc_defines.h | 2 ++ drivers/net/ethernet/intel/igc/igc_tsn.c | 14 ++++++++++++-- 2 files changed, 14 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h index 32f5fd684139..0e78abfd99ee 100644 --- a/drivers/net/ethernet/intel/igc/igc_defines.h +++ b/drivers/net/ethernet/intel/igc/igc_defines.h @@ -351,6 +351,8 @@ #define IGC_RXPBS_CFG_TS_EN 0x80000000 /* Timestamp in Rx buffer */ #define IGC_TXPBSIZE_TSN 0x04145145 /* 5k bytes buffer for each queue */ +#define IGC_RXPBSIZE_TSN 0x00010090 /* 16KB for EXP + 16KB for BE + 2KB for BMC */ +#define IGC_RXPBSIZE_SIZE_MASK 0x0001FFFF #define IGC_DTXMXPKTSZ_TSN 0x19 /* 1600 bytes of max TX DMA packet size */ #define IGC_DTXMXPKTSZ_DEFAULT 0x98 /* 9728-byte Jumbo frames */ diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c index 174103c4bea6..38451cf05ac6 100644 --- a/drivers/net/ethernet/intel/igc/igc_tsn.c +++ b/drivers/net/ethernet/intel/igc/igc_tsn.c @@ -24,7 +24,7 @@ static bool is_any_launchtime(struct igc_adapter *adapter) static int igc_tsn_disable_offload(struct igc_adapter *adapter) { struct igc_hw *hw = &adapter->hw; - u32 tqavctrl; + u32 tqavctrl, rxpbs; int i; if (!(adapter->flags & IGC_FLAG_TSN_QBV_ENABLED)) @@ -35,6 +35,11 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) wr32(IGC_TXPBS, I225_TXPBSIZE_DEFAULT); wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_DEFAULT); + rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK; + rxpbs |= I225_RXPBSIZE_DEFAULT; + + wr32(IGC_RXPBS, rxpbs); + tqavctrl = rd32(IGC_TQAVCTRL); tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV); @@ -64,7 +69,7 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) { struct igc_hw *hw = &adapter->hw; u32 tqavctrl, baset_l, baset_h; - u32 sec, nsec, cycle; + u32 sec, nsec, cycle, rxpbs; ktime_t base_time, systim; int i; @@ -79,6 +84,11 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN); tqavctrl = rd32(IGC_TQAVCTRL); + rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK; + rxpbs |= IGC_RXPBSIZE_TSN; + + wr32(IGC_RXPBS, rxpbs); + tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; wr32(IGC_TQAVCTRL, tqavctrl); From patchwork Tue Jan 19 00:40:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028479 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 
3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E091C433DB for ; Tue, 19 Jan 2021 00:42:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 593322225C for ; Tue, 19 Jan 2021 00:42:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391325AbhASAmB (ORCPT ); Mon, 18 Jan 2021 19:42:01 -0500 Received: from mga09.intel.com ([134.134.136.24]:38527 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2388992AbhASAls (ORCPT ); Mon, 18 Jan 2021 19:41:48 -0500 IronPort-SDR: LESk/8L3PjtEmZuVeVfDwToiYENnrHgPc2KMnKU3lSBOeAPcL3KcXcWTeiUZAGFI9e1NFkWyAB SQjpXkzwLlVQ== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011259" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011259" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:52 -0800 IronPort-SDR: UudlCTyGtEY0Bz2KWkpQOenq5yl2XESzJJPFRKkwpeLBdt6Ip2JQOr+farwewjietcwhgfJLnC 7c2Gx0/aDp+w== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285772" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:52 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 4/8] igc: Only dump registers if configured to dump HW information Date: Mon, 18 Jan 2021 16:40:24 -0800 Message-Id: <20210119004028.2809425-5-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org To avoid polluting the users logs with register dumps, only dump the adapter's registers if configured to do so. 
If users want to enable HW status messages they can do: $ ethtool -s IFACE msglvl hw on Signed-off-by: Vinicius Costa Gomes --- drivers/net/ethernet/intel/igc/igc_dump.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/ethernet/intel/igc/igc_dump.c b/drivers/net/ethernet/intel/igc/igc_dump.c index 4b9ec7d0b727..90b754b429ff 100644 --- a/drivers/net/ethernet/intel/igc/igc_dump.c +++ b/drivers/net/ethernet/intel/igc/igc_dump.c @@ -308,6 +308,9 @@ void igc_regs_dump(struct igc_adapter *adapter) struct igc_hw *hw = &adapter->hw; struct igc_reg_info *reginfo; + if (!netif_msg_hw(adapter)) + return; + /* Print Registers */ netdev_info(adapter->netdev, "Register Dump\n"); netdev_info(adapter->netdev, "Register Name Value\n"); From patchwork Tue Jan 19 00:40:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028481 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFD32C433E0 for ; Tue, 19 Jan 2021 00:42:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B4F232225C for ; Tue, 19 Jan 2021 00:42:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391371AbhASAmI (ORCPT ); Mon, 18 Jan 2021 19:42:08 -0500 Received: from mga09.intel.com ([134.134.136.24]:38531 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391079AbhASAlt (ORCPT ); Mon, 18 Jan 2021 19:41:49 -0500 IronPort-SDR: Slg/xgOLDzXHo7kZymF1wobJKSZixeJdJb//HLzxcyvnUtmlfRwkTez3GEw6YWdZ/Ef2WcFqML uTXCC+JkSuoA== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011264" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011264" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:54 -0800 IronPort-SDR: QZSJrKENayqnnMspYJoz4fxLDekgzyytlj8vot2EsqqKPDZa3urdDUgiSGo/37+qiRQyYvGlUo LNLblCTgfMKQ== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285778" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:52 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 5/8] igc: Avoid TX Hangs because long cycles Date: Mon, 18 Jan 2021 16:40:25 -0800 Message-Id: <20210119004028.2809425-6-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org 
Avoid possible TX Hangs caused by using long Qbv cycles. In some cases, using long cycles (more than 1 second) can cause transmissions to be blocked for that time. As the TX Hang timeout is close to 1 second, we may need to reduce the cycle time to something more reasonable: the value chosen is 1ms. Signed-off-by: Vinicius Costa Gomes --- drivers/net/ethernet/intel/igc/igc_main.c | 4 ++-- drivers/net/ethernet/intel/igc/igc_tsn.c | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index afd6a62da29d..f1b31fa04734 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -4693,12 +4693,12 @@ static int igc_save_launchtime_params(struct igc_adapter *adapter, int queue, if (adapter->base_time) return 0; - adapter->cycle_time = NSEC_PER_SEC; + adapter->cycle_time = NSEC_PER_MSEC; for (i = 0; i < adapter->num_tx_queues; i++) { ring = adapter->tx_ring[i]; ring->start_time = 0; - ring->end_time = NSEC_PER_SEC; + ring->end_time = NSEC_PER_MSEC; } return 0; diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c index 38451cf05ac6..f5a5527adb21 100644 --- a/drivers/net/ethernet/intel/igc/igc_tsn.c +++ b/drivers/net/ethernet/intel/igc/igc_tsn.c @@ -54,11 +54,11 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) wr32(IGC_TXQCTL(i), 0); wr32(IGC_STQT(i), 0); - wr32(IGC_ENDQT(i), NSEC_PER_SEC); + wr32(IGC_ENDQT(i), NSEC_PER_MSEC); } - wr32(IGC_QBVCYCLET_S, NSEC_PER_SEC); - wr32(IGC_QBVCYCLET, NSEC_PER_SEC); + wr32(IGC_QBVCYCLET_S, NSEC_PER_MSEC); + wr32(IGC_QBVCYCLET, NSEC_PER_MSEC); adapter->flags &= ~IGC_FLAG_TSN_QBV_ENABLED; From patchwork Tue Jan 19 00:40:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028483 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6E0DC433DB for ; Tue, 19 Jan 2021 00:42:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9A41223101 for ; Tue, 19 Jan 2021 00:42:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391473AbhASAmO (ORCPT ); Mon, 18 Jan 2021 19:42:14 -0500 Received: from mga09.intel.com ([134.134.136.24]:38525 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391079AbhASAmL (ORCPT ); Mon, 18 Jan 2021 19:42:11 -0500 IronPort-SDR: 4HAjAKfaN4zBmsG/o8+d6lbI/79usJhcHe1UWFSzWAh9nucA586VZnpn9PdmVBzUa+C5OEKe9+ d1vQwy4lgqRA== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011266" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011266" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:54 -0800 IronPort-SDR: wxeSF/KbvI4vzVhyzEnQiJXPxQ3oCkeDXMVTY1AZ6jQnMJEsNURyiA3YCWbTvKV8b12CUmghrl 5r1Ta7Rbc6jQ== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; 
d="scan'208";a="426285784" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:54 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 6/8] igc: Add support for tuning frame preemption via ethtool Date: Mon, 18 Jan 2021 16:40:26 -0800 Message-Id: <20210119004028.2809425-7-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org The tc subsystem sets which queues are marked as preemptible, it's the role of ethtool to control more hardware specific parameters. These parameters include: - enabling the frame preemption hardware: As enabling frame preemption may have other requirements before it can be enabled, it's exposed via the ethtool API; - mininum fragment size multiplier: expressed in usually in the form of (1 + N)*64, this number indicates what's the size of the minimum fragment that can be preempted. Signed-off-by: Vinicius Costa Gomes --- drivers/net/ethernet/intel/igc/igc.h | 12 +++++ drivers/net/ethernet/intel/igc/igc_defines.h | 4 ++ drivers/net/ethernet/intel/igc/igc_ethtool.c | 53 ++++++++++++++++++++ drivers/net/ethernet/intel/igc/igc_tsn.c | 25 +++++++-- 4 files changed, 91 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h index 35baae900c1f..1067c46e0bc2 100644 --- a/drivers/net/ethernet/intel/igc/igc.h +++ b/drivers/net/ethernet/intel/igc/igc.h @@ -87,6 +87,7 @@ struct igc_ring { u8 queue_index; /* logical index of the ring*/ u8 reg_idx; /* physical index of the ring */ bool launchtime_enable; /* true if LaunchTime is enabled */ + bool preemptible; /* true if not express */ u32 start_time; u32 end_time; @@ -165,6 +166,8 @@ struct igc_adapter { ktime_t base_time; ktime_t cycle_time; + bool frame_preemption_active; + u8 add_frag_size; /* OS defined structs */ struct pci_dev *pdev; @@ -262,6 +265,10 @@ extern char igc_driver_name[]; #define IGC_FLAG_VLAN_PROMISC BIT(15) #define IGC_FLAG_RX_LEGACY BIT(16) #define IGC_FLAG_TSN_QBV_ENABLED BIT(17) +#define IGC_FLAG_TSN_PREEMPT_ENABLED BIT(18) + +#define IGC_FLAG_TSN_ANY_ENABLED \ + (IGC_FLAG_TSN_QBV_ENABLED | IGC_FLAG_TSN_PREEMPT_ENABLED) #define IGC_FLAG_RSS_FIELD_IPV4_UDP BIT(6) #define IGC_FLAG_RSS_FIELD_IPV6_UDP BIT(7) @@ -310,6 +317,11 @@ extern char igc_driver_name[]; #define IGC_I225_RX_LATENCY_1000 300 #define IGC_I225_RX_LATENCY_2500 1485 +/* From the datasheet section 8.12.4 Tx Qav Control TQAVCTRL, + * MIN_FRAG initial value. + */ +#define IGC_I225_MIN_FRAG_SIZE_DEFAULT 68 + /* RX and TX descriptor control thresholds. * PTHRESH - MAC will consider prefetch if it has fewer than this number of * descriptors available in its onboard memory. 
diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h index 0e78abfd99ee..c2155d109bd6 100644 --- a/drivers/net/ethernet/intel/igc/igc_defines.h +++ b/drivers/net/ethernet/intel/igc/igc_defines.h @@ -410,10 +410,14 @@ /* Transmit Scheduling */ #define IGC_TQAVCTRL_TRANSMIT_MODE_TSN 0x00000001 #define IGC_TQAVCTRL_ENHANCED_QAV 0x00000008 +#define IGC_TQAVCTRL_PREEMPT_ENA 0x00000002 +#define IGC_TQAVCTRL_MIN_FRAG_MASK 0x0000C000 +#define IGC_TQAVCTRL_MIN_FRAG_SHIFT 14 #define IGC_TXQCTL_QUEUE_MODE_LAUNCHT 0x00000001 #define IGC_TXQCTL_STRICT_CYCLE 0x00000002 #define IGC_TXQCTL_STRICT_END 0x00000004 +#define IGC_TXQCTL_PREEMPTABLE 0x00000008 /* Receive Checksum Control */ #define IGC_RXCSUM_CRCOFL 0x00000800 /* CRC32 offload enable */ diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c index 61d331ce38cd..2ed32ac0dca8 100644 --- a/drivers/net/ethernet/intel/igc/igc_ethtool.c +++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c @@ -8,6 +8,7 @@ #include "igc.h" #include "igc_diag.h" +#include "igc_tsn.h" /* forward declaration */ struct igc_stats { @@ -1636,6 +1637,56 @@ static int igc_ethtool_set_eee(struct net_device *netdev, return 0; } +static int igc_ethtool_get_preempt(struct net_device *netdev, + struct ethtool_fp *fpcmd) +{ + struct igc_adapter *adapter = netdev_priv(netdev); + + fpcmd->enabled = adapter->frame_preemption_active; + fpcmd->add_frag_size = adapter->add_frag_size; + + return 0; +} + +static int igc_ethtool_set_preempt(struct net_device *netdev, + struct ethtool_fp *fpcmd, + struct netlink_ext_ack *extack) +{ + struct igc_adapter *adapter = netdev_priv(netdev); + int i; + + if (fpcmd->add_frag_size < 68 || fpcmd->add_frag_size > 260) { + NL_SET_ERR_MSG(extack, "Invalid value for add-frag-size"); + return -EINVAL; + } + + adapter->frame_preemption_active = fpcmd->enabled; + adapter->add_frag_size = fpcmd->add_frag_size; + + if (!adapter->frame_preemption_active) + goto done; + + /* Enabling frame preemption requires TSN mode to be enabled, + * which requires a schedule to be active. So, if there isn't + * a schedule already configured, configure a simple one, with + * all queues open, with 1ms cycle time. 
+ */ + if (adapter->base_time) + goto done; + + adapter->cycle_time = NSEC_PER_MSEC; + + for (i = 0; i < adapter->num_tx_queues; i++) { + struct igc_ring *ring = adapter->tx_ring[i]; + + ring->start_time = 0; + ring->end_time = NSEC_PER_MSEC; + } + +done: + return igc_tsn_offload_apply(adapter); +} + static int igc_ethtool_begin(struct net_device *netdev) { struct igc_adapter *adapter = netdev_priv(netdev); @@ -1915,6 +1966,8 @@ static const struct ethtool_ops igc_ethtool_ops = { .get_ts_info = igc_ethtool_get_ts_info, .get_channels = igc_ethtool_get_channels, .set_channels = igc_ethtool_set_channels, + .get_preempt = igc_ethtool_get_preempt, + .set_preempt = igc_ethtool_set_preempt, .get_priv_flags = igc_ethtool_get_priv_flags, .set_priv_flags = igc_ethtool_set_priv_flags, .get_eee = igc_ethtool_get_eee, diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c index f5a5527adb21..31aa9eed3ae3 100644 --- a/drivers/net/ethernet/intel/igc/igc_tsn.c +++ b/drivers/net/ethernet/intel/igc/igc_tsn.c @@ -30,7 +30,10 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) if (!(adapter->flags & IGC_FLAG_TSN_QBV_ENABLED)) return 0; + adapter->base_time = 0; adapter->cycle_time = 0; + adapter->frame_preemption_active = false; + adapter->add_frag_size = IGC_I225_MIN_FRAG_SIZE_DEFAULT; wr32(IGC_TXPBS, I225_TXPBSIZE_DEFAULT); wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_DEFAULT); @@ -42,7 +45,8 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) tqavctrl = rd32(IGC_TQAVCTRL); tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN | - IGC_TQAVCTRL_ENHANCED_QAV); + IGC_TQAVCTRL_ENHANCED_QAV | IGC_TQAVCTRL_PREEMPT_ENA | + IGC_TQAVCTRL_MIN_FRAG_MASK); wr32(IGC_TQAVCTRL, tqavctrl); for (i = 0; i < adapter->num_tx_queues; i++) { @@ -51,6 +55,7 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) ring->start_time = 0; ring->end_time = 0; ring->launchtime_enable = false; + ring->preemptible = false; wr32(IGC_TXQCTL(i), 0); wr32(IGC_STQT(i), 0); @@ -71,6 +76,7 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) u32 tqavctrl, baset_l, baset_h; u32 sec, nsec, cycle, rxpbs; ktime_t base_time, systim; + u8 frag_size_mult; int i; if (adapter->flags & IGC_FLAG_TSN_QBV_ENABLED) @@ -83,13 +89,22 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN); wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN); - tqavctrl = rd32(IGC_TQAVCTRL); rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK; rxpbs |= IGC_RXPBSIZE_TSN; wr32(IGC_RXPBS, rxpbs); + tqavctrl = rd32(IGC_TQAVCTRL) & + ~(IGC_TQAVCTRL_MIN_FRAG_MASK | IGC_TQAVCTRL_PREEMPT_ENA); tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; + + if (adapter->frame_preemption_active) + tqavctrl |= IGC_TQAVCTRL_PREEMPT_ENA; + + frag_size_mult = ethtool_frag_size_to_mult(adapter->add_frag_size); + + tqavctrl |= frag_size_mult << IGC_TQAVCTRL_MIN_FRAG_SHIFT; + wr32(IGC_TQAVCTRL, tqavctrl); wr32(IGC_QBVCYCLET_S, cycle); @@ -115,6 +130,9 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) if (ring->launchtime_enable) txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT; + if (ring->preemptible) + txqctl |= IGC_TXQCTL_PREEMPTABLE; + wr32(IGC_TXQCTL(i), txqctl); } @@ -142,7 +160,8 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) int igc_tsn_offload_apply(struct igc_adapter *adapter) { - bool is_any_enabled = adapter->base_time || is_any_launchtime(adapter); + bool is_any_enabled = adapter->base_time || + is_any_launchtime(adapter) || 
adapter->frame_preemption_active; if (!(adapter->flags & IGC_FLAG_TSN_QBV_ENABLED) && !is_any_enabled) return 0; From patchwork Tue Jan 19 00:40:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028485 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2DD5AC433E0 for ; Tue, 19 Jan 2021 00:42:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E6FAB23101 for ; Tue, 19 Jan 2021 00:42:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391527AbhASAmg (ORCPT ); Mon, 18 Jan 2021 19:42:36 -0500 Received: from mga09.intel.com ([134.134.136.24]:38527 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391385AbhASAmL (ORCPT ); Mon, 18 Jan 2021 19:42:11 -0500 IronPort-SDR: 2CgKtVw4udaSCOaJ+tTiSlgWBr5ExDbFjp3hGZ5vsJkATgRgywnLyguXnookbu/Ad3sCHT4VtQ J9JOyCTDXRkQ== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011267" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011267" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:55 -0800 IronPort-SDR: Hn1Lm3Gh7gEqt8pRJz3t5DXZeFpfWxuDZ5yXY5+XRlx995Rlplt1vJFgdYw4F1v5Q6E9VGulkl TwQXmGpWGCIA== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285788" Received: from cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:54 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 7/8] igc: Add support for Frame Preemption offload Date: Mon, 18 Jan 2021 16:40:27 -0800 Message-Id: <20210119004028.2809425-8-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org After the set of queues that are marked as preemptible are exposed to the driver we can configure the hardware to enable the frame preemption functionality. 
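To make the queue mapping concrete (the numbers below are only an example, not taken from the patch): on an i225 with four TX queues and a one-to-one traffic-class-to-queue mapping, marking TC 2 and TC 3 as preemptible in taprio arrives at the driver as qopt->preemptible_queues == 0xc, i.e. BIT(2) | BIT(3); igc_save_frame_preemption() then sets the preemptible flag only on TX rings 2 and 3, and when the TSN configuration is applied those rings get IGC_TXQCTL_PREEMPTABLE set while rings 0 and 1 stay express.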
Signed-off-by: Vinicius Costa Gomes --- drivers/net/ethernet/intel/igc/igc_main.c | 32 +++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index f1b31fa04734..6a09f37ba7ed 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -4818,6 +4818,23 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter, return 0; } +static int igc_save_frame_preemption(struct igc_adapter *adapter, + struct tc_preempt_qopt_offload *qopt) +{ + u32 preempt; + int i; + + preempt = qopt->preemptible_queues; + + for (i = 0; i < adapter->num_tx_queues; i++) { + struct igc_ring *ring = adapter->tx_ring[i]; + + ring->preemptible = preempt & BIT(i); + } + + return 0; +} + static int igc_tsn_enable_qbv_scheduling(struct igc_adapter *adapter, struct tc_taprio_qopt_offload *qopt) { @@ -4834,6 +4851,18 @@ static int igc_tsn_enable_qbv_scheduling(struct igc_adapter *adapter, return igc_tsn_offload_apply(adapter); } +static int igc_tsn_enable_frame_preemption(struct igc_adapter *adapter, + struct tc_preempt_qopt_offload *qopt) +{ + int err; + + err = igc_save_frame_preemption(adapter, qopt); + if (err) + return err; + + return igc_tsn_offload_apply(adapter); +} + static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { @@ -4846,6 +4875,9 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type, case TC_SETUP_QDISC_ETF: return igc_tsn_enable_launchtime(adapter, type_data); + case TC_SETUP_PREEMPT: + return igc_tsn_enable_frame_preemption(adapter, type_data); + default: return -EOPNOTSUPP; } From patchwork Tue Jan 19 00:40:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vinicius Costa Gomes X-Patchwork-Id: 12028487 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 093BFC433DB for ; Tue, 19 Jan 2021 00:42:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C8E5223101 for ; Tue, 19 Jan 2021 00:42:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2404339AbhASAml (ORCPT ); Mon, 18 Jan 2021 19:42:41 -0500 Received: from mga09.intel.com ([134.134.136.24]:38531 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391387AbhASAmL (ORCPT ); Mon, 18 Jan 2021 19:42:11 -0500 IronPort-SDR: SoUbIHhLIHl/dscdJn2WFKTYj9cTL07tFwMvD22vO7KIVguGL0ea3a1uui7GSL5Gc/qR4hCEaV Xb9KmDEVNrxw== X-IronPort-AV: E=McAfee;i="6000,8403,9868"; a="179011268" X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="179011268" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:56 -0800 IronPort-SDR: rryoP86f2/jqERXEi9OD55lGJSKxjIRFGTsTkH1U72kUKNkb3LDBUEjK/xsNOBzYE9EINij16a 32INf1X6K+xQ== X-IronPort-AV: E=Sophos;i="5.79,357,1602572400"; d="scan'208";a="426285792" Received: from 
cemillan-mobl.amr.corp.intel.com (HELO localhost.localdomain) ([10.212.57.184]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Jan 2021 16:40:55 -0800 From: Vinicius Costa Gomes To: netdev@vger.kernel.org Cc: Vinicius Costa Gomes , jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us, kuba@kernel.org, m-karicheri2@ti.com, vladimir.oltean@nxp.com, Jose.Abreu@synopsys.com, po.liu@nxp.com, intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com, mkubecek@suse.cz Subject: [PATCH net-next v2 8/8] igc: Separate TSN configurations that can be updated Date: Mon, 18 Jan 2021 16:40:28 -0800 Message-Id: <20210119004028.2809425-9-vinicius.gomes@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210119004028.2809425-1-vinicius.gomes@intel.com> References: <20210119004028.2809425-1-vinicius.gomes@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Some TSN features can be enabled at runtime, but most of them need an adapter reset to be disabled. To better keep track of this, separate the process into an "_apply" and a "_reset" function: "_apply" runs with the adapter in a potentially "dirty" state and, if necessary, requests an adapter reset, so "_reset" always runs with a "clean" adapter. The idea is to make the process easier to follow. Signed-off-by: Vinicius Costa Gomes --- drivers/net/ethernet/intel/igc/igc_main.c | 21 ++-- drivers/net/ethernet/intel/igc/igc_tsn.c | 139 +++++++++++++++------- drivers/net/ethernet/intel/igc/igc_tsn.h | 1 + 3 files changed, 102 insertions(+), 59 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 6a09f37ba7ed..8f94b53de2df 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -108,7 +108,7 @@ void igc_reset(struct igc_adapter *adapter) igc_ptp_reset(adapter); /* Re-enable TSN offloading, where applicable. */ - igc_tsn_offload_apply(adapter); + igc_tsn_reset(adapter); igc_get_phy_info(hw); } @@ -4824,6 +4824,11 @@ static int igc_save_frame_preemption(struct igc_adapter *adapter, u32 preempt; int i; + /* What we want here is just to save the configuration, so + * when frame preemption is enabled via ethtool, which queues + * are marked as preemptible is saved.
+ */ + preempt = qopt->preemptible_queues; for (i = 0; i < adapter->num_tx_queues; i++) { @@ -4851,18 +4856,6 @@ static int igc_tsn_enable_qbv_scheduling(struct igc_adapter *adapter, return igc_tsn_offload_apply(adapter); } -static int igc_tsn_enable_frame_preemption(struct igc_adapter *adapter, - struct tc_preempt_qopt_offload *qopt) -{ - int err; - - err = igc_save_frame_preemption(adapter, qopt); - if (err) - return err; - - return igc_tsn_offload_apply(adapter); -} - static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type, void *type_data) { @@ -4876,7 +4869,7 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type, return igc_tsn_enable_launchtime(adapter, type_data); case TC_SETUP_PREEMPT: - return igc_tsn_enable_frame_preemption(adapter, type_data); + return igc_save_frame_preemption(adapter, type_data); default: return -EOPNOTSUPP; diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.c b/drivers/net/ethernet/intel/igc/igc_tsn.c index 31aa9eed3ae3..fdb472a80967 100644 --- a/drivers/net/ethernet/intel/igc/igc_tsn.c +++ b/drivers/net/ethernet/intel/igc/igc_tsn.c @@ -18,8 +18,24 @@ static bool is_any_launchtime(struct igc_adapter *adapter) return false; } +static unsigned int igc_tsn_new_flags(struct igc_adapter *adapter) +{ + unsigned int new_flags = adapter->flags & ~IGC_FLAG_TSN_ANY_ENABLED; + + if (adapter->base_time) + new_flags |= IGC_FLAG_TSN_QBV_ENABLED; + + if (is_any_launchtime(adapter)) + new_flags |= IGC_FLAG_TSN_QBV_ENABLED; + + if (adapter->frame_preemption_active) + new_flags |= IGC_FLAG_TSN_PREEMPT_ENABLED; + + return new_flags; +} + /* Returns the TSN specific registers to their default values after - * TSN offloading is disabled. + * the adapter is reset. */ static int igc_tsn_disable_offload(struct igc_adapter *adapter) { @@ -27,9 +43,6 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) u32 tqavctrl, rxpbs; int i; - if (!(adapter->flags & IGC_FLAG_TSN_QBV_ENABLED)) - return 0; - adapter->base_time = 0; adapter->cycle_time = 0; adapter->frame_preemption_active = false; @@ -65,38 +78,25 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter) wr32(IGC_QBVCYCLET_S, NSEC_PER_MSEC); wr32(IGC_QBVCYCLET, NSEC_PER_MSEC); - adapter->flags &= ~IGC_FLAG_TSN_QBV_ENABLED; + adapter->flags &= ~IGC_FLAG_TSN_ANY_ENABLED; return 0; } -static int igc_tsn_enable_offload(struct igc_adapter *adapter) +static int igc_tsn_update_params(struct igc_adapter *adapter) { struct igc_hw *hw = &adapter->hw; - u32 tqavctrl, baset_l, baset_h; - u32 sec, nsec, cycle, rxpbs; - ktime_t base_time, systim; + unsigned int flags; u8 frag_size_mult; + u32 tqavctrl; int i; - if (adapter->flags & IGC_FLAG_TSN_QBV_ENABLED) + flags = igc_tsn_new_flags(adapter) & IGC_FLAG_TSN_ANY_ENABLED; + if (!flags) return 0; - cycle = adapter->cycle_time; - base_time = adapter->base_time; - - wr32(IGC_TSAUXC, 0); - wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN); - wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN); - - rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK; - rxpbs |= IGC_RXPBSIZE_TSN; - - wr32(IGC_RXPBS, rxpbs); - tqavctrl = rd32(IGC_TQAVCTRL) & ~(IGC_TQAVCTRL_MIN_FRAG_MASK | IGC_TQAVCTRL_PREEMPT_ENA); - tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; if (adapter->frame_preemption_active) tqavctrl |= IGC_TQAVCTRL_PREEMPT_ENA; @@ -107,9 +107,6 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) wr32(IGC_TQAVCTRL, tqavctrl); - wr32(IGC_QBVCYCLET_S, cycle); - wr32(IGC_QBVCYCLET, cycle); - for (i = 0; i < adapter->num_tx_queues; i++) { struct 
igc_ring *ring = adapter->tx_ring[i]; u32 txqctl = 0; @@ -130,12 +127,47 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) if (ring->launchtime_enable) txqctl |= IGC_TXQCTL_QUEUE_MODE_LAUNCHT; - if (ring->preemptible) + if (adapter->frame_preemption_active && ring->preemptible) txqctl |= IGC_TXQCTL_PREEMPTABLE; wr32(IGC_TXQCTL(i), txqctl); } + adapter->flags = igc_tsn_new_flags(adapter); + + return 0; +} + +static int igc_tsn_enable_offload(struct igc_adapter *adapter) +{ + struct igc_hw *hw = &adapter->hw; + u32 baset_l, baset_h, tqavctrl; + u32 sec, nsec, cycle, rxpbs; + ktime_t base_time, systim; + + tqavctrl = rd32(IGC_TQAVCTRL); + tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV; + + wr32(IGC_TQAVCTRL, tqavctrl); + + wr32(IGC_TSAUXC, 0); + wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN); + wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN); + + rxpbs = rd32(IGC_RXPBS) & ~IGC_RXPBSIZE_SIZE_MASK; + rxpbs |= IGC_RXPBSIZE_TSN; + + wr32(IGC_RXPBS, rxpbs); + + if (!adapter->base_time) + goto done; + + cycle = adapter->cycle_time; + base_time = adapter->base_time; + + wr32(IGC_QBVCYCLET_S, cycle); + wr32(IGC_QBVCYCLET, cycle); + nsec = rd32(IGC_SYSTIML); sec = rd32(IGC_SYSTIMH); @@ -153,34 +185,51 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter) wr32(IGC_BASET_H, baset_h); wr32(IGC_BASET_L, baset_l); - adapter->flags |= IGC_FLAG_TSN_QBV_ENABLED; +done: + igc_tsn_update_params(adapter); return 0; } +int igc_tsn_reset(struct igc_adapter *adapter) +{ + unsigned int new_flags; + int err = 0; + + new_flags = igc_tsn_new_flags(adapter); + + if (!(new_flags & IGC_FLAG_TSN_ANY_ENABLED)) + return igc_tsn_disable_offload(adapter); + + err = igc_tsn_enable_offload(adapter); + if (err < 0) + return err; + + adapter->flags = new_flags; + + return err; +} + int igc_tsn_offload_apply(struct igc_adapter *adapter) { - bool is_any_enabled = adapter->base_time || - is_any_launchtime(adapter) || adapter->frame_preemption_active; + unsigned int new_flags, old_flags; - if (!(adapter->flags & IGC_FLAG_TSN_QBV_ENABLED) && !is_any_enabled) - return 0; + old_flags = adapter->flags; + new_flags = igc_tsn_new_flags(adapter); - if (!is_any_enabled) { - int err = igc_tsn_disable_offload(adapter); + if (old_flags == new_flags) + return igc_tsn_update_params(adapter); - if (err < 0) - return err; + /* Enabling features work without resetting the adapter */ + if (new_flags > old_flags) + return igc_tsn_enable_offload(adapter); - /* The BASET registers aren't cleared when writing - * into them, force a reset if the interface is - * running. - */ - if (netif_running(adapter->netdev)) - schedule_work(&adapter->reset_task); + adapter->flags = new_flags; - return 0; - } + if (!netif_running(adapter->netdev)) + return igc_tsn_enable_offload(adapter); - return igc_tsn_enable_offload(adapter); + schedule_work(&adapter->reset_task); + + return 0; } diff --git a/drivers/net/ethernet/intel/igc/igc_tsn.h b/drivers/net/ethernet/intel/igc/igc_tsn.h index f76bc86ddccd..1512307f5a52 100644 --- a/drivers/net/ethernet/intel/igc/igc_tsn.h +++ b/drivers/net/ethernet/intel/igc/igc_tsn.h @@ -5,5 +5,6 @@ #define _IGC_TSN_H_ int igc_tsn_offload_apply(struct igc_adapter *adapter); +int igc_tsn_reset(struct igc_adapter *adapter); #endif /* _IGC_BASE_H */
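To close out the series, a minimal, userspace-compilable C sketch of the apply/reset split introduced by this last patch: when the set of enabled TSN features only grows, the hardware can be reprogrammed in place; when a feature is removed, an adapter reset is requested and the reprogramming happens afterwards, on a clean adapter. The names below are simplified stand-ins for igc_tsn_new_flags(), igc_tsn_offload_apply() and igc_tsn_reset(), and the scheduled reset is modeled with a plain message instead of schedule_work(&adapter->reset_task).

/*
 * Sketch only: models the decision flow of the patch above with
 * simplified flags and adapter state; not the real igc code.
 */
#include <stdbool.h>
#include <stdio.h>

#define FLAG_QBV     (1u << 0)
#define FLAG_PREEMPT (1u << 1)
#define FLAG_ANY     (FLAG_QBV | FLAG_PREEMPT)

struct adapter {
	unsigned int flags;
	bool qbv_configured;		/* stand-in for base_time/launchtime checks */
	bool frame_preemption_active;
	bool netif_running;
};

/* Recompute which TSN flags the current configuration asks for. */
static unsigned int tsn_new_flags(const struct adapter *ad)
{
	unsigned int flags = ad->flags & ~FLAG_ANY;

	if (ad->qbv_configured)
		flags |= FLAG_QBV;
	if (ad->frame_preemption_active)
		flags |= FLAG_PREEMPT;

	return flags;
}

/* Offload hooks call this: decide whether hardware can be updated in place. */
static void tsn_offload_apply(struct adapter *ad)
{
	unsigned int old_flags = ad->flags;
	unsigned int want = tsn_new_flags(ad);

	if (old_flags == want) {
		puts("no feature change: just refresh parameters");
		return;
	}

	if (want > old_flags) {		/* mirrors the "enabling only" check */
		puts("enabling features: program hardware, no reset needed");
		ad->flags = want;
		return;
	}

	/* A feature went away: that requires a full adapter reset. */
	ad->flags = want;
	if (ad->netif_running)
		puts("scheduling adapter reset");
	else
		puts("interface down: reprogram directly");
}

/* Runs after the adapter reset, i.e. always on a "clean" adapter. */
static void tsn_reset(struct adapter *ad)
{
	unsigned int want = tsn_new_flags(ad);

	if (!(want & FLAG_ANY)) {
		puts("nothing enabled: restore TSN registers to their defaults");
		ad->flags &= ~FLAG_ANY;
		return;
	}

	puts("re-enable the requested TSN offloads");
	ad->flags = want;
}

int main(void)
{
	struct adapter ad = { .netif_running = true };

	ad.frame_preemption_active = true;
	tsn_offload_apply(&ad);		/* enable path: no reset */

	ad.frame_preemption_active = false;
	tsn_offload_apply(&ad);		/* disable path: reset requested */

	tsn_reset(&ad);			/* what igc_reset() ends up calling */
	return 0;
}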