From patchwork Wed Dec 22 12:08:08 2021
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 12691459
X-Patchwork-Delegate: kuba@kernel.org
From: xiangxia.m.yue@gmail.com
To: netdev@vger.kernel.org
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
    "David S. Miller", Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
    Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
    Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
    Antoine Tenart, Wei Wang, Arnd Bergmann
Miller" , Jakub Kicinski , Jonathan Lemon , Eric Dumazet , Alexander Lobakin , Paolo Abeni , Talal Ahmad , Kevin Hao , Ilias Apalodimas , Kees Cook , Kumar Kartikeya Dwivedi , Antoine Tenart , Wei Wang , Arnd Bergmann Subject: [net-next v6 1/2] net: sched: use queue_mapping to pick tx queue Date: Wed, 22 Dec 2021 20:08:08 +0800 Message-Id: <20211222120809.2222-2-xiangxia.m.yue@gmail.com> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20211222120809.2222-1-xiangxia.m.yue@gmail.com> References: <20211222120809.2222-1-xiangxia.m.yue@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Tonghao Zhang This patch fixes issue: * If we install tc filters with act_skbedit in clsact hook. It doesn't work, because netdev_core_pick_tx() overwrites queue_mapping. $ tc filter ... action skbedit queue_mapping 1 And this patch is useful: * We can use FQ + EDT to implement efficient policies. Tx queues are picked by xps, ndo_select_queue of netdev driver, or skb hash in netdev_core_pick_tx(). In fact, the netdev driver, and skb hash are _not_ under control. xps uses the CPUs map to select Tx queues, but we can't figure out which task_struct of pod/containter running on this cpu in most case. We can use clsact filters to classify one pod/container traffic to one Tx queue. Why ? In containter networking environment, there are two kinds of pod/ containter/net-namespace. One kind (e.g. P1, P2), the high throughput is key in these applications. But avoid running out of network resource, the outbound traffic of these pods is limited, using or sharing one dedicated Tx queues assigned HTB/TBF/FQ Qdisc. Other kind of pods (e.g. Pn), the low latency of data access is key. And the traffic is not limited. Pods use or share other dedicated Tx queues assigned FIFO Qdisc. This choice provides two benefits. First, contention on the HTB/FQ Qdisc lock is significantly reduced since fewer CPUs contend for the same queue. More importantly, Qdisc contention can be eliminated completely if each CPU has its own FIFO Qdisc for the second kind of pods. There must be a mechanism in place to support classifying traffic based on pods/container to different Tx queues. Note that clsact is outside of Qdisc while Qdisc can run a classifier to select a sub-queue under the lock. In general recording the decision in the skb seems a little heavy handed. This patch introduces a per-CPU variable, suggested by Eric. The xmit.skip_txqueue flag is firstly cleared in __dev_queue_xmit(). - Tx Qdisc may install that skbedit actions, then xmit.skip_txqueue flag is set in qdisc->enqueue() though tx queue has been selected in netdev_tx_queue_mapping() or netdev_core_pick_tx(). That flag is cleared firstly in __dev_queue_xmit(), is useful: - Avoid picking Tx queue with netdev_tx_queue_mapping() in next netdev in such case: eth0 macvlan - eth0.3 vlan - eth0 ixgbe-phy: For example, eth0, macvlan in pod, which root Qdisc install skbedit queue_mapping, send packets to eth0.3, vlan in host. In __dev_queue_xmit() of eth0.3, clear the flag, does not select tx queue according to skb->queue_mapping because there is no filters in clsact or tx Qdisc of this netdev. Same action taked in eth0, ixgbe in Host. - Avoid picking Tx queue for next packet. If we set xmit.skip_txqueue in tx Qdisc (qdisc->enqueue()), the proper way to clear it is clearing it in __dev_queue_xmit when processing next packets. For performance reasons, use the static key. 
Cc: Jamal Hadi Salim
Cc: Cong Wang
Cc: Jiri Pirko
Cc: "David S. Miller"
Cc: Jakub Kicinski
Cc: Jonathan Lemon
Cc: Eric Dumazet
Cc: Alexander Lobakin
Cc: Paolo Abeni
Cc: Talal Ahmad
Cc: Kevin Hao
Cc: Ilias Apalodimas
Cc: Kees Cook
Cc: Kumar Kartikeya Dwivedi
Cc: Antoine Tenart
Cc: Wei Wang
Cc: Arnd Bergmann
Suggested-by: Eric Dumazet
Signed-off-by: Tonghao Zhang
Reported-by: kernel test robot
---
 include/linux/netdevice.h |  3 +++
 include/linux/rtnetlink.h |  3 +++
 net/core/dev.c            | 44 ++++++++++++++++++++++++++++++++++++++-
 net/sched/act_skbedit.c   | 18 ++++++++++++++--
 4 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8b0bdeb4734e..708e9f4cca01 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3010,6 +3010,9 @@ struct softnet_data {
 	struct {
 		u16 recursion;
 		u8  more;
+#ifdef CONFIG_NET_EGRESS
+		u8  skip_txqueue;
+#endif
 	} xmit;
 #ifdef CONFIG_RPS
 	/* input_queue_head should be written by cpu owning this struct,
diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index bb9cb84114c1..256bf78daea6 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -100,6 +100,9 @@ void net_dec_ingress_queue(void);
 #ifdef CONFIG_NET_EGRESS
 void net_inc_egress_queue(void);
 void net_dec_egress_queue(void);
+void net_inc_queue_mapping(void);
+void net_dec_queue_mapping(void);
+void netdev_xmit_skip_txqueue(bool skip);
 #endif
 
 void rtnetlink_init(void);
diff --git a/net/core/dev.c b/net/core/dev.c
index a855e41bbe39..b197dabcd721 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1998,6 +1998,20 @@ void net_dec_egress_queue(void)
 	static_branch_dec(&egress_needed_key);
 }
 EXPORT_SYMBOL_GPL(net_dec_egress_queue);
+
+static DEFINE_STATIC_KEY_FALSE(txqueue_needed_key);
+
+void net_inc_queue_mapping(void)
+{
+	static_branch_inc(&txqueue_needed_key);
+}
+EXPORT_SYMBOL_GPL(net_inc_queue_mapping);
+
+void net_dec_queue_mapping(void)
+{
+	static_branch_dec(&txqueue_needed_key);
+}
+EXPORT_SYMBOL_GPL(net_dec_queue_mapping);
 #endif
 
 static DEFINE_STATIC_KEY_FALSE(netstamp_needed_key);
@@ -3860,6 +3874,25 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
 
 	return skb;
 }
+
+static inline struct netdev_queue *
+netdev_tx_queue_mapping(struct net_device *dev, struct sk_buff *skb)
+{
+	int qm = skb_get_queue_mapping(skb);
+
+	return netdev_get_tx_queue(dev, netdev_cap_txqueue(dev, qm));
+}
+
+static inline bool netdev_xmit_txqueue_skipped(void)
+{
+	return __this_cpu_read(softnet_data.xmit.skip_txqueue);
+}
+
+void netdev_xmit_skip_txqueue(bool skip)
+{
+	__this_cpu_write(softnet_data.xmit.skip_txqueue, skip);
+}
+EXPORT_SYMBOL_GPL(netdev_xmit_skip_txqueue);
 #endif /* CONFIG_NET_EGRESS */
 
 #ifdef CONFIG_XPS
@@ -4052,6 +4085,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 	skb->tc_at_ingress = 0;
 #endif
 #ifdef CONFIG_NET_EGRESS
+	if (static_branch_unlikely(&txqueue_needed_key))
+		netdev_xmit_skip_txqueue(false);
+
 	if (static_branch_unlikely(&egress_needed_key)) {
 		if (nf_hook_egress_active()) {
 			skb = nf_hook_egress(skb, &rc, dev);
@@ -4064,7 +4100,14 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 			goto out;
 		nf_skip_egress(skb, false);
 	}
+
+	if (static_branch_unlikely(&txqueue_needed_key) &&
+	    netdev_xmit_txqueue_skipped())
+		txq = netdev_tx_queue_mapping(dev, skb);
+	else
 #endif
+		txq = netdev_core_pick_tx(dev, skb, sb_dev);
+
 	/* If device/qdisc don't need skb->dst, release it right now while
 	 * its hot in this cpu cache.
 	 */
@@ -4073,7 +4116,6 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 	else
 		skb_dst_force(skb);
 
-	txq = netdev_core_pick_tx(dev, skb, sb_dev);
 	q = rcu_dereference_bh(txq->qdisc);
 
 	trace_net_dev_queue(skb);
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index ceba11b198bb..325991080a8a 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -58,8 +58,12 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
 		}
 	}
 	if (params->flags & SKBEDIT_F_QUEUE_MAPPING &&
-	    skb->dev->real_num_tx_queues > params->queue_mapping)
+	    skb->dev->real_num_tx_queues > params->queue_mapping) {
+#ifdef CONFIG_NET_EGRESS
+		netdev_xmit_skip_txqueue(true);
+#endif
 		skb_set_queue_mapping(skb, params->queue_mapping);
+	}
 	if (params->flags & SKBEDIT_F_MARK) {
 		skb->mark &= ~params->mask;
 		skb->mark |= params->mark & params->mask;
@@ -225,6 +229,11 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	if (goto_ch)
 		tcf_chain_put_by_act(goto_ch);
 
+#ifdef CONFIG_NET_EGRESS
+	if (flags & SKBEDIT_F_QUEUE_MAPPING)
+		net_inc_queue_mapping();
+#endif
+
 	return ret;
 put_chain:
 	if (goto_ch)
@@ -295,8 +304,13 @@ static void tcf_skbedit_cleanup(struct tc_action *a)
 	struct tcf_skbedit_params *params;
 
 	params = rcu_dereference_protected(d->params, 1);
-	if (params)
+	if (params) {
+#ifdef CONFIG_NET_EGRESS
+		if (params->flags & SKBEDIT_F_QUEUE_MAPPING)
+			net_dec_queue_mapping();
+#endif
 		kfree_rcu(params, rcu);
+	}
 }
 
 static int tcf_skbedit_walker(struct net *net, struct sk_buff *skb,
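As a quick sanity check on a live system, assuming the illustrative
eth0 setup from the earlier example, the installed action and its hit
counters can be inspected with standard tc commands:

  $ tc -s filter show dev eth0 egress
  $ tc -s qdisc show dev eth0

The filter statistics should increase as the matched pod transmits, and
the per-queue child qdiscs (e.g. under mq) show which Tx queue the
traffic actually landed on.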
From patchwork Wed Dec 22 12:08:09 2021
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 12691461
X-Patchwork-Delegate: kuba@kernel.org
From: xiangxia.m.yue@gmail.com
To: netdev@vger.kernel.org
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
    "David S. Miller", Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
    Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
    Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
    Antoine Tenart, Wei Wang, Arnd Bergmann
Subject: [net-next v6 2/2] net: sched: support hash/classid/cpuid selecting tx queue
Date: Wed, 22 Dec 2021 20:08:09 +0800
Message-Id: <20211222120809.2222-3-xiangxia.m.yue@gmail.com>
X-Mailer: git-send-email 2.30.1 (Apple Git-130)
In-Reply-To: <20211222120809.2222-1-xiangxia.m.yue@gmail.com>
References: <20211222120809.2222-1-xiangxia.m.yue@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Tonghao Zhang

This patch allows the user to select a range of Tx queues, from A to B,
and to pick a queue within that range using the skb hash, the cgroup
classid, or the CPU id, so that packets can be load-balanced across
queues A to B. The range bounds are unsigned 16-bit values in decimal
format.

  $ tc filter ... action skbedit queue_mapping skbhash A B

"skbedit queue_mapping QUEUE_MAPPING" (from "man 8 tc-skbedit") is
enhanced with the flags:
* SKBEDIT_F_TXQ_SKBHASH
* SKBEDIT_F_TXQ_CLASSID
* SKBEDIT_F_TXQ_CPUID

These use skb->hash, the cgroup classid, or the CPU id, respectively, to
distribute packets, so the same range of Tx queues can be shared by
different flows, cgroups, or CPUs in a variety of scenarios. For
example, flow F1 may share range R1 with flow F2; the best way to do
that is to set the SKBEDIT_F_TXQ_SKBHASH flag and let skb->hash spread
them across the shared queues. If cgroup C1 wants to share R1 with
cgroups C2 .. Cn, use SKBEDIT_F_TXQ_CLASSID. Of course, in other
scenarios C1 can use R1 while Cn uses Rn.
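As a worked example of the range semantics (the device name and queue
range are placeholders): with min 0 and max 3, mapping_mod = 3 - 0 + 1
= 4, so the selected queue is queue_mapping + (hash % 4), i.e. one of
queues 0..3:

  # spread all IPv4 egress traffic across tx queues 0..3 by skb hash,
  # assuming a clsact qdisc is already attached to eth0
  $ tc filter add dev eth0 egress protocol ip flower \
        action skbedit queue_mapping skbhash 0 3

The classid and cpuid variants substitute the cgroup classid or the
sending CPU id for skb->hash in the same computation.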
Miller" Cc: Jakub Kicinski Cc: Jonathan Lemon Cc: Eric Dumazet Cc: Alexander Lobakin Cc: Paolo Abeni Cc: Talal Ahmad Cc: Kevin Hao Cc: Ilias Apalodimas Cc: Kees Cook Cc: Kumar Kartikeya Dwivedi Cc: Antoine Tenart Cc: Wei Wang Cc: Arnd Bergmann Signed-off-by: Tonghao Zhang --- include/net/tc_act/tc_skbedit.h | 1 + include/uapi/linux/tc_act/tc_skbedit.h | 8 +++ net/sched/act_skbedit.c | 78 +++++++++++++++++++++++++- 3 files changed, 84 insertions(+), 3 deletions(-) diff --git a/include/net/tc_act/tc_skbedit.h b/include/net/tc_act/tc_skbedit.h index 00bfee70609e..ee96e0fa6566 100644 --- a/include/net/tc_act/tc_skbedit.h +++ b/include/net/tc_act/tc_skbedit.h @@ -17,6 +17,7 @@ struct tcf_skbedit_params { u32 mark; u32 mask; u16 queue_mapping; + u16 mapping_mod; u16 ptype; struct rcu_head rcu; }; diff --git a/include/uapi/linux/tc_act/tc_skbedit.h b/include/uapi/linux/tc_act/tc_skbedit.h index 800e93377218..5ea1438a4d88 100644 --- a/include/uapi/linux/tc_act/tc_skbedit.h +++ b/include/uapi/linux/tc_act/tc_skbedit.h @@ -29,6 +29,13 @@ #define SKBEDIT_F_PTYPE 0x8 #define SKBEDIT_F_MASK 0x10 #define SKBEDIT_F_INHERITDSFIELD 0x20 +#define SKBEDIT_F_TXQ_SKBHASH 0x40 +#define SKBEDIT_F_TXQ_CLASSID 0x80 +#define SKBEDIT_F_TXQ_CPUID 0x100 + +#define SKBEDIT_F_TXQ_HASH_MASK (SKBEDIT_F_TXQ_SKBHASH | \ + SKBEDIT_F_TXQ_CLASSID | \ + SKBEDIT_F_TXQ_CPUID) struct tc_skbedit { tc_gen; @@ -45,6 +52,7 @@ enum { TCA_SKBEDIT_PTYPE, TCA_SKBEDIT_MASK, TCA_SKBEDIT_FLAGS, + TCA_SKBEDIT_QUEUE_MAPPING_MAX, __TCA_SKBEDIT_MAX }; #define TCA_SKBEDIT_MAX (__TCA_SKBEDIT_MAX - 1) diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c index 325991080a8a..9493b3102923 100644 --- a/net/sched/act_skbedit.c +++ b/net/sched/act_skbedit.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -23,6 +24,38 @@ static unsigned int skbedit_net_id; static struct tc_action_ops act_skbedit_ops; +static u16 tcf_skbedit_hash(struct tcf_skbedit_params *params, + struct sk_buff *skb) +{ + u32 mapping_hash_type = params->flags & SKBEDIT_F_TXQ_HASH_MASK; + u16 queue_mapping = params->queue_mapping; + u16 mapping_mod = params->mapping_mod; + u32 hash = 0; + + switch (mapping_hash_type) { + case SKBEDIT_F_TXQ_CLASSID: + hash = task_get_classid(skb); + break; + case SKBEDIT_F_TXQ_SKBHASH: + hash = skb_get_hash(skb); + break; + case SKBEDIT_F_TXQ_CPUID: + hash = raw_smp_processor_id(); + break; + case 0: + /* Hash type isn't specified. In this case: + * hash % mapping_mod == 0 + */ + break; + default: + net_warn_ratelimited("The type of queue_mapping hash is not supported. 
0x%x\n", + mapping_hash_type); + } + + queue_mapping = queue_mapping + hash % mapping_mod; + return netdev_cap_txqueue(skb->dev, queue_mapping); +} + static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a, struct tcf_result *res) { @@ -62,7 +95,7 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a, #ifdef CONFIG_NET_EGRESS netdev_xmit_skip_txqueue(true); #endif - skb_set_queue_mapping(skb, params->queue_mapping); + skb_set_queue_mapping(skb, tcf_skbedit_hash(params, skb)); } if (params->flags & SKBEDIT_F_MARK) { skb->mark &= ~params->mask; @@ -96,6 +129,7 @@ static const struct nla_policy skbedit_policy[TCA_SKBEDIT_MAX + 1] = { [TCA_SKBEDIT_PTYPE] = { .len = sizeof(u16) }, [TCA_SKBEDIT_MASK] = { .len = sizeof(u32) }, [TCA_SKBEDIT_FLAGS] = { .len = sizeof(u64) }, + [TCA_SKBEDIT_QUEUE_MAPPING_MAX] = { .len = sizeof(u16) }, }; static int tcf_skbedit_init(struct net *net, struct nlattr *nla, @@ -112,6 +146,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, struct tcf_skbedit *d; u32 flags = 0, *priority = NULL, *mark = NULL, *mask = NULL; u16 *queue_mapping = NULL, *ptype = NULL; + u16 mapping_mod = 1; bool exists = false; int ret = 0, err; u32 index; @@ -156,7 +191,34 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, if (tb[TCA_SKBEDIT_FLAGS] != NULL) { u64 *pure_flags = nla_data(tb[TCA_SKBEDIT_FLAGS]); - + u64 mapping_hash_type; + + mapping_hash_type = *pure_flags & SKBEDIT_F_TXQ_HASH_MASK; + if (mapping_hash_type) { + u16 *queue_mapping_max; + + /* Hash types are mutually exclusive. */ + if (mapping_hash_type & (mapping_hash_type - 1)) { + NL_SET_ERR_MSG_MOD(extack, "Multi types of hash are specified."); + return -EINVAL; + } + + if (!tb[TCA_SKBEDIT_QUEUE_MAPPING] || + !tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]) { + NL_SET_ERR_MSG_MOD(extack, "Missing required range of queue_mapping."); + return -EINVAL; + } + + queue_mapping_max = + nla_data(tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]); + if (*queue_mapping_max < *queue_mapping) { + NL_SET_ERR_MSG_MOD(extack, "The range of queue_mapping is invalid, max < min."); + return -EINVAL; + } + + mapping_mod = *queue_mapping_max - *queue_mapping + 1; + flags |= mapping_hash_type; + } if (*pure_flags & SKBEDIT_F_INHERITDSFIELD) flags |= SKBEDIT_F_INHERITDSFIELD; } @@ -208,8 +270,10 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, params_new->flags = flags; if (flags & SKBEDIT_F_PRIORITY) params_new->priority = *priority; - if (flags & SKBEDIT_F_QUEUE_MAPPING) + if (flags & SKBEDIT_F_QUEUE_MAPPING) { params_new->queue_mapping = *queue_mapping; + params_new->mapping_mod = mapping_mod; + } if (flags & SKBEDIT_F_MARK) params_new->mark = *mark; if (flags & SKBEDIT_F_PTYPE) @@ -281,6 +345,13 @@ static int tcf_skbedit_dump(struct sk_buff *skb, struct tc_action *a, goto nla_put_failure; if (params->flags & SKBEDIT_F_INHERITDSFIELD) pure_flags |= SKBEDIT_F_INHERITDSFIELD; + if (params->flags & SKBEDIT_F_TXQ_HASH_MASK) { + if (nla_put_u16(skb, TCA_SKBEDIT_QUEUE_MAPPING_MAX, + params->queue_mapping + params->mapping_mod - 1)) + goto nla_put_failure; + + pure_flags |= params->flags & SKBEDIT_F_TXQ_HASH_MASK; + } if (pure_flags != 0 && nla_put(skb, TCA_SKBEDIT_FLAGS, sizeof(pure_flags), &pure_flags)) goto nla_put_failure; @@ -335,6 +406,7 @@ static size_t tcf_skbedit_get_fill_size(const struct tc_action *act) return nla_total_size(sizeof(struct tc_skbedit)) + nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_PRIORITY */ + nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING */ + + 
@@ -335,6 +406,7 @@ static size_t tcf_skbedit_get_fill_size(const struct tc_action *act)
 	return nla_total_size(sizeof(struct tc_skbedit))
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_PRIORITY */
 		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING */
+		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING_MAX */
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MARK */
 		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_PTYPE */
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MASK */