From patchwork Wed Dec 8 14:34:07 2021
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 12664527
X-Patchwork-Delegate: kuba@kernel.org
From: xiangxia.m.yue@gmail.com
To: netdev@vger.kernel.org
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
    "David S. Miller", Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
    Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
    Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
    Antoine Tenart, Wei Wang, Arnd Bergmann
Subject: [net-next v2 1/2] net: sched: use queue_mapping to pick tx queue
Date: Wed, 8 Dec 2021 22:34:07 +0800
Message-Id: <20211208143408.7047-2-xiangxia.m.yue@gmail.com>
In-Reply-To: <20211208143408.7047-1-xiangxia.m.yue@gmail.com>
References: <20211208143408.7047-1-xiangxia.m.yue@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Tonghao Zhang

This patch fixes the following issue:

* If we install tc filters with act_skbedit in the clsact hook, the
  action does not take effect, because netdev_core_pick_tx() overwrites
  the queue_mapping.

  $ tc filter ... action skbedit queue_mapping 1

This patch is also useful in the following scenario:

* We can use FQ + EDT to implement efficient policies. Tx queues are
  picked by xps, by the ndo_select_queue of the netdev driver, or by
  the skb hash in netdev_core_pick_tx(). In fact, the netdev driver and
  the skb hash are _not_ under user control. xps uses a CPU map to
  select Tx queues, but in most cases we cannot figure out which
  pod/container task_struct is running on a given CPU. With clsact
  filters, however, we can classify the traffic of one pod/container to
  one Tx queue. Why?

  In a container networking environment, there are two kinds of
  pods/containers/net-namespaces. For the first kind (e.g. P1, P2),
  high throughput is key for the applications, but to avoid running out
  of network resources, the outbound traffic of these pods is limited:
  they use or share dedicated Tx queues with an HTB/TBF/FQ Qdisc
  attached. For the other kind (e.g. Pn), low data-access latency is
  key and the traffic is not limited; these pods use or share other
  dedicated Tx queues with a FIFO Qdisc attached. This choice provides
  two benefits. First, contention on the HTB/FQ Qdisc lock is
  significantly reduced since fewer CPUs contend for the same queue.
  More importantly, Qdisc contention can be eliminated completely if
  each CPU has its own FIFO Qdisc for the second kind of pods. A
  mechanism is therefore needed to classify traffic per pod/container
  to different Tx queues.

Note that clsact runs outside of any Qdisc, while a Qdisc can run a
classifier to select a sub-queue under its lock. Since recording the
decision in the skb alone seems a little heavy-handed, this patch
introduces a per-CPU variable, as suggested by Eric. The skip_txqueue
flag is cleared when the queue is picked, so that it takes effect only
on the current netdev and is not inherited by the next netdev in the
stack, for example (not the usual case): eth0 (macvlan in pod, skbedit
queue_mapping) -> eth0.3 (vlan on host) -> eth0 (ixgbe on host).

    +----+      +----+      +----+
    | P1 |      | P2 |      | Pn |
    +----+      +----+      +----+
      |           |           |
      +-----------+-----------+
                  |
                  | clsact/skbedit
                  |      MQ
                  v
      +-----------+-----------+
      | q0        | q1        | qn
      v           v           v
    HTB/FQ      HTB/FQ  ...  FIFO
Miller" Cc: Jakub Kicinski Cc: Jonathan Lemon Cc: Eric Dumazet Cc: Alexander Lobakin Cc: Paolo Abeni Cc: Talal Ahmad Cc: Kevin Hao Cc: Ilias Apalodimas Cc: Kees Cook Cc: Kumar Kartikeya Dwivedi Cc: Antoine Tenart Cc: Wei Wang Cc: Arnd Bergmann Suggested-by: Eric Dumazet Signed-off-by: Tonghao Zhang --- include/linux/netdevice.h | 21 +++++++++++++++++++++ net/core/dev.c | 6 +++++- net/sched/act_skbedit.c | 4 +++- 3 files changed, 29 insertions(+), 2 deletions(-) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 65117f01d5f2..64f12a819246 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -2997,6 +2997,7 @@ struct softnet_data { /* written and read only by owning cpu: */ struct { u16 recursion; + u8 skip_txqueue; u8 more; } xmit; #ifdef CONFIG_RPS @@ -4633,6 +4634,26 @@ static inline netdev_tx_t netdev_start_xmit(struct sk_buff *skb, struct net_devi return rc; } +static inline void netdev_xmit_skip_txqueue(void) +{ + __this_cpu_write(softnet_data.xmit.skip_txqueue, 1); +} + +static inline bool netdev_xmit_txqueue_skipped(void) +{ + return __this_cpu_read(softnet_data.xmit.skip_txqueue); +} + +static inline struct netdev_queue * +netdev_tx_queue_mapping(struct net_device *dev, struct sk_buff *skb) +{ + int qm = skb_get_queue_mapping(skb); + + /* Take effect only on current netdev. */ + __this_cpu_write(softnet_data.xmit.skip_txqueue, 0); + return netdev_get_tx_queue(dev, netdev_cap_txqueue(dev, qm)); +} + int netdev_class_create_file_ns(const struct class_attribute *class_attr, const void *ns); void netdev_class_remove_file_ns(const struct class_attribute *class_attr, diff --git a/net/core/dev.c b/net/core/dev.c index aba8acc1238c..a64297a4cc89 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -4069,7 +4069,11 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev) else skb_dst_force(skb); - txq = netdev_core_pick_tx(dev, skb, sb_dev); + if (netdev_xmit_txqueue_skipped()) + txq = netdev_tx_queue_mapping(dev, skb); + else + txq = netdev_core_pick_tx(dev, skb, sb_dev); + q = rcu_dereference_bh(txq->qdisc); trace_net_dev_queue(skb); diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c index d30ecbfc8f84..498feedad70a 100644 --- a/net/sched/act_skbedit.c +++ b/net/sched/act_skbedit.c @@ -58,8 +58,10 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a, } } if (params->flags & SKBEDIT_F_QUEUE_MAPPING && - skb->dev->real_num_tx_queues > params->queue_mapping) + skb->dev->real_num_tx_queues > params->queue_mapping) { + netdev_xmit_skip_txqueue(); skb_set_queue_mapping(skb, params->queue_mapping); + } if (params->flags & SKBEDIT_F_MARK) { skb->mark &= ~params->mask; skb->mark |= params->mark & params->mask; From patchwork Wed Dec 8 14:34:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tonghao Zhang X-Patchwork-Id: 12664529 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F4100C433F5 for ; Wed, 8 Dec 2021 14:34:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235235AbhLHOiQ (ORCPT ); Wed, 8 Dec 2021 09:38:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42716 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
From patchwork Wed Dec 8 14:34:08 2021
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 12664529
X-Patchwork-Delegate: kuba@kernel.org
From: xiangxia.m.yue@gmail.com
To: netdev@vger.kernel.org
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
    "David S. Miller", Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
    Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
    Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
    Antoine Tenart, Wei Wang, Arnd Bergmann
Subject: [net-next v2 2/2] net: sched: support hash/classid selecting tx queue
Date: Wed, 8 Dec 2021 22:34:08 +0800
Message-Id: <20211208143408.7047-3-xiangxia.m.yue@gmail.com>
In-Reply-To: <20211208143408.7047-1-xiangxia.m.yue@gmail.com>
References: <20211208143408.7047-1-xiangxia.m.yue@gmail.com>
X-Mailing-List: netdev@vger.kernel.org

From: Tonghao Zhang

This patch allows users to select a queue_mapping range, from A to B,
and to use the skb hash or the cgroup classid to select a Tx queue
within that range, so that packets are load-balanced across queues
A to B.

  $ tc filter ... action skbedit queue_mapping hash-type normal 0 4

"skbedit queue_mapping QUEUE_MAPPING" [0] is enhanced with two flags:
SKBEDIT_F_QUEUE_MAPPING_HASH and SKBEDIT_F_QUEUE_MAPPING_CLASSID. The
range bounds are unsigned 16-bit values in decimal format.

[0]: https://man7.org/linux/man-pages/man8/tc-skbedit.8.html
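To make the mapping concrete: with "queue_mapping hash-type normal 0 4"
the action computes mapping_mod = 4 - 0 + 1 = 5 and selects queue
0 + (hash % 5), spreading flows over queues 0..4. A classid-based
variant might look as follows (illustrative; the "hash-type classid"
spelling is extrapolated from the syntax above, and the queue range is
hypothetical):

  $ tc filter add dev eth0 egress protocol ip flower \
        action skbedit queue_mapping hash-type classid 2 6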
Miller" Cc: Jakub Kicinski Cc: Jonathan Lemon Cc: Eric Dumazet Cc: Alexander Lobakin Cc: Paolo Abeni Cc: Talal Ahmad Cc: Kevin Hao Cc: Ilias Apalodimas Cc: Kees Cook Cc: Kumar Kartikeya Dwivedi Cc: Antoine Tenart Cc: Wei Wang Cc: Arnd Bergmann Signed-off-by: Tonghao Zhang Reported-by: kernel test robot Reported-by: kernel test robot --- include/net/tc_act/tc_skbedit.h | 1 + include/uapi/linux/tc_act/tc_skbedit.h | 6 +++ net/sched/act_skbedit.c | 58 ++++++++++++++++++++++++-- 3 files changed, 61 insertions(+), 4 deletions(-) diff --git a/include/net/tc_act/tc_skbedit.h b/include/net/tc_act/tc_skbedit.h index 00bfee70609e..ee96e0fa6566 100644 --- a/include/net/tc_act/tc_skbedit.h +++ b/include/net/tc_act/tc_skbedit.h @@ -17,6 +17,7 @@ struct tcf_skbedit_params { u32 mark; u32 mask; u16 queue_mapping; + u16 mapping_mod; u16 ptype; struct rcu_head rcu; }; diff --git a/include/uapi/linux/tc_act/tc_skbedit.h b/include/uapi/linux/tc_act/tc_skbedit.h index 800e93377218..8df288078dde 100644 --- a/include/uapi/linux/tc_act/tc_skbedit.h +++ b/include/uapi/linux/tc_act/tc_skbedit.h @@ -29,6 +29,11 @@ #define SKBEDIT_F_PTYPE 0x8 #define SKBEDIT_F_MASK 0x10 #define SKBEDIT_F_INHERITDSFIELD 0x20 +#define SKBEDIT_F_QUEUE_MAPPING_HASH 0x40 +#define SKBEDIT_F_QUEUE_MAPPING_CLASSID 0x80 + +#define SKBEDIT_F_QUEUE_MAPPING_HASH_MASK (SKBEDIT_F_QUEUE_MAPPING_HASH | \ + SKBEDIT_F_QUEUE_MAPPING_CLASSID) struct tc_skbedit { tc_gen; @@ -45,6 +50,7 @@ enum { TCA_SKBEDIT_PTYPE, TCA_SKBEDIT_MASK, TCA_SKBEDIT_FLAGS, + TCA_SKBEDIT_QUEUE_MAPPING_MAX, __TCA_SKBEDIT_MAX }; #define TCA_SKBEDIT_MAX (__TCA_SKBEDIT_MAX - 1) diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c index 498feedad70a..355b43999a4a 100644 --- a/net/sched/act_skbedit.c +++ b/net/sched/act_skbedit.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -23,6 +24,25 @@ static unsigned int skbedit_net_id; static struct tc_action_ops act_skbedit_ops; +static u16 tcf_skbedit_hash(struct tcf_skbedit_params *params, + struct sk_buff *skb) +{ + u16 queue_mapping = params->queue_mapping; + u16 mapping_mod = params->mapping_mod; + u32 hash; + + if (!(params->flags & SKBEDIT_F_QUEUE_MAPPING_HASH_MASK)) + return netdev_cap_txqueue(skb->dev, queue_mapping); + + if (params->flags & SKBEDIT_F_QUEUE_MAPPING_CLASSID) + hash = jhash_1word(task_get_classid(skb), 0); + else if (params->flags & SKBEDIT_F_QUEUE_MAPPING_HASH) + hash = skb_get_hash(skb); + + queue_mapping = queue_mapping + hash % mapping_mod; + return netdev_cap_txqueue(skb->dev, queue_mapping); +} + static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a, struct tcf_result *res) { @@ -57,10 +77,9 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a, break; } } - if (params->flags & SKBEDIT_F_QUEUE_MAPPING && - skb->dev->real_num_tx_queues > params->queue_mapping) { + if (params->flags & SKBEDIT_F_QUEUE_MAPPING) { netdev_xmit_skip_txqueue(); - skb_set_queue_mapping(skb, params->queue_mapping); + skb_set_queue_mapping(skb, tcf_skbedit_hash(params, skb)); } if (params->flags & SKBEDIT_F_MARK) { skb->mark &= ~params->mask; @@ -94,6 +113,7 @@ static const struct nla_policy skbedit_policy[TCA_SKBEDIT_MAX + 1] = { [TCA_SKBEDIT_PTYPE] = { .len = sizeof(u16) }, [TCA_SKBEDIT_MASK] = { .len = sizeof(u32) }, [TCA_SKBEDIT_FLAGS] = { .len = sizeof(u64) }, + [TCA_SKBEDIT_QUEUE_MAPPING_MAX] = { .len = sizeof(u16) }, }; static int tcf_skbedit_init(struct net *net, struct nlattr *nla, @@ -110,6 +130,7 @@ static int tcf_skbedit_init(struct net 
@@ -110,6 +130,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	struct tcf_skbedit *d;
 	u32 flags = 0, *priority = NULL, *mark = NULL, *mask = NULL;
 	u16 *queue_mapping = NULL, *ptype = NULL;
+	u16 mapping_mod = 0;
 	bool exists = false;
 	int ret = 0, err;
 	u32 index;
@@ -157,6 +178,25 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 
 		if (*pure_flags & SKBEDIT_F_INHERITDSFIELD)
 			flags |= SKBEDIT_F_INHERITDSFIELD;
+		if (*pure_flags & SKBEDIT_F_QUEUE_MAPPING_HASH_MASK) {
+			u16 *queue_mapping_max;
+
+			if (!tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX])
+				return -EINVAL;
+
+			if (!tb[TCA_SKBEDIT_QUEUE_MAPPING])
+				return -EINVAL;
+
+			queue_mapping_max =
+				nla_data(tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]);
+
+			if (*queue_mapping_max < *queue_mapping)
+				return -EINVAL;
+
+			mapping_mod = *queue_mapping_max - *queue_mapping + 1;
+			flags |= *pure_flags &
+				 SKBEDIT_F_QUEUE_MAPPING_HASH_MASK;
+		}
 	}
 
 	parm = nla_data(tb[TCA_SKBEDIT_PARMS]);
@@ -206,8 +246,10 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	params_new->flags = flags;
 	if (flags & SKBEDIT_F_PRIORITY)
 		params_new->priority = *priority;
-	if (flags & SKBEDIT_F_QUEUE_MAPPING)
+	if (flags & SKBEDIT_F_QUEUE_MAPPING) {
 		params_new->queue_mapping = *queue_mapping;
+		params_new->mapping_mod = mapping_mod;
+	}
 	if (flags & SKBEDIT_F_MARK)
 		params_new->mark = *mark;
 	if (flags & SKBEDIT_F_PTYPE)
@@ -274,6 +316,14 @@ static int tcf_skbedit_dump(struct sk_buff *skb, struct tc_action *a,
 		goto nla_put_failure;
 	if (params->flags & SKBEDIT_F_INHERITDSFIELD)
 		pure_flags |= SKBEDIT_F_INHERITDSFIELD;
+	if (params->flags & SKBEDIT_F_QUEUE_MAPPING_HASH_MASK) {
+		if (nla_put_u16(skb, TCA_SKBEDIT_QUEUE_MAPPING_MAX,
+				params->queue_mapping + params->mapping_mod - 1))
+			goto nla_put_failure;
+
+		pure_flags |= params->flags &
+			      SKBEDIT_F_QUEUE_MAPPING_HASH_MASK;
+	}
 	if (pure_flags != 0 &&
 	    nla_put(skb, TCA_SKBEDIT_FLAGS, sizeof(pure_flags), &pure_flags))
 		goto nla_put_failure;
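Note that for the SKBEDIT_F_QUEUE_MAPPING_CLASSID case above,
task_get_classid() returns the net_cls cgroup classid of the sending
task, so a classid must be assigned to the pod's cgroup for the hash to
spread traffic. An illustrative cgroup v1 setup (the path, classid
value, and PID are hypothetical):

  $ mkdir /sys/fs/cgroup/net_cls/pod1
  $ echo 0x100001 > /sys/fs/cgroup/net_cls/pod1/net_cls.classid
  $ echo $POD_PID > /sys/fs/cgroup/net_cls/pod1/tasks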