From patchwork Wed Jan 26 14:32:06 2022
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 12725097
X-Patchwork-Delegate: kuba@kernel.org
From: xiangxia.m.yue@gmail.com
To: netdev@vger.kernel.org
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
 "David S. Miller", Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
 Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
 Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi, Antoine Tenart,
 Wei Wang, Arnd Bergmann
Subject: [net-next v8 2/2] net: sched: support hash/classid/cpuid selecting tx queue
Date: Wed, 26 Jan 2022 22:32:06 +0800
Message-Id: <20220126143206.23023-3-xiangxia.m.yue@gmail.com>
X-Mailer: git-send-email 2.30.1 (Apple Git-130)
In-Reply-To: <20220126143206.23023-1-xiangxia.m.yue@gmail.com>
References: <20220126143206.23023-1-xiangxia.m.yue@gmail.com>

From: Tonghao Zhang <xiangxia.m.yue@gmail.com>

This patch allows the user to select a queue_mapping range, from A to B,
and to use the skb hash, cgroup classid, or CPU id to pick a Tx queue
within that range, so packets are load-balanced across queues A..B. Both
bounds are unsigned 16-bit values in decimal format.

$ tc filter ... action skbedit queue_mapping skbhash A B

"skbedit queue_mapping QUEUE_MAPPING" (from "man 8 tc-skbedit") is
enhanced with the flags:
* SKBEDIT_F_TXQ_SKBHASH
* SKBEDIT_F_TXQ_CLASSID
* SKBEDIT_F_TXQ_CPUID

These flags select skb->hash, the cgroup classid, or the CPU id as the
key for distributing packets, so the same range of Tx queues can be
shared by different flows, cgroups, or CPUs in a variety of scenarios.
For example, if flow F1 should share range R1 with flow F2, set
SKBEDIT_F_TXQ_SKBHASH so that skb->hash spreads both flows over the
shared queues. If cgroup C1 should share R1 with cgroups C2..Cn, use
SKBEDIT_F_TXQ_CLASSID instead. In other scenarios, C1 can keep R1 to
itself while each Cn uses its own range Rn.

Cc: Jamal Hadi Salim
Cc: Cong Wang
Cc: Jiri Pirko
Cc: "David S. Miller"
Cc: Jakub Kicinski
Cc: Jonathan Lemon
Cc: Eric Dumazet
Cc: Alexander Lobakin
Cc: Paolo Abeni
Cc: Talal Ahmad
Cc: Kevin Hao
Cc: Ilias Apalodimas
Cc: Kees Cook
Cc: Kumar Kartikeya Dwivedi
Cc: Antoine Tenart
Cc: Wei Wang
Cc: Arnd Bergmann
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
---
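A concrete example of the intended usage, expanded from the command
sketched above (a sketch only: the device name, the flower match, and
the queue range are placeholders, and the final keyword syntax depends
on the matching iproute2 patch):

$ tc qdisc add dev eth0 clsact
$ tc filter add dev eth0 egress protocol ip flower ip_proto tcp \
      action skbedit queue_mapping skbhash 0 3

This spreads matching TCP flows over tx queues 0..3, keyed by skb->hash
(SKBEDIT_F_TXQ_SKBHASH); the classid and cpuid variants are requested
the same way, swapping the key for task_get_classid() or
raw_smp_processor_id().
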
 include/net/tc_act/tc_skbedit.h        |  1 +
 include/uapi/linux/tc_act/tc_skbedit.h |  8 +++
 net/sched/act_skbedit.c                | 78 +++++++++++++++++++++++++-
 3 files changed, 84 insertions(+), 3 deletions(-)

diff --git a/include/net/tc_act/tc_skbedit.h b/include/net/tc_act/tc_skbedit.h
index 00bfee70609e..ee96e0fa6566 100644
--- a/include/net/tc_act/tc_skbedit.h
+++ b/include/net/tc_act/tc_skbedit.h
@@ -17,6 +17,7 @@ struct tcf_skbedit_params {
 	u32 mark;
 	u32 mask;
 	u16 queue_mapping;
+	u16 mapping_mod;
 	u16 ptype;
 	struct rcu_head rcu;
 };
diff --git a/include/uapi/linux/tc_act/tc_skbedit.h b/include/uapi/linux/tc_act/tc_skbedit.h
index 800e93377218..5ea1438a4d88 100644
--- a/include/uapi/linux/tc_act/tc_skbedit.h
+++ b/include/uapi/linux/tc_act/tc_skbedit.h
@@ -29,6 +29,13 @@
 #define SKBEDIT_F_PTYPE			0x8
 #define SKBEDIT_F_MASK			0x10
 #define SKBEDIT_F_INHERITDSFIELD	0x20
+#define SKBEDIT_F_TXQ_SKBHASH		0x40
+#define SKBEDIT_F_TXQ_CLASSID		0x80
+#define SKBEDIT_F_TXQ_CPUID		0x100
+
+#define SKBEDIT_F_TXQ_HASH_MASK (SKBEDIT_F_TXQ_SKBHASH | \
+				 SKBEDIT_F_TXQ_CLASSID | \
+				 SKBEDIT_F_TXQ_CPUID)
 
 struct tc_skbedit {
 	tc_gen;
@@ -45,6 +52,7 @@ enum {
 	TCA_SKBEDIT_PTYPE,
 	TCA_SKBEDIT_MASK,
 	TCA_SKBEDIT_FLAGS,
+	TCA_SKBEDIT_QUEUE_MAPPING_MAX,
 	__TCA_SKBEDIT_MAX
 };
 #define TCA_SKBEDIT_MAX (__TCA_SKBEDIT_MAX - 1)
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index d5799b4fc499..4c209689f8de 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -10,6 +10,7 @@
 #include <linux/kernel.h>
 #include <linux/skbuff.h>
 #include <linux/rtnetlink.h>
+#include <net/cls_cgroup.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
 #include <net/ip.h>
@@ -23,6 +24,38 @@
 static unsigned int skbedit_net_id;
 static struct tc_action_ops act_skbedit_ops;
 
+static u16 tcf_skbedit_hash(struct tcf_skbedit_params *params,
+			    struct sk_buff *skb)
+{
+	u32 mapping_hash_type = params->flags & SKBEDIT_F_TXQ_HASH_MASK;
+	u16 queue_mapping = params->queue_mapping;
+	u16 mapping_mod = params->mapping_mod;
+	u32 hash = 0;
+
+	switch (mapping_hash_type) {
+	case SKBEDIT_F_TXQ_CLASSID:
+		hash = task_get_classid(skb);
+		break;
+	case SKBEDIT_F_TXQ_SKBHASH:
+		hash = skb_get_hash(skb);
+		break;
+	case SKBEDIT_F_TXQ_CPUID:
+		hash = raw_smp_processor_id();
+		break;
+	case 0:
+		/* Hash type isn't specified. In this case:
+		 * hash % mapping_mod == 0
+		 */
+		break;
+	default:
+		net_warn_ratelimited("The type of queue_mapping hash is not supported. 0x%x\n",
+				     mapping_hash_type);
+	}
+
+	queue_mapping = queue_mapping + hash % mapping_mod;
+	return netdev_cap_txqueue(skb->dev, queue_mapping);
+}
+
 static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
 			   struct tcf_result *res)
 {
@@ -62,7 +95,7 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
 #ifdef CONFIG_NET_EGRESS
 		netdev_xmit_skip_txqueue(true);
 #endif
-		skb_set_queue_mapping(skb, params->queue_mapping);
+		skb_set_queue_mapping(skb, tcf_skbedit_hash(params, skb));
 	}
 	if (params->flags & SKBEDIT_F_MARK) {
 		skb->mark &= ~params->mask;
@@ -96,6 +129,7 @@ static const struct nla_policy skbedit_policy[TCA_SKBEDIT_MAX + 1] = {
 	[TCA_SKBEDIT_PTYPE]		= { .len = sizeof(u16) },
 	[TCA_SKBEDIT_MASK]		= { .len = sizeof(u32) },
 	[TCA_SKBEDIT_FLAGS]		= { .len = sizeof(u64) },
+	[TCA_SKBEDIT_QUEUE_MAPPING_MAX]	= { .len = sizeof(u16) },
 };
 
 static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
@@ -112,6 +146,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	struct tcf_skbedit *d;
 	u32 flags = 0, *priority = NULL, *mark = NULL, *mask = NULL;
 	u16 *queue_mapping = NULL, *ptype = NULL;
+	u16 mapping_mod = 1;
 	bool exists = false;
 	int ret = 0, err;
 	u32 index;
@@ -156,7 +191,34 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	if (tb[TCA_SKBEDIT_FLAGS] != NULL) {
 		u64 *pure_flags = nla_data(tb[TCA_SKBEDIT_FLAGS]);
-
+		u64 mapping_hash_type;
+
+		mapping_hash_type = *pure_flags & SKBEDIT_F_TXQ_HASH_MASK;
+		if (mapping_hash_type) {
+			u16 *queue_mapping_max;
+
+			/* Hash types are mutually exclusive. */
+			if (mapping_hash_type & (mapping_hash_type - 1)) {
+				NL_SET_ERR_MSG_MOD(extack, "Multi types of hash are specified.");
+				return -EINVAL;
+			}
+
+			if (!tb[TCA_SKBEDIT_QUEUE_MAPPING] ||
+			    !tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]) {
+				NL_SET_ERR_MSG_MOD(extack, "Missing required range of queue_mapping.");
+				return -EINVAL;
+			}
+
+			queue_mapping_max =
+				nla_data(tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]);
+			if (*queue_mapping_max < *queue_mapping) {
+				NL_SET_ERR_MSG_MOD(extack, "The range of queue_mapping is invalid, max < min.");
+				return -EINVAL;
+			}
+
+			mapping_mod = *queue_mapping_max - *queue_mapping + 1;
+			flags |= mapping_hash_type;
+		}
 		if (*pure_flags & SKBEDIT_F_INHERITDSFIELD)
 			flags |= SKBEDIT_F_INHERITDSFIELD;
 	}
 
@@ -208,8 +270,10 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
 	params_new->flags = flags;
 	if (flags & SKBEDIT_F_PRIORITY)
 		params_new->priority = *priority;
-	if (flags & SKBEDIT_F_QUEUE_MAPPING)
+	if (flags & SKBEDIT_F_QUEUE_MAPPING) {
 		params_new->queue_mapping = *queue_mapping;
+		params_new->mapping_mod = mapping_mod;
+	}
 	if (flags & SKBEDIT_F_MARK)
 		params_new->mark = *mark;
 	if (flags & SKBEDIT_F_PTYPE)
@@ -276,6 +340,13 @@ static int tcf_skbedit_dump(struct sk_buff *skb, struct tc_action *a,
 		goto nla_put_failure;
 	if (params->flags & SKBEDIT_F_INHERITDSFIELD)
 		pure_flags |= SKBEDIT_F_INHERITDSFIELD;
+	if (params->flags & SKBEDIT_F_TXQ_HASH_MASK) {
+		if (nla_put_u16(skb, TCA_SKBEDIT_QUEUE_MAPPING_MAX,
+				params->queue_mapping + params->mapping_mod - 1))
+			goto nla_put_failure;
+
+		pure_flags |= params->flags & SKBEDIT_F_TXQ_HASH_MASK;
+	}
 	if (pure_flags != 0 &&
 	    nla_put(skb, TCA_SKBEDIT_FLAGS, sizeof(pure_flags), &pure_flags))
 		goto nla_put_failure;
@@ -325,6 +396,7 @@ static size_t tcf_skbedit_get_fill_size(const struct tc_action *act)
 	return nla_total_size(sizeof(struct tc_skbedit))
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_PRIORITY */
 		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING */
+		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING_MAX */
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MARK */
 		+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_PTYPE */
 		+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MASK */
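
To make the selection arithmetic concrete, here is a minimal userspace C
model of tcf_skbedit_hash() plus the fallback behaviour of
netdev_cap_txqueue(); the queue range 4..7, the 16-queue device, and the
sample hash values are assumed example inputs, not part of the patch:

#include <stdint.h>
#include <stdio.h>

/* Model of the kernel's selection: txq = queue_mapping + hash % mapping_mod,
 * then capped to the device's real queue count.
 */
static uint16_t pick_txq(uint16_t queue_mapping, uint16_t mapping_mod,
			 uint32_t hash, uint16_t real_num_tx_queues)
{
	uint16_t txq = queue_mapping + hash % mapping_mod;

	/* netdev_cap_txqueue() falls back to queue 0 when out of range. */
	return txq < real_num_tx_queues ? txq : 0;
}

int main(void)
{
	/* Sample keys; in the kernel these come from skb_get_hash(),
	 * task_get_classid() or raw_smp_processor_id().
	 */
	uint32_t hash[] = { 0x9e3779b9, 0x12345678, 0xdeadbeef, 0x0badc0de };

	/* A rule like "queue_mapping skbhash 4 7" gives queue_mapping = 4
	 * and mapping_mod = 7 - 4 + 1 = 4, here on an assumed device with
	 * 16 real tx queues.
	 */
	for (int i = 0; i < 4; i++)
		printf("hash 0x%08x -> txq %u\n", (unsigned)hash[i],
		       (unsigned)pick_txq(4, 4, hash[i], 16));
	return 0;
}

With no hash type flag set, hash stays 0, hash % mapping_mod == 0, and the
action behaves exactly like the existing fixed queue_mapping.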