From patchwork Sat Aug 19 16:35:12 2023
X-Patchwork-Submitter: Victor Nogueira
X-Patchwork-Id: 13358653
X-Patchwork-Delegate: kuba@kernel.org
From: Victor Nogueira
To: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, netdev@vger.kernel.org
Cc: mleitner@redhat.com, vladbu@nvidia.com, horms@kernel.org,
    pctammela@mojatatu.com, kernel@mojatatu.com
Subject: [PATCH net-next v2 1/3] net/sched: Introduce tc block netdev tracking infra
Date: Sat, 19 Aug 2023 13:35:12 -0300
Message-ID: <20230819163515.2266246-2-victor@mojatatu.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230819163515.2266246-1-victor@mojatatu.com>
References: <20230819163515.2266246-1-victor@mojatatu.com>
X-Mailing-List: netdev@vger.kernel.org

The tc
block is a collection of netdevs/ports which allows qdiscs to share
filter block instances (as opposed to the traditional tc filter per
port). Example:

$ tc qdisc add dev ens7 ingress block 22
$ tc qdisc add dev ens8 ingress block 22

Now we can add a filter using the block index:

$ tc filter add block 22 protocol ip pref 25 \
  flower dst_ip 192.168.0.0/16 action drop

Up to this point, the block is unaware of its ports. This patch fixes
that and makes the tc block ports available to the datapath, as well as
to the control path for offloading.

Suggested-by: Jiri Pirko
Co-developed-by: Jamal Hadi Salim
Signed-off-by: Jamal Hadi Salim
Co-developed-by: Pedro Tammela
Signed-off-by: Pedro Tammela
Signed-off-by: Victor Nogueira
---
 include/net/sch_generic.h |  4 ++
 net/sched/cls_api.c       |  1 +
 net/sched/sch_api.c       | 79 +++++++++++++++++++++++++++++++++++++--
 net/sched/sch_generic.c   | 34 ++++++++++++++++-
 4 files changed, 112 insertions(+), 6 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index e92f73bb3198..824a0ecb5afc 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include

 struct Qdisc_ops;
 struct qdisc_walker;
@@ -126,6 +127,8 @@ struct Qdisc {
 	struct rcu_head rcu;
 	netdevice_tracker dev_tracker;
+	netdevice_tracker in_block_tracker;
+	netdevice_tracker eg_block_tracker;
 	/* private data */
 	long privdata[] ____cacheline_aligned;
 };
@@ -458,6 +461,7 @@ struct tcf_chain {
 };

 struct tcf_block {
+	struct xarray ports; /* datapath accessible */
 	/* Lock protects tcf_block and lifetime-management data of chains
 	 * attached to the block (refcnt, action_refcnt, explicitly_created).
	 */

diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index a193cc7b3241..a976792ef02f 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -1003,6 +1003,7 @@ static struct tcf_block *tcf_block_create(struct net *net, struct Qdisc *q,
 	refcount_set(&block->refcnt, 1);
 	block->net = net;
 	block->index = block_index;
+	xa_init(&block->ports);

 	/* Don't store q pointer for blocks which are shared */
 	if (!tcf_block_shared(block))
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index aa6b1fe65151..6c0c220cdb21 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -1180,6 +1180,71 @@ static int qdisc_graft(struct net_device *dev, struct Qdisc *parent,
 	return 0;
 }

+static void qdisc_block_undo_set(struct Qdisc *sch, struct nlattr **tca)
+{
+	if (tca[TCA_INGRESS_BLOCK])
+		sch->ops->ingress_block_set(sch, 0);
+
+	if (tca[TCA_EGRESS_BLOCK])
+		sch->ops->egress_block_set(sch, 0);
+}
+
+static int qdisc_block_add_dev(struct Qdisc *sch, struct net_device *dev,
+			       struct nlattr **tca,
+			       struct netlink_ext_ack *extack)
+{
+	const struct Qdisc_class_ops *cl_ops = sch->ops->cl_ops;
+	struct tcf_block *in_block = NULL;
+	struct tcf_block *eg_block = NULL;
+	unsigned long cl = 0;
+	int err;
+
+	if (tca[TCA_INGRESS_BLOCK]) {
+		/* works for both ingress and clsact */
+		cl = TC_H_MIN_INGRESS;
+		in_block = cl_ops->tcf_block(sch, cl, NULL);
+		if (!in_block) {
+			NL_SET_ERR_MSG(extack, "Shared ingress block missing");
+			return -EINVAL;
+		}
+
+		err = xa_insert(&in_block->ports, dev->ifindex, dev, GFP_KERNEL);
+		if (err) {
+			NL_SET_ERR_MSG(extack, "ingress block dev insert failed");
+			return err;
+		}
+
+		netdev_hold(dev, &sch->in_block_tracker, GFP_KERNEL);
+	}
+
+	if (tca[TCA_EGRESS_BLOCK]) {
+		cl = TC_H_MIN_EGRESS;
+		eg_block = cl_ops->tcf_block(sch, cl, NULL);
+		if (!eg_block) {
+			NL_SET_ERR_MSG(extack, "Shared egress block missing");
+			err = -EINVAL;
+			goto err_out;
+		}
+
+		err = xa_insert(&eg_block->ports, dev->ifindex, dev, GFP_KERNEL);
+		if (err) {
+			NL_SET_ERR_MSG(extack, "Egress block dev insert failed");
+			goto err_out;
+		}
+		netdev_hold(dev, &sch->eg_block_tracker, GFP_KERNEL);
+	}
+
+	return 0;
+err_out:
+	if (in_block) {
+		xa_erase(&in_block->ports, dev->ifindex);
+		netdev_put(dev, &sch->in_block_tracker);
+		NL_SET_ERR_MSG(extack, "ingress block dev insert failed");
+	}
+	return err;
+}
+
 static int qdisc_block_indexes_set(struct Qdisc *sch, struct nlattr **tca,
 				   struct netlink_ext_ack *extack)
 {
@@ -1270,7 +1335,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
 	sch = qdisc_alloc(dev_queue, ops, extack);
 	if (IS_ERR(sch)) {
 		err = PTR_ERR(sch);
-		goto err_out2;
+		goto err_out1;
 	}

 	sch->parent = parent;
@@ -1289,7 +1354,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
 		if (handle == 0) {
 			NL_SET_ERR_MSG(extack, "Maximum number of qdisc handles was exceeded");
 			err = -ENOSPC;
-			goto err_out3;
+			goto err_out2;
 		}
 	}
 	if (!netif_is_multiqueue(dev))
@@ -1311,7 +1376,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,

 	err = qdisc_block_indexes_set(sch, tca, extack);
 	if (err)
-		goto err_out3;
+		goto err_out2;

 	if (tca[TCA_STAB]) {
 		stab = qdisc_get_stab(tca[TCA_STAB], extack);
@@ -1350,6 +1415,10 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
 		qdisc_hash_add(sch, false);
 	trace_qdisc_create(ops, dev, parent);

+	err = qdisc_block_add_dev(sch, dev, tca, extack);
+	if (err)
+		goto err_out4;
+
 	return sch;

 err_out4:
@@ -1360,9 +1429,11 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
 	ops->destroy(sch);
 	qdisc_put_stab(rtnl_dereference(sch->stab));
 err_out3:
+	qdisc_block_undo_set(sch, tca);
+err_out2:
 	netdev_put(dev, &sch->dev_tracker);
 	qdisc_free(sch);
-err_out2:
+err_out1:
 	module_put(ops->owner);
 err_out:
 	*errp = err;
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 5d7e23f4cc0e..0fb51fd6f01e 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -1048,7 +1048,12 @@ static void
qdisc_free_cb(struct rcu_head *head)

 static void __qdisc_destroy(struct Qdisc *qdisc)
 {
-	const struct Qdisc_ops *ops = qdisc->ops;
+	struct net_device *dev = qdisc_dev(qdisc);
+	const struct Qdisc_ops *ops = qdisc->ops;
+	const struct Qdisc_class_ops *cops;
+	struct tcf_block *block;
+	unsigned long cl;
+	u32 block_index;

 #ifdef CONFIG_NET_SCHED
 	qdisc_hash_del(qdisc);
@@ -1059,11 +1064,36 @@ static void __qdisc_destroy(struct Qdisc *qdisc)

 	qdisc_reset(qdisc);

+	cops = ops->cl_ops;
+	if (ops->ingress_block_get) {
+		block_index = ops->ingress_block_get(qdisc);
+		if (block_index) {
+			cl = TC_H_MIN_INGRESS;
+			block = cops->tcf_block(qdisc, cl, NULL);
+			if (block) {
+				if (xa_erase(&block->ports, dev->ifindex))
+					netdev_put(dev, &qdisc->in_block_tracker);
+			}
+		}
+	}
+
+	if (ops->egress_block_get) {
+		block_index = ops->egress_block_get(qdisc);
+		if (block_index) {
+			cl = TC_H_MIN_EGRESS;
+			block = cops->tcf_block(qdisc, cl, NULL);
+			if (block) {
+				if (xa_erase(&block->ports, dev->ifindex))
+					netdev_put(dev, &qdisc->eg_block_tracker);
+			}
+		}
+	}
+
 	if (ops->destroy)
 		ops->destroy(qdisc);
 	module_put(ops->owner);
-	netdev_put(qdisc_dev(qdisc), &qdisc->dev_tracker);
+	netdev_put(dev, &qdisc->dev_tracker);

 	trace_qdisc_destroy(qdisc);

From patchwork Sat Aug 19 16:35:13 2023
X-Patchwork-Submitter: Victor Nogueira
X-Patchwork-Id: 13358654
X-Patchwork-Delegate: kuba@kernel.org
From: Victor Nogueira
To: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, netdev@vger.kernel.org
Cc: mleitner@redhat.com, vladbu@nvidia.com, horms@kernel.org,
    pctammela@mojatatu.com, kernel@mojatatu.com
Subject: [PATCH net-next v2 2/3] net/sched: cls_api: Expose tc block ports to the datapath
Date: Sat, 19 Aug 2023 13:35:13 -0300
Message-ID: <20230819163515.2266246-3-victor@mojatatu.com>
In-Reply-To: <20230819163515.2266246-1-victor@mojatatu.com>
References: <20230819163515.2266246-1-victor@mojatatu.com>

The datapath can now find the block of the port on which the packet
arrived and use it for various activities. The next patch adds a simple
action that multicasts a packet to all ports in the block except the
port on which it arrived.
Co-developed-by: Jamal Hadi Salim
Signed-off-by: Jamal Hadi Salim
Co-developed-by: Pedro Tammela
Signed-off-by: Pedro Tammela
Signed-off-by: Victor Nogueira
---
 include/net/sch_generic.h |  4 ++++
 net/sched/cls_api.c       | 10 +++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 824a0ecb5afc..c5defb166ef6 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -440,6 +440,8 @@ struct qdisc_skb_cb {
 	};
 #define QDISC_CB_PRIV_LEN 20
 	unsigned char		data[QDISC_CB_PRIV_LEN];
+	/* This should allow eBPF to continue to align */
+	u32 block_index;
 };

 typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
@@ -488,6 +490,8 @@ struct tcf_block {
 	struct mutex proto_destroy_lock; /* Lock for proto_destroy hashtable. */
 };

+struct tcf_block *tcf_block_lookup(struct net *net, u32 block_index);
+
 static inline bool lockdep_tcf_chain_is_locked(struct tcf_chain *chain)
 {
 	return lockdep_is_held(&chain->filter_chain_lock);
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index a976792ef02f..00e776cdd3fc 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -1011,12 +1011,13 @@ static struct tcf_block *tcf_block_create(struct net *net, struct Qdisc *q,
 	return block;
 }

-static struct tcf_block *tcf_block_lookup(struct net *net, u32 block_index)
+struct tcf_block *tcf_block_lookup(struct net *net, u32 block_index)
 {
 	struct tcf_net *tn = net_generic(net, tcf_net_id);

 	return idr_find(&tn->idr, block_index);
 }
+EXPORT_SYMBOL(tcf_block_lookup);

 static struct tcf_block *tcf_block_refcnt_get(struct net *net, u32 block_index)
 {
@@ -1737,9 +1738,13 @@ int tcf_classify(struct sk_buff *skb,
 		 const struct tcf_proto *tp,
 		 struct tcf_result *res, bool compat_mode)
 {
+	struct qdisc_skb_cb *qdisc_cb = qdisc_skb_cb(skb);
+
 #if !IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
 	u32 last_executed_chain = 0;

+	qdisc_cb->block_index = block ?
block->index : 0;
+
 	return __tcf_classify(skb, tp, tp, res, compat_mode, NULL, 0,
 			      &last_executed_chain);
 #else
@@ -1751,6 +1756,7 @@ int tcf_classify(struct sk_buff *skb,
 	int ret;

 	if (block) {
+		qdisc_cb->block_index = block->index;
 		ext = skb_ext_find(skb, TC_SKB_EXT);

 		if (ext && (ext->chain || ext->act_miss)) {
@@ -1778,6 +1784,8 @@ int tcf_classify(struct sk_buff *skb,
 			tp = rcu_dereference_bh(fchain->filter_chain);
 			last_executed_chain = fchain->index;
 		}
+	} else {
+		qdisc_cb->block_index = 0;
 	}

 	ret = __tcf_classify(skb, tp, orig_tp, res, compat_mode, n, act_index,

From patchwork Sat Aug 19 16:35:14 2023
X-Patchwork-Submitter: Victor Nogueira
X-Patchwork-Id: 13358655
X-Patchwork-Delegate: kuba@kernel.org
From: Victor Nogueira
To: jhs@mojatatu.com, xiyou.wangcong@gmail.com, jiri@resnulli.us,
    davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
    pabeni@redhat.com, netdev@vger.kernel.org
Cc: mleitner@redhat.com, vladbu@nvidia.com, horms@kernel.org,
    pctammela@mojatatu.com, kernel@mojatatu.com
Subject: [PATCH net-next v2 3/3] net/sched: act_blockcast: Introduce blockcast tc action
Date: Sat, 19 Aug 2023 13:35:14 -0300
Message-ID: <20230819163515.2266246-4-victor@mojatatu.com>
In-Reply-To: <20230819163515.2266246-1-victor@mojatatu.com>
References: <20230819163515.2266246-1-victor@mojatatu.com>

This action takes advantage of the tc block ports set in the datapath
and broadcasts a packet to all ports in that set, with the exception of
the port on which the packet arrived.

Example usage:

$ tc qdisc add dev ens7 ingress block 22
$ tc qdisc add dev ens8 ingress block 22

Now we can add a filter using the block index:

$ tc filter add block 22 protocol ip pref 25 \
  flower dst_ip 192.168.0.0/16 action blockcast

Co-developed-by: Jamal Hadi Salim
Signed-off-by: Jamal Hadi Salim
Co-developed-by: Pedro Tammela
Signed-off-by: Pedro Tammela
Signed-off-by: Victor Nogueira
---
 include/net/tc_wrapper.h  |   5 +
 net/sched/Kconfig         |  13 ++
 net/sched/Makefile        |   1 +
 net/sched/act_blockcast.c | 299 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 318 insertions(+)
 create mode 100644 net/sched/act_blockcast.c

diff --git a/include/net/tc_wrapper.h b/include/net/tc_wrapper.h
index a6d481b5bcbc..8ef848968be7 100644
--- a/include/net/tc_wrapper.h
+++ b/include/net/tc_wrapper.h
@@ -28,6 +28,7 @@ TC_INDIRECT_ACTION_DECLARE(tcf_csum_act);
 TC_INDIRECT_ACTION_DECLARE(tcf_ct_act);
 TC_INDIRECT_ACTION_DECLARE(tcf_ctinfo_act);
 TC_INDIRECT_ACTION_DECLARE(tcf_gact_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_blockcast_run);
 TC_INDIRECT_ACTION_DECLARE(tcf_gate_act);
 TC_INDIRECT_ACTION_DECLARE(tcf_ife_act);
 TC_INDIRECT_ACTION_DECLARE(tcf_ipt_act);
@@ -57,6 +58,10 @@ static inline int tc_act(struct sk_buff *skb, const struct tc_action *a,
 	if (a->ops->act ==
tcf_mirred_act)
 		return tcf_mirred_act(skb, a, res);
 #endif
+#if IS_BUILTIN(CONFIG_NET_ACT_BLOCKCAST)
+	if (a->ops->act == tcf_blockcast_run)
+		return tcf_blockcast_run(skb, a, res);
+#endif
 #if IS_BUILTIN(CONFIG_NET_ACT_PEDIT)
 	if (a->ops->act == tcf_pedit_act)
 		return tcf_pedit_act(skb, a, res);
diff --git a/net/sched/Kconfig b/net/sched/Kconfig
index 4b95cb1ac435..1b0edf1287d0 100644
--- a/net/sched/Kconfig
+++ b/net/sched/Kconfig
@@ -780,6 +780,19 @@ config NET_ACT_SIMP
 	  To compile this code as a module, choose M here: the
 	  module will be called act_simple.

+config NET_ACT_BLOCKCAST
+	tristate "TC block Multicast"
+	depends on NET_CLS_ACT
+	help
+	  Say Y here to add an action that will multicast an skb to the
+	  egress of all netdevs that belong to a tc block, except for the
+	  netdev on which the skb arrived.
+
+	  If unsure, say N.
+
+	  To compile this code as a module, choose M here: the
+	  module will be called act_blockcast.
+
 config NET_ACT_SKBEDIT
 	tristate "SKB Editing"
 	depends on NET_CLS_ACT
diff --git a/net/sched/Makefile b/net/sched/Makefile
index b5fd49641d91..2cdcf30645eb 100644
--- a/net/sched/Makefile
+++ b/net/sched/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_NET_ACT_IPT) += act_ipt.o
 obj-$(CONFIG_NET_ACT_NAT) += act_nat.o
 obj-$(CONFIG_NET_ACT_PEDIT) += act_pedit.o
 obj-$(CONFIG_NET_ACT_SIMP) += act_simple.o
+obj-$(CONFIG_NET_ACT_BLOCKCAST) += act_blockcast.o
 obj-$(CONFIG_NET_ACT_SKBEDIT) += act_skbedit.o
 obj-$(CONFIG_NET_ACT_CSUM) += act_csum.o
 obj-$(CONFIG_NET_ACT_MPLS) += act_mpls.o
diff --git a/net/sched/act_blockcast.c b/net/sched/act_blockcast.c
new file mode 100644
index 000000000000..85fd0289927c
--- /dev/null
+++ b/net/sched/act_blockcast.c
@@ -0,0 +1,299 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * net/sched/act_blockcast.c	Block Cast action
+ * Copyright (c) 2023, Mojatatu Networks
+ * Authors:	Jamal Hadi Salim
+ *		Victor Nogueira
+ *		Pedro Tammela
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+static struct tc_action_ops act_blockcast_ops;
+
+struct tcf_blockcast_act {
+	struct tc_action common;
+};
+
+#define to_blockcast_act(a) ((struct tcf_blockcast_act *)a)
+
+#define TCA_ID_BLOCKCAST 123
+#define CAST_RECURSION_LIMIT 4
+
+static DEFINE_PER_CPU(unsigned int, redirect_rec_level);
+
+static int cast_one(struct sk_buff *skb, const u32 ifindex)
+{
+	struct sk_buff *skb2 = skb;
+	int retval = TC_ACT_PIPE;
+	struct net_device *dev;
+	unsigned int rec_level;
+	bool expects_nh;
+	int mac_len;
+	bool at_nh;
+	int err;
+
+	rec_level = __this_cpu_inc_return(redirect_rec_level);
+	if (unlikely(rec_level > CAST_RECURSION_LIMIT)) {
+		net_warn_ratelimited("blockcast: exceeded redirect recursion limit on dev %s\n",
+				     netdev_name(skb->dev));
+		__this_cpu_dec(redirect_rec_level);
+		return TC_ACT_SHOT;
+	}
+
+	dev = dev_get_by_index_rcu(dev_net(skb->dev), ifindex);
+	if (unlikely(!dev)) {
+		__this_cpu_dec(redirect_rec_level);
+		return TC_ACT_SHOT;
+	}
+
+	if (unlikely(!(dev->flags & IFF_UP) || !netif_carrier_ok(dev))) {
+		net_notice_ratelimited("blockcast: device %s is down\n",
+				       dev->name);
+		__this_cpu_dec(redirect_rec_level);
+		return TC_ACT_SHOT;
+	}
+
+	skb2 = skb_clone(skb, GFP_ATOMIC);
+	if (!skb2) {
+		__this_cpu_dec(redirect_rec_level);
+		return retval;
+	}
+
+	nf_reset_ct(skb2);
+
+	expects_nh = !dev_is_mac_header_xmit(dev);
+	at_nh = skb->data == skb_network_header(skb);
+	if (at_nh != expects_nh) {
+		mac_len = skb_at_tc_ingress(skb) ?
+			  skb->mac_len :
+			  skb_network_header(skb) - skb_mac_header(skb);
+
+		if (expects_nh) {
+			/* target device/action expect data at nh */
+			skb_pull_rcsum(skb2, mac_len);
+		} else {
+			/* target device/action expect data at mac */
+			skb_push_rcsum(skb2, mac_len);
+		}
+	}
+
+	skb2->skb_iif = skb->dev->ifindex;
+	skb2->dev = dev;
+
+	err = dev_queue_xmit(skb2);
+	if (err)
+		retval = TC_ACT_SHOT;
+
+	__this_cpu_dec(redirect_rec_level);
+
+	return retval;
+}
+
+TC_INDIRECT_SCOPE int tcf_blockcast_run(struct sk_buff *skb,
+					const struct tc_action *a,
+					struct tcf_result *res)
+{
+	u32 block_index = qdisc_skb_cb(skb)->block_index;
+	struct tcf_blockcast_act *p = to_blockcast_act(a);
+	int action = READ_ONCE(p->tcf_action);
+	struct net *net = dev_net(skb->dev);
+	struct tcf_block *block;
+	struct net_device *dev;
+	u32 exception_ifindex;
+	unsigned long index;
+
+	block = tcf_block_lookup(net, block_index);
+	exception_ifindex = skb->dev->ifindex;
+
+	tcf_action_update_bstats(&p->common, skb);
+	tcf_lastuse_update(&p->tcf_tm);
+
+	if (!block || xa_empty(&block->ports))
+		goto act_done;
+
+	/* we are already under rcu protection, so iterating block is safe */
+	xa_for_each(&block->ports, index, dev) {
+		int err;
+
+		if (index == exception_ifindex)
+			continue;
+
+		err = cast_one(skb, dev->ifindex);
+		if (err != TC_ACT_PIPE)
+			printk("(%d)Failed to send to dev\t%d: %s\n", err,
+			       dev->ifindex, dev->name);
+	}
+
+act_done:
+	if (action == TC_ACT_SHOT)
+		tcf_action_inc_drop_qstats(&p->common);
+	return action;
+}
+
+static const struct nla_policy blockcast_policy[TCA_DEF_MAX + 1] = {
+	[TCA_DEF_PARMS] = { .len = sizeof(struct tc_defact) },
+};
+
+static int tcf_blockcast_init(struct net *net, struct nlattr *nla,
+			      struct nlattr *est, struct tc_action **a,
+			      struct tcf_proto *tp, u32 flags,
+			      struct netlink_ext_ack *extack)
+{
+	struct tc_action_net *tn = net_generic(net, act_blockcast_ops.net_id);
+	struct tcf_blockcast_act *p = to_blockcast_act(a);
+	bool bind = flags &
TCA_ACT_FLAGS_BIND;
+	struct nlattr *tb[TCA_DEF_MAX + 1];
+	struct tcf_chain *goto_ch = NULL;
+	struct tc_defact *parm;
+	bool exists = false;
+	int ret = 0, err;
+	u32 index;
+
+	if (!nla)
+		return -EINVAL;
+
+	err = nla_parse_nested_deprecated(tb, TCA_DEF_MAX, nla,
+					  blockcast_policy, NULL);
+	if (err < 0)
+		return err;
+
+	if (!tb[TCA_DEF_PARMS])
+		return -EINVAL;
+
+	parm = nla_data(tb[TCA_DEF_PARMS]);
+	index = parm->index;
+
+	err = tcf_idr_check_alloc(tn, &index, a, bind);
+	if (err < 0)
+		return err;
+
+	exists = err;
+	if (exists && bind)
+		return 0;
+
+	if (!exists) {
+		ret = tcf_idr_create_from_flags(tn, index, est, a,
+						&act_blockcast_ops, bind, flags);
+		if (ret) {
+			tcf_idr_cleanup(tn, index);
+			return ret;
+		}
+
+		ret = ACT_P_CREATED;
+	} else {
+		if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
+			err = -EEXIST;
+			goto release_idr;
+		}
+	}
+
+	err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
+	if (err < 0)
+		goto release_idr;
+
+	if (exists)
+		spin_lock_bh(&p->tcf_lock);
+	goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch);
+	if (exists)
+		spin_unlock_bh(&p->tcf_lock);
+
+	if (goto_ch)
+		tcf_chain_put_by_act(goto_ch);
+
+	return ret;
+release_idr:
+	tcf_idr_release(*a, bind);
+	return err;
+}
+
+static int tcf_blockcast_dump(struct sk_buff *skb, struct tc_action *a,
+			      int bind, int ref)
+{
+	unsigned char *b = skb_tail_pointer(skb);
+	struct tcf_blockcast_act *p = to_blockcast_act(a);
+	struct tc_defact opt = {
+		.index = p->tcf_index,
+		.refcnt = refcount_read(&p->tcf_refcnt) - ref,
+		.bindcnt = atomic_read(&p->tcf_bindcnt) - bind,
+	};
+	struct tcf_t t;
+
+	spin_lock_bh(&p->tcf_lock);
+	opt.action = p->tcf_action;
+	if (nla_put(skb, TCA_DEF_PARMS, sizeof(opt), &opt))
+		goto nla_put_failure;
+
+	tcf_tm_dump(&t, &p->tcf_tm);
+	if (nla_put_64bit(skb, TCA_DEF_TM, sizeof(t), &t, TCA_DEF_PAD))
+		goto nla_put_failure;
+	spin_unlock_bh(&p->tcf_lock);
+
+	return skb->len;
+
+nla_put_failure:
+	spin_unlock_bh(&p->tcf_lock);
+	nlmsg_trim(skb, b);
+	return -1;
+}
+
+static struct tc_action_ops act_blockcast_ops = {
+	.kind		= "blockcast",
+	.id		= TCA_ID_BLOCKCAST,
+	.owner		= THIS_MODULE,
+	.act		= tcf_blockcast_run,
+	.dump		= tcf_blockcast_dump,
+	.init		= tcf_blockcast_init,
+	.size		= sizeof(struct tcf_blockcast_act),
+};
+
+static __net_init int blockcast_init_net(struct net *net)
+{
+	struct tc_action_net *tn = net_generic(net, act_blockcast_ops.net_id);
+
+	return tc_action_net_init(net, tn, &act_blockcast_ops);
+}
+
+static void __net_exit blockcast_exit_net(struct list_head *net_list)
+{
+	tc_action_net_exit(net_list, act_blockcast_ops.net_id);
+}
+
+static struct pernet_operations blockcast_net_ops = {
+	.init = blockcast_init_net,
+	.exit_batch = blockcast_exit_net,
+	.id   = &act_blockcast_ops.net_id,
+	.size = sizeof(struct tc_action_net),
+};
+
+MODULE_AUTHOR("Mojatatu Networks, Inc");
+MODULE_LICENSE("GPL");
+
+static int __init blockcast_init_module(void)
+{
+	int ret = tcf_register_action(&act_blockcast_ops, &blockcast_net_ops);
+
+	if (!ret)
+		pr_info("blockcast TC action Loaded\n");
+	return ret;
+}
+
+static void __exit blockcast_cleanup_module(void)
+{
+	tcf_unregister_action(&act_blockcast_ops, &blockcast_net_ops);
+}
+
+module_init(blockcast_init_module);
+module_exit(blockcast_cleanup_module);