[net-next,v2,1/3] net/sched: add retpoline wrapper for tc

Message ID 20221128154456.689326-2-pctammela@mojatatu.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series net/sched: retpoline wrappers for tc

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 1 maintainers not CCed: bpf@vger.kernel.org
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch fail ERROR: trailing statements should be on next line
                       WARNING: From:/Signed-off-by: email address mismatch: 'From: Pedro Tammela <pctammela@gmail.com>' != 'Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>'
                       WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
                       WARNING: braces {} are not necessary for single statement blocks
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Pedro Tammela Nov. 28, 2022, 3:44 p.m. UTC
On kernels compiled with CONFIG_RETPOLINE and CONFIG_NET_TC_INDIRECT_WRAPPER,
optimize actions and filters that are compiled as built-ins into a direct call.
The calls are ordered alphabetically, but new ones should ideally be
added last.

In subsequent patches we expose the classifier and action functions
and wire the wrapper into tc.

Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
---
 include/net/tc_wrapper.h | 232 +++++++++++++++++++++++++++++++++++++++
 net/sched/Kconfig        |  13 +++
 2 files changed, 245 insertions(+)
 create mode 100644 include/net/tc_wrapper.h
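
In short, the approach (shown in full in the patch below) guards direct calls
to built-in handlers and falls back to the indirect call otherwise. A minimal
sketch, with foo_act and CONFIG_NET_ACT_FOO as placeholder names:

   static inline int tc_act_sketch(struct sk_buff *skb,
                                   const struct tc_action *a,
                                   struct tcf_result *res)
   {
   #if IS_BUILTIN(CONFIG_NET_ACT_FOO)
           if (a->ops->act == foo_act)
                   return foo_act(skb, a, res); /* direct call, no retpoline thunk */
   #endif
           return a->ops->act(skb, a, res); /* indirect call (e.g. module) */
   }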

Comments

Jakub Kicinski Dec. 1, 2022, 5:16 a.m. UTC | #1
On Mon, 28 Nov 2022 12:44:54 -0300 Pedro Tammela wrote:
> On kernels compiled with CONFIG_RETPOLINE and CONFIG_NET_TC_INDIRECT_WRAPPER,
> optimize actions and filters that are compiled as built-ins into a direct call.
> The calls are ordered alphabetically, but new ones should ideally be
> added last.
> 
> In subsequent patches we expose the classifier and action functions
> and wire the wrapper into tc.

> +#if IS_ENABLED(CONFIG_RETPOLINE) && IS_ENABLED(CONFIG_NET_TC_INDIRECT_WRAPPER)

The latter 'depends on' the former, so just check the latter.
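
E.g. a guard like the following should suffice on its own, since Kconfig
already enforces the dependency (sketch):

   #if IS_ENABLED(CONFIG_NET_TC_INDIRECT_WRAPPER)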

> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
> +			   struct tcf_result *res)
> +{
> +	if (0) { /* noop */ }
> +#if IS_BUILTIN(CONFIG_NET_ACT_BPF)
> +	else if (a->ops->act == tcf_bpf_act)
> +		return tcf_bpf_act(skb, a, res);
> +#endif

How does the 'else if' ladder compare to a switch statement?

> +#ifdef CONFIG_NET_CLS_ACT
> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
> +			   struct tcf_result *res)
> +{
> +	return a->ops->act(skb, a, res);
> +}
> +#endif
> +
> +#ifdef CONFIG_NET_CLS
> +static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
> +				struct tcf_result *res)
> +{
> +	return tp->classify(skb, tp, res);
> +}
> +#endif

please don't wrap the static inline helpers in #ifdefs unless it's
actually necessary for build to pass.
Paolo Abeni Dec. 1, 2022, 11:10 a.m. UTC | #2
On Mon, 2022-11-28 at 12:44 -0300, Pedro Tammela wrote:
> On kernels compiled with CONFIG_RETPOLINE and CONFIG_NET_TC_INDIRECT_WRAPPER,
> optimize actions and filters that are compiled as built-ins into a direct call.
> The calls are ordered alphabetically, but new ones should ideally be
> added last.
> 
> In subsequent patches we expose the classifier and action functions
> and wire the wrapper into tc.
> 
> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>

There is a mismatch between the submitter and the SoB tag. You either
need to add a 

From: Pedro Tammela <pctammela@mojatatu.com>

tag at the beginning of each patch or use your mojatatu account to send
the patches (assuming you prefer to retain the mojatatu SoB).
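
E.g., a sketch of the patch body's first lines:

   From: Pedro Tammela <pctammela@mojatatu.com>

   On kernels compiled with CONFIG_RETPOLINE and CONFIG_NET_TC_INDIRECT_WRAPPER,
   ...

git am will then record the mojatatu address as the author regardless of
the envelope sender.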

Thanks!

Paolo
Pedro Tammela Dec. 1, 2022, 4:40 p.m. UTC | #3
On 01/12/2022 02:16, Jakub Kicinski wrote:
> On Mon, 28 Nov 2022 12:44:54 -0300 Pedro Tammela wrote:
>> On kernels compiled with CONFIG_RETPOLINE and CONFIG_NET_TC_INDIRECT_WRAPPER,
>> optimize actions and filters that are compiled as built-ins into a direct call.
>> The calls are ordered alphabetically, but new ones should ideally be
>> added last.
>>
>> In subsequent patches we expose the classifier and action functions
>> and wire the wrapper into tc.
> 
>> +#if IS_ENABLED(CONFIG_RETPOLINE) && IS_ENABLED(CONFIG_NET_TC_INDIRECT_WRAPPER)
> 
> The latter 'depends on' the former, so just check the latter.
> 
>> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
>> +			   struct tcf_result *res)
>> +{
>> +	if (0) { /* noop */ }
>> +#if IS_BUILTIN(CONFIG_NET_ACT_BPF)
>> +	else if (a->ops->act == tcf_bpf_act)
>> +		return tcf_bpf_act(skb, a, res);
>> +#endif
> 
> How does the 'else if' ladder compare to a switch statement?

It's semantically the same; we would just need to do some casts to
unsigned long.
WDYT about the following?

   #define __TC_ACT_BUILTIN(builtin, fname) \
      if (builtin && a->ops->act == fname) return fname(skb, a, res)

   #define _TC_ACT_BUILTIN(builtin, fname) __TC_ACT_BUILTIN(builtin, fname)
   #define TC_ACT_BUILTIN(cfg, fname)  _TC_ACT_BUILTIN(IS_BUILTIN(cfg), fname)

   static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
                              struct tcf_result *res)
   {
           TC_ACT_BUILTIN(CONFIG_NET_ACT_BPF, tcf_bpf_act);
   ...

It might be more pleasant to the reader.

> 
>> +#ifdef CONFIG_NET_CLS_ACT
>> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
>> +			   struct tcf_result *res)
>> +{
>> +	return a->ops->act(skb, a, res);
>> +}
>> +#endif
>> +
>> +#ifdef CONFIG_NET_CLS
>> +static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
>> +				struct tcf_result *res)
>> +{
>> +	return tp->classify(skb, tp, res);
>> +}
>> +#endif
> 
> please don't wrap the static inline helpers in #ifdefs unless it's
> actually necessary for build to pass.

The only one really needed is CONFIG_NET_CLS_ACT because the struct 
tc_action definition is protected by it. Perhaps we should move it out 
of the #ifdef?
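
A sketch of that cleanup for the !RETPOLINE branch (assuming struct
tcf_proto is declared unconditionally, so only __tc_act keeps its guard):

   #ifdef CONFIG_NET_CLS_ACT
   static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
                              struct tcf_result *res)
   {
           return a->ops->act(skb, a, res);
   }
   #endif

   static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
                                   struct tcf_result *res)
   {
           return tp->classify(skb, tp, res);
   }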
Jakub Kicinski Dec. 1, 2022, 10:38 p.m. UTC | #4
On Thu, 1 Dec 2022 13:40:34 -0300 Pedro Tammela wrote:
> >> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
> >> +			   struct tcf_result *res)
> >> +{
> >> +	if (0) { /* noop */ }
> >> +#if IS_BUILTIN(CONFIG_NET_ACT_BPF)
> >> +	else if (a->ops->act == tcf_bpf_act)
> >> +		return tcf_bpf_act(skb, a, res);
> >> +#endif  
> > 
> > How does the 'else if' ladder compare to a switch statement?  
> 
> It's semantically the same; we would just need to do some casts to
> unsigned long.

Sorry, should've been clearer, I mean in terms of generated code.
Is the machine code identical / better / worse?

> WDYT about the following?
> 
>    #define __TC_ACT_BUILTIN(builtin, fname) \
>       if (builtin && a->ops->act == fname) return fname(skb, a, res)
> 
>    #define _TC_ACT_BUILTIN(builtin, fname) __TC_ACT_BUILTIN(builtin, fname)
>    #define TC_ACT_BUILTIN(cfg, fname)  _TC_ACT_BUILTIN(IS_BUILTIN(cfg), fname)
> 
>    static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
>                               struct tcf_result *res)
>    {
>            TC_ACT_BUILTIN(CONFIG_NET_ACT_BPF, tcf_bpf_act);
>    ...
> 
> It might be more pleasant to the reader.

Most definitely not to this reader :) The less macro magic the better.
I'm primarily curious about whether the compiler treats this sort of
construct the same as a switch.

> >> +#ifdef CONFIG_NET_CLS_ACT
> >> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
> >> +			   struct tcf_result *res)
> >> +{
> >> +	return a->ops->act(skb, a, res);
> >> +}
> >> +#endif
> >> +
> >> +#ifdef CONFIG_NET_CLS
> >> +static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
> >> +				struct tcf_result *res)
> >> +{
> >> +	return tp->classify(skb, tp, res);
> >> +}
> >> +#endif  
> > 
> > please don't wrap the static inline helpers in #ifdefs unless it's
> > actually necessary for build to pass.  
> 
> The only one really needed is CONFIG_NET_CLS_ACT because the struct 
> tc_action definition is protected by it. Perhaps we should move it out 
> of the #ifdef?

Yes, I think that's a nice cleanup.
Pedro Tammela Dec. 2, 2022, 12:48 p.m. UTC | #5
On 01/12/2022 19:38, Jakub Kicinski wrote:
> On Thu, 1 Dec 2022 13:40:34 -0300 Pedro Tammela wrote:
>>>> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
>>>> +			   struct tcf_result *res)
>>>> +{
>>>> +	if (0) { /* noop */ }
>>>> +#if IS_BUILTIN(CONFIG_NET_ACT_BPF)
>>>> +	else if (a->ops->act == tcf_bpf_act)
>>>> +		return tcf_bpf_act(skb, a, res);
>>>> +#endif
>>>
>>> How does the 'else if' ladder compare to a switch statement?
>>
>> It's semantically the same; we would just need to do some casts to
>> unsigned long.
> 
> Sorry, should've been clearer, I mean in terms of generated code.
> Is the machine code identical / better / worse?
> 
>> WDYT about the following?
>>
>>     #define __TC_ACT_BUILTIN(builtin, fname) \
>>        if (builtin && a->ops->act == fname) return fname(skb, a, res)
>>
>>     #define _TC_ACT_BUILTIN(builtin, fname) __TC_ACT_BUILTIN(builtin, fname)
>>     #define TC_ACT_BUILTIN(cfg, fname)  _TC_ACT_BUILTIN(IS_BUILTIN(cfg), fname)
>>
>>     static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
>>                                struct tcf_result *res)
>>     {
>>             TC_ACT_BUILTIN(CONFIG_NET_ACT_BPF, tcf_bpf_act);
>>     ...
>>
>> It might be more pleasant to the reader.
> 
> Most definitely not to this reader :) The less macro magic the better.
> I'm primarily curious about whether the compiler treats this sort of
> construct the same as a switch.

Apparently we can't switch on pointer addresses because the case labels
must be integer constant expressions.
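
E.g. this sketch does not compile:

   switch ((unsigned long)a->ops->act) {
   case (unsigned long)tcf_bpf_act: /* error: case label does not reduce
                                     * to an integer constant */
           return tcf_bpf_act(skb, a, res);
   }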

> 
>>>> +#ifdef CONFIG_NET_CLS_ACT
>>>> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
>>>> +			   struct tcf_result *res)
>>>> +{
>>>> +	return a->ops->act(skb, a, res);
>>>> +}
>>>> +#endif
>>>> +
>>>> +#ifdef CONFIG_NET_CLS
>>>> +static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
>>>> +				struct tcf_result *res)
>>>> +{
>>>> +	return tp->classify(skb, tp, res);
>>>> +}
>>>> +#endif
>>>
>>> please don't wrap the static inline helpers in #ifdefs unless it's
>>> actually necessary for build to pass.
>>
>> The only one really needed is CONFIG_NET_CLS_ACT because the struct
>> tc_action definition is protected by it. Perhaps we should move it out
>> of the #ifdef?
> 
> Yes, I think that's a nice cleanup.
Jakub Kicinski Dec. 2, 2022, 6:13 p.m. UTC | #6
On Fri, 2 Dec 2022 09:48:49 -0300 Pedro Tammela wrote:
> > Most definitely not to this reader :) The less macro magic the better.
> > I'm primarily curious about whether the compiler treats this sort of
> > construct the same as a switch.  
> 
> Apparently we can't switch on pointer addresses because the case labels
> must be integer constant expressions.

Ack, thanks for checking!
Eric Dumazet Dec. 5, 2022, 4:56 a.m. UTC | #7
On Mon, Nov 28, 2022 at 4:45 PM Pedro Tammela <pctammela@gmail.com> wrote:
>
> On kernels compiled with CONFIG_RETPOLINE and CONFIG_NET_TC_INDIRECT_WRAPPER,
> optimize actions and filters that are compiled as built-ins into a direct call.
> The calls are ordered alphabetically, but new ones should ideally be
> added last.
>
> In subsequent patches we expose the classifier and action functions
> and wire the wrapper into tc.
>
> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
> ---


> +static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
> +                          struct tcf_result *res)

Perhaps remove the __ prefix, since there is no plain tc_act() yet to pair with __tc_act().

> +
> +static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
> +                               struct tcf_result *res)

Same remark; the __ prefix seems unnecessary.
Patch

diff --git a/include/net/tc_wrapper.h b/include/net/tc_wrapper.h
new file mode 100644
index 000000000000..bd2b4789db2b
--- /dev/null
+++ b/include/net/tc_wrapper.h
@@ -0,0 +1,232 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __NET_TC_WRAPPER_H
+#define __NET_TC_WRAPPER_H
+
+#include <linux/indirect_call_wrapper.h>
+#include <net/pkt_cls.h>
+
+#if IS_ENABLED(CONFIG_RETPOLINE) && IS_ENABLED(CONFIG_NET_TC_INDIRECT_WRAPPER)
+
+#define TC_INDIRECT_SCOPE
+
+/* TC Actions */
+#ifdef CONFIG_NET_CLS_ACT
+
+#define TC_INDIRECT_ACTION_DECLARE(fname)                              \
+	INDIRECT_CALLABLE_DECLARE(int fname(struct sk_buff *skb,       \
+					    const struct tc_action *a, \
+					    struct tcf_result *res))
+
+TC_INDIRECT_ACTION_DECLARE(tcf_bpf_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_connmark_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_csum_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_ct_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_ctinfo_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_gact_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_gate_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_ife_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_ipt_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_mirred_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_mpls_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_nat_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_pedit_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_police_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_sample_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_simp_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_skbedit_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_skbmod_act);
+TC_INDIRECT_ACTION_DECLARE(tcf_vlan_act);
+TC_INDIRECT_ACTION_DECLARE(tunnel_key_act);
+
+static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
+			   struct tcf_result *res)
+{
+	if (0) { /* noop */ }
+#if IS_BUILTIN(CONFIG_NET_ACT_BPF)
+	else if (a->ops->act == tcf_bpf_act)
+		return tcf_bpf_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_CONNMARK)
+	else if (a->ops->act == tcf_connmark_act)
+		return tcf_connmark_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_CSUM)
+	else if (a->ops->act == tcf_csum_act)
+		return tcf_csum_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_CT)
+	else if (a->ops->act == tcf_ct_act)
+		return tcf_ct_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_CTINFO)
+	else if (a->ops->act == tcf_ctinfo_act)
+		return tcf_ctinfo_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_GACT)
+	else if (a->ops->act == tcf_gact_act)
+		return tcf_gact_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_GATE)
+	else if (a->ops->act == tcf_gate_act)
+		return tcf_gate_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_IFE)
+	else if (a->ops->act == tcf_ife_act)
+		return tcf_ife_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_IPT)
+	else if (a->ops->act == tcf_ipt_act)
+		return tcf_ipt_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_MIRRED)
+	else if (a->ops->act == tcf_mirred_act)
+		return tcf_mirred_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_MPLS)
+	else if (a->ops->act == tcf_mpls_act)
+		return tcf_mpls_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_NAT)
+	else if (a->ops->act == tcf_nat_act)
+		return tcf_nat_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_PEDIT)
+	else if (a->ops->act == tcf_pedit_act)
+		return tcf_pedit_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_POLICE)
+	else if (a->ops->act == tcf_police_act)
+		return tcf_police_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_SAMPLE)
+	else if (a->ops->act == tcf_sample_act)
+		return tcf_sample_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_SIMP)
+	else if (a->ops->act == tcf_simp_act)
+		return tcf_simp_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_SKBEDIT)
+	else if (a->ops->act == tcf_skbedit_act)
+		return tcf_skbedit_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_SKBMOD)
+	else if (a->ops->act == tcf_skbmod_act)
+		return tcf_skbmod_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_TUNNEL_KEY)
+	else if (a->ops->act == tunnel_key_act)
+		return tunnel_key_act(skb, a, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_ACT_VLAN)
+	else if (a->ops->act == tcf_vlan_act)
+		return tcf_vlan_act(skb, a, res);
+#endif
+	else
+		return a->ops->act(skb, a, res);
+}
+
+#endif /* CONFIG_NET_CLS_ACT */
+
+/* TC Filters */
+#ifdef CONFIG_NET_CLS
+
+#define TC_INDIRECT_FILTER_DECLARE(fname)                               \
+	INDIRECT_CALLABLE_DECLARE(int fname(struct sk_buff *skb,        \
+					    const struct tcf_proto *tp, \
+					    struct tcf_result *res))
+
+TC_INDIRECT_FILTER_DECLARE(basic_classify);
+TC_INDIRECT_FILTER_DECLARE(cls_bpf_classify);
+TC_INDIRECT_FILTER_DECLARE(cls_cgroup_classify);
+TC_INDIRECT_FILTER_DECLARE(fl_classify);
+TC_INDIRECT_FILTER_DECLARE(flow_classify);
+TC_INDIRECT_FILTER_DECLARE(fw_classify);
+TC_INDIRECT_FILTER_DECLARE(mall_classify);
+TC_INDIRECT_FILTER_DECLARE(route4_classify);
+TC_INDIRECT_FILTER_DECLARE(rsvp_classify);
+TC_INDIRECT_FILTER_DECLARE(rsvp6_classify);
+TC_INDIRECT_FILTER_DECLARE(tcindex_classify);
+TC_INDIRECT_FILTER_DECLARE(u32_classify);
+
+static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+				struct tcf_result *res)
+{
+	if (0) { /* noop */ }
+#if IS_BUILTIN(CONFIG_NET_CLS_BASIC)
+	else if (tp->classify == basic_classify)
+		return basic_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_BPF)
+	else if (tp->classify == cls_bpf_classify)
+		return cls_bpf_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_CGROUP)
+	else if (tp->classify == cls_cgroup_classify)
+		return cls_cgroup_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_FLOW)
+	else if (tp->classify == flow_classify)
+		return flow_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_FLOWER)
+	else if (tp->classify == fl_classify)
+		return fl_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_FW)
+	else if (tp->classify == fw_classify)
+		return fw_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_MATCHALL)
+	else if (tp->classify == mall_classify)
+		return mall_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_ROUTE4)
+	else if (tp->classify == route4_classify)
+		return route4_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_RSVP)
+	else if (tp->classify == rsvp_classify)
+		return rsvp_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_RSVP6)
+	else if (tp->classify == rsvp6_classify)
+		return rsvp6_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_TCINDEX)
+	else if (tp->classify == tcindex_classify)
+		return tcindex_classify(skb, tp, res);
+#endif
+#if IS_BUILTIN(CONFIG_NET_CLS_U32)
+	else if (tp->classify == u32_classify)
+		return u32_classify(skb, tp, res);
+#endif
+	else
+		return tp->classify(skb, tp, res);
+}
+
+#endif /* CONFIG_NET_CLS */
+
+#else
+
+#define TC_INDIRECT_SCOPE static
+
+#ifdef CONFIG_NET_CLS_ACT
+static inline int __tc_act(struct sk_buff *skb, const struct tc_action *a,
+			   struct tcf_result *res)
+{
+	return a->ops->act(skb, a, res);
+}
+#endif
+
+#ifdef CONFIG_NET_CLS
+static inline int __tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
+				struct tcf_result *res)
+{
+	return tp->classify(skb, tp, res);
+}
+#endif
+
+#endif
+
+#endif /* __NET_TC_WRAPPER_H */
diff --git a/net/sched/Kconfig b/net/sched/Kconfig
index 1e8ab4749c6c..9bc055f8013e 100644
--- a/net/sched/Kconfig
+++ b/net/sched/Kconfig
@@ -1021,6 +1021,19 @@  config NET_TC_SKB_EXT
 
 	  Say N here if you won't be using tc<->ovs offload or tc chains offload.
 
+config NET_TC_INDIRECT_WRAPPER
+	bool "TC indirect call wrapper"
+	depends on NET_SCHED
+	depends on RETPOLINE
+
+	help
+	  Say Y here to skip indirect calls in the TC datapath for known
+	  builtin classifiers/actions under CONFIG_RETPOLINE kernels.
+
+	  TC may run slower on CPUs with hardware-based mitigations.
+
+	  If unsure, say N.
+
 endif # NET_SCHED
 
 config NET_SCH_FIFO
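
For reference, an illustrative config fragment that enables the wrapper; any
classifier/action built in (=y, not =m) then takes the direct-call path:

   CONFIG_RETPOLINE=y
   CONFIG_NET_SCHED=y
   CONFIG_NET_TC_INDIRECT_WRAPPER=y
   CONFIG_NET_CLS_U32=y
   CONFIG_NET_ACT_GACT=y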