
[net-next,1/3] net: sched: cls_api: add skip_sw counter

Message ID: 20240215160458.1727237-2-ast@fiberby.net (mailing list archive)
State: Changes Requested
Delegated to: Netdev Maintainers
Series: make skip_sw actually skip software

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2381 this patch: 2381
netdev/build_tools success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers warning 3 maintainers not CCed: pabeni@redhat.com kuba@kernel.org edumazet@google.com
netdev/build_clang success Errors and warnings before: 1046 this patch: 1046
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 2463 this patch: 2463
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 23 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-02-15--21-00 (tests: 882)

Commit Message

Asbjørn Sloth Tønnesen Feb. 15, 2024, 4:04 p.m. UTC
Maintain a count of skip_sw filters.

This counter is protected by the cb_lock, and is updated
at the same time as offloadcnt.

Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net>
---
 include/net/sch_generic.h | 1 +
 net/sched/cls_api.c       | 4 ++++
 2 files changed, 5 insertions(+)

Comments

Jamal Hadi Salim Feb. 15, 2024, 5:39 p.m. UTC | #1
+Cc Vlad and Marcelo..

On Thu, Feb 15, 2024 at 11:06 AM Asbjørn Sloth Tønnesen <ast@fiberby.net> wrote:
>
> Maintain a count of skip_sw filters.
>
> This counter is protected by the cb_lock, and is updated
> at the same time as offloadcnt.
>
> Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net>
> ---
>  include/net/sch_generic.h | 1 +
>  net/sched/cls_api.c       | 4 ++++
>  2 files changed, 5 insertions(+)
>
> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
> index 934fdb977551..46a63d1818a0 100644
> --- a/include/net/sch_generic.h
> +++ b/include/net/sch_generic.h
> @@ -476,6 +476,7 @@ struct tcf_block {
>         struct flow_block flow_block;
>         struct list_head owner_list;
>         bool keep_dst;
> +       atomic_t skipswcnt; /* Number of skip_sw filters */
>         atomic_t offloadcnt; /* Number of oddloaded filters */

For your use case is skipswcnt ever going to be any different than offloadcnt?

cheers,
jamal

>         unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
>         unsigned int lockeddevcnt; /* Number of devs that require rtnl lock. */
> diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
> index ca5676b2668e..397c3d29659c 100644
> --- a/net/sched/cls_api.c
> +++ b/net/sched/cls_api.c
> @@ -3483,6 +3483,8 @@ static void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
>         if (*flags & TCA_CLS_FLAGS_IN_HW)
>                 return;
>         *flags |= TCA_CLS_FLAGS_IN_HW;
> +       if (tc_skip_sw(*flags))
> +               atomic_inc(&block->skipswcnt);
>         atomic_inc(&block->offloadcnt);
>  }
>
> @@ -3491,6 +3493,8 @@ static void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
>         if (!(*flags & TCA_CLS_FLAGS_IN_HW))
>                 return;
>         *flags &= ~TCA_CLS_FLAGS_IN_HW;
> +       if (tc_skip_sw(*flags))
> +               atomic_dec(&block->skipswcnt);
>         atomic_dec(&block->offloadcnt);
>  }
>
> --
> 2.43.0
>
Asbjørn Sloth Tønnesen Feb. 15, 2024, 11:34 p.m. UTC | #2
Hi Jamal,

Thank you for the review.

On 2/15/24 17:39, Jamal Hadi Salim wrote:
> +Cc Vlad and Marcelo..
> 
> On Thu, Feb 15, 2024 at 11:06 AM Asbjørn Sloth Tønnesen <ast@fiberby.net> wrote:
>>
>> Maintain a count of skip_sw filters.
>>
>> This counter is protected by the cb_lock, and is updated
>> at the same time as offloadcnt.
>>
>> Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net>
>> ---
>>   include/net/sch_generic.h | 1 +
>>   net/sched/cls_api.c       | 4 ++++
>>   2 files changed, 5 insertions(+)
>>
>> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
>> index 934fdb977551..46a63d1818a0 100644
>> --- a/include/net/sch_generic.h
>> +++ b/include/net/sch_generic.h
>> @@ -476,6 +476,7 @@ struct tcf_block {
>>          struct flow_block flow_block;
>>          struct list_head owner_list;
>>          bool keep_dst;
>> +       atomic_t skipswcnt; /* Number of skip_sw filters */
>>          atomic_t offloadcnt; /* Number of oddloaded filters */
> 
> For your use case is skipswcnt ever going to be any different than offloadcnt?

No, we only use skip_sw filters, since we only use TC as a control path to
install skip_sw rules into hardware.

AFAICT offloadcnt is the sum of skip_sw filters and of filters with no skip_*
flags that have been implicitly offloaded.
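
For illustration only (a sketch assuming the driver accepts every offload
request it is offered), the two counters then relate as follows:

/* Sketch only: counter deltas per added filter on a block where every
 * offload attempt succeeds.
 *
 *   filter flags          offloadcnt   skipswcnt
 *   skip_sw                   +1           +1
 *   (no skip_* flags)         +1            0   offloaded, but kept in sw
 *   skip_hw                    0            0   never offered to hardware
 *
 * So skipswcnt can never exceed offloadcnt, and the two are equal
 * exactly when no flag-less filter has been offloaded.
 */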

The reason I didn't just use offloadcnt is that I'm not sure whether it is
acceptable to treat implicitly offloaded rules without skip_sw as if they were
explicitly skip_sw. It sounds reasonable, given that filters without skip_* flags
shouldn't really care.

As a first step, I tried to trigger the TC bypass only in the cases I was
absolutely sure would be safe.
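
A minimal sketch of the kind of check this counter enables (the helper below
is hypothetical and is not the code proposed elsewhere in this series):

/* Hypothetical helper, not taken from this series: true when at least
 * one filter is offloaded and every offloaded filter on the block is
 * explicitly marked skip_sw.
 *
 * Note: skipswcnt and offloadcnt alone cannot tell whether the block
 * also holds filters that were never offloaded (e.g. skip_hw), so a
 * real software bypass needs an additional guard for those.
 */
static bool tcf_block_offloads_all_skip_sw(struct tcf_block *block)
{
	int offloaded = atomic_read(&block->offloadcnt);

	return offloaded && atomic_read(&block->skipswcnt) == offloaded;
}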


> 
> cheers,
> jamal
> 
>>          unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
>>          unsigned int lockeddevcnt; /* Number of devs that require rtnl lock. */
>> diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
>> index ca5676b2668e..397c3d29659c 100644
>> --- a/net/sched/cls_api.c
>> +++ b/net/sched/cls_api.c
>> @@ -3483,6 +3483,8 @@ static void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
>>          if (*flags & TCA_CLS_FLAGS_IN_HW)
>>                  return;
>>          *flags |= TCA_CLS_FLAGS_IN_HW;
>> +       if (tc_skip_sw(*flags))
>> +               atomic_inc(&block->skipswcnt);
>>          atomic_inc(&block->offloadcnt);
>>   }
>>
>> @@ -3491,6 +3493,8 @@ static void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
>>          if (!(*flags & TCA_CLS_FLAGS_IN_HW))
>>                  return;
>>          *flags &= ~TCA_CLS_FLAGS_IN_HW;
>> +       if (tc_skip_sw(*flags))
>> +               atomic_dec(&block->skipswcnt);
>>          atomic_dec(&block->offloadcnt);
>>   }
>>
>> --
>> 2.43.0
>>
Vlad Buslov Feb. 16, 2024, 8:35 a.m. UTC | #3
On Thu 15 Feb 2024 at 23:34, Asbjørn Sloth Tønnesen <ast@fiberby.net> wrote:
> Hi Jamal,
>
> Thank you for the review.
>
> On 2/15/24 17:39, Jamal Hadi Salim wrote:
>> +Cc Vlad and Marcelo..
>> On Thu, Feb 15, 2024 at 11:06 AM Asbjørn Sloth Tønnesen <ast@fiberby.net>
>> wrote:
>>>
>>> Maintain a count of skip_sw filters.
>>>
>>> This counter is protected by the cb_lock, and is updated
>>> at the same time as offloadcnt.
>>>
>>> Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net>
>>> ---
>>>   include/net/sch_generic.h | 1 +
>>>   net/sched/cls_api.c       | 4 ++++
>>>   2 files changed, 5 insertions(+)
>>>
>>> diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
>>> index 934fdb977551..46a63d1818a0 100644
>>> --- a/include/net/sch_generic.h
>>> +++ b/include/net/sch_generic.h
>>> @@ -476,6 +476,7 @@ struct tcf_block {
>>>          struct flow_block flow_block;
>>>          struct list_head owner_list;
>>>          bool keep_dst;
>>> +       atomic_t skipswcnt; /* Number of skip_sw filters */
>>>          atomic_t offloadcnt; /* Number of oddloaded filters */
>> For your use case is skipswcnt ever going to be any different than offloadcnt?
>
> No, we only use skip_sw filters, since we only use TC as a control path to
> install skip_sw rules into hardware.
>
> AFAICT offloadcnt is the sum of skip_sw filters and of filters with no skip_*
> flags that have been implicitly offloaded.
>
> The reason I didn't just use offloadcnt is that I'm not sure whether it is
> acceptable to treat implicitly offloaded rules without skip_sw as if they were
> explicitly skip_sw. It sounds reasonable, given that filters without skip_* flags
> shouldn't really care.

It is not acceptable, since there are valid use cases where packets need
to match sw filters that are supposedly also in-hw. For example, filters
with a tunnel_key set action during a neighbor update event.

>
> As a first step, I tried to trigger the TC bypass only in the cases I was
> absolutely sure would be safe.
>
>
>> cheers,
>> jamal
>> 
>>>          unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
>>>          unsigned int lockeddevcnt; /* Number of devs that require rtnl lock. */
>>> diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
>>> index ca5676b2668e..397c3d29659c 100644
>>> --- a/net/sched/cls_api.c
>>> +++ b/net/sched/cls_api.c
>>> @@ -3483,6 +3483,8 @@ static void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
>>>          if (*flags & TCA_CLS_FLAGS_IN_HW)
>>>                  return;
>>>          *flags |= TCA_CLS_FLAGS_IN_HW;
>>> +       if (tc_skip_sw(*flags))
>>> +               atomic_inc(&block->skipswcnt);
>>>          atomic_inc(&block->offloadcnt);
>>>   }
>>>
>>> @@ -3491,6 +3493,8 @@ static void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
>>>          if (!(*flags & TCA_CLS_FLAGS_IN_HW))
>>>                  return;
>>>          *flags &= ~TCA_CLS_FLAGS_IN_HW;
>>> +       if (tc_skip_sw(*flags))
>>> +               atomic_dec(&block->skipswcnt);
>>>          atomic_dec(&block->offloadcnt);
>>>   }
>>>
>>> --
>>> 2.43.0
>>>
Jiri Pirko Feb. 16, 2024, 12:52 p.m. UTC | #4
Thu, Feb 15, 2024 at 05:04:42PM CET, ast@fiberby.net wrote:
>Maintain a count of skip_sw filters.
>
>This counter is protected by the cb_lock, and is updated
>at the same time as offloadcnt.
>
>Signed-off-by: Asbjørn Sloth Tønnesen <ast@fiberby.net>

Reviewed-by: Jiri Pirko <jiri@nvidia.com>

Patch

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 934fdb977551..46a63d1818a0 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -476,6 +476,7 @@  struct tcf_block {
 	struct flow_block flow_block;
 	struct list_head owner_list;
 	bool keep_dst;
+	atomic_t skipswcnt; /* Number of skip_sw filters */
 	atomic_t offloadcnt; /* Number of oddloaded filters */
 	unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
 	unsigned int lockeddevcnt; /* Number of devs that require rtnl lock. */
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index ca5676b2668e..397c3d29659c 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -3483,6 +3483,8 @@  static void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
 	if (*flags & TCA_CLS_FLAGS_IN_HW)
 		return;
 	*flags |= TCA_CLS_FLAGS_IN_HW;
+	if (tc_skip_sw(*flags))
+		atomic_inc(&block->skipswcnt);
 	atomic_inc(&block->offloadcnt);
 }
 
@@ -3491,6 +3493,8 @@  static void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
 	if (!(*flags & TCA_CLS_FLAGS_IN_HW))
 		return;
 	*flags &= ~TCA_CLS_FLAGS_IN_HW;
+	if (tc_skip_sw(*flags))
+		atomic_dec(&block->skipswcnt);
 	atomic_dec(&block->offloadcnt);
 }