[v3,net,0/4] qbv cycle time extension/truncation

Message ID: 20231219081453.718489-1-faizal.abdul.rahim@linux.intel.com

Message

Abdul Rahim, Faizal Dec. 19, 2023, 8:14 a.m. UTC
According to IEEE Std. 802.1Q-2018 section Q.5 CycleTimeExtension,
the Cycle Time Extension variable allows this extension of the last old
cycle to be done in a defined way. If the last complete old cycle would
normally end less than OperCycleTimeExtension nanoseconds before the new
base time, then the last complete cycle before AdminBaseTime is reached
is extended so that it ends at AdminBaseTime.
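
To make the rule concrete, here is a minimal user-space sketch of that
decision, with hypothetical names and example values; it is not the code
added by this series. The correction is positive when the last complete old
cycle is extended to end at AdminBaseTime, and negative when the running
cycle has to be truncated there:

#include <stdint.h>
#include <stdio.h>

typedef int64_t s64;

/*
 * Sketch only: how the last old cycle ends relative to the new admin
 * base time.
 *
 * last_complete_end - nominal end of the last complete old cycle that
 *                     finishes at or before admin_base_time
 * running_cycle_end - nominal end of the old cycle that would still be
 *                     running when admin_base_time arrives
 */
static s64 cycle_end_correction(s64 last_complete_end, s64 running_cycle_end,
				s64 admin_base_time, s64 cycle_time_extension)
{
	if (admin_base_time - last_complete_end < cycle_time_extension)
		/* Extend the last complete cycle to end at admin_base_time. */
		return admin_base_time - last_complete_end;

	/* Otherwise the running cycle is truncated at admin_base_time. */
	return admin_base_time - running_cycle_end;	/* negative */
}

int main(void)
{
	/* Extension case: the old cycle ends 300000 ns before the new base
	 * time and the extension allowance is 1000000 ns, so the cycle is
	 * stretched by 300000 ns. */
	printf("correction = %lld ns\n",
	       (long long)cycle_end_correction(9700000, 10700000,
					       10000000, 1000000));
	return 0;
}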

Changes in v3:
- Removed the last 3 patches related to fixing cycle time adjustment
  for the "current entry". This simplifies the series, which now only
  covers cycle time adjustment for the "next entry".
- Guarded the negative correction calculation in get_cycle_time_correction()
  so that it doesn't exceed the interval
- Renamed some macros and functions
- Moved explanations from the commit messages into code comments
- Removed an unnecessary null check
- Reworded the commit messages

Changes in v2:
- Added 's64 cycle_time_correction' to 'struct sched_gate_list' (both new
  fields are sketched after this list).
- Removed sched_changed, introduced in v1, since the new cycle_time_correction
  field can also serve to indicate the need for a schedule change.
- Added 'bool correction_active' to 'struct sched_entry' to represent
  the correction state from the entry's perspective and to return the
  corrected interval value when active.
- Fixed the cycle time correction logic for the next entry in advance_sched()
- Fixed and implemented proper cycle time correction logic for the current
  entry in taprio_start_sched()
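
For reference, the two fields described above could look roughly like this.
This is a sketch based purely on the changelog wording; the surrounding
members and placement are assumptions, not the actual patch:

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s64;

struct sched_gate_list {
	/* ... existing members omitted ... */
	s64 cycle_time_correction;	/* signed correction; a set value also
					 * signals a pending schedule change */
};

struct sched_entry {
	/* ... existing members omitted ... */
	bool correction_active;		/* entry reports a corrected interval
					 * while this is set */
};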

v2 at:
https://lore.kernel.org/lkml/20231107112023.676016-1-faizal.abdul.rahim@linux.intel.com/
v1 at:
https://lore.kernel.org/lkml/20230530082541.495-1-muhammad.husaini.zulkifli@intel.com/

Faizal Rahim (4):
  net/sched: taprio: fix too early schedules switching
  net/sched: taprio: fix cycle time adjustment for next entry
  net/sched: taprio: fix impacted fields value during cycle time
    adjustment
  net/sched: taprio: get corrected value of cycle_time and interval

 net/sched/sch_taprio.c | 178 +++++++++++++++++++++++++++++++----------
 1 file changed, 135 insertions(+), 43 deletions(-)

Comments

Vladimir Oltean Dec. 19, 2023, 4:56 p.m. UTC | #1
On Tue, Dec 19, 2023 at 03:14:49AM -0500, Faizal Rahim wrote:
> According to IEEE Std. 802.1Q-2018 section Q.5 CycleTimeExtension,
> the Cycle Time Extension variable allows this extension of the last old
> cycle to be done in a defined way. If the last complete old cycle would
> normally end less than OperCycleTimeExtension nanoseconds before the new
> base time, then the last complete cycle before AdminBaseTime is reached
> is extended so that it ends at AdminBaseTime.
> 
> Changes in v3:
> - Removed the last 3 patches related to fixing cycle time adjustment
> for the "current entry". This is to simplify this patch series submission 
> which only covers cycle time adjustment for the "next entry".
> - Negative correction calculation in get_cycle_time_correction() is
>   guarded so that it doesn't exceed interval
> - Some rename (macro, function)
> - Transport commit message comments to the code comments 
> - Removed unnecessary null check
> - Reword commit message 
> 
> Changes in v2:
> - Added 's64 cycle_time_correction' in 'sched_gate_list struct'.
> - Removed sched_changed created in v1 since the new cycle_time_correction
>   field can also serve to indicate the need for a schedule change.
> - Added 'bool correction_active' in 'struct sched_entry' to represent
>   the correction state from the entry's perspective and return corrected
>   interval value when active.
> - Fix cycle time correction logics for the next entry in advance_sched()
> - Fix and implement proper cycle time correction logics for current
>   entry in taprio_start_sched()
> 
> v2 at:
> https://lore.kernel.org/lkml/20231107112023.676016-1-faizal.abdul.rahim@linux.intel.com/
> v1 at:
> https://lore.kernel.org/lkml/20230530082541.495-1-muhammad.husaini.zulkifli@intel.com/

I'm sorry that I stopped responding on your v2. I realized the discussion
reached a point where I couldn't figure out who is right without some
testing. I wanted to write a selftest to highlight the expected correct
behavior of the datapath during various schedule changes, and whether we
could ever end up with a negative interval after the correction. However,
writing that got quite complicated, and the effort ended there.

How are you testing the behavior, and who reported the issues / what prompted
the changes? Honestly I'm not very confident in the changes we're
pushing down the linux-stable pipe. They don't look all that obvious, so
I still think that having selftests would help. If you don't have a
testing rig already assembled, and you don't want to start one, I might
want to give it a second try and cook something up myself.

Something really simple like:
- start schedule 1 with base-time A and cycle-time-extension B
- start schedule 2 with base-time C
- send one packet with isochron during the last cycle of schedule 1

By varying the parameters, we could check if the schedule is correctly
extended or truncated. We could configure the 2 schedules in such a way
that "extending" would mean that isochron's gate (from schedule 1) is
open (and thus, the packet will pass) and "truncating" would mean that
the packet is scheduled according to schedule 2 (where isochron's gate
will be always closed, so the packet will never pass).

We could then alter the cycle-time-extension relative to the base-times,
to force a truncation of 1, 2, 3 entries or more, and see that the
behavior is always correct.
Eric Dumazet Dec. 19, 2023, 5:02 p.m. UTC | #2
On Tue, Dec 19, 2023 at 9:17 AM Faizal Rahim
<faizal.abdul.rahim@linux.intel.com> wrote:
>
> According to IEEE Std. 802.1Q-2018 section Q.5 CycleTimeExtension,
> the Cycle Time Extension variable allows this extension of the last old
> cycle to be done in a defined way. If the last complete old cycle would
> normally end less than OperCycleTimeExtension nanoseconds before the new
> base time, then the last complete cycle before AdminBaseTime is reached
> is extended so that it ends at AdminBaseTime.
>

Hmm... Is this series fixing any of the pending syzbot bugs?
Abdul Rahim, Faizal Dec. 20, 2023, 3:25 a.m. UTC | #3
On 20/12/2023 12:56 am, Vladimir Oltean wrote:
> On Tue, Dec 19, 2023 at 03:14:49AM -0500, Faizal Rahim wrote:
>> According to IEEE Std. 802.1Q-2018 section Q.5 CycleTimeExtension,
>> the Cycle Time Extension variable allows this extension of the last old
>> cycle to be done in a defined way. If the last complete old cycle would
>> normally end less than OperCycleTimeExtension nanoseconds before the new
>> base time, then the last complete cycle before AdminBaseTime is reached
>> is extended so that it ends at AdminBaseTime.
>>
>> Changes in v3:
>> - Removed the last 3 patches related to fixing cycle time adjustment
>> for the "current entry". This is to simplify this patch series submission
>> which only covers cycle time adjustment for the "next entry".
>> - Negative correction calculation in get_cycle_time_correction() is
>>    guarded so that it doesn't exceed interval
>> - Some rename (macro, function)
>> - Transport commit message comments to the code comments
>> - Removed unnecessary null check
>> - Reword commit message
>>
>> Changes in v2:
>> - Added 's64 cycle_time_correction' in 'sched_gate_list struct'.
>> - Removed sched_changed created in v1 since the new cycle_time_correction
>>    field can also serve to indicate the need for a schedule change.
>> - Added 'bool correction_active' in 'struct sched_entry' to represent
>>    the correction state from the entry's perspective and return corrected
>>    interval value when active.
>> - Fix cycle time correction logics for the next entry in advance_sched()
>> - Fix and implement proper cycle time correction logics for current
>>    entry in taprio_start_sched()
>>
>> v2 at:
>> https://lore.kernel.org/lkml/20231107112023.676016-1-faizal.abdul.rahim@linux.intel.com/
>> v1 at:
>> https://lore.kernel.org/lkml/20230530082541.495-1-muhammad.husaini.zulkifli@intel.com/
> 
> I'm sorry that I stopped responding on your v2. I realized the discussion
> reached a point where I couldn't figure out who is right without some
> testing. I wanted to write a selftest to highlight the expected correct
> behavior of the datapath during various schedule changes, and whether we
> could ever end up with a negative interval after the correction. However,
> writing that got quite complicated and that ended there.
> 
> How are you testing the behavior, and who reported the issues / what prompted
> the changes? Honestly I'm not very confident in the changes we're
> pushing down the linux-stable pipe. They don't look all that obvious, so
> I still think that having selftests would help. If you don't have a
> testing rig already assembled, and you don't want to start one, I might
> want to give it a second try and cook something up myself.
> 
> Something really simple like:
> - start schedule 1 with base-time A and cycle-time-extension B
> - start schedule 2 with base-time C
> - send one packet with isochron during the last cycle of schedule 1
> 
> By varying the parameters, we could check if the schedule is correctly
> extended or truncated. We could configure the 2 schedules in such a way
> that "extending" would mean that isochron's gate (from schedule 1) is
> open (and thus, the packet will pass) and "truncating" would mean that
> the packet is scheduled according to schedule 2 (where isochron's gate
> will be always closed, so the packet will never pass).
> 
> We could then alter the cycle-time-extension relative to the base-times,
> to force a truncation of 1, 2, 3 entries or more, and see that the
> behavior is always correct.

Hi Vladimir,

No worries, I truly appreciate the time you took to review and reply.

What prompted this in general is my project requirement to enable software 
QBV cycle time extension, so there's a validation team that created test 
cases to properly validate cycle time extension. I then noticed that the 
code doesn't handle truncation properly either, and since it's the same 
code area, I just fixed both together.

Each time before sending a patch for upstream review, I normally run our 
test cases, which only validate cycle time extension. For truncation, I 
modified the test cases on my own and added logs to check that the negative 
cycle_time_correction value is within the correct range. I probably should 
have mentioned sooner that I have tested this myself, sorry about that.

Example of the test I run for cycle time extension:
1) 2 boards connected back-to-back with i226 NICs. Board A as the sender, 
Board B as the receiver
2) Time is synced between the 2 boards with phc2sys and ptp4l
3) Run GCL1 on Board A with cycle time extension enabled:
     tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
     num_tc 4 \
     map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
     queues 1@0 1@1 1@2 1@3 \
     base-time 0 \
     cycle-time-extension 1000000 \
     sched-entry S 09 500000 \
     sched-entry S 0a 500000 \
     clockid CLOCK_TAI
4) capture tcp dump on Board B
5) Send packets from Board A to Board B at a 200us interval via udp_tai
6) When packets reach Board B, trigger GCL2 on Board A (the BASE arithmetic 
is sketched after this list):
    CYCLETIME=1000000
    APPLYTIME=1000000000 # 1s
    CURRENT=$(date +%s%N)
    BASE=$(( (CURRENT + APPLYTIME + (2*CYCLETIME)) - ((CURRENT + APPLYTIME)
          % CYCLETIME) + ((CYCLETIME*3)/5) ))
     tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
     num_tc 4 \
     map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
     queues 1@0 1@1 1@2 1@3 \
     base-time $BASE \
     cycle-time-extension 1000000 \
     sched-entry S 0c 500000 \
     sched-entry S 08 500000 \
     clockid CLOCK_TAI
7) Analyze the tcpdump data on Board B using Wireshark; observe that the 
packet receive pattern changes.
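
For clarity on step 6: the BASE expression rounds CURRENT + APPLYTIME down
to a cycle boundary, skips two full cycles, and then adds 3/5 of a cycle so
that the new base time deliberately lands in the middle of a running cycle.
A small C sketch of the same arithmetic, using example values only:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t cycletime = 1000000;		/* 1 ms, as in the example */
	int64_t applytime = 1000000000;		/* 1 s                     */
	int64_t current = 1702896645000000000LL; /* example CLOCK_TAI ns   */
	int64_t t = current + applytime;

	/* Round down to a cycle boundary, skip two full cycles, then add
	 * 3/5 of a cycle so the new base time falls mid-cycle, forcing an
	 * extension or truncation of the cycle running at that moment. */
	int64_t base = (t - t % cycletime) + 2 * cycletime +
		       (cycletime * 3) / 5;

	printf("base-time = %lld\n", (long long)base);
	return 0;
}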

Note that I've hidden the "Best Effort (default) 7001 → 7001" data from the 
Wireshark log so that it's easier to see the pattern.

      TIMESTAMP               PRIORITY             PORTS         NOTES

1702896645.925014509	Critical Applications	7004 → 7004   GCL1
1702896645.925014893	Critical Applications	7004 → 7004   GCL1
1702896645.925514454	Excellent Effort	7003 → 7003   GCL1
1702896645.925514835	Excellent Effort	7003 → 7003   GCL1
1702896645.926014371	Critical Applications	7004 → 7004   GCL1
1702896645.926014755	Critical Applications	7004 → 7004   GCL1
1702896645.926514620	Excellent Effort	7003 → 7003   GCL1
1702896645.926515004	Excellent Effort	7003 → 7003   GCL1
1702896645.927014408	Critical Applications	7004 → 7004   GCL1
1702896645.927014792	Critical Applications	7004 → 7004   GCL1
1702896645.927514789	Excellent Effort	7003 → 7003   GCL1
1702896645.927515173	Excellent Effort	7003 → 7003   GCL1
1702896645.928168304	Excellent Effort	7003 → 7003   Extended
1702896645.928368780	Excellent Effort	7003 → 7003   Extended
1702896645.928569406	Excellent Effort	7003 → 7003   Extended
1702896645.929614835	Background	        7002 → 7002   GCL2
1702896645.929615219	Background	        7002 → 7002   GCL2
1702896645.930614643	Background	        7002 → 7002   GCL2
1702896645.930615027	Background	        7002 → 7002   GCL2
1702896645.931614604	Background	        7002 → 7002   GCL2
1702896645.931614991	Background	        7002 → 7002   GCL2

The extended packets only show up if the cycle_time and interval fields
are updated using cycle_time_correction. Without that patch, the extended 
packets are not received.


As for the negative truncation case, I just made the interval quite long 
and experimented with the GCL2 base-time value so that it hits the "next 
entry" case in advance_sched(). Then I checked my logs in 
get_cycle_time_correction() to see the truncation case and its values.
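
The range check mentioned in the v3 changelog can be pictured roughly as
below (hypothetical helper name, not the actual get_cycle_time_correction()
code): a negative correction is clamped so that the truncated entry never
ends up with a negative interval.

#include <stdint.h>

typedef int64_t s64;

/* Sketch of the guard described in the v3 changelog: keep a negative
 * (truncating) correction from exceeding the entry's interval. */
static s64 clamp_truncation(s64 correction, s64 entry_interval)
{
	if (correction < 0 && -correction > entry_interval)
		correction = -entry_interval;
	return correction;
}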

Based on your feedback on the tests required, I think that my existing 
truncation test is not enough, but the extension test case should be 
good, right?

Do let me know then; I'm more than willing to do more tests for the 
truncation case as per your suggestion, basically anything to help speed 
up the review of this patch series :)


Appreciate your suggestion and help a lot, thank you.
Abdul Rahim, Faizal Dec. 21, 2023, 5:57 a.m. UTC | #4
On 20/12/2023 1:02 am, Eric Dumazet wrote:
> On Tue, Dec 19, 2023 at 9:17 AM Faizal Rahim
> <faizal.abdul.rahim@linux.intel.com> wrote:
>>
>> According to IEEE Std. 802.1Q-2018 section Q.5 CycleTimeExtension,
>> the Cycle Time Extension variable allows this extension of the last old
>> cycle to be done in a defined way. If the last complete old cycle would
>> normally end less than OperCycleTimeExtension nanoseconds before the new
>> base time, then the last complete cycle before AdminBaseTime is reached
>> is extended so that it ends at AdminBaseTime.
>>
> 
> Hmm... Is this series fixing any of the pending syzbot bugs ?

Not really, I think? I found some bugs in this area when I tried to 
enable/fix software QBV cycle time extension for my project.
Paolo Abeni Dec. 21, 2023, 8:52 a.m. UTC | #5
On Tue, 2023-12-19 at 18:56 +0200, Vladimir Oltean wrote:
> How are you testing the behavior, and who reported the issues / what prompted
> the changes? Honestly I'm not very confident in the changes we're
> pushing down the linux-stable pipe. They don't look all that obvious, so
> I still think that having selftests would help.

I agree with Vladimir, this looks quite a bit too complex for a net fix
at this late point of the cycle. Given the time of year, I think it
could be too late even for net-next for this cycle.

It would be great if you could add some self-tests.

@Faizal: I understand your setup is quite complex, but it would be
great if you could come up with something similar that could fit 
tools/testing/selftests/net

Thanks!

Paolo
Abdul Rahim, Faizal Dec. 21, 2023, 10:12 a.m. UTC | #6
On 21/12/2023 4:52 pm, Paolo Abeni wrote:
> On Tue, 2023-12-19 at 18:56 +0200, Vladimir Oltean wrote:
>> How are you testing the behavior, and who reported the issues / what prompted
>> the changes? Honestly I'm not very confident in the changes we're
>> pushing down the linux-stable pipe. They don't look all that obvious, so
>> I still think that having selftests would help.
> 
> I agree with Vladimir, this looks quite a bit too complex for a net fix
> at this late point of the cycle. Given the period of the year, I think
> it could be too late even for net-next - for this cycle.
> 

Would it be better to just submit this to net-next and target the next 
cycle? I'm okay with that.

> It would be great if you could add some self-tests.
> 
> @Faizal: I understand your setup is quite complex, but it would be
> great if you could come-up with something similar that could fit
> tools/testing/selftests/net
> 
> Thanks!
> 
> Paolo
> 

Ohh, my bad, I thought selftest just meant developing and running the test 
case locally, but it seems it also refers to integrating the test into the 
existing selftest framework?
Got it. I'll explore that and cover the extension/truncation cases.
Vladimir Oltean Dec. 21, 2023, 1:35 p.m. UTC | #7
(sorry, I started writing this email yesterday; I noticed the conversation
has continued with Paolo)

On Wed, Dec 20, 2023 at 11:25:09AM +0800, Abdul Rahim, Faizal wrote:
> Hi Vladimir,
> 
> No worries, I truly appreciate the time you took to review and reply.
> 
> What prompted this in general is related to my project requirement to enable
> software QBV cycle time extension, so there's a validation team that created
> test cases to properly validate cycle time extension. Then I noticed the
> code doesn't handle truncation properly also, since it's the same code area,
> I just fixed it together.

We tend to do patch triage between 'net' and 'net-next' based on the
balance between the urgency/impact of the fix and its complexity.

While it's undeniable that there are issues with taprio's handling of
dynamic schedules, you've mentioned yourself that you only hit those
issues as part of some new development work - they weren't noticed by
end users. And fixing them is not quite trivial, there are also FIXMEs
in taprio which suggest so. I'm worried that the fixes may also impact
the code from stable trees in unforeseen ways.

So I would recommend moving the development of these fixes to 'net-next',
if possible.

> Each time before sending the patch for upstream review, I normally will run
> our test cases that only validates cycle time extension. For truncation, I
> modify the test cases on my own and put logs to check if the
> cycle_time_correction negative value is within the correct range. I probably
> should have mentioned sooner that I have tested this myself, sorry about
> that.
> 
> Example of the test I run for cycle time extension:
> 1) 2 boards connected back-to-back with i226 NIC. Board A as sender, Board B
> as receiver
> 2) Time is sync between 2 boards with phc2sys and ptp4l
> 3) Run GCL1 on Board A with cycle time extension enabled:
>     tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
>     num_tc 4 \
>     map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
>     queues 1@0 1@1 1@2 1@3 \
>     base-time 0 \
>     cycle-time-extension 1000000 \
>     sched-entry S 09 500000 \
>     sched-entry S 0a 500000 \
>     clockid CLOCK_TAI

Why do you need PTP sync? Cannot this test run between 2 veth ports?

> 4) capture tcp dump on Board B
> 5) Send packets from Board A to Board B with 200us interval via UDP Tai

What is udp_tai? This program?
https://gist.github.com/jeez/bd3afeff081ba64a695008dd8215866f

> 6) When packets reached Board B, trigger GCL2 to Board A:
>    CYCLETIME=1000000
>    APPLYTIME=1000000000 # 1s
>    CURRENT=$(date +%s%N)
>    BASE=$(( (CURRENT + APPLYTIME + (2*CYCLETIME)) - ((CURRENT + APPLYTIME)
>          % CYCLETIME) + ((CYCLETIME*3)/5) ))
>     tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
>     num_tc 4 \
>     map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
>     queues 1@0 1@1 1@2 1@3 \
>     base-time $BASE \
>     cycle-time-extension 1000000 \
>     sched-entry S oc 500000 \
>     sched-entry S 08 500000 \
>     clockid CLOCK_TAI
> 7) Analyze tcp dump data on Board B using wireshark, will observe packets
> receive pattern changed.
> 
> Note that I've hidden "Best Effort (default) 7001 → 7001" data from the
> wireshark log so that it's easier to see the pattern.
> 
>      TIMESTAMP               PRIORITY             PRIORITY    NOTES
> 
> 1702896645.925014509	Critical Applications	7004 → 7004   GCL1
> 1702896645.925014893	Critical Applications	7004 → 7004   GCL1
> 1702896645.925514454	Excellent Effort	7003 → 7003   GCL1
> 1702896645.925514835	Excellent Effort	7003 → 7003   GCL1
> 1702896645.926014371	Critical Applications	7004 → 7004   GCL1
> 1702896645.926014755	Critical Applications	7004 → 7004   GCL1
> 1702896645.926514620	Excellent Effort	7003 → 7003   GCL1
> 1702896645.926515004	Excellent Effort	7003 → 7003   GCL1
> 1702896645.927014408	Critical Applications	7004 → 7004   GCL1
> 1702896645.927014792	Critical Applications	7004 → 7004   GCL1
> 1702896645.927514789	Excellent Effort	7003 → 7003   GCL1
> 1702896645.927515173	Excellent Effort	7003 → 7003   GCL1
> 1702896645.928168304	Excellent Effort	7003 → 7003   Extended
> 1702896645.928368780	Excellent Effort	7003 → 7003   Extended
> 1702896645.928569406	Excellent Effort	7003 → 7003   Extended
> 1702896645.929614835	Background	        7002 → 7002   GCL2
> 1702896645.929615219	Background	        7002 → 7002   GCL2
> 1702896645.930614643	Background	        7002 → 7002   GCL2
> 1702896645.930615027	Background	        7002 → 7002   GCL2
> 1702896645.931614604	Background	        7002 → 7002   GCL2
> 1702896645.931614991	Background	        7002 → 7002   GCL2
> 
> The extended packets only will happen if cycle_time and interval fields
> are updated using cycle_time_correction. Without that patch, the extended
> packets are not received.
> 
> 
> As for the negative truncation case, I just make the interval quite long,
> and experimented with GCL2 base-time value so that it hits the "next entry"
> in advance_sched(). Then I checked my logs in get_cycle_time_correction() to
> see the truncation case and its values.
> 
> Based on your feedback of the test required, I think that my existing
> truncation test is not enough, but the extension test case part should be
> good right ?
> 
> Do let me know then, I'm more than willing to do more test for the
> truncation case as per your suggestion, well basically, anything to help
> speed up the patches series review process :)
> 
> 
> Appreciate your suggestion and help a lot, thank you.

Do you think you could automate a test suite which only measures software
TX timestamps and works on veth?

I prepared this very small patch set just to give you a head start
(the skeleton). You'll still have to add the logic for individual tests.
https://lore.kernel.org/netdev/20231221132521.2314811-1-vladimir.oltean@nxp.com/
I'm terribly sorry, but this is the most I can do due to my current lack
of spare time, unfortunately.

If you've never run kselftests before, you'll need some kernel options
to enable VRF support. From my notes I have this list below, but there
may be more missing options.

CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_NET_L3_MASTER_DEV=y
CONFIG_NET_VRF=y

Let me know if you face any trouble or if I can help in some way.
Thanks for doing this.
Abdul Rahim, Faizal Dec. 29, 2023, 2:15 a.m. UTC | #8
Hi Vladimir,

Sorry for the late reply, was on leave.

On 21/12/2023 9:35 pm, Vladimir Oltean wrote:
> (sorry, I started writing this email yesterday, I noticed the
> conversation continued with Paolo)
> 
> On Wed, Dec 20, 2023 at 11:25:09AM +0800, Abdul Rahim, Faizal wrote:
>> Hi Vladimir,
>>
>> No worries, I truly appreciate the time you took to review and reply.
>>
>> What prompted this in general is related to my project requirement to enable
>> software QBV cycle time extension, so there's a validation team that created
>> test cases to properly validate cycle time extension. Then I noticed the
>> code doesn't handle truncation properly also, since it's the same code area,
>> I just fixed it together.
> 
> We tend to do patch triage between 'net' and 'net-next' based on the
> balance between the urgency/impact of the fix and its complexity.
> 
> While it's undoubtable that there are issues with taprio's handling of
> dynamic schedules, you've mentioned yourself that you only hit those
> issues as part of some new development work - they weren't noticed by
> end users. And fixing them is not quite trivial, there are also FIXMEs
> in taprio which suggest so. I'm worried that the fixes may also impact
> the code from stable trees in unforeseen ways.
> 
> So I would recommend moving the development of these fixes to 'net-next',
> if possible.

Got it, will move it to net-next.

>> Each time before sending the patch for upstream review, I normally will run
>> our test cases that only validates cycle time extension. For truncation, I
>> modify the test cases on my own and put logs to check if the
>> cycle_time_correction negative value is within the correct range. I probably
>> should have mentioned sooner that I have tested this myself, sorry about
>> that.
>>
>> Example of the test I run for cycle time extension:
>> 1) 2 boards connected back-to-back with i226 NIC. Board A as sender, Board B
>> as receiver
>> 2) Time is sync between 2 boards with phc2sys and ptp4l
>> 3) Run GCL1 on Board A with cycle time extension enabled:
>>      tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
>>      num_tc 4 \
>>      map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
>>      queues 1@0 1@1 1@2 1@3 \
>>      base-time 0 \
>>      cycle-time-extension 1000000 \
>>      sched-entry S 09 500000 \
>>      sched-entry S 0a 500000 \
>>      clockid CLOCK_TAI
> 
> Why do you need PTP sync? Cannot this test run between 2 veth ports?
PTP sync is probably not needed, but the test case already has it (I just 
reuse the test case); I assume it's there to simulate a complete use case 
of a real user.
Let me explore testing using veth ports; I haven't tried this before.

> 
>> 4) capture tcp dump on Board B
>> 5) Send packets from Board A to Board B with 200us interval via UDP Tai
> 
> What is udp_tai? This program?
> https://gist.github.com/jeez/bd3afeff081ba64a695008dd8215866f
> 

Yes, the base app looks similar to the one I use, but the one I use is 
modified. It's used to transmit UDP packets.

>> 6) When packets reached Board B, trigger GCL2 to Board A:
>>     CYCLETIME=1000000
>>     APPLYTIME=1000000000 # 1s
>>     CURRENT=$(date +%s%N)
>>     BASE=$(( (CURRENT + APPLYTIME + (2*CYCLETIME)) - ((CURRENT + APPLYTIME)
>>           % CYCLETIME) + ((CYCLETIME*3)/5) ))
>>      tc qdisc replace dev $INTERFACE parent root handle 100 taprio \
>>      num_tc 4 \
>>      map 3 2 1 0 3 3 3 3 3 3 3 3 3 3 3 3 \
>>      queues 1@0 1@1 1@2 1@3 \
>>      base-time $BASE \
>>      cycle-time-extension 1000000 \
>>      sched-entry S oc 500000 \
>>      sched-entry S 08 500000 \
>>      clockid CLOCK_TAI
>> 7) Analyze tcp dump data on Board B using wireshark, will observe packets
>> receive pattern changed.
>>
>> Note that I've hidden "Best Effort (default) 7001 → 7001" data from the
>> wireshark log so that it's easier to see the pattern.
>>
>>       TIMESTAMP               PRIORITY             PRIORITY    NOTES
>>
>> 1702896645.925014509	Critical Applications	7004 → 7004   GCL1
>> 1702896645.925014893	Critical Applications	7004 → 7004   GCL1
>> 1702896645.925514454	Excellent Effort	7003 → 7003   GCL1
>> 1702896645.925514835	Excellent Effort	7003 → 7003   GCL1
>> 1702896645.926014371	Critical Applications	7004 → 7004   GCL1
>> 1702896645.926014755	Critical Applications	7004 → 7004   GCL1
>> 1702896645.926514620	Excellent Effort	7003 → 7003   GCL1
>> 1702896645.926515004	Excellent Effort	7003 → 7003   GCL1
>> 1702896645.927014408	Critical Applications	7004 → 7004   GCL1
>> 1702896645.927014792	Critical Applications	7004 → 7004   GCL1
>> 1702896645.927514789	Excellent Effort	7003 → 7003   GCL1
>> 1702896645.927515173	Excellent Effort	7003 → 7003   GCL1
>> 1702896645.928168304	Excellent Effort	7003 → 7003   Extended
>> 1702896645.928368780	Excellent Effort	7003 → 7003   Extended
>> 1702896645.928569406	Excellent Effort	7003 → 7003   Extended
>> 1702896645.929614835	Background	        7002 → 7002   GCL2
>> 1702896645.929615219	Background	        7002 → 7002   GCL2
>> 1702896645.930614643	Background	        7002 → 7002   GCL2
>> 1702896645.930615027	Background	        7002 → 7002   GCL2
>> 1702896645.931614604	Background	        7002 → 7002   GCL2
>> 1702896645.931614991	Background	        7002 → 7002   GCL2
>>
>> The extended packets only will happen if cycle_time and interval fields
>> are updated using cycle_time_correction. Without that patch, the extended
>> packets are not received.
>>
>>
>> As for the negative truncation case, I just make the interval quite long,
>> and experimented with GCL2 base-time value so that it hits the "next entry"
>> in advance_sched(). Then I checked my logs in get_cycle_time_correction() to
>> see the truncation case and its values.
>>
>> Based on your feedback of the test required, I think that my existing
>> truncation test is not enough, but the extension test case part should be
>> good right ?
>>
>> Do let me know then, I'm more than willing to do more test for the
>> truncation case as per your suggestion, well basically, anything to help
>> speed up the patches series review process :)
>>
>>
>> Appreciate your suggestion and help a lot, thank you.
> 
> Do you think you could automate a test suite which only measures software
> TX timestamps and works on veth?
> 
> I prepared this very small patch set just to give you a head start
> (the skeleton). You'll still have to add the logic for individual tests.
> https://lore.kernel.org/netdev/20231221132521.2314811-1-vladimir.oltean@nxp.com/
> I'm terribly sorry, but this is the most I can do due to my current lack
> of spare time, unfortunately.
> 
> If you've never run kselftests before, you'll need some kernel options
> to enable VRF support. From my notes I have this list below, but there
> may be more missing options.
> 
> CONFIG_IP_MULTIPLE_TABLES=y
> CONFIG_NET_L3_MASTER_DEV=y
> CONFIG_NET_VRF=y
> 
> Let me know if you face any trouble or if I can help in some way.
> Thanks for doing this.

Thank you so much for helping with this selftest skeleton! I'll explore it 
and continue from where you left off. Appreciate it.