Message ID | 20230130173145.475943-13-vladimir.oltean@nxp.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Headers | show |
Series | ENETC mqprio/taprio cleanup | expand |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Clearly marked for net-next, async |
netdev/fixes_present | success | Fixes tag not required for -next series |
netdev/subject_prefix | success | Link |
netdev/cover_letter | success | Series has a cover letter |
netdev/patch_count | success | Link |
netdev/header_inline | success | No static functions without inline keyword in header files |
netdev/build_32bit | success | Errors and warnings before: 73 this patch: 73 |
netdev/cc_maintainers | success | CCed 9 of 9 maintainers |
netdev/build_clang | success | Errors and warnings before: 3 this patch: 3 |
netdev/module_param | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/check_selftest | success | No net selftest shell script |
netdev/verify_fixes | success | No Fixes tag |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 91 this patch: 91 |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 39 lines checked |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/source_inline | success | Was 0 now: 0 |
On Mon, Jan 30, 2023 at 07:31:42PM +0200, Vladimir Oltean wrote:
> The taprio offload does not currently pass the mqprio queue configuration
> down to the offloading device driver. So the driver cannot act upon the
> TXQ counts/offsets per TC, or upon the prio->tc map. It was probably
> assumed that the driver only wants to offload num_tc (see
> TC_MQPRIO_HW_OFFLOAD_TCS), which it can get from netdev_get_num_tc(),
> but there's clearly more to the mqprio configuration than that.
>
> To remedy that, we need to actually reconstruct a struct
> tc_mqprio_qopt_offload to pass as part of the tc_taprio_qopt_offload.
> The problem is that taprio doesn't keep a persistent reference to the
> mqprio queue structure in its own struct taprio_sched, instead it just
> applies the contents of that to the netdev state (prio:tc map, per-TC
> TXQ counts and offsets, num_tc etc). Maybe it's easier to understand
> why, when we look at the size of struct tc_mqprio_qopt_offload: 352
> bytes on arm64. Keeping such a large structure would throw off the
> memory accesses in struct taprio_sched no matter where we put it.
> So we prefer to dynamically reconstruct the mqprio offload structure
> based on netdev information, rather than saving a copy of it.
>
> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>

Reviewed-by: Simon Horman <simon.horman@corigine.com>
diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 02e3ccfbc7d1..ace8be520fb0 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -187,6 +187,7 @@ struct tc_taprio_sched_entry {
 };
 
 struct tc_taprio_qopt_offload {
+	struct tc_mqprio_qopt_offload mqprio;
 	u8 enable;
 	ktime_t base_time;
 	u64 cycle_time;
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index c322a61eaeea..f40016275384 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -1225,6 +1225,25 @@ static void taprio_sched_to_offload(struct net_device *dev,
 	offload->num_entries = i;
 }
 
+static void
+taprio_mqprio_qopt_reconstruct(struct net_device *dev,
+			       struct tc_mqprio_qopt_offload *mqprio)
+{
+	struct tc_mqprio_qopt *qopt = &mqprio->qopt;
+	int num_tc = netdev_get_num_tc(dev);
+	int tc, prio;
+
+	qopt->num_tc = num_tc;
+
+	for (prio = 0; prio <= TC_BITMASK; prio++)
+		qopt->prio_tc_map[prio] = netdev_get_prio_tc_map(dev, prio);
+
+	for (tc = 0; tc < num_tc; tc++) {
+		qopt->count[tc] = dev->tc_to_txq[tc].count;
+		qopt->offset[tc] = dev->tc_to_txq[tc].offset;
+	}
+}
+
 static int taprio_enable_offload(struct net_device *dev,
 				 struct taprio_sched *q,
 				 struct sched_gate_list *sched,
@@ -1261,6 +1280,7 @@ static int taprio_enable_offload(struct net_device *dev,
 		return -ENOMEM;
 	}
 	offload->enable = 1;
+	taprio_mqprio_qopt_reconstruct(dev, &offload->mqprio);
 	taprio_sched_to_offload(dev, sched, offload);
 
 	for (tc = 0; tc < TC_MAX_QUEUE; tc++)
The taprio offload does not currently pass the mqprio queue configuration
down to the offloading device driver. So the driver cannot act upon the
TXQ counts/offsets per TC, or upon the prio->tc map. It was probably
assumed that the driver only wants to offload num_tc (see
TC_MQPRIO_HW_OFFLOAD_TCS), which it can get from netdev_get_num_tc(),
but there's clearly more to the mqprio configuration than that.

To remedy that, we need to actually reconstruct a struct
tc_mqprio_qopt_offload to pass as part of the tc_taprio_qopt_offload.
The problem is that taprio doesn't keep a persistent reference to the
mqprio queue structure in its own struct taprio_sched, instead it just
applies the contents of that to the netdev state (prio:tc map, per-TC
TXQ counts and offsets, num_tc etc). Maybe it's easier to understand
why, when we look at the size of struct tc_mqprio_qopt_offload: 352
bytes on arm64. Keeping such a large structure would throw off the
memory accesses in struct taprio_sched no matter where we put it.
So we prefer to dynamically reconstruct the mqprio offload structure
based on netdev information, rather than saving a copy of it.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
v2->v4: none
v1->v2: reconstruct the mqprio queue configuration structure

 include/net/pkt_sched.h |  1 +
 net/sched/sch_taprio.c  | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)