Message ID | 20230210111051.13654-4-hkelam@marvell.com (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | Netdev Maintainers |
Series | octeontx2-pf: HTB offload support |
Context | Check | Description |
---|---|---|
netdev/tree_selection | success | Clearly marked for net-next, async |
netdev/fixes_present | success | Fixes tag not required for -next series |
netdev/subject_prefix | success | Link |
netdev/cover_letter | success | Series has a cover letter |
netdev/patch_count | success | Link |
netdev/header_inline | success | No static functions without inline keyword in header files |
netdev/build_32bit | success | Errors and warnings before: 0 this patch: 0 |
netdev/cc_maintainers | success | CCed 12 of 12 maintainers |
netdev/build_clang | success | Errors and warnings before: 3 this patch: 3 |
netdev/module_param | success | Was 0 now: 0 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/check_selftest | success | No net selftest shell script |
netdev/verify_fixes | success | No Fixes tag |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 13 this patch: 13 |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 162 lines checked |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/source_inline | success | Was 0 now: 0 |
On Fri, Feb 10, 2023 at 04:40:50PM +0530, Hariprasad Kelam wrote:
> Multiple transmit scheduler queues can be configured at different
> levels to support traffic shaping and scheduling. But on txschq free
> requests, the transmit schedular config in hardware is not getting
> reset. This patch adds support to reset the stale config.
>
> The txschq alloc response handler updates the default txschq
> array which is used to configure the transmit packet path from
> SMQ to TL2 levels. However, for new features such as QoS offload
> that requires it's own txschq queues, this handler is still
> invoked and results in undefined behavior. The code now handles
> txschq response in the mbox caller function.
>
> Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> ---
>  .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 45 +++++++++++++++++++
>  .../marvell/octeontx2/nic/otx2_common.c       | 36 ++++++++-------
>  .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  4 --
>  .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |  4 --
>  4 files changed, 64 insertions(+), 25 deletions(-)

...

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index 73c8d36b6e12..4cb3fab8baae 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -716,7 +716,8 @@ EXPORT_SYMBOL(otx2_smq_flush);
>  int otx2_txsch_alloc(struct otx2_nic *pfvf)
>  {
>  	struct nix_txsch_alloc_req *req;
> -	int lvl;
> +	struct nix_txsch_alloc_rsp *rsp;
> +	int lvl, schq, rc;
>
>  	/* Get memory to put this msg */
>  	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
> @@ -726,8 +727,24 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
>  	/* Request one schq per level */
>  	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
>  		req->schq[lvl] = 1;
> +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> +	if (rc)
> +		return rc;
>
> -	return otx2_sync_mbox_msg(&pfvf->mbox);
> +	rsp = (struct nix_txsch_alloc_rsp *)
> +	       otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
> +	if (IS_ERR(rsp))
> +		return PTR_ERR(rsp);
> +
> +	/* Setup transmit scheduler list */
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> +		for (schq = 0; schq < rsp->schq[lvl]; schq++)
> +			pfvf->hw.txschq_list[lvl][schq] =
> +				rsp->schq_list[lvl][schq];
> +
> +	pfvf->hw.txschq_link_cfg_lvl  = rsp->link_cfg_lvl;

nit: extra whitespace before '='

> +
> +	return 0;
> }
> On Fri, Feb 10, 2023 at 04:40:50PM +0530, Hariprasad Kelam wrote:
> > Multiple transmit scheduler queues can be configured at different
> > levels to support traffic shaping and scheduling. But on txschq free
> > requests, the transmit schedular config in hardware is not getting
> > reset. This patch adds support to reset the stale config.
> >
> > The txschq alloc response handler updates the default txschq array
> > which is used to configure the transmit packet path from SMQ to TL2
> > levels. However, for new features such as QoS offload that requires
> > it's own txschq queues, this handler is still invoked and results in
> > undefined behavior. The code now handles txschq response in the mbox
> > caller function.
> >
> > Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
> > Signed-off-by: Naveen Mamindlapalli <naveenm@marvell.com>
> > Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
> > ---
> >  .../ethernet/marvell/octeontx2/af/rvu_nix.c   | 45
> +++++++++++++++++++
> >  .../marvell/octeontx2/nic/otx2_common.c       | 36 ++++++++-------
> >  .../ethernet/marvell/octeontx2/nic/otx2_pf.c  |  4 --
> >  .../ethernet/marvell/octeontx2/nic/otx2_vf.c  |  4 --
> >  4 files changed, 64 insertions(+), 25 deletions(-)
>
> ...
>
> > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > index 73c8d36b6e12..4cb3fab8baae 100644
> > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> > @@ -716,7 +716,8 @@ EXPORT_SYMBOL(otx2_smq_flush); int
> > otx2_txsch_alloc(struct otx2_nic *pfvf) {
> >  	struct nix_txsch_alloc_req *req;
> > -	int lvl;
> > +	struct nix_txsch_alloc_rsp *rsp;
> > +	int lvl, schq, rc;
> >
> >  	/* Get memory to put this msg */
> >  	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
> > @@ -726,8 +727,24 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
> >  	/* Request one schq per level */
> >  	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> >  		req->schq[lvl] = 1;
> > +	rc = otx2_sync_mbox_msg(&pfvf->mbox);
> > +	if (rc)
> > +		return rc;
> >
> > -	return otx2_sync_mbox_msg(&pfvf->mbox);
> > +	rsp = (struct nix_txsch_alloc_rsp *)
> > +	       otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
> > +	if (IS_ERR(rsp))
> > +		return PTR_ERR(rsp);
> > +
> > +	/* Setup transmit scheduler list */
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> > +		for (schq = 0; schq < rsp->schq[lvl]; schq++)
> > +			pfvf->hw.txschq_list[lvl][schq] =
> > +				rsp->schq_list[lvl][schq];
> > +
> > +	pfvf->hw.txschq_link_cfg_lvl  = rsp->link_cfg_lvl;
>
> nit: extra whitespace before '='
>
ACK, will fix in next version.

Thanks,
Hariprasad k

> > +
> > +	return 0;
> > }
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 89e94569e74c..c11859999074 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -1677,6 +1677,42 @@ handle_txschq_shaper_update(struct rvu *rvu, int blkaddr, int nixlf,
 	return true;
 }
 
+static void nix_reset_tx_schedule(struct rvu *rvu, int blkaddr,
+				  int lvl, int schq)
+{
+	u64 tlx_parent = 0, tlx_schedule = 0;
+
+	switch (lvl) {
+	case NIX_TXSCH_LVL_TL2:
+		tlx_parent = NIX_AF_TL2X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL2X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		tlx_parent = NIX_AF_TL3X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL3X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		tlx_parent = NIX_AF_TL4X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL4X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_MDQ:
+		/* no need to reset SMQ_CFG as HW clears this CSR
+		 * on SMQ flush
+		 */
+		tlx_parent = NIX_AF_MDQX_PARENT(schq);
+		tlx_schedule = NIX_AF_MDQX_SCHEDULE(schq);
+		break;
+	default:
+		return;
+	}
+
+	if (tlx_parent)
+		rvu_write64(rvu, blkaddr, tlx_parent, 0x0);
+
+	if (tlx_schedule)
+		rvu_write64(rvu, blkaddr, tlx_schedule, 0x0);
+}
+
 /* Disable shaping of pkts by a scheduler queue
  * at a given scheduler level.
  */
@@ -2025,6 +2061,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu,
 				pfvf_map[schq] = TXSCH_MAP(pcifunc, 0);
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 		}
 
 		for (idx = 0; idx < req->schq[lvl]; idx++) {
@@ -2034,6 +2071,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu,
 				pfvf_map[schq] = TXSCH_MAP(pcifunc, 0);
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 		}
 	}
 
@@ -2122,6 +2160,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 				continue;
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_clear_tx_xoff(rvu, blkaddr, lvl, schq);
+			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
 		}
 	}
 	nix_clear_tx_xoff(rvu, blkaddr, NIX_TXSCH_LVL_TL1,
@@ -2160,6 +2199,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 		for (schq = 0; schq < txsch->schq.max; schq++) {
 			if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc)
 				continue;
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 			rvu_free_rsrc(&txsch->schq, schq);
 			txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE);
 		}
@@ -2219,6 +2259,9 @@ static int nix_txschq_free_one(struct rvu *rvu,
 	 */
 	nix_clear_tx_xoff(rvu, blkaddr, lvl, schq);
 
+	nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
+	nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+
 	/* Flush if it is a SMQ. Onus of disabling
 	 * TL2/3 queue links before SMQ flush is on user
 	 */
@@ -2228,6 +2271,8 @@ static int nix_txschq_free_one(struct rvu *rvu,
 		goto err;
 	}
 
+	nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
+
 	/* Free the resource */
 	rvu_free_rsrc(&txsch->schq, schq);
 	txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 73c8d36b6e12..4cb3fab8baae 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -716,7 +716,8 @@ EXPORT_SYMBOL(otx2_smq_flush);
 int otx2_txsch_alloc(struct otx2_nic *pfvf)
 {
 	struct nix_txsch_alloc_req *req;
-	int lvl;
+	struct nix_txsch_alloc_rsp *rsp;
+	int lvl, schq, rc;
 
 	/* Get memory to put this msg */
 	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
@@ -726,8 +727,24 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
 	/* Request one schq per level */
 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
 		req->schq[lvl] = 1;
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc)
+		return rc;
 
-	return otx2_sync_mbox_msg(&pfvf->mbox);
+	rsp = (struct nix_txsch_alloc_rsp *)
+	       otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+	if (IS_ERR(rsp))
+		return PTR_ERR(rsp);
+
+	/* Setup transmit scheduler list */
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
+		for (schq = 0; schq < rsp->schq[lvl]; schq++)
+			pfvf->hw.txschq_list[lvl][schq] =
+				rsp->schq_list[lvl][schq];
+
+	pfvf->hw.txschq_link_cfg_lvl  = rsp->link_cfg_lvl;
+
+	return 0;
 }
 
 int otx2_txschq_stop(struct otx2_nic *pfvf)
@@ -1641,21 +1658,6 @@ void mbox_handler_cgx_fec_stats(struct otx2_nic *pfvf,
 	pfvf->hw.cgx_fec_uncorr_blks += rsp->fec_uncorr_blks;
 }
 
-void mbox_handler_nix_txsch_alloc(struct otx2_nic *pf,
-				  struct nix_txsch_alloc_rsp *rsp)
-{
-	int lvl, schq;
-
-	/* Setup transmit scheduler list */
-	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
-		for (schq = 0; schq < rsp->schq[lvl]; schq++)
-			pf->hw.txschq_list[lvl][schq] =
-				rsp->schq_list[lvl][schq];
-
-	pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
-}
-EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc);
-
 void mbox_handler_npa_lf_alloc(struct otx2_nic *pfvf,
 			       struct npa_lf_alloc_rsp *rsp)
 {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 3f88975e13c1..9ed24bff6b2a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -792,10 +792,6 @@ static void otx2_process_pfaf_mbox_msg(struct otx2_nic *pf,
 	case MBOX_MSG_NIX_LF_ALLOC:
 		mbox_handler_nix_lf_alloc(pf, (struct nix_lf_alloc_rsp *)msg);
 		break;
-	case MBOX_MSG_NIX_TXSCH_ALLOC:
-		mbox_handler_nix_txsch_alloc(pf,
-					     (struct nix_txsch_alloc_rsp *)msg);
-		break;
 	case MBOX_MSG_NIX_BP_ENABLE:
 		mbox_handler_nix_bp_enable(pf, (struct nix_bp_cfg_rsp *)msg);
 		break;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index c84804d16b8a..b7af02b12e05 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -70,10 +70,6 @@ static void otx2vf_process_vfaf_mbox_msg(struct otx2_nic *vf,
 	case MBOX_MSG_NIX_LF_ALLOC:
 		mbox_handler_nix_lf_alloc(vf, (struct nix_lf_alloc_rsp *)msg);
 		break;
-	case MBOX_MSG_NIX_TXSCH_ALLOC:
-		mbox_handler_nix_txsch_alloc(vf,
-					     (struct nix_txsch_alloc_rsp *)msg);
-		break;
 	case MBOX_MSG_NIX_BP_ENABLE:
 		mbox_handler_nix_bp_enable(vf, (struct nix_bp_cfg_rsp *)msg);
 		break;
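For reference, a minimal caller-side sketch of what this change implies for the PF/VF init path. It is illustrative only and not part of the patch: the wrapper function name below is hypothetical, while otx2_txsch_alloc(), the pfvf->mbox.lock mutex and pfvf->hw.txschq_list[][] are the driver symbols shown in the diff above. It illustrates why the MBOX_MSG_NIX_TXSCH_ALLOC cases can be dropped from the mbox message handlers: the allocation path now blocks on the AF reply and fills the scheduler queue list itself.

/* Illustrative sketch only -- not part of this patch. */
#include "otx2_common.h"	/* struct otx2_nic, otx2_txsch_alloc() */

static int example_setup_tx_schedulers(struct otx2_nic *pfvf)
{
	int err;

	mutex_lock(&pfvf->mbox.lock);

	/* After this patch, otx2_txsch_alloc() sends NIX_TXSCH_ALLOC,
	 * waits for the AF reply and copies rsp->schq_list[][] into
	 * pfvf->hw.txschq_list[][] before returning, so no separate
	 * MBOX_MSG_NIX_TXSCH_ALLOC handler is needed.
	 */
	err = otx2_txsch_alloc(pfvf);

	mutex_unlock(&pfvf->mbox.lock);
	if (err)
		return err;

	/* pfvf->hw.txschq_list[lvl][0] now holds one scheduler queue per
	 * level for configuring the SMQ-to-TL2 transmit path.
	 */
	return 0;
}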