Message ID | 20230513085143.3289-5-hkelam@marvell.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 6b4b2ded9c4282deea421eef144ab0ced954721c |
Delegated to: | Netdev Maintainers |
Series | octeontx2-pf: HTB offload support |
Context | Check | Description |
---|---|---|
netdev/series_format | success | Posting correctly formatted |
netdev/tree_selection | success | Clearly marked for net-next, async |
netdev/fixes_present | success | Fixes tag not required for -next series |
netdev/header_inline | success | No static functions without inline keyword in header files |
netdev/build_32bit | success | Errors and warnings before: 8 this patch: 8 |
netdev/cc_maintainers | success | CCed 11 of 11 maintainers |
netdev/build_clang | success | Errors and warnings before: 8 this patch: 8 |
netdev/verify_signedoff | success | Signed-off-by tag matches author and committer |
netdev/deprecated_api | success | None detected |
netdev/check_selftest | success | No net selftest shell script |
netdev/verify_fixes | success | No Fixes tag |
netdev/build_allmodconfig_warn | success | Errors and warnings before: 20 this patch: 20 |
netdev/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 247 lines checked |
netdev/kdoc | success | Errors and warnings before: 0 this patch: 0 |
netdev/source_inline | success | Was 0 now: 0 |
Hi Hariprasad,

On Sat, May 13, 2023 at 02:21:39PM +0530, Hariprasad Kelam wrote:
> 1. Upon txschq free request, the transmit schedular config in hardware
> is not getting reset. This patch adds necessary changes to do the same.

nit: s/schedular/scheduler/

> 2. Current implementation calls txschq alloc during interface
> initialization and in response handler updates the default txschq array.
> This creates a problem for htb offload where txsch alloc will be called
> for every tc class. This patch addresses the issue by reading txschq
> response in mbox caller function instead in the response handler.
>
> 3. Current otx2_txschq_stop routine tries to free all txschq nodes
> allocated to the interface. This creates a problem for htb offload.
> This patch introduces the otx2_txschq_free_one to free txschq in a
> given level.

This patch seems to be doing three things.
Could it be split into three patches?

...

> -int otx2_txschq_stop(struct otx2_nic *pfvf)
> +void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
> {
> 	struct nix_txsch_free_req *free_req;
> -	int lvl, schq, err;
> +	int err;
>
> 	mutex_lock(&pfvf->mbox.lock);
> -	/* Free the transmit schedulers */
> +
> 	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);

Mainly for my own edification:

- otx2_mbox_alloc_msg_nix_txsch_free is created via the
  M(_name, _id, _fn_name, _req_type, _rsp_type) macro
  around line 844 of otx2_common.h
- It calls otx2_mbox_alloc_msg_rsp
- Which does not call any allocation functions such as kmalloc

> 	if (!free_req) {
> 		mutex_unlock(&pfvf->mbox.lock);
> -		return -ENOMEM;
> +		netdev_err(pfvf->netdev,
> +			   "Failed alloc txschq free req\n");

I think that given the above it's ok to log an error here,
as the allocation core won't have (because it's not used here).
But I wonder if it would be more consistent with how
allocation errors are usually handled to move the logging into
otx2_mbox_alloc_msg_rsp().

> +		return;
> 	}
>
> -	free_req->flags = TXSCHQ_FREE_ALL;
> +	free_req->schq_lvl = lvl;
> +	free_req->schq = schq;
> +
> 	err = otx2_sync_mbox_msg(&pfvf->mbox);
> +	if (err) {
> +		netdev_err(pfvf->netdev,
> +			   "Failed stop txschq %d at level %d\n", schq, lvl);
> +	}
> +
> 	mutex_unlock(&pfvf->mbox.lock);
> +}
> +
> +void otx2_txschq_stop(struct otx2_nic *pfvf)
> +{
> +	int lvl, schq;
> +
> +	/* free non QOS TLx nodes */
> +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> +		otx2_txschq_free_one(pfvf, lvl,
> +				     pfvf->hw.txschq_list[lvl][0]);
>
> 	/* Clear the txschq list */
> 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> 		for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
> 			pfvf->hw.txschq_list[lvl][schq] = 0;
> 	}
> -	return err;
> +

nit: no blank line here.

> }
>
> void otx2_sqb_flush(struct otx2_nic *pfvf)

...
On Tue, May 16, 2023 at 10:05:54AM +0200, Simon Horman wrote:
> Hi Hariprasad,
>
> On Sat, May 13, 2023 at 02:21:39PM +0530, Hariprasad Kelam wrote:
> > 1. Upon txschq free request, the transmit schedular config in hardware
> > is not getting reset. This patch adds necessary changes to do the same.
>
> nit: s/schedular/scheduler/
>
> > 2. Current implementation calls txschq alloc during interface
> > initialization and in response handler updates the default txschq array.
> > This creates a problem for htb offload where txsch alloc will be called
> > for every tc class. This patch addresses the issue by reading txschq
> > response in mbox caller function instead in the response handler.
> >
> > 3. Current otx2_txschq_stop routine tries to free all txschq nodes
> > allocated to the interface. This creates a problem for htb offload.
> > This patch introduces the otx2_txschq_free_one to free txschq in a
> > given level.
>
> This patch seems to be doing three things.
> Could it be split into three patches?

I see that I was a bit late with my review as the series was applied yesterday.

> ...
>
> > -int otx2_txschq_stop(struct otx2_nic *pfvf)
> > +void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
> > {
> > 	struct nix_txsch_free_req *free_req;
> > -	int lvl, schq, err;
> > +	int err;
> >
> > 	mutex_lock(&pfvf->mbox.lock);
> > -	/* Free the transmit schedulers */
> > +
> > 	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);
>
> Mainly for my own edification:
>
> - otx2_mbox_alloc_msg_nix_txsch_free is created via the
>   M(_name, _id, _fn_name, _req_type, _rsp_type) macro
>   around line 844 of otx2_common.h
> - It calls otx2_mbox_alloc_msg_rsp
> - Which does not call any allocation functions such as kmalloc
>
> > 	if (!free_req) {
> > 		mutex_unlock(&pfvf->mbox.lock);
> > -		return -ENOMEM;
> > +		netdev_err(pfvf->netdev,
> > +			   "Failed alloc txschq free req\n");
>
> I think that given the above it's ok to log an error here,
> as the allocation core won't have (because it's not used here).
> But I wonder if it would be more consistent with how
> allocation errors are usually handled to move the logging into
> otx2_mbox_alloc_msg_rsp().
>
> > +		return;
> > 	}
> >
> > -	free_req->flags = TXSCHQ_FREE_ALL;
> > +	free_req->schq_lvl = lvl;
> > +	free_req->schq = schq;
> > +
> > 	err = otx2_sync_mbox_msg(&pfvf->mbox);
> > +	if (err) {
> > +		netdev_err(pfvf->netdev,
> > +			   "Failed stop txschq %d at level %d\n", schq, lvl);
> > +	}
> > +
> > 	mutex_unlock(&pfvf->mbox.lock);
> > +}
> > +
> > +void otx2_txschq_stop(struct otx2_nic *pfvf)
> > +{
> > +	int lvl, schq;
> > +
> > +	/* free non QOS TLx nodes */
> > +	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
> > +		otx2_txschq_free_one(pfvf, lvl,
> > +				     pfvf->hw.txschq_list[lvl][0]);
> >
> > 	/* Clear the txschq list */
> > 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
> > 		for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
> > 			pfvf->hw.txschq_list[lvl][schq] = 0;
> > 	}
> > -	return err;
> > +
>
> nit: no blank line here.
>
> > }
> >
> > void otx2_sqb_flush(struct otx2_nic *pfvf)
>
> ...
```diff
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 4ad707e758b9..79ed7af0b0a4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -1691,6 +1691,42 @@ handle_txschq_shaper_update(struct rvu *rvu, int blkaddr, int nixlf,
 	return true;
 }
 
+static void nix_reset_tx_schedule(struct rvu *rvu, int blkaddr,
+				  int lvl, int schq)
+{
+	u64 tlx_parent = 0, tlx_schedule = 0;
+
+	switch (lvl) {
+	case NIX_TXSCH_LVL_TL2:
+		tlx_parent = NIX_AF_TL2X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL2X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_TL3:
+		tlx_parent = NIX_AF_TL3X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL3X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_TL4:
+		tlx_parent = NIX_AF_TL4X_PARENT(schq);
+		tlx_schedule = NIX_AF_TL4X_SCHEDULE(schq);
+		break;
+	case NIX_TXSCH_LVL_MDQ:
+		/* no need to reset SMQ_CFG as HW clears this CSR
+		 * on SMQ flush
+		 */
+		tlx_parent = NIX_AF_MDQX_PARENT(schq);
+		tlx_schedule = NIX_AF_MDQX_SCHEDULE(schq);
+		break;
+	default:
+		return;
+	}
+
+	if (tlx_parent)
+		rvu_write64(rvu, blkaddr, tlx_parent, 0x0);
+
+	if (tlx_schedule)
+		rvu_write64(rvu, blkaddr, tlx_schedule, 0x0);
+}
+
 /* Disable shaping of pkts by a scheduler queue
  * at a given scheduler level.
  */
@@ -2039,6 +2075,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu,
 			pfvf_map[schq] = TXSCH_MAP(pcifunc, 0);
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 		}
 
 		for (idx = 0; idx < req->schq[lvl]; idx++) {
@@ -2048,6 +2085,7 @@ int rvu_mbox_handler_nix_txsch_alloc(struct rvu *rvu,
 			pfvf_map[schq] = TXSCH_MAP(pcifunc, 0);
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 		}
 	}
 
@@ -2143,6 +2181,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 				continue;
 			nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
 			nix_clear_tx_xoff(rvu, blkaddr, lvl, schq);
+			nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
 		}
 	}
 	nix_clear_tx_xoff(rvu, blkaddr, NIX_TXSCH_LVL_TL1,
@@ -2181,6 +2220,7 @@ static int nix_txschq_free(struct rvu *rvu, u16 pcifunc)
 		for (schq = 0; schq < txsch->schq.max; schq++) {
 			if (TXSCH_MAP_FUNC(txsch->pfvf_map[schq]) != pcifunc)
 				continue;
+			nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
 			rvu_free_rsrc(&txsch->schq, schq);
 			txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE);
 		}
@@ -2240,6 +2280,9 @@ static int nix_txschq_free_one(struct rvu *rvu,
 	 */
 	nix_clear_tx_xoff(rvu, blkaddr, lvl, schq);
 
+	nix_reset_tx_linkcfg(rvu, blkaddr, lvl, schq);
+	nix_reset_tx_shaping(rvu, blkaddr, nixlf, lvl, schq);
+
 	/* Flush if it is a SMQ. Onus of disabling
 	 * TL2/3 queue links before SMQ flush is on user
 	 */
@@ -2249,6 +2292,8 @@ static int nix_txschq_free_one(struct rvu *rvu,
 		goto err;
 	}
 
+	nix_reset_tx_schedule(rvu, blkaddr, lvl, schq);
+
 	/* Free the resource */
 	rvu_free_rsrc(&txsch->schq, schq);
 	txsch->pfvf_map[schq] = TXSCH_MAP(0, NIX_TXSCHQ_FREE);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index adbcc087d2a8..6df6f6380b55 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -716,7 +716,8 @@ EXPORT_SYMBOL(otx2_smq_flush);
 int otx2_txsch_alloc(struct otx2_nic *pfvf)
 {
 	struct nix_txsch_alloc_req *req;
-	int lvl;
+	struct nix_txsch_alloc_rsp *rsp;
+	int lvl, schq, rc;
 
 	/* Get memory to put this msg */
 	req = otx2_mbox_alloc_msg_nix_txsch_alloc(&pfvf->mbox);
@@ -726,33 +727,68 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
 	/* Request one schq per level */
 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
 		req->schq[lvl] = 1;
+	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (rc)
+		return rc;
 
-	return otx2_sync_mbox_msg(&pfvf->mbox);
+	rsp = (struct nix_txsch_alloc_rsp *)
+	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
+	if (IS_ERR(rsp))
+		return PTR_ERR(rsp);
+
+	/* Setup transmit scheduler list */
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
+		for (schq = 0; schq < rsp->schq[lvl]; schq++)
+			pfvf->hw.txschq_list[lvl][schq] =
+				rsp->schq_list[lvl][schq];
+
+	pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
+
+	return 0;
 }
 
-int otx2_txschq_stop(struct otx2_nic *pfvf)
+void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
 {
 	struct nix_txsch_free_req *free_req;
-	int lvl, schq, err;
+	int err;
 
 	mutex_lock(&pfvf->mbox.lock);
-	/* Free the transmit schedulers */
+
 	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);
 	if (!free_req) {
 		mutex_unlock(&pfvf->mbox.lock);
-		return -ENOMEM;
+		netdev_err(pfvf->netdev,
+			   "Failed alloc txschq free req\n");
+		return;
 	}
 
-	free_req->flags = TXSCHQ_FREE_ALL;
+	free_req->schq_lvl = lvl;
+	free_req->schq = schq;
+
 	err = otx2_sync_mbox_msg(&pfvf->mbox);
+	if (err) {
+		netdev_err(pfvf->netdev,
+			   "Failed stop txschq %d at level %d\n", schq, lvl);
+	}
+
 	mutex_unlock(&pfvf->mbox.lock);
+}
+
+void otx2_txschq_stop(struct otx2_nic *pfvf)
+{
+	int lvl, schq;
+
+	/* free non QOS TLx nodes */
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
+		otx2_txschq_free_one(pfvf, lvl,
+				     pfvf->hw.txschq_list[lvl][0]);
 
 	/* Clear the txschq list */
 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
 		for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++)
 			pfvf->hw.txschq_list[lvl][schq] = 0;
 	}
-	return err;
+
 }
 
 void otx2_sqb_flush(struct otx2_nic *pfvf)
@@ -1642,21 +1678,6 @@ void mbox_handler_cgx_fec_stats(struct otx2_nic *pfvf,
 	pfvf->hw.cgx_fec_uncorr_blks += rsp->fec_uncorr_blks;
 }
 
-void mbox_handler_nix_txsch_alloc(struct otx2_nic *pf,
-				  struct nix_txsch_alloc_rsp *rsp)
-{
-	int lvl, schq;
-
-	/* Setup transmit scheduler list */
-	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
-		for (schq = 0; schq < rsp->schq[lvl]; schq++)
-			pf->hw.txschq_list[lvl][schq] =
-				rsp->schq_list[lvl][schq];
-
-	pf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
-}
-EXPORT_SYMBOL(mbox_handler_nix_txsch_alloc);
-
 void mbox_handler_npa_lf_alloc(struct otx2_nic *pfvf,
 			       struct npa_lf_alloc_rsp *rsp)
 {
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 383a6716dbb9..cbe09a33ba01 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -934,7 +934,8 @@ int otx2_config_nix(struct otx2_nic *pfvf);
 int otx2_config_nix_queues(struct otx2_nic *pfvf);
 int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool pfc_en);
 int otx2_txsch_alloc(struct otx2_nic *pfvf);
-int otx2_txschq_stop(struct otx2_nic *pfvf);
+void otx2_txschq_stop(struct otx2_nic *pfvf);
+void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq);
 void otx2_sqb_flush(struct otx2_nic *pfvf);
 int otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 		    dma_addr_t *dma);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index c9141d608342..8706bc2bcf75 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -792,10 +792,6 @@ static void otx2_process_pfaf_mbox_msg(struct otx2_nic *pf,
 	case MBOX_MSG_NIX_LF_ALLOC:
 		mbox_handler_nix_lf_alloc(pf, (struct nix_lf_alloc_rsp *)msg);
 		break;
-	case MBOX_MSG_NIX_TXSCH_ALLOC:
-		mbox_handler_nix_txsch_alloc(pf,
-					     (struct nix_txsch_alloc_rsp *)msg);
-		break;
 	case MBOX_MSG_NIX_BP_ENABLE:
 		mbox_handler_nix_bp_enable(pf, (struct nix_bp_cfg_rsp *)msg);
 		break;
@@ -1522,8 +1518,7 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
 	otx2_free_cq_res(pf);
 	otx2_ctx_disable(mbox, NIX_AQ_CTYPE_RQ, false);
 err_free_txsch:
-	if (otx2_txschq_stop(pf))
-		dev_err(pf->dev, "%s failed to stop TX schedulers\n", __func__);
+	otx2_txschq_stop(pf);
 err_free_sq_ptrs:
 	otx2_sq_free_sqbs(pf);
 err_free_rq_ptrs:
@@ -1558,15 +1553,13 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
 	struct mbox *mbox = &pf->mbox;
 	struct otx2_cq_queue *cq;
 	struct msg_req *req;
-	int qidx, err;
+	int qidx;
 
 	/* Ensure all SQE are processed */
 	otx2_sqb_flush(pf);
 
 	/* Stop transmission */
-	err = otx2_txschq_stop(pf);
-	if (err)
-		dev_err(pf->dev, "RVUPF: Failed to stop/free TX schedulers\n");
+	otx2_txschq_stop(pf);
 
 #ifdef CONFIG_DCB
 	if (pf->pfc_en)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index 5dddf067a810..ce8ffb71ff38 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -70,10 +70,6 @@ static void otx2vf_process_vfaf_mbox_msg(struct otx2_nic *vf,
 	case MBOX_MSG_NIX_LF_ALLOC:
 		mbox_handler_nix_lf_alloc(vf, (struct nix_lf_alloc_rsp *)msg);
 		break;
-	case MBOX_MSG_NIX_TXSCH_ALLOC:
-		mbox_handler_nix_txsch_alloc(vf,
-					     (struct nix_txsch_alloc_rsp *)msg);
-		break;
 	case MBOX_MSG_NIX_BP_ENABLE:
 		mbox_handler_nix_bp_enable(vf, (struct nix_bp_cfg_rsp *)msg);
 		break;
```