Message ID | 20240610230706.34883-4-shannon.nelson@amd.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | ionic: rework fix for doorbell miss |
On Mon, 10 Jun 2024 16:07:01 -0700 Shannon Nelson wrote:
> Instead of using the system's default workqueue,
> add a private workqueue for the device to use for
> its little jobs.

little jobs little point of having your own wq, no?
At this point of reading the series it's a bit unclear why
the wq separation is needed.
On 6/12/2024 6:08 PM, Jakub Kicinski wrote:
> On Mon, 10 Jun 2024 16:07:01 -0700 Shannon Nelson wrote:
>> Instead of using the system's default workqueue,
>> add a private workqueue for the device to use for
>> its little jobs.
>
> little jobs little point of having your own wq, no?
> At this point of reading the series it's a bit unclear why
> the wq separation is needed.

Yes, when using only a single PF or two this doesn't look so bad to be
left on the system workqueue.  But we have a couple of customers that
want to scale out to lots of VFs with multiple queues per VF, which
multiplies out to hundreds of queues getting work items.  We thought
that instead of firebombing the system workqueue with a lot of little
jobs, we would give the scheduler a chance to work with our stuff
separately, and setting it up by device seemed like an easy enough way
to partition the work.  Other options might be one ionic wq used by all
devices, or maybe a wq per PF family?  Do you have a preference, or do
you still think that the system wq is enough?

Thanks,
sln
On Thu, 13 Jun 2024 13:38:18 -0700 Nelson, Shannon wrote:
> > little jobs little point of having your own wq, no?
> > At this point of reading the series it's a bit unclear why
> > the wq separation is needed.
>
> Yes, when using only a single PF or two this doesn't look so bad to be
> left on the system workqueue.  But we have a couple of customers that
> want to scale out to lots of VFs with multiple queues per VF, which
> multiplies out to hundreds of queues getting work items.  We thought
> that instead of firebombing the system workqueue with a lot of little
> jobs, we would give the scheduler a chance to work with our stuff
> separately, and setting it up by device seemed like an easy enough way
> to partition the work.  Other options might be one ionic wq used by all
> devices, or maybe a wq per PF family?  Do you have a preference, or do
> you still think that the system wq is enough?

No, no, code is fine. I was just complaining about the commit message :)
The math for how many work items you can see in reasonable scenarios
would be helpful to see when judging the need.
On 6/13/2024 3:19 PM, Jakub Kicinski wrote:
> On Thu, 13 Jun 2024 13:38:18 -0700 Nelson, Shannon wrote:
>>> little jobs little point of having your own wq, no?
>>> At this point of reading the series it's a bit unclear why
>>> the wq separation is needed.
>>
>> Yes, when using only a single PF or two this doesn't look so bad to be
>> left on the system workqueue.  But we have a couple of customers that
>> want to scale out to lots of VFs with multiple queues per VF, which
>> multiplies out to hundreds of queues getting work items.  We thought
>> that instead of firebombing the system workqueue with a lot of little
>> jobs, we would give the scheduler a chance to work with our stuff
>> separately, and setting it up by device seemed like an easy enough way
>> to partition the work.  Other options might be one ionic wq used by all
>> devices, or maybe a wq per PF family?  Do you have a preference, or do
>> you still think that the system wq is enough?
>
> No, no, code is fine. I was just complaining about the commit message :)
> The math for how many work items you can see in reasonable scenarios
> would be helpful to see when judging the need.

Sure, I can add to the description - thanks.

sln
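For readers not steeped in kernel workqueues: the mechanism discussed above is a common driver pattern, allocating a private unbound workqueue per device at setup time and queuing deferred work to it instead of the shared system workqueue. A minimal sketch of that pattern follows, using a hypothetical `foo_device` structure and function names (the real calls for ionic are in the attached patch):

```c
/* Sketch of a per-device private workqueue, assuming a
 * hypothetical foo_device driver structure.  Mirrors the
 * alloc_workqueue()/queue_work()/destroy_workqueue() usage
 * in the patch; names here are illustrative only.
 */
#include <linux/workqueue.h>
#include <linux/device.h>

struct foo_device {
	struct device *dev;
	struct workqueue_struct *wq;	/* private, per-device */
	struct work_struct deferred;	/* the "little jobs" */
};

static int foo_wq_init(struct foo_device *fdev)
{
	/* WQ_UNBOUND: work items are not tied to the submitting
	 * CPU, so the scheduler can place them freely.
	 */
	fdev->wq = alloc_workqueue("%s-wq", WQ_UNBOUND, 0,
				   dev_name(fdev->dev));
	if (!fdev->wq)
		return -ENOMEM;

	return 0;
}

static void foo_kick_deferred(struct foo_device *fdev)
{
	/* Queue on the private wq rather than schedule_work(),
	 * which would land on the shared system workqueue.
	 */
	queue_work(fdev->wq, &fdev->deferred);
}

static void foo_wq_teardown(struct foo_device *fdev)
{
	/* Drains any still-pending work before freeing the wq. */
	destroy_workqueue(fdev->wq);
}
```

The trade-off being debated in the thread is exactly this: with one PF the system workqueue is adequate, but hundreds of queues across many VFs can flood it, and a per-device queue partitions that load.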
diff --git a/drivers/net/ethernet/pensando/ionic/ionic.h b/drivers/net/ethernet/pensando/ionic/ionic.h
index 438172cfb170..df29c977a702 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic.h
@@ -47,6 +47,7 @@ struct ionic {
 	struct ionic_dev_bar bars[IONIC_BARS_MAX];
 	unsigned int num_bars;
 	struct ionic_identity ident;
+	struct workqueue_struct *wq;
 	struct ionic_lif *lif;
 	unsigned int nnqs_per_lif;
 	unsigned int neqs_per_lif;
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
index 89b4310f244c..342863fd0b16 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
@@ -43,11 +43,11 @@ static void ionic_watchdog_cb(struct timer_list *t)
 		work->type = IONIC_DW_TYPE_RX_MODE;
 		netdev_dbg(lif->netdev, "deferred: rx_mode\n");
-		ionic_lif_deferred_enqueue(&lif->deferred, work);
+		ionic_lif_deferred_enqueue(lif, work);
 	}
 }

-static void ionic_watchdog_init(struct ionic *ionic)
+static int ionic_watchdog_init(struct ionic *ionic)
 {
 	struct ionic_dev *idev = &ionic->idev;

@@ -63,6 +63,15 @@ static void ionic_watchdog_init(struct ionic *ionic)
 	idev->fw_status_ready = true;
 	idev->fw_generation = IONIC_FW_STS_F_GENERATION &
 			      ioread8(&idev->dev_info_regs->fw_status);
+
+	ionic->wq = alloc_workqueue("%s-wq", WQ_UNBOUND, 0,
+				    dev_name(ionic->dev));
+	if (!ionic->wq) {
+		dev_err(ionic->dev, "alloc_workqueue failed");
+		return -ENOMEM;
+	}
+
+	return 0;
 }

 void ionic_init_devinfo(struct ionic *ionic)
@@ -94,6 +103,7 @@ int ionic_dev_setup(struct ionic *ionic)
 	struct device *dev = ionic->dev;
 	int size;
 	u32 sig;
+	int err;

 	/* BAR0: dev_cmd and interrupts */
 	if (num_bars < 1) {
@@ -129,7 +139,9 @@ int ionic_dev_setup(struct ionic *ionic)
 		return -EFAULT;
 	}

-	ionic_watchdog_init(ionic);
+	err = ionic_watchdog_init(ionic);
+	if (err)
+		return err;

 	idev->db_pages = bar->vaddr;
 	idev->phy_db_pages = bar->bus_addr;
@@ -161,6 +173,7 @@ void ionic_dev_teardown(struct ionic *ionic)
 	idev->phy_cmb_pages = 0;
 	idev->cmb_npages = 0;

+	destroy_workqueue(ionic->wq);
 	mutex_destroy(&idev->cmb_inuse_lock);
 }

@@ -273,7 +286,7 @@ int ionic_heartbeat_check(struct ionic *ionic)
 			if (work) {
 				work->type = IONIC_DW_TYPE_LIF_RESET;
 				work->fw_status = fw_status_ready;
-				ionic_lif_deferred_enqueue(&lif->deferred, work);
+				ionic_lif_deferred_enqueue(lif, work);
 			}
 		}
 	}
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index ff6a7e86254c..af269a198d5d 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -126,13 +126,13 @@ static void ionic_lif_deferred_work(struct work_struct *work)
 	} while (true);
 }

-void ionic_lif_deferred_enqueue(struct ionic_deferred *def,
+void ionic_lif_deferred_enqueue(struct ionic_lif *lif,
 				struct ionic_deferred_work *work)
 {
-	spin_lock_bh(&def->lock);
-	list_add_tail(&work->list, &def->list);
-	spin_unlock_bh(&def->lock);
-	schedule_work(&def->work);
+	spin_lock_bh(&lif->deferred.lock);
+	list_add_tail(&work->list, &lif->deferred.list);
+	spin_unlock_bh(&lif->deferred.lock);
+	queue_work(lif->ionic->wq, &lif->deferred.work);
 }

 static void ionic_link_status_check(struct ionic_lif *lif)
@@ -207,7 +207,7 @@ void ionic_link_status_check_request(struct ionic_lif *lif, bool can_sleep)
 		}

 		work->type = IONIC_DW_TYPE_LINK_STATUS;
-		ionic_lif_deferred_enqueue(&lif->deferred, work);
+		ionic_lif_deferred_enqueue(lif, work);
 	} else {
 		ionic_link_status_check(lif);
 	}
@@ -1391,7 +1391,7 @@ static void ionic_ndo_set_rx_mode(struct net_device *netdev)
 	}

 	work->type = IONIC_DW_TYPE_RX_MODE;
 	netdev_dbg(lif->netdev, "deferred: rx_mode\n");
-	ionic_lif_deferred_enqueue(&lif->deferred, work);
+	ionic_lif_deferred_enqueue(lif, work);
 }

 static __le64 ionic_netdev_features_to_nic(netdev_features_t features)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
index a029206c0bc8..e4a5ae70793e 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h
@@ -331,7 +331,7 @@ static inline bool ionic_txq_hwstamp_enabled(struct ionic_queue *q)
 void ionic_link_status_check_request(struct ionic_lif *lif, bool can_sleep);
 void ionic_get_stats64(struct net_device *netdev,
 		       struct rtnl_link_stats64 *ns);
-void ionic_lif_deferred_enqueue(struct ionic_deferred *def,
+void ionic_lif_deferred_enqueue(struct ionic_lif *lif,
 				struct ionic_deferred_work *work);
 int ionic_lif_alloc(struct ionic *ionic);
 int ionic_lif_init(struct ionic_lif *lif);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
index c1259324b0be..0f817c3f92d8 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
@@ -287,7 +287,7 @@ bool ionic_notifyq_service(struct ionic_cq *cq)
 			clear_bit(IONIC_LIF_F_FW_STOPPING, lif->state);
 		} else {
 			work->type = IONIC_DW_TYPE_LIF_RESET;
-			ionic_lif_deferred_enqueue(&lif->deferred, work);
+			ionic_lif_deferred_enqueue(lif, work);
 		}
 	}
 	break;