Message ID | dfa6ed0926e045fe7c14f0894cc0c37fee81bf9d.1697034729.git.petrm@nvidia.com (mailing list archive)
---|---
State | Accepted
Commit | 958a140d7a0afcac3c0bb0d3b262a8608f7bba16
Delegated to: | Netdev Maintainers
Series | [net-next] mlxsw: pci: Allocate skbs using GFP_KERNEL during initialization
Wed, Oct 11, 2023 at 04:39:12PM CEST, petrm@nvidia.com wrote:
>From: Ido Schimmel <idosch@nvidia.com>
>
>The driver allocates skbs during initialization and during Rx
>processing. Take advantage of the fact that the former happens in
>process context and allocate the skbs using GFP_KERNEL to decrease the
>probability of allocation failure.
>
>Tested with CONFIG_DEBUG_ATOMIC_SLEEP=y.
>
>Signed-off-by: Ido Schimmel <idosch@nvidia.com>
>Reviewed-by: Petr Machata <petrm@nvidia.com>
>Signed-off-by: Petr Machata <petrm@nvidia.com>

Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Hello:

This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Wed, 11 Oct 2023 16:39:12 +0200 you wrote:
> From: Ido Schimmel <idosch@nvidia.com>
>
> The driver allocates skbs during initialization and during Rx
> processing. Take advantage of the fact that the former happens in
> process context and allocate the skbs using GFP_KERNEL to decrease the
> probability of allocation failure.
>
> [...]

Here is the summary with links:
  - [net-next] mlxsw: pci: Allocate skbs using GFP_KERNEL during initialization
    https://git.kernel.org/netdev/net-next/c/958a140d7a0a

You are awesome, thank you!
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
index 51eea1f0529c..7fae963b2608 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
@@ -352,14 +352,15 @@ static void mlxsw_pci_wqe_frag_unmap(struct mlxsw_pci *mlxsw_pci, char *wqe,
 }
 
 static int mlxsw_pci_rdq_skb_alloc(struct mlxsw_pci *mlxsw_pci,
-				   struct mlxsw_pci_queue_elem_info *elem_info)
+				   struct mlxsw_pci_queue_elem_info *elem_info,
+				   gfp_t gfp)
 {
 	size_t buf_len = MLXSW_PORT_MAX_MTU;
 	char *wqe = elem_info->elem;
 	struct sk_buff *skb;
 	int err;
 
-	skb = netdev_alloc_skb_ip_align(NULL, buf_len);
+	skb = __netdev_alloc_skb_ip_align(NULL, buf_len, gfp);
 	if (!skb)
 		return -ENOMEM;
 
@@ -420,7 +421,7 @@ static int mlxsw_pci_rdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
 	for (i = 0; i < q->count; i++) {
 		elem_info = mlxsw_pci_queue_elem_info_producer_get(q);
 		BUG_ON(!elem_info);
-		err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+		err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info, GFP_KERNEL);
 		if (err)
 			goto rollback;
 		/* Everything is set up, ring doorbell to pass elem to HW */
@@ -640,7 +641,7 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
 	if (q->consumer_counter++ != consumer_counter_limit)
 		dev_dbg_ratelimited(&pdev->dev, "Consumer counter does not match limit in RDQ\n");
 
-	err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+	err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info, GFP_ATOMIC);
 	if (err) {
 		dev_err_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
 		goto out;