
[net-next] net: mana: Add support for jumbo frame

Message ID 1679261264-26375-1-git-send-email-haiyangz@microsoft.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series [net-next] net: mana: Add support for jumbo frame

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 18 this patch: 18
netdev/cc_maintainers warning 7 maintainers not CCed: linux-rdma@vger.kernel.org daniel@iogearbox.net john.fastabend@gmail.com bpf@vger.kernel.org ast@kernel.org sharmaajay@microsoft.com hawk@kernel.org
netdev/build_clang success Errors and warnings before: 18 this patch: 18
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 18 this patch: 18
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 470 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Haiyang Zhang March 19, 2023, 9:27 p.m. UTC
During probe, get the hardware allowed max MTU by querying the device
configuration. Users can select MTU up to the device limit. Also,
when XDP is in use, we currently limit the buffer size to one page.

Updated RX data path to allocate and use RX queue DMA buffers with
proper size based on the MTU setting.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
---
 .../net/ethernet/microsoft/mana/mana_bpf.c    |  22 +-
 drivers/net/ethernet/microsoft/mana/mana_en.c | 229 ++++++++++++------
 include/net/mana/gdma.h                       |   4 +
 include/net/mana/mana.h                       |  18 +-
 4 files changed, 183 insertions(+), 90 deletions(-)

Comments

Francois Romieu March 19, 2023, 10:46 p.m. UTC | #1
Haiyang Zhang <haiyangz@microsoft.com> :
[...]
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index 492474b4d8aa..07738b7e85f2 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> @@ -427,6 +427,34 @@ static u16 mana_select_queue(struct net_device *ndev, struct sk_buff *skb,
>  	return txq;
>  }
>  
> +static int mana_change_mtu(struct net_device *ndev, int new_mtu)
> +{
> +	unsigned int old_mtu = ndev->mtu;
> +	int err, err2;
> +
> +	err = mana_detach(ndev, false);
> +	if (err) {
> +		netdev_err(ndev, "mana_detach failed: %d\n", err);
> +		return err;
> +	}
> +
> +	ndev->mtu = new_mtu;
> +
> +	err = mana_attach(ndev);
> +	if (!err)
> +		return 0;
> +
> +	netdev_err(ndev, "mana_attach failed: %d\n", err);
> +
> +	/* Try to roll it back to the old configuration. */
> +	ndev->mtu = old_mtu;
> +	err2 = mana_attach(ndev);
> +	if (err2)
> +		netdev_err(ndev, "mana re-attach failed: %d\n", err2);
> +
> +	return err;
> +}

I do not see where the driver could depend on the MTU. Even if it fails,
a single call to mana_change_mtu should thus never wreck the old working
state/configuration.

Stated differently, the detach/attach implementation is simple but
it makes the driver less reliable than it could be.

No ?
Haiyang Zhang March 20, 2023, 1:47 a.m. UTC | #2
> -----Original Message-----
> From: Francois Romieu <romieu@fr.zoreil.com>
> Sent: Sunday, March 19, 2023 6:47 PM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; Dexuan Cui
> <decui@microsoft.com>; KY Srinivasan <kys@microsoft.com>; Paul Rosswurm
> <paulros@microsoft.com>; olaf@aepfle.de; vkuznets@redhat.com;
> davem@davemloft.net; wei.liu@kernel.org; edumazet@google.com;
> kuba@kernel.org; pabeni@redhat.com; leon@kernel.org; Long Li
> <longli@microsoft.com>; ssengar@linux.microsoft.com; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add support for jumbo frame
> 
> Haiyang Zhang <haiyangz@microsoft.com> :
> [...]
> > diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c
> b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > index 492474b4d8aa..07738b7e85f2 100644
> > --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > @@ -427,6 +427,34 @@ static u16 mana_select_queue(struct net_device
> *ndev, struct sk_buff *skb,
> >       return txq;
> >  }
> >
> > +static int mana_change_mtu(struct net_device *ndev, int new_mtu)
> > +{
> > +     unsigned int old_mtu = ndev->mtu;
> > +     int err, err2;
> > +
> > +     err = mana_detach(ndev, false);
> > +     if (err) {
> > +             netdev_err(ndev, "mana_detach failed: %d\n", err);
> > +             return err;
> > +     }
> > +
> > +     ndev->mtu = new_mtu;
> > +
> > +     err = mana_attach(ndev);
> > +     if (!err)
> > +             return 0;
> > +
> > +     netdev_err(ndev, "mana_attach failed: %d\n", err);
> > +
> > +     /* Try to roll it back to the old configuration. */
> > +     ndev->mtu = old_mtu;
> > +     err2 = mana_attach(ndev);
> > +     if (err2)
> > +             netdev_err(ndev, "mana re-attach failed: %d\n", err2);
> > +
> > +     return err;
> > +}
> 
> I do not see where the driver could depend on the MTU. Even if it fails,
> a single call to mana_change_mtu should thus never wreck the old working
> state/configuration.
> 
> Stated differently, the detach/attach implementation is simple but
> it makes the driver less reliable than it could be.
> 
> No ?

No, it doesn't make the driver less reliable. To safely remove and reallocate 
DMA buffers with a different size, we have to stop the traffic, so mana_detach() 
is called. We also call mana_detach() in mana_close(), so the process in 
mana_change_mtu() is no riskier than an ifdown/ifup of the NIC. 

In some rare cases, if the system memory is running really low, the bigger 
buffer allocation may fail, so we retry with the previous MTU. I don't expect 
it to fail again, but we still check and log the error code for completeness and 
debugging.

Thanks,
- Haiyang
Leon Romanovsky March 20, 2023, 7:42 a.m. UTC | #3
On Sun, Mar 19, 2023 at 02:27:44PM -0700, Haiyang Zhang wrote:
> During probe, get the hardware allowed max MTU by querying the device
> configuration. Users can select MTU up to the device limit. Also,
> when XDP is in use, we currently limit the buffer size to one page.
> 
> Updated RX data path to allocate and use RX queue DMA buffers with
> proper size based on the MTU setting.
> 
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
>  .../net/ethernet/microsoft/mana/mana_bpf.c    |  22 +-
>  drivers/net/ethernet/microsoft/mana/mana_en.c | 229 ++++++++++++------
>  include/net/mana/gdma.h                       |   4 +
>  include/net/mana/mana.h                       |  18 +-
>  4 files changed, 183 insertions(+), 90 deletions(-)

<...>

> +static int mana_change_mtu(struct net_device *ndev, int new_mtu)
> +{
> +	unsigned int old_mtu = ndev->mtu;
> +	int err, err2;
> +
> +	err = mana_detach(ndev, false);
> +	if (err) {
> +		netdev_err(ndev, "mana_detach failed: %d\n", err);
> +		return err;
> +	}
> +
> +	ndev->mtu = new_mtu;
> +
> +	err = mana_attach(ndev);
> +	if (!err)
> +		return 0;
> +
> +	netdev_err(ndev, "mana_attach failed: %d\n", err);
> +
> +	/* Try to roll it back to the old configuration. */
> +	ndev->mtu = old_mtu;
> +	err2 = mana_attach(ndev);

I second Francois and agree with him that this is very questionable.
If mana_attach() failed on the first try, you should bail out and not
retry in the hope that it will pass.

Thanks

> +	if (err2)
> +		netdev_err(ndev, "mana re-attach failed: %d\n", err2);
> +
> +	return err;
Yunsheng Lin March 20, 2023, 9:42 a.m. UTC | #4
On 2023/3/20 5:27, Haiyang Zhang wrote:
> During probe, get the hardware allowed max MTU by querying the device
> configuration. Users can select MTU up to the device limit. Also,
> when XDP is in use, we currently limit the buffer size to one page.
> 
> Updated RX data path to allocate and use RX queue DMA buffers with
> proper size based on the MTU setting.

The change in this patch seems better split into a more reviewable
patchset. Perhaps refactor the RX queue DMA buffer allocation to handle
different sizes first, then add support for the jumbo frame.

> 
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
>  .../net/ethernet/microsoft/mana/mana_bpf.c    |  22 +-
>  drivers/net/ethernet/microsoft/mana/mana_en.c | 229 ++++++++++++------
>  include/net/mana/gdma.h                       |   4 +
>  include/net/mana/mana.h                       |  18 +-
>  4 files changed, 183 insertions(+), 90 deletions(-)
> 
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> index 3caea631229c..23b1521c0df9 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> @@ -133,12 +133,6 @@ u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
>  	return act;
>  }
>  
> -static unsigned int mana_xdp_fraglen(unsigned int len)
> -{
> -	return SKB_DATA_ALIGN(len) +
> -	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> -}
> -
>  struct bpf_prog *mana_xdp_get(struct mana_port_context *apc)
>  {
>  	ASSERT_RTNL();
> @@ -179,17 +173,18 @@ static int mana_xdp_set(struct net_device *ndev, struct bpf_prog *prog,
>  {
>  	struct mana_port_context *apc = netdev_priv(ndev);
>  	struct bpf_prog *old_prog;
> -	int buf_max;
> +	struct gdma_context *gc;
> +
> +	gc = apc->ac->gdma_dev->gdma_context;
>  
>  	old_prog = mana_xdp_get(apc);
>  
>  	if (!old_prog && !prog)
>  		return 0;
>  
> -	buf_max = XDP_PACKET_HEADROOM + mana_xdp_fraglen(ndev->mtu + ETH_HLEN);
> -	if (prog && buf_max > PAGE_SIZE) {
> -		netdev_err(ndev, "XDP: mtu:%u too large, buf_max:%u\n",
> -			   ndev->mtu, buf_max);
> +	if (prog && ndev->mtu > MANA_XDP_MTU_MAX) {
> +		netdev_err(ndev, "XDP: mtu:%u too large, mtu_max:%lu\n",
> +			   ndev->mtu, MANA_XDP_MTU_MAX);
>  		NL_SET_ERR_MSG_MOD(extack, "XDP: mtu too large");
>  
>  		return -EOPNOTSUPP;
> @@ -206,6 +201,11 @@ static int mana_xdp_set(struct net_device *ndev, struct bpf_prog *prog,
>  	if (apc->port_is_up)
>  		mana_chn_setxdp(apc, prog);
>  
> +	if (prog)
> +		ndev->max_mtu = MANA_XDP_MTU_MAX;
> +	else
> +		ndev->max_mtu = gc->adapter_mtu - ETH_HLEN;
> +
>  	return 0;
>  }
>  
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index 492474b4d8aa..07738b7e85f2 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> @@ -427,6 +427,34 @@ static u16 mana_select_queue(struct net_device *ndev, struct sk_buff *skb,
>  	return txq;
>  }
>  
> +static int mana_change_mtu(struct net_device *ndev, int new_mtu)
> +{
> +	unsigned int old_mtu = ndev->mtu;
> +	int err, err2;
> +
> +	err = mana_detach(ndev, false);
> +	if (err) {
> +		netdev_err(ndev, "mana_detach failed: %d\n", err);
> +		return err;
> +	}
> +
> +	ndev->mtu = new_mtu;
> +
> +	err = mana_attach(ndev);
> +	if (!err)
> +		return 0;
> +
> +	netdev_err(ndev, "mana_attach failed: %d\n", err);
> +
> +	/* Try to roll it back to the old configuration. */
> +	ndev->mtu = old_mtu;
> +	err2 = mana_attach(ndev);
> +	if (err2)
> +		netdev_err(ndev, "mana re-attach failed: %d\n", err2);
> +
> +	return err;
> +}
> +
>  static const struct net_device_ops mana_devops = {
>  	.ndo_open		= mana_open,
>  	.ndo_stop		= mana_close,
> @@ -436,6 +464,7 @@ static const struct net_device_ops mana_devops = {
>  	.ndo_get_stats64	= mana_get_stats64,
>  	.ndo_bpf		= mana_bpf,
>  	.ndo_xdp_xmit		= mana_xdp_xmit,
> +	.ndo_change_mtu		= mana_change_mtu,
>  };
>  
>  static void mana_cleanup_port_context(struct mana_port_context *apc)
> @@ -625,6 +654,9 @@ static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
>  
>  	mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_DEV_CONFIG,
>  			     sizeof(req), sizeof(resp));
> +
> +	req.hdr.resp.msg_version = GDMA_MESSAGE_V2;

hdr->req.msg_version and hdr->resp.msg_version are both set to
GDMA_MESSAGE_V1 in mana_gd_init_req_hdr(). Is there any reason
why hdr->req.msg_version is not set to GDMA_MESSAGE_V2?
Would initializing req.hdr.resp.msg_version to GDMA_MESSAGE_V2
in mana_gd_init_req_hdr(), without having to set it in
mana_query_device_cfg(), affect other users?


> +
>  	req.proto_major_ver = proto_major_ver;
>  	req.proto_minor_ver = proto_minor_ver;
>  	req.proto_micro_ver = proto_micro_ver;
> @@ -647,6 +679,11 @@ static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
>  
>  	*max_num_vports = resp.max_num_vports;
>  
> +	if (resp.hdr.response.msg_version == GDMA_MESSAGE_V2)

It seems the driver is setting resp.hdr.response.msg_version to
GDMA_MESSAGE_V2 above and doing the check here. Does older
firmware reset resp.hdr.response.msg_version to GDMA_MESSAGE_V1
in order to keep compatibility between firmware and driver?

> +		gc->adapter_mtu = resp.adapter_mtu;
> +	else
> +		gc->adapter_mtu = ETH_FRAME_LEN;
> +
>  	return 0;
>  }
>  
> @@ -1185,10 +1222,10 @@ static void mana_post_pkt_rxq(struct mana_rxq *rxq)
>  	WARN_ON_ONCE(recv_buf_oob->wqe_inf.wqe_size_in_bu != 1);
>  }
>  
> -static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
> -				      struct xdp_buff *xdp)
> +static struct sk_buff *mana_build_skb(struct mana_rxq *rxq, void *buf_va,
> +				      uint pkt_len, struct xdp_buff *xdp)
>  {
> -	struct sk_buff *skb = build_skb(buf_va, PAGE_SIZE);
> +	struct sk_buff *skb = napi_build_skb(buf_va, rxq->alloc_size);

Changing build_skb() to napi_build_skb() seems like an optimization
unrelated to jumbo frame support; perhaps that belongs in a separate patch?

>  
>  	if (!skb)
>  		return NULL;
> @@ -1196,11 +1233,12 @@ static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
>  	if (xdp->data_hard_start) {
>  		skb_reserve(skb, xdp->data - xdp->data_hard_start);
>  		skb_put(skb, xdp->data_end - xdp->data);
> -	} else {
> -		skb_reserve(skb, XDP_PACKET_HEADROOM);
> -		skb_put(skb, pkt_len);
> +		return skb;
>  	}
>  
> +	skb_reserve(skb, rxq->headroom);
> +	skb_put(skb, pkt_len);
> +
>  	return skb;
>  }
>  
> @@ -1233,7 +1271,7 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
>  	if (act != XDP_PASS && act != XDP_TX)
>  		goto drop_xdp;
>  
> -	skb = mana_build_skb(buf_va, pkt_len, &xdp);
> +	skb = mana_build_skb(rxq, buf_va, pkt_len, &xdp);
>  
>  	if (!skb)
>  		goto drop;
> @@ -1282,14 +1320,72 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
>  	u64_stats_update_end(&rx_stats->syncp);
>  
>  drop:
> -	WARN_ON_ONCE(rxq->xdp_save_page);
> -	rxq->xdp_save_page = virt_to_page(buf_va);
> +	WARN_ON_ONCE(rxq->xdp_save_va);
> +	/* Save for reuse */
> +	rxq->xdp_save_va = buf_va;
>  
>  	++ndev->stats.rx_dropped;
>  
>  	return;
>  }
>
Francois Romieu March 20, 2023, 11:33 a.m. UTC | #5
Haiyang Zhang <haiyangz@microsoft.com> :
> > From: Francois Romieu <romieu@fr.zoreil.com>
[...]
> > I do not see where the driver could depend on the MTU. Even if it fails,
> > a single call to mana_change_mtu should thus never wreck the old working
> > state/configuration.
> > 
> > Stated differently, the detach/attach implementation is simple but
> > it makes the driver less reliable than it could be.
> > 
> > No ?
> 
> No, it doesn't make the driver less reliable. To safely remove and reallocate 
> DMA buffers with different size, we have to stop the traffic. So, mana_detach() 
> is called. We also call mana_detach() in mana_close(). So the process in 
> mana_change_mtu() is no more risky than ifdown/ifup of the NIC. 
> 
> In some rare cases, if the system memory is running really low, the bigger 
> buffer allocation may fail, so we re-try with the previous MTU. I don't expect 
> it to fail again. But we still check & log the error code for completeness and 
> debugging.

In an ideal world, I would expect change_mtu() to allocate the new resources,
bail out if any allocation fails, stop the traffic, swap the old and new
resources, then restart the traffic and release the old resources.
This way the device is never left in a failed state.
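
A minimal sketch of that flow, assuming hypothetical helpers -- struct
mana_rxq_resources, mana_alloc_rxq_resources(), mana_stop_traffic(),
mana_swap_rxq_resources(), mana_start_traffic() and mana_free_rxq_resources()
are illustrative names only, not functions in the current mana driver:

static int mana_change_mtu_prealloc(struct net_device *ndev, int new_mtu)
{
	struct mana_rxq_resources *new_res, *old_res;
	int err;

	/* 1. Allocate everything the new MTU needs while the old
	 *    configuration keeps carrying traffic; bail out on failure.
	 */
	new_res = mana_alloc_rxq_resources(ndev, new_mtu);
	if (IS_ERR(new_res))
		return PTR_ERR(new_res);

	/* 2. Stop traffic only for the short swap window. */
	err = mana_stop_traffic(ndev);
	if (err) {
		mana_free_rxq_resources(new_res);
		return err;
	}

	old_res = mana_swap_rxq_resources(ndev, new_res);
	ndev->mtu = new_mtu;

	/* 3. Restart traffic on the new buffers, then release the old ones. */
	mana_start_traffic(ndev);
	mana_free_rxq_resources(old_res);

	return 0;
}

With this shape, an allocation failure is reported before the device is ever
taken down, so the old working configuration stays intact.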
Haiyang Zhang March 20, 2023, 3:13 p.m. UTC | #6
> -----Original Message-----
> From: Francois Romieu <romieu@fr.zoreil.com>
> Sent: Monday, March 20, 2023 7:33 AM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; Dexuan Cui
> <decui@microsoft.com>; KY Srinivasan <kys@microsoft.com>; Paul Rosswurm
> <paulros@microsoft.com>; olaf@aepfle.de; vkuznets@redhat.com;
> davem@davemloft.net; wei.liu@kernel.org; edumazet@google.com;
> kuba@kernel.org; pabeni@redhat.com; leon@kernel.org; Long Li
> <longli@microsoft.com>; ssengar@linux.microsoft.com; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add support for jumbo frame
> 
> Haiyang Zhang <haiyangz@microsoft.com> :
> > > From: Francois Romieu <romieu@fr.zoreil.com>
> [...]
> > > I do not see where the driver could depend on the MTU. Even if it fails,
> > > a single call to mana_change_mtu should thus never wreck the old working
> > > state/configuration.
> > >
> > > Stated differently, the detach/attach implementation is simple but
> > > it makes the driver less reliable than it could be.
> > >
> > > No ?
> >
> > No, it doesn't make the driver less reliable. To safely remove and reallocate
> > DMA buffers with different size, we have to stop the traffic. So,
> mana_detach()
> > is called. We also call mana_detach() in mana_close(). So the process in
> > mana_change_mtu() is no more risky than ifdown/ifup of the NIC.
> >
> > In some rare cases, if the system memory is running really low, the bigger
> > buffer allocation may fail, so we re-try with the previous MTU. I don't expect
> > it to fail again. But we still check & log the error code for completeness and
> > debugging.
> 
> In a ideal world, I would expect change_mtu() to allocate the new resources,
> bail out if some allocation fails, stop the traffic, swap the old and new
> resources, then restart the traffic and release the old resources.
> This way the device is never left in a failed state.

Thanks for the idea -- I will look into that. But the real-world implementation may
be less than ideal :)

- Haiyang
Haiyang Zhang March 20, 2023, 3:14 p.m. UTC | #7
> -----Original Message-----
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Monday, March 20, 2023 3:42 AM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; Dexuan Cui
> <decui@microsoft.com>; KY Srinivasan <kys@microsoft.com>; Paul Rosswurm
> <paulros@microsoft.com>; olaf@aepfle.de; vkuznets@redhat.com;
> davem@davemloft.net; wei.liu@kernel.org; edumazet@google.com;
> kuba@kernel.org; pabeni@redhat.com; Long Li <longli@microsoft.com>;
> ssengar@linux.microsoft.com; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add support for jumbo frame
> 
> On Sun, Mar 19, 2023 at 02:27:44PM -0700, Haiyang Zhang wrote:
> > During probe, get the hardware allowed max MTU by querying the device
> > configuration. Users can select MTU up to the device limit. Also,
> > when XDP is in use, we currently limit the buffer size to one page.
> >
> > Updated RX data path to allocate and use RX queue DMA buffers with
> > proper size based on the MTU setting.
> >
> > Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> >  .../net/ethernet/microsoft/mana/mana_bpf.c    |  22 +-
> >  drivers/net/ethernet/microsoft/mana/mana_en.c | 229 ++++++++++++----
> --
> >  include/net/mana/gdma.h                       |   4 +
> >  include/net/mana/mana.h                       |  18 +-
> >  4 files changed, 183 insertions(+), 90 deletions(-)
> 
> <...>
> 
> > +static int mana_change_mtu(struct net_device *ndev, int new_mtu)
> > +{
> > +	unsigned int old_mtu = ndev->mtu;
> > +	int err, err2;
> > +
> > +	err = mana_detach(ndev, false);
> > +	if (err) {
> > +		netdev_err(ndev, "mana_detach failed: %d\n", err);
> > +		return err;
> > +	}
> > +
> > +	ndev->mtu = new_mtu;
> > +
> > +	err = mana_attach(ndev);
> > +	if (!err)
> > +		return 0;
> > +
> > +	netdev_err(ndev, "mana_attach failed: %d\n", err);
> > +
> > +	/* Try to roll it back to the old configuration. */
> > +	ndev->mtu = old_mtu;
> > +	err2 = mana_attach(ndev);
> 
> I second to Francois and agree with him that it is very questionable.
> If mana_attach() failed for first try, you should bail out and not
> retry with some hope that it will pass.
> 

Will consider. Thanks.
Haiyang Zhang March 20, 2023, 3:21 p.m. UTC | #8
> -----Original Message-----
> From: Yunsheng Lin <linyunsheng@huawei.com>
> Sent: Monday, March 20, 2023 5:42 AM
> To: Haiyang Zhang <haiyangz@microsoft.com>; linux-hyperv@vger.kernel.org;
> netdev@vger.kernel.org
> Cc: Dexuan Cui <decui@microsoft.com>; KY Srinivasan <kys@microsoft.com>;
> Paul Rosswurm <paulros@microsoft.com>; olaf@aepfle.de;
> vkuznets@redhat.com; davem@davemloft.net; wei.liu@kernel.org;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
> leon@kernel.org; Long Li <longli@microsoft.com>;
> ssengar@linux.microsoft.com; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH net-next] net: mana: Add support for jumbo frame
> 
> On 2023/3/20 5:27, Haiyang Zhang wrote:
> > During probe, get the hardware allowed max MTU by querying the device
> > configuration. Users can select MTU up to the device limit. Also,
> > when XDP is in use, we currently limit the buffer size to one page.
> >
> > Updated RX data path to allocate and use RX queue DMA buffers with
> > proper size based on the MTU setting.
> 
> The change in this patch seems better to splitted into more reviewable
> patchset. Perhaps refactor the RX queue DMA buffers allocation to handle
> different size first, then add support for the jumbo frame.

Will consider.

> 
> >
> > Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> >  .../net/ethernet/microsoft/mana/mana_bpf.c    |  22 +-
> >  drivers/net/ethernet/microsoft/mana/mana_en.c | 229 ++++++++++++----
> --
> >  include/net/mana/gdma.h                       |   4 +
> >  include/net/mana/mana.h                       |  18 +-
> >  4 files changed, 183 insertions(+), 90 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> > index 3caea631229c..23b1521c0df9 100644
> > --- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> > +++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
> > @@ -133,12 +133,6 @@ u32 mana_run_xdp(struct net_device *ndev,
> struct mana_rxq *rxq,
> >  	return act;
> >  }
> >
> > -static unsigned int mana_xdp_fraglen(unsigned int len)
> > -{
> > -	return SKB_DATA_ALIGN(len) +
> > -	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > -}
> > -
> >  struct bpf_prog *mana_xdp_get(struct mana_port_context *apc)
> >  {
> >  	ASSERT_RTNL();
> > @@ -179,17 +173,18 @@ static int mana_xdp_set(struct net_device *ndev,
> struct bpf_prog *prog,
> >  {
> >  	struct mana_port_context *apc = netdev_priv(ndev);
> >  	struct bpf_prog *old_prog;
> > -	int buf_max;
> > +	struct gdma_context *gc;
> > +
> > +	gc = apc->ac->gdma_dev->gdma_context;
> >
> >  	old_prog = mana_xdp_get(apc);
> >
> >  	if (!old_prog && !prog)
> >  		return 0;
> >
> > -	buf_max = XDP_PACKET_HEADROOM + mana_xdp_fraglen(ndev-
> >mtu + ETH_HLEN);
> > -	if (prog && buf_max > PAGE_SIZE) {
> > -		netdev_err(ndev, "XDP: mtu:%u too large, buf_max:%u\n",
> > -			   ndev->mtu, buf_max);
> > +	if (prog && ndev->mtu > MANA_XDP_MTU_MAX) {
> > +		netdev_err(ndev, "XDP: mtu:%u too large, mtu_max:%lu\n",
> > +			   ndev->mtu, MANA_XDP_MTU_MAX);
> >  		NL_SET_ERR_MSG_MOD(extack, "XDP: mtu too large");
> >
> >  		return -EOPNOTSUPP;
> > @@ -206,6 +201,11 @@ static int mana_xdp_set(struct net_device *ndev,
> struct bpf_prog *prog,
> >  	if (apc->port_is_up)
> >  		mana_chn_setxdp(apc, prog);
> >
> > +	if (prog)
> > +		ndev->max_mtu = MANA_XDP_MTU_MAX;
> > +	else
> > +		ndev->max_mtu = gc->adapter_mtu - ETH_HLEN;
> > +
> >  	return 0;
> >  }
> >
> > diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c
> b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > index 492474b4d8aa..07738b7e85f2 100644
> > --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > @@ -427,6 +427,34 @@ static u16 mana_select_queue(struct net_device
> *ndev, struct sk_buff *skb,
> >  	return txq;
> >  }
> >
> > +static int mana_change_mtu(struct net_device *ndev, int new_mtu)
> > +{
> > +	unsigned int old_mtu = ndev->mtu;
> > +	int err, err2;
> > +
> > +	err = mana_detach(ndev, false);
> > +	if (err) {
> > +		netdev_err(ndev, "mana_detach failed: %d\n", err);
> > +		return err;
> > +	}
> > +
> > +	ndev->mtu = new_mtu;
> > +
> > +	err = mana_attach(ndev);
> > +	if (!err)
> > +		return 0;
> > +
> > +	netdev_err(ndev, "mana_attach failed: %d\n", err);
> > +
> > +	/* Try to roll it back to the old configuration. */
> > +	ndev->mtu = old_mtu;
> > +	err2 = mana_attach(ndev);
> > +	if (err2)
> > +		netdev_err(ndev, "mana re-attach failed: %d\n", err2);
> > +
> > +	return err;
> > +}
> > +
> >  static const struct net_device_ops mana_devops = {
> >  	.ndo_open		= mana_open,
> >  	.ndo_stop		= mana_close,
> > @@ -436,6 +464,7 @@ static const struct net_device_ops mana_devops = {
> >  	.ndo_get_stats64	= mana_get_stats64,
> >  	.ndo_bpf		= mana_bpf,
> >  	.ndo_xdp_xmit		= mana_xdp_xmit,
> > +	.ndo_change_mtu		= mana_change_mtu,
> >  };
> >
> >  static void mana_cleanup_port_context(struct mana_port_context *apc)
> > @@ -625,6 +654,9 @@ static int mana_query_device_cfg(struct
> mana_context *ac, u32 proto_major_ver,
> >
> >  	mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_DEV_CONFIG,
> >  			     sizeof(req), sizeof(resp));
> > +
> > +	req.hdr.resp.msg_version = GDMA_MESSAGE_V2;
> 
> hdr->req.msg_version and hdr->resp.msg_version are both set to
> GDMA_MESSAGE_V1 in mana_gd_init_req_hdr(), is there any reason
> why hdr->req.msg_version is not set to GDMA_MESSAGE_V2?

The request version is still V1 in our hardware.

> Does initializing req.hdr.resp.msg_version to GDMA_MESSAGE_V2
> in mana_gd_init_req_hdr() without reset it to GDMA_MESSAGE_V2
> in mana_query_device_cfg() affect other user?

Yes, other users still need V1. So only this message's response version is set 
to V2.

> 
> 
> > +
> >  	req.proto_major_ver = proto_major_ver;
> >  	req.proto_minor_ver = proto_minor_ver;
> >  	req.proto_micro_ver = proto_micro_ver;
> > @@ -647,6 +679,11 @@ static int mana_query_device_cfg(struct
> mana_context *ac, u32 proto_major_ver,
> >
> >  	*max_num_vports = resp.max_num_vports;
> >
> > +	if (resp.hdr.response.msg_version == GDMA_MESSAGE_V2)
> 
> It seems the driver is setting resp.hdr.response.msg_version to
> GDMA_MESSAGE_V2 above, and do the checking here. Does older
> firmware reset the resp.hdr.response.msg_version to GDMA_MESSAGE_V1
> in order to enable compatibility between firmware and driver?

Yes, older firmware is still using V1.

> 
> > +		gc->adapter_mtu = resp.adapter_mtu;
> > +	else
> > +		gc->adapter_mtu = ETH_FRAME_LEN;
> > +
> >  	return 0;
> >  }
> >
> > @@ -1185,10 +1222,10 @@ static void mana_post_pkt_rxq(struct
> mana_rxq *rxq)
> >  	WARN_ON_ONCE(recv_buf_oob->wqe_inf.wqe_size_in_bu != 1);
> >  }
> >
> > -static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
> > -				      struct xdp_buff *xdp)
> > +static struct sk_buff *mana_build_skb(struct mana_rxq *rxq, void *buf_va,
> > +				      uint pkt_len, struct xdp_buff *xdp)
> >  {
> > -	struct sk_buff *skb = build_skb(buf_va, PAGE_SIZE);
> > +	struct sk_buff *skb = napi_build_skb(buf_va, rxq->alloc_size);
> 
> Changing build_skb() to napi_build_skb() seems like an optimization
> unrelated to jumbo frame support, seems like another patch to do that?
Will do. Thanks.

- Haiyang
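
Pulling together the version handling discussed above, here is a condensed
view of the mana_query_device_cfg() hunks in the patch below (field names are
taken from the patch itself; the elided parts send the request and receive
resp):

	/* The request itself stays at V1; the driver only advertises that it
	 * can parse a V2 response.
	 */
	req.hdr.resp.msg_version = GDMA_MESSAGE_V2;

	/* ... send the request and receive resp ... */

	/* Older firmware still answers with a V1 response, so resp.adapter_mtu
	 * is only trusted when the firmware actually replied with V2;
	 * otherwise fall back to the standard Ethernet frame length.
	 */
	if (resp.hdr.response.msg_version == GDMA_MESSAGE_V2)
		gc->adapter_mtu = resp.adapter_mtu;
	else
		gc->adapter_mtu = ETH_FRAME_LEN;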

Patch

diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
index 3caea631229c..23b1521c0df9 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
@@ -133,12 +133,6 @@  u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
 	return act;
 }
 
-static unsigned int mana_xdp_fraglen(unsigned int len)
-{
-	return SKB_DATA_ALIGN(len) +
-	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-}
-
 struct bpf_prog *mana_xdp_get(struct mana_port_context *apc)
 {
 	ASSERT_RTNL();
@@ -179,17 +173,18 @@  static int mana_xdp_set(struct net_device *ndev, struct bpf_prog *prog,
 {
 	struct mana_port_context *apc = netdev_priv(ndev);
 	struct bpf_prog *old_prog;
-	int buf_max;
+	struct gdma_context *gc;
+
+	gc = apc->ac->gdma_dev->gdma_context;
 
 	old_prog = mana_xdp_get(apc);
 
 	if (!old_prog && !prog)
 		return 0;
 
-	buf_max = XDP_PACKET_HEADROOM + mana_xdp_fraglen(ndev->mtu + ETH_HLEN);
-	if (prog && buf_max > PAGE_SIZE) {
-		netdev_err(ndev, "XDP: mtu:%u too large, buf_max:%u\n",
-			   ndev->mtu, buf_max);
+	if (prog && ndev->mtu > MANA_XDP_MTU_MAX) {
+		netdev_err(ndev, "XDP: mtu:%u too large, mtu_max:%lu\n",
+			   ndev->mtu, MANA_XDP_MTU_MAX);
 		NL_SET_ERR_MSG_MOD(extack, "XDP: mtu too large");
 
 		return -EOPNOTSUPP;
@@ -206,6 +201,11 @@  static int mana_xdp_set(struct net_device *ndev, struct bpf_prog *prog,
 	if (apc->port_is_up)
 		mana_chn_setxdp(apc, prog);
 
+	if (prog)
+		ndev->max_mtu = MANA_XDP_MTU_MAX;
+	else
+		ndev->max_mtu = gc->adapter_mtu - ETH_HLEN;
+
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 492474b4d8aa..07738b7e85f2 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -427,6 +427,34 @@  static u16 mana_select_queue(struct net_device *ndev, struct sk_buff *skb,
 	return txq;
 }
 
+static int mana_change_mtu(struct net_device *ndev, int new_mtu)
+{
+	unsigned int old_mtu = ndev->mtu;
+	int err, err2;
+
+	err = mana_detach(ndev, false);
+	if (err) {
+		netdev_err(ndev, "mana_detach failed: %d\n", err);
+		return err;
+	}
+
+	ndev->mtu = new_mtu;
+
+	err = mana_attach(ndev);
+	if (!err)
+		return 0;
+
+	netdev_err(ndev, "mana_attach failed: %d\n", err);
+
+	/* Try to roll it back to the old configuration. */
+	ndev->mtu = old_mtu;
+	err2 = mana_attach(ndev);
+	if (err2)
+		netdev_err(ndev, "mana re-attach failed: %d\n", err2);
+
+	return err;
+}
+
 static const struct net_device_ops mana_devops = {
 	.ndo_open		= mana_open,
 	.ndo_stop		= mana_close,
@@ -436,6 +464,7 @@  static const struct net_device_ops mana_devops = {
 	.ndo_get_stats64	= mana_get_stats64,
 	.ndo_bpf		= mana_bpf,
 	.ndo_xdp_xmit		= mana_xdp_xmit,
+	.ndo_change_mtu		= mana_change_mtu,
 };
 
 static void mana_cleanup_port_context(struct mana_port_context *apc)
@@ -625,6 +654,9 @@  static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
 
 	mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_DEV_CONFIG,
 			     sizeof(req), sizeof(resp));
+
+	req.hdr.resp.msg_version = GDMA_MESSAGE_V2;
+
 	req.proto_major_ver = proto_major_ver;
 	req.proto_minor_ver = proto_minor_ver;
 	req.proto_micro_ver = proto_micro_ver;
@@ -647,6 +679,11 @@  static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
 
 	*max_num_vports = resp.max_num_vports;
 
+	if (resp.hdr.response.msg_version == GDMA_MESSAGE_V2)
+		gc->adapter_mtu = resp.adapter_mtu;
+	else
+		gc->adapter_mtu = ETH_FRAME_LEN;
+
 	return 0;
 }
 
@@ -1185,10 +1222,10 @@  static void mana_post_pkt_rxq(struct mana_rxq *rxq)
 	WARN_ON_ONCE(recv_buf_oob->wqe_inf.wqe_size_in_bu != 1);
 }
 
-static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
-				      struct xdp_buff *xdp)
+static struct sk_buff *mana_build_skb(struct mana_rxq *rxq, void *buf_va,
+				      uint pkt_len, struct xdp_buff *xdp)
 {
-	struct sk_buff *skb = build_skb(buf_va, PAGE_SIZE);
+	struct sk_buff *skb = napi_build_skb(buf_va, rxq->alloc_size);
 
 	if (!skb)
 		return NULL;
@@ -1196,11 +1233,12 @@  static struct sk_buff *mana_build_skb(void *buf_va, uint pkt_len,
 	if (xdp->data_hard_start) {
 		skb_reserve(skb, xdp->data - xdp->data_hard_start);
 		skb_put(skb, xdp->data_end - xdp->data);
-	} else {
-		skb_reserve(skb, XDP_PACKET_HEADROOM);
-		skb_put(skb, pkt_len);
+		return skb;
 	}
 
+	skb_reserve(skb, rxq->headroom);
+	skb_put(skb, pkt_len);
+
 	return skb;
 }
 
@@ -1233,7 +1271,7 @@  static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 	if (act != XDP_PASS && act != XDP_TX)
 		goto drop_xdp;
 
-	skb = mana_build_skb(buf_va, pkt_len, &xdp);
+	skb = mana_build_skb(rxq, buf_va, pkt_len, &xdp);
 
 	if (!skb)
 		goto drop;
@@ -1282,14 +1320,72 @@  static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 	u64_stats_update_end(&rx_stats->syncp);
 
 drop:
-	WARN_ON_ONCE(rxq->xdp_save_page);
-	rxq->xdp_save_page = virt_to_page(buf_va);
+	WARN_ON_ONCE(rxq->xdp_save_va);
+	/* Save for reuse */
+	rxq->xdp_save_va = buf_va;
 
 	++ndev->stats.rx_dropped;
 
 	return;
 }
 
+static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
+			     dma_addr_t *da, bool is_napi)
+{
+	struct page *page;
+	void *va;
+
+	/* Reuse XDP dropped page if available */
+	if (rxq->xdp_save_va) {
+		va = rxq->xdp_save_va;
+		rxq->xdp_save_va = NULL;
+	} else if (rxq->alloc_size > PAGE_SIZE) {
+		if (is_napi)
+			va = napi_alloc_frag(rxq->alloc_size);
+		else
+			va = netdev_alloc_frag(rxq->alloc_size);
+
+		if (!va)
+			return NULL;
+	} else {
+		page = dev_alloc_page();
+		if (!page)
+			return NULL;
+
+		va = page_to_virt(page);
+	}
+
+	*da = dma_map_single(dev, va + rxq->headroom, rxq->datasize,
+			     DMA_FROM_DEVICE);
+
+	if (dma_mapping_error(dev, *da)) {
+		put_page(virt_to_head_page(va));
+		return NULL;
+	}
+
+	return va;
+}
+
+/* Allocate frag for rx buffer, and save the old buf */
+static void mana_refill_rxoob(struct device *dev, struct mana_rxq *rxq,
+			      struct mana_recv_buf_oob *rxoob, void **old_buf)
+{
+	dma_addr_t da;
+	void *va;
+
+	va = mana_get_rxfrag(rxq, dev, &da, true);
+
+	if (!va)
+		return;
+
+	dma_unmap_single(dev, rxoob->sgl[0].address, rxq->datasize,
+			 DMA_FROM_DEVICE);
+	*old_buf = rxoob->buf_va;
+
+	rxoob->buf_va = va;
+	rxoob->sgl[0].address = da;
+}
+
 static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 				struct gdma_comp *cqe)
 {
@@ -1299,10 +1395,8 @@  static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 	struct mana_recv_buf_oob *rxbuf_oob;
 	struct mana_port_context *apc;
 	struct device *dev = gc->dev;
-	void *new_buf, *old_buf;
-	struct page *new_page;
+	void *old_buf = NULL;
 	u32 curr, pktlen;
-	dma_addr_t da;
 
 	apc = netdev_priv(ndev);
 
@@ -1345,40 +1439,11 @@  static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 	rxbuf_oob = &rxq->rx_oobs[curr];
 	WARN_ON_ONCE(rxbuf_oob->wqe_inf.wqe_size_in_bu != 1);
 
-	/* Reuse XDP dropped page if available */
-	if (rxq->xdp_save_page) {
-		new_page = rxq->xdp_save_page;
-		rxq->xdp_save_page = NULL;
-	} else {
-		new_page = alloc_page(GFP_ATOMIC);
-	}
-
-	if (new_page) {
-		da = dma_map_page(dev, new_page, XDP_PACKET_HEADROOM, rxq->datasize,
-				  DMA_FROM_DEVICE);
-
-		if (dma_mapping_error(dev, da)) {
-			__free_page(new_page);
-			new_page = NULL;
-		}
-	}
-
-	new_buf = new_page ? page_to_virt(new_page) : NULL;
-
-	if (new_buf) {
-		dma_unmap_page(dev, rxbuf_oob->buf_dma_addr, rxq->datasize,
-			       DMA_FROM_DEVICE);
-
-		old_buf = rxbuf_oob->buf_va;
-
-		/* refresh the rxbuf_oob with the new page */
-		rxbuf_oob->buf_va = new_buf;
-		rxbuf_oob->buf_dma_addr = da;
-		rxbuf_oob->sgl[0].address = rxbuf_oob->buf_dma_addr;
-	} else {
-		old_buf = NULL; /* drop the packet if no memory */
-	}
+	mana_refill_rxoob(dev, rxq, rxbuf_oob, &old_buf);
 
+	/* Unsuccessful refill will have old_buf == NULL.
+	 * In this case, mana_rx_skb() will drop the packet.
+	 */
 	mana_rx_skb(old_buf, oob, rxq);
 
 drop:
@@ -1659,8 +1724,8 @@  static void mana_destroy_rxq(struct mana_port_context *apc,
 
 	mana_deinit_cq(apc, &rxq->rx_cq);
 
-	if (rxq->xdp_save_page)
-		__free_page(rxq->xdp_save_page);
+	if (rxq->xdp_save_va)
+		put_page(virt_to_head_page(rxq->xdp_save_va));
 
 	for (i = 0; i < rxq->num_rx_buf; i++) {
 		rx_oob = &rxq->rx_oobs[i];
@@ -1668,10 +1733,10 @@  static void mana_destroy_rxq(struct mana_port_context *apc,
 		if (!rx_oob->buf_va)
 			continue;
 
-		dma_unmap_page(dev, rx_oob->buf_dma_addr, rxq->datasize,
-			       DMA_FROM_DEVICE);
+		dma_unmap_single(dev, rx_oob->sgl[0].address,
+				 rx_oob->sgl[0].size, DMA_FROM_DEVICE);
 
-		free_page((unsigned long)rx_oob->buf_va);
+		put_page(virt_to_head_page(rx_oob->buf_va));
 		rx_oob->buf_va = NULL;
 	}
 
@@ -1681,6 +1746,26 @@  static void mana_destroy_rxq(struct mana_port_context *apc,
 	kfree(rxq);
 }
 
+static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
+			    struct mana_rxq *rxq, struct device *dev)
+{
+	dma_addr_t da;
+	void *va;
+
+	va = mana_get_rxfrag(rxq, dev, &da, false);
+
+	if (!va)
+		return -ENOMEM;
+
+	rx_oob->buf_va = va;
+
+	rx_oob->sgl[0].address = da;
+	rx_oob->sgl[0].size = rxq->datasize;
+	rx_oob->sgl[0].mem_key = mem_key;
+
+	return 0;
+}
+
 #define MANA_WQE_HEADER_SIZE 16
 #define MANA_WQE_SGE_SIZE 16
 
@@ -1690,11 +1775,10 @@  static int mana_alloc_rx_wqe(struct mana_port_context *apc,
 	struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
 	struct mana_recv_buf_oob *rx_oob;
 	struct device *dev = gc->dev;
-	struct page *page;
-	dma_addr_t da;
 	u32 buf_idx;
+	int ret;
 
-	WARN_ON(rxq->datasize == 0 || rxq->datasize > PAGE_SIZE);
+	WARN_ON(rxq->datasize == 0);
 
 	*rxq_size = 0;
 	*cq_size = 0;
@@ -1703,25 +1787,12 @@  static int mana_alloc_rx_wqe(struct mana_port_context *apc,
 		rx_oob = &rxq->rx_oobs[buf_idx];
 		memset(rx_oob, 0, sizeof(*rx_oob));
 
-		page = alloc_page(GFP_KERNEL);
-		if (!page)
-			return -ENOMEM;
-
-		da = dma_map_page(dev, page, XDP_PACKET_HEADROOM, rxq->datasize,
-				  DMA_FROM_DEVICE);
-
-		if (dma_mapping_error(dev, da)) {
-			__free_page(page);
-			return -ENOMEM;
-		}
-
-		rx_oob->buf_va = page_to_virt(page);
-		rx_oob->buf_dma_addr = da;
-
 		rx_oob->num_sge = 1;
-		rx_oob->sgl[0].address = rx_oob->buf_dma_addr;
-		rx_oob->sgl[0].size = rxq->datasize;
-		rx_oob->sgl[0].mem_key = apc->ac->gdma_dev->gpa_mkey;
+
+		ret = mana_fill_rx_oob(rx_oob, apc->ac->gdma_dev->gpa_mkey, rxq,
+				       dev);
+		if (ret)
+			return ret;
 
 		rx_oob->wqe_req.sgl = rx_oob->sgl;
 		rx_oob->wqe_req.num_sge = rx_oob->num_sge;
@@ -1764,6 +1835,7 @@  static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	struct mana_obj_spec wq_spec;
 	struct mana_obj_spec cq_spec;
 	struct gdma_queue_spec spec;
+	unsigned int mtu = ndev->mtu;
 	struct mana_cq *cq = NULL;
 	struct gdma_context *gc;
 	u32 cq_size, rq_size;
@@ -1780,9 +1852,18 @@  static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	rxq->ndev = ndev;
 	rxq->num_rx_buf = RX_BUFFERS_PER_QUEUE;
 	rxq->rxq_idx = rxq_idx;
-	rxq->datasize = ALIGN(MAX_FRAME_SIZE, 64);
 	rxq->rxobj = INVALID_MANA_HANDLE;
 
+	rxq->datasize = ALIGN(mtu + ETH_HLEN, 64);
+
+	if (mtu > MANA_XDP_MTU_MAX) {
+		rxq->alloc_size = mtu + MANA_RXBUF_PAD;
+		rxq->headroom = 0;
+	} else {
+		rxq->alloc_size = mtu + MANA_RXBUF_PAD + XDP_PACKET_HEADROOM;
+		rxq->headroom = XDP_PACKET_HEADROOM;
+	}
+
 	err = mana_alloc_rx_wqe(apc, rxq, &rq_size, &cq_size);
 	if (err)
 		goto out;
@@ -2194,8 +2275,8 @@  static int mana_probe_port(struct mana_context *ac, int port_idx,
 	ndev->netdev_ops = &mana_devops;
 	ndev->ethtool_ops = &mana_ethtool_ops;
 	ndev->mtu = ETH_DATA_LEN;
-	ndev->max_mtu = ndev->mtu;
-	ndev->min_mtu = ndev->mtu;
+	ndev->max_mtu = gc->adapter_mtu - ETH_HLEN;
+	ndev->min_mtu = ETH_MIN_MTU;
 	ndev->needed_headroom = MANA_HEADROOM;
 	ndev->dev_port = port_idx;
 	SET_NETDEV_DEV(ndev, gc->dev);
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 56189e4252da..96c120160f15 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -145,6 +145,7 @@  struct gdma_general_req {
 }; /* HW DATA */
 
 #define GDMA_MESSAGE_V1 1
+#define GDMA_MESSAGE_V2 2
 
 struct gdma_general_resp {
 	struct gdma_resp_hdr hdr;
@@ -354,6 +355,9 @@  struct gdma_context {
 	struct gdma_resource	msix_resource;
 	struct gdma_irq_context	*irq_contexts;
 
+	/* L2 MTU */
+	u16 adapter_mtu;
+
 	/* This maps a CQ index to the queue structure. */
 	unsigned int		max_num_cqs;
 	struct gdma_queue	**cq_table;
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index bb11a6535d80..33cd144733fb 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -36,9 +36,6 @@  enum TRI_STATE {
 
 #define COMP_ENTRY_SIZE 64
 
-#define ADAPTER_MTU_SIZE 1500
-#define MAX_FRAME_SIZE (ADAPTER_MTU_SIZE + 14)
-
 #define RX_BUFFERS_PER_QUEUE 512
 
 #define MAX_SEND_BUFFERS_PER_QUEUE 256
@@ -282,7 +279,6 @@  struct mana_recv_buf_oob {
 	struct gdma_wqe_request wqe_req;
 
 	void *buf_va;
-	dma_addr_t buf_dma_addr;
 
 	/* SGL of the buffer going to be sent has part of the work request. */
 	u32 num_sge;
@@ -295,6 +291,11 @@  struct mana_recv_buf_oob {
 	struct gdma_posted_wqe_info wqe_inf;
 };
 
+#define MANA_RXBUF_PAD (SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) \
+			+ ETH_HLEN)
+
+#define MANA_XDP_MTU_MAX (PAGE_SIZE - MANA_RXBUF_PAD - XDP_PACKET_HEADROOM)
+
 struct mana_rxq {
 	struct gdma_queue *gdma_rq;
 	/* Cache the gdma receive queue id */
@@ -304,6 +305,8 @@  struct mana_rxq {
 	u32 rxq_idx;
 
 	u32 datasize;
+	u32 alloc_size;
+	u32 headroom;
 
 	mana_handle_t rxobj;
 
@@ -322,7 +325,7 @@  struct mana_rxq {
 
 	struct bpf_prog __rcu *bpf_prog;
 	struct xdp_rxq_info xdp_rxq;
-	struct page *xdp_save_page;
+	void *xdp_save_va; /* for reusing */
 	bool xdp_flush;
 	int xdp_rc; /* XDP redirect return code */
 
@@ -486,6 +489,11 @@  struct mana_query_device_cfg_resp {
 	u16 max_num_vports;
 	u16 reserved;
 	u32 max_num_eqs;
+
+	/* response v2: */
+	u16 adapter_mtu;
+	u16 reserved2;
+	u32 reserved3;
 }; /* HW DATA */
 
 /* Query vPort Configuration */