
[v2,net-next,resent] net: fec: Enable SOC specific rx-usecs coalescence default setting

Message ID 20240729193527.376077-1-shenwei.wang@nxp.com (mailing list archive)
State New
Series [v2,net-next,resent] net: fec: Enable SOC specific rx-usecs coalescence default setting

Commit Message

Shenwei Wang July 29, 2024, 7:35 p.m. UTC
The current FEC driver uses a single default rx-usecs coalescence setting
across all SoCs. This approach leads to suboptimal latency on newer,
high-performance SoCs such as i.MX8QM and i.MX8M.

For example, the following are the ping results on an i.MX8QXP board:

$ ping 192.168.0.195
PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms

The current default rx-usecs value of 1000us was originally optimized for
CPU-bound systems like i.MX2x and i.MX6x. However, for i.MX8 and later
generations, CPU performance is no longer a limiting factor. Consequently,
the rx-usecs value should be reduced to enhance receive latency.

The following are the ping results with the 100us setting:

$ ping 192.168.0.195
PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms

Performance testing using iperf revealed no noticeable impact on
network throughput or CPU utilization.

Signed-off-by: Shenwei Wang <shenwei.wang@nxp.com>
---
Changes in V2:
- improved the commit message and removed the Fixes tag per Andrew's feedback

 drivers/net/ethernet/freescale/fec_main.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--
2.34.1

Comments

Joe Damato July 30, 2024, 10:17 a.m. UTC | #1
On Mon, Jul 29, 2024 at 02:35:27PM -0500, Shenwei Wang wrote:
> The current FEC driver uses a single default rx-usecs coalescence setting
> across all SoCs. This approach leads to suboptimal latency on newer, high
> performance SoCs such as i.MX8QM and i.MX8M.
> 
> For example, the following are the ping result on a i.MX8QXP board:
> 
> $ ping 192.168.0.195
> PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
> 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
> 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
> 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms
> 
> The current default rx-usecs value of 1000us was originally optimized for
> CPU-bound systems like i.MX2x and i.MX6x. However, for i.MX8 and later
> generations, CPU performance is no longer a limiting factor. Consequently,
> the rx-usecs value should be reduced to enhance receive latency.
> 
> The following are the ping result with the 100us setting:
> 
> $ ping 192.168.0.195
> PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
> 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
> 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
> 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms
> 
> Performance testing using iperf revealed no noticeable impact on
> network throughput or CPU utilization.

I'm not sure this short paragraph addresses Andrew's comment:

  Have you benchmarked CPU usage with this patch, for a range of traffic
  bandwidths and burst patterns. How does it differ?

Maybe you could provide more details of the iperf tests you ran? It
seems odd that CPU usage is unchanged.

If the system is more reactive (due to lower coalesce settings and
IRQs firing more often), you'd expect CPU usage to increase,
wouldn't you?

- Joe
Shenwei Wang July 30, 2024, 1:47 p.m. UTC | #2
> -----Original Message-----
> From: Joe Damato <jdamato@fastly.com>
> Sent: Tuesday, July 30, 2024 5:17 AM
> To: Shenwei Wang <shenwei.wang@nxp.com>
> Cc: Wei Fang <wei.fang@nxp.com>; David S. Miller <davem@davemloft.net>;
> Eric Dumazet <edumazet@google.com>; Jakub Kicinski <kuba@kernel.org>;
> Paolo Abeni <pabeni@redhat.com>; Clark Wang <xiaoning.wang@nxp.com>;
> imx@lists.linux.dev; netdev@vger.kernel.org; dl-linux-imx <linux-
> imx@nxp.com>
> Subject: [EXT] Re: [PATCH v2 net-next resent] net: fec: Enable SOC specific rx-
> usecs coalescence default setting
> 
> On Mon, Jul 29, 2024 at 02:35:27PM -0500, Shenwei Wang wrote:
> > Performance testing using iperf revealed no noticeable impact on
> > network throughput or CPU utilization.
> 
> I'm not sure this short paragraph addresses Andrew's comment:
> 
>   Have you benchmarked CPU usage with this patch, for a range of traffic
>   bandwidths and burst patterns. How does it differ?
> 
> Maybe you could provide more details of the iperf tests you ran? It seems odd
> that CPU usage is unchanged.
> 
> If the system is more reactive (due to lower coalesce settings and IRQs firing
> more often), you'd expect CPU usage to increase, wouldn't you?
> 

The driver operates under NAPI polling, where several factors influence IRQ triggering: 
NAPI polling weight, RX timer threshold, and RX frame count threshold. 
During iperf testing, my understanding is that the NAPI polling weight is likely the primary 
factor affecting triggering frequency, as IRQs are disabled during NAPI polling cycles.

Thanks,
Shenwei
	
> - Joe
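
For reference, a minimal sketch of the standard NAPI receive pattern described
above. This is generic kernel-style code, not the actual FEC implementation;
the foo_* functions and struct foo_priv are placeholders. The point is that
while the poll loop keeps consuming its full budget, RX interrupts stay masked
and the coalescing timer barely matters; it only comes back into play once
traffic is light enough for the ring to drain.

static irqreturn_t foo_isr(int irq, void *dev_id)
{
	struct foo_priv *priv = dev_id;

	foo_disable_rx_irq(priv);	/* mask further RX interrupts */
	napi_schedule(&priv->napi);	/* defer the work to the poll loop */
	return IRQ_HANDLED;
}

static int foo_napi_poll(struct napi_struct *napi, int budget)
{
	struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
	int done = foo_clean_rx_ring(priv, budget);	/* up to 'budget' packets */

	/* Re-arm interrupts (and hence the coalescing timer) only once the
	 * ring is drained; under sustained load we stay in polled mode.
	 */
	if (done < budget && napi_complete_done(napi, done))
		foo_enable_rx_irq(priv);

	return done;
}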
Fabio Estevam July 30, 2024, 2:24 p.m. UTC | #3
On Tue, Jul 30, 2024 at 7:17 AM Joe Damato <jdamato@fastly.com> wrote:

> I'm not sure this short paragraph addresses Andrew's comment:
>
>   Have you benchmarked CPU usage with this patch, for a range of traffic
>   bandwidths and burst patterns. How does it differ?
>
> Maybe you could provide more details of the iperf tests you ran? It
> seems odd that CPU usage is unchanged.
>
> If the system is more reactive (due to lower coalesce settings and
> IRQs firing more often), you'd expect CPU usage to increase,
> wouldn't you?

[Added Andrew on Cc]

Shenwei,

If someone comments on a previous version of the patch,
it is good practice to copy that person on subsequent versions.
Shenwei Wang July 30, 2024, 7:26 p.m. UTC | #4
> -----Original Message-----
> From: Fabio Estevam <festevam@gmail.com>
> Sent: Tuesday, July 30, 2024 9:25 AM
> To: Joe Damato <jdamato@fastly.com>; Shenwei Wang
> <shenwei.wang@nxp.com>; Wei Fang <wei.fang@nxp.com>; David S. Miller
> <davem@davemloft.net>; Eric Dumazet <edumazet@google.com>; Jakub
> Kicinski <kuba@kernel.org>; Paolo Abeni <pabeni@redhat.com>; Clark Wang
> <xiaoning.wang@nxp.com>; imx@lists.linux.dev; netdev@vger.kernel.org; dl-
> linux-imx <linux-imx@nxp.com>
> Cc: Andrew Lunn <andrew@lunn.ch>
> Subject: [EXT] Re: [PATCH v2 net-next resent] net: fec: Enable SOC specific rx-
> usecs coalescence default setting
> 
> On Tue, Jul 30, 2024 at 7:17 AM Joe Damato <jdamato@fastly.com> wrote:
> 
> > I'm not sure this short paragraph addresses Andrew's comment:
> >
> >   Have you benchmarked CPU usage with this patch, for a range of traffic
> >   bandwidths and burst patterns. How does it differ?
> >
> > Maybe you could provide more details of the iperf tests you ran? It
> > seems odd that CPU usage is unchanged.
> >
> > If the system is more reactive (due to lower coalesce settings and
> > IRQs firing more often), you'd expect CPU usage to increase, wouldn't
> > you?
> 
> [Added Andrew on Cc]
> 
> Shenwei,
> 
> If someone comments on a previous version of the patch, it is good practice to
> copy that person on subsequent versions.

Hi Fabio,

Thank you! 
I wasn't aware that he wasn't included in the maintainer list generated by get_maintainer.pl.

Thanks,
Shenwei
Andrew Lunn July 31, 2024, 11:56 p.m. UTC | #5
On Tue, Jul 30, 2024 at 11:17:05AM +0100, Joe Damato wrote:
> On Mon, Jul 29, 2024 at 02:35:27PM -0500, Shenwei Wang wrote:
> > The current FEC driver uses a single default rx-usecs coalescence setting
> > across all SoCs. This approach leads to suboptimal latency on newer, high
> > performance SoCs such as i.MX8QM and i.MX8M.
> > 
> > For example, the following are the ping result on a i.MX8QXP board:
> > 
> > $ ping 192.168.0.195
> > PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> > 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
> > 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
> > 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
> > 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms
> > 
> > The current default rx-usecs value of 1000us was originally optimized for
> > CPU-bound systems like i.MX2x and i.MX6x. However, for i.MX8 and later
> > generations, CPU performance is no longer a limiting factor. Consequently,
> > the rx-usecs value should be reduced to enhance receive latency.
> > 
> > The following are the ping result with the 100us setting:
> > 
> > $ ping 192.168.0.195
> > PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> > 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
> > 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
> > 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
> > 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms
> > 
> > Performance testing using iperf revealed no noticeable impact on
> > network throughput or CPU utilization.
> 
> I'm not sure this short paragraph addresses Andrew's comment:
> 
>   Have you benchmarked CPU usage with this patch, for a range of traffic
>   bandwidths and burst patterns. How does it differ?
> 
> Maybe you could provide more details of the iperf tests you ran? It
> seems odd that CPU usage is unchanged.
> 
> If the system is more reactive (due to lower coalesce settings and
> IRQs firing more often), you'd expect CPU usage to increase,
> wouldn't you?

Hi Joe

It is not as simple as that.

Consider a VoIP system, a CISCO or Snom phone. It will be receiving a
packet about every 2ms. This change in interrupt coalescing will have
no effect on CPU load, there will still be an interrupt per
packet. What this change does however do is reduce the latency, as can
be seen by the ping. However, anybody building a phone knows about

ethtool -C|--coalesce

and will either configure the value lower, or turn it off
altogether. Also, CCITT recommends 50ms end to end delay for a
national call, so going from 1.5 to 0.4ms is in the noise.

Now consider bulk transfer at line rate. The receive buffer is going
to fill with multiple packets, NAPI is going to get its budget of 64
packets, and the interrupt will be left disabled. NAPI will then poll
the device every so often, receiving packets. Since interrupts are
off, the coalesce time makes no difference.

Now consider packets arriving at about 0.5ms intervals. That is way
too slow for NAPI to go into polled mode. It does however mean 2
packets would typically be received in each coalescence
period. However with the proposed change, an interrupt would be
triggered for each packet, doubling the interrupt load.

But think about a packet every 0.5ms. That is 2000 packets per
second. Even the older CPUs should be able to handle that.

What I would really like to know is the real use case this change is
for. For me, ping is not a use case.

     Andrew
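
As an illustration of the 0.5 ms inter-packet scenario above, here is a
back-of-the-envelope calculation as a small standalone C program. It is not
from the thread and is only indicative; the real interrupt rate also depends
on the frame-count threshold and on whether NAPI stays in polled mode.

#include <stdio.h>

int main(void)
{
	double gap_us  = 500.0;		/* one packet every 0.5 ms */
	double pps     = 1e6 / gap_us;	/* 2000 packets per second */
	double irq_old = 1e6 / 1000.0;	/* rx-usecs=1000: timer fires ~every 1 ms, batching 2 packets */
	double irq_new = pps;		/* rx-usecs=100 < packet gap: one interrupt per packet */

	printf("%.0f pkt/s: ~%.0f irq/s at rx-usecs=1000, ~%.0f irq/s at rx-usecs=100\n",
	       pps, irq_old, irq_new);
	return 0;
}

This prints 2000 pkt/s with roughly 1000 irq/s before and 2000 irq/s after,
i.e. the doubling of the interrupt load mentioned above.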
Wei Fang Oct. 16, 2024, 2:34 a.m. UTC | #6
> -----Original Message-----
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: August 1, 2024 7:56
> To: Joe Damato <jdamato@fastly.com>; Shenwei Wang
> <shenwei.wang@nxp.com>; Wei Fang <wei.fang@nxp.com>; David S. Miller
> <davem@davemloft.net>; Eric Dumazet <edumazet@google.com>; Jakub
> Kicinski <kuba@kernel.org>; Paolo Abeni <pabeni@redhat.com>; Clark Wang
> <xiaoning.wang@nxp.com>; imx@lists.linux.dev; netdev@vger.kernel.org;
> dl-linux-imx <linux-imx@nxp.com>
> Subject: Re: [PATCH v2 net-next resent] net: fec: Enable SOC specific rx-usecs
> coalescence default setting
> 
> On Tue, Jul 30, 2024 at 11:17:05AM +0100, Joe Damato wrote:
> > On Mon, Jul 29, 2024 at 02:35:27PM -0500, Shenwei Wang wrote:
> > > The current FEC driver uses a single default rx-usecs coalescence setting
> > > across all SoCs. This approach leads to suboptimal latency on newer, high
> > > performance SoCs such as i.MX8QM and i.MX8M.
> > >
> > > For example, the following are the ping result on a i.MX8QXP board:
> > >
> > > $ ping 192.168.0.195
> > > PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> > > 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=1.32 ms
> > > 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=1.31 ms
> > > 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=1.33 ms
> > > 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=1.33 ms
> > >
> > > The current default rx-usecs value of 1000us was originally optimized for
> > > CPU-bound systems like i.MX2x and i.MX6x. However, for i.MX8 and later
> > > generations, CPU performance is no longer a limiting factor. Consequently,
> > > the rx-usecs value should be reduced to enhance receive latency.
> > >
> > > The following are the ping result with the 100us setting:
> > >
> > > $ ping 192.168.0.195
> > > PING 192.168.0.195 (192.168.0.195) 56(84) bytes of data.
> > > 64 bytes from 192.168.0.195: icmp_seq=1 ttl=64 time=0.554 ms
> > > 64 bytes from 192.168.0.195: icmp_seq=2 ttl=64 time=0.499 ms
> > > 64 bytes from 192.168.0.195: icmp_seq=3 ttl=64 time=0.502 ms
> > > 64 bytes from 192.168.0.195: icmp_seq=4 ttl=64 time=0.486 ms
> > >
> > > Performance testing using iperf revealed no noticeable impact on
> > > network throughput or CPU utilization.
> >
> > I'm not sure this short paragraph addresses Andrew's comment:
> >
> >   Have you benchmarked CPU usage with this patch, for a range of traffic
> >   bandwidths and burst patterns. How does it differ?
> >
> > Maybe you could provide more details of the iperf tests you ran? It
> > seems odd that CPU usage is unchanged.
> >
> > If the system is more reactive (due to lower coalesce settings and
> > IRQs firing more often), you'd expect CPU usage to increase,
> > wouldn't you?
> 
> Hi Joe
> 
> It is not as simple as that.
> 
> Consider a VoIP system, a CISCO or Snom phone. It will be receiving a
> packet about every 2ms. This change in interrupt coalescing will have
> no effect on CPU load, there will still be an interrupt per
> packet. What this change does however do is reduce the latency, as can
> be seen by the ping. However, anybody building a phone knows about
> 
> ethtool -C|--coalesce
> 
> and will either configure the value lower, or turn it off
> altogether. Also, CCITT recommends 50ms end to end delay for a
> national call, so going from 1.5 to 0.4ms is in the noise.
> 
> Now consider bulk transfer at line rate. The receive buffer is going
> to fill with multiple packets, NAPI is going to get its budget of 64
> packets, and the interrupt will be left disabled. NAPI will then poll
> the device every so often, receiving packets. Since interrupts are
> off, the coalesce time makes no difference.
> 
> Now consider packets arriving at about 0.5ms intervals. That is way
> too slow for NAPI to go into polled mode. It does however mean 2
> packets would typically be received in each coalescence
> period. However with the proposed change, an interrupt would be
> triggered for each packet, doubling the interrupt load.
> 
> But think about a packet every 0.5ms. That is 2000 packets per
> second. Even the older CPUs should be able to handle that.
> 
> What I would really like to know is the real use case this change is
> for. For me, ping is not a use case.
> 

Sorry for the delayed response, as I didn't have a real use case
before. But just the other day we had a customer issue. The customer
used eth0 (FEC) and eth1 (eQOS, Synopsys DWMAC) on i.MX8MP. Both
ports are RMII with 100 Mbit/s bandwidth. They found that the two
ports transferred data at different rates when using TFTP: FEC was
slower than eQOS. The FEC transfer rate was only about 400 KB/s,
while the eQOS could reach 1 MB/s. When the FEC coalescence
parameters were adjusted to match the eQOS settings, the transfer
rates were the same.

From this use case, when packets arrive infrequently, setting the
coalescence parameters to large values makes the CPU respond too
slowly, which hurts the performance of some protocols.
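
A rough throughput ceiling makes these numbers plausible: TFTP is a
stop-and-wait protocol, so each data block must be acknowledged before the
next is sent, and every block pays the full receive latency. The sketch below
assumes the default 512-byte TFTP block size and a per-block round trip of
about 1.3 ms (comparable to the ping times quoted earlier); neither figure
comes from the customer report.

#include <stdio.h>

int main(void)
{
	double block_bytes = 512.0;	/* default TFTP block size (assumption) */
	double rtt_s       = 1.3e-3;	/* round trip with the 1000 us rx-usecs default */

	/* One block per round trip -> roughly 385 KB/s, close to the
	 * ~400 KB/s observed on the FEC port.
	 */
	printf("ceiling: %.0f KB/s\n", block_bytes / rtt_s / 1024.0);
	return 0;
}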

Patch

diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index a923cb95cdc6..13c663dbf7b1 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -99,6 +99,7 @@  static const u16 fec_enet_vlan_pri_to_queue[8] = {0, 0, 1, 1, 1, 2, 2, 2};

 struct fec_devinfo {
 	u32 quirks;
+	unsigned int rx_time_itr;
 };

 static const struct fec_devinfo fec_imx25_info = {
@@ -159,6 +160,7 @@  static const struct fec_devinfo fec_imx8mq_info = {
 		  FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES |
 		  FEC_QUIRK_HAS_EEE | FEC_QUIRK_WAKEUP_FROM_INT2 |
 		  FEC_QUIRK_HAS_MDIO_C45,
+	.rx_time_itr = 100,
 };

 static const struct fec_devinfo fec_imx8qm_info = {
@@ -169,6 +171,7 @@  static const struct fec_devinfo fec_imx8qm_info = {
 		  FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE |
 		  FEC_QUIRK_CLEAR_SETUP_MII | FEC_QUIRK_HAS_MULTI_QUEUES |
 		  FEC_QUIRK_DELAYED_CLKS_SUPPORT | FEC_QUIRK_HAS_MDIO_C45,
+	.rx_time_itr = 100,
 };

 static const struct fec_devinfo fec_s32v234_info = {
@@ -4027,8 +4030,9 @@  static int fec_enet_init(struct net_device *ndev)
 #endif
 	fep->rx_pkts_itr = FEC_ITR_ICFT_DEFAULT;
 	fep->tx_pkts_itr = FEC_ITR_ICFT_DEFAULT;
-	fep->rx_time_itr = FEC_ITR_ICTT_DEFAULT;
 	fep->tx_time_itr = FEC_ITR_ICTT_DEFAULT;
+	if (fep->rx_time_itr == 0)
+		fep->rx_time_itr = FEC_ITR_ICTT_DEFAULT;

 	/* Check mask of the streaming and coherent API */
 	ret = dma_set_mask_and_coherent(&fep->pdev->dev, DMA_BIT_MASK(32));
@@ -4325,8 +4329,10 @@  fec_probe(struct platform_device *pdev)
 	dev_info = device_get_match_data(&pdev->dev);
 	if (!dev_info)
 		dev_info = (const struct fec_devinfo *)pdev->id_entry->driver_data;
-	if (dev_info)
+	if (dev_info) {
 		fep->quirks = dev_info->quirks;
+		fep->rx_time_itr = dev_info->rx_time_itr;
+	}

 	fep->netdev = ndev;
 	fep->num_rx_queues = num_rx_qs;