
[net-next,v2,2/3] net: dev: Makes sure netif_rx() can be invoked in any context.

Message ID 20220204201259.1095226-3-bigeasy@linutronix.de (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series net: dev: PREEMPT_RT fixups.

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Series has a cover letter
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 4832 this patch: 4832
netdev/cc_maintainers warning 8 maintainers not CCed: andrii@kernel.org rostedt@goodmis.org kpsingh@kernel.org kafai@fb.com songliubraving@fb.com qitao.xu@bytedance.com yhs@fb.com mingo@redhat.com
netdev/build_clang success Errors and warnings before: 823 this patch: 823
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 4987 this patch: 4987
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 133 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Sebastian Andrzej Siewior Feb. 4, 2022, 8:12 p.m. UTC
Dave suggested a while ago (eleven years by now) "Let's make netif_rx()
work in all contexts and get rid of netif_rx_ni()". Eric agreed and
pointed out that modern devices should use netif_receive_skb() to avoid
the overhead.
In the meantime someone added another variant, netif_rx_any_context(),
which behaves as suggested.

netif_rx() must be invoked with disabled bottom halves to ensure that
pending softirqs, which were raised within the function, are handled.
netif_rx_ni() can be invoked only from process context (bottom halves
must be enabled) because the function handles pending softirqs without
checking if bottom halves were disabled or not.
netif_rx_any_context() invokes one of the former functions by checking
in_interrupt().

netif_rx() could be taught to handle both cases (disabled and enabled
bottom halves) by simply disabling bottom halves while invoking
netif_rx_internal(). The local_bh_enable() invocation will then invoke
pending softirqs only if the BH-disable counter drops to zero.
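
For illustration, a minimal sketch of the counter semantics (the two callers
below are hypothetical and only demonstrate why the same netif_rx() works in
both contexts; the authoritative hunk is in the patch at the bottom):

	static void caller_with_bh_already_disabled(struct sk_buff *skb)
	{
		local_bh_disable();	/* BH-disable nesting: 0 -> 1 */
		netif_rx(skb);		/* inside: 1 -> 2 -> 1, softirqs stay pending */
		local_bh_enable();	/* 1 -> 0, pending softirqs run here */
	}

	static void caller_in_plain_process_context(struct sk_buff *skb)
	{
		/* inside netif_rx(): 0 -> 1 -> 0, pending softirqs run in its
		 * own local_bh_enable() before it returns.
		 */
		netif_rx(skb);
	}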

Eric is concerned about the overhead of BH-disable+enable especially in
regard to the loopback driver. As critical as this driver is, it will
receive a shortcut to avoid the additional overhead which is not needed.

Add a local_bh_disable() section in netif_rx() to ensure softirqs are
handled if needed. Provide the internal bits as __netif_rx() which can
be used by the loopback driver. This function is not exported so it
can't be used by modules.
Make netif_rx_ni() and netif_rx_any_context() invoke netif_rx() so they
can be removed once there are no users left.

Link: https://lkml.kernel.org/r/20100415.020246.218622820.davem@davemloft.net
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Eric Dumazet <edumazet@google.com>
---
 drivers/net/loopback.c     |  2 +-
 include/linux/netdevice.h  | 14 ++++++++--
 include/trace/events/net.h | 14 ----------
 net/core/dev.c             | 53 +++++++++++---------------------------
 4 files changed, 28 insertions(+), 55 deletions(-)

Comments

Toke Høiland-Jørgensen Feb. 4, 2022, 11:44 p.m. UTC | #1
Sebastian Andrzej Siewior <bigeasy@linutronix.de> writes:

> Dave suggested a while ago (eleven years by now) "Let's make netif_rx()
> work in all contexts and get rid of netif_rx_ni()". Eric agreed and
> pointed out that modern devices should use netif_receive_skb() to avoid
> the overhead.
> In the meantime someone added another variant, netif_rx_any_context(),
> which behaves as suggested.
>
> netif_rx() must be invoked with disabled bottom halves to ensure that
> pending softirqs, which were raised within the function, are handled.
> netif_rx_ni() can be invoked only from process context (bottom halves
> must be enabled) because the function handles pending softirqs without
> checking if bottom halves were disabled or not.
> netif_rx_any_context() invokes one of the former functions by checking
> in_interrupt().
>
> netif_rx() could be taught to handle both cases (disabled and enabled
> bottom halves) by simply disabling bottom halves while invoking
> netif_rx_internal(). The local_bh_enable() invocation will then invoke
> pending softirqs only if the BH-disable counter drops to zero.
>
> Eric is concerned about the overhead of BH-disable+enable especially in
> regard to the loopback driver. As critical as this driver is, it will
> receive a shortcut to avoid the additional overhead which is not needed.
>
> Add a local_bh_disable() section in netif_rx() to ensure softirqs are
> handled if needed. Provide the internal bits as __netif_rx() which can
> be used by the loopback driver. This function is not exported so it
> can't be used by modules.
> Make netif_rx_ni() and netif_rx_any_context() invoke netif_rx() so they
> can be removed once there are no users left.
>
> Link: https://lkml.kernel.org/r/20100415.020246.218622820.davem@davemloft.net
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Reviewed-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Jakub Kicinski Feb. 5, 2022, 4:17 a.m. UTC | #2
On Fri,  4 Feb 2022 21:12:58 +0100 Sebastian Andrzej Siewior wrote:
> +int __netif_rx(struct sk_buff *skb)
> +{
> +	int ret;
> +
> +	trace_netif_rx_entry(skb);
> +	ret = netif_rx_internal(skb);
> +	trace_netif_rx_exit(ret);
> +	return ret;
> +}

Any reason this is not exported? I don't think there's anything wrong
with drivers calling this function, especially SW drivers which already
know to be in BH. I'd vote for roughly all of $(ls drivers/net/*.c) to
get the same treatment as loopback.
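
For context, the pattern referred to here: a software device's ndo_start_xmit()
is already called with bottom halves disabled (dev_queue_xmit() runs under
rcu_read_lock_bh()), so such a driver could feed the backlog via __netif_rx()
directly, just like the loopback hunk in the patch below. A purely hypothetical
driver as a sketch:

	static netdev_tx_t swdev_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		/* BH is already disabled on the xmit path, so the
		 * __netif_rx() shortcut is safe here.
		 */
		skb_orphan(skb);
		skb->protocol = eth_type_trans(skb, dev);

		if (likely(__netif_rx(skb) == NET_RX_SUCCESS))
			dev->stats.rx_packets++;

		return NETDEV_TX_OK;
	}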
Sebastian Andrzej Siewior Feb. 5, 2022, 8:36 p.m. UTC | #3
On 2022-02-04 20:17:15 [-0800], Jakub Kicinski wrote:
> On Fri,  4 Feb 2022 21:12:58 +0100 Sebastian Andrzej Siewior wrote:
> > +int __netif_rx(struct sk_buff *skb)
> > +{
> > +	int ret;
> > +
> > +	trace_netif_rx_entry(skb);
> > +	ret = netif_rx_internal(skb);
> > +	trace_netif_rx_exit(ret);
> > +	return ret;
> > +}
> 
> Any reason this is not exported? I don't think there's anything wrong
> with drivers calling this function, especially SW drivers which already
> know to be in BH. I'd vote for roughly all of $(ls drivers/net/*.c) to
> get the same treatment as loopback.

Don't we end up in the same situation as netif_rx() vs netif_rx_ni()?

Sebastian
Jakub Kicinski Feb. 7, 2022, 4:47 p.m. UTC | #4
On Sat, 5 Feb 2022 21:36:05 +0100 Sebastian Andrzej Siewior wrote:
> On 2022-02-04 20:17:15 [-0800], Jakub Kicinski wrote:
> > On Fri,  4 Feb 2022 21:12:58 +0100 Sebastian Andrzej Siewior wrote:  
> > > +int __netif_rx(struct sk_buff *skb)
> > > +{
> > > +	int ret;
> > > +
> > > +	trace_netif_rx_entry(skb);
> > > +	ret = netif_rx_internal(skb);
> > > +	trace_netif_rx_exit(ret);
> > > +	return ret;
> > > +}  
> > 
> > Any reason this is not exported? I don't think there's anything wrong
> > with drivers calling this function, especially SW drivers which already
> > know to be in BH. I'd vote for roughly all of $(ls drivers/net/*.c) to
> > get the same treatment as loopback.  
> 
> Don't we end up in the same situation as netif_rx() vs netif_rx_ni()?

Sort of. TBH my understanding of the motivation is a bit vague.
IIUC you want to reduce the API duplication so drivers know what to
do[1]. I believe the quote from Eric you put in the commit message
pertains to HW devices, where using netif_rx() is quite anachronistic. 
But software devices like loopback, veth or tunnels may want to go via
backlog for good reasons. Would it make it better if we called
netif_rx() netif_rx_backlog() instead? Or am I missing the point?
Sebastian Andrzej Siewior Feb. 10, 2022, 12:22 p.m. UTC | #5
On 2022-02-07 08:47:17 [-0800], Jakub Kicinski wrote:
> On Sat, 5 Feb 2022 21:36:05 +0100 Sebastian Andrzej Siewior wrote:
> > On 2022-02-04 20:17:15 [-0800], Jakub Kicinski wrote:
> > > On Fri,  4 Feb 2022 21:12:58 +0100 Sebastian Andrzej Siewior wrote:  
> > > > +int __netif_rx(struct sk_buff *skb)
> > > > +{
> > > > +	int ret;
> > > > +
> > > > +	trace_netif_rx_entry(skb);
> > > > +	ret = netif_rx_internal(skb);
> > > > +	trace_netif_rx_exit(ret);
> > > > +	return ret;
> > > > +}  
> > > 
> > > Any reason this is not exported? I don't think there's anything wrong
> > > with drivers calling this function, especially SW drivers which already
> > > know to be in BH. I'd vote for roughly all of $(ls drivers/net/*.c) to
> > > get the same treatment as loopback.  
> > 
> > Don't we end up in the same situation as netif_rx() vs netif_rx_ni()?
> 
> Sort of. TBH my understanding of the motivation is a bit vague.
> IIUC you want to reduce the API duplication so drivers know what to
> do[1]. I believe the quote from Eric you put in the commit message
> pertains to HW devices, where using netif_rx() is quite anachronistic. 
> But software devices like loopback, veth or tunnels may want to go via
> backlog for good reasons. Would it make it better if we called
> netif_rx() netif_rx_backlog() instead? Or am I missing the point?

So we do netif_rx_backlog() with the bh disable+enable and
__netif_rx_backlog() without it and export both tree wide? It would make
it more obvious indeed. Could we add
	WARN_ON_ONCE(!(hardirq_count() | softirq_count()))
to the shortcut to catch the "you did it wrong folks"? This costs me
about 2ns.

TL;DR

netif_rx_ni() is problematic on RT and I tried to do something about
it. I remembered from the in_atomic() cleanup that a few drivers got it
wrong (one way or another). We also added netif_rx_any_context(), which
is used by some of the drivers (yet another entry point) while the few
others got fixed.
Then I stumbled over the thread where the entry point (netif_rx() vs
netif_rx_ni()) was wrong and Dave suggested having one entry point for
them all. This sounded like a good idea since it would eliminate the
several API entry points where things can go wrong and my RT trouble
would vanish in one go.
The deprecation part looked promising but I didn't take into account
that the overhead for legitimate users (like the backlog or the software
tunnels you mention) is not acceptable.

Sebastian
Jakub Kicinski Feb. 10, 2022, 6:13 p.m. UTC | #6
On Thu, 10 Feb 2022 13:22:32 +0100 Sebastian Andrzej Siewior wrote:
> On 2022-02-07 08:47:17 [-0800], Jakub Kicinski wrote:
> > On Sat, 5 Feb 2022 21:36:05 +0100 Sebastian Andrzej Siewior wrote:  
> > > Don't we end up in the same situation as netif_rx() vs netif_rx_ni()?
> > 
> > Sort of. TBH my understanding of the motivation is a bit vague.
> > IIUC you want to reduce the API duplication so drivers know what to
> > do[1]. I believe the quote from Eric you put in the commit message
> > pertains to HW devices, where using netif_rx() is quite anachronistic. 
> > But software devices like loopback, veth or tunnels may want to go via
> > backlog for good reasons. Would it make it better if we called
> > netif_rx() netif_rx_backlog() instead? Or am I missing the point?  
> 
> So we do netif_rx_backlog() with the bh disable+enable and
> __netif_rx_backlog() without it and export both tree wide?

At a risk of confusing people about the API we could also name the
"non-super-optimized" version netif_rx(), like you had in your patch.
Grepping thru the drivers there's ~250 uses so maybe we don't wanna
touch all that code. No strong preference, I just didn't expect to 
see __netif_rx_backlog(), but either way works.

> It would make it more obvious indeed. Could we add
> 	WARN_ON_ONCE(!(hardirq_count() | softirq_count()))
> to the shortcut to catch the "you did it wrong folks"? This costs me
> about 2ns.

Modulo lockdep_..(), so we don't have to run this check on prod kernels?

> TL;DR
> 
> netif_rx_ni() is problematic on RT and I tried to do something about
> it. I remembered from the in_atomic() cleanup that a few drivers got it
> wrong (one way or another). We also added netif_rx_any_context(), which
> is used by some of the drivers (yet another entry point) while the few
> others got fixed.
> Then I stumbled over the thread where the entry point (netif_rx() vs
> netif_rx_ni()) was wrong and Dave suggested having one entry point for
> them all. This sounded like a good idea since it would eliminate the
> several API entry points where things can go wrong and my RT trouble
> would vanish in one go.
> The deprecation part looked promising but I didn't take into account
> that the overhead for legitimate users (like the backlog or the software
> tunnels you mention) is not acceptable.

I see. So IIUC the primary motivation is replacing preempt-disable with
BH-disable, but the cleanup seemed like a good idea.
Sebastian Andrzej Siewior Feb. 10, 2022, 7:52 p.m. UTC | #7
On 2022-02-10 10:13:30 [-0800], Jakub Kicinski wrote:
> > So we do netif_rx_backlog() with the bh disable+enable and
> > __netif_rx_backlog() without it and export both tree wide?
> 
> At a risk of confusing people about the API we could also name the
> "non-super-optimized" version netif_rx(), like you had in your patch.
> Grepping thru the drivers there's ~250 uses so maybe we don't wanna
> touch all that code. No strong preference, I just didn't expect to 
> see __netif_rx_backlog(), but either way works.

So let me keep the naming as-is, export __netif_rx() and update the
kernel doc with the bits about backlog.
After that, if we are up for renaming the function in ~250 drivers, it
should be simpler.

> > It would make it more obvious indeed. Could we add
> > 	WARN_ON_ONCE(!(hardirq_count() | softirq_count()))
> > to the shortcut to catch the "you did it wrong folks"? This costs me
> > about 2ns.
> 
> Modulo lockdep_..(), so we don't have to run this check on prod kernels?

I was worried a little about the corner cases but then lockdep is your
friend and you should test your code. Okay.

Sebastian
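
For reference, a sketch of where the thread ended up: __netif_rx() exported,
with the context check behind lockdep so production kernels do not pay for it.
The use of lockdep_assert_once() and the exact placement are assumptions based
on this discussion, not part of the v2 patch below:

	int __netif_rx(struct sk_buff *skb)
	{
		int ret;

		/* Complain once, on lockdep-enabled kernels only, if this
		 * shortcut is used from a context that is neither hardirq
		 * nor BH/softirq.
		 */
		lockdep_assert_once(hardirq_count() | softirq_count());

		trace_netif_rx_entry(skb);
		ret = netif_rx_internal(skb);
		trace_netif_rx_exit(ret);
		return ret;
	}
	EXPORT_SYMBOL(__netif_rx);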

Patch

diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
index ed0edf5884ef8..77f5b564382b6 100644
--- a/drivers/net/loopback.c
+++ b/drivers/net/loopback.c
@@ -86,7 +86,7 @@  static netdev_tx_t loopback_xmit(struct sk_buff *skb,
 	skb->protocol = eth_type_trans(skb, dev);
 
 	len = skb->len;
-	if (likely(netif_rx(skb) == NET_RX_SUCCESS))
+	if (likely(__netif_rx(skb) == NET_RX_SUCCESS))
 		dev_lstats_add(dev, len);
 
 	return NETDEV_TX_OK;
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index e490b84732d16..c9e883104adb1 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3669,8 +3669,18 @@  u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
 void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
 int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb);
 int netif_rx(struct sk_buff *skb);
-int netif_rx_ni(struct sk_buff *skb);
-int netif_rx_any_context(struct sk_buff *skb);
+int __netif_rx(struct sk_buff *skb);
+
+static inline int netif_rx_ni(struct sk_buff *skb)
+{
+	return netif_rx(skb);
+}
+
+static inline int netif_rx_any_context(struct sk_buff *skb)
+{
+	return netif_rx(skb);
+}
+
 int netif_receive_skb(struct sk_buff *skb);
 int netif_receive_skb_core(struct sk_buff *skb);
 void netif_receive_skb_list_internal(struct list_head *head);
diff --git a/include/trace/events/net.h b/include/trace/events/net.h
index 78c448c6ab4c5..032b431b987b6 100644
--- a/include/trace/events/net.h
+++ b/include/trace/events/net.h
@@ -260,13 +260,6 @@  DEFINE_EVENT(net_dev_rx_verbose_template, netif_rx_entry,
 	TP_ARGS(skb)
 );
 
-DEFINE_EVENT(net_dev_rx_verbose_template, netif_rx_ni_entry,
-
-	TP_PROTO(const struct sk_buff *skb),
-
-	TP_ARGS(skb)
-);
-
 DECLARE_EVENT_CLASS(net_dev_rx_exit_template,
 
 	TP_PROTO(int ret),
@@ -312,13 +305,6 @@  DEFINE_EVENT(net_dev_rx_exit_template, netif_rx_exit,
 	TP_ARGS(ret)
 );
 
-DEFINE_EVENT(net_dev_rx_exit_template, netif_rx_ni_exit,
-
-	TP_PROTO(int ret),
-
-	TP_ARGS(ret)
-);
-
 DEFINE_EVENT(net_dev_rx_exit_template, netif_receive_skb_list_exit,
 
 	TP_PROTO(int ret),
diff --git a/net/core/dev.c b/net/core/dev.c
index 0d13340ed4054..f34a8f3a448a7 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4815,6 +4815,16 @@  static int netif_rx_internal(struct sk_buff *skb)
 	return ret;
 }
 
+int __netif_rx(struct sk_buff *skb)
+{
+	int ret;
+
+	trace_netif_rx_entry(skb);
+	ret = netif_rx_internal(skb);
+	trace_netif_rx_exit(ret);
+	return ret;
+}
+
 /**
  *	netif_rx	-	post buffer to the network code
  *	@skb: buffer to post
@@ -4823,58 +4833,25 @@  static int netif_rx_internal(struct sk_buff *skb)
  *	the upper (protocol) levels to process.  It always succeeds. The buffer
  *	may be dropped during processing for congestion control or by the
  *	protocol layers.
+ *	This interface is considered legacy. Modern NIC driver should use NAPI
+ *	and GRO.
  *
  *	return values:
  *	NET_RX_SUCCESS	(no congestion)
  *	NET_RX_DROP     (packet was dropped)
  *
  */
-
 int netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	trace_netif_rx_entry(skb);
-
-	ret = netif_rx_internal(skb);
-	trace_netif_rx_exit(ret);
-
+	local_bh_disable();
+	ret = __netif_rx(skb);
+	local_bh_enable();
 	return ret;
 }
 EXPORT_SYMBOL(netif_rx);
 
-int netif_rx_ni(struct sk_buff *skb)
-{
-	int err;
-
-	trace_netif_rx_ni_entry(skb);
-
-	preempt_disable();
-	err = netif_rx_internal(skb);
-	if (local_softirq_pending())
-		do_softirq();
-	preempt_enable();
-	trace_netif_rx_ni_exit(err);
-
-	return err;
-}
-EXPORT_SYMBOL(netif_rx_ni);
-
-int netif_rx_any_context(struct sk_buff *skb)
-{
-	/*
-	 * If invoked from contexts which do not invoke bottom half
-	 * processing either at return from interrupt or when softrqs are
-	 * reenabled, use netif_rx_ni() which invokes bottomhalf processing
-	 * directly.
-	 */
-	if (in_interrupt())
-		return netif_rx(skb);
-	else
-		return netif_rx_ni(skb);
-}
-EXPORT_SYMBOL(netif_rx_any_context);
-
 static __latent_entropy void net_tx_action(struct softirq_action *h)
 {
 	struct softnet_data *sd = this_cpu_ptr(&softnet_data);