
[net-next,16/24] net: netkit, veth, tun, virt*: Use nested-BH locking for XDP redirect.

Message ID 20231215171020.687342-17-bigeasy@linutronix.de (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series locking: Introduce nested-BH locking.

Checks

Context Check Description
netdev/series_format fail Series longer than 15 patches (and no cover letter)
netdev/tree_selection success Clearly marked for net-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit fail Errors and warnings before: 1117 this patch: 1126
netdev/cc_maintainers warning 3 maintainers not CCed: oleksandr_tyshchenko@epam.com linux-hyperv@vger.kernel.org jasowang@redhat.com
netdev/build_clang fail Errors and warnings before: 1143 this patch: 19
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 1146 this patch: 1157
netdev/checkpatch warning WARNING: line length of 81 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Sebastian Andrzej Siewior Dec. 15, 2023, 5:07 p.m. UTC
The per-CPU variables used during bpf_prog_run_xdp() invocation and
later during xdp_do_redirect() rely on disabled BH for their protection.
Since local_bh_disable() does not act as a lock on PREEMPT_RT, these
data structures require explicit locking.

This is a follow-up to the previous change which introduced
bpf_run_lock.redirect_lock; this patch now uses that lock within the
drivers.
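
For context, bpf_run_lock is the per-CPU lock added by that earlier patch;
the rough shape it takes is sketched below. The exact header placement and
initialization follow that patch, and are reproduced here only as an
assumption, to make the guard() users in the hunks below easier to read:

	struct bpf_run_lock {
		local_lock_t redirect_lock;
	};

	DECLARE_PER_CPU(struct bpf_run_lock, bpf_run_lock);
	/* ... defined once, e.g.:
	 * DEFINE_PER_CPU(struct bpf_run_lock, bpf_run_lock) = {
	 *	.redirect_lock = INIT_LOCAL_LOCK(redirect_lock),
	 * };
	 */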

The simple approach is to acquire the lock before bpf_prog_run_xdp() is
invoked and hold it until the end of the function.
This does not always work because some drivers (cpsw, atlantic) invoke
xdp_do_flush() in the same context.
Acquiring the lock in bpf_prog_run_xdp() and dropping it in
xdp_do_redirect() (without touching the drivers) does not work either,
because not all drivers that use bpf_prog_run_xdp() support XDP_REDIRECT
(and therefore never invoke xdp_do_redirect()).

Ideally, the locked section would be kept minimal: bpf_prog_run_xdp()
plus xdp_do_redirect(), with everything else (error recovery, DMA
unmapping, memory allocation/freeing, …) happening outside of it.
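
A minimal sketch of that pattern, roughly the shape the veth and tun hunks
below take (the surrounding variable names are placeholders rather than
taken from any one driver):

	/* Sketch: keep the nested-BH lock scoped to the BPF run plus the
	 * redirect; error recovery and unmapping stay outside of it.
	 */
	scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
		act = bpf_prog_run_xdp(xdp_prog, xdp);
		if (act == XDP_REDIRECT &&
		    xdp_do_redirect(dev, xdp, xdp_prog))
			act = XDP_DROP;	/* placeholder error handling */
	}
	/* DMA unmapping, page freeing, stats updates, ... run unlocked. */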

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Nikolay Aleksandrov <razor@blackwall.org>
Cc: Song Liu <song@kernel.org>
Cc: Stanislav Fomichev <sdf@google.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: Yonghong Song <yonghong.song@linux.dev>
Cc: bpf@vger.kernel.org
Cc: virtualization@lists.linux.dev
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/net/hyperv/netvsc_bpf.c |  1 +
 drivers/net/netkit.c            | 13 +++++++----
 drivers/net/tun.c               | 28 +++++++++++++----------
 drivers/net/veth.c              | 40 ++++++++++++++++++++-------------
 drivers/net/virtio_net.c        |  1 +
 drivers/net/xen-netfront.c      |  1 +
 6 files changed, 52 insertions(+), 32 deletions(-)

Comments

kernel test robot Dec. 16, 2023, 7:28 p.m. UTC | #1
Hi Sebastian,

kernel test robot noticed the following build errors:

[auto build test ERROR on net-next/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Sebastian-Andrzej-Siewior/locking-local_lock-Introduce-guard-definition-for-local_lock/20231216-011911
base:   net-next/main
patch link:    https://lore.kernel.org/r/20231215171020.687342-17-bigeasy%40linutronix.de
patch subject: [PATCH net-next 16/24] net: netkit, veth, tun, virt*: Use nested-BH locking for XDP redirect.
config: x86_64-rhel-8.3-bpf (https://download.01.org/0day-ci/archive/20231217/202312170350.n7ssgNDP-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231217/202312170350.n7ssgNDP-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202312170350.n7ssgNDP-lkp@intel.com/

All errors (new ones prefixed by >>):

>> drivers/net/hyperv/netvsc_bpf.c:53:3: error: cannot jump from this goto statement to its label
                   goto out;
                   ^
   drivers/net/hyperv/netvsc_bpf.c:61:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
           guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
           ^
   include/linux/cleanup.h:142:15: note: expanded from macro 'guard'
           CLASS(_name, __UNIQUE_ID(guard))
                        ^
   include/linux/compiler.h:180:29: note: expanded from macro '__UNIQUE_ID'
   #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
                               ^
   include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
   #define __PASTE(a,b) ___PASTE(a,b)
                        ^
   include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
   #define ___PASTE(a,b) a##b
                         ^
   <scratch space>:81:1: note: expanded from here
   __UNIQUE_ID_guard635
   ^
   drivers/net/hyperv/netvsc_bpf.c:46:3: error: cannot jump from this goto statement to its label
                   goto out;
                   ^
   drivers/net/hyperv/netvsc_bpf.c:61:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
           guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
           ^
   include/linux/cleanup.h:142:15: note: expanded from macro 'guard'
           CLASS(_name, __UNIQUE_ID(guard))
                        ^
   include/linux/compiler.h:180:29: note: expanded from macro '__UNIQUE_ID'
   #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
                               ^
   include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
   #define __PASTE(a,b) ___PASTE(a,b)
                        ^
   include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
   #define ___PASTE(a,b) a##b
                         ^
   <scratch space>:81:1: note: expanded from here
   __UNIQUE_ID_guard635
   ^
   drivers/net/hyperv/netvsc_bpf.c:41:3: error: cannot jump from this goto statement to its label
                   goto out;
                   ^
   drivers/net/hyperv/netvsc_bpf.c:61:2: note: jump bypasses initialization of variable with __attribute__((cleanup))
           guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
           ^
   include/linux/cleanup.h:142:15: note: expanded from macro 'guard'
           CLASS(_name, __UNIQUE_ID(guard))
                        ^
   include/linux/compiler.h:180:29: note: expanded from macro '__UNIQUE_ID'
   #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
                               ^
   include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE'
   #define __PASTE(a,b) ___PASTE(a,b)
                        ^
   include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE'
   #define ___PASTE(a,b) a##b
                         ^
   <scratch space>:81:1: note: expanded from here
   __UNIQUE_ID_guard635
   ^
   3 errors generated.


vim +53 drivers/net/hyperv/netvsc_bpf.c

351e1581395fcc Haiyang Zhang             2020-01-23   23  
351e1581395fcc Haiyang Zhang             2020-01-23   24  u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
351e1581395fcc Haiyang Zhang             2020-01-23   25  		   struct xdp_buff *xdp)
351e1581395fcc Haiyang Zhang             2020-01-23   26  {
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   27  	struct netvsc_stats_rx *rx_stats = &nvchan->rx_stats;
351e1581395fcc Haiyang Zhang             2020-01-23   28  	void *data = nvchan->rsc.data[0];
351e1581395fcc Haiyang Zhang             2020-01-23   29  	u32 len = nvchan->rsc.len[0];
351e1581395fcc Haiyang Zhang             2020-01-23   30  	struct page *page = NULL;
351e1581395fcc Haiyang Zhang             2020-01-23   31  	struct bpf_prog *prog;
351e1581395fcc Haiyang Zhang             2020-01-23   32  	u32 act = XDP_PASS;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   33  	bool drop = true;
351e1581395fcc Haiyang Zhang             2020-01-23   34  
351e1581395fcc Haiyang Zhang             2020-01-23   35  	xdp->data_hard_start = NULL;
351e1581395fcc Haiyang Zhang             2020-01-23   36  
351e1581395fcc Haiyang Zhang             2020-01-23   37  	rcu_read_lock();
351e1581395fcc Haiyang Zhang             2020-01-23   38  	prog = rcu_dereference(nvchan->bpf_prog);
351e1581395fcc Haiyang Zhang             2020-01-23   39  
351e1581395fcc Haiyang Zhang             2020-01-23   40  	if (!prog)
351e1581395fcc Haiyang Zhang             2020-01-23   41  		goto out;
351e1581395fcc Haiyang Zhang             2020-01-23   42  
505e3f00c3f364 Andrea Parri (Microsoft   2021-01-14   43) 	/* Ensure that the below memcpy() won't overflow the page buffer. */
505e3f00c3f364 Andrea Parri (Microsoft   2021-01-14   44) 	if (len > ndev->mtu + ETH_HLEN) {
505e3f00c3f364 Andrea Parri (Microsoft   2021-01-14   45) 		act = XDP_DROP;
505e3f00c3f364 Andrea Parri (Microsoft   2021-01-14   46) 		goto out;
505e3f00c3f364 Andrea Parri (Microsoft   2021-01-14   47) 	}
505e3f00c3f364 Andrea Parri (Microsoft   2021-01-14   48) 
351e1581395fcc Haiyang Zhang             2020-01-23   49  	/* allocate page buffer for data */
351e1581395fcc Haiyang Zhang             2020-01-23   50  	page = alloc_page(GFP_ATOMIC);
351e1581395fcc Haiyang Zhang             2020-01-23   51  	if (!page) {
351e1581395fcc Haiyang Zhang             2020-01-23   52  		act = XDP_DROP;
351e1581395fcc Haiyang Zhang             2020-01-23  @53  		goto out;
351e1581395fcc Haiyang Zhang             2020-01-23   54  	}
351e1581395fcc Haiyang Zhang             2020-01-23   55  
43b5169d8355cc Lorenzo Bianconi          2020-12-22   56  	xdp_init_buff(xdp, PAGE_SIZE, &nvchan->xdp_rxq);
be9df4aff65f18 Lorenzo Bianconi          2020-12-22   57  	xdp_prepare_buff(xdp, page_address(page), NETVSC_XDP_HDRM, len, false);
351e1581395fcc Haiyang Zhang             2020-01-23   58  
351e1581395fcc Haiyang Zhang             2020-01-23   59  	memcpy(xdp->data, data, len);
351e1581395fcc Haiyang Zhang             2020-01-23   60  
31dbfc0f055c7d Sebastian Andrzej Siewior 2023-12-15   61  	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
351e1581395fcc Haiyang Zhang             2020-01-23   62  	act = bpf_prog_run_xdp(prog, xdp);
351e1581395fcc Haiyang Zhang             2020-01-23   63  
351e1581395fcc Haiyang Zhang             2020-01-23   64  	switch (act) {
351e1581395fcc Haiyang Zhang             2020-01-23   65  	case XDP_PASS:
351e1581395fcc Haiyang Zhang             2020-01-23   66  	case XDP_TX:
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   67  		drop = false;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   68  		break;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   69  
351e1581395fcc Haiyang Zhang             2020-01-23   70  	case XDP_DROP:
351e1581395fcc Haiyang Zhang             2020-01-23   71  		break;
351e1581395fcc Haiyang Zhang             2020-01-23   72  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   73  	case XDP_REDIRECT:
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   74  		if (!xdp_do_redirect(ndev, xdp, prog)) {
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   75  			nvchan->xdp_flush = true;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   76  			drop = false;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   77  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   78  			u64_stats_update_begin(&rx_stats->syncp);
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   79  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   80  			rx_stats->xdp_redirect++;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   81  			rx_stats->packets++;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   82  			rx_stats->bytes += nvchan->rsc.pktlen;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   83  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   84  			u64_stats_update_end(&rx_stats->syncp);
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   85  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   86  			break;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   87  		} else {
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   88  			u64_stats_update_begin(&rx_stats->syncp);
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   89  			rx_stats->xdp_drop++;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   90  			u64_stats_update_end(&rx_stats->syncp);
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   91  		}
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   92  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   93  		fallthrough;
1cb9d3b6185b2a Haiyang Zhang             2022-04-07   94  
351e1581395fcc Haiyang Zhang             2020-01-23   95  	case XDP_ABORTED:
351e1581395fcc Haiyang Zhang             2020-01-23   96  		trace_xdp_exception(ndev, prog, act);
351e1581395fcc Haiyang Zhang             2020-01-23   97  		break;
351e1581395fcc Haiyang Zhang             2020-01-23   98  
351e1581395fcc Haiyang Zhang             2020-01-23   99  	default:
c8064e5b4adac5 Paolo Abeni               2021-11-30  100  		bpf_warn_invalid_xdp_action(ndev, prog, act);
351e1581395fcc Haiyang Zhang             2020-01-23  101  	}
351e1581395fcc Haiyang Zhang             2020-01-23  102  
351e1581395fcc Haiyang Zhang             2020-01-23  103  out:
351e1581395fcc Haiyang Zhang             2020-01-23  104  	rcu_read_unlock();
351e1581395fcc Haiyang Zhang             2020-01-23  105  
1cb9d3b6185b2a Haiyang Zhang             2022-04-07  106  	if (page && drop) {
351e1581395fcc Haiyang Zhang             2020-01-23  107  		__free_page(page);
351e1581395fcc Haiyang Zhang             2020-01-23  108  		xdp->data_hard_start = NULL;
351e1581395fcc Haiyang Zhang             2020-01-23  109  	}
351e1581395fcc Haiyang Zhang             2020-01-23  110  
351e1581395fcc Haiyang Zhang             2020-01-23  111  	return act;
351e1581395fcc Haiyang Zhang             2020-01-23  112  }
351e1581395fcc Haiyang Zhang             2020-01-23  113
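
The clang errors above come from the earlier goto statements jumping across
the guard() declaration: guard() creates a variable with
__attribute__((cleanup)), and a goto must not bypass its initialization
while the out: label remains in the same scope. A sketch of one way to
avoid this (not necessarily the fix that was posted) is to use
scoped_guard() so the lock's scope closes before the label:

	memcpy(xdp->data, data, len);

	/* Sketch: scoped_guard() confines the cleanup variable to its own
	 * block, so the "goto out" statements earlier in the function no
	 * longer jump across its initialization.
	 */
	scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
		act = bpf_prog_run_xdp(prog, xdp);
		/* switch (act) handling, including xdp_do_redirect(),
		 * stays inside this scope.
		 */
	}

out:
	rcu_read_unlock();
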
Daniel Borkmann Dec. 18, 2023, 8:52 a.m. UTC | #2
Hi Sebastian,

On 12/15/23 6:07 PM, Sebastian Andrzej Siewior wrote:
> The per-CPU variables used during bpf_prog_run_xdp() invocation and
> later during xdp_do_redirect() rely on disabled BH for their protection.
> Since local_bh_disable() does not act as a lock on PREEMPT_RT, these
> data structures require explicit locking.
> 
> This is a follow-up to the previous change which introduced
> bpf_run_lock.redirect_lock; this patch now uses that lock within the
> drivers.
> 
> The simple approach is to acquire the lock before bpf_prog_run_xdp() is
> invoked and hold it until the end of the function.
> This does not always work because some drivers (cpsw, atlantic) invoke
> xdp_do_flush() in the same context.
> Acquiring the lock in bpf_prog_run_xdp() and dropping it in
> xdp_do_redirect() (without touching the drivers) does not work either,
> because not all drivers that use bpf_prog_run_xdp() support XDP_REDIRECT
> (and therefore never invoke xdp_do_redirect()).
> 
> Ideally, the locked section would be kept minimal: bpf_prog_run_xdp()
> plus xdp_do_redirect(), with everything else (error recovery, DMA
> unmapping, memory allocation/freeing, …) happening outside of it.
[...]

>   drivers/net/hyperv/netvsc_bpf.c |  1 +
>   drivers/net/netkit.c            | 13 +++++++----
>   drivers/net/tun.c               | 28 +++++++++++++----------
>   drivers/net/veth.c              | 40 ++++++++++++++++++++-------------
>   drivers/net/virtio_net.c        |  1 +
>   drivers/net/xen-netfront.c      |  1 +
>   6 files changed, 52 insertions(+), 32 deletions(-)
[...]

Please exclude netkit from this set given it does not support XDP, but
instead only accepts tc BPF typed programs.

Thanks,
Daniel

> diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
> index 39171380ccf29..fbcf78477bda8 100644
> --- a/drivers/net/netkit.c
> +++ b/drivers/net/netkit.c
> @@ -80,8 +80,15 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
>   	netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)));
>   	skb->dev = peer;
>   	entry = rcu_dereference(nk->active);
> -	if (entry)
> -		ret = netkit_run(entry, skb, ret);
> +	if (entry) {
> +		scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
> +			ret = netkit_run(entry, skb, ret);
> +			if (ret == NETKIT_REDIRECT) {
> +				dev_sw_netstats_tx_add(dev, 1, len);
> +				skb_do_redirect(skb);
> +			}
> +		}
> +	}
>   	switch (ret) {
>   	case NETKIT_NEXT:
>   	case NETKIT_PASS:
> @@ -95,8 +102,6 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
>   		}
>   		break;
>   	case NETKIT_REDIRECT:
> -		dev_sw_netstats_tx_add(dev, 1, len);
> -		skb_do_redirect(skb);
>   		break;
>   	case NETKIT_DROP:
>   	default:
Sebastian Andrzej Siewior Jan. 12, 2024, 3:37 p.m. UTC | #3
On 2023-12-18 09:52:05 [+0100], Daniel Borkmann wrote:
> Hi Sebastian,
Hi Daniel,

> Please exclude netkit from this set given it does not support XDP, but
> instead only accepts tc BPF typed programs.

okay, thank you.

> Thanks,
> Daniel

Sebastian

Patch

diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index 4a9522689fa4f..55f8ca92ca199 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -58,6 +58,7 @@  u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
 
 	memcpy(xdp->data, data, len);
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	act = bpf_prog_run_xdp(prog, xdp);
 
 	switch (act) {
diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
index 39171380ccf29..fbcf78477bda8 100644
--- a/drivers/net/netkit.c
+++ b/drivers/net/netkit.c
@@ -80,8 +80,15 @@  static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
 	netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)));
 	skb->dev = peer;
 	entry = rcu_dereference(nk->active);
-	if (entry)
-		ret = netkit_run(entry, skb, ret);
+	if (entry) {
+		scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
+			ret = netkit_run(entry, skb, ret);
+			if (ret == NETKIT_REDIRECT) {
+				dev_sw_netstats_tx_add(dev, 1, len);
+				skb_do_redirect(skb);
+			}
+		}
+	}
 	switch (ret) {
 	case NETKIT_NEXT:
 	case NETKIT_PASS:
@@ -95,8 +102,6 @@  static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
 		}
 		break;
 	case NETKIT_REDIRECT:
-		dev_sw_netstats_tx_add(dev, 1, len);
-		skb_do_redirect(skb);
 		break;
 	case NETKIT_DROP:
 	default:
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index afa5497f7c35c..fe0d31f11e4b6 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1708,16 +1708,18 @@  static struct sk_buff *tun_build_skb(struct tun_struct *tun,
 		xdp_init_buff(&xdp, buflen, &tfile->xdp_rxq);
 		xdp_prepare_buff(&xdp, buf, pad, len, false);
 
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		if (act == XDP_REDIRECT || act == XDP_TX) {
-			get_page(alloc_frag->page);
-			alloc_frag->offset += buflen;
-		}
-		err = tun_xdp_act(tun, xdp_prog, &xdp, act);
-		if (err < 0) {
-			if (act == XDP_REDIRECT || act == XDP_TX)
-				put_page(alloc_frag->page);
-			goto out;
+		scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
+			act = bpf_prog_run_xdp(xdp_prog, &xdp);
+			if (act == XDP_REDIRECT || act == XDP_TX) {
+				get_page(alloc_frag->page);
+				alloc_frag->offset += buflen;
+			}
+			err = tun_xdp_act(tun, xdp_prog, &xdp, act);
+			if (err < 0) {
+				if (act == XDP_REDIRECT || act == XDP_TX)
+					put_page(alloc_frag->page);
+				goto out;
+			}
 		}
 
 		if (err == XDP_REDIRECT)
@@ -2460,8 +2462,10 @@  static int tun_xdp_one(struct tun_struct *tun,
 		xdp_init_buff(xdp, buflen, &tfile->xdp_rxq);
 		xdp_set_data_meta_invalid(xdp);
 
-		act = bpf_prog_run_xdp(xdp_prog, xdp);
-		ret = tun_xdp_act(tun, xdp_prog, xdp, act);
+		scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
+			act = bpf_prog_run_xdp(xdp_prog, xdp);
+			ret = tun_xdp_act(tun, xdp_prog, xdp, act);
+		}
 		if (ret < 0) {
 			put_page(virt_to_head_page(xdp->data));
 			return ret;
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 977861c46b1fe..c69e5ff9f8795 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -624,7 +624,18 @@  static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
 		xdp->rxq = &rq->xdp_rxq;
 		vxbuf.skb = NULL;
 
-		act = bpf_prog_run_xdp(xdp_prog, xdp);
+		scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
+			act = bpf_prog_run_xdp(xdp_prog, xdp);
+			if (act == XDP_REDIRECT) {
+				orig_frame = *frame;
+				xdp->rxq->mem = frame->mem;
+				if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
+					frame = &orig_frame;
+					stats->xdp_drops++;
+					goto err_xdp;
+				}
+			}
+		}
 
 		switch (act) {
 		case XDP_PASS:
@@ -644,13 +655,6 @@  static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
 			rcu_read_unlock();
 			goto xdp_xmit;
 		case XDP_REDIRECT:
-			orig_frame = *frame;
-			xdp->rxq->mem = frame->mem;
-			if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
-				frame = &orig_frame;
-				stats->rx_drops++;
-				goto err_xdp;
-			}
 			stats->xdp_redirect++;
 			rcu_read_unlock();
 			goto xdp_xmit;
@@ -857,7 +861,18 @@  static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	orig_data = xdp->data;
 	orig_data_end = xdp->data_end;
 
-	act = bpf_prog_run_xdp(xdp_prog, xdp);
+	scoped_guard(local_lock_nested_bh, &bpf_run_lock.redirect_lock) {
+		act = bpf_prog_run_xdp(xdp_prog, xdp);
+		if (act == XDP_REDIRECT) {
+			veth_xdp_get(xdp);
+			consume_skb(skb);
+			xdp->rxq->mem = rq->xdp_mem;
+			if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
+				stats->rx_drops++;
+				goto err_xdp;
+			}
+		}
+	}
 
 	switch (act) {
 	case XDP_PASS:
@@ -875,13 +890,6 @@  static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		rcu_read_unlock();
 		goto xdp_xmit;
 	case XDP_REDIRECT:
-		veth_xdp_get(xdp);
-		consume_skb(skb);
-		xdp->rxq->mem = rq->xdp_mem;
-		if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
-			stats->rx_drops++;
-			goto err_xdp;
-		}
 		stats->xdp_redirect++;
 		rcu_read_unlock();
 		goto xdp_xmit;
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d16f592c2061f..5e362c4604239 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1010,6 +1010,7 @@  static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 	int err;
 	u32 act;
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 	u64_stats_inc(&stats->xdp_packets);
 
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index ad29f370034e4..e3daa8cdeb84e 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -978,6 +978,7 @@  static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
 	xdp_prepare_buff(xdp, page_address(pdata), XDP_PACKET_HEADROOM,
 			 len, false);
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	act = bpf_prog_run_xdp(prog, xdp);
 	switch (act) {
 	case XDP_TX: