[net-next,v21] net: refactor ->ndo_bpf calls into dev_xdp_propagate

Message ID 20240821045629.2856641-1-almasrymina@google.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series [net-next,v21] net: refactor ->ndo_bpf calls into dev_xdp_propagate

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 54 this patch: 54
netdev/build_tools success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers success CCed 29 of 29 maintainers
netdev/build_clang success Errors and warnings before: 100 this patch: 100
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 4065 this patch: 4065
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 86 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 114 this patch: 114
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-08-21--09-00 (tests: 711)

Commit Message

Mina Almasry Aug. 21, 2024, 4:56 a.m. UTC
When net devices propagate XDP configurations to slave devices, or when
the core propagates an XDP configuration to a device, we will need to
perform a memory provider check to ensure we're not binding XDP to a
device using unreadable netmem.

Currently the ->ndo_bpf calls are scattered all over the place, and
adding this check to every call site would not be ideal.

Refactor all the ->ndo_bpf calls into one helper, dev_xdp_propagate(),
where the check can be added in the future.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Mina Almasry <almasrymina@google.com>
---
 drivers/net/bonding/bond_main.c | 8 ++++----
 drivers/net/hyperv/netvsc_bpf.c | 2 +-
 include/linux/netdevice.h       | 1 +
 kernel/bpf/offload.c            | 2 +-
 net/core/dev.c                  | 9 +++++++++
 net/xdp/xsk_buff_pool.c         | 4 ++--
 6 files changed, 18 insertions(+), 8 deletions(-)
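
For illustration, a rough sketch of where the future memory provider
check could slot into the new helper; dev_get_min_mp_channel_count() is
a hypothetical name, not something introduced by this patch:

int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf)
{
	if (!dev->netdev_ops->ndo_bpf)
		return -EOPNOTSUPP;

	/* hypothetical future check: refuse to bind XDP to a device
	 * whose queues are backed by unreadable netmem
	 */
	if (dev_get_min_mp_channel_count(dev))
		return -EBUSY;

	return dev->netdev_ops->ndo_bpf(dev, bpf);
}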

Comments

Mina Almasry Aug. 21, 2024, 5:24 a.m. UTC | #1
On Wed, Aug 21, 2024 at 12:56 AM Mina Almasry <almasrymina@google.com> wrote:
>
> When net devices propagate XDP configurations to slave devices, or when
> the core propagates an XDP configuration to a device, we will need to
> perform a memory provider check to ensure we're not binding XDP to a
> device using unreadable netmem.
>
> Currently the ->ndo_bpf calls are scattered all over the place, and
> adding this check to every call site would not be ideal.
>
> Refactor all the ->ndo_bpf calls into one helper, dev_xdp_propagate(),
> where the check can be added in the future.
>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Mina Almasry <almasrymina@google.com>

Sorry, the patch title should just be:

[PATCH net-next v1] net: refactor ->ndo_bpf calls into dev_xdp_propagate

v1, not v21.
Jakub Kicinski Aug. 21, 2024, 11:43 p.m. UTC | #2
On Wed, 21 Aug 2024 04:56:29 +0000 Mina Almasry wrote:
> When net devices propagate XDP configurations to slave devices, or when
> the core propagates an XDP configuration to a device, we will need to
> perform a memory provider check to ensure we're not binding XDP to a
> device using unreadable netmem.
>
> Currently the ->ndo_bpf calls are scattered all over the place, and
> adding this check to every call site would not be ideal.
>
> Refactor all the ->ndo_bpf calls into one helper, dev_xdp_propagate(),
> where the check can be added in the future.
> 
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Mina Almasry <almasrymina@google.com>

> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index f9633a6f8571..73f9416c6c1b 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -2258,7 +2258,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
>  			goto err_sysfs_del;
>  		}
>  
> -		res = slave_dev->netdev_ops->ndo_bpf(slave_dev, &xdp);
> +		res = dev_xdp_propagate(slave_dev, &xdp);

I was hoping we could fold in the "is there any program present
already" check, but I'm not sure that check itself isn't buggy... so
let's leave that part to someone else.
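
For reference, a rough sketch of what folding that check into
dev_xdp_propagate() could look like, should someone pick it up
(hypothetical, deliberately not in this patch):

	/* reject attach if a program is already present, instead of
	 * callers like bond open-coding it via dev_xdp_prog_count()
	 */
	if (bpf->command == XDP_SETUP_PROG && bpf->prog &&
	    dev_xdp_prog_count(dev))
		return -EEXIST;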

Hangbin, would you be willing to take a look at testing (and fixing)
the XDP program propagation? I did a naive test: I added a bond with a
veth under it, attached an XDP prog to the bond, and nothing happened
on the veth. Maybe I'm misreading, but I expected the XDP prog to show
up on the veth.
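
For anyone picking this up, roughly the following should reproduce the
naive test (assuming a trivial XDP_PASS program built as xdp_pass.o and
a bond mode that supports XDP, e.g. the balance-rr default):

	ip link add bond0 type bond
	ip link add veth0 type veth peer name veth1
	ip link set veth0 master bond0
	ip link set bond0 up
	ip link set dev bond0 xdp obj xdp_pass.o sec xdp
	ip link show dev veth0    # expected: an xdp prog listed here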

> diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
> index 4a9522689fa4..e01c5997a551 100644
> --- a/drivers/net/hyperv/netvsc_bpf.c
> +++ b/drivers/net/hyperv/netvsc_bpf.c
> @@ -183,7 +183,7 @@ int netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog)
>  	xdp.command = XDP_SETUP_PROG;
>  	xdp.prog = prog;
>  
> -	ret = vf_netdev->netdev_ops->ndo_bpf(vf_netdev, &xdp);
> +	ret = dev_xdp_propagate(vf_netdev, &xdp);

Again, the driver itself appears rather questionable, but we can leave
it be :)

> @@ -130,7 +130,7 @@ static int bpf_map_offload_ndo(struct bpf_offloaded_map *offmap,
>  	/* Caller must make sure netdev is valid */
>  	netdev = offmap->netdev;
>  
> -	return netdev->netdev_ops->ndo_bpf(netdev, &data);
> +	return dev_xdp_propagate(netdev, &data);

This is not propagation; it's an offload call, so let's not convert it.
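
i.e. keep the offload path as a direct call to the driver op:

	/* offload command, not XDP propagation */
	return netdev->netdev_ops->ndo_bpf(netdev, &data);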

> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index c0e0204b9630..f44d68c8d75d 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -149,7 +149,7 @@ static void xp_disable_drv_zc(struct xsk_buff_pool *pool)
>  		bpf.xsk.pool = NULL;
>  		bpf.xsk.queue_id = pool->queue_id;
>  
> -		err = pool->netdev->netdev_ops->ndo_bpf(pool->netdev, &bpf);
> +		err = dev_xdp_propagate(pool->netdev, &bpf);
>  
>  		if (err)
>  			WARN(1, "Failed to disable zero-copy!\n");
> @@ -215,7 +215,7 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
>  	bpf.xsk.pool = pool;
>  	bpf.xsk.queue_id = queue_id;
>  
> -	err = netdev->netdev_ops->ndo_bpf(netdev, &bpf);
> +	err = dev_xdp_propagate(netdev, &bpf);
>  	if (err)
>  		goto err_unreg_pool;
>  

That's also not XDP propagation. If you're not doing so already in
your series, you should look at the queue state here directly and check
whether it has an MP installed. Conversely, you should look at
rxq->pool when binding an MP. But as I said, that belongs in your
series, not in this XDP refactor.
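
A rough sketch of the shape of those checks in xp_assign_dev(); the MP
field names here are hypothetical, pending the memory provider series:

	struct netdev_rx_queue *rxq = __netif_get_rx_queue(netdev, queue_id);

	/* when binding an XSK pool: bail if the queue already has a
	 * memory provider installed (mp_params is hypothetical)
	 */
	if (rxq->mp_params.mp_priv)
		return -EBUSY;

	/* conversely, when binding an MP: bail if an XSK pool is bound */
	if (rxq->pool)
		return -EBUSY;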

So in case my ramblings were confusing - code LG, but ditch the
net/xdp/xsk_buff_pool.c and kernel/bpf/offload.c changes.

Patch

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index f9633a6f8571..73f9416c6c1b 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -2258,7 +2258,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
 			goto err_sysfs_del;
 		}
 
-		res = slave_dev->netdev_ops->ndo_bpf(slave_dev, &xdp);
+		res = dev_xdp_propagate(slave_dev, &xdp);
 		if (res < 0) {
 			/* ndo_bpf() sets extack error message */
 			slave_dbg(bond_dev, slave_dev, "Error %d calling ndo_bpf\n", res);
@@ -2394,7 +2394,7 @@ static int __bond_release_one(struct net_device *bond_dev,
 			.prog	 = NULL,
 			.extack  = NULL,
 		};
-		if (slave_dev->netdev_ops->ndo_bpf(slave_dev, &xdp))
+		if (dev_xdp_propagate(slave_dev, &xdp))
 			slave_warn(bond_dev, slave_dev, "failed to unload XDP program\n");
 	}
 
@@ -5584,7 +5584,7 @@ static int bond_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 			goto err;
 		}
 
-		err = slave_dev->netdev_ops->ndo_bpf(slave_dev, &xdp);
+		err = dev_xdp_propagate(slave_dev, &xdp);
 		if (err < 0) {
 			/* ndo_bpf() sets extack error message */
 			slave_err(dev, slave_dev, "Error %d calling ndo_bpf\n", err);
@@ -5616,7 +5616,7 @@ static int bond_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 		if (slave == rollback_slave)
 			break;
 
-		err_unwind = slave_dev->netdev_ops->ndo_bpf(slave_dev, &xdp);
+		err_unwind = dev_xdp_propagate(slave_dev, &xdp);
 		if (err_unwind < 0)
 			slave_err(dev, slave_dev,
 				  "Error %d when unwinding XDP program change\n", err_unwind);
diff --git a/drivers/net/hyperv/netvsc_bpf.c b/drivers/net/hyperv/netvsc_bpf.c
index 4a9522689fa4..e01c5997a551 100644
--- a/drivers/net/hyperv/netvsc_bpf.c
+++ b/drivers/net/hyperv/netvsc_bpf.c
@@ -183,7 +183,7 @@ int netvsc_vf_setxdp(struct net_device *vf_netdev, struct bpf_prog *prog)
 	xdp.command = XDP_SETUP_PROG;
 	xdp.prog = prog;
 
-	ret = vf_netdev->netdev_ops->ndo_bpf(vf_netdev, &xdp);
+	ret = dev_xdp_propagate(vf_netdev, &xdp);
 
 	if (ret && prog)
 		bpf_prog_put(prog);
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0ef3eaa23f4b..a4f876767423 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3918,6 +3918,7 @@ struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
 
 int bpf_xdp_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
 u8 dev_xdp_prog_count(struct net_device *dev);
+int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf);
 u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode);
 
 int __dev_forward_skb(struct net_device *dev, struct sk_buff *skb);
diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
index 1a4fec330eaa..a5b06bd5fe9b 100644
--- a/kernel/bpf/offload.c
+++ b/kernel/bpf/offload.c
@@ -130,7 +130,7 @@ static int bpf_map_offload_ndo(struct bpf_offloaded_map *offmap,
 	/* Caller must make sure netdev is valid */
 	netdev = offmap->netdev;
 
-	return netdev->netdev_ops->ndo_bpf(netdev, &data);
+	return dev_xdp_propagate(netdev, &data);
 }
 
 static void __bpf_map_offload_destroy(struct bpf_offloaded_map *offmap)
diff --git a/net/core/dev.c b/net/core/dev.c
index e7260889d4cb..165e9778d422 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -9369,6 +9369,15 @@ u8 dev_xdp_prog_count(struct net_device *dev)
 }
 EXPORT_SYMBOL_GPL(dev_xdp_prog_count);
 
+int dev_xdp_propagate(struct net_device *dev, struct netdev_bpf *bpf)
+{
+	if (!dev->netdev_ops->ndo_bpf)
+		return -EOPNOTSUPP;
+
+	return dev->netdev_ops->ndo_bpf(dev, bpf);
+}
+EXPORT_SYMBOL_GPL(dev_xdp_propagate);
+
 u32 dev_xdp_prog_id(struct net_device *dev, enum bpf_xdp_mode mode)
 {
 	struct bpf_prog *prog = dev_xdp_prog(dev, mode);
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index c0e0204b9630..f44d68c8d75d 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -149,7 +149,7 @@ static void xp_disable_drv_zc(struct xsk_buff_pool *pool)
 		bpf.xsk.pool = NULL;
 		bpf.xsk.queue_id = pool->queue_id;
 
-		err = pool->netdev->netdev_ops->ndo_bpf(pool->netdev, &bpf);
+		err = dev_xdp_propagate(pool->netdev, &bpf);
 
 		if (err)
 			WARN(1, "Failed to disable zero-copy!\n");
@@ -215,7 +215,7 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
 	bpf.xsk.pool = pool;
 	bpf.xsk.queue_id = queue_id;
 
-	err = netdev->netdev_ops->ndo_bpf(netdev, &bpf);
+	err = dev_xdp_propagate(netdev, &bpf);
 	if (err)
 		goto err_unreg_pool;