[net-next,0/2] Introduce XDP_FLAGS_NO_TX flag

Message ID 20210109024950.4043819-1-charlie@charlie.bz

Message

Charlie Somerville Jan. 9, 2021, 2:49 a.m. UTC
This patch series introduces a new flag XDP_FLAGS_NO_TX which prevents
the allocation of additional send queues for XDP programs.

Included in this patch series is an implementation of XDP_FLAGS_NO_TX
for the virtio_net driver. This flag is intended to be advisory - not
all drivers must implement support for it.
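
For context, the new bit would sit alongside the existing XDP_FLAGS_*
definitions in include/uapi/linux/if_link.h. A minimal sketch of what that
could look like (the bit value shown here is illustrative, not necessarily
what the patch uses):

  /* existing flags, for reference */
  #define XDP_FLAGS_UPDATE_IF_NOEXIST   (1U << 0)
  #define XDP_FLAGS_SKB_MODE            (1U << 1)
  #define XDP_FLAGS_DRV_MODE            (1U << 2)
  #define XDP_FLAGS_HW_MODE             (1U << 3)
  #define XDP_FLAGS_REPLACE             (1U << 4)

  /* sketch: advisory hint that the program never returns XDP_TX, so the
   * driver may skip allocating dedicated XDP_TX send queues */
  #define XDP_FLAGS_NO_TX               (1U << 5)

Presumably XDP_FLAGS_MASK also grows to include the new bit so the netlink
setlink path accepts it.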

Many virtualised environments only provide enough virtio_net send queues
for the number of processors allocated to the VM:

# nproc
8
# ethtool --show-channels ens3
Channel parameters for ens3:
Pre-set maximums:
RX:     0
TX:     0
Other:      0
Combined:   8

In this configuration XDP is unusable because the virtio_net driver
always tries to allocate an extra send queue for each processor - even
if the XDP program never uses the XDP_TX functionality.
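
The failure comes from the queue accounting in the driver's XDP setup path,
which reserves one XDP_TX queue per possible CPU. Roughly (a simplified
sketch of the virtnet_xdp_set() check, not the exact upstream code):

  curr_qp = vi->curr_queue_pairs - vi->xdp_queue_pairs;
  if (prog)
          xdp_qp = nr_cpu_ids;

  /* with max_queue_pairs == nproc and all of them already in use,
   * this check can never pass once a program is attached */
  if (curr_qp + xdp_qp > vi->max_queue_pairs) {
          netdev_warn(dev, "request %i queues but max is %i\n",
                      curr_qp + xdp_qp, vi->max_queue_pairs);
          return -ENOMEM;
  }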

While XDP_TX is still unavailable in these environments, this new flag
expands the set of XDP programs that can be used.

This is my first contribution to the kernel, so apologies if I've sent
this to the wrong list. I have tried to cc relevant maintainers but
it's possible I may have missed some people. I'm looking forward to
receiving feedback on this change.

Charlie Somerville (2):
  xdp: Add XDP_FLAGS_NO_TX flag
  virtio_net: Implement XDP_FLAGS_NO_TX support

 drivers/net/virtio_net.c     | 17 +++++++++++++----
 include/uapi/linux/if_link.h |  5 ++++-
 2 files changed, 17 insertions(+), 5 deletions(-)

Comments

Toke Høiland-Jørgensen Jan. 11, 2021, 11:10 a.m. UTC | #1
Charlie Somerville <charlie@charlie.bz> writes:

> This patch series introduces a new flag XDP_FLAGS_NO_TX which prevents
> the allocation of additional send queues for XDP programs.
>
> Included in this patch series is an implementation of XDP_FLAGS_NO_TX
> for the virtio_net driver. This flag is intended to be advisory - not
> all drivers must implement support for it.
>
> Many virtualised environments only provide enough virtio_net send queues
> for the number of processors allocated to the VM:
>
> # nproc
> 8
> # ethtool --show-channels ens3
> Channel parameters for ens3:
> Pre-set maximums:
> RX:     0
> TX:     0
> Other:      0
> Combined:   8
>
> In this configuration XDP is unusable because the virtio_net driver
> always tries to allocate an extra send queue for each processor - even
> if the XDP program never uses the XDP_TX functionality.
>
> While XDP_TX is still unavailable in these environments, this new flag
> expands the set of XDP programs that can be used.

I don't think adding a new flag is a good idea. Why can't you just fix
the driver?

-Toke
Jason Wang Jan. 12, 2021, 3:03 a.m. UTC | #2
On 2021/1/9 10:49 AM, Charlie Somerville wrote:
> This patch series introduces a new flag XDP_FLAGS_NO_TX which prevents
> the allocation of additional send queues for XDP programs.


This part I don't understand. Is such a flag a must? I think the answer is
probably not.

Why not simply do:

1) if we have sufficient TX queues, use dedicated TX queues for XDP_TX
2) if we don't, simply synchronize through a spin_lock [1]

Thanks

[1] https://www.spinics.net/lists/bpf/msg32587.html
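
The idea in [1] boils down to something like the following (an illustrative
sketch only - the names and details differ from the actual patch):

  /* pick an XDP_TX queue for the current CPU; if there are fewer
   * queues than CPUs, queues may be shared between CPUs, so take the
   * per-queue tx lock (the caller unlocks after transmitting in the
   * shared case) */
  static struct send_queue *xdp_get_sq(struct virtnet_info *vi)
  {
          unsigned int qp;

          if (vi->curr_queue_pairs > nr_cpu_ids) {
                  /* enough queues: each CPU owns a dedicated XDP_TX queue */
                  qp = vi->curr_queue_pairs - vi->xdp_queue_pairs +
                       smp_processor_id();
          } else {
                  /* not enough queues: share them and serialize access */
                  qp = smp_processor_id() % vi->curr_queue_pairs;
                  __netif_tx_lock(netdev_get_tx_queue(vi->dev, qp),
                                  raw_smp_processor_id());
          }
          return &vi->sq[qp];
  }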


>
> Included in this patch series is an implementation of XDP_FLAGS_NO_TX
> for the virtio_net driver. This flag is intended to be advisory - not
> all drivers must implement support for it.
>
> Many virtualised environments only provide enough virtio_net send queues
> for the number of processors allocated to the VM:
>
> # nproc
> 8
> # ethtool --show-channels ens3
> Channel parameters for ens3:
> Pre-set maximums:
> RX:     0
> TX:     0
> Other:      0
> Combined:   8
>
> In this configuration XDP is unusable because the virtio_net driver
> always tries to allocate an extra send queue for each processor - even
> if the XDP program never uses the XDP_TX functionality.
>
> While XDP_TX is still unavailable in these environments, this new flag
> expands the set of XDP programs that can be used.
>
> This is my first contribution to the kernel, so apologies if I've sent
> this to the wrong list. I have tried to cc relevant maintainers but
> it's possible I may have missed some people. I'm looking forward to
> receiving feedback on this change.
>
> Charlie Somerville (2):
>    xdp: Add XDP_FLAGS_NO_TX flag
>    virtio_net: Implement XDP_FLAGS_NO_TX support
>
>   drivers/net/virtio_net.c     | 17 +++++++++++++----
>   include/uapi/linux/if_link.h |  5 ++++-
>   2 files changed, 17 insertions(+), 5 deletions(-)
>
Charlie Somerville Jan. 12, 2021, 4:38 a.m. UTC | #3
On Tue, Jan 12, 2021, at 14:03, Jason Wang wrote:
> 
> On 2021/1/9 10:49 AM, Charlie Somerville wrote:
> > This patch series introduces a new flag XDP_FLAGS_NO_TX which prevents
> > the allocation of additional send queues for XDP programs.
> 
> 
> This part I don't understand. Is such a flag a must? I think the answer is
> probably not.
> 
> Why not simply do:
> 
> 1) if we have sufficient TX queues, use dedicated TX queues for XDP_TX
> 2) if we don't, simply synchronize through a spin_lock [1]
> 
> Thanks
> 
> [1] https://www.spinics.net/lists/bpf/msg32587.html

The patch from Xuan Zhuo looks like a much better approach. I am happy to
close this out in favour of that one! Thanks for the link.