[RFC,net-next,0/1] ibmveth: Implement BQL

Message ID: 20221024213828.320219-1-nnac123@linux.ibm.com

Message

Nick Child Oct. 24, 2022, 9:38 p.m. UTC
Hello,

Labeled as RFC because I am unsure whether adding Byte Queue Limits
(BQL) is positively affecting the ibmveth driver. BQL is common among
network drivers, so I would like to incorporate it into the virtual
ethernet driver, ibmveth, but I am having trouble measuring its effects.

From my understanding (and please correct me if I am wrong), BQL uses
the number of bytes handed to the NIC to approximate the minimum amount
of data that must be enqueued to a netdev_queue without starving the
NIC. As a result, bufferbloat in the networking queues is minimized,
which can reduce latency.
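
For reference, my mental model of the driver-side hooks is just the
standard netdev BQL helpers, roughly as in the sketch below. This is
not the exact diff in this patch, and every example_* name is made up;
only the netdev_tx_* helpers are the real API.

  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  /* Minimal sketch of the standard BQL hooks; hypothetical driver. */

  static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                        struct net_device *netdev)
  {
          struct netdev_queue *txq =
                  netdev_get_tx_queue(netdev, skb_get_queue_mapping(skb));

          /* ... hand the frame to the hypervisor interface here ... */

          /* Account the bytes as in flight; BQL may stop the queue once
           * the limit it has learned for this queue is reached. */
          netdev_tx_sent_queue(txq, skb->len);

          return NETDEV_TX_OK;
  }

  /* Called from the TX completion path once frames are reported done. */
  static void example_tx_complete(struct net_device *netdev, int queue,
                                  unsigned int pkts, unsigned int bytes)
  {
          struct netdev_queue *txq = netdev_get_tx_queue(netdev, queue);

          /* BQL recomputes the per-queue byte limit from completions and
           * re-wakes the queue if it had been stopped. */
          netdev_tx_completed_queue(txq, pkts, bytes);
  }

  /* On queue reset/close, clear the accounting so it stays consistent. */
  static void example_tx_reset(struct net_device *netdev, int queue)
  {
          netdev_tx_reset_queue(netdev_get_tx_queue(netdev, queue));
  }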

After performing various netperf tests under differing loads and
priorities, I do not see any performance difference when comparing the
driver with and without BQL. The ibmveth driver is a virtual driver
with an abstracted view of the NIC, so I am comfortable with not seeing
any performance deltas. That being said, I would like to know whether
BQL is actually being enforced in some way. In other words, I would
like to observe a change in the number of queued bytes once BQL is in
place. Does anyone know of a mechanism to measure the length of a
netdev_queue?

I tried creating a BPF script[1] to track the bytes in a netdev_queue,
but again I am not seeing any difference with and without BQL. I do not
believe anything is wrong with BQL (it is more likely that my tracing
is bad), but I would like to have some evidence of BQL having a
positive effect on the device. Any recommendations or advice would be
greatly appreciated.
Thanks.

[1] https://github.com/nick-child-ibm/bpf_scripts/blob/main/bpftrace_queued_bytes.bt 

Nick Child (1):
  ibmveth: Implement BQL

 drivers/net/ethernet/ibm/ibmveth.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Jakub Kicinski Oct. 25, 2022, 6:41 p.m. UTC | #1
On Mon, 24 Oct 2022 16:38:27 -0500 Nick Child wrote:
> Labeled as RFC because I am unsure whether adding Byte Queue Limits
> (BQL) is positively affecting the ibmveth driver. BQL is common among
> network drivers, so I would like to incorporate it into the virtual
> ethernet driver, ibmveth, but I am having trouble measuring its effects.
> 
> From my understanding (and please correct me if I am wrong), BQL uses
> the number of bytes handed to the NIC to approximate the minimum amount
> of data that must be enqueued to a netdev_queue without starving the
> NIC. As a result, bufferbloat in the networking queues is minimized,
> which can reduce latency.
> 
> After performing various netperf tests under differing loads and
> priorities, I do not see any performance difference when comparing the
> driver with and without BQL. The ibmveth driver is a virtual driver
> with an abstracted view of the NIC, so I am comfortable with not seeing
> any performance deltas. That being said, I would like to know whether
> BQL is actually being enforced in some way. In other words, I would
> like to observe a change in the number of queued bytes once BQL is in
> place. Does anyone know of a mechanism to measure the length of a
> netdev_queue?
> 
> I tried creating a BPF script[1] to track the bytes in a netdev_queue,
> but again I am not seeing any difference with and without BQL. I do not
> believe anything is wrong with BQL (it is more likely that my tracing
> is bad), but I would like to have some evidence of BQL having a
> positive effect on the device. Any recommendations or advice would be
> greatly appreciated.

What qdisc are you using and what "netperf tests" are you running?
Nick Child Oct. 25, 2022, 8:03 p.m. UTC | #2
On 10/25/22 13:41, Jakub Kicinski wrote:
> On Mon, 24 Oct 2022 16:38:27 -0500 Nick Child wrote:

>>  Does anyone know of a mechanism to measure the length
>> of a netdev_queue?
>>
>> I tried creating a BPF script[1] to track the bytes in a netdev_queue,
>> but again I am not seeing any difference with and without BQL. I do not
>> believe anything is wrong with BQL (it is more likely that my tracing
>> is bad), but I would like to have some evidence of BQL having a
>> positive effect on the device. Any recommendations or advice would be
>> greatly appreciated.
> 
> What qdisc are you using and what "netperf tests" are you running?

The qdisc is the default pfifo_fast.

I have tried the netperf tests described in the patchset which
introduced BQL[1]. More specifically, 100 low-priority netperf
TCP_STREAMs with 1 high-priority TCP_RR. The author of the patchset
also listed data for the number of queued bytes but did not explain
how those measurements were obtained.

Additionally, I have tried using flent[2] (a wrapper for netperf) to
run performance measurements while the system is under considerable
load. In particular, I tried the flent rrul_prio (Realtime Response
Under Load - Test Prio Queue) and rtt_fair (RTT Fair Realtime Response
Under Load) tests.

Again, a positive effect on performance is not as much of a concern
for me as knowing that BQL is actually enforcing queue size limits.

Thanks for your help,
Nick

[1] https://lwn.net/Articles/469652/
[2] https://flent.org/
Jakub Kicinski Oct. 25, 2022, 10:10 p.m. UTC | #3
On Tue, 25 Oct 2022 15:03:03 -0500 Nick Child wrote:
> The qdisc is the default pfifo_fast.

You need a more advanced qdisc to see an effect. Try fq.
BQL tries to keep the NIC queue (fifo) as short as possible
so that packets are held in the qdisc. But if the qdisc is also
just a fifo, there's no practical difference.

I have no practical experience with BQL on virtualized NICs,
though, so I'm unsure what gains you should expect to see.
Dave Taht Oct. 26, 2022, 12:08 a.m. UTC | #4
On Tue, Oct 25, 2022 at 3:10 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Tue, 25 Oct 2022 15:03:03 -0500 Nick Child wrote:
> > The qdisc is the default pfifo_fast.
>
> You need a more advanced qdisc to see an effect. Try fq.
> BQL tries to keep the NIC queue (fifo) as short as possible
> so that packets are held in the qdisc. But if the qdisc is also
> just a fifo, there's no practical difference.
>
> I have no practical experience with BQL on virtualized NICs,
> though, so I'm unsure what gains you should expect to see.

fq_codel would be a better choice of underlying qdisc for a test, and
in this environment you'd need to pound the interface flat with hundreds
of flows, preferably in both directions.

My questions are:

If the ring buffers never fill, why do you need to allocate so many
buffers in the first place?
If bql never engages, what's the bottleneck elsewhere? XMIT_MORE?

Now the only tool for monitoring bql I know of is bqlmon.
Nick Child Oct. 26, 2022, 9:10 p.m. UTC | #5
On 10/25/22 19:08, Dave Taht wrote:
> On Tue, Oct 25, 2022 at 3:10 PM Jakub Kicinski <kuba@kernel.org> wrote:
>>
>> On Tue, 25 Oct 2022 15:03:03 -0500 Nick Child wrote:
>>> The qdisc is the default pfifo_fast.
>>
>> You need a more advanced qdisc to see an effect. Try fq.
>> BQL tries to keep the NIC queue (fifo) as short as possible
>> so that packets are held in the qdisc. But if the qdisc is also
>> just a fifo, there's no practical difference.
>>
>> I have no practical experience with BQL on virtualized NICs,
>> though, so I'm unsure what gains you should expect to see.
> 

I understand. I think that is why I am trying to investigate this
further: the whole virtualization aspect could undermine everything
that BQL is trying to accomplish. That being said, I could also be
shining my flashlight in the wrong places. Hence the RFC.

> fq_codel would be a better choice of underlying qdisc for a test, and
> in this environment you'd need to pound the interface flat with hundreds
> of flows, preferably in both directions.
> 

After enabling fq_codel and rerunning the tests, I am still not seeing
any noticeable difference in the bytes sitting in the netdev_queue (but
it is possible my tracing is incorrect). I also tried reducing the
number of queues, disabling TSO, and even running 100-500 parallel
iperf connections. I can see the throughput and latency taking a hit
with more connections, so I assume the systems are saturated.

> My questions are:
> 
> If the ring buffers never fill, why do you need to allocate so many
> buffers in the first place?

The reasoning for 16 tx queues was mostly to allow for more parallel
calls to the device's xmit function. After hearing your points about
resource issues, I will send a patch to reduce this number to 8 queues.

> If bql never engages, what's the bottleneck elsewhere? XMIT_MORE?
> 

I suppose the question I am trying to pose is: how do we know that BQL
is engaging?

> Now the only tool for monitoring bql I know of is bqlmon.
> 
bqlmon is useful for tracking the BQL `limit` value assigned to a
queue (IOW `watch
/sys/class/net/<device>/queues/tx*/byte_queue_limits/limit`), but
whether or not this value is being applied to an active network
connection is what I would like to figure out.
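
As a next step, I may also try sampling the `inflight` counter next to
`limit` under load (assuming the standard byte_queue_limits sysfs
layout). A rough userspace sketch, with the device name and queue index
as placeholders:

  /* Periodically print BQL's limit and inflight counters for one TX
   * queue. eth0/tx-0 are placeholders; adjust for the system under test. */
  #include <stdio.h>
  #include <unistd.h>

  static long read_val(const char *path)
  {
          FILE *f = fopen(path, "r");
          long v = -1;

          if (!f)
                  return -1;
          if (fscanf(f, "%ld", &v) != 1)
                  v = -1;
          fclose(f);
          return v;
  }

  int main(void)
  {
          const char *base =
                  "/sys/class/net/eth0/queues/tx-0/byte_queue_limits";
          char limit[256], inflight[256];

          snprintf(limit, sizeof(limit), "%s/limit", base);
          snprintf(inflight, sizeof(inflight), "%s/inflight", base);

          for (;;) {
                  printf("limit=%ld inflight=%ld\n",
                         read_val(limit), read_val(inflight));
                  fflush(stdout);
                  usleep(100 * 1000);     /* sample every 100 ms */
          }
          return 0;
  }

If `inflight` stays pinned at 0 while traffic is flowing, that would
suggest the accounting hooks are not being hit; if it tracks the limit,
BQL is at least engaging.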

Thanks again for feedback and helping me out with this.
Nick Child