[bpf-next,2/6] samples/bpf: increment Tx stats at sending

Message ID: 1604498942-24274-3-git-send-email-magnus.karlsson@gmail.com
State: Changes Requested
Delegated to: BPF
Series: [bpf-next,1/6] i40e: introduce lazy Tx completions for AF_XDP zero-copy

Commit Message

Magnus Karlsson Nov. 4, 2020, 2:08 p.m. UTC
From: Magnus Karlsson <magnus.karlsson@intel.com>

Increment the statistic for how many Tx packets have been sent at
the time of sending instead of at the time of completion. This is
because a completion event means that the buffer has been sent AND
returned to user space, while the packet itself is always sent
shortly after sendto() is called. The kernel might, for performance
reasons, decide not to return every single buffer to user space
immediately after sending, but for example only after a batch of
packets has been transmitted. Incrementing the number of packets
sent at completion time would in that case be confusing: if you
send a single packet, the counter might show zero for a while even
though the packet has been transmitted.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
---
 samples/bpf/xdpsock_user.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
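
For illustration, a minimal sketch of where the increment moves
(simplified stand-in structures and made-up helper names; the real
change is in the diff under "Patch" below):

struct ring_stats { unsigned long tx_npkts; };
struct xsk_socket_info {
	struct ring_stats ring_stats;
	unsigned long outstanding_tx;
};

/* Old scheme: tx_npkts is bumped when completions are reaped, so a
 * single sent packet can read as 0 until the driver returns its
 * buffer to user space. */
static void complete_tx_sketch(struct xsk_socket_info *xsk, unsigned int rcvd)
{
	xsk->outstanding_tx -= rcvd;
	xsk->ring_stats.tx_npkts += rcvd;	/* lags the actual send */
}

/* New scheme: tx_npkts is bumped right after the descriptors are
 * submitted to the Tx ring, i.e. at send time. */
static void submit_tx_sketch(struct xsk_socket_info *xsk, unsigned int batch_size)
{
	xsk->ring_stats.tx_npkts += batch_size;	/* reflects the send */
	xsk->outstanding_tx += batch_size;
}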

Comments

John Fastabend Nov. 9, 2020, 8:47 p.m. UTC | #1
Magnus Karlsson wrote:
> From: Magnus Karlsson <magnus.karlsson@intel.com>
> 
> Increment the statistic for how many Tx packets have been sent at
> the time of sending instead of at the time of completion. This is
> because a completion event means that the buffer has been sent AND
> returned to user space, while the packet itself is always sent
> shortly after sendto() is called. The kernel might, for performance
> reasons, decide not to return every single buffer to user space
> immediately after sending, but for example only after a batch of
> packets has been transmitted. Incrementing the number of packets
> sent at completion time would in that case be confusing: if you
> send a single packet, the counter might show zero for a while even
> though the packet has been transmitted.
> 
> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
> ---

LGTM. Just one question: if we wanted to know the old value, i.e. the
packet completion counter, it looks like (tx_npkts - outstanding_tx)
would give that value?

Acked-by: John Fastabend <john.fastabend@gmail.com>
Magnus Karlsson Nov. 10, 2020, 7:12 a.m. UTC | #2
On Mon, Nov 9, 2020 at 9:47 PM John Fastabend <john.fastabend@gmail.com> wrote:
>
> Magnus Karlsson wrote:
> > [...]
>
> LGTM. Just one question: if we wanted to know the old value, i.e. the
> packet completion counter, it looks like (tx_npkts - outstanding_tx)
> would give that value?

That is correct.

> Acked-by: John Fastabend <john.fastabend@gmail.com>
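
A minimal sketch of the relationship confirmed above, assuming the
xsk_socket_info definition and field names from xdpsock_user.c (the
helper itself is made up and not part of the sample):

/* Packets whose buffers have already come back on the completion ring,
 * i.e. the value the old completion-time counter used to report. */
static unsigned long tx_completed_sketch(const struct xsk_socket_info *xsk)
{
	return xsk->ring_stats.tx_npkts - xsk->outstanding_tx;
}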

Patch

diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index 1149e94..2567f0d 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -1146,7 +1146,6 @@  static inline void complete_tx_l2fwd(struct xsk_socket_info *xsk,
 		xsk_ring_prod__submit(&xsk->umem->fq, rcvd);
 		xsk_ring_cons__release(&xsk->umem->cq, rcvd);
 		xsk->outstanding_tx -= rcvd;
-		xsk->ring_stats.tx_npkts += rcvd;
 	}
 }
 
@@ -1168,7 +1167,6 @@  static inline void complete_tx_only(struct xsk_socket_info *xsk,
 	if (rcvd > 0) {
 		xsk_ring_cons__release(&xsk->umem->cq, rcvd);
 		xsk->outstanding_tx -= rcvd;
-		xsk->ring_stats.tx_npkts += rcvd;
 	}
 }
 
@@ -1260,6 +1258,7 @@  static void tx_only(struct xsk_socket_info *xsk, u32 *frame_nb, int batch_size)
 	}
 
 	xsk_ring_prod__submit(&xsk->tx, batch_size);
+	xsk->ring_stats.tx_npkts += batch_size;
 	xsk->outstanding_tx += batch_size;
 	*frame_nb += batch_size;
 	*frame_nb %= NUM_FRAMES;
@@ -1348,6 +1347,7 @@  static void l2fwd(struct xsk_socket_info *xsk, struct pollfd *fds)
 		}
 		return;
 	}
+	xsk->ring_stats.rx_npkts += rcvd;
 
 	ret = xsk_ring_prod__reserve(&xsk->tx, rcvd, &idx_tx);
 	while (ret != rcvd) {
@@ -1379,7 +1379,7 @@  static void l2fwd(struct xsk_socket_info *xsk, struct pollfd *fds)
 	xsk_ring_prod__submit(&xsk->tx, rcvd);
 	xsk_ring_cons__release(&xsk->rx, rcvd);
 
-	xsk->ring_stats.rx_npkts += rcvd;
+	xsk->ring_stats.tx_npkts += rcvd;
 	xsk->outstanding_tx += rcvd;
 }