[2/5] io_uring/net: add provided buffer support for IORING_OP_SEND

Message ID 20240420133233.500590-4-axboe@kernel.dk (mailing list archive)
State New
Series Send and receive bundles

Commit Message

Jens Axboe April 20, 2024, 1:29 p.m. UTC
It's pretty trivial to wire up provided buffer support for the send
side, just like how it's done on the receive side. This enables setting up
a buffer ring that an application can use to push pending sends to,
and then have a send pick a buffer from that ring.
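
For reference, a minimal userspace sketch of that flow (a sketch only,
assuming a liburing build that has the buffer-ring helpers; the group
id, entry count, and function name below are arbitrary placeholders,
not part of this patch):

#include <liburing.h>

/* Register a buffer ring for sends and queue one provided-buffer send
 * on 'sockfd'. Group id 0 and 8 entries are arbitrary choices; error
 * handling is trimmed to keep the sketch short. */
static int queue_provided_send(struct io_uring *ring, int sockfd,
			       void *data, unsigned int data_len)
{
	struct io_uring_buf_ring *br;
	struct io_uring_sqe *sqe;
	int err;

	br = io_uring_setup_buf_ring(ring, 8, 0, 0, &err);
	if (!br)
		return err;

	/* push the data for the pending send into the ring (buffer id 0) */
	io_uring_buf_ring_add(br, data, data_len, 0,
			      io_uring_buf_ring_mask(8), 0);
	io_uring_buf_ring_advance(br, 1);

	/* the send itself carries no buffer; one is picked at issue time */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_send(sqe, sockfd, NULL, 0, 0);
	sqe->flags |= IOSQE_BUFFER_SELECT;
	sqe->buf_group = 0;
	return io_uring_submit(ring);
}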

One of the challenges with async IO and networking sends is that you
can get into reordering conditions if you have more than one inflight
at the same time. Consider the following scenario where everything is
fine:

1) App queues sendA for socket1
2) App queues sendB for socket1
3) App does io_uring_submit()
4) sendA is issued, completes successfully, posts CQE
5) sendB is issued, completes successfully, posts CQE

All is fine. Requests are always issued in-order, and both complete
inline as most sends do.

However, if we're flooding socket1 with sends, the following could
also result from the same sequence:

1) App queues sendA for socket1
2) App queues sendB for socket1
3) App does io_uring_submit()
4) sendA is issued, socket1 is full, poll is armed for retry
5) Space frees up in socket1, this triggers sendA retry via task_work
6) sendB is issued, completes successfully, posts CQE
7) sendA is retried, completes successfully, posts CQE

Now we've sent sendB before sendA, which can make things unhappy. If
both sendA and sendB had been using provided buffers, then it would look
as follows instead:

1) App queues dataA for sendA, queues sendA for socket1
2) App queues dataB for sendB, queues sendB for socket1
3) App does io_uring_submit()
4) sendA is issued, socket1 is full, poll is armed for retry
5) Space frees up in socket1, this triggers sendA retry via task_work
6) sendB is issued, picks first buffer (dataA), completes successfully,
   posts CQE (which says "I sent dataA")
7) sendA is retried, picks first buffer (dataB), completes successfully,
   posts CQE (which says "I sent dataB")

Now we've sent the data in order, and everybody is happy.
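
As the scenario shows, the CQE has to tell the application which data
was actually sent. With provided buffers that information is carried in
the CQE flags as the buffer id; continuing the liburing sketch above
(same placeholder names, error handling elided):

	struct io_uring_cqe *cqe;

	io_uring_wait_cqe(ring, &cqe);
	if (cqe->flags & IORING_CQE_F_BUFFER) {
		unsigned int bid = cqe->flags >> IORING_CQE_BUFFER_SHIFT;

		/* cqe->res bytes of the buffer with id 'bid' were sent */
	}
	io_uring_cqe_seen(ring, cqe);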

It's worth noting that this also opens the door for supporting multishot
sends, as provided buffers would be a prerequisite for that. Those can
trigger either when new buffers are added to the outgoing ring, or (if
stalled due to lack of space) when space frees up in the socket.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/net.c   | 25 ++++++++++++++++++++-----
 io_uring/opdef.c |  1 +
 2 files changed, 21 insertions(+), 5 deletions(-)

Comments

Gabriel Krisman Bertazi April 25, 2024, 11:56 a.m. UTC | #1
Jens Axboe <axboe@kernel.dk> writes:

> It's pretty trivial to wire up provided buffer support for the send
> side, just like how it's done on the receive side. This enables setting up
> a buffer ring that an application can use to push pending sends to,
> and then have a send pick a buffer from that ring.
>
> One of the challenges with async IO and networking sends is that you
> can get into reordering conditions if you have more than one inflight
> at the same time. Consider the following scenario where everything is
> fine:
>
> 1) App queues sendA for socket1
> 2) App queues sendB for socket1
> 3) App does io_uring_submit()
> 4) sendA is issued, completes successfully, posts CQE
> 5) sendB is issued, completes successfully, posts CQE
>
> All is fine. Requests are always issued in-order, and both complete
> inline as most sends do.

>
> However, if we're flooding socket1 with sends, the following could
> also result from the same sequence:
>
> 1) App queues sendA for socket1
> 2) App queues sendB for socket1
> 3) App does io_uring_submit()
> 4) sendA is issued, socket1 is full, poll is armed for retry
> 5) Space frees up in socket1, this triggers sendA retry via task_work
> 6) sendB is issued, completes successfully, posts CQE
> 7) sendA is retried, completes successfully, posts CQE
>
> Now we've sent sendB before sendA, which can make things unhappy. If
> both sendA and sendB had been using provided buffers, then it would look
> as follows instead:
>
> 1) App queues dataA for sendA, queues sendA for socket1
> 2) App queues dataB for sendB, queues sendB for socket1
> 3) App does io_uring_submit()
> 4) sendA is issued, socket1 is full, poll is armed for retry
> 5) Space frees up in socket1, this triggers sendA retry via task_work
> 6) sendB is issued, picks first buffer (dataA), completes successfully,
>    posts CQE (which says "I sent dataA")
> 7) sendA is retried, picks first buffer (dataB), completes successfully,
>    posts CQE (which says "I sent dataB")

Hi Jens,

If I understand correctly, when sending a buffer, we set sr->len to the
smaller of the buffer size and what was requested in sqe->len. But when
we disconnect the buffer from the request, we can get into a situation
where the buffers and requests mismatch, and only one buffer gets sent.

Say we are sending two buffers through non-bundle sends with different
sizes to the same socket in this order:

 buff[1]->len = 128
 buff[2]->len = 256

And SQEs like this:

 sqe[1]->len = 128
 sqe[2]->len = 256

If sqe[1] picks buff[1], it is all good. But if sqe[2] runs first, then
sqe[1] picks buff[2], and it will only send the first 128 bytes, won't
it? Looking at the patch, I don't see how you avoid this condition, but
perhaps I'm missing something?
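
(As I read it, the selection path only overrides the requested length
when it is 0 or larger than the picked buffer -- roughly the following,
as a paraphrase rather than the literal kernel source:

  size_t len = sr->len;        /* 128, from sqe[1] */

  if (!len || len > buf->len)  /* buf->len == 256, i.e. buff[2] */
          len = buf->len;
  /* len stays 128: only the first 128 bytes of buff[2] get sent */

which is what makes the truncation above possible.)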

One suggestion would be requiring sqe->len to be 0 when using send with
provided buffers, so we simply use the entire buffer in
the ring.  wdyt?

Thanks,

Gabriel Krisman Bertazi April 25, 2024, 12:19 p.m. UTC | #2
Gabriel Krisman Bertazi <krisman@suse.de> writes:

...

> situation where the buffers and requests mismatch,  and only one buffer
> gets sent.

Sorry, here I meant that *only part of a buffer* might get sent because
we truncate to sqe->len.  As in the example I gave.

Jens Axboe April 25, 2024, 3:11 p.m. UTC | #3
On 4/25/24 5:56 AM, Gabriel Krisman Bertazi wrote:
> Jens Axboe <axboe@kernel.dk> writes:
> 
>> It's pretty trivial to wire up provided buffer support for the send
>> side, just like how it's done on the receive side. This enables setting up
>> a buffer ring that an application can use to push pending sends to,
>> and then have a send pick a buffer from that ring.
>>
>> One of the challenges with async IO and networking sends is that you
>> can get into reordering conditions if you have more than one inflight
>> at the same time. Consider the following scenario where everything is
>> fine:
>>
>> 1) App queues sendA for socket1
>> 2) App queues sendB for socket1
>> 3) App does io_uring_submit()
>> 4) sendA is issued, completes successfully, posts CQE
>> 5) sendB is issued, completes successfully, posts CQE
>>
>> All is fine. Requests are always issued in-order, and both complete
>> inline as most sends do.
>
>>
>> However, if we're flooding socket1 with sends, the following could
>> also result from the same sequence:
>>
>> 1) App queues sendA for socket1
>> 2) App queues sendB for socket1
>> 3) App does io_uring_submit()
>> 4) sendA is issued, socket1 is full, poll is armed for retry
>> 5) Space frees up in socket1, this triggers sendA retry via task_work
>> 6) sendB is issued, completes successfully, posts CQE
>> 7) sendA is retried, completes successfully, posts CQE
>>
>> Now we've sent sendB before sendA, which can make things unhappy. If
>> both sendA and sendB had been using provided buffers, then it would look
>> as follows instead:
>>
>> 1) App queues dataA for sendA, queues sendA for socket1
>> 2) App queues dataB for sendB, queues sendB for socket1
>> 3) App does io_uring_submit()
>> 4) sendA is issued, socket1 is full, poll is armed for retry
>> 5) Space frees up in socket1, this triggers sendA retry via task_work
>> 6) sendB is issued, picks first buffer (dataA), completes successfully,
>>    posts CQE (which says "I sent dataA")
>> 7) sendA is retried, picks first buffer (dataB), completes successfully,
>>    posts CQE (which says "I sent dataB")
> 
> Hi Jens,
> 
> If I understand correctly, when sending a buffer, we set sr->len to the
> smaller of the buffer size and what was requested in sqe->len. But when
> we disconnect the buffer from the request, we can get into a situation
> where the buffers and requests mismatch, and only one buffer gets sent.
> 
> Say we are sending two buffers through non-bundle sends with different
> sizes to the same socket in this order:
> 
>  buff[1]->len = 128
>  buff[2]->len = 256
> 
> And SQEs like this:
> 
>  sqe[1]->len = 128
>  sqe[2]->len = 256
> 
> If sqe[1] picks buff[1], it is all good. But if sqe[2] runs first, then
> sqe[1] picks buff[2], and it will only send the first 128 bytes, won't
> it? Looking at the patch, I don't see how you avoid this condition, but
> perhaps I'm missing something?
> 
> One suggestion would be requiring sqe->len to be 0 when using send with
> provided buffers, so we simply use the entire buffer in
> the ring.  wdyt?

It might not hurt to just enforce it to be 0; in fact, I think any sane
use case would do that, and I don't think the above use case is a very
valid one. It's a bit of "you get to keep both pieces when it breaks".

Do you want to send a patch that just enforces it to be 0? We do have
that requirement in other spots for provided buffers and multishot, so I
think it'll make sense to do here too regardless of the sanity of the
use case.
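
For illustration, something along these lines in the prep path is what
I have in mind (an untested sketch, not an actual patch):

	/* once sr->len has been read from the sqe, e.g. in io_sendmsg_prep() */
	if ((req->flags & REQ_F_BUFFER_SELECT) && sr->len)
		return -EINVAL;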

Patch

diff --git a/io_uring/net.c b/io_uring/net.c
index dc310f0bfe4c..13685d133582 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -364,10 +364,12 @@  static int io_send_setup(struct io_kiocb *req)
 		kmsg->msg.msg_name = &kmsg->addr;
 		kmsg->msg.msg_namelen = sr->addr_len;
 	}
-	ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len, &kmsg->msg.msg_iter);
-	if (unlikely(ret < 0))
-		return ret;
-
+	if (!io_do_buffer_select(req)) {
+		ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
+				  &kmsg->msg.msg_iter);
+		if (unlikely(ret < 0))
+			return ret;
+	}
 	return 0;
 }
 
@@ -480,6 +482,7 @@  int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
 	struct io_async_msghdr *kmsg = req->async_data;
 	struct socket *sock;
+	unsigned int cflags;
 	unsigned flags;
 	int min_ret = 0;
 	int ret;
@@ -492,6 +495,17 @@  int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	    (sr->flags & IORING_RECVSEND_POLL_FIRST))
 		return -EAGAIN;
 
+	if (io_do_buffer_select(req)) {
+		size_t len = sr->len;
+		void __user *buf;
+
+		buf = io_buffer_select(req, &len, issue_flags);
+		if (unlikely(!buf))
+			return -ENOBUFS;
+		sr->buf = buf;
+		sr->len = len;
+	}
+
 	flags = sr->msg_flags;
 	if (issue_flags & IO_URING_F_NONBLOCK)
 		flags |= MSG_DONTWAIT;
@@ -521,7 +535,8 @@  int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	else if (sr->done_io)
 		ret = sr->done_io;
 	io_req_msg_cleanup(req, issue_flags);
-	io_req_set_res(req, ret, 0);
+	cflags = io_put_kbuf(req, issue_flags);
+	io_req_set_res(req, ret, cflags);
 	return IOU_OK;
 }
 
diff --git a/io_uring/opdef.c b/io_uring/opdef.c
index a16f73938ebb..2de5cca9504e 100644
--- a/io_uring/opdef.c
+++ b/io_uring/opdef.c
@@ -281,6 +281,7 @@  const struct io_issue_def io_issue_defs[] = {
 		.pollout		= 1,
 		.audit_skip		= 1,
 		.ioprio			= 1,
+		.buffer_select		= 1,
 #if defined(CONFIG_NET)
 		.async_size		= sizeof(struct io_async_msghdr),
 		.prep			= io_sendmsg_prep,