
[1/4] io_uring/uring_cmd: get rid of using req->imu

Message ID 20241018183948.464779-2-axboe@kernel.dk (mailing list archive)
State New
Series Get rid of io_kiocb->imu

Commit Message

Jens Axboe Oct. 18, 2024, 6:38 p.m. UTC
It's pretty pointless to use io_kiocb as intermediate storage for this,
so split the validity check and the actual usage.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/uring_cmd.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

Comments

Pavel Begunkov Oct. 18, 2024, 7:34 p.m. UTC | #1
On 10/18/24 19:38, Jens Axboe wrote:
> It's pretty pointless to use io_kiocb as intermediate storage for this,
> so split the validity check and the actual usage.

The table is uring_lock protected; if we don't resolve in advance,
we need to take care of locking when importing.

Another concern is adding a gap between setting a rsrc node and looking
up a buffer. That should be fine, but it's worth mentioning that when
you grab a rsrc node it also prevents destruction of all objects that
are created after this point.

  
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   io_uring/uring_cmd.c | 9 +++++----
>   1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
> index 39c3c816ec78..cc8bb5550ff5 100644
> --- a/io_uring/uring_cmd.c
> +++ b/io_uring/uring_cmd.c
> @@ -211,11 +211,10 @@ int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
>   		struct io_ring_ctx *ctx = req->ctx;
>   		u16 index;
>   
> -		req->buf_index = READ_ONCE(sqe->buf_index);
> +		index = READ_ONCE(sqe->buf_index);
> +		req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
>   		if (unlikely(req->buf_index >= ctx->nr_user_bufs))
>   			return -EFAULT;

The rsrc node should hold off the table destruction, but it feels like
this would be better moved to where the importing happens.


> -		index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
> -		req->imu = ctx->user_bufs[index];
>   		io_req_set_rsrc_node(req, ctx, 0);
>   	}
>   	ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
> @@ -272,8 +271,10 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
>   			      struct iov_iter *iter, void *ioucmd)
>   {
>   	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
> +	struct io_mapped_ubuf *imu;
>   
> -	return io_import_fixed(rw, iter, req->imu, ubuf, len);
> +	imu = req->ctx->user_bufs[req->buf_index];
> +	return io_import_fixed(rw, iter, imu, ubuf, len);
>   }
>   EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);
>
Jens Axboe Oct. 19, 2024, 9:56 p.m. UTC | #2
On 10/18/24 1:34 PM, Pavel Begunkov wrote:
> On 10/18/24 19:38, Jens Axboe wrote:
>> It's pretty pointless to use io_kiocb as intermediate storage for this,
>> so split the validity check and the actual usage.
> 
> The table is uring_lock protected; if we don't resolve in advance,
> we need to take care of locking when importing.
> 
> Another concern is adding a gap between setting a rsrc node and looking
> up a buffer. That should be fine, but it's worth mentioning that when
> you grab a rsrc node it also prevents destruction of all objects that
> are created after this point.

Yeah, the last part should be fine, but the first one surely is not! I
also notice that the check for too large an index now happens after the
array_index_nospec(); that's also an issue.

I'll spin a v2. We should just put it all in one place.

Patch

diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 39c3c816ec78..cc8bb5550ff5 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -211,11 +211,10 @@  int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		struct io_ring_ctx *ctx = req->ctx;
 		u16 index;
 
-		req->buf_index = READ_ONCE(sqe->buf_index);
+		index = READ_ONCE(sqe->buf_index);
+		req->buf_index = array_index_nospec(index, ctx->nr_user_bufs);
 		if (unlikely(req->buf_index >= ctx->nr_user_bufs))
 			return -EFAULT;
-		index = array_index_nospec(req->buf_index, ctx->nr_user_bufs);
-		req->imu = ctx->user_bufs[index];
 		io_req_set_rsrc_node(req, ctx, 0);
 	}
 	ioucmd->cmd_op = READ_ONCE(sqe->cmd_op);
@@ -272,8 +271,10 @@  int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
 			      struct iov_iter *iter, void *ioucmd)
 {
 	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+	struct io_mapped_ubuf *imu;
 
-	return io_import_fixed(rw, iter, req->imu, ubuf, len);
+	imu = req->ctx->user_bufs[req->buf_index];
+	return io_import_fixed(rw, iter, imu, ubuf, len);
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_import_fixed);