diff mbox series

[3/8] ublk: truncate io command result

Message ID 20250324134905.766777-4-ming.lei@redhat.com (mailing list archive)
State New
Headers show
Series ublk: cleanup & improvement & zc follow-up | expand

Checks

Context Check Description
shin/vmtest-for-next-PR success PR summary
shin/vmtest-for-next-VM_Test-1 success Logs for build-kernel
shin/vmtest-for-next-VM_Test-0 success Logs for build-kernel

Commit Message

Ming Lei March 24, 2025, 1:48 p.m. UTC
If the io command result is bigger than the request bytes, truncate it to
the request bytes. This is more reliable and avoids potential risk, even
though both blk_update_request() and ublk_copy_user_pages() work fine
either way.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/block/ublk_drv.c | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Caleb Sander Mateos March 24, 2025, 3:51 p.m. UTC | #1
On Mon, Mar 24, 2025 at 6:49 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> If the io command result is bigger than the request bytes, truncate it to
> the request bytes. This is more reliable and avoids potential risk, even
> though both blk_update_request() and ublk_copy_user_pages() work fine
> either way.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  drivers/block/ublk_drv.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 6fa1384c6436..acb6aed7be75 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -1071,6 +1071,10 @@ static inline void __ublk_complete_rq(struct request *req)
>                 goto exit;
>         }
>
> +       /* truncate result in case it is bigger than request bytes */
> +       if (io->res > blk_rq_bytes(req))
> +               io->res = blk_rq_bytes(req);

Is this not already handled by the code below that caps io->res?

unmapped_bytes = ublk_unmap_io(ubq, req, io);
// ...
if (unlikely(unmapped_bytes < io->res))
        io->res = unmapped_bytes;

ublk_unmap_io() returns either blk_rq_bytes(req) or the result of
ublk_copy_user_pages(), which should be at most blk_rq_bytes(req)?

Best,
Caleb
Ming Lei March 25, 2025, 12:50 a.m. UTC | #2
On Mon, Mar 24, 2025 at 08:51:20AM -0700, Caleb Sander Mateos wrote:
> On Mon, Mar 24, 2025 at 6:49 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > If the io command result is bigger than the request bytes, truncate it to
> > the request bytes. This is more reliable and avoids potential risk, even
> > though both blk_update_request() and ublk_copy_user_pages() work fine
> > either way.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  drivers/block/ublk_drv.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index 6fa1384c6436..acb6aed7be75 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -1071,6 +1071,10 @@ static inline void __ublk_complete_rq(struct request *req)
> >                 goto exit;
> >         }
> >
> > +       /* truncate result in case it is bigger than request bytes */
> > +       if (io->res > blk_rq_bytes(req))
> > +               io->res = blk_rq_bytes(req);
> 
> Is this not already handled by the code below that caps io->res?
> 
> unmapped_bytes = ublk_unmap_io(ubq, req, io);
> // ...
> if (unlikely(unmapped_bytes < io->res))
>         io->res = unmapped_bytes;
> 
> ublk_unmap_io() returns either blk_rq_bytes(req) or the result of
> ublk_copy_user_pages(), which should be at most blk_rq_bytes(req)?

Indeed, this patch can be dropped.


thanks,
Ming

Patch

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 6fa1384c6436..acb6aed7be75 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1071,6 +1071,10 @@ static inline void __ublk_complete_rq(struct request *req)
 		goto exit;
 	}
 
+	/* truncate result in case it is bigger than request bytes */
+	if (io->res > blk_rq_bytes(req))
+		io->res = blk_rq_bytes(req);
+
 	/*
 	 * FLUSH, DISCARD or WRITE_ZEROES usually won't return bytes returned, so end them
 	 * directly.