Message ID | 20250324134905.766777-4-ming.lei@redhat.com |
---|---|
State | New |
Series | ublk: cleanup & improvement & zc follow-up |
Context | Check | Description |
---|---|---|
shin/vmtest-for-next-PR | success | PR summary |
shin/vmtest-for-next-VM_Test-1 | success | Logs for build-kernel |
shin/vmtest-for-next-VM_Test-0 | success | Logs for build-kernel |
On Mon, Mar 24, 2025 at 6:49 AM Ming Lei <ming.lei@redhat.com> wrote:
>
> If the io command result is bigger than the request bytes, truncate it
> to the request bytes. This is more reliable and avoids potential risk,
> even though both blk_update_request() and ublk_copy_user_pages() work
> fine in this case.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  drivers/block/ublk_drv.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 6fa1384c6436..acb6aed7be75 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -1071,6 +1071,10 @@ static inline void __ublk_complete_rq(struct request *req)
>  		goto exit;
>  	}
>
> +	/* truncate result in case it is bigger than request bytes */
> +	if (io->res > blk_rq_bytes(req))
> +		io->res = blk_rq_bytes(req);

Is this not already handled by the code below that caps io->res?

	unmapped_bytes = ublk_unmap_io(ubq, req, io);
	// ...
	if (unlikely(unmapped_bytes < io->res))
		io->res = unmapped_bytes;

ublk_unmap_io() returns either blk_rq_bytes(req) or the result of
ublk_copy_user_pages(), which should be at most blk_rq_bytes(req)?

Best,
Caleb
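For readers following the argument, here is a small standalone sketch of why the existing cap makes the patch redundant. It is not the driver code: unmap_io() is a hypothetical stand-in for ublk_unmap_io(), and the byte counts are made up for illustration. The point is that if unmapping never returns more than the request size, capping io->res by the unmapped byte count already bounds it by blk_rq_bytes(req).

    /* standalone sketch, not drivers/block/ublk_drv.c */
    #include <assert.h>
    #include <stdio.h>

    /* hypothetical stand-in for ublk_unmap_io(): copies at most rq_bytes */
    static unsigned int unmap_io(unsigned int rq_bytes, unsigned int to_copy)
    {
            return to_copy < rq_bytes ? to_copy : rq_bytes;
    }

    int main(void)
    {
            unsigned int rq_bytes = 4096;   /* blk_rq_bytes(req) */
            int io_res = 8192;              /* oversized result from the ublk server */
            unsigned int unmapped_bytes = unmap_io(rq_bytes, io_res);

            /* the existing cap in __ublk_complete_rq() */
            if (unmapped_bytes < (unsigned int)io_res)
                    io_res = unmapped_bytes;

            /* io->res cannot exceed the request size, so the patch's
             * extra truncation never fires */
            assert((unsigned int)io_res <= rq_bytes);
            printf("io->res capped to %d of %u request bytes\n", io_res, rq_bytes);
            return 0;
    }

Run as-is, this prints "io->res capped to 4096 of 4096 request bytes": the oversized result is already truncated by the unmap path before blk_update_request() sees it.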
On Mon, Mar 24, 2025 at 08:51:20AM -0700, Caleb Sander Mateos wrote:
> On Mon, Mar 24, 2025 at 6:49 AM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > If the io command result is bigger than the request bytes, truncate it
> > to the request bytes. This is more reliable and avoids potential risk,
> > even though both blk_update_request() and ublk_copy_user_pages() work
> > fine in this case.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  drivers/block/ublk_drv.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> > index 6fa1384c6436..acb6aed7be75 100644
> > --- a/drivers/block/ublk_drv.c
> > +++ b/drivers/block/ublk_drv.c
> > @@ -1071,6 +1071,10 @@ static inline void __ublk_complete_rq(struct request *req)
> >  		goto exit;
> >  	}
> >
> > +	/* truncate result in case it is bigger than request bytes */
> > +	if (io->res > blk_rq_bytes(req))
> > +		io->res = blk_rq_bytes(req);
>
> Is this not already handled by the code below that caps io->res?
>
> 	unmapped_bytes = ublk_unmap_io(ubq, req, io);
> 	// ...
> 	if (unlikely(unmapped_bytes < io->res))
> 		io->res = unmapped_bytes;
>
> ublk_unmap_io() returns either blk_rq_bytes(req) or the result of
> ublk_copy_user_pages(), which should be at most blk_rq_bytes(req)?

Indeed, this patch can be dropped.

thanks,
Ming
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 6fa1384c6436..acb6aed7be75 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1071,6 +1071,10 @@ static inline void __ublk_complete_rq(struct request *req)
 		goto exit;
 	}
 
+	/* truncate result in case it is bigger than request bytes */
+	if (io->res > blk_rq_bytes(req))
+		io->res = blk_rq_bytes(req);
+
 	/*
 	 * FLUSH, DISCARD or WRITE_ZEROES usually won't return bytes returned, so end them
 	 * directly.
If the io command result is bigger than the request bytes, truncate it to
the request bytes. This is more reliable and avoids potential risk, even
though both blk_update_request() and ublk_copy_user_pages() work fine in
this case.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/block/ublk_drv.c | 4 ++++
 1 file changed, 4 insertions(+)