block: bio_map_user_iov should not be limited to BIO_MAX_PAGES

Message ID 20190417115008.27516-1-pbonzini@redhat.com (mailing list archive)
State New, archived

Commit Message

Paolo Bonzini April 17, 2019, 11:50 a.m. UTC
Because bio_kmalloc allocates its bio_vecs inline, the limit on the number of
entries is not BIO_MAX_PAGES but rather UIO_MAXIOV, which bio_kmalloc already
checks.  Capping iov_iter_npages at BIO_MAX_PAGES could therefore cause SG_IO
requests to be truncated and the HBA to report a DMA overrun.
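
For context, the limit being hit is the one in the kmalloc'ed-bio path; the
snippet below is a paraphrase (not the verbatim source) of the NULL-bio_set
case of bio_alloc_bioset, which bio_kmalloc calls:

	if (nr_iovecs > UIO_MAXIOV)
		return NULL;	/* reject outright, do not clamp */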

Note that if the argument to iov_iter_npages were changed to UIO_MAXIOV, SG_IO
requests beyond UIO_MAXIOV pages would still be truncated silently.  Changing
it to UIO_MAXIOV + 1 instead ensures that bio_kmalloc notices that the request
is too big and rejects it.
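
To make the off-by-one explicit, here is a small standalone model (plain
userspace C, not kernel code; MAXIOV, npages() and alloc_ok() are hypothetical
stand-ins for UIO_MAXIOV, iov_iter_npages() and the bio_kmalloc limit):

#include <stdio.h>

#define MAXIOV 1024

/* iov_iter_npages() never reports more than maxpages pages. */
static int npages(int pages_in_request, int maxpages)
{
	return pages_in_request < maxpages ? pages_in_request : maxpages;
}

/* bio_kmalloc() refuses to allocate more than MAXIOV inline vecs. */
static int alloc_ok(int nr_vecs)
{
	return nr_vecs <= MAXIOV;
}

int main(void)
{
	int request = MAXIOV + 50;	/* a request that is too big */

	/* cap == MAXIOV: the excess pages are silently dropped and the
	 * allocation still succeeds, so the transfer comes up short. */
	printf("cap=MAXIOV:   npages=%d alloc_ok=%d\n",
	       npages(request, MAXIOV), alloc_ok(npages(request, MAXIOV)));

	/* cap == MAXIOV + 1: the reported count exceeds the allocation
	 * limit, so the oversized request is rejected up front. */
	printf("cap=MAXIOV+1: npages=%d alloc_ok=%d\n",
	       npages(request, MAXIOV + 1),
	       alloc_ok(npages(request, MAXIOV + 1)));
	return 0;
}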

Cc: stable@vger.kernel.org
Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: b282cc766958 ("bio_map_user_iov(): get rid of the iov_for_each()", 2017-10-11)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/bio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Patch

diff --git a/block/bio.c b/block/bio.c
index 4db1008309ed..cc1195f5af7a 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1299,7 +1299,7 @@  struct bio *bio_map_user_iov(struct request_queue *q,
 	if (!iov_iter_count(iter))
 		return ERR_PTR(-EINVAL);
 
-	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, BIO_MAX_PAGES));
+	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, UIO_MAXIOV + 1));
 	if (!bio)
 		return ERR_PTR(-ENOMEM);