[2/2] block: support adding less than len in bio_add_hw_page

Message ID 20231204173419.782378-3-hch@lst.de (mailing list archive)
State New, archived
Series [1/2] block: prevent an integer overflow in bvec_try_merge_hw_page

Commit Message

Christoph Hellwig Dec. 4, 2023, 5:34 p.m. UTC
bio_add_hw_page currently either adds the entire requested length or
fails.  This is fine for the existing callers, which always add
PAGE_SIZE worth of data, given that max_segment_size and max_sectors
must always allow at least a page worth of data.  But when we want to
use it for bigger amounts of data, the add can fail even when part of
the data would still fit, and creating a fallback for that becomes
really annoying in the callers.

Make use of the existing API design, which allows returning a smaller
length than the one passed in, and add up to max_segment_size worth
of data from a larger input.  All the existing callers are fine with
this - not because they handle this return correctly, but because they
never pass more than a page in.
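
As an illustration of the resulting contract, a caller that wants to add
a larger physically contiguous range would loop on the returned length
roughly like this (hypothetical sketch, not part of this patch; the
function name and error code are illustrative only):

	/*
	 * bio_add_hw_page() may now add less than @len, so retry with
	 * the remainder.  A return of 0 means nothing could be added,
	 * e.g. the bio is already full.
	 */
	static int add_contig_range(struct request_queue *q, struct bio *bio,
			struct page *page, unsigned int len,
			unsigned int offset, unsigned int max_sectors)
	{
		bool same_page = false;

		while (len) {
			int added = bio_add_hw_page(q, bio, page, len,
					offset, max_sectors, &same_page);

			if (!added)
				return -EAGAIN;	/* illustrative error code */
			offset += added;
			len -= added;
		}
		return 0;
	}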

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Johannes Thumshirn Dec. 5, 2023, 4:34 p.m. UTC | #1
On 04.12.23 18:35, Christoph Hellwig wrote:
> [...] All the existing callers are fine with
> this - not because they handle this return correctly, but because they
> never pass more than a page in.


Wouldn't it also be beneficial to do proper return checking in the 
current callers on top of this series?

Keith Busch Dec. 5, 2023, 4:37 p.m. UTC | #2
On Mon, Dec 04, 2023 at 06:34:19PM +0100, Christoph Hellwig wrote:
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index cef830adbc06e0..335d81398991b3 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -966,10 +966,13 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
>  		struct page *page, unsigned int len, unsigned int offset,
>  		unsigned int max_sectors, bool *same_page)
>  {
> +	unsigned int max_size = max_sectors << SECTOR_SHIFT;
> +
>  	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
>  		return 0;
>  
> -	if (((bio->bi_iter.bi_size + len) >> SECTOR_SHIFT) > max_sectors)
> +	len = min3(len, max_size, queue_max_segment_size(q));
> +	if (len > max_size - bio->bi_iter.bi_size)
>  		return 0;
>  
>  	if (bio->bi_vcnt > 0) {

Not related to your patch, but noticed while reviewing: would it not be
beneficial to truncate 'len' down to what can fit in the current segment
instead of assuming the max segment size?

Christoph Hellwig Dec. 5, 2023, 6:52 p.m. UTC | #3
On Tue, Dec 05, 2023 at 04:34:02PM +0000, Johannes Thumshirn wrote:
> On 04.12.23 18:35, Christoph Hellwig wrote:
> > [...] All the existing callers are fine with
> > this - not because they handle this return correctly, but because they
> > never pass more than a page in.
> 
> 
> Wouldn't it also be beneficial to do proper return checking in the 
> current callers on top of this series?

I did look into that - but given how the callers are structured it
would create an even bigger mess.  Except for the nvmet zns backend,
they all either allocate the added page right next to the call or take
the output from a pin_user_pages variant.

Christoph Hellwig Dec. 5, 2023, 6:52 p.m. UTC | #4
On Tue, Dec 05, 2023 at 09:37:47AM -0700, Keith Busch wrote:
> > -	if (((bio->bi_iter.bi_size + len) >> SECTOR_SHIFT) > max_sectors)
> > +	len = min3(len, max_size, queue_max_segment_size(q));
> > +	if (len > max_size - bio->bi_iter.bi_size)
> >  		return 0;
> >  
> >  	if (bio->bi_vcnt > 0) {
> 
> Not related to your patch, but noticed while reviewing: would it not be
> beneficial to truncate 'len' down to what can fit in the current segment
> instead of assuming the max segment size?

That would be pretty intrusive to the code shared with the normal,
non-hw-limited bio add path, so I decided to keep it simple.
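
For reference, the truncation suggested above might look roughly like
the following, where page_mergeable_with_bvec() is a hypothetical
stand-in for the contiguity check done in bvec_try_merge_hw_page():

	/*
	 * Hypothetical sketch, not applied: if the new range would merge
	 * into the last bvec, clamp @len to the room left in that
	 * segment instead of the full queue_max_segment_size().
	 */
	if (bio->bi_vcnt > 0) {
		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];

		if (page_mergeable_with_bvec(bv, page, offset))
			len = min(len, queue_max_segment_size(q) - bv->bv_len);
	}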

Patch

diff --git a/block/bio.c b/block/bio.c
index cef830adbc06e0..335d81398991b3 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -966,10 +966,13 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page)
 {
+	unsigned int max_size = max_sectors << SECTOR_SHIFT;
+
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
 
-	if (((bio->bi_iter.bi_size + len) >> SECTOR_SHIFT) > max_sectors)
+	len = min3(len, max_size, queue_max_segment_size(q));
+	if (len > max_size - bio->bi_iter.bi_size)
 		return 0;
 
 	if (bio->bi_vcnt > 0) {
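
A worked example of the new clamp, using illustrative limits not taken
from any particular device:

	/*
	 * Assume max_sectors = 256 and queue_max_segment_size(q) = 64 KiB,
	 * with an empty bio and a caller passing len = 1 MiB.
	 */
	unsigned int max_size = 256 << SECTOR_SHIFT;	/* 131072 (128 KiB) */

	len = min3(len, max_size, queue_max_segment_size(q));	/* 65536 */
	/*
	 * 65536 <= 131072 - 0, so the size check passes and 65536 is
	 * returned; the caller resubmits the remaining 960 KiB in
	 * further calls.
	 */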