[for-next,v10,5/7] block: factor out bio_map_get helper

Message ID 20220927173610.7794-6-joshi.k@samsung.com (mailing list archive)
State New, archived
Series [for-next,v10,1/7] io_uring: add io_uring_cmd_import_fixed

Commit Message

Kanchan Joshi Sept. 27, 2022, 5:36 p.m. UTC
Move the bio allocation logic from bio_map_user_iov to a new helper,
bio_map_get. It is named so because its functionality is the opposite of
what is done inside bio_map_put. This is a prep patch.

Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
---
 block/blk-map.c | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

Comments

Christoph Hellwig Sept. 28, 2022, 5:31 p.m. UTC | #1
On Tue, Sep 27, 2022 at 11:06:08PM +0530, Kanchan Joshi wrote:
> Move the bio allocation logic from bio_map_user_iov to a new helper,
> bio_map_get. It is named so because its functionality is the opposite of
> what is done inside bio_map_put. This is a prep patch.

I'm still not a fan of using bio_sets for passthrough and would be
much happier if we could drill down into what the problems with the
slab per-cpu allocator are, but it seems like I've lost that fight
against Jens...

> +static struct bio *bio_map_get(struct request *rq, unsigned int nr_vecs,
>  		gfp_t gfp_mask)

But these names just seem rather misleading.  Why not something
like blk_rq_map_bio_alloc and blk_mq_map_bio_put?

Not really new in this code, but a question to Jens:  the existing
bio_map_user_iov has no real upper bound on the number of bios
allocated, so how does that fit with the very limited pool size of
fs_bio_set?

Jens Axboe Sept. 28, 2022, 5:49 p.m. UTC | #2
On 9/28/22 11:31 AM, Christoph Hellwig wrote:
> On Tue, Sep 27, 2022 at 11:06:08PM +0530, Kanchan Joshi wrote:
>> Move the bio allocation logic from bio_map_user_iov to a new helper,
>> bio_map_get. It is named so because its functionality is the opposite of
>> what is done inside bio_map_put. This is a prep patch.
> 
> I'm still not a fan of using bio_sets for passthrough and would be
> much happier if we could drill down into what the problems with the
> slab per-cpu allocator are, but it seems like I've lost that fight
> against Jens...

I don't think there are necessarily big problems with the slab side,
it's just that the per-cpu freeing there needs to be IRQ safe. And the
double cmpxchg() used for that isn't that fast compared to being able
to cache these locally with just preempt protection.
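
[A minimal illustrative sketch of the contrast described above - this is
not the actual bio_alloc_cache code, and every name below is made up. A
per-CPU cache that is only ever touched from process context can get by
with disabling preemption; a freelist that must also accept frees from
IRQ context needs IRQ-safe atomics, such as the double cmpxchg()
mentioned above.]

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

/* Hypothetical per-CPU free list, only ever touched in process context. */
struct obj_cache {
	void		*slots[32];
	unsigned int	nr;
};
static DEFINE_PER_CPU(struct obj_cache, obj_cache);

static void *obj_cache_get(void)
{
	struct obj_cache *c;
	void *p = NULL;

	preempt_disable();		/* cheap: no IRQ masking, no cmpxchg */
	c = this_cpu_ptr(&obj_cache);
	if (c->nr)
		p = c->slots[--c->nr];
	preempt_enable();
	return p;
}

static bool obj_cache_put(void *p)
{
	struct obj_cache *c;
	bool cached = false;

	preempt_disable();
	c = this_cpu_ptr(&obj_cache);
	if (c->nr < ARRAY_SIZE(c->slots)) {
		c->slots[c->nr++] = p;
		cached = true;
	}
	preempt_enable();
	return cached;			/* if false, caller frees to slab */
}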

>> +static struct bio *bio_map_get(struct request *rq, unsigned int nr_vecs,
>>  		gfp_t gfp_mask)
> 
> But these names just seem rather misleading.  Why not something
> like blk_rq_map_bio_alloc and blk_mq_map_bio_put?
> 
> Not really new in this code, but a question to Jens:  the existing
> bio_map_user_iov has no real upper bound on the number of bios
> allocated, so how does that fit with the very limited pool size of
> fs_bio_set?

Good question - I think we'd need to ensure that, once we get
past the initial alloc, we clear any gfp flags that'd make
the mempool_alloc() wait for completions.

Jens Axboe Sept. 28, 2022, 5:53 p.m. UTC | #3
On 9/28/22 11:49 AM, Jens Axboe wrote:
>> Not really new in this code, but a question to Jens:  the existing
>> bio_map_user_iov has no real upper bound on the number of bios
>> allocated, so how does that fit with the very limited pool size of
>> fs_bio_set?
> 
> Good question - I think we'd need to ensure that, once we get
> past the initial alloc, we clear any gfp flags that'd make
> the mempool_alloc() wait for completions.

Either that, or just have the passthrough code use non-waiting flags to
begin with. Not guaranteeing forward progress is a bit iffy though...
But in practice it'd be no different than getting a failure due to OOM
because the application submitted a big request and we needed to alloc
and map multiple bios.
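
[A rough sketch of the flag-clearing idea from the two replies above -
hedged, not code from this series; bio_map_get is the helper added by
the patch, the surrounding lines are illustrative. Only the first
allocation is allowed to enter direct reclaim and wait on the mempool;
later allocations for the same mapping drop __GFP_DIRECT_RECLAIM, so
mempool_alloc() returns NULL instead of sleeping for an element that
this same caller may still be holding.]

	bio = bio_map_get(rq, nr_vecs, gfp_mask);	/* may block */
	if (!bio)
		return -ENOMEM;
	/*
	 * Further bios for this mapping must not sleep in
	 * mempool_alloc(): without __GFP_DIRECT_RECLAIM the
	 * mempool fails fast rather than waiting for a free.
	 */
	gfp_mask &= ~__GFP_DIRECT_RECLAIM;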

Anuj Gupta Sept. 29, 2022, 11:34 a.m. UTC | #4
On Wed, Sep 28, 2022 at 07:31:21PM +0200, Christoph Hellwig wrote:
> On Tue, Sep 27, 2022 at 11:06:08PM +0530, Kanchan Joshi wrote:
> > Move the bio allocation logic from bio_map_user_iov to a new helper,
> > bio_map_get. It is named so because its functionality is the opposite of
> > what is done inside bio_map_put. This is a prep patch.
> 
> I'm still not a fan of using bio_sets for passthrough and would be
> much happier if we could drill down into what the problems with the
> slab per-cpu allocator are, but it seems like I've lost that fight
> against Jens...
> 
> > +static struct bio *bio_map_get(struct request *rq, unsigned int nr_vecs,
> >  		gfp_t gfp_mask)
> 
> But these names just seem rather misleading.  Why not something
> like blk_rq_map_bio_alloc and blk_mq_map_bio_put?

Agreed, will rename the alloc and put functions in the next iteration.

> 
> Not really new in this code, but a question to Jens:  the existing
> bio_map_user_iov has no real upper bound on the number of bios
> allocated, so how does that fit with the very limited pool size of
> fs_bio_set?

--
Anuj Gupta


Patch

diff --git a/block/blk-map.c b/block/blk-map.c
index 7693f8e3c454..a7838879e28e 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -241,17 +241,10 @@  static void bio_map_put(struct bio *bio)
 	}
 }
 
-static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
+static struct bio *bio_map_get(struct request *rq, unsigned int nr_vecs,
 		gfp_t gfp_mask)
 {
-	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
-	unsigned int nr_vecs = iov_iter_npages(iter, BIO_MAX_VECS);
 	struct bio *bio;
-	int ret;
-	int j;
-
-	if (!iov_iter_count(iter))
-		return -EINVAL;
 
 	if (rq->cmd_flags & REQ_POLLED) {
 		blk_opf_t opf = rq->cmd_flags | REQ_ALLOC_CACHE;
@@ -259,13 +252,31 @@  static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		bio = bio_alloc_bioset(NULL, nr_vecs, opf, gfp_mask,
 					&fs_bio_set);
 		if (!bio)
-			return -ENOMEM;
+			return NULL;
 	} else {
 		bio = bio_kmalloc(nr_vecs, gfp_mask);
 		if (!bio)
-			return -ENOMEM;
+			return NULL;
 		bio_init(bio, NULL, bio->bi_inline_vecs, nr_vecs, req_op(rq));
 	}
+	return bio;
+}
+
+static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
+		gfp_t gfp_mask)
+{
+	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
+	unsigned int nr_vecs = iov_iter_npages(iter, BIO_MAX_VECS);
+	struct bio *bio;
+	int ret;
+	int j;
+
+	if (!iov_iter_count(iter))
+		return -EINVAL;
+
+	bio = bio_map_get(rq, nr_vecs, gfp_mask);
+	if (bio == NULL)
+		return -ENOMEM;
 
 	while (iov_iter_count(iter)) {
 		struct page **pages, *stack_pages[UIO_FASTIOV];