blk-mq: allow 4x BLK_MAX_REQUEST_COUNT at blk_plug for multiple_queues

Message ID 20210907230338.227903-1-songliubraving@fb.com (mailing list archive)
State New, archived
Series blk-mq: allow 4x BLK_MAX_REQUEST_COUNT at blk_plug for multiple_queues

Commit Message

Song Liu Sept. 7, 2021, 11:03 p.m. UTC
Limiting the number of requests to BLK_MAX_REQUEST_COUNT at blk_plug hurts
performance for large md arrays. [1] shows that the resync speed of an md
array drops once the array has more than 16 HDDs.

Fix this by allowing more requests on the plug queue. The multiple_queues
flag is used to apply the higher limit only to multiple-queue cases.

[1] https://lore.kernel.org/linux-raid/CAFDAVznS71BXW8Jxv6k9dXc2iR3ysX3iZRBww_rzA8WifBFxGg@mail.gmail.com/
Tested-by: Marcin Wanat <marcin.wanat@gmail.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
---
 block/blk-mq.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)
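
As a quick sanity check on the numbers (assuming BLK_MAX_REQUEST_COUNT is
still 16, its value in mainline at the time of this patch), the higher
ceiling for multiple-queue plugs works out to:

	BLK_MAX_REQUEST_COUNT * 4 = 16 * 4 = 64 plugged requests

while single-queue plugs keep the original limit of 16.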

Comments

Jens Axboe Sept. 7, 2021, 11:05 p.m. UTC | #1
On 9/7/21 5:03 PM, Song Liu wrote:
> Limiting the number of requests to BLK_MAX_REQUEST_COUNT at blk_plug hurts
> performance for large md arrays. [1] shows that the resync speed of an md
> array drops once the array has more than 16 HDDs.
> 
> Fix this by allowing more requests on the plug queue. The multiple_queues
> flag is used to apply the higher limit only to multiple-queue cases.

Applied, thanks.
Finlayson, James M CIV (USA) Sept. 10, 2021, 2:43 p.m. UTC | #2
All,
I have a suspicion this will help my efforts to increase the IOPS capability of mdraid in situations with 10-12 NVMe drives per RAID group.

Pure neophyte question, which I apologize for in advance: how can I test this? Does this end up in a 5.15 release candidate kernel?

I want to make contributions wherever I can, as I have the hardware and the need, so I can act as a performance validator within reason. I know I can't make contributions as a developer, but I'm willing to contribute in areas where our goals are in alignment, and this appears to be one.

Regards,
Jim

Jens Axboe Sept. 10, 2021, 2:48 p.m. UTC | #3
On 9/10/21 8:43 AM, Finlayson, James M CIV (USA) wrote:
> All,
> I have a suspicion this will help my efforts to increase the IOPS
> capability of mdraid in situations with 10-12 NVMe drives per RAID group.
> 
> Pure neophyte question, which I apologize for in advance: how can I
> test this? Does this end up in a 5.15 release candidate kernel?

It's queued up to go into Linus's tree before -rc1, so should be in
5.15-rc1 for you to test.

> I want to make contributions wherever I can, as I have the hardware
> and the need, so I can act as a performance validator within reason.
> I know I can't make contributions as a developer, but I'm willing to
> contribute in areas where our goals are in alignment, and this
> appears to be one.

Testing is definitely valuable, particularly for something like this! So
please do test -rc1 and report back your findings.

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2c4ac51e54eba..d4025c15bd108 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2155,6 +2155,18 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
 	}
 }
 
+/*
+ * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
+ * queues. This is important for md arrays to benefit from merging
+ * requests.
+ */
+static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
+{
+	if (plug->multiple_queues)
+		return BLK_MAX_REQUEST_COUNT * 4;
+	return BLK_MAX_REQUEST_COUNT;
+}
+
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2251,7 +2263,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 		else
 			last = list_entry_rq(plug->mq_list.prev);
 
-		if (request_count >= BLK_MAX_REQUEST_COUNT || (last &&
+		if (request_count >= blk_plug_max_rq_count(plug) || (last &&
 		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
 			blk_flush_plug_list(plug, false);
 			trace_block_plug(q);
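
For readers who prefer to see the resulting code rather than the diff, this is
roughly how the changed path reads once the patch is applied (a sketch
reconstructed from the hunks above; the rest of blk_mq_submit_bio() is
elided). The multiple_queues flag is set when the plug list ends up holding
requests for more than one request queue, which is the typical case for an md
array issuing I/O to several member devices, so those plugs now batch four
times as many requests before being flushed.

/* New helper added by the patch. */
static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
{
	if (plug->multiple_queues)
		return BLK_MAX_REQUEST_COUNT * 4;
	return BLK_MAX_REQUEST_COUNT;
}

	/* ... inside blk_mq_submit_bio(), once request_count and last are set up ... */
	if (request_count >= blk_plug_max_rq_count(plug) || (last &&
	    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
		blk_flush_plug_list(plug, false);
		trace_block_plug(q);
	}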