
block: do not reverse request order when flushing plug list

Message ID 20230313093002.11756-1-jack@suse.cz (mailing list archive)
State New, archived

Commit Message

Jan Kara March 13, 2023, 9:30 a.m. UTC
Commit 26fed4ac4eab ("block: flush plug based on hardware and software
queue order") changed the flushing of the plug list to submit requests
one device at a time. However, while doing that it also started using
list_add_tail() instead of the previously used list_add(), thus
effectively submitting requests in reverse order. Also, when forming an
rq_list with
remaining requests (in case two or more devices are used), we
effectively reverse the ordering of the plug list for each device we
process. Submitting requests in reverse order has negative impact on
performance for rotational disks (when BFQ is not in use). We observe
10-25% regression in random 4k write throughput, as well as ~20%
regression in MariaDB OLTP benchmark on rotational storage on btrfs
filesystem.

Fix the problem by preserving ordering of the plug list when inserting
requests into the queuelist as well as by appending to requeue_list
instead of prepending to it.

Fixes: 26fed4ac4eab ("block: flush plug based on hardware and software queue order")
Signed-off-by: Jan Kara <jack@suse.cz>
---
 block/blk-mq.c         | 5 +++--
 include/linux/blk-mq.h | 6 ++++++
 2 files changed, 9 insertions(+), 2 deletions(-)

Comments

Christoph Hellwig March 14, 2023, 7:57 a.m. UTC | #1
Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Ming Lei March 14, 2023, 10:18 a.m. UTC | #2
On Mon, Mar 13, 2023 at 10:30:02AM +0100, Jan Kara wrote:
> Commit 26fed4ac4eab ("block: flush plug based on hardware and software
> queue order") changed flushing of plug list to submit requests one
> device at a time. However while doing that it also started using
> list_add_tail() instead of list_add() used previously thus effectively
> submitting requests in reverse order. Also when forming a rq_list with
> remaining requests (in case two or more devices are used), we
> effectively reverse the ordering of the plug list for each device we
> process. Submitting requests in reverse order has negative impact on
> performance for rotational disks (when BFQ is not in use). We observe
> 10-25% regression in random 4k write throughput, as well as ~20%
> regression in MariaDB OLTP benchmark on rotational storage on btrfs
> filesystem.
> 
> Fix the problem by preserving ordering of the plug list when inserting
> requests into the queuelist as well as by appending to requeue_list
> instead of prepending to it.

Also, in case of !plug->multiple_queues && !plug->has_elevator, requests
are still sent to the device in reverse order. Do we need to cover that case?


Thanks,
Ming
Jan Kara March 14, 2023, 12:09 p.m. UTC | #3
On Tue 14-03-23 18:18:20, Ming Lei wrote:
> On Mon, Mar 13, 2023 at 10:30:02AM +0100, Jan Kara wrote:
> > Commit 26fed4ac4eab ("block: flush plug based on hardware and software
> > queue order") changed flushing of plug list to submit requests one
> > device at a time. However while doing that it also started using
> > list_add_tail() instead of list_add() used previously thus effectively
> > submitting requests in reverse order. Also when forming a rq_list with
> > remaining requests (in case two or more devices are used), we
> > effectively reverse the ordering of the plug list for each device we
> > process. Submitting requests in reverse order has negative impact on
> > performance for rotational disks (when BFQ is not in use). We observe
> > 10-25% regression in random 4k write throughput, as well as ~20%
> > regression in MariaDB OLTP benchmark on rotational storage on btrfs
> > filesystem.
> > 
> > Fix the problem by preserving ordering of the plug list when inserting
> > requests into the queuelist as well as by appending to requeue_list
> > instead of prepending to it.
> 
> Also in case of !plug->multiple_queues && !plug->has_elevator, requests
> are still sent to device in reverse order, do we need to cover that case?

That's an interesting question. I suppose reversing the order in this case
could be surprising, e.g. for shingled storage. I don't think it matters that
much for normal rotational storage as there you presumably run with at
least some IO scheduler (we were using mq-deadline in our testing).

Avoiding the reversal will require small changes to plug handling (so that
we append to the plug list), but it shouldn't be too bad. Still, I'd do it in a
separate patch.

								Honza
Damien Le Moal March 14, 2023, 12:22 p.m. UTC | #4
On 3/14/23 21:09, Jan Kara wrote:
> On Tue 14-03-23 18:18:20, Ming Lei wrote:
>> On Mon, Mar 13, 2023 at 10:30:02AM +0100, Jan Kara wrote:
>>> Commit 26fed4ac4eab ("block: flush plug based on hardware and software
>>> queue order") changed flushing of plug list to submit requests one
>>> device at a time. However while doing that it also started using
>>> list_add_tail() instead of list_add() used previously thus effectively
>>> submitting requests in reverse order. Also when forming a rq_list with
>>> remaining requests (in case two or more devices are used), we
>>> effectively reverse the ordering of the plug list for each device we
>>> process. Submitting requests in reverse order has negative impact on
>>> performance for rotational disks (when BFQ is not in use). We observe
>>> 10-25% regression in random 4k write throughput, as well as ~20%
>>> regression in MariaDB OLTP benchmark on rotational storage on btrfs
>>> filesystem.
>>>
>>> Fix the problem by preserving ordering of the plug list when inserting
>>> requests into the queuelist as well as by appending to requeue_list
>>> instead of prepending to it.
>>
>> Also in case of !plug->multiple_queues && !plug->has_elevator, requests
>> are still sent to device in reverse order, do we need to cover that case?
> 
> That's an interesting question. I suppose reversing the order in this case
> could be surprising, e.g. for shingled storage. I don't think it matters that

We do not allow plugging writes to sequential zones on zoned block devices. The
reason is that write commands in the plug may end up being reordered with write
commands issued later but without plugging. We hit that issue while doing btrfs
work and the only easy solution we could think of was to not plug any write
command so that the write ordering is preserved down to the scheduler insertion.
For the dispatching order, it is the scheduler's (mq-deadline) responsibility.

> much for normal rotational storage as there you presumably run with at
> least some IO scheduler (we were using mq-deadline in our testing).
> 
> Avoiding the reversal will require small changes to plug handling (so that
> we append the plug list) but it shouldn't be too bad. Still I'd do it in a
> separate patch.
> 
> 								Honza
Jens Axboe March 14, 2023, 3:26 p.m. UTC | #5
On Mon, 13 Mar 2023 10:30:02 +0100, Jan Kara wrote:
> Commit 26fed4ac4eab ("block: flush plug based on hardware and software
> queue order") changed flushing of plug list to submit requests one
> device at a time. However while doing that it also started using
> list_add_tail() instead of list_add() used previously thus effectively
> submitting requests in reverse order. Also when forming a rq_list with
> remaining requests (in case two or more devices are used), we
> effectively reverse the ordering of the plug list for each device we
> process. Submitting requests in reverse order has negative impact on
> performance for rotational disks (when BFQ is not in use). We observe
> 10-25% regression in random 4k write throughput, as well as ~20%
> regression in MariaDB OLTP benchmark on rotational storage on btrfs
> filesystem.
> 
> [...]

Applied, thanks!

[1/1] block: do not reverse request order when flushing plug list
      commit: 34e0a279a993debaff03158fc2fbf6a00c093643

Best regards,

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d0cb2ef18fe2..cf1a39adf9a5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2725,6 +2725,7 @@  static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
 	struct blk_mq_hw_ctx *this_hctx = NULL;
 	struct blk_mq_ctx *this_ctx = NULL;
 	struct request *requeue_list = NULL;
+	struct request **requeue_lastp = &requeue_list;
 	unsigned int depth = 0;
 	LIST_HEAD(list);
 
@@ -2735,10 +2736,10 @@  static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
 			this_hctx = rq->mq_hctx;
 			this_ctx = rq->mq_ctx;
 		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
-			rq_list_add(&requeue_list, rq);
+			rq_list_add_tail(&requeue_lastp, rq);
 			continue;
 		}
-		list_add_tail(&rq->queuelist, &list);
+		list_add(&rq->queuelist, &list);
 		depth++;
 	} while (!rq_list_empty(plug->mq_list));
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index dd5ce1137f04..de0b0c3e7395 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -228,6 +228,12 @@  static inline unsigned short req_get_ioprio(struct request *req)
 	*(listptr) = rq;				\
 } while (0)
 
+#define rq_list_add_tail(lastpptr, rq)	do {		\
+	(rq)->rq_next = NULL;				\
+	**(lastpptr) = rq;				\
+	*(lastpptr) = &rq->rq_next;			\
+} while (0)
+
 #define rq_list_pop(listptr)				\
 ({							\
 	struct request *__req = NULL;			\