
[V2,0/5] blk-mq: quiesce improvement

Message ID 20211130073752.3005936-1-ming.lei@redhat.com

Message

Ming Lei Nov. 30, 2021, 7:37 a.m. UTC
Hi Guys,

The 1st patch removes hctx_lock and hctx_unlock, and optimizes the
dispatch code path a bit.
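
For context, the helpers being removed currently look like this in
blk-mq.c (lightly paraphrased); the annoyance is that the lock flavour
depends on BLK_MQ_F_BLOCKING, so every dispatch-path caller has to
thread an srcu_idx through them:

/* plain RCU for non-blocking hctxs, SRCU for blocking ones */
static void hctx_lock(struct blk_mq_hw_ctx *hctx, int *srcu_idx)
{
	if (!(hctx->flags & BLK_MQ_F_BLOCKING))
		rcu_read_lock();
	else
		*srcu_idx = srcu_read_lock(hctx->srcu);
}

static void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
{
	if (!(hctx->flags & BLK_MQ_F_BLOCKING))
		rcu_read_unlock();
	else
		srcu_read_unlock(hctx->srcu, srcu_idx);
}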

The 2nd patch moves srcu from blk_mq_hw_ctx to request_queue.
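
With that, SRCU is needed at most once per queue rather than per hctx.
A rough sketch of the resulting layout (illustrative only; see patch 2
for the real field and flag names):

struct request_queue {
	/* ... existing fields ... */

	/* tail-allocated only when the tag set is BLK_MQ_F_BLOCKING */
	struct srcu_struct srcu[];
};

/* illustrative test; the series gates SRCU use on a queue flag */
#define blk_queue_has_srcu(q) \
	test_bit(QUEUE_FLAG_HAS_SRCU, &(q)->queue_flags)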

The other patches add a new helper to support quiescing queues in
parallel.
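
The idea behind the helper: a non-blocking queue is quiesced by a plain
RCU grace period, which is global, so a caller quiescing many such
queues needs only one synchronize_rcu(). A minimal sketch (the exact
form in patch 3 may differ):

/*
 * True if one shared synchronize_rcu() completes the quiesce wait
 * for this queue, i.e. the queue does not rely on SRCU.
 */
static inline bool blk_mq_shared_quiesce_wait(struct request_queue *q)
{
	return !blk_queue_has_srcu(q);
}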

V2:
	- add the 'remove hctx_lock and hctx_unlock' patch
	- replace ->alloc_srcu with a queue flag, as suggested by Sagi

Ming Lei (5):
  blk-mq: remove hctx_lock and hctx_unlock
  blk-mq: move srcu from blk_mq_hw_ctx to request_queue
  blk-mq: add helper of blk_mq_shared_quiesce_wait()
  nvme: quiesce namespace queue in parallel
  scsi: use blk-mq quiesce APIs to implement scsi_host_block

 block/blk-core.c         |  27 +++++++--
 block/blk-mq-sysfs.c     |   2 -
 block/blk-mq.c           | 116 +++++++++++++--------------------------
 block/blk-sysfs.c        |   3 +-
 block/blk.h              |  10 +++-
 block/genhd.c            |   2 +-
 drivers/nvme/host/core.c |   9 ++-
 drivers/scsi/scsi_lib.c  |  16 +++---
 include/linux/blk-mq.h   |  21 ++++---
 include/linux/blkdev.h   |   9 +++
 10 files changed, 109 insertions(+), 106 deletions(-)

Comments

Ismael Luceno June 7, 2022, 11:21 a.m. UTC | #1
Hi Ming,

Has this patch been dropped/abandoned?

On Tue, 30 Nov 2021 15:37:51 +0800
Ming Lei <ming.lei@redhat.com> wrote:
> Chao Leng reported that in case of lots of namespaces, it may take
> quite a while for nvme_stop_queues() to quiesce all namespace queues.
>
> Improve nvme_stop_queues() by running quiesce in parallel, and just
> wait once if global quiesce wait is allowed.
>
> Link:
> https://lore.kernel.org/linux-block/cc732195-c053-9ce4-e1a7-e7f6dcf762ac@huawei.com/
> Reported-by: Chao Leng <lengchao@huawei.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
<...>
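
(The parallel quiesce described above amounts to roughly the following
reshape of nvme_stop_queues(); this is a sketch reconstructed from the
patch description rather than the patch itself, using the existing
blk_mq_quiesce_queue_nowait()/blk_mq_wait_quiesce_done() APIs plus the
blk_mq_shared_quiesce_wait() helper from patch 3:)

void nvme_stop_queues(struct nvme_ctrl *ctrl)
{
	struct nvme_ns *ns;

	down_read(&ctrl->namespaces_rwsem);

	/* start quiesce on every namespace queue, without waiting */
	list_for_each_entry(ns, &ctrl->namespaces, list)
		blk_mq_quiesce_queue_nowait(ns->queue);

	/* wait once if one RCU grace period covers all queues, else per queue */
	list_for_each_entry(ns, &ctrl->namespaces, list) {
		blk_mq_wait_quiesce_done(ns->queue);
		if (blk_mq_shared_quiesce_wait(ns->queue))
			break;
	}

	up_read(&ctrl->namespaces_rwsem);
}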
Ming Lei June 7, 2022, 2:03 p.m. UTC | #2
On Tue, Jun 07, 2022 at 01:21:18PM +0200, Ismael Luceno wrote:
> Hi Ming,
> 
> Has this patch been dropped/abandoned?

Hi Ismael,

The whole patchset wasn't accepted, if I remember correctly, but we
eventually moved srcu out of hctx in another patchset.

If you think the patch 'nvme: quiesce namespace queue in parallel' is
useful, please provide a bit of info about your use case, then we may
work out a similar patch if necessary.


Thanks,
Ming
Ismael Luceno July 6, 2022, 3:37 p.m. UTC | #3
On Tue, 7 Jun 2022 22:03:40 +0800
Ming Lei <ming.lei@redhat.com> wrote:
<...>
> If you think the patch 'nvme: quiesce namespace queue in parallel' is
> useful, please provide a bit of info about your use case, then we may
> work out a similar patch if necessary.

Chao Leng's outgoing email (lengchao@huawei.com) is restricted, so I
got this from him through a couple of indirections:
> Hi Ismael and Ming. The case: when multipathing software is used and
> one path fails, failing over to another good path may take a long
> time. This matters for scenarios that require low latency and high
> reliability, such as real-time trading.
>
> This patch can fix the bug.

Same thing he said here:
https://lore.kernel.org/linux-nvme/cc732195-c053-9ce4-e1a7-e7f6dcf762ac@huawei.com/

Huawei is still looking for a solution to be merged into mainline.