
[RFC,0/7] blk-mq: improvement on handling IO during CPU hotplug

Message ID 20190712024726.1227-1-ming.lei@redhat.com (mailing list archive)

Message

Ming Lei July 12, 2019, 2:47 a.m. UTC
Hi,

Thomas mentioned:
    "
     That was the constraint of managed interrupts from the very beginning:
    
      The driver/subsystem has to quiesce the interrupt line and the associated
      queue _before_ it gets shutdown in CPU unplug and not fiddle with it
      until it's restarted by the core when the CPU is plugged in again.
    "

But neither drivers nor blk-mq quiesce the queue before one hctx becomes
dead (i.e. all CPUs mapped to the hctx are offline). Worse still, blk-mq still
tries to run the hw queue after the hctx is dead, see blk_mq_hctx_notify_dead().

This patchset tries to address the issue by two stages:

1) add one new cpuhp state of CPUHP_AP_BLK_MQ_ONLINE

- mark the hctx as internally stopped, and drain all in-flight requests
if the hctx is going to become dead.

2) re-submit IO in the state of CPUHP_BLK_MQ_DEAD after the hctx becomes dead

- steal bios from the requests and resubmit them via generic_make_request();
these IOs are then mapped to other live hctxs for dispatch

Please comment & review, thanks!


Ming Lei (7):
  blk-mq: add new state of BLK_MQ_S_INTERNAL_STOPPED
  blk-mq: add blk-mq flag of BLK_MQ_F_NO_MANAGED_IRQ
  blk-mq: stop to handle IO before hctx's all CPUs become offline
  blk-mq: add callback of .free_request
  SCSI: implement .free_request callback
  blk-mq: re-submit IO in case that hctx is dead
  blk-mq: handle requests dispatched from IO scheduler in case that hctx
    is dead

 block/blk-mq-debugfs.c     |   2 +
 block/blk-mq-tag.c         |   2 +-
 block/blk-mq-tag.h         |   2 +
 block/blk-mq.c             | 147 ++++++++++++++++++++++++++++++++++---
 block/blk-mq.h             |   3 +-
 drivers/block/loop.c       |   2 +-
 drivers/md/dm-rq.c         |   2 +-
 drivers/scsi/scsi_lib.c    |  13 ++++
 include/linux/blk-mq.h     |  12 +++
 include/linux/cpuhotplug.h |   1 +
 10 files changed, 170 insertions(+), 16 deletions(-)

Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Keith Busch <keith.busch@intel.com>

Comments

John Garry July 16, 2019, 6:54 a.m. UTC | #1
On 12/07/2019 10:47, Ming Lei wrote:
> [...]

Hi Ming,

FWIW, to me this series looks reasonable.

So you have plans to post an updated "[PATCH 0/9] blk-mq/scsi: convert 
private reply queue into blk_mq hw queue" then?

All the best,
John

Ming Lei July 16, 2019, 7:18 a.m. UTC | #2
Hi John,

On Tue, Jul 16, 2019 at 2:55 PM John Garry <john.garry@huawei.com> wrote:
>
> On 12/07/2019 10:47, Ming Lei wrote:
> > [...]
>
> Hi Ming,
>
> FWIW, to me this series looks reasonable.

Thanks!

>
> So you have plans to post an updated "[PATCH 0/9] blk-mq/scsi: convert
> private reply queue into blk_mq hw queue" then?

V2 has been in the following tree for a while:

https://github.com/ming1/linux/commits/v5.2-rc-host-tags-V2

It works, however the implementation is a bit ugly even though the idea
is simple.

So I think we need to think about it further, to find a better implementation
or approach.

Thanks,
Ming Lei
John Garry July 16, 2019, 8:13 a.m. UTC | #3
On 16/07/2019 15:18, Ming Lei wrote:
>> So you have plans to post an updated "[PATCH 0/9] blk-mq/scsi: convert
>> private reply queue into blk_mq hw queue" then?

Hi Ming,

> V2 has been in the following tree for a while:
> 
> https://github.com/ming1/linux/commits/v5.2-rc-host-tags-V2
> 
> It works, however the implementation is a bit ugly even though the idea
> is simple.

Yeah, sorry to say that I agree - the BLK_MQ_F_TAG_HCTX_SHARED checks 
look bolted on.

> 
> So I think we may need to think of it further, for better implementation or
> approach.

Understood.

But at least we can test it to ensure there is no performance regression.

Thanks,
John

> 
> Thanks,
> Ming Lei