diff mbox series

[1/8] blk-mq: add blk_mq_max_nr_hw_queues()

Message ID 20230712125455.1986455-2-ming.lei@redhat.com (mailing list archive)
State New, archived
Series blk-mq: fix wrong queue mapping for kdump kernel

Commit Message

Ming Lei July 12, 2023, 12:54 p.m. UTC
blk_mq_alloc_tag_set() may override set->nr_hw_queues to 1 for a kdump
kernel. This causes trouble for drivers, because blk-mq and the driver
then see different queue mappings. In particular, the only online CPU may
not be CPU 0 in a kdump kernel, where 'maxcpus=1' is passed on the kernel
command line; the driver may then map hctx0 to an inactive real hw queue
whose irq affinity is CPU 0 (offline).

The issue exists in all drivers that use managed irq and support
multiple hw queues.

Prepare to fix this kind of issue by adding a helper, so drivers can take
the blk-mq max nr_hw_queues limit into account when calculating the
number of io queues.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c         | 9 +++++++++
 include/linux/blk-mq.h | 1 +
 2 files changed, 10 insertions(+)

Comments

Christoph Hellwig July 12, 2023, 1 p.m. UTC | #1
On Wed, Jul 12, 2023 at 08:54:48PM +0800, Ming Lei wrote:
> +/* Max nr_hw_queues for each hw queue type */
> +unsigned int blk_mq_max_nr_hw_queues(void)
> +{
> +	if (is_kdump_kernel())
> +		return 1;
> +	return nr_cpu_ids;

Again, these is_kdump_kernel hacks don't make any sense.  The maximum
number of available CPUs needs to come through a proper API, and we
need to use it, not add hacks like this.

The only thing that makes sense here is to find the last CPU
in cpu_possible_mask, and for kdump kernels to ensure that number
is 1 or whatever low value they want.
Ming Lei July 12, 2023, 1:16 p.m. UTC | #2
On Wed, Jul 12, 2023 at 03:00:17PM +0200, Christoph Hellwig wrote:
> On Wed, Jul 12, 2023 at 08:54:48PM +0800, Ming Lei wrote:
> > +/* Max nr_hw_queues for each hw queue type */
> > +unsigned int blk_mq_max_nr_hw_queues(void)
> > +{
> > +	if (is_kdump_kernel())
> > +		return 1;
> > +	return nr_cpu_ids;
> 
> Again, these is_kdump_kernel hacks don't make any sense.   The amount
> of maximum available CPU needs to come through a proper API, and we
> need to use it, not add hacks like this.
> 
> The only thing that makes sense here is to find the last CPU
> in cpu_possible_mask, and for kdump kernels to ensure that number
> is 1 or whatever low value they want.

It doesn't matter how many CPUs are available, as long as at least one
CPU is online.

The problem is that blk_mq_alloc_tag_set() forces nr_hw_queues to 1 for
a kdump kernel; that is why blk_mq_max_nr_hw_queues() has to return 1
for a kdump kernel.

We have to tell the driver that blk-mq only supports 1 queue in a kdump
kernel.

Or:

Thomas, can we disable managed irq for the kdump kernel and switch to
non-managed irq? Then we can avoid changing drivers. I'd suggest
this approach if it is possible.

Thanks,
Ming
Christoph Hellwig July 12, 2023, 1:19 p.m. UTC | #3
On Wed, Jul 12, 2023 at 09:16:11PM +0800, Ming Lei wrote:
> The problem is that blk_mq_alloc_tag_set() forces to set nr_hw_queues
> as 1 for kdump kernel, that is why blk_mq_max_nr_hw_queues() has to
> return 1 for kdump kernel.

Well, let's fix that first and work from there.  The same argument
against that deep magic applies there as well.

> Thomas, can we disable managed irq for kdump kernel and switch to
> non-managed irq? Then we can avoid driver's change. I'd suggest
> this way if it is possible.

Why the heck would we?
Ming Lei July 12, 2023, 1:31 p.m. UTC | #4
On Wed, Jul 12, 2023 at 03:19:25PM +0200, Christoph Hellwig wrote:
> On Wed, Jul 12, 2023 at 09:16:11PM +0800, Ming Lei wrote:
> > The problem is that blk_mq_alloc_tag_set() forces to set nr_hw_queues
> > as 1 for kdump kernel, that is why blk_mq_max_nr_hw_queues() has to
> > return 1 for kdump kernel.
> 
> Well, let's fix that first and work from there.  Same argument against
> that deep magic applies there as well.

In short, the driver needs to figure out nr_hw_queues from hardware info
first, then pass it to blk_mq_alloc_tag_set(), but blk_mq_alloc_tag_set()
changes it, so an inconsistency is created.

The only solution along these lines is to tell the driver the max
supported number from the beginning, which is what this patchset does.

> 
> > Thomas, can we disable managed irq for kdump kernel and switch to
> > non-managed irq? Then we can avoid driver's change. I'd suggest
> > this way if it is possible.
> 
> Why the heck would we?

IMO managed irq doesn't make sense in a kdump kernel, which is very
resource-limited and has to be reliable.

PCI_IRQ_AFFINITY can be just a hint: pci_alloc_irq_vectors_affinity()
still allocates affinity in the managed way, so queue mapping can work
just fine, and the only difference is that genirq handles these irqs
as non-managed wrt. migration.

This approach should solve the queue mapping issue, but the driver still
allocates lots of queues, which wastes resources. So it looks like we
still have to fix the drivers.


Thanks, 
Ming

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5504719b970d..b764da69a416 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -140,6 +140,15 @@  void blk_mq_freeze_queue_wait(struct request_queue *q)
 }
 EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_wait);
 
+/* Max nr_hw_queues for each hw queue type */
+unsigned int blk_mq_max_nr_hw_queues(void)
+{
+	if (is_kdump_kernel())
+		return 1;
+	return nr_cpu_ids;
+}
+EXPORT_SYMBOL_GPL(blk_mq_max_nr_hw_queues);
+
 int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
 				     unsigned long timeout)
 {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 2b7fb8e87793..2407978fbc30 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -713,6 +713,7 @@  int blk_mq_alloc_sq_tag_set(struct blk_mq_tag_set *set,
 		const struct blk_mq_ops *ops, unsigned int queue_depth,
 		unsigned int set_flags);
 void blk_mq_free_tag_set(struct blk_mq_tag_set *set);
+unsigned int blk_mq_max_nr_hw_queues(void);
 
 void blk_mq_free_request(struct request *rq);
 int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,