
blk-mq: Don't disable preemption around __blk_mq_run_hw_queue().

Message ID YnQHqx/5+54jd+U+@linutronix.de (mailing list archive)
State New, archived
Series blk-mq: Don't disable preemption around __blk_mq_run_hw_queue().

Commit Message

Sebastian Andrzej Siewior May 5, 2022, 5:21 p.m. UTC
__blk_mq_delay_run_hw_queue() disables preemption to get a stable
current CPU number and then invokes __blk_mq_run_hw_queue() if the CPU
number is part of the mask.

__blk_mq_run_hw_queue() acquires a spin_lock_t, which is a sleeping lock
on PREEMPT_RT and therefore can't be acquired with preemption disabled.
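
For illustration, a minimal sketch of the pre-patch pattern (not part of
the patch itself) and why it trips on PREEMPT_RT:

	int cpu = get_cpu();	/* implies preempt_disable() */

	if (cpumask_test_cpu(cpu, hctx->cpumask))
		__blk_mq_run_hw_queue(hctx);	/* acquires a spin_lock_t;
						 * on PREEMPT_RT this is a
						 * sleeping rtmutex, taken
						 * here with preemption off
						 */
	put_cpu();		/* implies preempt_enable() */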

If it is important that the current CPU matches the requested CPU mask
and that the context does not migrate to another CPU while
__blk_mq_run_hw_queue() is invoked, then it is possible to achieve this
by disabling migration while keeping the context preemptible.

Disable only migration while testing the CPU mask and invoking
__blk_mq_run_hw_queue().
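
Roughly, the resulting pattern (see the diff below for the exact
change): migrate_disable() keeps the task preemptible but pins it to
the current CPU, so raw_smp_processor_id() stays stable and sleeping
locks remain legal:

	migrate_disable();	/* task stays preemptible, cannot migrate */
	if (cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask))
		__blk_mq_run_hw_queue(hctx);	/* spin_lock_t is safe here,
						 * even on PREEMPT_RT */
	migrate_enable();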

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 block/blk-mq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 84d749511f551..a28406ea043a8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2046,14 +2046,14 @@  static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 		return;
 
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
-		int cpu = get_cpu();
-		if (cpumask_test_cpu(cpu, hctx->cpumask)) {
+		migrate_disable();
+		if (cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
 			__blk_mq_run_hw_queue(hctx);
-			put_cpu();
+			migrate_enable();
 			return;
 		}
 
-		put_cpu();
+		migrate_enable();
 	}
 
 	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,