[V4] blk-mq: don't schedule block kworker on isolated CPUs

Message ID 20240320023446.882006-1-ming.lei@redhat.com (mailing list archive)
State New, archived
Series [V4] blk-mq: don't schedule block kworker on isolated CPUs

Commit Message

Ming Lei March 20, 2024, 2:34 a.m. UTC
The kernel parameters `isolcpus=` and `nohz_full=` are used to isolate CPUs
for specific tasks, and block IO isn't expected to disturb these CPUs;
the blk-mq kworker shouldn't be scheduled on isolated CPUs. If the kworker
does run on an isolated CPU, it can also cause long block IO latency.

The kernel workqueue only respects CPU isolation for WQ_UNBOUND; for a
bound WQ, the responsibility is on the user, because the CPU is passed
explicitly as a WQ API parameter, as in mod_delayed_work_on(cpu),
queue_delayed_work_on(cpu) and queue_work_on(cpu).
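
For reference, these are the bound-WQ entry points mentioned above; each
takes the target CPU as an explicit parameter, so the work will run there
even if that CPU is isolated (prototypes as declared in <linux/workqueue.h>):

	bool queue_work_on(int cpu, struct workqueue_struct *wq,
			   struct work_struct *work);
	bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
				   struct delayed_work *dwork,
				   unsigned long delay);
	bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
				 struct delayed_work *dwork,
				 unsigned long delay);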

So avoid running the blk-mq kworker on isolated CPUs by removing isolated
CPUs from hctx->cpumask. Meanwhile, use the queue map instead of
hctx->cpumask to check whether all CPUs mapped to this hw queue are
offline; this avoids any cost in the fast IO code path, and is safe
since hctx->cpumask is only used in these two cases.
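
As a minimal sketch of the hctx->cpumask side of this (the actual hunk is
in blk_mq_map_swqueue(), see the patch below):

	for_each_cpu(cpu, hctx->cpumask) {
		if (cpu_is_isolated(cpu))
			cpumask_clear_cpu(cpu, hctx->cpumask);
	}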

Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Andrew Theurer <atheurer@redhat.com>
Cc: Joe Mario <jmario@redhat.com>
Cc: Sebastian Jug <sejug@redhat.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Tejun Heo <tj@kernel.org>
Tested-by: Joe Mario <jmario@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
V4:
	- improve comment & commit log as suggested by Tim
V3:
	- avoid checking an invalid cpu, as reported by Bart
	- take the current cpu (going offline, but not finished yet) into account
	- simplify blk_mq_hctx_has_online_cpu()

V2:
	- remove the module parameter; meanwhile use the queue map to check
	whether all cpus in one hctx are offline

 block/blk-mq.c | 51 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 41 insertions(+), 10 deletions(-)

Comments

Ming Lei March 21, 2024, 12:49 p.m. UTC | #1
On Wed, Mar 20, 2024 at 10:34:46AM +0800, Ming Lei wrote:
> The kernel parameters `isolcpus=` and `nohz_full=` are used to isolate CPUs
> for specific tasks, and block IO isn't expected to disturb these CPUs;
> the blk-mq kworker shouldn't be scheduled on isolated CPUs. If the kworker
> does run on an isolated CPU, it can also cause long block IO latency.
> 
> The kernel workqueue only respects CPU isolation for WQ_UNBOUND; for a
> bound WQ, the responsibility is on the user, because the CPU is passed
> explicitly as a WQ API parameter, as in mod_delayed_work_on(cpu),
> queue_delayed_work_on(cpu) and queue_work_on(cpu).
> 
> So avoid running the blk-mq kworker on isolated CPUs by removing isolated
> CPUs from hctx->cpumask. Meanwhile, use the queue map instead of
> hctx->cpumask to check whether all CPUs mapped to this hw queue are
> offline; this avoids any cost in the fast IO code path, and is safe
> since hctx->cpumask is only used in these two cases.
> 
> Cc: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Andrew Theurer <atheurer@redhat.com>
> Cc: Joe Mario <jmario@redhat.com>
> Cc: Sebastian Jug <sejug@redhat.com>
> Cc: Frederic Weisbecker <frederic@kernel.org>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: Tejun Heo <tj@kernel.org>
> Tested-by: Joe Mario <jmario@redhat.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> V4:
> 	- improve comment & commit log as suggested by Tim

Hello Jens, Tejun, and everyone,

This patch fixes an issue seen in OpenShift low-latency environments; I'd
appreciate it if you could take a look at the patch and merge it if you're
fine with it.


Thanks,
Ming
Jens Axboe March 21, 2024, 5:07 p.m. UTC | #2
On 3/19/24 8:34 PM, Ming Lei wrote:
> The kernel parameters `isolcpus=` and `nohz_full=` are used to isolate CPUs
> for specific tasks, and block IO isn't expected to disturb these CPUs;
> the blk-mq kworker shouldn't be scheduled on isolated CPUs. If the kworker
> does run on an isolated CPU, it can also cause long block IO latency.
> 
> The kernel workqueue only respects CPU isolation for WQ_UNBOUND; for a
> bound WQ, the responsibility is on the user, because the CPU is passed
> explicitly as a WQ API parameter, as in mod_delayed_work_on(cpu),
> queue_delayed_work_on(cpu) and queue_work_on(cpu).
> 
> So avoid running the blk-mq kworker on isolated CPUs by removing isolated
> CPUs from hctx->cpumask. Meanwhile, use the queue map instead of
> hctx->cpumask to check whether all CPUs mapped to this hw queue are
> offline; this avoids any cost in the fast IO code path, and is safe
> since hctx->cpumask is only used in these two cases.

In general, I think the fix is fine. Only thing that's a bit odd is:

> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 555ada922cf0..187fbfacb397 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -28,6 +28,7 @@
>  #include <linux/prefetch.h>
>  #include <linux/blk-crypto.h>
>  #include <linux/part_stat.h>
> +#include <linux/sched/isolation.h>
>  
>  #include <trace/events/block.h>
>  
> @@ -2179,7 +2180,11 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
>  	bool tried = false;
>  	int next_cpu = hctx->next_cpu;
>  
> -	if (hctx->queue->nr_hw_queues == 1)
> +	/*
> +	 * Switch to unbound work if all CPUs in this hw queue fall
> +	 * into isolated CPUs
> +	 */
> +	if (hctx->queue->nr_hw_queues == 1 || next_cpu >= nr_cpu_ids)
>  		return WORK_CPU_UNBOUND;

This relies on find_next_foo() returning >= nr_cpu_ids if the set is
empty, which is a lower-level implementation detail that someone reading
this code may not know.
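
For the record, the cpumask helpers signal "not found" by returning a
value >= nr_cpu_ids, along the lines of:

	unsigned int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);

	if (cpu >= nr_cpu_ids)
		pr_debug("no online CPU left in hctx->cpumask\n");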

>  	if (--hctx->next_cpu_batch <= 0) {
> @@ -3488,14 +3493,30 @@ static bool blk_mq_hctx_has_requests(struct blk_mq_hw_ctx *hctx)
>  	return data.has_rq;
>  }
>  
> -static inline bool blk_mq_last_cpu_in_hctx(unsigned int cpu,
> -		struct blk_mq_hw_ctx *hctx)
> +static bool blk_mq_hctx_has_online_cpu(struct blk_mq_hw_ctx *hctx,
> +		unsigned int this_cpu)
>  {
> -	if (cpumask_first_and(hctx->cpumask, cpu_online_mask) != cpu)
> -		return false;
> -	if (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
> -		return false;
> -	return true;
> +	enum hctx_type type = hctx->type;
> +	int cpu;
> +
> +	/*
> +	 * hctx->cpumask has rule out isolated CPUs, but userspace still
                            ^^

has to

> +	 * might submit IOs on these isolated CPUs, so use queue map to
							  ^^

use the queue map

> +	 * check if all CPUs mapped to this hctx are offline
> +	 */
Ming Lei March 22, 2024, 1:10 a.m. UTC | #3
On Thu, Mar 21, 2024 at 11:07:52AM -0600, Jens Axboe wrote:
> On 3/19/24 8:34 PM, Ming Lei wrote:
> > The kernel parameters `isolcpus=` and `nohz_full=` are used to isolate CPUs
> > for specific tasks, and block IO isn't expected to disturb these CPUs;
> > the blk-mq kworker shouldn't be scheduled on isolated CPUs. If the kworker
> > does run on an isolated CPU, it can also cause long block IO latency.
> > 
> > The kernel workqueue only respects CPU isolation for WQ_UNBOUND; for a
> > bound WQ, the responsibility is on the user, because the CPU is passed
> > explicitly as a WQ API parameter, as in mod_delayed_work_on(cpu),
> > queue_delayed_work_on(cpu) and queue_work_on(cpu).
> > 
> > So avoid running the blk-mq kworker on isolated CPUs by removing isolated
> > CPUs from hctx->cpumask. Meanwhile, use the queue map instead of
> > hctx->cpumask to check whether all CPUs mapped to this hw queue are
> > offline; this avoids any cost in the fast IO code path, and is safe
> > since hctx->cpumask is only used in these two cases.
> 
> In general, I think the fix is fine. Only thing that's a bit odd is:

Thanks for the review!

> 
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index 555ada922cf0..187fbfacb397 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -28,6 +28,7 @@
> >  #include <linux/prefetch.h>
> >  #include <linux/blk-crypto.h>
> >  #include <linux/part_stat.h>
> > +#include <linux/sched/isolation.h>
> >  
> >  #include <trace/events/block.h>
> >  
> > @@ -2179,7 +2180,11 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
> >  	bool tried = false;
> >  	int next_cpu = hctx->next_cpu;
> >  
> > -	if (hctx->queue->nr_hw_queues == 1)
> > +	/*
> > +	 * Switch to unbound work if all CPUs in this hw queue fall
> > +	 * into isolated CPUs
> > +	 */
> > +	if (hctx->queue->nr_hw_queues == 1 || next_cpu >= nr_cpu_ids)
> >  		return WORK_CPU_UNBOUND;
> 
> This relies on find_next_foo() returning >= nr_cpu_ids if the set is
> empty, which is a lower-level implementation detail that someone reading
> this code may not know.

Indeed, it looks more readable to add a helper:

static bool blk_mq_hctx_empty_cpumask(struct blk_mq_hw_ctx *hctx)
{
	return hctx->next_cpu >= nr_cpu_ids;
}
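
and then the check above would read something like:

	if (hctx->queue->nr_hw_queues == 1 || blk_mq_hctx_empty_cpumask(hctx))
		return WORK_CPU_UNBOUND;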

> 
> >  	if (--hctx->next_cpu_batch <= 0) {
> > @@ -3488,14 +3493,30 @@ static bool blk_mq_hctx_has_requests(struct blk_mq_hw_ctx *hctx)
> >  	return data.has_rq;
> >  }
> >  
> > -static inline bool blk_mq_last_cpu_in_hctx(unsigned int cpu,
> > -		struct blk_mq_hw_ctx *hctx)
> > +static bool blk_mq_hctx_has_online_cpu(struct blk_mq_hw_ctx *hctx,
> > +		unsigned int this_cpu)
> >  {
> > -	if (cpumask_first_and(hctx->cpumask, cpu_online_mask) != cpu)
> > -		return false;
> > -	if (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
> > -		return false;
> > -	return true;
> > +	enum hctx_type type = hctx->type;
> > +	int cpu;
> > +
> > +	/*
> > +	 * hctx->cpumask has rule out isolated CPUs, but userspace still
>                             ^^
> 
> has to
> 
> > +	 * might submit IOs on these isolated CPUs, so use queue map to
> 							  ^^
> 
> use the queue map

OK, will fix them in V5.


thanks,
Ming

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 555ada922cf0..187fbfacb397 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -28,6 +28,7 @@ 
 #include <linux/prefetch.h>
 #include <linux/blk-crypto.h>
 #include <linux/part_stat.h>
+#include <linux/sched/isolation.h>
 
 #include <trace/events/block.h>
 
@@ -2179,7 +2180,11 @@  static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 	bool tried = false;
 	int next_cpu = hctx->next_cpu;
 
-	if (hctx->queue->nr_hw_queues == 1)
+	/*
+	 * Switch to unbound work if all CPUs in this hw queue fall
+	 * into isolated CPUs
+	 */
+	if (hctx->queue->nr_hw_queues == 1 || next_cpu >= nr_cpu_ids)
 		return WORK_CPU_UNBOUND;
 
 	if (--hctx->next_cpu_batch <= 0) {
@@ -3488,14 +3493,30 @@  static bool blk_mq_hctx_has_requests(struct blk_mq_hw_ctx *hctx)
 	return data.has_rq;
 }
 
-static inline bool blk_mq_last_cpu_in_hctx(unsigned int cpu,
-		struct blk_mq_hw_ctx *hctx)
+static bool blk_mq_hctx_has_online_cpu(struct blk_mq_hw_ctx *hctx,
+		unsigned int this_cpu)
 {
-	if (cpumask_first_and(hctx->cpumask, cpu_online_mask) != cpu)
-		return false;
-	if (cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
-		return false;
-	return true;
+	enum hctx_type type = hctx->type;
+	int cpu;
+
+	/*
+	 * hctx->cpumask has rule out isolated CPUs, but userspace still
+	 * might submit IOs on these isolated CPUs, so use queue map to
+	 * check if all CPUs mapped to this hctx are offline
+	 */
+	for_each_online_cpu(cpu) {
+		struct blk_mq_hw_ctx *h = blk_mq_map_queue_type(hctx->queue,
+				type, cpu);
+
+		if (h != hctx)
+			continue;
+
+		/* this hctx has at least one online CPU */
+		if (this_cpu != cpu)
+			return true;
+	}
+
+	return false;
 }
 
 static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
@@ -3503,8 +3524,7 @@  static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
 	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
 			struct blk_mq_hw_ctx, cpuhp_online);
 
-	if (!cpumask_test_cpu(cpu, hctx->cpumask) ||
-	    !blk_mq_last_cpu_in_hctx(cpu, hctx))
+	if (blk_mq_hctx_has_online_cpu(hctx, cpu))
 		return 0;
 
 	/*
@@ -3912,6 +3932,8 @@  static void blk_mq_map_swqueue(struct request_queue *q)
 	}
 
 	queue_for_each_hw_ctx(q, hctx, i) {
+		int cpu;
+
 		/*
 		 * If no software queues are mapped to this hardware queue,
 		 * disable it and free the request entries.
@@ -3938,6 +3960,15 @@  static void blk_mq_map_swqueue(struct request_queue *q)
 		 */
 		sbitmap_resize(&hctx->ctx_map, hctx->nr_ctx);
 
+		/*
+		 * Rule out isolated CPUs from hctx->cpumask to avoid
+		 * running the wq worker on an isolated CPU
+		 */
+		for_each_cpu(cpu, hctx->cpumask) {
+			if (cpu_is_isolated(cpu))
+				cpumask_clear_cpu(cpu, hctx->cpumask);
+		}
+
 		/*
 		 * Initialize batch roundrobin counts
 		 */