
[v2,1/1] io_uring/sqpoll: do not put cpumasks on stack

Message ID 20240916105514.1260506-1-felix.moessbauer@siemens.com (mailing list archive)
State New

Commit Message

Felix Moessbauer Sept. 16, 2024, 10:55 a.m. UTC
Putting the cpumask on the stack has been deprecated for a long time (since
commit 2d3854a37e8), as these masks can be big. Given that, we port the
stack-allocated mask over to the cpumask allocation API.

Fixes: f011c9cf04c0 ("io_uring/sqpoll: do not allow pinning outside of cpuset")
Signed-off-by: Felix Moessbauer <felix.moessbauer@siemens.com>
---
Changes since v1:

- don't leak the mask in case the CPU is not online or out of range

Best regards,
Felix

 io_uring/sqpoll.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
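For context, a minimal sketch of the cpumask allocation API the patch moves
to (the helper name is hypothetical; the calls are the ones declared in
<linux/cpumask.h>). With CONFIG_CPUMASK_OFFSTACK=y the mask is heap-allocated;
with it unset, cpumask_var_t is a one-element on-stack array and the
alloc/free calls are effectively no-ops:

    #include <linux/cpumask.h>
    #include <linux/cpuset.h>
    #include <linux/gfp.h>
    #include <linux/sched.h>

    /* Hypothetical helper: test whether @cpu is in the calling task's
     * cpuset without putting a struct cpumask on the stack. */
    static int example_cpu_allowed(int cpu)
    {
            cpumask_var_t allowed_mask;
            bool allowed;

            /* kmalloc()s the mask when CONFIG_CPUMASK_OFFSTACK=y;
             * otherwise this always succeeds. */
            if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL))
                    return -ENOMEM;

            cpuset_cpus_allowed(current, allowed_mask);
            allowed = cpumask_test_cpu(cpu, allowed_mask);
            free_cpumask_var(allowed_mask);    /* no-op when !OFFSTACK */

            return allowed ? 0 : -EINVAL;
    }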

Comments

Jens Axboe Sept. 16, 2024, 11:03 a.m. UTC | #1
On 9/16/24 4:55 AM, Felix Moessbauer wrote:
> Putting the cpumask on the stack has been deprecated for a long time (since
> commit 2d3854a37e8), as these masks can be big. Given that, we port the
> stack-allocated mask over to the cpumask allocation API.

I'd change that last sentence to:

Given that, change the on-stack allocation of allowed_mask to be
dynamically allocated.

> diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
> index 7adfcf6818ff..44b9f58e11b6 100644
> --- a/io_uring/sqpoll.c
> +++ b/io_uring/sqpoll.c
> @@ -461,15 +461,22 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
>  			return 0;
>  
>  		if (p->flags & IORING_SETUP_SQ_AFF) {
> -			struct cpumask allowed_mask;
> +			cpumask_var_t allowed_mask;
>  			int cpu = p->sq_thread_cpu;
>  
>  			ret = -EINVAL;
>  			if (cpu >= nr_cpu_ids || !cpu_online(cpu))
>  				goto err_sqpoll;
> -			cpuset_cpus_allowed(current, &allowed_mask);
> -			if (!cpumask_test_cpu(cpu, &allowed_mask))
> +			if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL)) {
> +				ret = -ENOMEM;
>  				goto err_sqpoll;
> +			}
> +			cpuset_cpus_allowed(current, allowed_mask);
> +			if (!cpumask_test_cpu(cpu, allowed_mask)) {
> +				free_cpumask_var(allowed_mask);
> +				goto err_sqpoll;
> +			}
> +			free_cpumask_var(allowed_mask);
>  			sqd->sq_cpu = cpu;

The kernel generally does:

ret = -ESOMEERROR;
if (fails_check)
	goto err_label;

and you're now mixing the two here. To keep it consistent, it'd be
better to set ret = -ENOMEM before the alloc, and then reset it to
-EINVAL before the cpumask check.

Patch

diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 7adfcf6818ff..44b9f58e11b6 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -461,15 +461,22 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
 			return 0;
 
 		if (p->flags & IORING_SETUP_SQ_AFF) {
-			struct cpumask allowed_mask;
+			cpumask_var_t allowed_mask;
 			int cpu = p->sq_thread_cpu;
 
 			ret = -EINVAL;
 			if (cpu >= nr_cpu_ids || !cpu_online(cpu))
 				goto err_sqpoll;
-			cpuset_cpus_allowed(current, &allowed_mask);
-			if (!cpumask_test_cpu(cpu, &allowed_mask))
+			if (!alloc_cpumask_var(&allowed_mask, GFP_KERNEL)) {
+				ret = -ENOMEM;
 				goto err_sqpoll;
+			}
+			cpuset_cpus_allowed(current, allowed_mask);
+			if (!cpumask_test_cpu(cpu, allowed_mask)) {
+				free_cpumask_var(allowed_mask);
+				goto err_sqpoll;
+			}
+			free_cpumask_var(allowed_mask);
 			sqd->sq_cpu = cpu;
 		} else {
 			sqd->sq_cpu = -1;