From patchwork Tue Jul 11 01:13:21 2023
From: Tejun Heo <tj@kernel.org>
To: torvalds@linux-foundation.org, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org,
    dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com,
    mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
    ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
    martin.lau@kernel.org, joshdon@google.com, brho@google.com,
    pjt@google.com, derkling@google.com, haoluo@google.com,
    dvernet@meta.com, dschatzberg@meta.com, dskarlat@cs.cmu.edu,
    riel@surriel.com
Cc: linux-kernel@vger.kernel.org, bpf@vger.kernel.org, kernel-team@meta.com,
    Tejun Heo
Subject: [PATCH 03/34] sched: Allow sched_cgroup_fork() to fail and introduce sched_cancel_fork()
Date: Mon, 10 Jul 2023 15:13:21 -1000
Message-ID: <20230711011412.100319-4-tj@kernel.org>
In-Reply-To: <20230711011412.100319-1-tj@kernel.org>
References: <20230711011412.100319-1-tj@kernel.org>

A new BPF extensible sched_class will need more control over the forking
process. It wants to be able to fail from sched_cgroup_fork() after the
new task's sched_task_group is initialized, so that the loaded BPF program
can prepare the task once its cgroup association is established and reject
the fork if e.g. an allocation fails.

Allow sched_cgroup_fork() to fail by making it return int instead of void
and add sched_cancel_fork() to undo sched_fork() in the error path.
sched_cgroup_fork() doesn't fail yet, so this patch shouldn't cause any
behavior changes.

v2: Patch description updated to detail the expected use.

Signed-off-by: Tejun Heo
Reviewed-by: David Vernet
Acked-by: Josh Don
Acked-by: Hao Luo
Acked-by: Barret Rhoden
---
 include/linux/sched/task.h |  3 ++-
 kernel/fork.c              | 15 ++++++++++-----
 kernel/sched/core.c        |  8 +++++++-
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 537cbf9a2ade..5431b350ed96 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -62,7 +62,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
 extern void init_idle(struct task_struct *idle, int cpu);
 
 extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
-extern void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
+extern int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
+extern void sched_cancel_fork(struct task_struct *p);
 extern void sched_post_fork(struct task_struct *p);
 extern void sched_dead(struct task_struct *p);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index ed4e01daccaa..0a9c8c03889f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2480,7 +2480,7 @@ __latent_entropy struct task_struct *copy_process(
 
 	retval = perf_event_init_task(p, clone_flags);
 	if (retval)
-		goto bad_fork_cleanup_policy;
+		goto bad_fork_sched_cancel_fork;
 	retval = audit_alloc(p);
 	if (retval)
 		goto bad_fork_cleanup_perf;
@@ -2615,7 +2615,9 @@ __latent_entropy struct task_struct *copy_process(
 	 * cgroup specific, it unconditionally needs to place the task on a
 	 * runqueue.
 	 */
-	sched_cgroup_fork(p, args);
+	retval = sched_cgroup_fork(p, args);
+	if (retval)
+		goto bad_fork_cancel_cgroup;
 
 	/*
 	 * From this point on we must avoid any synchronous user-space
@@ -2661,13 +2663,13 @@ __latent_entropy struct task_struct *copy_process(
 	/* Don't start children in a dying pid namespace */
 	if (unlikely(!(ns_of_pid(pid)->pid_allocated & PIDNS_ADDING))) {
 		retval = -ENOMEM;
-		goto bad_fork_cancel_cgroup;
+		goto bad_fork_core_free;
 	}
 
 	/* Let kill terminate clone/fork in the middle */
 	if (fatal_signal_pending(current)) {
 		retval = -EINTR;
-		goto bad_fork_cancel_cgroup;
+		goto bad_fork_core_free;
 	}
 
 	/* No more failure paths after this point. */
@@ -2743,10 +2745,11 @@ __latent_entropy struct task_struct *copy_process(
 
 	return p;
 
-bad_fork_cancel_cgroup:
+bad_fork_core_free:
 	sched_core_free(p);
 	spin_unlock(&current->sighand->siglock);
 	write_unlock_irq(&tasklist_lock);
+bad_fork_cancel_cgroup:
 	cgroup_cancel_fork(p, args);
 bad_fork_put_pidfd:
 	if (clone_flags & CLONE_PIDFD) {
@@ -2785,6 +2788,8 @@ __latent_entropy struct task_struct *copy_process(
 	audit_free(p);
 bad_fork_cleanup_perf:
 	perf_event_free_task(p);
+bad_fork_sched_cancel_fork:
+	sched_cancel_fork(p);
 bad_fork_cleanup_policy:
 	lockdep_free_task(p);
 #ifdef CONFIG_NUMA
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 173a42336d54..a869236d0735 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4749,7 +4749,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 	return 0;
 }
 
-void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
+int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
 {
 	unsigned long flags;
 
@@ -4776,6 +4776,12 @@ void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
 	if (p->sched_class->task_fork)
 		p->sched_class->task_fork(p);
 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+
+	return 0;
+}
+
+void sched_cancel_fork(struct task_struct *p)
+{
 }
 
 void sched_post_fork(struct task_struct *p)
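
For illustration, here is a minimal sketch of how a later patch in the
series might plug a fallible, BPF-backed sched_class into the error path
established above. Only the int-returning sched_cgroup_fork() /
sched_cancel_fork() pairing comes from this patch; scx_fork(),
scx_cancel_fork() and task_on_scx() below are hypothetical names used
purely to make the intended flow concrete.

/*
 * Sketch only -- NOT part of this patch.  scx_fork(), scx_cancel_fork()
 * and task_on_scx() are assumed helpers a later patch could add.
 */
int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&p->pi_lock, flags);
	/* ... existing sched_task_group setup and task_fork() callback ... */
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

	/*
	 * p->sched_task_group is now set, so a loaded BPF scheduler can
	 * inspect the task's cgroup association and e.g. allocate per-task
	 * state.  This runs outside the raw spinlock because it may sleep.
	 * A non-zero return makes copy_process() unwind through
	 * bad_fork_cancel_cgroup and, further down the fall-through chain,
	 * bad_fork_sched_cancel_fork.
	 */
	if (task_on_scx(p))
		return scx_fork(p);	/* may fail with e.g. -ENOMEM */

	return 0;
}

void sched_cancel_fork(struct task_struct *p)
{
	/*
	 * Undo sched_fork() and anything scx_fork() set up.  Empty in this
	 * patch; shown here only to illustrate the pairing.
	 */
	if (task_on_scx(p))
		scx_cancel_fork(p);
}

Note that the error labels in copy_process() fall through, so
sched_cancel_fork() runs for any failure after sched_fork(), including a
failure inside sched_cgroup_fork() itself; a real implementation therefore
has to tolerate partially initialized class-specific state.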