From patchwork Tue May 18 09:47:15 2021
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12264441
From: Will Deacon <will@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
    Will Deacon, Catalin Marinas, Marc Zyngier, Greg Kroah-Hartman,
    Peter Zijlstra, Morten Rasmussen, Qais Yousef, Suren Baghdasaryan,
    Quentin Perret, Tejun Heo, Li Zefan, Johannes Weiner, Ingo Molnar,
    Juri Lelli, Vincent Guittot, "Rafael J. Wysocki", kernel-team@android.com
Subject: [PATCH v6 11/21] sched: Split the guts of sched_setaffinity() into a helper function
Date: Tue, 18 May 2021 10:47:15 +0100
Message-Id: <20210518094725.7701-12-will@kernel.org>
In-Reply-To: <20210518094725.7701-1-will@kernel.org>
References: <20210518094725.7701-1-will@kernel.org>

In preparation for replaying user affinity requests using a saved mask,
split sched_setaffinity() up so that the initial task lookup and
security checks are only performed when the request is coming directly
from userspace.

Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/sched/core.c | 110 +++++++++++++++++++++++---------------------
 1 file changed, 58 insertions(+), 52 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9512623d5a60..808bbe669a6d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6788,9 +6788,61 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	return retval;
 }
 
-long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
+static int
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 {
+	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
+
+	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	cpuset_cpus_allowed(p, cpus_allowed);
+	cpumask_and(new_mask, mask, cpus_allowed);
+
+	/*
+	 * Since bandwidth control happens on root_domain basis,
+	 * if admission test is enabled, we only admit -deadline
+	 * tasks allowed to run on all the CPUs in the task's
+	 * root_domain.
+	 */
+#ifdef CONFIG_SMP
+	if (task_has_dl_policy(p) && dl_bandwidth_enabled()) {
+		rcu_read_lock();
+		if (!cpumask_subset(task_rq(p)->rd->span, new_mask)) {
+			retval = -EBUSY;
+			rcu_read_unlock();
+			goto out_free_masks;
+		}
+		rcu_read_unlock();
+	}
+#endif
+again:
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
+	if (retval)
+		goto out_free_masks;
+
+	cpuset_cpus_allowed(p, cpus_allowed);
+	if (!cpumask_subset(new_mask, cpus_allowed)) {
+		/*
+		 * We must have raced with a concurrent cpuset update.
+		 * Just reset the cpumask to the cpuset's cpus_allowed.
+		 */
+		cpumask_copy(new_mask, cpus_allowed);
+		goto again;
+	}
+
+out_free_masks:
+	free_cpumask_var(new_mask);
+	free_cpumask_var(cpus_allowed);
+	return retval;
+}
+
+long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
+{
 	struct task_struct *p;
 	int retval;
 
@@ -6810,68 +6862,22 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 		retval = -EINVAL;
 		goto out_put_task;
 	}
-	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
-		retval = -ENOMEM;
-		goto out_put_task;
-	}
-	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) {
-		retval = -ENOMEM;
-		goto out_free_cpus_allowed;
-	}
-	retval = -EPERM;
+
 	if (!check_same_owner(p)) {
 		rcu_read_lock();
 		if (!ns_capable(__task_cred(p)->user_ns, CAP_SYS_NICE)) {
 			rcu_read_unlock();
-			goto out_free_new_mask;
+			retval = -EPERM;
+			goto out_put_task;
 		}
 		rcu_read_unlock();
 	}
 
 	retval = security_task_setscheduler(p);
 	if (retval)
-		goto out_free_new_mask;
-
-
-	cpuset_cpus_allowed(p, cpus_allowed);
-	cpumask_and(new_mask, in_mask, cpus_allowed);
-
-	/*
-	 * Since bandwidth control happens on root_domain basis,
-	 * if admission test is enabled, we only admit -deadline
-	 * tasks allowed to run on all the CPUs in the task's
-	 * root_domain.
-	 */
-#ifdef CONFIG_SMP
-	if (task_has_dl_policy(p) && dl_bandwidth_enabled()) {
-		rcu_read_lock();
-		if (!cpumask_subset(task_rq(p)->rd->span, new_mask)) {
-			retval = -EBUSY;
-			rcu_read_unlock();
-			goto out_free_new_mask;
-		}
-		rcu_read_unlock();
-	}
-#endif
-again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
+		goto out_put_task;
 
-	if (!retval) {
-		cpuset_cpus_allowed(p, cpus_allowed);
-		if (!cpumask_subset(new_mask, cpus_allowed)) {
-			/*
-			 * We must have raced with a concurrent cpuset
-			 * update. Just reset the cpus_allowed to the
-			 * cpuset's cpus_allowed
-			 */
-			cpumask_copy(new_mask, cpus_allowed);
-			goto again;
-		}
-	}
-out_free_new_mask:
-	free_cpumask_var(new_mask);
-out_free_cpus_allowed:
-	free_cpumask_var(cpus_allowed);
+	retval = __sched_setaffinity(p, in_mask);
 out_put_task:
 	put_task_struct(p);
 	return retval;
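
For readers following the series outside the kernel tree, the shape of the split can
be modelled with a small user-space sketch: the outer, caller-facing function keeps
the task lookup and permission checks, while the helper owns clamping the requested
mask to the allowed set, applying it, and retrying if the allowed set changed
underneath it. This is only an illustration, not kernel code; every name in it
(fake_task, apply_affinity, __do_setaffinity, do_setaffinity) is hypothetical, and
the single-threaded re-check merely mirrors where the real code re-reads
cpuset_cpus_allowed() after __set_cpus_allowed_ptr().

/*
 * Illustrative user-space model only (NOT kernel code); all names are
 * hypothetical. do_setaffinity() plays the role of sched_setaffinity()
 * (lookup + permission checks), __do_setaffinity() plays the role of
 * __sched_setaffinity() (clamp, apply, retry on a racing update).
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_task {
	int pid;
	unsigned long cpus_mask;	/* affinity currently applied */
	unsigned long cpuset_allowed;	/* stand-in for cpuset_cpus_allowed() */
};

/* Stand-in for __set_cpus_allowed_ptr(): reject an empty mask, else apply it. */
static int apply_affinity(struct fake_task *p, unsigned long mask)
{
	if (!mask)
		return -EINVAL;
	p->cpus_mask = mask;
	return 0;
}

/* Mirrors __sched_setaffinity(): no lookup, no permission checks. */
static int __do_setaffinity(struct fake_task *p, unsigned long mask)
{
	unsigned long allowed = p->cpuset_allowed;
	unsigned long new_mask = mask & allowed;
	int ret;

again:
	ret = apply_affinity(p, new_mask);
	if (ret)
		return ret;

	/* Re-read the allowed set; a concurrent update may have shrunk it. */
	allowed = p->cpuset_allowed;
	if (new_mask & ~allowed) {
		/* Raced with an update: fall back to the allowed set and retry. */
		new_mask = allowed;
		goto again;
	}
	return 0;
}

/* Mirrors sched_setaffinity(): only the caller-facing checks live here. */
static int do_setaffinity(int pid, unsigned long mask, bool privileged)
{
	static struct fake_task task = { .pid = 1, .cpuset_allowed = 0xf };

	if (pid != task.pid)
		return -ESRCH;
	if (!privileged)
		return -EPERM;

	return __do_setaffinity(&task, mask);
}

int main(void)
{
	/* Request CPUs 1-2 against an allowed set of CPUs 0-3. */
	printf("ret = %d\n", do_setaffinity(1, 0x6, true));
	return 0;
}

Keeping the pid lookup and capability checks out of the helper is what, per the
commit message, allows a later change to replay a user's affinity request from a
saved mask for a task the kernel already holds, without repeating the
userspace-facing checks.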