From patchwork Tue May 25 15:14:11 2021
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 12279207
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon,
	Catalin Marinas, Marc Zyngier, Greg Kroah-Hartman, Peter Zijlstra,
	Morten Rasmussen, Qais Yousef, Suren Baghdasaryan, Quentin Perret,
	Tejun Heo, Johannes Weiner, Ingo Molnar,
	Juri Lelli, Vincent Guittot, "Rafael J. Wysocki", Dietmar Eggemann,
	Daniel Bristot de Oliveira, kernel-team@android.com, Valentin Schneider
Subject: [PATCH v7 01/22] sched: Favour predetermined active CPU as migration destination
Date: Tue, 25 May 2021 16:14:11 +0100
Message-Id: <20210525151432.16875-2-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210525151432.16875-1-will@kernel.org>
References: <20210525151432.16875-1-will@kernel.org>

Since commit 6d337eab041d ("sched: Fix migrate_disable() vs
set_cpus_allowed_ptr()"), the migration stopper thread is left to
determine the destination CPU of the running task being migrated, even
though set_cpus_allowed_ptr() already identified a candidate target
earlier on.

Unfortunately, the stopper doesn't check whether the new destination
CPU is active, so __migrate_task() can leave the task sitting on a CPU
that is outside of its affinity mask, even if the CPU originally chosen
by SCA (set_cpus_allowed_ptr()) is still active.

For example, with CONFIG_CPUSET=n:

 $ taskset -pc 0-2 $PID
 # offline CPUs 3-4
 $ taskset -pc 3-5 $PID

Then $PID remains on its current CPU (one of 0-2) and does not get
migrated to CPU 5.

Rework 'struct migration_arg' so that an optional pointer to an
affinity mask can be provided to the stopper, allowing us to respect
the original choice of destination CPU when migrating.

Note that there is still the potential to race with a concurrent CPU
hot-unplug of the destination CPU if the caller does not hold the
hotplug lock.

Reported-by: Valentin Schneider
Signed-off-by: Will Deacon
---
 kernel/sched/core.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5226cc26a095..1702a60d178d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1869,6 +1869,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
 struct migration_arg {
 	struct task_struct		*task;
 	int				dest_cpu;
+	const struct cpumask		*dest_mask;
 	struct set_affinity_pending	*pending;
 };
 
@@ -1917,6 +1918,7 @@ static int migration_cpu_stop(void *data)
 	struct set_affinity_pending *pending = arg->pending;
 	struct task_struct *p = arg->task;
 	int dest_cpu = arg->dest_cpu;
+	const struct cpumask *dest_mask = arg->dest_mask;
 	struct rq *rq = this_rq();
 	bool complete = false;
 	struct rq_flags rf;
@@ -1956,12 +1958,8 @@ static int migration_cpu_stop(void *data)
 			complete = true;
 		}
 
-		if (dest_cpu < 0) {
-			if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
-				goto out;
-
-			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
-		}
+		if (dest_mask && (cpumask_test_cpu(task_cpu(p), dest_mask)))
+			goto out;
 
 		if (task_on_rq_queued(p))
 			rq = __migrate_task(rq, &rf, p, dest_cpu);
@@ -2249,7 +2247,8 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
 		init_completion(&my_pending.done);
 		my_pending.arg = (struct migration_arg) {
 			.task = p,
-			.dest_cpu = -1,		/* any */
+			.dest_cpu = dest_cpu,
+			.dest_mask = &p->cpus_mask,
 			.pending = &my_pending,
 		};
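
For completeness, and not as part of the patch itself, here is a minimal
userspace sketch of the reproducer above using the standard glibc
sched_setaffinity()/sched_getcpu() interfaces. It assumes CPUs 3-4 have
already been taken offline (e.g. via /sys/devices/system/cpu/cpuN/online)
before the second affinity change, mirroring the two taskset invocations:

/* Userspace sketch of the reproducer; not part of the kernel patch. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t mask;
	int cpu;

	/* Step 1: restrict ourselves to CPUs 0-2 (all online). */
	CPU_ZERO(&mask);
	for (cpu = 0; cpu <= 2; cpu++)
		CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity(0-2)");

	/* Step 2: with CPUs 3-4 offlined, ask for CPUs 3-5. */
	CPU_ZERO(&mask);
	for (cpu = 3; cpu <= 5; cpu++)
		CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity(3-5)");

	/*
	 * Without the fix, the task can be left running on one of
	 * CPUs 0-2 here instead of being migrated to CPU 5.
	 */
	printf("running on CPU %d\n", sched_getcpu());
	return 0;
}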