From patchwork Mon Mar 17 17:53:24 2025
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 14019819
From: Andrea Righi
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Joel Fernandes, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks
Date: Mon, 17 Mar 2025 18:53:24 +0100
Message-ID: <20250317175717.163267-2-arighi@nvidia.com>
In-Reply-To: <20250317175717.163267-1-arighi@nvidia.com>
References: <20250317175717.163267-1-arighi@nvidia.com>
X-Mailing-List: bpf@vger.kernel.org

The built-in idle selection policy, scx_select_cpu_dfl(), always
prioritizes picking idle CPUs within the same LLC or NUMA node, but
these optimizations are currently applied only when a task has no CPU
affinity constraints.

This is done primarily for efficiency, as it avoids the overhead of
updating a cpumask every time we need to select an idle CPU (which can
be costly in large SMP systems).

However, this approach limits the effectiveness of the built-in idle
policy and results in inconsistent behavior, as affinity-restricted
tasks don't benefit from topology-aware optimizations.

To address this, modify the policy to apply LLC and NUMA-aware
optimizations even when a task is constrained to a subset of CPUs. We
can still avoid updating the cpumasks by checking if the subset of LLC
and node CPUs are contained in the subset of allowed CPUs usable by the
task (which is true in most cases, i.e. for tasks that don't have
affinity constraints).
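To make that check concrete, here is a minimal editor's sketch of the
idea (it mirrors the task_cpumask() helper added by the diff below; the
name usable_cpus() and the comments are the editor's, not code from the
patch): reuse the topology cpumask as-is when it is already covered by
the task's affinity, otherwise intersect it into a preallocated scratch
cpumask.

static const struct cpumask *usable_cpus(const struct task_struct *p,
                                         const struct cpumask *cpus,
                                         struct cpumask *scratch)
{
        /* Fast path: @cpus is already fully usable by @p. */
        if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
            cpumask_subset(cpus, p->cpus_ptr))
                return cpus;            /* no cpumask update needed */

        /* Otherwise intersect into the per-CPU scratch mask. */
        if (cpumask_and(scratch, cpus, p->cpus_ptr))
                return scratch;         /* restricted, non-empty subset */

        return NULL;                    /* @cpus has no CPUs usable by @p */
}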
Moreover, use temporary local per-CPU cpumasks to determine the LLC and
node subsets, minimizing potential overhead even on large SMP systems.

Signed-off-by: Andrea Righi
---
 kernel/sched/ext_idle.c | 78 ++++++++++++++++++++++++++++-------------
 1 file changed, 54 insertions(+), 24 deletions(-)

diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 52c36a70a3d04..e1e020c27c07c 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -46,6 +46,12 @@ static struct scx_idle_cpus scx_idle_global_masks;
  */
 static struct scx_idle_cpus **scx_idle_node_masks;
 
+/*
+ * Local per-CPU cpumasks (used to generate temporary idle cpumasks).
+ */
+static DEFINE_PER_CPU(cpumask_var_t, local_llc_idle_cpumask);
+static DEFINE_PER_CPU(cpumask_var_t, local_numa_idle_cpumask);
+
 /*
  * Return the idle masks associated to a target @node.
  *
@@ -391,6 +397,30 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
                 static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
 }
 
+/*
+ * Return the subset of @cpus that task @p can use or NULL if none of the
+ * CPUs in the @cpus cpumask can be used.
+ */
+static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
+                                          struct cpumask *local_cpus)
+{
+        /*
+         * If the task is allowed to run on all CPUs, simply use the
+         * architecture's cpumask directly. Otherwise, compute the
+         * intersection of the architecture's cpumask and the task's
+         * allowed cpumask.
+         */
+        if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
+            cpumask_subset(cpus, p->cpus_ptr))
+                return cpus;
+
+        if (!cpumask_equal(cpus, p->cpus_ptr) &&
+            cpumask_and(local_cpus, cpus, p->cpus_ptr))
+                return local_cpus;
+
+        return NULL;
+}
+
 /*
  * Built-in CPU idle selection policy:
  *
@@ -426,8 +456,7 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
  */
 s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
 {
-        const struct cpumask *llc_cpus = NULL;
-        const struct cpumask *numa_cpus = NULL;
+        const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
         int node = scx_cpu_node_if_enabled(prev_cpu);
         s32 cpu;
 
@@ -437,23 +466,16 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
         rcu_read_lock();
 
         /*
-         * Determine the scheduling domain only if the task is allowed to run
-         * on all CPUs.
-         *
-         * This is done primarily for efficiency, as it avoids the overhead of
-         * updating a cpumask every time we need to select an idle CPU (which
-         * can be costly in large SMP systems), but it also aligns logically:
-         * if a task's scheduling domain is restricted by user-space (through
-         * CPU affinity), the task will simply use the flat scheduling domain
-         * defined by user-space.
+         * Determine the subset of CPUs that the task can use in its
+         * current LLC and node.
          */
-        if (p->nr_cpus_allowed >= num_possible_cpus()) {
-                if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
-                        numa_cpus = numa_span(prev_cpu);
+        if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
+                numa_cpus = task_cpumask(p, numa_span(prev_cpu),
+                                         this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
 
-                if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
-                        llc_cpus = llc_span(prev_cpu);
-        }
+        if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
+                llc_cpus = task_cpumask(p, llc_span(prev_cpu),
+                                        this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
 
         /*
          * If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
@@ -598,7 +620,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
  */
 void scx_idle_init_masks(void)
 {
-        int node;
+        int i;
 
         /* Allocate global idle cpumasks */
         BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.cpu, GFP_KERNEL));
@@ -609,13 +631,21 @@ void scx_idle_init_masks(void)
                                       sizeof(*scx_idle_node_masks), GFP_KERNEL);
         BUG_ON(!scx_idle_node_masks);
 
-        for_each_node(node) {
-                scx_idle_node_masks[node] = kzalloc_node(sizeof(**scx_idle_node_masks),
-                                                         GFP_KERNEL, node);
-                BUG_ON(!scx_idle_node_masks[node]);
+        for_each_node(i) {
+                scx_idle_node_masks[i] = kzalloc_node(sizeof(**scx_idle_node_masks),
+                                                      GFP_KERNEL, i);
+                BUG_ON(!scx_idle_node_masks[i]);
+
+                BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[i]->cpu, GFP_KERNEL, i));
+                BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[i]->smt, GFP_KERNEL, i));
+        }
 
-                BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->cpu, GFP_KERNEL, node));
-                BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->smt, GFP_KERNEL, node));
+        /* Allocate local per-cpu idle cpumasks */
+        for_each_possible_cpu(i) {
+                BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_llc_idle_cpumask, i),
+                                               GFP_KERNEL, cpu_to_node(i)));
+                BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_numa_idle_cpumask, i),
+                                               GFP_KERNEL, cpu_to_node(i)));
         }
 }

From patchwork Mon Mar 17 17:53:25 2025
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 14019820
From: Andrea Righi
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Joel Fernandes, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
Date: Mon, 17 Mar 2025 18:53:25 +0100
Message-ID: <20250317175717.163267-3-arighi@nvidia.com>
In-Reply-To: <20250317175717.163267-1-arighi@nvidia.com>
References: <20250317175717.163267-1-arighi@nvidia.com>
X-Mailing-List: bpf@vger.kernel.org

Modify scx_select_cpu_dfl() to take the allowed cpumask as an explicit
argument, instead of implicitly using @p->cpus_ptr.

This prepares for future changes where arbitrary cpumasks may be passed
to the built-in idle CPU selection policy.

This is a pure refactoring with no functional changes.

Signed-off-by: Andrea Righi
---
 kernel/sched/ext.c      |  2 +-
 kernel/sched/ext_idle.c | 45 ++++++++++++++++++++++++-----------------
 kernel/sched/ext_idle.h |  3 ++-
 3 files changed, 32 insertions(+), 18 deletions(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 06561d6717c9a..f42352e8d889e 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3395,7 +3395,7 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
         } else {
                 s32 cpu;
 
-                cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
+                cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
                 if (cpu >= 0) {
                         p->scx.slice = SCX_SLICE_DFL;
                         p->scx.ddsp_dsq_id = SCX_DSQ_LOCAL;
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index e1e020c27c07c..a90d85bce1ccb 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -397,11 +397,19 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
                 static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
 }
 
+static inline bool task_allowed_all_cpus(const struct task_struct *p)
+{
+        return p->nr_cpus_allowed >= num_possible_cpus();
+}
+
 /*
- * Return the subset of @cpus that task @p can use or NULL if none of the
- * CPUs in the @cpus cpumask can be used.
+ * Return the subset of @cpus that task @p can use, according to
+ * @cpus_allowed, or NULL if none of the CPUs in the @cpus cpumask can be
+ * used.
  */
-static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
+static const struct cpumask *task_cpumask(const struct task_struct *p,
+                                          const struct cpumask *cpus_allowed,
+                                          const struct cpumask *cpus,
                                           struct cpumask *local_cpus)
 {
         /*
@@ -410,12 +418,10 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, const str
          * intersection of the architecture's cpumask and the task's
          * allowed cpumask.
          */
-        if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
-            cpumask_subset(cpus, p->cpus_ptr))
+        if (!cpus || task_allowed_all_cpus(p) || cpumask_subset(cpus, cpus_allowed))
                 return cpus;
 
-        if (!cpumask_equal(cpus, p->cpus_ptr) &&
-            cpumask_and(local_cpus, cpus, p->cpus_ptr))
+        if (cpumask_and(local_cpus, cpus, cpus_allowed))
                 return local_cpus;
 
         return NULL;
@@ -454,7 +460,8 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, const str
  * NOTE: tasks that can only run on 1 CPU are excluded by this logic, because
  * we never call ops.select_cpu() for them, see select_task_rq().
  */
-s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
+s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+                       const struct cpumask *cpus_allowed, u64 flags)
 {
         const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
         int node = scx_cpu_node_if_enabled(prev_cpu);
@@ -469,13 +476,19 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
          * Determine the subset of CPUs that the task can use in its
          * current LLC and node.
          */
-        if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
-                numa_cpus = task_cpumask(p, numa_span(prev_cpu),
+        if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
+                numa_cpus = task_cpumask(p, cpus_allowed, numa_span(prev_cpu),
                                          this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
+                if (cpumask_equal(numa_cpus, cpus_allowed))
+                        numa_cpus = NULL;
+        }
 
-        if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
-                llc_cpus = task_cpumask(p, llc_span(prev_cpu),
+        if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
+                llc_cpus = task_cpumask(p, cpus_allowed, llc_span(prev_cpu),
                                         this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
+                if (cpumask_equal(llc_cpus, cpus_allowed))
+                        llc_cpus = NULL;
+        }
 
         /*
          * If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
@@ -512,7 +525,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
             cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
             (!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
             !cpumask_empty(idle_cpumask(waker_node)->cpu)) {
-                if (cpumask_test_cpu(cpu, p->cpus_ptr))
+                if (cpumask_test_cpu(cpu, cpus_allowed))
                         goto out_unlock;
         }
 }
@@ -557,7 +570,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
          * begin in prev_cpu's node and proceed to other nodes in
          * order of increasing distance.
          */
-        cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags | SCX_PICK_IDLE_CORE);
+        cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
         if (cpu >= 0)
                 goto out_unlock;
 
@@ -605,7 +618,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
          * in prev_cpu's node and proceed to other nodes in order of
          * increasing distance.
          */
-        cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags);
+        cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
         if (cpu >= 0)
                 goto out_unlock;
 
@@ -861,7 +874,7 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
                 goto prev_cpu;
 
 #ifdef CONFIG_SMP
-        cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
+        cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
         if (cpu >= 0) {
                 *is_idle = true;
                 return cpu;
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index 511cc2221f7a8..37be78a7502b3 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -27,7 +27,8 @@ static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node
 }
 #endif /* CONFIG_SMP */
 
-s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags);
+s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+                       const struct cpumask *cpus_allowed, u64 flags);
 void scx_idle_enable(struct sched_ext_ops *ops);
 void scx_idle_disable(void);
 int scx_idle_init(void);

From patchwork Mon Mar 17 17:53:26 2025
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 14019821
From: Andrea Righi
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Joel Fernandes, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
Date: Mon, 17 Mar 2025 18:53:26 +0100
Message-ID: <20250317175717.163267-4-arighi@nvidia.com>
In-Reply-To: <20250317175717.163267-1-arighi@nvidia.com>
References: <20250317175717.163267-1-arighi@nvidia.com>
X-Mailing-List: bpf@vger.kernel.org

Many scx schedulers implement their own hard or soft-affinity rules to
support topology characteristics, such as heterogeneous architectures
(e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
specific properties (e.g., running certain tasks only in a subset of
CPUs).

Currently, there is no mechanism that allows applying the built-in idle
CPU selection policy to an arbitrary subset of CPUs. As a result,
schedulers often implement their own idle CPU selection policies, which
are typically similar to one another, leading to a lot of code
duplication.

To address this, modify scx_select_cpu_dfl() to accept an arbitrary
cpumask that can be used by BPF schedulers to apply the existing
built-in idle CPU selection policy to a subset of allowed CPUs.

With this change, the idle CPU selection policy becomes the following:
 - always prioritize CPUs from fully idle SMT cores (if SMT is enabled),
 - select the same CPU if it's idle and in the allowed CPUs,
 - select an idle CPU within the same LLC, if the LLC cpumask is a
   subset of the allowed CPUs,
 - select an idle CPU within the same node, if the node cpumask is a
   subset of the allowed CPUs,
 - select an idle CPU within the allowed CPUs.

This functionality will be exposed through a dedicated kfunc in a
separate patch.
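As a rough illustration of the intended call pattern (an editor's
sketch, not code from this patch; pick_cpu_in_domain() and
@preferred_cpus are made-up names), the built-in policy can now be
scoped to any cpumask rather than only p->cpus_ptr:

/*
 * Editor's sketch (hypothetical in-kernel caller): apply the built-in
 * idle selection policy within @preferred_cpus, an arbitrary
 * scheduler-provided cpumask. The policy intersects it with p->cpus_ptr
 * and returns a negative value if no usable idle CPU is found.
 */
static s32 pick_cpu_in_domain(struct task_struct *p, s32 prev_cpu,
                              u64 wake_flags,
                              const struct cpumask *preferred_cpus)
{
        return scx_select_cpu_dfl(p, prev_cpu, wake_flags, preferred_cpus, 0);
}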
Otherwise, compute the + * intersection of the target cpumask and the task's allowed + * cpumask. */ - if (!cpus || task_allowed_all_cpus(p) || cpumask_subset(cpus, cpus_allowed)) + if (!cpus || ((cpus_allowed == p->cpus_ptr) && task_affinity_all(p)) || + cpumask_subset(cpus, cpus_allowed)) return cpus; - if (cpumask_and(local_cpus, cpus, cpus_allowed)) + /* + * Compute the intersection and return NULL if the result is empty + * or if it perfectly overlaps with the subset of allowed CPUs. + */ + if (cpumask_and(local_cpus, cpus, cpus_allowed) && + !cpumask_equal(local_cpus, cpus_allowed)) return local_cpus; return NULL; @@ -439,13 +449,15 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, * branch prediction optimizations. * * 3. Pick a CPU within the same LLC (Last-Level Cache): - * - if the above conditions aren't met, pick a CPU that shares the same LLC - * to maintain cache locality. + * - if the above conditions aren't met, pick a CPU that shares the same + * LLC, if the LLC domain is a subset of @cpus_allowed, to maintain + * cache locality. * * 4. Pick a CPU within the same NUMA node, if enabled: - * - choose a CPU from the same NUMA node to reduce memory access latency. + * - choose a CPU from the same NUMA node, if the node cpumask is a + * subset of @cpus_allowed, to reduce memory access latency. * - * 5. Pick any idle CPU usable by the task. + * 5. Pick any idle CPU within the @cpus_allowed domain. * * Step 3 and 4 are performed only if the system has, respectively, * multiple LLCs / multiple NUMA nodes (see scx_selcpu_topo_llc and @@ -464,9 +476,43 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, const struct cpumask *cpus_allowed, u64 flags) { const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL; - int node = scx_cpu_node_if_enabled(prev_cpu); + const struct cpumask *allowed = p->cpus_ptr; + int node; s32 cpu; + preempt_disable(); + + /* + * Determine the subset of CPUs usable by @p within @cpus_allowed. + */ + if (cpus_allowed != p->cpus_ptr) { + struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_idle_cpumask); + + if (task_affinity_all(p) || cpumask_subset(cpus_allowed, p->cpus_ptr)) { + allowed = cpus_allowed; + } else if (cpumask_and(local_cpus, cpus_allowed, p->cpus_ptr)) { + allowed = local_cpus; + } else { + cpu = -EBUSY; + goto out_enable; + } + } + + /* + * If @prev_cpu is not in the allowed domain, try to assign a new + * arbitrary CPU usable by the task in the allowed domain. + */ + if (!cpumask_test_cpu(prev_cpu, allowed)) { + cpu = cpumask_any_and_distribute(p->cpus_ptr, allowed); + if (cpu < nr_cpu_ids) { + prev_cpu = cpu; + } else { + cpu = -EBUSY; + goto out_enable; + } + } + node = scx_cpu_node_if_enabled(prev_cpu); + /* * This is necessary to protect llc_cpus. */ @@ -476,19 +522,13 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, * Determine the subset of CPUs that the task can use in its * current LLC and node. 
          */
-        if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
-                numa_cpus = task_cpumask(p, cpus_allowed, numa_span(prev_cpu),
+        if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
+                numa_cpus = task_cpumask(p, allowed, numa_span(prev_cpu),
                                          this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
-                if (cpumask_equal(numa_cpus, cpus_allowed))
-                        numa_cpus = NULL;
-        }
 
-        if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
-                llc_cpus = task_cpumask(p, cpus_allowed, llc_span(prev_cpu),
+        if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
+                llc_cpus = task_cpumask(p, allowed, llc_span(prev_cpu),
                                         this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
-                if (cpumask_equal(llc_cpus, cpus_allowed))
-                        llc_cpus = NULL;
-        }
 
         /*
          * If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
@@ -525,7 +565,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
             cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
             (!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
             !cpumask_empty(idle_cpumask(waker_node)->cpu)) {
-                if (cpumask_test_cpu(cpu, cpus_allowed))
+                if (cpumask_test_cpu(cpu, allowed))
                         goto out_unlock;
         }
 }
@@ -570,7 +610,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
          * begin in prev_cpu's node and proceed to other nodes in
          * order of increasing distance.
          */
-        cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
+        cpu = scx_pick_idle_cpu(allowed, node, flags | SCX_PICK_IDLE_CORE);
         if (cpu >= 0)
                 goto out_unlock;
 
@@ -618,12 +658,14 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
          * in prev_cpu's node and proceed to other nodes in order of
          * increasing distance.
          */
-        cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
+        cpu = scx_pick_idle_cpu(allowed, node, flags);
         if (cpu >= 0)
                 goto out_unlock;
 
 out_unlock:
         rcu_read_unlock();
+out_enable:
+        preempt_enable();
+
         return cpu;
 }
 
@@ -655,6 +697,8 @@ void scx_idle_init_masks(void)
 
         /* Allocate local per-cpu idle cpumasks */
         for_each_possible_cpu(i) {
+                BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_idle_cpumask, i),
+                                               GFP_KERNEL, cpu_to_node(i)));
                 BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_llc_idle_cpumask, i),
                                                GFP_KERNEL, cpu_to_node(i)));
                 BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_numa_idle_cpumask, i),

From patchwork Mon Mar 17 17:53:27 2025
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 14019822
From: Andrea Righi
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Joel Fernandes, bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and()
Date: Mon, 17 Mar 2025 18:53:27 +0100
Message-ID: <20250317175717.163267-5-arighi@nvidia.com>
In-Reply-To: <20250317175717.163267-1-arighi@nvidia.com>
References: <20250317175717.163267-1-arighi@nvidia.com>
X-Mailing-List: bpf@vger.kernel.org

Provide a new kfunc, scx_bpf_select_cpu_and(), that can be used to apply
the built-in idle CPU selection policy to a subset of allowed CPUs.

This new helper is basically an extension of scx_bpf_select_cpu_dfl().
However, when an idle CPU can't be found, it returns a negative value
instead of @prev_cpu, aligning its behavior more closely with
scx_bpf_pick_idle_cpu().

It also accepts %SCX_PICK_IDLE_* flags, which can be used to enforce
strict selection to @prev_cpu's node (%SCX_PICK_IDLE_IN_NODE), or to
request only a full-idle SMT core (%SCX_PICK_IDLE_CORE), while applying
the built-in selection logic.

With this helper, BPF schedulers can apply the built-in idle CPU
selection policy restricted to any arbitrary subset of CPUs.
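As a brief illustration of the flags argument (an editor's sketch that
mirrors the author's example below; foo_select_core is a hypothetical
callback name), %SCX_PICK_IDLE_CORE can be passed to only accept a
fully idle SMT core within the allowed subset:

/*
 * Editor's sketch (hypothetical scheduler code, not from this patch):
 * same pattern as the example below, but request a fully idle SMT core
 * within the allowed cpumask by passing SCX_PICK_IDLE_CORE.
 */
s32 BPF_STRUCT_OPS(foo_select_core, struct task_struct *p,
                   s32 prev_cpu, u64 wake_flags)
{
        s32 cpu;

        cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr,
                                     SCX_PICK_IDLE_CORE);
        if (cpu >= 0) {
                /* A fully idle core was found: dispatch directly to it. */
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
                return cpu;
        }

        /* No fully idle core available: fall back to the previous CPU. */
        return prev_cpu;
}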
Example usage ============= Possible usage in ops.select_cpu(): s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags) { const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr; s32 cpu; cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0); if (cpu >= 0) { scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); return cpu; } return prev_cpu; } Results ======= Load distribution on a 4 sockets, 4 cores per socket system, simulated using virtme-ng, running a modified version of scx_bpfland that uses scx_bpf_select_cpu_and() with 0xff00 as the allowed subset of CPUs: $ vng --cpu 16,sockets=4,cores=4,threads=1 ... $ stress-ng -c 16 ... $ htop ... 0[ 0.0%] 8[||||||||||||||||||||||||100.0%] 1[ 0.0%] 9[||||||||||||||||||||||||100.0%] 2[ 0.0%] 10[||||||||||||||||||||||||100.0%] 3[ 0.0%] 11[||||||||||||||||||||||||100.0%] 4[ 0.0%] 12[||||||||||||||||||||||||100.0%] 5[ 0.0%] 13[||||||||||||||||||||||||100.0%] 6[ 0.0%] 14[||||||||||||||||||||||||100.0%] 7[ 0.0%] 15[||||||||||||||||||||||||100.0%] With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across all the available CPUs. Signed-off-by: Andrea Righi --- kernel/sched/ext.c | 1 + kernel/sched/ext_idle.c | 43 ++++++++++++++++++++++++ tools/sched_ext/include/scx/common.bpf.h | 2 ++ 3 files changed, 46 insertions(+) diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c index f42352e8d889e..343f066c1185d 100644 --- a/kernel/sched/ext.c +++ b/kernel/sched/ext.c @@ -465,6 +465,7 @@ struct sched_ext_ops { * idle CPU tracking and the following helpers become unavailable: * * - scx_bpf_select_cpu_dfl() + * - scx_bpf_select_cpu_and() * - scx_bpf_test_and_clear_cpu_idle() * - scx_bpf_pick_idle_cpu() * diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c index a9755434e88b7..e7aee9aa4841c 100644 --- a/kernel/sched/ext_idle.c +++ b/kernel/sched/ext_idle.c @@ -930,6 +930,48 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, return prev_cpu; } +/** + * scx_bpf_select_cpu_and - Pick an idle CPU usable by task @p, + * prioritizing those in @cpus_allowed + * @p: task_struct to select a CPU for + * @prev_cpu: CPU @p was on previously + * @wake_flags: %SCX_WAKE_* flags + * @cpus_allowed: cpumask of allowed CPUs + * @flags: %SCX_PICK_IDLE* flags + * + * Can only be called from ops.select_cpu() or ops.enqueue() if the + * built-in CPU selection is enabled: ops.update_idle() is missing or + * %SCX_OPS_KEEP_BUILTIN_IDLE is set. + * + * @p, @prev_cpu and @wake_flags match ops.select_cpu(). + * + * Returns the selected idle CPU, which will be automatically awakened upon + * returning from ops.select_cpu() and can be used for direct dispatch, or + * a negative value if no idle CPU is available. + */ +__bpf_kfunc s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags, + const struct cpumask *cpus_allowed, u64 flags) +{ + s32 cpu; + + if (!ops_cpu_valid(prev_cpu, NULL)) + return -EINVAL; + + if (!check_builtin_idle_enabled()) + return -EBUSY; + + if (!scx_kf_allowed(SCX_KF_SELECT_CPU | SCX_KF_ENQUEUE)) + return -EPERM; + +#ifdef CONFIG_SMP + cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, cpus_allowed, flags); +#else + cpu = -EBUSY; +#endif + + return cpu; +} + /** * scx_bpf_get_idle_cpumask_node - Get a referenced kptr to the * idle-tracking per-CPU cpumask of a target NUMA node. 
@@ -1238,6 +1280,7 @@ static const struct btf_kfunc_id_set scx_kfunc_set_idle = { BTF_KFUNCS_START(scx_kfunc_ids_select_cpu) BTF_ID_FLAGS(func, scx_bpf_select_cpu_dfl, KF_RCU) +BTF_ID_FLAGS(func, scx_bpf_select_cpu_and, KF_RCU) BTF_KFUNCS_END(scx_kfunc_ids_select_cpu) static const struct btf_kfunc_id_set scx_kfunc_set_select_cpu = { diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h index dc4333d23189f..6f1da61cf7f17 100644 --- a/tools/sched_ext/include/scx/common.bpf.h +++ b/tools/sched_ext/include/scx/common.bpf.h @@ -48,6 +48,8 @@ static inline void ___vmlinux_h_sanity_check___(void) s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym; s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym; +s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags, + const struct cpumask *cpus_allowed, u64 flags) __ksym __weak; void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak; void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak; u32 scx_bpf_dispatch_nr_slots(void) __ksym;

From patchwork Mon Mar 17 17:53:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrea Righi X-Patchwork-Id: 14019823
From: Andrea Righi To: Tejun Heo , David Vernet , Changwoo Min Cc: Joel Fernandes , bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Date: Mon, 17 Mar 2025 18:53:28 +0100 Message-ID: <20250317175717.163267-6-arighi@nvidia.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250317175717.163267-1-arighi@nvidia.com> References: <20250317175717.163267-1-arighi@nvidia.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org MIME-Version: 1.0
Add a selftest to validate the behavior of the built-in idle CPU selection policy applied to a subset of allowed CPUs, using scx_bpf_select_cpu_and().

Signed-off-by: Andrea Righi --- tools/testing/selftests/sched_ext/Makefile | 1 + .../selftests/sched_ext/allowed_cpus.bpf.c | 121 ++++++++++++++++++ .../selftests/sched_ext/allowed_cpus.c | 57 +++++++++ 3 files changed, 179 insertions(+) create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile index f4531327b8e76..e9d5bc575f806 100644 --- a/tools/testing/selftests/sched_ext/Makefile +++ b/tools/testing/selftests/sched_ext/Makefile @@ -173,6 +173,7 @@ auto-test-targets := \ maybe_null \ minimal \ numa \ + allowed_cpus \ prog_run \ reload_loop \ select_cpu_dfl \ diff --git a/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c b/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c new file mode 100644 index 0000000000000..39d57f7f74099 --- /dev/null +++ b/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c @@ -0,0 +1,121 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * A scheduler that validates the behavior of scx_bpf_select_cpu_and() by + * selecting idle CPUs strictly within a subset of allowed CPUs. + * + * Copyright (c) 2025 Andrea Righi + */ + +#include <scx/common.bpf.h> + +char _license[] SEC("license") = "GPL"; + +UEI_DEFINE(uei); + +private(PREF_CPUS) struct bpf_cpumask __kptr * allowed_cpumask; + +static void +validate_idle_cpu(const struct task_struct *p, const struct cpumask *allowed, s32 cpu) +{ + if (scx_bpf_test_and_clear_cpu_idle(cpu)) + scx_bpf_error("CPU %d should be marked as busy", cpu); + + if (bpf_cpumask_subset(allowed, p->cpus_ptr) && + !bpf_cpumask_test_cpu(cpu, allowed)) + scx_bpf_error("CPU %d not in the allowed domain for %d (%s)", + cpu, p->pid, p->comm); +} + +s32 BPF_STRUCT_OPS(allowed_cpus_select_cpu, + struct task_struct *p, s32 prev_cpu, u64 wake_flags) +{ + const struct cpumask *allowed; + s32 cpu; + + allowed = cast_mask(allowed_cpumask); + if (!allowed) { + scx_bpf_error("allowed domain not initialized"); + return -EINVAL; + } + + /* + * Select an idle CPU strictly within the allowed domain. + */ + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, allowed, 0); + if (cpu >= 0) { + validate_idle_cpu(p, allowed, cpu); + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); + + return cpu; + } + + return prev_cpu; +} + +void BPF_STRUCT_OPS(allowed_cpus_enqueue, struct task_struct *p, u64 enq_flags) +{ + const struct cpumask *allowed; + s32 prev_cpu = scx_bpf_task_cpu(p), cpu; + + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0); + + allowed = cast_mask(allowed_cpumask); + if (!allowed) { + scx_bpf_error("allowed domain not initialized"); + return; + } + + /* + * Use scx_bpf_select_cpu_and() to proactively kick an idle CPU + * within @allowed_cpumask, usable by @p.
+ */ + cpu = scx_bpf_select_cpu_and(p, prev_cpu, 0, allowed, 0); + if (cpu >= 0) { + validate_idle_cpu(p, allowed, cpu); + scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE); + } +} + +s32 BPF_STRUCT_OPS_SLEEPABLE(allowed_cpus_init) +{ + struct bpf_cpumask *mask; + + mask = bpf_cpumask_create(); + if (!mask) + return -ENOMEM; + + mask = bpf_kptr_xchg(&allowed_cpumask, mask); + if (mask) + bpf_cpumask_release(mask); + + bpf_rcu_read_lock(); + + /* + * Assign the first online CPU to the allowed domain. + */ + mask = allowed_cpumask; + if (mask) { + const struct cpumask *online = scx_bpf_get_online_cpumask(); + + bpf_cpumask_set_cpu(bpf_cpumask_first(online), mask); + scx_bpf_put_cpumask(online); + } + + bpf_rcu_read_unlock(); + + return 0; +} + +void BPF_STRUCT_OPS(allowed_cpus_exit, struct scx_exit_info *ei) +{ + UEI_RECORD(uei, ei); +} + +SEC(".struct_ops.link") +struct sched_ext_ops allowed_cpus_ops = { + .select_cpu = (void *)allowed_cpus_select_cpu, + .enqueue = (void *)allowed_cpus_enqueue, + .init = (void *)allowed_cpus_init, + .exit = (void *)allowed_cpus_exit, + .name = "allowed_cpus", +}; diff --git a/tools/testing/selftests/sched_ext/allowed_cpus.c b/tools/testing/selftests/sched_ext/allowed_cpus.c new file mode 100644 index 0000000000000..a001a3a0e9f1f --- /dev/null +++ b/tools/testing/selftests/sched_ext/allowed_cpus.c @@ -0,0 +1,57 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2025 Andrea Righi + */ +#include <bpf/bpf.h> +#include <scx/common.h> +#include <sys/wait.h> +#include <unistd.h> +#include "allowed_cpus.bpf.skel.h" +#include "scx_test.h" + +static enum scx_test_status setup(void **ctx) +{ + struct allowed_cpus *skel; + + skel = allowed_cpus__open(); + SCX_FAIL_IF(!skel, "Failed to open"); + SCX_ENUM_INIT(skel); + SCX_FAIL_IF(allowed_cpus__load(skel), "Failed to load skel"); + + *ctx = skel; + + return SCX_TEST_PASS; +} + +static enum scx_test_status run(void *ctx) +{ + struct allowed_cpus *skel = ctx; + struct bpf_link *link; + + link = bpf_map__attach_struct_ops(skel->maps.allowed_cpus_ops); + SCX_FAIL_IF(!link, "Failed to attach scheduler"); + + /* Just sleeping is fine, plenty of scheduling events happening */ + sleep(1); + + SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_NONE)); + bpf_link__destroy(link); + + return SCX_TEST_PASS; +} + +static void cleanup(void *ctx) +{ + struct allowed_cpus *skel = ctx; + + allowed_cpus__destroy(skel); +} + +struct scx_test allowed_cpus = { + .name = "allowed_cpus", + .description = "Verify scx_bpf_select_cpu_and()", + .setup = setup, + .run = run, + .cleanup = cleanup, +}; +REGISTER_SCX_TEST(&allowed_cpus)

From patchwork Mon Mar 17 17:53:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrea Righi X-Patchwork-Id: 14019824
From: Andrea Righi To: Tejun Heo , David Vernet , Changwoo Min Cc: Joel Fernandes , bpf@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 6/6] sched_ext: idle:
Deprecate scx_bpf_select_cpu_dfl() Date: Mon, 17 Mar 2025 18:53:29 +0100 Message-ID: <20250317175717.163267-7-arighi@nvidia.com> X-Mailer: git-send-email 2.48.1 In-Reply-To: <20250317175717.163267-1-arighi@nvidia.com> References: <20250317175717.163267-1-arighi@nvidia.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org MIME-Version: 1.0
With the introduction of scx_bpf_select_cpu_and(), we can deprecate scx_bpf_select_cpu_dfl(): the old kfunc offers only a subset of the new one's features, and scx_bpf_select_cpu_and() is also more consistent with the other idle-related APIs (it returns a negative value when no idle CPU is found).

Therefore, mark scx_bpf_select_cpu_dfl() as deprecated (printing a warning when it's used), update all the scheduler examples and kselftests to adopt the new API, and ensure backward (source and binary) compatibility by providing the necessary macros and hooks.

Support for scx_bpf_select_cpu_dfl() can be maintained until v6.17.
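As an illustration of the expected conversion (a sketch that mirrors the scx_simple and kselftest updates in the diff below), a typical ops.select_cpu() implementation changes from:

        bool is_idle = false;

        cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
        if (is_idle)
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
        return cpu;

to:

        cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
        if (cpu >= 0) {
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
                return cpu;
        }
        return prev_cpu;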
Signed-off-by: Andrea Righi --- Documentation/scheduler/sched-ext.rst | 11 +++--- kernel/sched/ext.c | 3 +- kernel/sched/ext_idle.c | 18 ++------- tools/sched_ext/include/scx/common.bpf.h | 3 +- tools/sched_ext/include/scx/compat.bpf.h | 37 +++++++++++++++++++ tools/sched_ext/scx_flatcg.bpf.c | 12 +++--- tools/sched_ext/scx_simple.bpf.c | 9 +++-- .../sched_ext/enq_select_cpu_fails.bpf.c | 12 +----- .../sched_ext/enq_select_cpu_fails.c | 2 +- tools/testing/selftests/sched_ext/exit.bpf.c | 6 ++- .../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +++---- .../sched_ext/select_cpu_dfl_nodispatch.c | 2 +- 12 files changed, 73 insertions(+), 55 deletions(-) diff --git a/Documentation/scheduler/sched-ext.rst b/Documentation/scheduler/sched-ext.rst index 0993e41353db7..7f36f4fcf5f31 100644 --- a/Documentation/scheduler/sched-ext.rst +++ b/Documentation/scheduler/sched-ext.rst @@ -142,15 +142,14 @@ optional. The following modified excerpt is from s32 prev_cpu, u64 wake_flags) { s32 cpu; - /* Need to initialize or the BPF verifier will reject the program */ - bool direct = false; - cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &direct); - - if (direct) + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0); + if (cpu >= 0) scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); + return cpu; + } - return cpu; + return prev_cpu; } /* diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c index 343f066c1185d..d82e9d3cbc0dc 100644 --- a/kernel/sched/ext.c +++ b/kernel/sched/ext.c @@ -464,13 +464,12 @@ struct sched_ext_ops { * state. By default, implementing this operation disables the built-in * idle CPU tracking and the following helpers become unavailable: * - * - scx_bpf_select_cpu_dfl() * - scx_bpf_select_cpu_and() * - scx_bpf_test_and_clear_cpu_idle() * - scx_bpf_pick_idle_cpu() * * The user also must implement ops.select_cpu() as the default - * implementation relies on scx_bpf_select_cpu_dfl(). + * implementation relies on scx_bpf_select_cpu_and(). * * Specify the %SCX_OPS_KEEP_BUILTIN_IDLE flag to keep the built-in idle * tracking. diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c index e7aee9aa4841c..4813f1cc8e950 100644 --- a/kernel/sched/ext_idle.c +++ b/kernel/sched/ext_idle.c @@ -888,26 +888,16 @@ __bpf_kfunc int scx_bpf_cpu_node(s32 cpu) #endif } -/** - * scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu() - * @p: task_struct to select a CPU for - * @prev_cpu: CPU @p was on previously - * @wake_flags: %SCX_WAKE_* flags - * @is_idle: out parameter indicating whether the returned CPU is idle - * - * Can only be called from ops.select_cpu() if the built-in CPU selection is - * enabled - ops.update_idle() is missing or %SCX_OPS_KEEP_BUILTIN_IDLE is set. - * @p, @prev_cpu and @wake_flags match ops.select_cpu(). - * - * Returns the picked CPU with *@is_idle indicating whether the picked CPU is - * currently idle and thus a good candidate for direct dispatching. - */ +/* Provided for backward binary compatibility, will be removed in v6.17. 
*/ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) { #ifdef CONFIG_SMP s32 cpu; #endif + printk_deferred_once(KERN_WARNING + "sched_ext: scx_bpf_select_cpu_dfl() deprecated in favor of scx_bpf_select_cpu_and()"); + if (!ops_cpu_valid(prev_cpu, NULL)) goto prev_cpu; diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h index 6f1da61cf7f17..1eb790eb90d40 100644 --- a/tools/sched_ext/include/scx/common.bpf.h +++ b/tools/sched_ext/include/scx/common.bpf.h @@ -47,7 +47,8 @@ static inline void ___vmlinux_h_sanity_check___(void) } s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym; -s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym; +s32 scx_bpf_select_cpu_dfl(struct task_struct *p, + s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym __weak; s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags, const struct cpumask *cpus_allowed, u64 flags) __ksym __weak; void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak; diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h index 9252e1a00556f..f9caa7baf356c 100644 --- a/tools/sched_ext/include/scx/compat.bpf.h +++ b/tools/sched_ext/include/scx/compat.bpf.h @@ -225,6 +225,43 @@ static inline bool __COMPAT_is_enq_cpu_selected(u64 enq_flags) scx_bpf_pick_any_cpu_node(cpus_allowed, node, flags) : \ scx_bpf_pick_any_cpu(cpus_allowed, flags)) +/** + * scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu(). + * We will preserve this compatible helper until v6.17. + * + * @p: task_struct to select a CPU for + * @prev_cpu: CPU @p was on previously + * @wake_flags: %SCX_WAKE_* flags + * @is_idle: out parameter indicating whether the returned CPU is idle + * + * Can only be called from ops.select_cpu() if the built-in CPU selection is + * enabled - ops.update_idle() is missing or %SCX_OPS_KEEP_BUILTIN_IDLE is set. + * @p, @prev_cpu and @wake_flags match ops.select_cpu(). + * + * Returns the picked CPU with *@is_idle indicating whether the picked CPU is + * currently idle and thus a good candidate for direct dispatching. + */ +#define scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, is_idle) \ +({ \ + s32 __cpu; \ + \ + if (bpf_ksym_exists(scx_bpf_select_cpu_and)) { \ + __cpu = scx_bpf_select_cpu_and((p), (prev_cpu), (wake_flags), \ + (p)->cpus_ptr, 0); \ + if (__cpu >= 0) { \ + *(is_idle) = true; \ + } else { \ + *(is_idle) = false; \ + __cpu = (prev_cpu); \ + } \ + } else { \ + __cpu = scx_bpf_select_cpu_dfl((p), (prev_cpu), \ + (wake_flags), (is_idle)); \ + } \ + \ + __cpu; \ +}) + /* * Define sched_ext_ops. This may be expanded to define multiple variants for * backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH(). 
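In other words (a usage sketch, assuming the scheduler includes the compat.bpf.h header updated above), existing call sites can stay unchanged and still transparently use the new kfunc on kernels that provide it:

        bool is_idle = false;
        s32 cpu;

        /*
         * The compat macro expands to scx_bpf_select_cpu_and() when the new
         * kfunc is available (detected via bpf_ksym_exists()) and falls back
         * to the deprecated scx_bpf_select_cpu_dfl() kfunc otherwise.
         */
        cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
        if (is_idle)
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
        return cpu;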
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c index 2c720e3ecad59..0075bff928893 100644 --- a/tools/sched_ext/scx_flatcg.bpf.c +++ b/tools/sched_ext/scx_flatcg.bpf.c @@ -317,15 +317,12 @@ static void set_bypassed_at(struct task_struct *p, struct fcg_task_ctx *taskc) s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags) { struct fcg_task_ctx *taskc; - bool is_idle = false; s32 cpu; - cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle); - taskc = bpf_task_storage_get(&task_ctx, p, 0, 0); if (!taskc) { scx_bpf_error("task_ctx lookup failed"); - return cpu; + return prev_cpu; } /* @@ -333,13 +330,16 @@ s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake * idle. Follow it and charge the cgroup later in fcg_stopping() after * the fact. */ - if (is_idle) { + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0); + if (cpu >= 0) { set_bypassed_at(p, taskc); stat_inc(FCG_STAT_LOCAL); scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); + + return cpu; } - return cpu; + return prev_cpu; } void BPF_STRUCT_OPS(fcg_enqueue, struct task_struct *p, u64 enq_flags) diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c index e6de99dba7db6..0e48b2e46a683 100644 --- a/tools/sched_ext/scx_simple.bpf.c +++ b/tools/sched_ext/scx_simple.bpf.c @@ -54,16 +54,17 @@ static void stat_inc(u32 idx) s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags) { - bool is_idle = false; s32 cpu; - cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle); - if (is_idle) { + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0); + if (cpu >= 0) { stat_inc(0); /* count local queueing */ scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0); + + return cpu; } - return cpu; + return prev_cpu; } void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags) diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c index a7cf868d5e311..d3c0716aa79c9 100644 --- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c +++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c @@ -9,10 +9,6 @@ char _license[] SEC("license") = "GPL"; -/* Manually specify the signature until the kfunc is added to the scx repo. */ -s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, - bool *found) __ksym; - s32 BPF_STRUCT_OPS(enq_select_cpu_fails_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags) { @@ -22,14 +18,8 @@ s32 BPF_STRUCT_OPS(enq_select_cpu_fails_select_cpu, struct task_struct *p, void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p, u64 enq_flags) { - /* - * Need to initialize the variable or the verifier will fail to load. - * Improving these semantics is actively being worked on. 
- */ - bool found = false; - /* Can only call from ops.select_cpu() */ - scx_bpf_select_cpu_dfl(p, 0, 0, &found); + scx_bpf_select_cpu_and(p, 0, 0, p->cpus_ptr, 0); scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); } diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c index a80e3a3b3698c..c964444998667 100644 --- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c +++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c @@ -52,7 +52,7 @@ static void cleanup(void *ctx) struct scx_test enq_select_cpu_fails = { .name = "enq_select_cpu_fails", - .description = "Verify we fail to call scx_bpf_select_cpu_dfl() " + .description = "Verify we fail to call scx_bpf_select_cpu_and() " "from ops.enqueue()", .setup = setup, .run = run, diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c index 4bc36182d3ffc..8122421856c1b 100644 --- a/tools/testing/selftests/sched_ext/exit.bpf.c +++ b/tools/testing/selftests/sched_ext/exit.bpf.c @@ -20,12 +20,14 @@ UEI_DEFINE(uei); s32 BPF_STRUCT_OPS(exit_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags) { - bool found; + s32 cpu; if (exit_point == EXIT_SELECT_CPU) EXIT_CLEANLY(); - return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &found); + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0); + + return cpu >= 0 ? cpu : prev_cpu; } void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags) diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c index 815f1d5d61ac4..4e1b698f710e7 100644 --- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c +++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c @@ -27,10 +27,6 @@ struct { __type(value, struct task_ctx); } task_ctx_stor SEC(".maps"); -/* Manually specify the signature until the kfunc is added to the scx repo. */ -s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, - bool *found) __ksym; - s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags) { @@ -43,10 +39,13 @@ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p, return -ESRCH; } - cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, - &tctx->force_local); + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0); + if (cpu >= 0) { + tctx->force_local = true; + return cpu; + } - return cpu; + return prev_cpu; } void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p, diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c index 9b5d232efb7f6..2f450bb14e8d9 100644 --- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c +++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c @@ -66,7 +66,7 @@ static void cleanup(void *ctx) struct scx_test select_cpu_dfl_nodispatch = { .name = "select_cpu_dfl_nodispatch", - .description = "Verify behavior of scx_bpf_select_cpu_dfl() in " + .description = "Verify behavior of scx_bpf_select_cpu_and() in " "ops.select_cpu()", .setup = setup, .run = run,