From patchwork Fri Jun 21 13:53:12 2024
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 13707630
From: Daniel Wagner
Date: Fri, 21 Jun 2024 15:53:12 +0200
Subject: [PATCH 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs
X-Mailing-List: linux-block@vger.kernel.org
Message-Id: <20240621-isolcpus-io-queues-v1-3-8b169bf41083@suse.de>
References: <20240621-isolcpus-io-queues-v1-0-8b169bf41083@suse.de>
In-Reply-To: <20240621-isolcpus-io-queues-v1-0-8b169bf41083@suse.de>
To: Jens Axboe, Keith Busch, Sagi Grimberg, Thomas Gleixner,
    Christoph Hellwig
Cc: Frederic Weisbecker, Mel Gorman, Hannes Reinecke, Sridhar Balaraman,
    "brookxu.cn", Ming Lei, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
    Daniel Wagner
X-Mailer: b4 0.13.0

group_cpus_evenly() distributes all present CPUs into groups. This
ignores the isolcpus configuration and assigns isolated CPUs to the
groups as well.
Make group_cpus_evenly() aware of the isolcpus configuration and use the
housekeeping CPU mask as the base for distributing the available CPUs
into groups.

Signed-off-by: Daniel Wagner
---
 lib/group_cpus.c | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 62 insertions(+), 2 deletions(-)

diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index ee272c4cefcc..f1517a44abc9 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>

 #ifdef CONFIG_SMP

@@ -330,7 +331,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
 }

 /**
- * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
+ * group_possible_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
  * @numgrps: number of groups
  *
  * Return: cpumask array if successful, NULL otherwise. And each element
@@ -344,7 +345,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
  * We guarantee in the resulted grouping that all CPUs are covered, and
  * no same CPU is assigned to multiple groups
  */
-struct cpumask *group_cpus_evenly(unsigned int numgrps)
+static struct cpumask *group_possible_cpus_evenly(unsigned int numgrps)
 {
 	unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
 	cpumask_var_t *node_to_cpumask;
@@ -423,6 +424,65 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 	}
 	return masks;
 }
+
+static struct cpumask *group_mask_cpus_evenly(unsigned int numgrps,
+					      const struct cpumask *cpu_mask)
+{
+	cpumask_var_t *node_to_cpumask;
+	cpumask_var_t nmsk;
+	int ret = -ENOMEM;
+	struct cpumask *masks = NULL;
+
+	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+		return NULL;
+
+	node_to_cpumask = alloc_node_to_cpumask();
+	if (!node_to_cpumask)
+		goto fail_nmsk;
+
+	masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
+	if (!masks)
+		goto fail_node_to_cpumask;
+
+	build_node_to_cpumask(node_to_cpumask);
+
+	ret = __group_cpus_evenly(0, numgrps, node_to_cpumask, cpu_mask, nmsk,
+				  masks);
+
+fail_node_to_cpumask:
+	free_node_to_cpumask(node_to_cpumask);
+
+fail_nmsk:
+	free_cpumask_var(nmsk);
+	if (ret < 0) {
+		kfree(masks);
+		return NULL;
+	}
+	return masks;
+}
+
+/**
+ * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
+ * @numgrps: number of groups
+ *
+ * Return: cpumask array if successful, NULL otherwise. And each element
+ * includes CPUs assigned to this group
+ *
+ * group_possible_cpus_evenly() is used to distribute the CPUs over all
+ * possible CPUs in the absence of the isolcpus command line argument.
+ * If the isolcpus argument is used with the io_queue option, only the
+ * housekeeping CPUs are considered.
+ */
+struct cpumask *group_cpus_evenly(unsigned int numgrps)
+{
+	const struct cpumask *io_queue_mask;
+
+	io_queue_mask = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
+	if (!cpumask_empty(io_queue_mask))
+		return group_mask_cpus_evenly(numgrps, io_queue_mask);
+	else
+		return group_possible_cpus_evenly(numgrps);
+}
 #else /* CONFIG_SMP */
 struct cpumask *group_cpus_evenly(unsigned int numgrps)
 {
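
For readers unfamiliar with the grouping logic, the following is a minimal
userspace sketch of the behaviour this change aims for. It is not the kernel
implementation and it ignores the NUMA-aware spreading that
__group_cpus_evenly() performs: it simply distributes the CPUs present in a
housekeeping mask round-robin into the requested number of groups, so
isolated CPUs are never assigned. The names hk_mask and group_cpus() are
made up for the example.

#include <stdint.h>
#include <stdio.h>

#define MAX_CPUS   16
#define NUM_GROUPS 4

/*
 * Toy model: distribute only the housekeeping CPUs (bits set in hk_mask)
 * round-robin into NUM_GROUPS groups. The real group_cpus_evenly() also
 * balances the groups per NUMA node; this sketch ignores locality.
 */
static void group_cpus(uint32_t hk_mask, uint32_t groups[NUM_GROUPS])
{
	unsigned int grp = 0;

	for (unsigned int cpu = 0; cpu < MAX_CPUS; cpu++) {
		if (!(hk_mask & (1u << cpu)))
			continue;	/* isolated CPU: never assigned */
		groups[grp] |= 1u << cpu;
		grp = (grp + 1) % NUM_GROUPS;
	}
}

int main(void)
{
	/* e.g. CPUs 8-15 isolated -> CPUs 0-7 are housekeeping */
	uint32_t hk_mask = 0x00ff;
	uint32_t groups[NUM_GROUPS] = { 0 };

	group_cpus(hk_mask, groups);
	for (unsigned int i = 0; i < NUM_GROUPS; i++)
		printf("group %u: cpus 0x%04x\n", i, groups[i]);
	return 0;
}

Assuming the io_queue housekeeping flag added earlier in this series, booting
with something like isolcpus=io_queue,8-15 would leave only CPUs 0-7 eligible
for queue mappings. Without that flag the housekeeping mask is empty and
group_cpus_evenly() falls back to the existing spread over all possible CPUs,
so current setups keep their behaviour.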