From patchwork Wed Jan 26 08:09:46 2022
X-Patchwork-Submitter: Yicong Yang
X-Patchwork-Id: 12724709
From: Yicong Yang
Subject: [PATCH v2 1/2] sched: Add per_cpu cluster domain info and cpus_share_resources API
Date: Wed, 26 Jan 2022 16:09:46 +0800
Message-ID: <20220126080947.4529-2-yangyicong@hisilicon.com>
In-Reply-To: <20220126080947.4529-1-yangyicong@hisilicon.com>
References: <20220126080947.4529-1-yangyicong@hisilicon.com>

From: Barry Song

Add per-CPU cluster domain info and a cpus_share_resources() API. This is
preparation for optimizing select_idle_cpu() on platforms with a cluster
scheduler level.
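To make the intended semantics concrete, below is a small userspace sketch
(illustrative only, not part of the patch): it models one LLC of 16 CPUs split
into 4-CPU clusters, with arrays standing in for the per-CPU sd_llc_id and
sd_share_id variables introduced by the hunks that follow, and simplified
stand-ins for cpus_share_cache()/cpus_share_resources().

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 16

/* first CPU of the LLC span / cluster span, as update_top_cache_domain() would set them */
static int sd_llc_id[NR_CPUS];
static int sd_share_id[NR_CPUS];

static bool share_cache(int a, int b)
{
        return a == b || sd_llc_id[a] == sd_llc_id[b];
}

static bool share_resources(int a, int b)
{
        return a == b || sd_share_id[a] == sd_share_id[b];
}

int main(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                sd_llc_id[cpu] = 0;             /* one LLC covering CPUs 0-15   */
                sd_share_id[cpu] = cpu / 4 * 4; /* 4-CPU clusters: 0, 4, 8, 12  */
        }

        /* same cluster: shares both cache and resources */
        printf("CPUs 1,2: cache=%d resources=%d\n", share_cache(1, 2), share_resources(1, 2));
        /* same LLC, different clusters: shares cache only */
        printf("CPUs 1,5: cache=%d resources=%d\n", share_cache(1, 5), share_resources(1, 5));
        return 0;
}

On a machine without a cluster level, sd_share_id would simply equal sd_llc_id
for every CPU, so the two helpers give the same answer there.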
Signed-off-by: Barry Song
Signed-off-by: Yicong Yang
Reviewed-by: Gautham R. Shenoy
---
 include/linux/sched/sd_flags.h |  7 +++++++
 include/linux/sched/topology.h |  8 +++++++-
 kernel/sched/core.c            | 12 ++++++++++++
 kernel/sched/sched.h           |  2 ++
 kernel/sched/topology.c        | 15 +++++++++++++++
 5 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/sd_flags.h b/include/linux/sched/sd_flags.h
index 57bde66d95f7..42ed454e8b18 100644
--- a/include/linux/sched/sd_flags.h
+++ b/include/linux/sched/sd_flags.h
@@ -109,6 +109,13 @@ SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
  */
 SD_FLAG(SD_SHARE_CPUCAPACITY, SDF_SHARED_CHILD | SDF_NEEDS_GROUPS)
 
+/*
+ * Domain members share CPU cluster (LLC tags or L2 cache)
+ *
+ * NEEDS_GROUPS: Clusters are shared between groups.
+ */
+SD_FLAG(SD_CLUSTER, SDF_NEEDS_GROUPS)
+
 /*
  * Domain members share CPU package resources (i.e. caches)
  *
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 8054641c0a7b..2f84fdf00481 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -45,7 +45,7 @@ static inline int cpu_smt_flags(void)
 #ifdef CONFIG_SCHED_CLUSTER
 static inline int cpu_cluster_flags(void)
 {
-        return SD_SHARE_PKG_RESOURCES;
+        return SD_CLUSTER | SD_SHARE_PKG_RESOURCES;
 }
 #endif
 
@@ -177,6 +177,7 @@ cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
 bool cpus_share_cache(int this_cpu, int that_cpu);
+bool cpus_share_resources(int this_cpu, int that_cpu);
 
 typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
 typedef int (*sched_domain_flags_f)(void);
@@ -230,6 +231,11 @@ static inline bool cpus_share_cache(int this_cpu, int that_cpu)
 	return true;
 }
 
+static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+{
+	return true;
+}
+
 #endif	/* !CONFIG_SMP */
 
 #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 848eaa0efe0e..7b203a6d96b2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3746,6 +3746,18 @@ bool cpus_share_cache(int this_cpu, int that_cpu)
 	return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
 }
 
+/*
+ * Whether CPUs share cache resources: the LLC on non-cluster machines,
+ * and the LLC tag or L2 cache on machines with clusters.
+ */
+bool cpus_share_resources(int this_cpu, int that_cpu)
+{
+	if (this_cpu == that_cpu)
+		return true;
+
+	return per_cpu(sd_share_id, this_cpu) == per_cpu(sd_share_id, that_cpu);
+}
+
 static inline bool ttwu_queue_cond(int cpu, int wake_flags)
 {
 	/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index de53be905739..d04b342cc28d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1765,7 +1765,9 @@ static inline struct sched_domain *lowest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
+DECLARE_PER_CPU(int, sd_share_id);
 DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
+DECLARE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d201a7052a29..408fede6e732 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -644,6 +644,8 @@ static void destroy_sched_domains(struct sched_domain *sd)
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
+DEFINE_PER_CPU(int, sd_share_id);
+DEFINE_PER_CPU(struct sched_domain __rcu *, sd_cluster);
 DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
@@ -669,6 +671,18 @@ static void update_top_cache_domain(int cpu)
 	per_cpu(sd_llc_id, cpu) = id;
 	rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
 
+	sd = lowest_flag_domain(cpu, SD_CLUSTER);
+	if (sd)
+		id = cpumask_first(sched_domain_span(sd));
+	rcu_assign_pointer(per_cpu(sd_cluster, cpu), sd);
+
+	/*
+	 * This assignment should be placed after the sd_llc_id assignment, as
+	 * we want this id to equal the cluster id on machines with clusters
+	 * but the LLC id on non-cluster machines.
+	 */
+	per_cpu(sd_share_id, cpu) = id;
+
 	sd = lowest_flag_domain(cpu, SD_NUMA);
 	rcu_assign_pointer(per_cpu(sd_numa, cpu), sd);
 
@@ -1514,6 +1528,7 @@ static unsigned long __read_mostly *sched_numa_onlined_nodes;
  */
 #define TOPOLOGY_SD_FLAGS		\
 	(SD_SHARE_CPUCAPACITY	|	\
+	 SD_CLUSTER		|	\
	 SD_SHARE_PKG_RESOURCES |	\
	 SD_NUMA		|	\
	 SD_ASYM_PACKING)

From patchwork Wed Jan 26 08:09:47 2022
X-Patchwork-Submitter: Yicong Yang
X-Patchwork-Id: 12724711
From: Yicong Yang
Subject: [PATCH v2 2/2] sched/fair: Scan cluster before scanning LLC in wake-up path
Date: Wed, 26 Jan 2022 16:09:47 +0800
Message-ID: <20220126080947.4529-3-yangyicong@hisilicon.com>
In-Reply-To: <20220126080947.4529-1-yangyicong@hisilicon.com>
References: <20220126080947.4529-1-yangyicong@hisilicon.com>
From: Barry Song

On platforms with clusters, such as Kunpeng920, CPUs within the same cluster
have lower latency when synchronizing and accessing shared resources like the
cache. This patch therefore tries to find an idle CPU within the cluster of the
target CPU before scanning the whole LLC, to gain lower wake-up latency.

Note that neither Kunpeng920 nor x86 Jacobsville supports SMT, so this patch
does not consider SMT for the moment.

Testing has been done on Kunpeng920 by pinning tasks to one NUMA node and to
two NUMA nodes. On Kunpeng920, each NUMA node has 8 clusters and each cluster
has 4 CPUs. With this patch, tbench improves both within one NUMA node and
across two NUMA nodes.

On numa 0:
                           5.17-rc1              patched
Hmean     1      324.73 (   0.00%)      378.01 *  16.41%*
Hmean     2      645.36 (   0.00%)      754.63 *  16.93%*
Hmean     4     1302.09 (   0.00%)     1507.54 *  15.78%*
Hmean     8     2612.03 (   0.00%)     2982.57 *  14.19%*
Hmean     16    5307.12 (   0.00%)     5886.66 *  10.92%*
Hmean     32    9354.22 (   0.00%)     9908.13 *   5.92%*
Hmean     64    7240.35 (   0.00%)     7278.78 *   0.53%*
Hmean     128   6186.40 (   0.00%)     6187.85 (   0.02%)

On numa 0-1:
                           5.17-rc1              patched
Hmean     1      320.01 (   0.00%)      378.44 *  18.26%*
Hmean     2      643.85 (   0.00%)      752.52 *  16.88%*
Hmean     4     1287.36 (   0.00%)     1505.62 *  16.95%*
Hmean     8     2564.60 (   0.00%)     2955.29 *  15.23%*
Hmean     16    5195.69 (   0.00%)     5814.74 *  11.91%*
Hmean     32    9769.16 (   0.00%)    10872.63 *  11.30%*
Hmean     64   15952.50 (   0.00%)    17281.98 *   8.33%*
Hmean     128  13113.77 (   0.00%)    13895.20 *   5.96%*
Hmean     256  10997.59 (   0.00%)    11244.69 *   2.25%*
Hmean     512  14623.60 (   0.00%)    15526.25 *   6.17%*

This also helps MySQL. With the MySQL server running on NUMA node 0 and the
client on NUMA node 1, both QPS and average latency improve in the read-write
case:

                             5.17-rc1             patched
QPS-16threads           143333.2633    145077.4033 (+1.22%)
QPS-24threads           195085.9367    202719.6133 (+3.91%)
QPS-32threads           241165.6867    249020.7400 (+3.26%)
QPS-64threads           244586.8433    253387.7567 (+3.60%)
avg-lat-16threads              2.23           2.19 (+1.19%)
avg-lat-24threads              2.46           2.36 (+3.79%)
avg-lat-36threads              2.66           2.57 (+3.26%)
avg-lat-64threads              5.23           5.05 (+3.44%)

Tested-by: Yicong Yang
Signed-off-by: Barry Song
Signed-off-by: Yicong Yang
---
 kernel/sched/fair.c | 46 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5146163bfabb..2f84a933aedd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6262,12 +6262,46 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
 
 #endif /* CONFIG_SCHED_SMT */
 
+#ifdef CONFIG_SCHED_CLUSTER
+/*
+ * Scan the cluster domain for idle CPUs and clear the cluster cpumask after scanning
+ */
+static inline int scan_cluster(struct task_struct *p, int prev_cpu, int target)
+{
+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+	struct sched_domain *sd = rcu_dereference(per_cpu(sd_cluster, target));
+	int cpu, idle_cpu;
+
+	/* TODO: support the SMT case once a machine with both clusters and SMT appears */
+	if (!sched_smt_active() && sd) {
+		for_each_cpu_and(cpu, cpus, sched_domain_span(sd)) {
+			idle_cpu = __select_idle_cpu(cpu, p);
+			if ((unsigned int)idle_cpu < nr_cpumask_bits)
+				return idle_cpu;
+		}
+
+		/* Don't ping-pong tasks in and out of the cluster frequently */
+		if (cpus_share_resources(target, prev_cpu))
+			return target;
+
+		cpumask_andnot(cpus, cpus, sched_domain_span(sd));
+	}
+
+	return -1;
+}
+#else
+static inline int scan_cluster(struct task_struct *p, int prev_cpu, int target)
+{
+	return -1;
+}
+#endif
+
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
  * average idle time for this rq (as found in rq->avg_idle).
  */
-static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool has_idle_core, int target)
+static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool has_idle_core, int prev_cpu, int target)
 {
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	int i, cpu, idle_cpu = -1, nr = INT_MAX;
@@ -6282,6 +6316,10 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
+	idle_cpu = scan_cluster(p, prev_cpu, target);
+	if ((unsigned int)idle_cpu < nr_cpumask_bits)
+		return idle_cpu;
+
 	if (sched_feat(SIS_PROP) && !has_idle_core) {
 		u64 avg_cost, avg_idle, span_avg;
 		unsigned long now = jiffies;
@@ -6416,7 +6454,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	/*
 	 * If the previous CPU is cache affine and idle, don't be stupid:
 	 */
-	if (prev != target && cpus_share_cache(prev, target) &&
+	if (prev != target && cpus_share_resources(prev, target) &&
 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
 	    asym_fits_capacity(task_util, prev))
 		return prev;
@@ -6442,7 +6480,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		p->recent_used_cpu = prev;
 		if (recent_used_cpu != prev &&
 		    recent_used_cpu != target &&
-		    cpus_share_cache(recent_used_cpu, target) &&
+		    cpus_share_resources(recent_used_cpu, target) &&
 		    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 		    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
 		    asym_fits_capacity(task_util, recent_used_cpu)) {
@@ -6483,7 +6521,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		}
 	}
 
-	i = select_idle_cpu(p, sd, has_idle_core, target);
+	i = select_idle_cpu(p, sd, has_idle_core, prev, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
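To summarise the behaviour the series adds, here is a small userspace sketch
(illustrative only, not code from the patch): it models the scan order
select_idle_cpu() now follows for a 16-CPU LLC made of 4-CPU clusters, with an
"idle" array standing in for __select_idle_cpu()/available_idle_cpu() and the
SIS_PROP throttling left out.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS    16
#define CLUSTER_SZ  4

static bool idle[NR_CPUS];

/* Cluster of the target first (cheapest wake-up), then the rest of the LLC. */
static int scan_llc(int target)
{
        int first = target / CLUSTER_SZ * CLUSTER_SZ;
        int cpu;

        for (cpu = first; cpu < first + CLUSTER_SZ; cpu++)
                if (idle[cpu])
                        return cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                if (cpu >= first && cpu < first + CLUSTER_SZ)
                        continue;       /* cluster CPUs already scanned */
                if (idle[cpu])
                        return cpu;
        }

        return -1;
}

int main(void)
{
        idle[7] = idle[13] = true;
        /* target 5 sits in cluster 4-7, so CPU 7 is preferred over CPU 13 */
        printf("target 5 -> CPU %d\n", scan_llc(5));
        return 0;
}

The real code achieves the same exclusion differently: scan_cluster() clears
the cluster span from the select_idle_mask cpumask with cpumask_andnot() before
the ordinary LLC scan runs.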