From patchwork Mon Dec 17 03:13:31 2012
X-Patchwork-Submitter: "alex.shi"
X-Patchwork-Id: 1885921
Message-ID: <50CE8DDB.5080506@intel.com>
Date: Mon, 17 Dec 2012 11:13:31 +0800
From: Alex Shi
To: Mike Galbraith, peterz@infradead.org, mingo@kernel.org
Subject: Re: [RFC PATCH v2 3/6] sched: pack small tasks
References: <1355319092-30980-1-git-send-email-vincent.guittot@linaro.org>
 <1355319092-30980-4-git-send-email-vincent.guittot@linaro.org>
 <50C93AC1.1060202@intel.com> <50C9E552.1010600@intel.com>
 <1355460356.5777.12.camel@marge.simpson.net>
In-Reply-To: <1355460356.5777.12.camel@marge.simpson.net>
Cc: len.brown@intel.com, tony.luck@intel.com, linaro-dev@lists.linaro.org,
 arjan@linux.intel.com, linux-kernel@vger.kernel.org, cmetcalf@tilera.com,
 linux@arm.linux.org.uk, santosh.shilimkar@ti.com, paulmck@linux.vnet.ibm.com,
 amit.kucheria@linaro.org, viresh.kumar@linaro.org, preeti@linux.vnet.ibm.com,
 Vincent Guittot, tglx@linutronix.de, chander.kashyap@linaro.org,
 pjt@google.com, Morten.Rasmussen@arm.com, linux-arm-kernel@lists.infradead.org

>
> CPU is a bug that slipped into domain degeneration.  You should have
> SIBLING/MC/NUMA (chasing that down is on todo).
Uh, I restored SD_PREFER_SIBLING on the cpu domain myself to fix a
shared-memory benchmark regression. But considering all the situations,
I think the flag is better removed.

============

From 96bee9a03b2048f2686fbd7de0e2aee458dbd917 Mon Sep 17 00:00:00 2001
From: Alex Shi
Date: Mon, 17 Dec 2012 09:42:57 +0800
Subject: [PATCH 01/18] sched: remove SD_PREFER_SIBLING flag

The flag was introduced in commit b5d978e0c7e79a. Its purpose seems to be
to fill up one node first on a NUMA machine, by pulling tasks from other
nodes while that node still has capacity. The advantage is that when a few
tasks share memory with each other, packing them together helps locality
and so gives a performance gain. The drawback is that it keeps causing
unnecessary task migrations that thrash among different nodes, which eats
into that gain and simply hurts performance when the tasks share no memory
at all.

Considering that the sched NUMA balancing patches are coming, this small
advantage is meaningless to us, so it is better to remove the flag.

Reported-by: Mike Galbraith
Signed-off-by: Alex Shi
---
 include/linux/sched.h    |  1 -
 include/linux/topology.h |  2 --
 kernel/sched/core.c      |  1 -
 kernel/sched/fair.c      | 19 +------------------
 4 files changed, 1 insertion(+), 22 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5dafac3..6dca96c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -836,7 +836,6 @@ enum cpu_idle_type {
 #define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
 #define SD_SERIALIZE		0x0400	/* Only a single load balancing instance */
 #define SD_ASYM_PACKING		0x0800  /* Place busy groups earlier in the domain */
-#define SD_PREFER_SIBLING	0x1000	/* Prefer to place tasks in a sibling domain */
 #define SD_OVERLAP		0x2000	/* sched_domains of this level overlap */
 
 extern int __weak arch_sd_sibiling_asym_packing(void);
diff --git a/include/linux/topology.h b/include/linux/topology.h
index d3cf0d6..15864d1 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -100,7 +100,6 @@ int arch_update_cpu_topology(void);
 				| 1*SD_SHARE_CPUPOWER			\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| 0*SD_PREFER_SIBLING			\
 				| arch_sd_sibling_asym_packing()	\
 				,					\
 				.last_balance		= jiffies,	\
@@ -162,7 +161,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 0*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| 1*SD_PREFER_SIBLING			\
 				,					\
 				.last_balance		= jiffies,	\
 				.balance_interval	= 1,		\
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5dae0d2..8ed2784 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6014,7 +6014,6 @@ sd_numa_init(struct sched_domain_topology_level *tl, int cpu)
 					| 0*SD_SHARE_CPUPOWER
 					| 0*SD_SHARE_PKG_RESOURCES
 					| 1*SD_SERIALIZE
-					| 0*SD_PREFER_SIBLING
 					| sd_local_flags(level)
 					,
 		.last_balance		= jiffies,
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 59e072b..5d175f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4339,13 +4339,9 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 static inline void update_sd_lb_stats(struct lb_env *env,
 					int *balance, struct sd_lb_stats *sds)
 {
-	struct sched_domain *child = env->sd->child;
 	struct sched_group *sg = env->sd->groups;
 	struct sg_lb_stats sgs;
-	int load_idx, prefer_sibling = 0;
-
-	if (child && child->flags & SD_PREFER_SIBLING)
-		prefer_sibling = 1;
+	int load_idx;
 
 	load_idx = get_sd_load_idx(env->sd, env->idle);
 
@@ -4362,19 +4358,6 @@ static inline void update_sd_lb_stats(struct lb_env *env,
 		sds->total_load += sgs.group_load;
 		sds->total_pwr += sg->sgp->power;
 
-		/*
-		 * In case the child domain prefers tasks go to siblings
-		 * first, lower the sg capacity to one so that we'll try
-		 * and move all the excess tasks away. We lower the capacity
-		 * of a group only if the local group has the capacity to fit
-		 * these excess tasks, i.e. nr_running < group_capacity. The
-		 * extra check prevents the case where you always pull from the
-		 * heaviest group when it is already under-utilized (possible
-		 * with a large weight task outweighs the tasks on the system).
-		 */
-		if (prefer_sibling && !local_group && sds->this_has_capacity)
-			sgs.group_capacity = min(sgs.group_capacity, 1UL);
-
 		if (local_group) {
			sds->this_load = sgs.avg_load;
			sds->this = sg;
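
Not part of the patch, just to make the removed behaviour easier to see: below is
a minimal user-space sketch (assumed names, not kernel code) of the capacity clamp
that the fair.c hunk deletes. With the flag set, a non-local group's capacity is
forced down to 1 whenever the local group still has room, which is what drives the
extra pulls discussed in the changelog.

#include <stdio.h>

/* Toy model of the removed SD_PREFER_SIBLING clamp; the kernel applied it
 * inside update_sd_lb_stats() on the per-group sg_lb_stats. */
static unsigned long effective_capacity(int prefer_sibling, int local_group,
					int this_has_capacity,
					unsigned long group_capacity)
{
	/* Mirrors the removed check:
	 * if (prefer_sibling && !local_group && sds->this_has_capacity)
	 *	sgs.group_capacity = min(sgs.group_capacity, 1UL);        */
	if (prefer_sibling && !local_group && this_has_capacity)
		return group_capacity < 1UL ? group_capacity : 1UL;
	return group_capacity;
}

int main(void)
{
	unsigned long remote_capacity = 4;	/* a remote group that could hold 4 tasks */
	int this_has_capacity = 1;		/* local group: nr_running < group_capacity */

	printf("flag clear: remote capacity seen as %lu\n",
	       effective_capacity(0, 0, this_has_capacity, remote_capacity));
	printf("flag set:   remote capacity seen as %lu\n",
	       effective_capacity(1, 0, this_has_capacity, remote_capacity));
	return 0;
}

With the flag clear the balancer sees the remote group's real capacity (4); with it
set that capacity collapses to 1, so any second task there looks like excess to be
pulled toward the local domain.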