From patchwork Wed Dec 9 06:19:27 2015
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 7804991
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Vincent Guittot, Morten Rasmussen, Dietmar Eggemann,
	Juri Lelli, Patrick Bellasi, Michael Turquette, Juri Lelli
Subject: [RFCv6 PATCH 06/10] sched/fair: cpufreq_sched triggers for load balancing
Date: Tue, 8 Dec 2015 22:19:27 -0800
Message-Id: <1449641971-20827-7-git-send-email-smuckle@linaro.org>
X-Mailer: git-send-email 2.4.10
In-Reply-To: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
References: <1449641971-20827-1-git-send-email-smuckle@linaro.org>

From: Juri Lelli

As we don't trigger freq changes from {en,de}queue_task_fair() during
load balancing, we need to do so explicitly on the load balancing paths.

[smuckle@linaro.org: move update_capacity_of calls so rq lock is held]

cc: Ingo Molnar
cc: Peter Zijlstra
Signed-off-by: Juri Lelli
Signed-off-by: Steve Muckle
---
 kernel/sched/fair.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1bfbbb7..880ceee 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6023,6 +6023,10 @@ static void attach_one_task(struct rq *rq, struct task_struct *p)
 {
 	raw_spin_lock(&rq->lock);
 	attach_task(rq, p);
+	/*
+	 * We want to potentially raise target_cpu's OPP.
+	 */
+	update_capacity_of(cpu_of(rq));
 	raw_spin_unlock(&rq->lock);
 }
 
@@ -6044,6 +6048,11 @@ static void attach_tasks(struct lb_env *env)
 		attach_task(env->dst_rq, p);
 	}
 
+	/*
+	 * We want to potentially raise env.dst_cpu's OPP.
+	 */
+	update_capacity_of(env->dst_cpu);
+
 	raw_spin_unlock(&env->dst_rq->lock);
 }
 
@@ -7183,6 +7192,11 @@ more_balance:
 		 * ld_moved - cumulative load moved across iterations
 		 */
 		cur_ld_moved = detach_tasks(&env);
+		/*
+		 * We want to potentially lower env.src_cpu's OPP.
+		 */
+		if (cur_ld_moved)
+			update_capacity_of(env.src_cpu);
 
 		/*
 		 * We've detached some tasks from busiest_rq. Every
@@ -7547,8 +7561,13 @@ static int active_load_balance_cpu_stop(void *data)
 		schedstat_inc(sd, alb_count);
 
 		p = detach_one_task(&env);
-		if (p)
+		if (p) {
 			schedstat_inc(sd, alb_pushed);
+			/*
+			 * We want to potentially lower env.src_cpu's OPP.
+			 */
+			update_capacity_of(env.src_cpu);
+		}
 		else
 			schedstat_inc(sd, alb_failed);
 	}
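
For context: update_capacity_of() is not defined by this patch; it comes from
the earlier scheduler-driven frequency selection (cpufreq_sched) patches in
this series. A minimal sketch of the idea, assuming the sched_freq(),
cpu_util(), capacity_orig_of() and set_cfs_cpu_capacity() helpers introduced
there, looks roughly like:

/*
 * Sketch only -- the real helper is defined earlier in this series.
 * Convert the CPU's current CFS utilization into a capacity request and
 * hand it to the scheduler-driven cpufreq governor.
 */
static void update_capacity_of(int cpu)
{
	unsigned long req_cap;

	/* Nothing to do if scheduler-driven frequency selection is off. */
	if (!sched_freq())
		return;

	/* Normalize scale-invariant utilization to this CPU's original capacity. */
	req_cap = cpu_util(cpu) * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
	set_cfs_cpu_capacity(cpu, true, req_cap);
}

Because the request is derived from the rq's just-updated utilization, the
calls added above are made with the relevant rq lock held (per the
[smuckle@linaro.org] note), so the capacity request stays consistent with the
tasks that were just attached or detached.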