From patchwork Tue Feb 23 01:22:46 2016
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 8385741
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki"
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    Vincent Guittot, Morten Rasmussen, Dietmar Eggemann, Juri Lelli,
    Patrick Bellasi, Michael Turquette, Juri Lelli
Subject: [RFCv7 PATCH 06/10] sched/fair: cpufreq_sched triggers for load balancing
Date: Mon, 22 Feb 2016 17:22:46 -0800
Message-Id: <1456190570-4475-7-git-send-email-smuckle@linaro.org>
X-Mailer: git-send-email 2.4.10
In-Reply-To: <1456190570-4475-1-git-send-email-smuckle@linaro.org>
References: <1456190570-4475-1-git-send-email-smuckle@linaro.org>

From: Juri Lelli

As we don't trigger freq changes from {en,de}queue_task_fair() during
load balancing, we need to do so explicitly on the load balancing paths.

[smuckle@linaro.org: move update_capacity_of calls so rq lock is held]

cc: Ingo Molnar
cc: Peter Zijlstra
Signed-off-by: Juri Lelli
Signed-off-by: Steve Muckle
---
 kernel/sched/fair.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7fab8f..5531513 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6107,6 +6107,10 @@ static void attach_one_task(struct rq *rq, struct task_struct *p)
 {
 	raw_spin_lock(&rq->lock);
 	attach_task(rq, p);
+	/*
+	 * We want to potentially raise target_cpu's OPP.
+	 */
+	update_capacity_of(cpu_of(rq));
 	raw_spin_unlock(&rq->lock);
 }
 
@@ -6128,6 +6132,11 @@ static void attach_tasks(struct lb_env *env)
 		attach_task(env->dst_rq, p);
 	}
 
+	/*
+	 * We want to potentially raise env.dst_cpu's OPP.
+	 */
+	update_capacity_of(env->dst_cpu);
+
 	raw_spin_unlock(&env->dst_rq->lock);
 }
 
@@ -7267,6 +7276,11 @@ more_balance:
 		 * ld_moved - cumulative load moved across iterations
 		 */
		cur_ld_moved = detach_tasks(&env);
+		/*
+		 * We want to potentially lower env.src_cpu's OPP.
+		 */
+		if (cur_ld_moved)
+			update_capacity_of(env.src_cpu);
 
 		/*
 		 * We've detached some tasks from busiest_rq. Every
@@ -7631,8 +7645,13 @@ static int active_load_balance_cpu_stop(void *data)
 		schedstat_inc(sd, alb_count);
 
 		p = detach_one_task(&env);
-		if (p)
+		if (p) {
 			schedstat_inc(sd, alb_pushed);
-		else
+			/*
+			 * We want to potentially lower env.src_cpu's OPP.
+			 */
+			update_capacity_of(env.src_cpu);
+		} else
 			schedstat_inc(sd, alb_failed);
 	}
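
---

Note for readers joining at this patch: update_capacity_of() is introduced
earlier in this series, not here. As a minimal sketch of what the calls
above trigger, assuming the helpers from the earlier patches (sched_freq(),
cpu_util(), capacity_orig_of(), set_cfs_cpu_capacity()); details of the
actual implementation may differ from this approximation:

/*
 * Sketch only; approximates the helper added earlier in this series.
 * Compute the capacity the CPU currently needs and hand it to the
 * cpufreq_sched governor as an OPP (frequency) request.
 */
static void update_capacity_of(int cpu)
{
	unsigned long req_cap;

	/* Nothing to do if scheduler-driven frequency selection is off. */
	if (!sched_freq())
		return;

	/* Scale the CPU's utilization by its original capacity. */
	req_cap = cpu_util(cpu) * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);

	/* Record the CFS capacity request; may kick an OPP change. */
	set_cfs_cpu_capacity(cpu, true, req_cap);
}

The calls sit inside the rq-locked regions (per the [smuckle@linaro.org]
note above) so the request is computed against a stable view of the
runqueue: attach paths may raise the destination CPU's OPP after tasks
arrive, and detach paths may lower the source CPU's OPP after tasks leave.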