From patchwork Wed Dec 9 06:19:25 2015
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 7805021
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
    Vincent Guittot, Morten Rasmussen, Dietmar Eggemann, Juri Lelli,
    Patrick Bellasi, Michael Turquette, Juri Lelli
Subject: [RFCv6 PATCH 04/10] sched/fair: add triggers for OPP change requests
Date: Tue, 8 Dec 2015 22:19:25 -0800
Message-Id: <1449641971-20827-5-git-send-email-smuckle@linaro.org>
X-Mailer: git-send-email 2.4.10
In-Reply-To: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
References: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

From: Juri Lelli

Each time a task is {en,de}queued we might need to adapt the current
frequency to
the new usage. Add triggers on {en,de}queue_task_fair() for this
purpose. Only trigger a freq request if we are effectively waking up
or going to sleep. Filter out load balancing related calls to reduce
the number of triggers.

[smuckle@linaro.org: resolve merge conflicts, define task_new,
 use renamed static key sched_freq]

cc: Ingo Molnar
cc: Peter Zijlstra
Signed-off-by: Juri Lelli
Signed-off-by: Steve Muckle
---
 kernel/sched/fair.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95b83c4..904188a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4199,6 +4199,21 @@ static inline void hrtick_update(struct rq *rq)
 }
 #endif
 
+static unsigned long capacity_orig_of(int cpu);
+static int cpu_util(int cpu);
+
+static void update_capacity_of(int cpu)
+{
+	unsigned long req_cap;
+
+	if (!sched_freq())
+		return;
+
+	/* Convert scale-invariant capacity to cpu. */
+	req_cap = cpu_util(cpu) * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
+	set_cfs_cpu_capacity(cpu, true, req_cap);
+}
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -4209,6 +4224,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 {
 	struct cfs_rq *cfs_rq;
 	struct sched_entity *se = &p->se;
+	int task_new = !(flags & ENQUEUE_WAKEUP);
 
 	for_each_sched_entity(se) {
 		if (se->on_rq)
@@ -4240,9 +4256,23 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		update_cfs_shares(cfs_rq);
 	}
 
-	if (!se)
+	if (!se) {
 		add_nr_running(rq, 1);
+		/*
+		 * We want to potentially trigger a freq switch
+		 * request only for tasks that are waking up; this is
+		 * because we get here also during load balancing, but
+		 * in these cases it seems wise to trigger as single
+		 * request after load balancing is done.
+		 *
+		 * XXX: how about fork()? Do we need a special
+		 * flag/something to tell if we are here after a
+		 * fork() (wakeup_task_new)?
+		 */
+		if (!task_new)
+			update_capacity_of(cpu_of(rq));
+	}
 	hrtick_update(rq);
 }
 
@@ -4300,9 +4330,24 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		update_cfs_shares(cfs_rq);
 	}
 
-	if (!se)
+	if (!se) {
 		sub_nr_running(rq, 1);
+		/*
+		 * We want to potentially trigger a freq switch
+		 * request only for tasks that are going to sleep;
+		 * this is because we get here also during load
+		 * balancing, but in these cases it seems wise to
+		 * trigger as single request after load balancing is
+		 * done.
+		 */
+		if (task_sleep) {
+			if (rq->cfs.nr_running)
+				update_capacity_of(cpu_of(rq));
+			else if (sched_freq())
+				set_cfs_cpu_capacity(cpu_of(rq), false, 0);
+		}
+	}
 	hrtick_update(rq);
 }