From patchwork Tue Feb 23 01:22:50 2016
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 8385921
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki"
Wysocki" Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot , Morten Rasmussen , Dietmar Eggemann , Juri Lelli , Patrick Bellasi , Michael Turquette Subject: [RFCv7 PATCH 10/10] sched: rt scheduler sets capacity requirement Date: Mon, 22 Feb 2016 17:22:50 -0800 Message-Id: <1456190570-4475-11-git-send-email-smuckle@linaro.org> X-Mailer: git-send-email 2.4.10 In-Reply-To: <1456190570-4475-1-git-send-email-smuckle@linaro.org> References: <1456190570-4475-1-git-send-email-smuckle@linaro.org> Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-6.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, RCVD_IN_DNSWL_HI,RP_MATCHES_RCVD,T_DKIM_INVALID,UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Vincent Guittot RT tasks don't provide any running constraints like deadline ones except their running priority. The only current usable input to estimate the capacity needed by RT tasks is the rt_avg metric. We use it to estimate the CPU capacity needed for the RT scheduler class. In order to monitor the evolution for RT task load, we must peridiocally check it during the tick. Then, we use the estimated capacity of the last activity to estimate the next one which can not be that accurate but is a good starting point without any impact on the wake up path of RT tasks. Signed-off-by: Vincent Guittot Signed-off-by: Steve Muckle --- kernel/sched/rt.c | 48 +++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 47 insertions(+), 1 deletion(-) diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 8ec86ab..da9086c 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -1426,6 +1426,41 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flag #endif } +#ifdef CONFIG_CPU_FREQ_GOV_SCHED +static void sched_rt_update_capacity_req(struct rq *rq) +{ + u64 total, used, age_stamp, avg; + s64 delta; + + if (!sched_freq()) + return; + + sched_avg_update(rq); + /* + * Since we're reading these variables without serialization make sure + * we read them once before doing sanity checks on them. + */ + age_stamp = READ_ONCE(rq->age_stamp); + avg = READ_ONCE(rq->rt_avg); + delta = rq_clock(rq) - age_stamp; + + if (unlikely(delta < 0)) + delta = 0; + + total = sched_avg_period() + delta; + + used = div_u64(avg, total); + if (unlikely(used > SCHED_CAPACITY_SCALE)) + used = SCHED_CAPACITY_SCALE; + + set_rt_cpu_capacity(rq->cpu, true, (unsigned long)(used)); +} +#else +static inline void sched_rt_update_capacity_req(struct rq *rq) +{ } + +#endif + static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq, struct rt_rq *rt_rq) { @@ -1494,8 +1529,17 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev) if (prev->sched_class == &rt_sched_class) update_curr_rt(rq); - if (!rt_rq->rt_queued) + if (!rt_rq->rt_queued) { + /* + * The next task to be picked on this rq will have a lower + * priority than rt tasks so we can spend some time to update + * the capacity used by rt tasks based on the last activity. + * This value will be the used as an estimation of the next + * activity. + */ + sched_rt_update_capacity_req(rq); return NULL; + } put_prev_task(rq, prev); @@ -2212,6 +2256,8 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued) update_curr_rt(rq); + sched_rt_update_capacity_req(rq); + watchdog(rq, p); /*