From patchwork Wed Dec 9 06:19:31 2015
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 7804941
From: Steve Muckle
To: Peter Zijlstra , Ingo Molnar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot , Morten Rasmussen , Dietmar Eggemann , Juri Lelli , Patrick Bellasi , Michael Turquette
Subject: [RFCv6 PATCH 10/10] sched: rt scheduler sets capacity requirement
Date: Tue, 8 Dec 2015 22:19:31 -0800
Message-Id: <1449641971-20827-11-git-send-email-smuckle@linaro.org>
In-Reply-To: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
References: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

From: Vincent Guittot

RT tasks don't provide any running constraints like deadline ones except their running
priority. The only currently usable input for estimating the capacity needed by
RT tasks is the rt_avg metric. We use it to estimate the CPU capacity needed by
the RT scheduler class.

In order to monitor the evolution of RT task load, we must periodically check
it during the tick.

Then, we use the estimated capacity of the last activity as an estimate of the
next one, which cannot be very accurate but is a good starting point without
any impact on the wake-up path of RT tasks.

Signed-off-by: Vincent Guittot
Signed-off-by: Steve Muckle
---
 kernel/sched/rt.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 48 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8ec86ab..9694204 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1426,6 +1426,41 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flag
 #endif
 }
 
+#ifdef CONFIG_SMP
+static void sched_rt_update_capacity_req(struct rq *rq)
+{
+	u64 total, used, age_stamp, avg;
+	s64 delta;
+
+	if (!sched_freq())
+		return;
+
+	sched_avg_update(rq);
+	/*
+	 * Since we're reading these variables without serialization make sure
+	 * we read them once before doing sanity checks on them.
+	 */
+	age_stamp = READ_ONCE(rq->age_stamp);
+	avg = READ_ONCE(rq->rt_avg);
+	delta = rq_clock(rq) - age_stamp;
+
+	if (unlikely(delta < 0))
+		delta = 0;
+
+	total = sched_avg_period() + delta;
+
+	used = div_u64(avg, total);
+	if (unlikely(used > SCHED_CAPACITY_SCALE))
+		used = SCHED_CAPACITY_SCALE;
+
+	set_rt_cpu_capacity(rq->cpu, 1, (unsigned long)(used));
+}
+#else
+static inline void sched_rt_update_capacity_req(struct rq *rq)
+{ }
+
+#endif
+
 static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
						   struct rt_rq *rt_rq)
 {
@@ -1494,8 +1529,17 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev)
 	if (prev->sched_class == &rt_sched_class)
 		update_curr_rt(rq);
 
-	if (!rt_rq->rt_queued)
+	if (!rt_rq->rt_queued) {
+		/*
+		 * The next task to be picked on this rq will have a lower
+		 * priority than rt tasks so we can spend some time updating
+		 * the capacity used by rt tasks based on the last activity.
+		 * This value will then be used as an estimate of the next
+		 * activity.
+		 */
+		sched_rt_update_capacity_req(rq);
 		return NULL;
+	}
 
 	put_prev_task(rq, prev);
 
@@ -2212,6 +2256,9 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
 
 	update_curr_rt(rq);
 
+	if (rq->rt.rt_nr_running)
+		sched_rt_update_capacity_req(rq);
+
 	watchdog(rq, p);
 
 	/*