From patchwork Tue Feb 23 01:22:49 2016
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 8385911
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar, "Rafael J. Wysocki"
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Vincent Guittot, Morten Rasmussen, Dietmar Eggemann, Juri Lelli, Patrick Bellasi, Michael Turquette
Subject: [RFCv7 PATCH 09/10] sched/deadline: split rt_avg into two distinct metrics
Date: Mon, 22 Feb 2016 17:22:49 -0800
Message-Id: <1456190570-4475-10-git-send-email-smuckle@linaro.org>
In-Reply-To: <1456190570-4475-1-git-send-email-smuckle@linaro.org>
References: <1456190570-4475-1-git-send-email-smuckle@linaro.org>
List-ID: X-Mailing-List: linux-pm@vger.kernel.org

From: Vincent Guittot

rt_avg monitors the average load of rt tasks, deadline tasks and interrupts, when enabled. It is used to calculate the remaining capacity for CFS tasks.
We split rt_avg into two metrics: one for rt tasks and interrupts, which keeps the name rt_avg, and another for deadline tasks, named dl_avg. Both values are still used to calculate the remaining capacity for CFS tasks, but rt_avg is now also used to request capacity from sched-freq for rt tasks. Since irq time is accounted together with rt tasks, it is included in that capacity request.

Signed-off-by: Vincent Guittot
Signed-off-by: Steve Muckle
---
 kernel/sched/core.c     | 1 +
 kernel/sched/deadline.c | 2 +-
 kernel/sched/fair.c     | 1 +
 kernel/sched/sched.h    | 8 +++++++-
 4 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 747a7af..12a4a3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -759,6 +759,7 @@ void sched_avg_update(struct rq *rq)
 		asm("" : "+rm" (rq->age_stamp));
 		rq->age_stamp += period;
 		rq->rt_avg /= 2;
+		rq->dl_avg /= 2;
 	}
 }
 
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index cd64c97..87dcee3 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -747,7 +747,7 @@ static void update_curr_dl(struct rq *rq)
 	curr->se.exec_start = rq_clock_task(rq);
 	cpuacct_charge(curr, delta_exec);
 
-	sched_rt_avg_update(rq, delta_exec);
+	sched_dl_avg_update(rq, delta_exec);
 
 	dl_se->runtime -= dl_se->dl_yielded ? 0 : delta_exec;
 	if (dl_runtime_exceeded(dl_se)) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index cf7ae0a..3a812fa 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6278,6 +6278,7 @@ static unsigned long scale_rt_capacity(int cpu)
 	 */
 	age_stamp = READ_ONCE(rq->age_stamp);
 	avg = READ_ONCE(rq->rt_avg);
+	avg += READ_ONCE(rq->dl_avg);
 
 	delta = __rq_clock_broken(rq) - age_stamp;
 	if (unlikely(delta < 0))
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3df21f2..ad6cc8b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -644,7 +644,7 @@ struct rq {
 
 	struct list_head cfs_tasks;
 
-	u64 rt_avg;
+	u64 rt_avg, dl_avg;
 	u64 age_stamp;
 	u64 idle_stamp;
 	u64 avg_idle;
@@ -1499,8 +1499,14 @@ static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
 {
 	rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
 }
+
+static inline void sched_dl_avg_update(struct rq *rq, u64 dl_delta)
+{
+	rq->dl_avg += dl_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
+}
 #else
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
+static inline void sched_dl_avg_update(struct rq *rq, u64 dl_delta) { }
 static inline void sched_avg_update(struct rq *rq) { }
 #endif
 