From patchwork Wed Dec 9 06:19:29 2015
X-Patchwork-Submitter: Steve Muckle
X-Patchwork-Id: 7804851
From: Steve Muckle
To: Peter Zijlstra, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Vincent Guittot, Morten Rasmussen, Dietmar Eggemann,
	Juri Lelli, Patrick Bellasi, Michael Turquette
Subject: [RFCv6 PATCH 08/10] sched: remove call of sched_avg_update from sched_rt_avg_update
Date: Tue, 8 Dec 2015 22:19:29 -0800
Message-Id: <1449641971-20827-9-git-send-email-smuckle@linaro.org>
In-Reply-To: <1449641971-20827-1-git-send-email-smuckle@linaro.org>
References: <1449641971-20827-1-git-send-email-smuckle@linaro.org>

From: Vincent Guittot

rt_avg is only used to scale the CPU capacity available to CFS tasks.
As this scaling is updated during the periodic load balance, we only
have to ensure that sched_avg_update has been called before any
periodic load balancing. That requirement is already fulfilled by
__update_cpu_load, so the call in sched_rt_avg_update, which is part
of the hot path, is redundant.

Signed-off-by: Vincent Guittot
Signed-off-by: Steve Muckle
---
 kernel/sched/sched.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 90d5df6..08858d1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1497,7 +1497,6 @@ static inline void set_dl_cpu_capacity(int cpu, bool request,
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
 {
 	rq->rt_avg += rt_delta * arch_scale_freq_capacity(NULL, cpu_of(rq));
-	sched_avg_update(rq);
 }
 #else
 static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
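
For context, below is a minimal userspace model of the mechanism the
changelog describes: RT/IRQ time is accumulated into rt_avg, decayed by
half once per period (the work sched_avg_update does, reached from the
periodic path via __update_cpu_load per the changelog above), and the
remainder is what the periodic load balance treats as capacity left for
CFS. It is a sketch assuming roughly 4.4-era behaviour; the model_*
names, the 500ms period and the sample numbers are illustrative, not
kernel symbols or kernel values.

/*
 * Standalone model of the rt_avg mechanism; not kernel code.
 * model_rt_avg_update(), model_avg_update() and model_scale_rt_capacity()
 * are illustrative stand-ins for sched_rt_avg_update(), sched_avg_update()
 * and the capacity scaling done during periodic load balance.
 */
#include <stdio.h>
#include <stdint.h>

#define CAPACITY_SCALE	1024ULL			/* full capacity, like SCHED_CAPACITY_SCALE */

static uint64_t rt_avg;				/* models rq->rt_avg (ns * capacity) */
static uint64_t age_stamp;			/* models rq->age_stamp (ns) */
static const uint64_t period = 500000000ULL;	/* decay period, 500ms (illustrative) */

/*
 * Accumulate RT/IRQ time scaled by the current frequency capacity.
 * After this patch the hot path only does this accumulation; the decay
 * below is left to the periodic path.
 */
static void model_rt_avg_update(uint64_t rt_delta_ns, uint64_t freq_cap)
{
	rt_avg += rt_delta_ns * freq_cap;
}

/* Halve rt_avg for every full period that has elapsed (the decay). */
static void model_avg_update(uint64_t now_ns)
{
	while (now_ns - age_stamp > period) {
		age_stamp += period;
		rt_avg /= 2;
	}
}

/* Capacity left for CFS once the accumulated RT/IRQ pressure is subtracted. */
static uint64_t model_scale_rt_capacity(uint64_t now_ns)
{
	uint64_t total = period + (now_ns - age_stamp);
	uint64_t used = rt_avg / total;		/* already in capacity units */

	if (used >= CAPACITY_SCALE)
		return 1;
	return CAPACITY_SCALE - used;
}

int main(void)
{
	uint64_t now = 600000000ULL;		/* 600ms into the run */

	/* Hot path: 100ms of RT activity at full frequency capacity. */
	model_rt_avg_update(100000000ULL, CAPACITY_SCALE);

	/* Periodic path: decay first, then scale the remaining capacity. */
	model_avg_update(now);
	printf("capacity left for CFS: %llu / %llu\n",
	       (unsigned long long)model_scale_rt_capacity(now),
	       (unsigned long long)CAPACITY_SCALE);
	return 0;
}

Built with gcc, the example prints "capacity left for CFS: 939 / 1024":
one decay period has passed, so the accumulated RT time is halved before
the remaining capacity is computed, which is why the decay only needs to
run before the periodic load balance consumes rt_avg rather than on
every accumulation in the hot path.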