From patchwork Thu Oct 19 23:33:55 2017
X-Patchwork-Submitter: Ramesh Thomas
X-Patchwork-Id: 10018519
Date: Thu, 19 Oct 2017 16:33:55 -0700
From: Ramesh Thomas <ramesh.thomas@intel.com>
To: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org, rafael.j.wysocki@intel.com, alex.shi@linaro.org,
    daniel.lezcano@linaro.org
Subject: [PATCH] cpuidle: ladder: Add per CPU PM QoS resume latency support
Message-ID: <20171019233340.GA13902@intel.com>
Reply-To: ramesh.thomas@intel.com

Individual CPUs may have special requirements to not enter deep idle states.
For example, a CPU running real-time applications would want to avoid deep
idle states because of their latency impact, while other CPUs that do not
have such a requirement could still enter deep idle states to save power.
This was already implemented in the menu governor. This patch implements
similar support in the ladder governor, which is selected when CONFIG_NO_HZ
and CONFIG_NO_HZ_IDLE are not set. Refer to the following commits for the
menu governor changes:
commit 9908859acaa9 ("cpuidle/menu: add per CPU PM QoS resume latency consideration")
commit 6dbf5cea05a7 ("cpuidle: menu: Avoid taking spinlock for accessing QoS values")

Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
---
 drivers/cpuidle/governors/ladder.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index ce1a2ff..f7cfee7 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -17,6 +17,7 @@
 #include <linux/pm_qos.h>
 #include <linux/jiffies.h>
 #include <linux/tick.h>
+#include <linux/cpu.h>
 
 #include <asm/io.h>
 #include <linux/uaccess.h>
@@ -67,10 +68,16 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 			       struct cpuidle_device *dev)
 {
 	struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
+	struct device *device = get_cpu_device(dev->cpu);
 	struct ladder_device_state *last_state;
 	int last_residency, last_idx = ldev->last_state_idx;
 	int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
 	int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	int resume_latency = dev_pm_qos_raw_read_value(device);
+
+	/* resume_latency is 0 means no restriction */
+	if (resume_latency && resume_latency < latency_req)
+		latency_req = resume_latency;
 
 	/* Special case when user has set very strict latency requirement */
 	if (unlikely(latency_req == 0)) {
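
Note for testing (not part of the patch): a per CPU resume latency constraint
can be set from user space by writing to the CPU device's
pm_qos_resume_latency_us attribute, e.g.
/sys/devices/system/cpu/cpu2/power/pm_qos_resume_latency_us, or from kernel
code with a DEV_PM_QOS_RESUME_LATENCY request. The sketch below is purely
illustrative; the module name, the choice of CPU 2 and the 20 us limit are
made up for the example.

/*
 * Hypothetical test module: restrict CPU 2 to idle states whose exit
 * latency is at most 20 us by adding a DEV_PM_QOS_RESUME_LATENCY request
 * on that CPU's device.  With this patch applied, the ladder governor
 * honours the constraint the same way the menu governor already does.
 */
#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/pm_qos.h>

static struct dev_pm_qos_request cpu2_resume_latency_req;

static int __init cpu2_latency_init(void)
{
	struct device *cpu_dev = get_cpu_device(2);
	int ret;

	if (!cpu_dev)
		return -ENODEV;

	/* A value of 0 means "no restriction", so use a small non-zero limit. */
	ret = dev_pm_qos_add_request(cpu_dev, &cpu2_resume_latency_req,
				     DEV_PM_QOS_RESUME_LATENCY, 20);
	return ret < 0 ? ret : 0;
}

static void __exit cpu2_latency_exit(void)
{
	dev_pm_qos_remove_request(&cpu2_resume_latency_req);
}

module_init(cpu2_latency_init);
module_exit(cpu2_latency_exit);
MODULE_LICENSE("GPL");

With the module loaded (or the sysfs value written), CPU 2 should stay out of
the deeper ladder states while the other CPUs continue to use them.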