From patchwork Fri Mar 11 14:03:06 2016
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 8565561
Date: Fri, 11 Mar 2016 09:03:06 -0500
From: Rik van Riel
To: "Rafael J. Wysocki"
Cc: Doug Smythies, "Rafael J.
Wysocki", Viresh Kumar, Srinivas Pandruvada, "Chen, Yu C",
 "linux-pm@vger.kernel.org", Arto Jantunen, Len Brown
Subject: Re: SKL BOOT FAILURE unless idle=nomwait (was Re: PROBLEM: Cpufreq constantly keeps frequency at maximum on 4.5-rc4)
Message-ID: <20160311090306.1bfe380b@annuminas.surriel.com>
References: <87si087tsr.fsf@iki.fi> <87a8m74mcc.fsf@iki.fi>
 <002d01d17a57$ec417030$c4c45090$@net> <003701d17a5d$cab287a0$601796e0$@net>
Organization: Red Hat, Inc.
X-Mailing-List: linux-pm@vger.kernel.org

On Thu, 10 Mar 2016 00:59:01 +0100 "Rafael J. Wysocki" wrote:

> OK, thanks.
>
> Rik, that seems to go against the changelog of
> a9ceb78bc75ca47972096372ff3d48648b16317a:
>
> "This is not a big deal on most x86 CPUs, which have very low C1
> latencies, and the patch should not have any effect on those CPUs."
>
> The effect is actually measurable and quite substantial to my eyes.

Indeed. My mistake was testing not against the predicted latency alone,
but against the predicted latency scaled by the load correction factor,
which can change it by as much as a factor of 10x under load...

The patch below should fix that. It did not for Arto, due to the other
issues on his system, but it might resolve the issue for Doug, where
cstate/pstate selection is otherwise working fine.

Doug, does the patch below solve your issue?

If it does not, we should figure out why the idle state selection loop
is not selecting the right mode. Is the latency_req "load correction"
too aggressive? Or is it only too aggressive for the IDLE->HLT
selection, and fine for driving choices between deeper C states?
After all, if it causes the IDLE->HLT selection to go wrong, maybe it
is also causing us to pick shallower C states when we should be
picking deeper ones?

	/*
	 * Find the idle state with the lowest power while satisfying
	 * our constraints.
	 */
	for (i = data->last_state_idx + 1; i < drv->state_count; i++) {
		struct cpuidle_state *s = &drv->states[i];
		struct cpuidle_state_usage *su = &dev->states_usage[i];

		if (s->disabled || su->disable)
			continue;
		if (s->target_residency > data->predicted_us)
			continue;
		if (s->exit_latency > latency_req)
			continue;

		data->last_state_idx = i;
	}

---8<---
Subject: cpuidle: use predicted_us not interactivity_req to consider polling

The interactivity_req variable is the expected sleep time, divided by
the CPU load. This can be too aggressive a factor in deciding whether
or not to consider polling in the cpuidle state selection.

Use the (not corrected for load) predicted_us instead.

Signed-off-by: Rik van Riel
---
 drivers/cpuidle/governors/menu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 0742b3296673..97022ae01d2e 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -330,7 +330,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
 	 * We want to default to C1 (hlt), not to busy polling
 	 * unless the timer is happening really really soon.
 	 */
-	if (interactivity_req > 20 &&
+	if (data->predicted_us > 20 &&
 	    !drv->states[CPUIDLE_DRIVER_STATE_START].disabled &&
 	    dev->states_usage[CPUIDLE_DRIVER_STATE_START].disable == 0)
 		data->last_state_idx = CPUIDLE_DRIVER_STATE_START;