From patchwork Thu Oct 8 11:24:22 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Galbraith
X-Patchwork-Id: 52496
Subject: Re: [.32-rc3] scheduler: iwlagn consistently high in "waiting for CPU"
From: Mike Galbraith
To: Frans Pop
Cc: Arjan van de Ven, Linux Kernel Mailing List, Ingo Molnar,
 Peter Zijlstra, linux-wireless@vger.kernel.org
In-Reply-To: <200910072034.57511.elendil@planet.nl>
References: <200910051500.55875.elendil@planet.nl>
 <200910061749.02805.elendil@planet.nl>
 <200910071910.53907.elendil@planet.nl>
 <200910072034.57511.elendil@planet.nl>
Date: Thu, 08 Oct 2009 13:24:22 +0200
Message-Id: <1255001062.7500.1.camel@marge.simson.net>
X-Mailing-List: linux-wireless@vger.kernel.org

Index: linux-2.6/include/linux/latencytop.h
===================================================================
--- linux-2.6.orig/include/linux/latencytop.h
+++ linux-2.6/include/linux/latencytop.h
@@ -26,12 +26,12 @@ struct latency_record {
 struct task_struct;
 
 extern int latencytop_enabled;
-void __account_scheduler_latency(struct task_struct *task, int usecs, int inter);
+void __account_scheduler_latency(struct task_struct *task, unsigned long usecs);
 static inline void
-account_scheduler_latency(struct task_struct *task, int usecs, int inter)
+account_scheduler_latency(struct task_struct *task, unsigned long usecs)
 {
 	if (unlikely(latencytop_enabled))
-		__account_scheduler_latency(task, usecs, inter);
+		__account_scheduler_latency(task, usecs);
 }
 
 void clear_all_latency_tracing(struct task_struct *p);
Index: linux-2.6/kernel/latencytop.c
===================================================================
--- linux-2.6.orig/kernel/latencytop.c
+++ linux-2.6/kernel/latencytop.c
@@ -157,34 +157,17 @@ static inline void store_stacktrace(stru
  * __account_scheduler_latency - record an occured latency
  * @tsk - the task struct of the task hitting the latency
  * @usecs - the duration of the latency in microseconds
- * @inter - 1 if the sleep was interruptible, 0 if uninterruptible
  *
  * This function is the main entry point for recording latency entries
  * as called by the scheduler.
- *
- * This function has a few special cases to deal with normal 'non-latency'
- * sleeps: specifically, interruptible sleep longer than 5 msec is skipped
- * since this usually is caused by waiting for events via select() and co.
- *
- * Negative latencies (caused by time going backwards) are also explicitly
- * skipped.
  */
 void __sched
-__account_scheduler_latency(struct task_struct *tsk, int usecs, int inter)
+__account_scheduler_latency(struct task_struct *tsk, unsigned long usecs)
 {
 	unsigned long flags;
 	int i, q;
 	struct latency_record lat;
 
-	/* Long interruptible waits are generally user requested... */
-	if (inter && usecs > 5000)
-		return;
-
-	/* Negative sleeps are time going backwards */
-	/* Zero-time sleeps are non-interesting */
-	if (usecs <= 0)
-		return;
-
 	memset(&lat, 0, sizeof(lat));
 	lat.count = 1;
 	lat.time = usecs;
Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -495,8 +495,10 @@ static void update_curr(struct cfs_rq *c
 	u64 now = rq_of(cfs_rq)->clock;
 	unsigned long delta_exec;
 
-	if (unlikely(!curr))
+	if (unlikely(!curr)) {
+		update_rq_clock(rq_of(cfs_rq));
 		return;
+	}
 
 	/*
 	 * Get the amount of time the current task was running
@@ -548,8 +550,11 @@ update_stats_wait_end(struct cfs_rq *cfs
 			rq_of(cfs_rq)->clock - se->wait_start);
 #ifdef CONFIG_SCHEDSTATS
 	if (entity_is_task(se)) {
-		trace_sched_stat_wait(task_of(se),
-			rq_of(cfs_rq)->clock - se->wait_start);
+		struct task_struct *tsk = task_of(se);
+		u64 delta = rq_of(cfs_rq)->clock - se->wait_start;
+
+		trace_sched_stat_wait(tsk, delta);
+		account_scheduler_latency(tsk, delta >> 10);
 	}
 #endif
 	schedstat_set(se->wait_start, 0);
@@ -643,10 +648,8 @@ static void enqueue_sleeper(struct cfs_r
 		se->sleep_start = 0;
 		se->sum_sleep_runtime += delta;
 
-		if (tsk) {
-			account_scheduler_latency(tsk, delta >> 10, 1);
+		if (tsk)
 			trace_sched_stat_sleep(tsk, delta);
-		}
 	}
 	if (se->block_start) {
 		u64 delta = rq_of(cfs_rq)->clock - se->block_start;
@@ -677,7 +680,6 @@ static void enqueue_sleeper(struct cfs_r
 					(void *)get_wchan(tsk),
 					delta >> 20);
 			}
-			account_scheduler_latency(tsk, delta >> 10, 0);
 		}
 	}
 #endif