From patchwork Mon Jul 15 03:50:44 2013
X-Patchwork-Submitter: Michael Wang
X-Patchwork-Id: 2827276
Message-ID: <51E37194.1080103@linux.vnet.ibm.com>
Date: Mon, 15 Jul 2013 11:50:44 +0800
From: Michael Wang
To: Sergey Senozhatsky
CC: Jiri Kosina, Borislav Petkov, "Rafael J. Wysocki", Viresh Kumar,
 "Srivatsa S. Bhat", linux-kernel@vger.kernel.org, cpufreq@vger.kernel.org,
 linux-pm@vger.kernel.org
Subject: Re: [LOCKDEP] cpufreq: possible circular locking dependency detected
References: <20130625211544.GA2270@swordfish> <51D10899.1080501@linux.vnet.ibm.com> <20130710231305.GA4046@swordfish> <51DE1BE1.3090707@linux.vnet.ibm.com> <20130714114721.GB2178@swordfish> <20130714120629.GA4067@swordfish>
In-Reply-To: <20130714120629.GA4067@swordfish>
List-ID: linux-pm@vger.kernel.org

On 07/14/2013 08:06 PM, Sergey Senozhatsky wrote:
> On (07/14/13 14:47), Sergey Senozhatsky wrote:
>>
>> Now, as I fixed radeon kms, I can see:
>>
>> [  806.660530] ------------[ cut here ]------------
>> [  806.660539] WARNING: CPU: 0 PID: 2389 at arch/x86/kernel/smp.c:124
>> native_smp_send_reschedule+0x57/0x60()
>
> Well, this one is obviously not a lockdep error, I meant that previous
> tests with disabled lockdep were invalid. Will re-do.

And maybe we could try the patch below to get more info. I've moved the
point at which the stop flag is restored from 'after STOP' to 'before
START'; I suppose that keeps the whole window covered and prevents the
work from being re-queued. It should at least give us more info...

I think I need to set up an environment for debugging now. What are the
steps to reproduce this WARN?
Regards,
Michael Wang

> -ss
>
>>> Regards,
>>> Michael Wang
>>>
>>> diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
>>> index dc9b72e..a64b544 100644
>>> --- a/drivers/cpufreq/cpufreq_governor.c
>>> +++ b/drivers/cpufreq/cpufreq_governor.c
>>> @@ -178,13 +178,14 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
>>>  {
>>>  	int i;
>>>  
>>> +	if (dbs_data->queue_stop)
>>> +		return;
>>> +
>>>  	if (!all_cpus) {
>>>  		__gov_queue_work(smp_processor_id(), dbs_data, delay);
>>>  	} else {
>>> -		get_online_cpus();
>>>  		for_each_cpu(i, policy->cpus)
>>>  			__gov_queue_work(i, dbs_data, delay);
>>> -		put_online_cpus();
>>>  	}
>>>  }
>>>  EXPORT_SYMBOL_GPL(gov_queue_work);
>>> @@ -193,12 +194,27 @@ static inline void gov_cancel_work(struct dbs_data *dbs_data,
>>>  		struct cpufreq_policy *policy)
>>>  {
>>>  	struct cpu_dbs_common_info *cdbs;
>>> -	int i;
>>> +	int i, round = 2;
>>>  
>>> +	dbs_data->queue_stop = 1;
>>> +redo:
>>> +	round--;
>>>  	for_each_cpu(i, policy->cpus) {
>>>  		cdbs = dbs_data->cdata->get_cpu_cdbs(i);
>>>  		cancel_delayed_work_sync(&cdbs->work);
>>>  	}
>>> +
>>> +	/*
>>> +	 * Since there is no lock to prevent the cancelled work
>>> +	 * from being re-queued, some early-cancelled work might
>>> +	 * have been queued again by later-cancelled work.
>>> +	 *
>>> +	 * Flush the work again with dbs_data->queue_stop
>>> +	 * enabled; this time there will be no survivors.
>>> +	 */
>>> +	if (round)
>>> +		goto redo;
>>> +	dbs_data->queue_stop = 0;
>>>  }
>>>  
>>>  /* Will return if we need to evaluate cpu load again or not */
>>> diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
>>> index e16a961..9116135 100644
>>> --- a/drivers/cpufreq/cpufreq_governor.h
>>> +++ b/drivers/cpufreq/cpufreq_governor.h
>>> @@ -213,6 +213,7 @@ struct dbs_data {
>>>  	unsigned int min_sampling_rate;
>>>  	int usage_count;
>>>  	void *tuners;
>>> +	int queue_stop;
>>>  
>>>  	/* dbs_mutex protects dbs_enable in governor start/stop */
>>>  	struct mutex mutex;
>>>
>>>>
>>>> Signed-off-by: Sergey Senozhatsky
>>>>
>>>> ---
>>>>
>>>>  drivers/cpufreq/cpufreq.c          |  5 +----
>>>>  drivers/cpufreq/cpufreq_governor.c | 17 +++++++++++------
>>>>  drivers/cpufreq/cpufreq_stats.c    |  2 +-
>>>>  3 files changed, 13 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
>>>> index 6a015ad..f8aacf1 100644
>>>> --- a/drivers/cpufreq/cpufreq.c
>>>> +++ b/drivers/cpufreq/cpufreq.c
>>>> @@ -1943,13 +1943,10 @@ static int __cpuinit cpufreq_cpu_callback(struct notifier_block *nfb,
>>>>  		case CPU_ONLINE:
>>>>  			cpufreq_add_dev(dev, NULL);
>>>>  			break;
>>>> -		case CPU_DOWN_PREPARE:
>>>> +		case CPU_POST_DEAD:
>>>>  		case CPU_UP_CANCELED_FROZEN:
>>>>  			__cpufreq_remove_dev(dev, NULL);
>>>>  			break;
>>>> -		case CPU_DOWN_FAILED:
>>>> -			cpufreq_add_dev(dev, NULL);
>>>> -			break;
>>>>  		}
>>>>  	}
>>>>  	return NOTIFY_OK;
>>>> diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
>>>> index 4645876..681d5d6 100644
>>>> --- a/drivers/cpufreq/cpufreq_governor.c
>>>> +++ b/drivers/cpufreq/cpufreq_governor.c
>>>> @@ -125,7 +125,11 @@ static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data,
>>>>  		unsigned int delay)
>>>>  {
>>>>  	struct cpu_dbs_common_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
>>>> -
>>>> +	/* cpu offline might block existing gov_queue_work() user,
>>>> +	 * unblocking it after CPU_DEAD and before CPU_POST_DEAD.
>>>> +	 * thus potentially we can hit offlined CPU */
>>>> +	if (unlikely(cpu_is_offline(cpu)))
>>>> +		return;
>>>>  	mod_delayed_work_on(cpu, system_wq, &cdbs->work, delay);
>>>>  }
>>>>
>>>> @@ -133,15 +137,14 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
>>>>  		unsigned int delay, bool all_cpus)
>>>>  {
>>>>  	int i;
>>>> -
>>>> +	get_online_cpus();
>>>>  	if (!all_cpus) {
>>>>  		__gov_queue_work(smp_processor_id(), dbs_data, delay);
>>>>  	} else {
>>>> -		get_online_cpus();
>>>>  		for_each_cpu(i, policy->cpus)
>>>>  			__gov_queue_work(i, dbs_data, delay);
>>>> -		put_online_cpus();
>>>>  	}
>>>> +	put_online_cpus();
>>>>  }
>>>>  EXPORT_SYMBOL_GPL(gov_queue_work);
>>>>
>>>> @@ -354,8 +357,10 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
>>>>  		/* Initiate timer time stamp */
>>>>  		cpu_cdbs->time_stamp = ktime_get();
>>>>
>>>> -		gov_queue_work(dbs_data, policy,
>>>> -				delay_for_sampling_rate(sampling_rate), true);
>>>> +		/* hotplug lock already held */
>>>> +		for_each_cpu(j, policy->cpus)
>>>> +			__gov_queue_work(j, dbs_data,
>>>> +					delay_for_sampling_rate(sampling_rate));
>>>>  		break;
>>>>
>>>>  	case CPUFREQ_GOV_STOP:
>>>> diff --git a/drivers/cpufreq/cpufreq_stats.c b/drivers/cpufreq/cpufreq_stats.c
>>>> index cd9e817..833816e 100644
>>>> --- a/drivers/cpufreq/cpufreq_stats.c
>>>> +++ b/drivers/cpufreq/cpufreq_stats.c
>>>> @@ -355,7 +355,7 @@ static int __cpuinit cpufreq_stat_cpu_callback(struct notifier_block *nfb,
>>>>  	case CPU_DOWN_PREPARE:
>>>>  		cpufreq_stats_free_sysfs(cpu);
>>>>  		break;
>>>> -	case CPU_DEAD:
>>>> +	case CPU_POST_DEAD:
>>>>  		cpufreq_stats_free_table(cpu);
>>>>  		break;
>>>>  	case CPU_UP_CANCELED_FROZEN:
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>> Please read the FAQ at http://www.tux.org/lkml/

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index dc9b72e..b1446fe 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -178,13 +178,14 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 {
 	int i;
 
+	if (!dbs_data->queue_start)
+		return;
+
 	if (!all_cpus) {
 		__gov_queue_work(smp_processor_id(), dbs_data, delay);
 	} else {
-		get_online_cpus();
 		for_each_cpu(i, policy->cpus)
 			__gov_queue_work(i, dbs_data, delay);
-		put_online_cpus();
 	}
 }
 EXPORT_SYMBOL_GPL(gov_queue_work);
@@ -193,12 +194,26 @@ static inline void gov_cancel_work(struct dbs_data *dbs_data,
 		struct cpufreq_policy *policy)
 {
 	struct cpu_dbs_common_info *cdbs;
-	int i;
+	int i, round = 2;
 
+	dbs_data->queue_start = 0;
+redo:
+	round--;
 	for_each_cpu(i, policy->cpus) {
 		cdbs = dbs_data->cdata->get_cpu_cdbs(i);
 		cancel_delayed_work_sync(&cdbs->work);
 	}
+
+	/*
+	 * Since there is no lock to prevent the cancelled work
+	 * from being re-queued, some early-cancelled work might
+	 * have been queued again by later-cancelled work.
+	 *
+	 * Flush the work again with dbs_data->queue_start
+	 * cleared; this time there will be no survivors.
+	 */
+	if (round)
+		goto redo;
 }
 
 /* Will return if we need to evaluate cpu load again or not */
@@ -391,6 +406,7 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 		/* Initiate timer time stamp */
 		cpu_cdbs->time_stamp = ktime_get();
 
+		dbs_data->queue_start = 1;
 		gov_queue_work(dbs_data, policy,
 				delay_for_sampling_rate(sampling_rate), true);
 
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index e16a961..9116135 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -213,6 +213,7 @@ struct dbs_data {
 	unsigned int min_sampling_rate;
 	int usage_count;
 	void *tuners;
+	int queue_start;
 
 	/* dbs_mutex protects dbs_enable in governor start/stop */
 	struct mutex mutex;