From patchwork Tue Jul  7 20:00:25 2009
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 34510
Subject: Re: mmu_notifiers: turn off lockdep around mm_take_all_locks
From: Peter Zijlstra
To: Linus Torvalds
Cc: Marcelo Tosatti, Avi Kivity, Andrea Arcangeli, kvm, Ingo Molnar
In-Reply-To:
References: <20090707180630.GA8008@amt.cnet>
	 <1246990505.5197.2.camel@laptop>
	 <4A53917C.6080208@redhat.com>
	 <20090707183741.GA8393@amt.cnet>
	 <1246993442.5197.15.camel@laptop>
Date: Tue, 07 Jul 2009 22:00:25 +0200
Message-Id: <1246996825.5197.34.camel@laptop>
X-Mailing-List: kvm@vger.kernel.org

On Tue, 2009-07-07 at 12:25 -0700, Linus Torvalds wrote:
>
> On Tue, 7 Jul 2009, Peter Zijlstra wrote:
> >
> > Another issue, at about >=256 vmas we'll overflow the preempt count.
> > So disabling lockdep will only 'fix' this for a short while, until
> > you've bloated beyond that ;-)
>
> We would?
>
> I don't think so. Sure, we'd "overflow" into the softirq bits, but it's
> all designed to fail very gracefully. Somebody who tests our "status"
> might think we're in softirq context, but that really doesn't matter: we
> still have preemption disabled.

Right, it might confuse the softirq (and, when we extend the vma limit
and go wild, maybe the hardirq) state.

> > Linus, Ingo, any opinions?
>
> I do think that if lockdep can't handle it, we probably should turn it
> off around it.
>
> I don't think it's broken wrt regular preempt, though.

It does feel slightly weird to explicitly overflow that preempt count,
though. Hmm, the CONFIG_DEBUG_PREEMPT bits in the kernel/sched.c
{sub,add}_preempt_count() functions will generate some splats, though.

But sure, something like the below would disable lockdep for the
critical bits. I really don't like it, though, but the alternative is
modifying the rmap locking, and I don't like that either :/

---
 mm/mmap.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 34579b2..cb7110e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2400,6 +2400,7 @@ int mm_take_all_locks(struct mm_struct *mm)
 
 	mutex_lock(&mm_all_locks_mutex);
 
+	lockdep_off();
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		if (signal_pending(current))
 			goto out_unlock;
@@ -2417,6 +2418,8 @@ int mm_take_all_locks(struct mm_struct *mm)
 	ret = 0;
 
 out_unlock:
+	lockdep_on();
+
 	if (ret)
 		mm_drop_all_locks(mm);
 
@@ -2470,12 +2473,14 @@ void mm_drop_all_locks(struct mm_struct *mm)
 	BUG_ON(down_read_trylock(&mm->mmap_sem));
 	BUG_ON(!mutex_is_locked(&mm_all_locks_mutex));
 
+	lockdep_off();
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		if (vma->anon_vma)
 			vm_unlock_anon_vma(vma->anon_vma);
 		if (vma->vm_file && vma->vm_file->f_mapping)
 			vm_unlock_mapping(vma->vm_file->f_mapping);
 	}
+	lockdep_on();
 
 	mutex_unlock(&mm_all_locks_mutex);
 }