From patchwork Fri Sep  4 11:31:14 2020
X-Patchwork-Submitter: Adalbert Lazăr
X-Patchwork-Id: 11756763
From: Adalbert Lazăr
To: linux-mm@kvack.org
Cc: linux-api@vger.kernel.org, Andrew Morton, Alexander Graf,
    Stefan Hajnoczi, Jerome Glisse, Paolo Bonzini, Mihai Donțu,
    Mircea Cirjaliu, Andy Lutomirski, Arnd Bergmann, Sargun Dhillon,
    Aleksa Sarai, Oleg Nesterov, Jann Horn, Kees Cook, Matthew Wilcox,
    Christian Brauner, Adalbert Lazăr
Subject: [RESEND RFC PATCH 3/5] mm/mmu_notifier: remove lockdep map, allow
 mmu notifier to be used in nested scenarios
Date: Fri, 4 Sep 2020 14:31:14 +0300
Message-Id: <20200904113116.20648-4-alazar@bitdefender.com>
In-Reply-To: <20200904113116.20648-1-alazar@bitdefender.com>
References: <20200904113116.20648-1-alazar@bitdefender.com>

From: Mircea Cirjaliu

The combination of remote mapping + KVM causes nested range
invalidations, which trigger lockdep warnings.

Signed-off-by: Mircea Cirjaliu
Signed-off-by: Adalbert Lazăr
---
 include/linux/mmu_notifier.h |  5 +----
 mm/mmu_notifier.c            | 19 -------------------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 736f6918335e..81ea457d41be 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -440,12 +440,10 @@ mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
 	might_sleep();
 
-	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(range->mm)) {
 		range->flags |= MMU_NOTIFIER_RANGE_BLOCKABLE;
 		__mmu_notifier_invalidate_range_start(range);
 	}
-	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }
 
 static inline int
@@ -453,12 +451,11 @@ mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range)
 {
 	int ret = 0;
 
-	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(range->mm)) {
 		range->flags &= ~MMU_NOTIFIER_RANGE_BLOCKABLE;
 		ret = __mmu_notifier_invalidate_range_start(range);
 	}
-	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
+
 	return ret;
 }
 
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 06852b896fa6..928751bd8630 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -22,12 +22,6 @@
 /* global SRCU for all MMs */
 DEFINE_STATIC_SRCU(srcu);
 
-#ifdef CONFIG_LOCKDEP
-struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
-	.name = "mmu_notifier_invalidate_range_start"
-};
-#endif
-
 /*
  * The mmu_notifier_subscriptions structure is allocated and installed in
  * mm->notifier_subscriptions inside the mm_take_all_locks() protected
@@ -242,8 +236,6 @@ mmu_interval_read_begin(struct mmu_interval_notifier *interval_sub)
 	 * will always clear the below sleep in some reasonable time as
 	 * subscriptions->invalidate_seq is even in the idle state.
 	 */
-	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
-	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 	if (is_invalidating)
 		wait_event(subscriptions->wq,
 			   READ_ONCE(subscriptions->invalidate_seq) != seq);
@@ -572,13 +564,11 @@ void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range,
 	struct mmu_notifier_subscriptions *subscriptions =
 		range->mm->notifier_subscriptions;
 
-	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (subscriptions->has_itree)
 		mn_itree_inv_end(subscriptions);
 
 	if (!hlist_empty(&subscriptions->list))
 		mn_hlist_invalidate_end(subscriptions, range, only_end);
-	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 }
 
 void __mmu_notifier_invalidate_range(struct mm_struct *mm,
@@ -612,13 +602,6 @@ int __mmu_notifier_register(struct mmu_notifier *subscription,
 	lockdep_assert_held_write(&mm->mmap_sem);
 	BUG_ON(atomic_read(&mm->mm_users) <= 0);
 
-	if (IS_ENABLED(CONFIG_LOCKDEP)) {
-		fs_reclaim_acquire(GFP_KERNEL);
-		lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
-		lock_map_release(&__mmu_notifier_invalidate_range_start_map);
-		fs_reclaim_release(GFP_KERNEL);
-	}
-
 	if (!mm->notifier_subscriptions) {
 		/*
 		 * kmalloc cannot be called under mm_take_all_locks(), but we
@@ -1062,8 +1045,6 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *interval_sub)
 	 * The possible sleep on progress in the invalidation requires the
 	 * caller not hold any locks held by invalidation callbacks.
 	 */
-	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
-	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 	if (seq)
 		wait_event(subscriptions->wq,
 			   READ_ONCE(subscriptions->invalidate_seq) != seq);
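
For readers who want to see where the nesting comes from, here is a minimal,
illustrative sketch (not part of this patch). Every name in it, such as
remote_mapping_notifier, remote_invalidate_range_start() and local_mm, is a
hypothetical stand-in for the remote mapping code in this series; the only
point is to show how one invalidate_range_start() can run inside another,
which is the recursion the global lockdep map flagged.

/*
 * Illustrative sketch only, not part of this patch.  All identifiers
 * below are hypothetical stand-ins for the remote mapping code.
 */
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct remote_mapping_notifier {
	struct mmu_notifier mn;		/* subscribed to the introspected mm */
	struct mm_struct *local_mm;	/* the introspector's own mm */
	unsigned long local_start;	/* where the remote pages are mirrored */
	unsigned long local_end;
};

static int remote_invalidate_range_start(struct mmu_notifier *mn,
					 const struct mmu_notifier_range *range)
{
	struct remote_mapping_notifier *rmn =
		container_of(mn, struct remote_mapping_notifier, mn);
	struct mmu_notifier_range nested;

	/*
	 * KVM (or any other user) invalidating the introspected mm lands
	 * here.  Dropping the mirrored pages means invalidating the
	 * corresponding range of the introspector's mm as well, so a
	 * second invalidate_range_start() runs while the first one is
	 * still in progress.  With the global
	 * __mmu_notifier_invalidate_range_start_map this nesting is what
	 * lockdep reported.
	 */
	mmu_notifier_range_init(&nested, MMU_NOTIFY_UNMAP, 0, NULL,
				rmn->local_mm, rmn->local_start,
				rmn->local_end);
	mmu_notifier_invalidate_range_start(&nested);	/* nested call */
	/* zap the mirrored PTEs in rmn->local_mm here */
	mmu_notifier_invalidate_range_end(&nested);

	return 0;
}

With the lockdep map removed, the nested call is treated like any other
invalidation; only the usual might_sleep()/blockable checks remain.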