From patchwork Tue Oct 15 18:12:28 2019
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11191381
From: Jason Gunthorpe
To: Jerome Glisse, Ralph Campbell, John Hubbard, Felix.Kuehling@amd.com
Cc: linux-rdma@vger.kernel.org, linux-mm@kvack.org, Andrea Arcangeli,
 dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 Ben Skeggs, Jason Gunthorpe
Subject: [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled
Date: Tue, 15 Oct 2019 15:12:28 -0300
Message-Id: <20191015181242.8343-2-jgg@ziepe.ca>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191015181242.8343-1-jgg@ziepe.ca>
References: <20191015181242.8343-1-jgg@ziepe.ca>

From: Jason Gunthorpe

Now that we have KERNEL_HEADER_TEST, all headers are generally compile
tested, so relying on makefile tricks to avoid compiling code that depends
on CONFIG_MMU_NOTIFIER is more annoying.

Instead, follow the usual pattern and provide most of the header with only
the functions stubbed out when CONFIG_MMU_NOTIFIER is disabled. This
ensures the code compiles no matter what the config setting is.

While here, struct mmu_notifier_mm is private to mmu_notifier.c; move it
there.
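For readers unfamiliar with the pattern referred to above, it looks roughly
like the sketch below. This is only an illustration with made-up names
(CONFIG_EXAMPLE, struct example_ctx, example_*()), not the actual
mmu_notifier.h contents: the type declarations stay visible in every
configuration, and only the function bodies are replaced by static inline
stubs when the option is off, so callers always compile.

	/*
	 * Illustrative sketch only -- hypothetical CONFIG_EXAMPLE option and
	 * example_*() names, not the real mmu notifier API.
	 */
	struct example_ctx;		/* always declared, even when disabled */

	#ifdef CONFIG_EXAMPLE
	/* Real implementations live in a .c file built only for this config. */
	int example_register(struct example_ctx *ctx);
	void example_unregister(struct example_ctx *ctx);
	#else
	/* Stubs keep CONFIG_EXAMPLE=n users compiling; they optimize away. */
	static inline int example_register(struct example_ctx *ctx)
	{
		return 0;
	}
	static inline void example_unregister(struct example_ctx *ctx)
	{
	}
	#endif

The hunks below rearrange the header along those lines and move the private
struct mmu_notifier_mm definition into mm/mmu_notifier.c.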
Signed-off-by: Jason Gunthorpe
Reviewed-by: Jérôme Glisse
---
 include/linux/mmu_notifier.h | 46 +++++++++++++-----------------------
 mm/mmu_notifier.c            | 13 ++++++++++
 2 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 1bd8e6a09a3c27..12bd603d318ce7 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -7,8 +7,9 @@
 #include <linux/mm_types.h>
 #include <linux/srcu.h>
 
+struct mmu_notifier_mm;
 struct mmu_notifier;
-struct mmu_notifier_ops;
+struct mmu_notifier_range;
 
 /**
  * enum mmu_notifier_event - reason for the mmu notifier callback
@@ -40,36 +41,8 @@ enum mmu_notifier_event {
 	MMU_NOTIFY_SOFT_DIRTY,
 };
 
-#ifdef CONFIG_MMU_NOTIFIER
-
-#ifdef CONFIG_LOCKDEP
-extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
-#endif
-
-/*
- * The mmu notifier_mm structure is allocated and installed in
- * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
- * critical section and it's released only when mm_count reaches zero
- * in mmdrop().
- */
-struct mmu_notifier_mm {
-	/* all mmu notifiers registerd in this mm are queued in this list */
-	struct hlist_head list;
-	/* to serialize the list modifications and hlist_unhashed */
-	spinlock_t lock;
-};
-
 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
 
-struct mmu_notifier_range {
-	struct vm_area_struct *vma;
-	struct mm_struct *mm;
-	unsigned long start;
-	unsigned long end;
-	unsigned flags;
-	enum mmu_notifier_event event;
-};
-
 struct mmu_notifier_ops {
 	/*
 	 * Called either by mmu_notifier_unregister or when the mm is
@@ -249,6 +222,21 @@ struct mmu_notifier {
 	unsigned int users;
 };
 
+#ifdef CONFIG_MMU_NOTIFIER
+
+#ifdef CONFIG_LOCKDEP
+extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
+#endif
+
+struct mmu_notifier_range {
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	unsigned long start;
+	unsigned long end;
+	unsigned flags;
+	enum mmu_notifier_event event;
+};
+
 static inline int mm_has_notifiers(struct mm_struct *mm)
 {
 	return unlikely(mm->mmu_notifier_mm);
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 7fde88695f35d6..367670cfd02b7b 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -27,6 +27,19 @@ struct lockdep_map __mmu_notifier_invalidate_range_start_map = {
 };
 #endif
 
+/*
+ * The mmu notifier_mm structure is allocated and installed in
+ * mm->mmu_notifier_mm inside the mm_take_all_locks() protected
+ * critical section and it's released only when mm_count reaches zero
+ * in mmdrop().
+ */
+struct mmu_notifier_mm {
+	/* all mmu notifiers registered in this mm are queued in this list */
+	struct hlist_head list;
+	/* to serialize the list modifications and hlist_unhashed */
+	spinlock_t lock;
+};
+
 /*
  * This function can't run concurrently against mmu_notifier_register
  * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap