From patchwork Mon Nov 9 19:21:25 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11892323
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas, Christian Borntraeger, Christoph Lameter, "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes, "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar, Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra, "Rafael J. Wysocki", Thomas Gleixner, Vasily Gorbik, Vlastimil Babka, Will Deacon, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 1/4] mm: introduce debug_pagealloc_{map,unmap}_pages() helpers
Date: Mon, 9 Nov 2020 21:21:25 +0200
Message-Id: <20201109192128.960-2-rppt@kernel.org>
In-Reply-To: <20201109192128.960-1-rppt@kernel.org>
References: <20201109192128.960-1-rppt@kernel.org>

From: Mike Rapoport

When CONFIG_DEBUG_PAGEALLOC is enabled, the kernel unmaps pages from the
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use __kernel_map_pages()
guarded with debug_pagealloc_enabled().

The only place that calls __kernel_map_pages() without checking whether
DEBUG_PAGEALLOC is enabled is the hibernation code, which presumes the
availability of this function when ARCH_HAS_SET_DIRECT_MAP is set. Still,
on arm64, __kernel_map_pages() will bail out when DEBUG_PAGEALLOC is not
enabled, but set_direct_map_invalid_noflush() may render some pages not
present in the direct map and the hibernation code won't be able to save
such pages.

To make the interaction between page allocation debugging and hibernation
more robust, the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP
has to be made more explicit.

Start by combining the guard condition and the call to
__kernel_map_pages() into debug_pagealloc_map_pages() and
debug_pagealloc_unmap_pages() helpers to emphasize that
__kernel_map_pages() should not be called without DEBUG_PAGEALLOC, and use
these new helpers to map/unmap pages when page allocation debugging is
enabled.
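For illustration, this is how a typical call site changes with the new
helpers; the snippet is condensed from the mm/page_alloc.c hunks further
down, with the surrounding code omitted:

	/* Before: every caller open-codes the DEBUG_PAGEALLOC guard. */
	if (debug_pagealloc_enabled_static())
		kernel_map_pages(page, 1 << order, 1);

	/*
	 * After: the guard is folded into the helper, so callers no longer
	 * need their own debug_pagealloc_enabled_static() check.
	 */
	debug_pagealloc_map_pages(page, 1 << order);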
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
---
 include/linux/mm.h  | 15 +++++++++++++++
 mm/memory_hotplug.c |  3 +--
 mm/page_alloc.c     |  6 ++----
 mm/slab.c           |  2 +-
 4 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe70aaf..bb8c70178f4e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2936,12 +2936,27 @@ kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	__kernel_map_pages(page, numpages, enable);
 }
+
+static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, 1);
+}
+
+static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, 0);
+}
+
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
 #else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
+static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
+static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b44d4c7ba73b..f18f86ba2a68 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * so we should map it first. This is better than introducing a special
 	 * case in page freeing fast path.
 	 */
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order);
 	__free_pages_core(page, order);
 	totalram_pages_add(1UL << order);
 #ifdef CONFIG_HIGHMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..db1bf70458d0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1272,8 +1272,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 */
 	arch_free_page(page, order);
 
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 0);
+	debug_pagealloc_unmap_pages(page, 1 << order);
 
 	kasan_free_nondeferred_pages(page, order);
 
@@ -2270,8 +2269,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order);
 	kasan_alloc_pages(page, order);
 	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..5a46f5bd5957 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1434,7 +1434,7 @@ static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
 	if (!is_debug_pagealloc_cache(cachep))
 		return;
 
-	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
+	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
 }
 
 #else

From patchwork Mon Nov 9 19:21:26 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11892327
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas, Christian Borntraeger, Christoph Lameter, "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes, "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar, Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra, "Rafael J. Wysocki", Thomas Gleixner, Vasily Gorbik, Vlastimil Babka, Will Deacon, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 2/4] PM: hibernate: make direct map manipulations more explicit
Date: Mon, 9 Nov 2020 21:21:26 +0200
Message-Id: <20201109192128.960-3-rppt@kernel.org>
In-Reply-To: <20201109192128.960-1-rppt@kernel.org>
References: <20201109192128.960-1-rppt@kernel.org>

From: Mike Rapoport

When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not
be present in the direct map and has to be explicitly mapped before it can
be copied.

Introduce hibernate_map_page() and hibernate_unmap_page() that will
explicitly use set_direct_map_{default,invalid}_noflush() for the
ARCH_HAS_SET_DIRECT_MAP case and debug_pagealloc_{map,unmap}_pages() for
the DEBUG_PAGEALLOC case.

The remapping of the pages in safe_copy_page() presumes that it only
changes protection bits in an existing PTE and so it is safe to ignore the
return value of set_direct_map_{default,invalid}_noflush(). Still, add a
pr_warn() so that future changes in the set_memory APIs will not silently
break hibernation.
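For illustration, the resulting flow in safe_copy_page() looks like this
(condensed from the kernel/power/snapshot.c hunks further down); which
mechanism hibernate_map_page() and hibernate_unmap_page() use is decided at
compile time via IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP):

	if (kernel_page_present(s_page)) {
		do_copy_page(dst, page_address(s_page));
	} else {
		/* temporarily restore the page in the direct map ... */
		hibernate_map_page(s_page);
		do_copy_page(dst, page_address(s_page));
		/* ... and drop it again once the copy is done */
		hibernate_unmap_page(s_page);
	}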
Signed-off-by: Mike Rapoport
Acked-by: Rafael J. Wysocki
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
---
 include/linux/mm.h      | 12 ------------
 kernel/power/snapshot.c | 38 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bb8c70178f4e..e198b938f5c5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2927,16 +2927,6 @@ static inline bool debug_pagealloc_enabled_static(void)
 #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
-/*
- * When called in DEBUG_PAGEALLOC context, the call should most likely be
- * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
- */
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable)
-{
-	__kernel_map_pages(page, numpages, enable);
-}
-
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
 {
 	if (debug_pagealloc_enabled_static())
@@ -2953,8 +2943,6 @@ static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
 #else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable) {}
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 #ifdef CONFIG_HIBERNATION
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 46b1804c1ddf..d848377dd8dc 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -76,6 +76,40 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
 static inline void hibernate_restore_unprotect_page(void *page_address) {}
 #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
 
+
+/*
+ * The calls to set_direct_map_*() should not fail because remapping a page
+ * here means that we only update protection bits in an existing PTE.
+ * It is still worth to have a warning here if something changes and this
+ * will no longer be the case.
+ */
+static inline void hibernate_map_page(struct page *page)
+{
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+		int ret = set_direct_map_default_noflush(page);
+
+		if (ret)
+			pr_warn_once("Failed to remap page\n");
+	} else {
+		debug_pagealloc_map_pages(page, 1);
+	}
+}
+
+static inline void hibernate_unmap_page(struct page *page)
+{
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+		unsigned long addr = (unsigned long)page_address(page);
+		int ret = set_direct_map_invalid_noflush(page);
+
+		if (ret)
+			pr_warn_once("Failed to remap page\n");
+
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	} else {
+		debug_pagealloc_unmap_pages(page, 1);
+	}
+}
+
 static int swsusp_page_is_free(struct page *);
 static void swsusp_set_page_forbidden(struct page *);
 static void swsusp_unset_page_forbidden(struct page *);
@@ -1355,9 +1389,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
 	if (kernel_page_present(s_page)) {
 		do_copy_page(dst, page_address(s_page));
 	} else {
-		kernel_map_pages(s_page, 1, 1);
+		hibernate_map_page(s_page);
 		do_copy_page(dst, page_address(s_page));
-		kernel_map_pages(s_page, 1, 0);
+		hibernate_unmap_page(s_page);
 	}
 }

From patchwork Mon Nov 9 19:21:27 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11892333
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas, Christian Borntraeger, Christoph Lameter, "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes, "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar, Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra, "Rafael J. Wysocki", Thomas Gleixner, Vasily Gorbik, Vlastimil Babka, Will Deacon, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
Date: Mon, 9 Nov 2020 21:21:27 +0200
Message-Id: <20201109192128.960-4-rppt@kernel.org>
In-Reply-To: <20201109192128.960-1-rppt@kernel.org>
References: <20201109192128.960-1-rppt@kernel.org>

From: Mike Rapoport

The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
never fail. With this assumption it wouldn't be safe to allow general
usage of this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef DEBUG_PAGEALLOC, and some refuse to map/unmap
pages when page allocation debugging is disabled at runtime.

As all the users of __kernel_map_pages() were converted to use
debug_pagealloc_map_pages(), it is safe to make it available only when
DEBUG_PAGEALLOC is set.

Signed-off-by: Mike Rapoport
Acked-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
---
 arch/Kconfig                     |  3 +++
 arch/arm64/Kconfig               |  4 +---
 arch/arm64/mm/pageattr.c         |  8 ++++++--
 arch/powerpc/Kconfig             |  5 +----
 arch/riscv/Kconfig               |  4 +---
 arch/riscv/include/asm/pgtable.h |  2 --
 arch/riscv/mm/pageattr.c         |  2 ++
 arch/s390/Kconfig                |  4 +---
 arch/sparc/Kconfig               |  4 +---
 arch/x86/Kconfig                 |  4 +---
 arch/x86/mm/pat/set_memory.c     |  2 ++
 include/linux/mm.h               | 10 +++++++---
 12 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 56b6ccc0e32d..56d4752b6db6 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1028,6 +1028,9 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	bool
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1d466addb078..a932810cfd90 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -71,6 +71,7 @@ config ARM64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_USE_SYM_ANNOTATIONS
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_MEMORY_FAILURE
 	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_ATOMIC_RMW
@@ -1025,9 +1026,6 @@ config HOLES_IN_ZONE
 
 source "kernel/Kconfig.hz"
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
 	select SPARSEMEM_VMEMMAP_ENABLE
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 1b94f5b82654..439325532be1 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -155,7 +155,7 @@ int set_direct_map_invalid_noflush(struct page *page)
 		.clear_mask = __pgprot(PTE_VALID),
 	};
 
-	if (!rodata_full)
+	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -170,7 +170,7 @@ int set_direct_map_default_noflush(struct page *page)
 		.clear_mask = __pgprot(PTE_RDONLY),
 	};
 
-	if (!rodata_full)
+	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -178,6 +178,7 @@ int set_direct_map_default_noflush(struct page *page)
 				   PAGE_SIZE, change_page_range, &data);
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (!debug_pagealloc_enabled() && !rodata_full)
@@ -186,6 +187,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
 }
 
+#ifdef CONFIG_HIBERNATION
 /*
  * This function is used to determine if a linear map page has been marked as
  * not-valid. Walk the page table and check the PTE_VALID bit. This is based
@@ -232,3 +234,5 @@ bool kernel_page_present(struct page *page)
 	ptep = pte_offset_kernel(pmdp, addr);
 	return pte_valid(READ_ONCE(*ptep));
 }
+#endif /* CONFIG_HIBERNATION */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e9f13fe08492..ad8a83f3ddca 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -146,6 +146,7 @@ config PPC
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if PPC32 || PPC_BOOK3S_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF if PPC64
 	select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS
@@ -355,10 +356,6 @@ config PPC_OF_PLATFORM_PCI
 	depends on PCI
 	depends on PPC64 # not supported on 32 bits yet
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	depends on PPC32 || PPC_BOOK3S_64
-	def_bool y
-
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 44377fd7860e..9283c6f9ae2a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -14,6 +14,7 @@ config RISCV
 	def_bool y
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
@@ -153,9 +154,6 @@ config ARCH_SELECT_MEMORY_MODEL
 config ARCH_WANT_GENERAL_HUGETLB
 	def_bool y
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config SYS_SUPPORTS_HUGETLBFS
 	depends on MMU
 	def_bool y
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 183f1f4b2ae6..41a72861987c 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -461,8 +461,6 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 #define VMALLOC_START 0
 #define VMALLOC_END   TASK_SIZE
 
-static inline void __kernel_map_pages(struct page *page, int numpages, int enable) {}
-
 #endif /* !CONFIG_MMU */
 
 #define kern_addr_valid(addr)   (1) /* FIXME */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 19fecb362d81..321b09d2e2ea 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -184,6 +184,7 @@ int set_direct_map_default_noflush(struct page *page)
 	return ret;
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (!debug_pagealloc_enabled())
@@ -196,3 +197,4 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 		     __pgprot(0), __pgprot(_PAGE_PRESENT));
 }
+#endif
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 4a2a12be04c9..991a850a6c0b 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -35,9 +35,6 @@ config GENERIC_LOCKBREAK
 config PGSTE
 	def_bool y if KVM
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config AUDIT_ARCH
 	def_bool y
 
@@ -106,6 +103,7 @@ config S390
 	select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index a6ca135442f9..2c729b8d097a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -88,6 +88,7 @@ config SPARC64
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_ARCH_AUDITSYSCALL
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select HAVE_NMI
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select ARCH_USE_QUEUED_RWLOCKS
@@ -148,9 +149,6 @@ config GENERIC_ISA_DMA
 	bool
 	default y if SPARC32
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y if SPARC64
-
 config PGTABLE_LEVELS
 	default 4 if 64BIT
 	default 3
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..0db3fb1da70c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_QUEUED_RWLOCKS
@@ -329,9 +330,6 @@ config ZONE_DMA32
 config AUDIT_ARCH
 	def_bool y if X86_64
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40baa90e74f4..bc9be96b777f 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2194,6 +2194,7 @@ int set_direct_map_default_noflush(struct page *page)
 	return __set_pages_p(page, 1);
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2239,6 +2240,7 @@ bool kernel_page_present(struct page *page)
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
 #endif /* CONFIG_HIBERNATION */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e198b938f5c5..260113ba660a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
 	return static_branch_unlikely(&_debug_pagealloc_enabled);
 }
 
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+#ifdef CONFIG_DEBUG_PAGEALLOC
+/*
+ * To support DEBUG_PAGEALLOC architecture must ensure that
+ * __kernel_map_pages() never fails
+ */
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
@@ -2942,13 +2946,13 @@ static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
-#else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+#else /* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
-#endif /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);

From patchwork Mon Nov 9 19:21:28 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11892339
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas, Christian Borntraeger, Christoph Lameter, "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes, "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar, Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg, Peter Zijlstra, "Rafael J. Wysocki", Thomas Gleixner, Vasily Gorbik, Vlastimil Babka, Will Deacon, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-pm@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v7 4/4] arch, mm: make kernel_page_present() always available
Date: Mon, 9 Nov 2020 21:21:28 +0200
Message-Id: <20201109192128.960-5-rppt@kernel.org>
In-Reply-To: <20201109192128.960-1-rppt@kernel.org>
References: <20201109192128.960-1-rppt@kernel.org>

From: Mike Rapoport

For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.

Add a RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part of the set_memory API and remove the
ugly ifdefery in include/linux/mm.h around the current declarations of
kernel_page_present().

Signed-off-by: Mike Rapoport
Acked-by: Kirill A. Shutemov
Acked-by: David Hildenbrand
---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/mm/pageattr.c            |  4 +---
 arch/riscv/include/asm/set_memory.h |  1 +
 arch/riscv/mm/pageattr.c            | 29 +++++++++++++++++++++++++++++
 arch/x86/include/asm/set_memory.h   |  1 +
 arch/x86/mm/pat/set_memory.c        |  4 +---
 include/linux/mm.h                  |  7 -------
 include/linux/set_memory.h          |  5 +++++
 8 files changed, 39 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 9384fd8fc13c..45217f21f1fe 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
 
 #include
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 439325532be1..92eccaf595c8 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -186,8 +186,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
 }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
-#ifdef CONFIG_HIBERNATION
 /*
  * This function is used to determine if a linear map page has been marked as
  * not-valid. Walk the page table and check the PTE_VALID bit. This is based
@@ -234,5 +234,3 @@ bool kernel_page_present(struct page *page)
 	ptep = pte_offset_kernel(pmdp, addr);
 	return pte_valid(READ_ONCE(*ptep));
 }
-#endif /* CONFIG_HIBERNATION */
-#endif /* CONFIG_DEBUG_PAGEALLOC */
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 4c5bae7ca01c..d690b08dff2a 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 321b09d2e2ea..87ba5a68bbb8 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -198,3 +198,32 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 		     __pgprot(0), __pgprot(_PAGE_PRESENT));
 }
 #endif
+
+bool kernel_page_present(struct page *page)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+	pgd_t *pgd;
+	pud_t *pud;
+	p4d_t *p4d;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset_k(addr);
+	if (!pgd_present(*pgd))
+		return false;
+
+	p4d = p4d_offset(pgd, addr);
+	if (!p4d_present(*p4d))
+		return false;
+
+	pud = pud_offset(p4d, addr);
+	if (!pud_present(*pud))
+		return false;
+
+	pmd = pmd_offset(pud, addr);
+	if (!pmd_present(*pmd))
+		return false;
+
+	pte = pte_offset_kernel(pmd, addr);
+	return pte_present(*pte);
+}
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 5948218f35c5..4352f08bfbb5 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index bc9be96b777f..16f878c26667 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2226,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 
 	arch_flush_lazy_mmu_mode();
 }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
-#ifdef CONFIG_HIBERNATION
 bool kernel_page_present(struct page *page)
 {
 	unsigned int level;
@@ -2239,8 +2239,6 @@ bool kernel_page_present(struct page *page)
 	pte = lookup_address((unsigned long)page_address(page), &level);
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
-#endif /* CONFIG_HIBERNATION */
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 260113ba660a..fe9a8d35a6eb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2942,16 +2942,9 @@ static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
 	if (debug_pagealloc_enabled_static())
 		__kernel_map_pages(page, numpages, 0);
 }
-
-#ifdef CONFIG_HIBERNATION
-extern bool kernel_page_present(struct page *page);
-#endif /* CONFIG_HIBERNATION */
 #else /* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
-#ifdef CONFIG_HIBERNATION
-static inline bool kernel_page_present(struct page *page) { return true; }
-#endif /* CONFIG_HIBERNATION */
 #endif /* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 860e0f843c12..fe1aa4e54680 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -23,6 +23,11 @@ static inline int set_direct_map_default_noflush(struct page *page)
 {
 	return 0;
 }
+
+static inline bool kernel_page_present(struct page *page)
+{
+	return true;
+}
 #endif
 
 #ifndef set_mce_nospec