From patchwork Tue Jan 29 00:34:16 2019
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 10785265
From: Rick Edgecombe
To: Andy Lutomirski, Ingo Molnar
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, hpa@zytor.com,
    Thomas Gleixner, Borislav Petkov, Nadav Amit, Dave Hansen,
    Peter Zijlstra, linux_dti@icloud.com, linux-integrity@vger.kernel.org,
    linux-security-module@vger.kernel.org, akpm@linux-foundation.org,
    kernel-hardening@lists.openwall.com, linux-mm@kvack.org,
    will.deacon@arm.com, ard.biesheuvel@linaro.org, kristen@linux.intel.com,
    deneen.t.dock@intel.com, Rick Edgecombe, "Rafael J. Wysocki",
    Pavel Machek
Subject: [PATCH v2 14/20] mm: Make hibernate handle unmapped pages
Date: Mon, 28 Jan 2019 16:34:16 -0800
Message-Id: <20190129003422.9328-15-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190129003422.9328-1-rick.p.edgecombe@intel.com>
References: <20190129003422.9328-1-rick.p.edgecombe@intel.com>

For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
briefly on the direct map, even when CONFIG_DEBUG_PAGEALLOC is not
configured. So this changes kernel_map_pages() and kernel_page_present()
to be defined when CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also
updates the places in page_alloc.c that assumed those functions are only
implemented when CONFIG_DEBUG_PAGEALLOC is defined.

So now when CONFIG_ARCH_HAS_SET_ALIAS=y, hibernate will handle
not-present pages when saving. Previously this was only done when
CONFIG_DEBUG_PAGEALLOC was configured. It does not appear to have a
significant impact on hibernation performance:

Before:
[    4.670938] PM: Wrote 171996 kbytes in 0.21 seconds (819.02 MB/s)

After:
[    4.504714] PM: Wrote 178932 kbytes in 0.22 seconds (813.32 MB/s)

Wysocki" Cc: Pavel Machek Acked-by: Pavel Machek Signed-off-by: Rick Edgecombe --- arch/x86/mm/pageattr.c | 4 ---- include/linux/mm.h | 18 ++++++------------ mm/page_alloc.c | 7 +++++-- 3 files changed, 11 insertions(+), 18 deletions(-) diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c index 3a51915a1410..717bdc188aab 100644 --- a/arch/x86/mm/pageattr.c +++ b/arch/x86/mm/pageattr.c @@ -2257,7 +2257,6 @@ int set_alias_default_noflush(struct page *page) return __set_pages_p(page, 1); } -#ifdef CONFIG_DEBUG_PAGEALLOC void __kernel_map_pages(struct page *page, int numpages, int enable) { if (PageHighMem(page)) @@ -2302,11 +2301,8 @@ bool kernel_page_present(struct page *page) pte = lookup_address((unsigned long)page_address(page), &level); return (pte_val(*pte) & _PAGE_PRESENT); } - #endif /* CONFIG_HIBERNATION */ -#endif /* CONFIG_DEBUG_PAGEALLOC */ - int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address, unsigned numpages, unsigned long page_flags) { diff --git a/include/linux/mm.h b/include/linux/mm.h index 80bb6408fe73..b362a280a919 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2642,37 +2642,31 @@ static inline void kernel_poison_pages(struct page *page, int numpages, int enable) { } #endif -#ifdef CONFIG_DEBUG_PAGEALLOC extern bool _debug_pagealloc_enabled; -extern void __kernel_map_pages(struct page *page, int numpages, int enable); static inline bool debug_pagealloc_enabled(void) { - return _debug_pagealloc_enabled; + return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled; } +#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_ALIAS) +extern void __kernel_map_pages(struct page *page, int numpages, int enable); + static inline void kernel_map_pages(struct page *page, int numpages, int enable) { - if (!debug_pagealloc_enabled()) - return; - __kernel_map_pages(page, numpages, enable); } #ifdef CONFIG_HIBERNATION extern bool kernel_page_present(struct page *page); #endif /* CONFIG_HIBERNATION */ -#else /* CONFIG_DEBUG_PAGEALLOC */ +#else /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_ALIAS */ static inline void kernel_map_pages(struct page *page, int numpages, int enable) {} #ifdef CONFIG_HIBERNATION static inline bool kernel_page_present(struct page *page) { return true; } #endif /* CONFIG_HIBERNATION */ -static inline bool debug_pagealloc_enabled(void) -{ - return false; -} -#endif /* CONFIG_DEBUG_PAGEALLOC */ +#endif /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_ALIAS */ #ifdef __HAVE_ARCH_GATE_AREA extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d295c9bc01a8..92d0a0934274 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1074,7 +1074,9 @@ static __always_inline bool free_pages_prepare(struct page *page, } arch_free_page(page, order); kernel_poison_pages(page, 1 << order, 0); - kernel_map_pages(page, 1 << order, 0); + if (debug_pagealloc_enabled()) + kernel_map_pages(page, 1 << order, 0); + kasan_free_nondeferred_pages(page, order); return true; @@ -1944,7 +1946,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order, set_page_refcounted(page); arch_alloc_page(page, order); - kernel_map_pages(page, 1 << order, 1); + if (debug_pagealloc_enabled()) + kernel_map_pages(page, 1 << order, 1); kernel_poison_pages(page, 1 << order, 1); kasan_alloc_pages(page, order); set_page_owner(page, order, gfp_flags);