From patchwork Wed Apr 8 15:01:44 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11480267
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", kirill.shutemov@linux.intel.com, pasha.tatashin@soleen.com
Subject: [PATCH 1/5] mm: Constify a lot of
struct page arguments
Date: Wed, 8 Apr 2020 08:01:44 -0700
Message-Id: <20200408150148.25290-2-willy@infradead.org>
In-Reply-To: <20200408150148.25290-1-willy@infradead.org>
References: <20200408150148.25290-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

For an upcoming patch, we want to be able to pass a const struct page to
dump_page().  That means some inline functions have to become macros, so
that they can return a struct page with the same const-ness as their
argument.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Pavel Tatashin
Acked-by: Kirill A. Shutemov
---
 include/linux/mm.h              | 28 +++++++++++-----------
 include/linux/mm_types.h        | 11 ++-------
 include/linux/mmdebug.h         |  4 ++--
 include/linux/page-flags.h      | 41 +++++++++++++++------------------
 include/linux/page_owner.h      |  6 ++---
 include/linux/page_ref.h        |  4 ++--
 include/linux/pageblock-flags.h |  2 +-
 include/linux/pagemap.h         |  4 ++--
 include/linux/swap.h            |  2 +-
 mm/debug.c                      |  6 ++---
 mm/hugetlb.c                    |  6 ++---
 mm/page_alloc.c                 | 12 +++++-----
 mm/page_owner.c                 |  2 +-
 mm/swapfile.c                   |  6 ++---
 mm/util.c                       |  6 ++---
 15 files changed, 65 insertions(+), 75 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e2f938c5a9d8..61aa63449e7e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -764,7 +764,7 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
 
 extern void kvfree(const void *addr);
 
-static inline int compound_mapcount(struct page *page)
+static inline int compound_mapcount(const struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 	page = compound_head(page);
@@ -781,9 +781,9 @@ static inline void page_mapcount_reset(struct page *page)
 	atomic_set(&(page)->_mapcount, -1);
 }
 
-int __page_mapcount(struct page *page);
+int __page_mapcount(const struct
page *page);
 
-static inline int page_mapcount(struct page *page)
+static inline int page_mapcount(const struct page *page)
 {
 	VM_BUG_ON_PAGE(PageSlab(page), page);
 
@@ -857,14 +857,14 @@ static inline compound_page_dtor *get_compound_page_dtor(struct page *page)
 	return compound_page_dtors[page[1].compound_dtor];
 }
 
-static inline unsigned int compound_order(struct page *page)
+static inline unsigned int compound_order(const struct page *page)
 {
 	if (!PageHead(page))
 		return 0;
 	return page[1].compound_order;
 }
 
-static inline bool hpage_pincount_available(struct page *page)
+static inline bool hpage_pincount_available(const struct page *page)
 {
 	/*
 	 * Can the page->hpage_pinned_refcount field be used? That field is in
@@ -875,7 +875,7 @@ static inline bool hpage_pincount_available(struct page *page)
 	return PageCompound(page) && compound_order(page) > 1;
 }
 
-static inline int compound_pincount(struct page *page)
+static inline int compound_pincount(const struct page *page)
 {
 	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
 	page = compound_head(page);
@@ -1495,12 +1495,12 @@ void page_address_init(void);
 
 extern void *page_rmapping(struct page *page);
 extern struct anon_vma *page_anon_vma(struct page *page);
-extern struct address_space *page_mapping(struct page *page);
+extern struct address_space *page_mapping(const struct page *page);
 
-extern struct address_space *__page_file_mapping(struct page *);
+extern struct address_space *__page_file_mapping(const struct page *);
 
 static inline
-struct address_space *page_file_mapping(struct page *page)
+struct address_space *page_file_mapping(const struct page *page)
 {
 	if (unlikely(PageSwapCache(page)))
 		return __page_file_mapping(page);
@@ -1508,13 +1508,13 @@ struct address_space *page_file_mapping(struct page *page)
 	return page->mapping;
 }
 
-extern pgoff_t __page_file_index(struct page *page);
+extern pgoff_t __page_file_index(const struct page *page);
 
 /*
 * Return the pagecache index of the passed page.
Regular pagecache pages
 * use ->index whereas swapcache pages use swp_offset(->private)
 */
-static inline pgoff_t page_index(struct page *page)
+static inline pgoff_t page_index(const struct page *page)
 {
 	if (unlikely(PageSwapCache(page)))
 		return __page_file_index(page);
@@ -1522,15 +1522,15 @@ static inline pgoff_t page_index(struct page *page)
 }
 
 bool page_mapped(struct page *page);
-struct address_space *page_mapping(struct page *page);
-struct address_space *page_mapping_file(struct page *page);
+struct address_space *page_mapping(const struct page *page);
+struct address_space *page_mapping_file(const struct page *page);
 
 /*
 * Return true only if the page has been allocated with
 * ALLOC_NO_WATERMARKS and the low watermark was not
 * met implying that the system is under some pressure.
 */
-static inline bool page_is_pfmemalloc(struct page *page)
+static inline bool page_is_pfmemalloc(const struct page *page)
 {
 	/*
 	 * Page index cannot be this large so this must be
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4aba6c0c2ba8..a8c3fa076f43 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -221,15 +221,8 @@ struct page {
 #endif
 } _struct_page_alignment;
 
-static inline atomic_t *compound_mapcount_ptr(struct page *page)
-{
-	return &page[1].compound_mapcount;
-}
-
-static inline atomic_t *compound_pincount_ptr(struct page *page)
-{
-	return &page[2].hpage_pinned_refcount;
-}
+#define compound_mapcount_ptr(page) (&(page)[1].compound_mapcount)
+#define compound_pincount_ptr(page) (&(page)[2].hpage_pinned_refcount)
 
 /*
 * Used for sizing the vmemmap region on some architectures
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 2ad72d2c8cc5..71246df469a0 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -9,8 +9,8 @@ struct page;
 struct vm_area_struct;
 struct mm_struct;
 
-extern void dump_page(struct page *page, const char *reason);
-extern void __dump_page(struct page *page, const char *reason);
+extern void dump_page(const struct page *page, const char *reason);
+extern void __dump_page(const struct page *page, const char *reason);
 void dump_vma(const struct vm_area_struct *vma);
 void dump_mm(const struct mm_struct *mm);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 222f6f7b2bb3..f1ab1f2e6aba 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -175,23 +175,20 @@ enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
-struct page;	/* forward declaration */
+#define compound_head(page) ({ \
+	__typeof__(page) _page = page; \
+	unsigned long head = READ_ONCE(_page->compound_head); \
+	if (unlikely(head & 1)) \
+		_page = (void *)(head - 1); \
+	_page; \
+})
 
-static inline struct page *compound_head(struct page *page)
-{
-	unsigned long head = READ_ONCE(page->compound_head);
-
-	if (unlikely(head & 1))
-		return (struct page *) (head - 1);
-	return page;
-}
-
-static __always_inline int PageTail(struct page *page)
+static __always_inline int PageTail(const struct page *page)
 {
 	return READ_ONCE(page->compound_head) & 1;
 }
 
-static __always_inline int PageCompound(struct page *page)
+static __always_inline int PageCompound(const struct page *page)
 {
 	return test_bit(PG_head, &page->flags) || PageTail(page);
 }
@@ -252,7 +249,7 @@ static inline void page_init_poison(struct page *page, size_t size)
 * Macros to create function definitions for page flags
 */
 #define TESTPAGEFLAG(uname, lname, policy)				\
-static __always_inline int Page##uname(struct page *page)		\
+static __always_inline int Page##uname(const struct page *page)		\
 	{ return test_bit(PG_##lname, &policy(page, 0)->flags); }
 
 #define SETPAGEFLAG(uname, lname, policy)				\
@@ -385,7 +382,7 @@ PAGEFLAG_FALSE(HighMem)
 #endif
 
 #ifdef CONFIG_SWAP
-static __always_inline int PageSwapCache(struct page *page)
+static __always_inline int PageSwapCache(const struct page *page)
 {
 #ifdef CONFIG_THP_SWAP
 	page = compound_head(page);
@@ -474,7 +471,7 @@ static __always_inline int
PageMappingFlags(struct page *page)
 	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
 }
 
-static __always_inline int PageAnon(struct page *page)
+static __always_inline int PageAnon(const struct page *page)
 {
 	page = compound_head(page);
 	return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
@@ -493,7 +490,7 @@ static __always_inline int __PageMovable(struct page *page)
 * is found in VM_MERGEABLE vmas. It's a PageAnon page, pointing not to any
 * anon_vma, but to that page's node of the stable tree.
 */
-static __always_inline int PageKsm(struct page *page)
+static __always_inline int PageKsm(const struct page *page)
 {
 	page = compound_head(page);
 	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
@@ -586,14 +583,14 @@ static inline void ClearPageCompound(struct page *page)
 #define PG_head_mask ((1UL << PG_head))
 
 #ifdef CONFIG_HUGETLB_PAGE
-int PageHuge(struct page *page);
-int PageHeadHuge(struct page *page);
-bool page_huge_active(struct page *page);
+int PageHuge(const struct page *page);
+int PageHeadHuge(const struct page *page);
+bool page_huge_active(const struct page *page);
 #else
 TESTPAGEFLAG_FALSE(Huge)
 TESTPAGEFLAG_FALSE(HeadHuge)
 
-static inline bool page_huge_active(struct page *page)
+static inline bool page_huge_active(const struct page *page)
 {
 	return 0;
 }
@@ -667,7 +664,7 @@ static inline int PageTransCompoundMap(struct page *page)
 * and hugetlbfs pages, so it should only be called when it's known
 * that hugetlbfs pages aren't involved.
 */
-static inline int PageTransTail(struct page *page)
+static inline int PageTransTail(const struct page *page)
 {
 	return PageTail(page);
 }
@@ -685,7 +682,7 @@ static inline int PageTransTail(struct page *page)
 *
 * See also __split_huge_pmd_locked() and page_remove_anon_compound_rmap().
 */
-static inline int PageDoubleMap(struct page *page)
+static inline int PageDoubleMap(const struct page *page)
 {
 	return PageHead(page) && test_bit(PG_double_map, &page[1].flags);
 }
diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 8679ccd722e8..16d4885e2f7a 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -14,7 +14,7 @@ extern void __set_page_owner(struct page *page,
 extern void __split_page_owner(struct page *page, unsigned int order);
 extern void __copy_page_owner(struct page *oldpage, struct page *newpage);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
-extern void __dump_page_owner(struct page *page);
+extern void __dump_page_owner(const struct page *page);
 extern void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 					pg_data_t *pgdat, struct zone *zone);
 
@@ -46,7 +46,7 @@ static inline void set_page_owner_migrate_reason(struct page *page, int reason)
 	if (static_branch_unlikely(&page_owner_inited))
 		__set_page_owner_migrate_reason(page, reason);
 }
-static inline void dump_page_owner(struct page *page)
+static inline void dump_page_owner(const struct page *page)
 {
 	if (static_branch_unlikely(&page_owner_inited))
 		__dump_page_owner(page);
@@ -69,7 +69,7 @@ static inline void copy_page_owner(struct page *oldpage, struct page *newpage)
 static inline void set_page_owner_migrate_reason(struct page *page, int reason)
 {
 }
-static inline void dump_page_owner(struct page *page)
+static inline void dump_page_owner(const struct page *page)
 {
 }
 #endif /* CONFIG_PAGE_OWNER */
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index d27701199a4d..f2c4872f50f0 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -62,12 +62,12 @@ static inline void __page_ref_unfreeze(struct page *page, int v)
 
 #endif
 
-static inline int page_ref_count(struct page *page)
+static inline int page_ref_count(const struct page *page)
 {
 	return atomic_read(&page->_refcount);
 }
 
-static
inline int page_count(struct page *page)
+static inline int page_count(const struct page *page)
 {
 	return atomic_read(&compound_head(page)->_refcount);
 }
diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index c066fec5b74b..aa13dca7bf04 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -54,7 +54,7 @@ extern unsigned int pageblock_order;
 /* Forward declaration */
 struct page;
 
-unsigned long get_pfnblock_flags_mask(struct page *page,
+unsigned long get_pfnblock_flags_mask(const struct page *page,
 				unsigned long pfn,
 				unsigned long end_bitidx,
 				unsigned long mask);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a8f7bd8ea1c6..ca1aaa8ce813 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -401,7 +401,7 @@ static inline struct page *read_mapping_page(struct address_space *mapping,
 * Get index of the page with in radix-tree
 * (TODO: remove once hugetlb pages will have ->index in PAGE_SIZE)
 */
-static inline pgoff_t page_to_index(struct page *page)
+static inline pgoff_t page_to_index(const struct page *page)
 {
 	pgoff_t pgoff;
 
@@ -421,7 +421,7 @@ static inline pgoff_t page_to_index(struct page *page)
 * Get the offset in PAGE_SIZE.
 * (TODO: hugepage should have ->index in PAGE_SIZE)
 */
-static inline pgoff_t page_to_pgoff(struct page *page)
+static inline pgoff_t page_to_pgoff(const struct page *page)
 {
 	if (unlikely(PageHeadHuge(page)))
 		return page->index << compound_order(page);
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b835d8dbea0e..6e602ce0eb0c 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -464,7 +464,7 @@ extern int page_swapcount(struct page *);
 extern int __swap_count(swp_entry_t entry);
 extern int __swp_swapcount(swp_entry_t entry);
 extern int swp_swapcount(swp_entry_t entry);
-extern struct swap_info_struct *page_swap_info(struct page *);
+extern struct swap_info_struct *page_swap_info(const struct page *);
 extern struct swap_info_struct *swp_swap_info(swp_entry_t entry);
 extern bool reuse_swap_page(struct page *, int *);
 extern int try_to_free_swap(struct page *);
diff --git a/mm/debug.c b/mm/debug.c
index 2189357f0987..69862d3b04e5 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -42,9 +42,9 @@ const struct trace_print_flags vmaflag_names[] = {
 	{0, NULL}
 };
 
-void __dump_page(struct page *page, const char *reason)
+void __dump_page(const struct page *page, const char *reason)
 {
-	struct page *head = compound_head(page);
+	const struct page *head = compound_head(page);
 	struct address_space *mapping;
 	bool page_poisoned = PagePoisoned(page);
 	bool compound = PageCompound(page);
@@ -140,7 +140,7 @@ void __dump_page(struct page *page, const char *reason)
 #endif
 }
 
-void dump_page(struct page *page, const char *reason)
+void dump_page(const struct page *page, const char *reason)
 {
 	__dump_page(page, reason);
 	dump_page_owner(page);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f5fb53fdfa02..0131974369cb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1305,7 +1305,7 @@ struct hstate *size_to_hstate(unsigned long size)
 *
 * This function can be called for tail pages, but never returns true for them.
 */
-bool page_huge_active(struct page *page)
+bool page_huge_active(const struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
 	return PageHead(page) && PagePrivate(&page[1]);
@@ -1509,7 +1509,7 @@ static void prep_compound_gigantic_page(struct page *page, unsigned int order)
 * transparent huge pages. See the PageTransHuge() documentation for more
 * details.
 */
-int PageHuge(struct page *page)
+int PageHuge(const struct page *page)
 {
 	if (!PageCompound(page))
 		return 0;
@@ -1523,7 +1523,7 @@ EXPORT_SYMBOL_GPL(PageHuge);
 * PageHeadHuge() only returns true for hugetlbfs head page, but not for
 * normal or transparent huge pages.
 */
-int PageHeadHuge(struct page *page_head)
+int PageHeadHuge(const struct page *page_head)
 {
 	if (!PageHead(page_head))
 		return 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 114c56c3685d..9634c6e44197 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -446,7 +446,7 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 #endif
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
-static inline unsigned long *get_pageblock_bitmap(struct page *page,
+static inline unsigned long *get_pageblock_bitmap(const struct page *page,
 							unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
@@ -456,7 +456,7 @@ static inline unsigned long *get_pageblock_bitmap(struct page *page,
 #endif /* CONFIG_SPARSEMEM */
 }
 
-static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
+static inline int pfn_to_bitidx(const struct page *page, unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
 	pfn &= (PAGES_PER_SECTION-1);
@@ -476,7 +476,8 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
 *
 * Return: pageblock_bits flags
 */
-static __always_inline unsigned long __get_pfnblock_flags_mask(struct page *page,
+static __always_inline
+unsigned long __get_pfnblock_flags_mask(const struct page *page,
 					unsigned long pfn,
 					unsigned long end_bitidx,
 					unsigned long mask)
@@ -495,9 +496,8 @@ static
__always_inline unsigned long __get_pfnblock_flags_mask(struct page *page
 	return (word >> (BITS_PER_LONG - bitidx - 1)) & mask;
 }
 
-unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
-					unsigned long end_bitidx,
-					unsigned long mask)
+unsigned long get_pfnblock_flags_mask(const struct page *page,
+		unsigned long pfn, unsigned long end_bitidx, unsigned long mask)
 {
 	return __get_pfnblock_flags_mask(page, pfn, end_bitidx, mask);
 }
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 18ecde9f45b2..a22afbb95c46 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -399,7 +399,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
 	return -ENOMEM;
 }
 
-void __dump_page_owner(struct page *page)
+void __dump_page_owner(const struct page *page)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
 	struct page_owner *page_owner;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..9d9e01f1716c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3489,7 +3489,7 @@ struct swap_info_struct *swp_swap_info(swp_entry_t entry)
 	return swap_type_to_swap_info(swp_type(entry));
 }
 
-struct swap_info_struct *page_swap_info(struct page *page)
+struct swap_info_struct *page_swap_info(const struct page *page)
 {
 	swp_entry_t entry = { .val = page_private(page) };
 	return swp_swap_info(entry);
@@ -3498,13 +3498,13 @@ struct swap_info_struct *page_swap_info(struct page *page)
 
 /*
 * out-of-line __page_file_ methods to avoid include hell.
 */
-struct address_space *__page_file_mapping(struct page *page)
+struct address_space *__page_file_mapping(const struct page *page)
 {
 	return page_swap_info(page)->swap_file->f_mapping;
 }
 EXPORT_SYMBOL_GPL(__page_file_mapping);
 
-pgoff_t __page_file_index(struct page *page)
+pgoff_t __page_file_index(const struct page *page)
 {
 	swp_entry_t swap = { .val = page_private(page) };
 	return swp_offset(swap);
diff --git a/mm/util.c b/mm/util.c
index 988d11e6c17c..05f5b36f81f9 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -655,7 +655,7 @@ struct anon_vma *page_anon_vma(struct page *page)
 	return __page_rmapping(page);
 }
 
-struct address_space *page_mapping(struct page *page)
+struct address_space *page_mapping(const struct page *page)
 {
 	struct address_space *mapping;
 
@@ -683,7 +683,7 @@ EXPORT_SYMBOL(page_mapping);
 /*
 * For file cache pages, return the address_space, otherwise return NULL
 */
-struct address_space *page_mapping_file(struct page *page)
+struct address_space *page_mapping_file(const struct page *page)
 {
 	if (unlikely(PageSwapCache(page)))
 		return NULL;
@@ -691,7 +691,7 @@ struct address_space *page_mapping_file(struct page *page)
 }
 
 /* Slow path of page_mapcount() for compound pages */
-int __page_mapcount(struct page *page)
+int __page_mapcount(const struct page *page)
 {
 	int ret;

From patchwork Wed Apr 8 15:01:45 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11480265
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", kirill.shutemov@linux.intel.com, pasha.tatashin@soleen.com
Subject: [PATCH 2/5] mm: Rename PF_POISONED_CHECK to page_poison_check
Date: Wed, 8 Apr 2020 08:01:45 -0700
Message-Id: <20200408150148.25290-3-willy@infradead.org>
In-Reply-To: <20200408150148.25290-1-willy@infradead.org>
References: <20200408150148.25290-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

The PF_POISONED_CHECK name is misleading because it is not a page flag
policy.  Switch from VM_BUG_ON_PGFLAGS to VM_BUG_ON_PAGE.  Move the
implementation further up in the file for the benefit of future patches.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Pavel Tatashin
Acked-by: Kirill A.
Shutemov
---
 include/linux/mm.h         |  2 +-
 include/linux/page-flags.h | 34 +++++++++++++++++-----------------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 61aa63449e7e..933450bdcfd4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1248,7 +1248,7 @@ static inline int page_to_nid(const struct page *page)
 {
 	struct page *p = (struct page *)page;
 
-	return (PF_POISONED_CHECK(p)->flags >> NODES_PGSHIFT) & NODES_MASK;
+	return (page_poison_check(p)->flags >> NODES_PGSHIFT) & NODES_MASK;
 }
 #endif
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f1ab1f2e6aba..331aef35f3e0 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -175,6 +175,18 @@ enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
+#define PAGE_POISON_PATTERN	-1l
+static inline int PagePoisoned(const struct page *page)
+{
+	return page->flags == PAGE_POISON_PATTERN;
+}
+
+#define page_poison_check(page) ({ \
+	__typeof__(page) ___page = page; \
+	VM_BUG_ON_PAGE(PagePoisoned(___page), ___page); \
+	___page; \
+})
+
 #define compound_head(page) ({ \
 	__typeof__(page) _page = page; \
 	unsigned long head = READ_ONCE(_page->compound_head); \
@@ -193,12 +205,6 @@ static __always_inline int PageCompound(const struct page *page)
 	return test_bit(PG_head, &page->flags) || PageTail(page);
 }
 
-#define PAGE_POISON_PATTERN	-1l
-static inline int PagePoisoned(const struct page *page)
-{
-	return page->flags == PAGE_POISON_PATTERN;
-}
-
 #ifdef CONFIG_DEBUG_VM
 void page_init_poison(struct page *page, size_t size);
 #else
@@ -210,9 +216,6 @@ static inline void page_init_poison(struct page *page, size_t size)
 /*
 * Page flags policies wrt compound pages
 *
- * PF_POISONED_CHECK
- *	check if this struct page poisoned/uninitialized
- *
 * PF_ANY:
 *	the page flag is relevant for small, head and tail pages.
 *
@@ -230,20 +233,17 @@ static inline void page_init_poison(struct page *page, size_t size)
 * PF_NO_COMPOUND:
 *	the page flag is not relevant for compound pages.
 */
-#define PF_POISONED_CHECK(page) ({					\
-		VM_BUG_ON_PGFLAGS(PagePoisoned(page), page);		\
-		page; })
-#define PF_ANY(page, enforce)	PF_POISONED_CHECK(page)
-#define PF_HEAD(page, enforce)	PF_POISONED_CHECK(compound_head(page))
+#define PF_ANY(page, enforce)	page_poison_check(page)
+#define PF_HEAD(page, enforce)	page_poison_check(compound_head(page))
 #define PF_ONLY_HEAD(page, enforce) ({					\
 		VM_BUG_ON_PGFLAGS(PageTail(page), page);		\
-		PF_POISONED_CHECK(page); })
+		page_poison_check(page); })
 #define PF_NO_TAIL(page, enforce) ({					\
 		VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);	\
-		PF_POISONED_CHECK(compound_head(page)); })
+		page_poison_check(compound_head(page)); })
 #define PF_NO_COMPOUND(page, enforce) ({				\
 		VM_BUG_ON_PGFLAGS(enforce && PageCompound(page), page);	\
-		PF_POISONED_CHECK(page); })
+		page_poison_check(page); })
 
 /*
 * Macros to create function definitions for page flags

From patchwork Wed Apr 8 15:01:46 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11480261
mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id DA4D98E0016; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id C20978E000D; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9D12A8E0016; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0149.hostedemail.com [216.40.44.149]) by kanga.kvack.org (Postfix) with ESMTP id 7B7098E000D for ; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 24E7FA8E5 for ; Wed, 8 Apr 2020 15:01:52 +0000 (UTC) X-FDA: 76685002464.21.foot27_1045bece96a04 X-Spam-Summary: 2,0,0,27b0e9d5549ab737,d41d8cd98f00b204,willy@infradead.org,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1539:1711:1714:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3350:3865:3866:3870:3874:5007:6261:6653:7576:9036:9592:10004:11026:11658:11914:12043:12296:12297:12555:12895:13069:13311:13357:13894:14096:14181:14384:14394:14721:21080:21451:21627:21990:30054,0,RBL:198.137.202.133:@infradead.org:.lbl8.mailshell.net-62.8.0.100 64.201.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none X-HE-Tag: foot27_1045bece96a04 X-Filterd-Recvd-Size: 2329 Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) by imf34.hostedemail.com (Postfix) with ESMTP for ; Wed, 8 Apr 2020 15:01:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; 
h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description; bh=zzVFBtDmUhkfOeTRAYLgyVd2MT4j5JpTQDiGhm5MySo=; b=ja0GX5XkdIq9Jshzty3G+gzhfc EuzydHa/NC9KYudAlaaFz/Z3Q3lSmGAKNojMUWVsylfZ9YGoLWtZFoi0DFB4zf8Ak0imZPU4eqeIU IxniAoBEo4b9sSDGv3XehkqqF0aCaKr4O7DIIg0orcnv6l7XCF91nmer88pFwRMIVlW34TeR5JvI/ q6mk+XZk9aPnOKPwyYdJZpcq8UqW3LVSgZ8/7qD/3R+x/U7eyU3tOn/WuBACG+tomCJM0UjpBcn1k wwxotJz+GcD6YO+PbbvD2409DPshB6YBG46WZLBgCsrIPySH1lyLshv6mXQXNFWzF+m6KU2+LvkzB hU7e6svQ==; Received: from willy by bombadil.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1jMCD8-0006bE-6h; Wed, 08 Apr 2020 15:01:50 +0000 From: Matthew Wilcox To: linux-mm@kvack.org Cc: "Matthew Wilcox (Oracle)" , kirill.shutemov@linux.intel.com, pasha.tatashin@soleen.com Subject: [PATCH 3/5] mm: Remove casting away of constness Date: Wed, 8 Apr 2020 08:01:46 -0700 Message-Id: <20200408150148.25290-4-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200408150148.25290-1-willy@infradead.org> References: <20200408150148.25290-1-willy@infradead.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Now that dump_page can take a const struct page pointer, we can get rid of the cast in page_to_nid(). Reviewed-by: Pavel Tatashin Acked-by: Kirill A. 
Shutemov --- include/linux/mm.h | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 933450bdcfd4..047144b894bd 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1246,9 +1246,7 @@ extern int page_to_nid(const struct page *page); #else static inline int page_to_nid(const struct page *page) { - struct page *p = (struct page *)page; - - return (page_poison_check(p)->flags >> NODES_PGSHIFT) & NODES_MASK; + return (page_poison_check(page)->flags >> NODES_PGSHIFT) & NODES_MASK; } #endif From patchwork Wed Apr 8 15:01:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11480259 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B0416112C for ; Wed, 8 Apr 2020 15:01:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7DB42206C0 for ; Wed, 8 Apr 2020 15:01:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="PWwK5tjq" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7DB42206C0 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=infradead.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9D4488E0018; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 90D7E8E0017; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 760398E0016; Wed, 8 Apr 2020 11:01:52 -0400 (EDT) 
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", kirill.shutemov@linux.intel.com, pasha.tatashin@soleen.com
Subject: [PATCH 4/5] mm: Check for page poison in both page_to_nid implementations
Date: Wed, 8 Apr 2020 08:01:47 -0700
Message-Id: <20200408150148.25290-5-willy@infradead.org>
In-Reply-To: <20200408150148.25290-1-willy@infradead.org>
References: <20200408150148.25290-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

The earlier patch that added page poison checking in page_to_nid() only
modified one implementation; both configuration options should have this
check.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Pavel Tatashin
Acked-by: Kirill A. Shutemov
---
 mm/sparse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/sparse.c b/mm/sparse.c
index 1aee5a481571..39114451408a 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -46,6 +46,7 @@ static u16 section_to_node_table[NR_MEM_SECTIONS] __cacheline_aligned;
 int page_to_nid(const struct page *page)
 {
+	page_poison_check(page);
 	return section_to_node_table[page_to_section(page)];
 }
 EXPORT_SYMBOL(page_to_nid);

From patchwork Wed Apr 8 15:01:48 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11480257
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", kirill.shutemov@linux.intel.com, pasha.tatashin@soleen.com
Subject: [PATCH 5/5] mm: Check page poison before finding a head page
Date: Wed, 8 Apr 2020 08:01:48 -0700
Message-Id: <20200408150148.25290-6-willy@infradead.org>
In-Reply-To: <20200408150148.25290-1-willy@infradead.org>
References: <20200408150148.25290-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

If a page is poisoned, page->compound_head will be set to -1. Since that
value has bit zero set, we will think it is a tail page, and that the
head page is at 0xff..fe. Checking said head page for being poisoned
will not have good results. Therefore we need to check for poison in
each of compound_head(), PageTail() and PageCompound() (and can remove
the checks which are now redundant from the PF_ macros).

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Pavel Tatashin
Acked-by: Kirill A. Shutemov
---
 include/linux/page-flags.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 331aef35f3e0..340ceeeda8ed 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -190,6 +190,7 @@ static inline int PagePoisoned(const struct page *page)
 #define compound_head(page)	({					\
 	__typeof__(page) _page = page;					\
 	unsigned long head = READ_ONCE(_page->compound_head);		\
+	VM_BUG_ON_PAGE(head == PAGE_POISON_PATTERN, page);		\
 	if (unlikely(head & 1))						\
 		_page = (void *)(head - 1);				\
 	_page;								\
@@ -197,11 +198,13 @@ static inline int PagePoisoned(const struct page *page)
 
 static __always_inline int PageTail(const struct page *page)
 {
+	page_poison_check(page);
 	return READ_ONCE(page->compound_head) & 1;
 }
 
 static __always_inline int PageCompound(const struct page *page)
 {
+	page_poison_check(page);
 	return test_bit(PG_head, &page->flags) || PageTail(page);
 }
 
@@ -234,13 +237,13 @@ static inline void page_init_poison(struct page *page, size_t size)
  *	the page flag is not relevant for compound pages.
  */
 #define PF_ANY(page, enforce)	page_poison_check(page)
-#define PF_HEAD(page, enforce)	page_poison_check(compound_head(page))
+#define PF_HEAD(page, enforce)	compound_head(page)
 #define PF_ONLY_HEAD(page, enforce) ({					\
 		VM_BUG_ON_PGFLAGS(PageTail(page), page);		\
-		page_poison_check(page); })
+		page; })
 #define PF_NO_TAIL(page, enforce) ({					\
 		VM_BUG_ON_PGFLAGS(enforce && PageTail(page), page);	\
-		page_poison_check(compound_head(page)); })
+		compound_head(page); })
 #define PF_NO_COMPOUND(page, enforce) ({				\
 		VM_BUG_ON_PGFLAGS(enforce && PageCompound(page), page);	\
 		page_poison_check(page); })