[4/9] mm: Support page_mapcount() on page_has_type() pages

Message ID 20240321142448.1645400-5-willy@infradead.org (mailing list archive)
State: New
Series: Various significant MM patches

Commit Message

Matthew Wilcox March 21, 2024, 2:24 p.m. UTC
Return 0 for pages which can't be mapped.  This matches how page_mapped()
works.  It is more convenient for users to not have to filter out
these pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/proc/page.c             | 7 ++-----
 include/linux/mm.h         | 8 +++++---
 include/linux/page-flags.h | 4 ++--
 3 files changed, 9 insertions(+), 10 deletions(-)
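
A minimal, illustrative sketch (not part of the patch) of why the single
sign check added below is enough: page_type aliases _mapcount in struct
page, and type values such as PAGE_TYPE_BASE | PG_buddy are strongly
negative when read as a signed int, so adding 1 and clamping at 0 covers
both "never mapped" and "has a page type".  The constants are copied
from the include/linux/page-flags.h hunk; the demo itself is plain
userspace C.

/* Illustrative userspace demo only; constants copied from page-flags.h. */
#include <stdio.h>

#define PAGE_TYPE_BASE	0xf0000000
#define PG_buddy	0x00000080

int main(void)
{
	int unmapped = -1;				/* _mapcount of a normal, unmapped page */
	int typed = (int)(PAGE_TYPE_BASE | PG_buddy);	/* e.g. a buddy page: 0xf0000080 */
	int mapcount = typed + 1;			/* still strongly negative */

	if (mapcount < 0)				/* the new clamp in page_mapcount() */
		mapcount = 0;

	printf("unmapped: %d, typed: %d\n", unmapped + 1, mapcount);
	/* prints "unmapped: 0, typed: 0" */
	return 0;
}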

Comments

Vlastimil Babka March 22, 2024, 9:43 a.m. UTC | #1
On 3/21/24 15:24, Matthew Wilcox (Oracle) wrote:
> Return 0 for pages which can't be mapped.  This matches how page_mapped()
> works.  It is more convenient for users to not have to filter out
> these pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Hm strictly speaking you shouldn't be removing those PageSlab tests until
it's changed to a PageType in 7/9? If we're paranoid enough about not
breaking bisection between this and that patch.

Otherwise

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  fs/proc/page.c             | 7 ++-----
>  include/linux/mm.h         | 8 +++++---
>  include/linux/page-flags.h | 4 ++--
>  3 files changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index 195b077c0fac..9223856c934b 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -67,7 +67,7 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
>  		 */
>  		ppage = pfn_to_online_page(pfn);
>  
> -		if (!ppage || PageSlab(ppage) || page_has_type(ppage))
> +		if (!ppage)
>  			pcount = 0;
>  		else
>  			pcount = page_mapcount(ppage);
> @@ -124,11 +124,8 @@ u64 stable_page_flags(struct page *page)
>  
>  	/*
>  	 * pseudo flags for the well known (anonymous) memory mapped pages
> -	 *
> -	 * Note that page->_mapcount is overloaded in SLAB, so the
> -	 * simple test in page_mapped() is not enough.
>  	 */
> -	if (!PageSlab(page) && page_mapped(page))
> +	if (page_mapped(page))
>  		u |= 1 << KPF_MMAP;
>  	if (PageAnon(page))
>  		u |= 1 << KPF_ANON;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0436b919f1c7..5ff3d687bc6c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1223,14 +1223,16 @@ static inline void page_mapcount_reset(struct page *page)
>   * a large folio, it includes the number of times this page is mapped
>   * as part of that folio.
>   *
> - * The result is undefined for pages which cannot be mapped into userspace.
> - * For example SLAB or special types of pages. See function page_has_type().
> - * They use this field in struct page differently.
> + * Will report 0 for pages which cannot be mapped into userspace, eg
> + * slab, page tables and similar.
>   */
>  static inline int page_mapcount(struct page *page)
>  {
>  	int mapcount = atomic_read(&page->_mapcount) + 1;
>  
> +	/* Handle page_has_type() pages */
> +	if (mapcount < 0)
> +		mapcount = 0;
>  	if (unlikely(PageCompound(page)))
>  		mapcount += folio_entire_mapcount(page_folio(page));
>  
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 8d0e6ce25ca2..5852f967c640 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -971,12 +971,12 @@ static inline bool is_page_hwpoison(struct page *page)
>   * page_type may be used.  Because it is initialised to -1, we invert the
>   * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
>   * __ClearPageFoo *sets* the bit used for PageFoo.  We reserve a few high and
> - * low bits so that an underflow or overflow of page_mapcount() won't be
> + * low bits so that an underflow or overflow of _mapcount won't be
>   * mistaken for a page type value.
>   */
>  
>  #define PAGE_TYPE_BASE	0xf0000000
> -/* Reserve		0x0000007f to catch underflows of page_mapcount */
> +/* Reserve		0x0000007f to catch underflows of _mapcount */
>  #define PAGE_MAPCOUNT_RESERVE	-128
>  #define PG_buddy	0x00000080
>  #define PG_offline	0x00000100

Matthew Wilcox March 22, 2024, 12:43 p.m. UTC | #2
On Fri, Mar 22, 2024 at 10:43:38AM +0100, Vlastimil Babka wrote:
> On 3/21/24 15:24, Matthew Wilcox (Oracle) wrote:
> > Return 0 for pages which can't be mapped.  This matches how page_mapped()
> > works.  It is more convenient for users to not have to filter out
> > these pages.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> Hm strictly speaking you shouldn't be removing those PageSlab tests until
> it's changed to a PageType in 7/9? If we're paranoid enough about not
> breaking bisection between this and that patch.

I thought about that.  Slub doesn't currently use the field which will
become __page_type, so it's left set to -1 by the page allocator.  So
this is safe.

Thanks for checking that though ;-)
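
Spelled out as an illustrative sketch: between this patch and 7/9, a
slab page still carries the page allocator's -1 in _mapcount, so the
new code computes the same 0 that the removed PageSlab() filter
hard-coded:

/* Illustrative only: a slab page between this patch and 7/9. */
static int slab_kpagecount_sketch(void)
{
	int mapcount = -1 + 1;	/* _mapcount still holds the page allocator's -1 */

	if (mapcount < 0)	/* the new clamp; a no-op for this value */
		mapcount = 0;
	return mapcount;	/* 0 -- same as "PageSlab(ppage) -> pcount = 0" gave */
}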

> Otherwise
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> > ---
> >  fs/proc/page.c             | 7 ++-----
> >  include/linux/mm.h         | 8 +++++---
> >  include/linux/page-flags.h | 4 ++--
> >  3 files changed, 9 insertions(+), 10 deletions(-)
> > 
> > diff --git a/fs/proc/page.c b/fs/proc/page.c
> > index 195b077c0fac..9223856c934b 100644
> > --- a/fs/proc/page.c
> > +++ b/fs/proc/page.c
> > @@ -67,7 +67,7 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
> >  		 */
> >  		ppage = pfn_to_online_page(pfn);
> >  
> > -		if (!ppage || PageSlab(ppage) || page_has_type(ppage))
> > +		if (!ppage)
> >  			pcount = 0;
> >  		else
> >  			pcount = page_mapcount(ppage);
> > @@ -124,11 +124,8 @@ u64 stable_page_flags(struct page *page)
> >  
> >  	/*
> >  	 * pseudo flags for the well known (anonymous) memory mapped pages
> > -	 *
> > -	 * Note that page->_mapcount is overloaded in SLAB, so the
> > -	 * simple test in page_mapped() is not enough.
> >  	 */
> > -	if (!PageSlab(page) && page_mapped(page))
> > +	if (page_mapped(page))
> >  		u |= 1 << KPF_MMAP;
> >  	if (PageAnon(page))
> >  		u |= 1 << KPF_ANON;
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 0436b919f1c7..5ff3d687bc6c 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1223,14 +1223,16 @@ static inline void page_mapcount_reset(struct page *page)
> >   * a large folio, it includes the number of times this page is mapped
> >   * as part of that folio.
> >   *
> > - * The result is undefined for pages which cannot be mapped into userspace.
> > - * For example SLAB or special types of pages. See function page_has_type().
> > - * They use this field in struct page differently.
> > + * Will report 0 for pages which cannot be mapped into userspace, eg
> > + * slab, page tables and similar.
> >   */
> >  static inline int page_mapcount(struct page *page)
> >  {
> >  	int mapcount = atomic_read(&page->_mapcount) + 1;
> >  
> > +	/* Handle page_has_type() pages */
> > +	if (mapcount < 0)
> > +		mapcount = 0;
> >  	if (unlikely(PageCompound(page)))
> >  		mapcount += folio_entire_mapcount(page_folio(page));
> >  
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index 8d0e6ce25ca2..5852f967c640 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -971,12 +971,12 @@ static inline bool is_page_hwpoison(struct page *page)
> >   * page_type may be used.  Because it is initialised to -1, we invert the
> >   * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
> >   * __ClearPageFoo *sets* the bit used for PageFoo.  We reserve a few high and
> > - * low bits so that an underflow or overflow of page_mapcount() won't be
> > + * low bits so that an underflow or overflow of _mapcount won't be
> >   * mistaken for a page type value.
> >   */
> >  
> >  #define PAGE_TYPE_BASE	0xf0000000
> > -/* Reserve		0x0000007f to catch underflows of page_mapcount */
> > +/* Reserve		0x0000007f to catch underflows of _mapcount */
> >  #define PAGE_MAPCOUNT_RESERVE	-128
> >  #define PG_buddy	0x00000080
> >  #define PG_offline	0x00000100
>

David Hildenbrand March 22, 2024, 3:04 p.m. UTC | #3
On 21.03.24 15:24, Matthew Wilcox (Oracle) wrote:
> Return 0 for pages which can't be mapped.  This matches how page_mapped()
> works.  It is more convenient for users to not have to filter out
> these pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---


Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 195b077c0fac..9223856c934b 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -67,7 +67,7 @@  static ssize_t kpagecount_read(struct file *file, char __user *buf,
 		 */
 		ppage = pfn_to_online_page(pfn);
 
-		if (!ppage || PageSlab(ppage) || page_has_type(ppage))
+		if (!ppage)
 			pcount = 0;
 		else
 			pcount = page_mapcount(ppage);
@@ -124,11 +124,8 @@  u64 stable_page_flags(struct page *page)
 
 	/*
 	 * pseudo flags for the well known (anonymous) memory mapped pages
-	 *
-	 * Note that page->_mapcount is overloaded in SLAB, so the
-	 * simple test in page_mapped() is not enough.
 	 */
-	if (!PageSlab(page) && page_mapped(page))
+	if (page_mapped(page))
 		u |= 1 << KPF_MMAP;
 	if (PageAnon(page))
 		u |= 1 << KPF_ANON;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0436b919f1c7..5ff3d687bc6c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1223,14 +1223,16 @@  static inline void page_mapcount_reset(struct page *page)
  * a large folio, it includes the number of times this page is mapped
  * as part of that folio.
  *
- * The result is undefined for pages which cannot be mapped into userspace.
- * For example SLAB or special types of pages. See function page_has_type().
- * They use this field in struct page differently.
+ * Will report 0 for pages which cannot be mapped into userspace, eg
+ * slab, page tables and similar.
  */
 static inline int page_mapcount(struct page *page)
 {
 	int mapcount = atomic_read(&page->_mapcount) + 1;
 
+	/* Handle page_has_type() pages */
+	if (mapcount < 0)
+		mapcount = 0;
 	if (unlikely(PageCompound(page)))
 		mapcount += folio_entire_mapcount(page_folio(page));
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8d0e6ce25ca2..5852f967c640 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -971,12 +971,12 @@  static inline bool is_page_hwpoison(struct page *page)
  * page_type may be used.  Because it is initialised to -1, we invert the
  * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
  * __ClearPageFoo *sets* the bit used for PageFoo.  We reserve a few high and
- * low bits so that an underflow or overflow of page_mapcount() won't be
+ * low bits so that an underflow or overflow of _mapcount won't be
  * mistaken for a page type value.
  */
 
 #define PAGE_TYPE_BASE	0xf0000000
-/* Reserve		0x0000007f to catch underflows of page_mapcount */
+/* Reserve		0x0000007f to catch underflows of _mapcount */
 #define PAGE_MAPCOUNT_RESERVE	-128
 #define PG_buddy	0x00000080
 #define PG_offline	0x00000100
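
As a usage illustration of "users don't have to filter out these pages":
a hypothetical PFN walker (the function itself is made up; only
pfn_to_online_page() and page_mapcount() are existing kernel APIs) can
now call page_mapcount() on anything online without special-casing slab
or other typed pages:

/* Hypothetical caller, for illustration only -- not part of this series. */
#include <linux/mm.h>
#include <linux/memory_hotplug.h>	/* pfn_to_online_page() */

static unsigned long count_mapped_pfns(unsigned long start_pfn,
					unsigned long end_pfn)
{
	unsigned long pfn, mapped = 0;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_online_page(pfn);

		/*
		 * No PageSlab()/page_has_type() filtering needed here:
		 * page_mapcount() now returns 0 for such pages itself,
		 * matching page_mapped().
		 */
		if (page && page_mapcount(page) > 0)
			mapped++;
	}
	return mapped;
}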