
[RFC,v2,3/5] rmap: add page_add_file_rmap_range()

Message ID 20230201081737.2330141-4-fengwei.yin@intel.com (mailing list archive)
State New
Series: folio based filemap_map_pages()

Commit Message

Yin Fengwei Feb. 1, 2023, 8:17 a.m. UTC
page_add_file_rmap_range() allows adding pte mappings to a specific
range of a file folio. Compared to the original page_add_file_rmap(),
it batches the __lruvec_stat updates for large folios.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 66 ++++++++++++++++++++++++++++++++++----------
 2 files changed, 54 insertions(+), 14 deletions(-)
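
As a usage sketch of the new interface (an editorial illustration, not
part of the series; the caller map_folio_range() is hypothetical), a
caller that wants to PTE-map part of a large folio passes the page
index within the folio, not a file offset:

/*
 * Hypothetical caller: PTE-map nr pages of a file folio, starting at
 * page index 'start' within the folio.  The pte lock must be held, as
 * page_add_file_rmap_range() requires.
 */
static void map_folio_range(struct folio *folio, unsigned long start,
		unsigned int nr, struct vm_area_struct *vma)
{
	/* A PTE-mapped range, so compound == false. */
	page_add_file_rmap_range(folio, start, nr, vma, false);
}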

Comments

Matthew Wilcox Feb. 1, 2023, 5:32 p.m. UTC | #1
On Wed, Feb 01, 2023 at 04:17:35PM +0800, Yin Fengwei wrote:
>  /**
> - * page_add_file_rmap - add pte mapping to a file page
> - * @page:	the page to add the mapping to
> + * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
> + * @folio:	The folio to add the mapping to
> + * @start:	The first sub page index in folio
> + * @nr_pages:	The number of sub pages from the first page
>   * @vma:	the vm area in which the mapping is added
>   * @compound:	charge the page as compound or small page
>   *
> + * The sub page range of folio is defined by
> + * 	[first_sub_page, first_sub_page + nr_pages)

Lose the "sub" from all of this.  That's legacy thinking; pages are
pages and folios are folios.  "subpages" was from when we were trying
to use the word "page" for both "the allocation" and "the PAGE_SIZE
range of bytes".

> + *
>   * The caller needs to hold the pte lock.
>   */
> -void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
> -		bool compound)
> +void page_add_file_rmap_range(struct folio *folio, unsigned long start,
> +			unsigned int nr_pages, struct vm_area_struct *vma,
> +			bool compound)

I think this function needs to be called folio_add_file_rmap()

I'd like to lose the 'compound' parameter, and base it on nr_pages ==
folio_nr_pages(), but that may be a step too far just now.
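
A sketch of that suggestion (hypothetical: neither the body of
folio_add_file_rmap() below nor this derivation is part of the
series). A range starting at page 0 and covering all of a pmd-mappable
folio would be treated as a compound mapping; anything else stays a
PTE mapping:

void folio_add_file_rmap(struct folio *folio, unsigned long start,
		unsigned int nr_pages, struct vm_area_struct *vma)
{
	/* Infer 'compound' from the range instead of a parameter. */
	bool compound = start == 0 &&
			nr_pages == folio_nr_pages(folio) &&
			folio_test_pmd_mappable(folio);

	page_add_file_rmap_range(folio, start, nr_pages, vma, compound);
}
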
Yin Fengwei Feb. 2, 2023, 2 a.m. UTC | #2
On 2/2/2023 1:32 AM, Matthew Wilcox wrote:
> On Wed, Feb 01, 2023 at 04:17:35PM +0800, Yin Fengwei wrote:
>>  /**
>> - * page_add_file_rmap - add pte mapping to a file page
>> - * @page:	the page to add the mapping to
>> + * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
>> + * @folio:	The folio to add the mapping to
>> + * @start:	The first sub page index in folio
>> + * @nr_pages:	The number of sub pages from the first page
>>   * @vma:	the vm area in which the mapping is added
>>   * @compound:	charge the page as compound or small page
>>   *
>> + * The sub page range of folio is defined by
>> + * 	[first_sub_page, first_sub_page + nr_pages)
> 
> Lose the "sub" from all of this.  That's legacy thinking; pages are
> pages and folios are folios.  "subpages" was from when we were trying
> to use the word "page" for both "the allocation" and "the PAGE_SIZE
> range of bytes".
OK. Will remove sub in next version.

> 
>> + *
>>   * The caller needs to hold the pte lock.
>>   */
>> -void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>> -		bool compound)
>> +void page_add_file_rmap_range(struct folio *folio, unsigned long start,
>> +			unsigned int nr_pages, struct vm_area_struct *vma,
>> +			bool compound)
> 
> I think this function needs to be called folio_add_file_rmap()
Yes. Maybe as a follow-up patch after this series? Let me know if you want
this change in this series.

> 
> I'd like to lose the 'compound' parameter, and base it on nr_pages ==
> folio_nr_pages(), but that may be a step too far just now.
Yes. I had a local change to remove the if (folio_test_pmd_mappable(folio))
test (it's very close to removing 'compound'). I didn't include it in
this series; I'd prefer a follow-up patch. Let me know if you want the
change in this series. Thanks.

Regards
Yin, Fengwei


Patch

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index a4570da03e58..9631a3701504 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -198,6 +198,8 @@  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
 void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void page_add_file_rmap_range(struct folio *, unsigned long start,
+		unsigned int nr_pages, struct vm_area_struct *, bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 15ae24585fc4..090de52e1a9a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1303,31 +1303,44 @@  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 /**
- * page_add_file_rmap - add pte mapping to a file page
- * @page:	the page to add the mapping to
+ * page_add_file_rmap_range - add pte mapping to a sub page range of a folio
+ * @folio:	The folio to add the mapping to
+ * @start:	The first sub page index in folio
+ * @nr_pages:	The number of sub pages from the first page
  * @vma:	the vm area in which the mapping is added
  * @compound:	charge the page as compound or small page
  *
+ * The sub page range of folio is defined by
+ * 	[first_sub_page, first_sub_page + nr_pages)
+ *
  * The caller needs to hold the pte lock.
  */
-void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
-		bool compound)
+void page_add_file_rmap_range(struct folio *folio, unsigned long start,
+			unsigned int nr_pages, struct vm_area_struct *vma,
+			bool compound)
 {
-	struct folio *folio = page_folio(page);
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool first;
+	unsigned int nr = 0, nr_pmdmapped = 0, first;
 
-	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
+	VM_WARN_ON_FOLIO(compound && !folio_test_pmd_mappable(folio), folio);
 
 	/* Is page being mapped by PTE? Is this its first map to be added? */
 	if (likely(!compound)) {
-		first = atomic_inc_and_test(&page->_mapcount);
-		nr = first;
-		if (first && folio_test_large(folio)) {
-			nr = atomic_inc_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
+		struct page *page = folio_page(folio, start);
+
+		nr_pages = min_t(unsigned int, nr_pages,
+					folio_nr_pages(folio) - start);
+
+		do {
+			first = atomic_inc_and_test(&page->_mapcount);
+			if (first && folio_test_large(folio)) {
+				first = atomic_inc_return_relaxed(mapped);
+				first = (first < COMPOUND_MAPPED);
+			}
+
+			if (first)
+				nr++;
+		} while (page++, --nr_pages > 0);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
 
@@ -1356,6 +1369,31 @@  void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * page_add_file_rmap - add pte mapping to a file page
+ * @page:	the page to add the mapping to
+ * @vma:	the vm area in which the mapping is added
+ * @compound:	charge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
+		bool compound)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	VM_WARN_ON_ONCE_PAGE(compound && !PageTransHuge(page), page);
+
+	if (likely(!compound))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	page_add_file_rmap_range(folio, folio_page_idx(folio, page),
+			nr_pages, vma, compound);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page:	page to remove mapping from
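
The first hunk above is truncated before the counter updates, but the
mlock_vma_folio() context line shows the function ends the way the
existing page_add_file_rmap() does. Assuming the elided lines keep
that tail unchanged, the whole range is folded into a single stats
update per counter, roughly:

	if (nr_pmdmapped)
		__lruvec_stat_mod_folio(folio, folio_test_swapbacked(folio) ?
			NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
	if (nr)
		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);

	mlock_vma_folio(folio, vma, compound);

This is the batching the commit message refers to: for a large folio,
NR_FILE_MAPPED is adjusted once per call rather than once per mapped
page.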