[1/3] mm: Correct page_mapped_in_vma() for large folios

Message ID 20240328225831.1765286-2-willy@infradead.org
State New
Series Unify vma_address and vma_pgoff_address

Commit Message

Matthew Wilcox March 28, 2024, 10:58 p.m. UTC
If 'page' is the first page of a large folio then vma_address() will scan
for any page in the entire folio.  This can lead to page_mapped_in_vma()
returning true if some of the tail pages are mapped and the head page is
not, which in turn could cause the memory-failure code to kill a task
unnecessarily.
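
For illustration (not part of the patch): with a 16-page folio at file
index 64, asking page_mapped_in_vma() about the page at index 3 within
the folio should only probe file offset 67, never any of offsets 64-79.
A minimal sketch of that offset arithmetic, assuming the usual folio
helpers from <linux/mm.h>:

	struct folio *folio = page_folio(page);	/* head of the folio */
	/* exact file offset of 'page': folio start + index within folio */
	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
	/* e.g. folio->index == 64, folio_page_idx() == 3  =>  pgoff == 67 */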

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_vma_mapped.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

David Hildenbrand April 2, 2024, 10:07 a.m. UTC
On 28.03.24 23:58, Matthew Wilcox (Oracle) wrote:
> If 'page' is the first page of a large folio then vma_address() will scan
> for any page in the entire folio.  This can lead to page_mapped_in_vma()
> returning true if some of the tail pages are mapped and the head page is
> not, which in turn could cause the memory-failure code to kill a task
> unnecessarily.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> [...]

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 74d2de15fb5e..ac48d6284bad 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -325,6 +325,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
  */
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
+	struct folio *folio = page_folio(page);
+	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
 	struct page_vma_mapped_walk pvmw = {
 		.pfn = page_to_pfn(page),
 		.nr_pages = 1,
@@ -332,7 +334,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 		.flags = PVMW_SYNC,
 	};
 
-	pvmw.address = vma_address(page, vma);
+	pvmw.address = vma_pgoff_address(pgoff, 1, vma);
 	if (pvmw.address == -EFAULT)
 		return 0;
 	if (!page_vma_mapped_walk(&pvmw))
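
With this change, page_mapped_in_vma() reports a mapping only at the
given page's own file offset: as the diff shows, vma_pgoff_address(pgoff,
1, vma) resolves the user address for exactly one page at that offset,
and the existing -EFAULT check bails out when the VMA does not cover it.
A hedged usage sketch (the caller shown is illustrative, loosely modeled
on the memory-failure use case; add_to_kill_list() is a hypothetical
helper, not a kernel API):

	/* Kill the task only if this exact page is mapped in its VMA. */
	if (page_mapped_in_vma(page, vma))
		add_to_kill_list(tsk);	/* hypothetical helper */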