[v3,5/5] mips: use nth_page() in place of direct struct page manipulation.

Message ID: 20230913201248.452081-6-zi.yan@sent.com
State: Handled Elsewhere
Series: Use nth_page() in place of direct struct page manipulation

Commit Message

Zi Yan Sept. 13, 2023, 8:12 p.m. UTC
From: Zi Yan <ziy@nvidia.com>

__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page()
-> move_to_new_folio() -> flush_dcache_folio(). And with hugetlb and
without sparsemem vmemmap, struct page is not guaranteed to be contiguous
beyond a section. Use nth_page() instead.

Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Cc: <stable@vger.kernel.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 arch/mips/mm/cache.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
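
For reference (not part of the patch): nth_page() exists precisely because the memmap is only virtually contiguous when sparsemem vmemmap is enabled. Its definition in include/linux/mm.h at the time of this series (mid-2023) is:

	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
	#else
	#define nth_page(page,n) ((page) + (n))
	#endif

With sparsemem vmemmap (or flatmem) it degenerates to plain pointer arithmetic, so the conversion is a no-op there; without vmemmap it translates through the pfn and stays correct across section boundaries.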

Comments

Zi Yan Sept. 14, 2023, 2:46 a.m. UTC | #1
On 13 Sep 2023, at 16:12, Zi Yan wrote:

> From: Zi Yan <ziy@nvidia.com>
>
> __flush_dcache_pages() is called during hugetlb migration via
> migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page()
> -> move_to_new_folio() -> flush_dcache_folio(). And with hugetlb and
> without sparsemem vmemmap, struct page is not guaranteed to be contiguous
> beyond a section. Use nth_page() instead.
>
> Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
>  arch/mips/mm/cache.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
> index 02042100e267..7f830634dbe7 100644
> --- a/arch/mips/mm/cache.c
> +++ b/arch/mips/mm/cache.c
> @@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
>  	 * get faulted into the tlb (and thus flushed) anyways.
>  	 */
>  	for (i = 0; i < nr; i++) {
> -		addr = (unsigned long)kmap_local_page(page + i);
> +		addr = (unsigned long)kmap_local_page(nth_page(page, i));
>  		flush_data_cache_page(addr);
>  		kunmap_local((void *)addr);
>  	}
> -- 
> 2.40.1

Without the fix, a wrong address might be used for the data cache page flush.
No bug has been reported; the fix comes from code inspection.


--
Best Regards,
Yan, Zi
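
[Editor's note] A toy userspace sketch of why `page + i` can go wrong here, using made-up stand-ins (toy_page, toy_pfn_to_page(), toy_nth_page()) for the kernel's structures; it only models the fact that, without sparsemem vmemmap, each memory section has its own separately allocated memmap chunk:

	/*
	 * Toy model (not kernel code): why `page + i` is unsafe across memory
	 * sections when the memmap is not virtually contiguous.  Every name
	 * below is an illustrative stand-in.
	 */
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define PAGES_PER_SECTION 4	/* tiny section size for the demo */
	#define NR_SECTIONS	  2

	struct toy_page { unsigned long pfn; };

	/* Each section gets its own, separately allocated memmap chunk. */
	static struct toy_page *section_memmap[NR_SECTIONS];

	static struct toy_page *toy_pfn_to_page(unsigned long pfn)
	{
		return &section_memmap[pfn / PAGES_PER_SECTION][pfn % PAGES_PER_SECTION];
	}

	/* Same idea as nth_page(): translate through the pfn instead of
	 * assuming the struct page array is contiguous across sections. */
	static struct toy_page *toy_nth_page(struct toy_page *page, unsigned long n)
	{
		return toy_pfn_to_page(page->pfn + n);
	}

	int main(void)
	{
		for (int s = 0; s < NR_SECTIONS; s++) {
			section_memmap[s] = malloc(sizeof(struct toy_page) * PAGES_PER_SECTION);
			for (int i = 0; i < PAGES_PER_SECTION; i++)
				section_memmap[s][i].pfn = s * PAGES_PER_SECTION + i;
		}

		/* A hugetlb-style range of 8 pages starting at pfn 0 spans both sections. */
		struct toy_page *head = toy_pfn_to_page(0);

		for (unsigned int i = 0; i < NR_SECTIONS * PAGES_PER_SECTION; i++) {
			struct toy_page *right = toy_nth_page(head, i);
			/* What `page + i` would compute: plain pointer arithmetic. */
			uintptr_t naive = (uintptr_t)head + i * sizeof(struct toy_page);

			printf("i=%u  pfn via nth_page=%lu  naive pointer %s\n",
			       i, right->pfn,
			       naive == (uintptr_t)right ? "matches" : "points into the wrong section");
		}
		return 0;
	}

In the kernel, the naive arithmetic does not just print the wrong thing: kmap_local_page() would be handed an unrelated struct page and flush_data_cache_page() a wrong address, which is exactly the failure mode described in the commit message.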

Patch

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 02042100e267..7f830634dbe7 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
 	for (i = 0; i < nr; i++) {
-		addr = (unsigned long)kmap_local_page(page + i);
+		addr = (unsigned long)kmap_local_page(nth_page(page, i));
 		flush_data_cache_page(addr);
 		kunmap_local((void *)addr);
 	}