| Message ID | 20230913201248.452081-2-zi.yan@sent.com (mailing list archive) |
|---|---|
| State | Handled Elsewhere |
| Series | Use nth_page() in place of direct struct page manipulation |
On 13 Sep 2023, at 16:12, Zi Yan wrote:

> From: Zi Yan <ziy@nvidia.com>
>
> When dealing with hugetlb pages, manipulating struct page pointers
> directly can get to wrong struct page, since struct page is not guaranteed
> to be contiguous on SPARSEMEM without VMEMMAP. Use nth_page() to handle
> it properly.
>
> Fixes: 2813b9c02962 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/cma.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index da2967c6a223..2b2494fd6b59 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  	 */
>  	if (page) {
>  		for (i = 0; i < count; i++)
> -			page_kasan_tag_reset(page + i);
> +			page_kasan_tag_reset(nth_page(page, i));
>  	}
>
>  	if (ret && !no_warn) {
> --
> 2.40.1

Without the fix, page_kasan_tag_reset() could reset the tags of the wrong
pages, causing incorrect KASAN results. No related bug is reported. The fix
comes from code inspection.

--
Best Regards,
Yan, Zi
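For background on why plain "page + i" can misfire: on SPARSEMEM kernels built without VMEMMAP, struct page arrays are allocated per memory section, so they are not guaranteed to be virtually contiguous across a section boundary. nth_page() avoids that assumption by translating through the pfn. Roughly paraphrased from include/linux/mm.h around this kernel version (the exact definition may differ between trees):

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
/* struct pages may not be contiguous: go through the pfn */
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#else
/* vmemmap/flatmem keeps the struct page array contiguous */
#define nth_page(page, n)	((page) + (n))
#endif

On the affected configuration, nth_page(page, i) converts the base page to its pfn, adds i, and converts back, so it lands on the correct struct page even when the CMA allocation spans a section boundary; plain pointer arithmetic would walk off the end of one section's memmap into unrelated memory.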
On Wed, 13 Sep 2023 22:16:45 -0400 Zi Yan <ziy@nvidia.com> wrote:

> No related bug is reported. The fix comes from code
> inspection.

OK, thanks. Given that none of these appear urgent, I'll move the series
out of the 6.6-rcX queue (mm-hotfixes-unstable) and into the 6.7-rc1 queue
(mm-unstable), while retaining the cc:stable, to give them more testing
time before landing in mainline and -stable kernels.
diff --git a/mm/cma.c b/mm/cma.c
index da2967c6a223..2b2494fd6b59 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -505,7 +505,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	 */
 	if (page) {
 		for (i = 0; i < count; i++)
-			page_kasan_tag_reset(page + i);
+			page_kasan_tag_reset(nth_page(page, i));
 	}
 
 	if (ret && !no_warn) {
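As a reading aid, the fixed loop again with an editorial comment added; the comment below is not part of the submitted patch:

	if (page) {
		for (i = 0; i < count; i++)
			/*
			 * Resolve the i-th page via its pfn so the KASAN tag
			 * of the correct struct page is reset even when the
			 * allocated range crosses a memory section whose
			 * struct pages are not contiguous.
			 */
			page_kasan_tag_reset(nth_page(page, i));
	}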