| Message ID | 20220526113350.30806-1-linmiaohe@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | mm/page_table_check: fix accessing unmapped ptep |
On 2022/5/26 7:33 PM, Miaohe Lin wrote:
> ptep is unmapped too early, so ptep will be accessed while it's unmapped.
> Fix it by deferring pte_unmap() until page table checking is done.

In the beginning, page_table_check only supported x86_64, so there was
no problem. But then commit 3fee229a8eb9 ("riscv/mm: enable
ARCH_SUPPORTS_PAGE_TABLE_CHECK") added support for riscv-32, and it is
indeed a problem in this case. So:

Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>

>
> Fixes: 80110bbfbba6 ("mm/page_table_check: check entries at pmd levels")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/page_table_check.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
> index 3692bea2ea2c..971c3129b0e3 100644
> --- a/mm/page_table_check.c
> +++ b/mm/page_table_check.c
> @@ -234,11 +234,11 @@ void __page_table_check_pte_clear_range(struct mm_struct *mm,
> 		pte_t *ptep = pte_offset_map(&pmd, addr);
> 		unsigned long i;
>
> -		pte_unmap(ptep);
> 		for (i = 0; i < PTRS_PER_PTE; i++) {
> 			__page_table_check_pte_clear(mm, addr, *ptep);
> 			addr += PAGE_SIZE;
> 			ptep++;
> 		}
> +		pte_unmap(ptep);
> 	}
> }
On Thu, May 26, 2022 at 9:04 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>
> On 2022/5/26 7:33 PM, Miaohe Lin wrote:
> > ptep is unmapped too early, so ptep will be accessed while it's unmapped.
> > Fix it by deferring pte_unmap() until page table checking is done.
>
> In the beginning, page_table_check only supported x86_64, so there
> is no problem. But then the commit 3fee229a8eb9 ("riscv/mm: enable
> ARCH_SUPPORTS_PAGE_TABLE_CHECK") added support for riscv-32, it is
> indeed a problem in this case.

pte_unmap() is needed only with CONFIG_HIGHPTE. I do not see this
config for riscv-32?

Pasha

> So:
>
> Reviewed-by: Qi Zheng <zhengqi.arch@bytedance.com>
>
> [...]
>
> --
> Thanks,
> Qi
On Thu, May 26, 2022 at 7:33 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> ptep is unmapped too early, so ptep will be accessed while it's unmapped.
> Fix it by deferring pte_unmap() until page table checking is done.

I would re-word this as a cleanup. While pte_unmap() is currently
unused, it is still better to call it after we are done with *ptep in
case of future changes in other architectures.

>
> Fixes: 80110bbfbba6 ("mm/page_table_check: check entries at pmd levels")

This is more of a clean-up; there is no existing bug, so no need to
backport to stable. Please remove the above.

> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

With the above changes:

Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>

> ---
>  mm/page_table_check.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> [...]
>
> --
> 2.23.0
On 2022/5/26 9:11 PM, Pasha Tatashin wrote:
> On Thu, May 26, 2022 at 9:04 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>>
>> On 2022/5/26 7:33 PM, Miaohe Lin wrote:
>>> ptep is unmapped too early, so ptep will be accessed while it's unmapped.
>>> Fix it by deferring pte_unmap() until page table checking is done.
>>
>> In the beginning, page_table_check only supported x86_64, so there
>> is no problem. But then the commit 3fee229a8eb9 ("riscv/mm: enable
>> ARCH_SUPPORTS_PAGE_TABLE_CHECK") added support for riscv-32, it is
>> indeed a problem in this case.
>
> pte_unmap() is needed only with CONFIG_HIGHPTE. I do not see this
> config for riscv-32?

My bad, but it's better to call pte_unmap() after the check is done.

> Pasha
>
> [...]

--
Thanks,
Qi
On Thu, May 26, 2022 at 07:33:50PM +0800, Miaohe Lin wrote:
> ptep is unmapped too early, so ptep will be accessed while it's unmapped.
> Fix it by deferring pte_unmap() until page table checking is done.
>
> Fixes: 80110bbfbba6 ("mm/page_table_check: check entries at pmd levels")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/page_table_check.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_table_check.c b/mm/page_table_check.c
> index 3692bea2ea2c..971c3129b0e3 100644
> --- a/mm/page_table_check.c
> +++ b/mm/page_table_check.c
> @@ -234,11 +234,11 @@ void __page_table_check_pte_clear_range(struct mm_struct *mm,
> 		pte_t *ptep = pte_offset_map(&pmd, addr);
> 		unsigned long i;
>
> -		pte_unmap(ptep);
> 		for (i = 0; i < PTRS_PER_PTE; i++) {
> 			__page_table_check_pte_clear(mm, addr, *ptep);
> 			addr += PAGE_SIZE;
> 			ptep++;
> 		}
> +		pte_unmap(ptep);

But ptep was mutated in the loop. So surely this needs to be:

	pte_unmap(ptep - PTRS_PER_PTE);

or you'll be unmapping the wrong page.
On Thu, May 26, 2022 at 9:34 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, May 26, 2022 at 07:33:50PM +0800, Miaohe Lin wrote:
> > ptep is unmapped too early, so ptep will be accessed while it's unmapped.
> > Fix it by deferring pte_unmap() until page table checking is done.
> [...]
>
> But ptep was mutated in the loop. So surely this needs to be:
>
> 	pte_unmap(ptep - PTRS_PER_PTE);
>
> or you'll be unmapping the wrong page.

Right, thank you Matthew.

Miaohe, please store the ptep, or maybe drop this patch entirely.

Thanks,
Pasha
On Thu, 26 May 2022 09:37:43 -0400 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:

> > But ptep was mutated in the loop. So surely this needs to be:
> >
> > 	pte_unmap(ptep - PTRS_PER_PTE);
> >
> > or you'll be unmapping the wrong page.
>
> Right, thank you Matthew.
>
> Miaohe, please store the ptep, or maybe drop this patch entirely.

I think it's best to fix it. I rewrote the changelog as

: ptep is unmapped too early, so ptep could theoretically be accessed while
: it's unmapped. This might become a problem if/when CONFIG_HIGHPTE becomes
: available on riscv.
:
: Fix it by deferring pte_unmap() until page table checking is done.

I'll retain the Fixes:. This doesn't imply cc:stable in MM, and anyone
who backports the original patchset will want to know about this fixup.

And I queued a fixup for the thing Matthew noticed.
> > Miaohe, please store the ptep, or maybe drop this patch entirely.
>
> I think it's best to fix it. I rewrote the changelog as
>
> : ptep is unmapped too early, so ptep could theoretically be accessed while
> : it's unmapped. This might become a problem if/when CONFIG_HIGHPTE becomes
> : available on riscv.
> :
> : Fix it by deferring pte_unmap() until page table checking is done.
>
> I'll retain the Fixes:. This doesn't imply cc:stable in MM, and anyone
> who backports the original patchset will want to know about this fixup.

Makes sense.

> And I queued a fixup for the thing Matthew noticed.

Thank you Andrew.

Pasha
On 2022/5/27 2:31, Pasha Tatashin wrote:
> [...]
>
>> And I queued a fixup for the thing Matthew noticed.
> Thank you Andrew.

Many thanks for all of your comments, review and fixup! :)

> Pasha
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 3692bea2ea2c..971c3129b0e3 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -234,11 +234,11 @@ void __page_table_check_pte_clear_range(struct mm_struct *mm,
 		pte_t *ptep = pte_offset_map(&pmd, addr);
 		unsigned long i;
 
-		pte_unmap(ptep);
 		for (i = 0; i < PTRS_PER_PTE; i++) {
 			__page_table_check_pte_clear(mm, addr, *ptep);
 			addr += PAGE_SIZE;
 			ptep++;
 		}
+		pte_unmap(ptep);
 	}
 }
ptep is unmapped too early, so ptep will be accessed while it's unmapped.
Fix it by deferring pte_unmap() until page table checking is done.

Fixes: 80110bbfbba6 ("mm/page_table_check: check entries at pmd levels")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/page_table_check.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)