Message ID | 20201019074853.50856-1-luoshijie1@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm: mempolicy: fix potential pte_unmap_unlock pte error |
On Mon, Oct 19, 2020 at 03:48:53AM -0400, Shijie Luo wrote:
> When flags in queue_pages_pte_range don't have MPOL_MF_MOVE or MPOL_MF_MOVE_ALL
> bits, code breaks and passing origin pte - 1 to pte_unmap_unlock seems like
> not a good idea.

I think the above is already explained below?

> queue_pages_pte_range can run in MPOL_MF_MOVE_ALL mode which doesn't migrate
> misplaced pages but returns with EIO when encountering such a page. Since
> commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT
> is specified"), an early break on the first pte in the range results in
> pte_unmap_unlock on an underflow pte. This can lead to lockups later on when
> somebody tries to lock the pte resp. the page_table_lock again.
>
> Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
> MPOL_MF_STRICT is specified")
>
> Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Anyway, LGTM:

Reviewed-by: Oscar Salvador <osalvador@suse.de>
On Mon 19-10-20 12:50:34, Oscar Salvador wrote:
> On Mon, Oct 19, 2020 at 03:48:53AM -0400, Shijie Luo wrote:
> > When flags in queue_pages_pte_range don't have MPOL_MF_MOVE or MPOL_MF_MOVE_ALL
> > bits, code breaks and passing origin pte - 1 to pte_unmap_unlock seems like
> > not a good idea.
>
> I think the above is already explained below?

Yes

> > queue_pages_pte_range can run in MPOL_MF_MOVE_ALL mode which doesn't migrate
> > misplaced pages but returns with EIO when encountering such a page. Since
> > commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT
> > is specified"), an early break on the first pte in the range results in
> > pte_unmap_unlock on an underflow pte. This can lead to lockups later on when
> > somebody tries to lock the pte resp. the page_table_lock again.
> >
> > Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
> > MPOL_MF_STRICT is specified")

Cc: stable is due as well. There are even security concerns and I wouldn't be
surprised if this gained a CVE.

> > Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
> > Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Anyway, LGTM:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Acked-by: Michal Hocko <mhocko@suse.com>
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3fde772ef5ef..3ca4898f3f24 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	unsigned long flags = qp->flags;
 	int ret;
 	bool has_unmovable = false;
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	cond_resched();
 
 	if (has_unmovable)