diff mbox series

mm: fix potential pte_unmap_unlock pte error

Message ID 20201015121534.50910-1-luoshijie1@huawei.com (mailing list archive)
State New, archived
Series mm: fix potential pte_unmap_unlock pte error

Commit Message

Shijie Luo Oct. 15, 2020, 12:15 p.m. UTC
When flags don't have the MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits set, the loop
breaks, and passing the original pte - 1 to pte_unmap_unlock is not a good idea.

Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
Signed-off-by: linmiaohe <linmiaohe@huawei.com>
---
 mm/mempolicy.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

Comments

Oscar Salvador Oct. 15, 2020, 12:58 p.m. UTC | #1
On 2020-10-15 14:15, Shijie Luo wrote:
> When flags don't have the MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits set, the
> loop breaks, and passing the original pte - 1 to pte_unmap_unlock is not
> a good idea.
> 
> Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
> Signed-off-by: linmiaohe <linmiaohe@huawei.com>
> ---
>  mm/mempolicy.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 3fde772ef5ef..01f088630d1d 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -571,7 +571,11 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
>  		} else
>  			break;
>  	}
> -	pte_unmap_unlock(pte - 1, ptl);
> +
> +	if (addr >= end)
> +		pte = pte - 1;
> +
> +	pte_unmap_unlock(pte, ptl);

But this is still wrong, isn't it?
Unless I am missing something, this is "only" important under 
CONFIG_HIGHPTE.

We have:

pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);

which under CONFIG_HIGHPTE does a kmap_atomic.

Now, we either break the loop in the first pass because of 
!(MPOL_MF_MOVE | MPOL_MF_MOVE_ALL),
or we keep incrementing pte on every pass.
Either way is wrong, because the pointer kunmap_atomic gets will not be 
the same (since we incremented pte).

Or is the loop meant to be running only once, so pte - 1 will bring us 
back to the original pte?
Shijie Luo Oct. 15, 2020, 1:19 p.m. UTC | #2
On 2020/10/15 20:58, osalvador@suse.de wrote:
> On 2020-10-15 14:15, Shijie Luo wrote:
>> When flags don't have the MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits set, the
>> loop breaks, and passing the original pte - 1 to pte_unmap_unlock is not
>> a good idea.
>>
>> Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
>> Signed-off-by: linmiaohe <linmiaohe@huawei.com>
>> ---
>>  mm/mempolicy.c | 6 +++++-
>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index 3fde772ef5ef..01f088630d1d 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -571,7 +571,11 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>          } else
>>              break;
>>      }
>> -    pte_unmap_unlock(pte - 1, ptl);
>> +
>> +    if (addr >= end)
>> +        pte = pte - 1;
>> +
>> +    pte_unmap_unlock(pte, ptl);
>
> But this is still wrong, isn't it?
> Unless I am missing something, this is "only" important under 
> CONFIG_HIGHPTE.
>
> We have:
>
> pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>
> which under CONFIG_HIGHPTE does a kmap_atomic.
>
> Now, we either break the loop in the first pass because of 
> !(MPOL_MF_MOVE | MPOL_MF_MOVE_ALL),
> or we keep incrementing pte on every pass.
> Either way is wrong, because the pointer kunmap_atomic gets will not 
> be the same (since we incremented pte).
>
> Or is the loop meant to be running only once, so pte - 1 will bring us 
> back to the original pte?
>

Thanks for your reply. If we break out of the loop on the first pass, the
pte pointer will not have been incremented, so pte - 1 equals the original
pte - 1: we only increment the pte pointer when we do not break out of the
loop.
Michal Hocko Oct. 16, 2020, 12:31 p.m. UTC | #3
On Thu 15-10-20 08:15:34, Shijie Luo wrote:
> When flags don't have the MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits set, the
> loop breaks, and passing the original pte - 1 to pte_unmap_unlock is not
> a good idea.

Yes, the code is suspicious to say the least. At least mbind can reach
here with both MPOL_MF_MOVE and MPOL_MF_MOVE_ALL unset, and then the pte
would be pointing outside of the current pmd.

I do not like the fix though. The code is really confusing. Why should
we check for flags in each iteration of the loop when it cannot change?
Also why should we take the ptl lock in the first place when the loop is
broken out of immediately?

I have to admit that I do not fully understand a7f40cfe3b7ad so this
should be carefully evaluated.

If anything, something like the below would be a better fix:

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eddbe4e56c73..7877b36a5a6d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -539,6 +539,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
+	/* A COMMENT GOES HERE. */
+	if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)))
+		return -EIO;
+
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
@@ -554,28 +558,26 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 		if (!queue_pages_required(page, qp))
 			continue;
-		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-			/* MPOL_MF_STRICT must be specified if we get here */
-			if (!vma_migratable(vma)) {
-				has_unmovable = true;
-				break;
-			}
 
-			/*
-			 * Do not abort immediately since there may be
-			 * temporary off LRU pages in the range.  Still
-			 * need migrate other LRU pages.
-			 */
-			if (migrate_page_add(page, qp->pagelist, flags))
-				has_unmovable = true;
-		} else
+		/* MPOL_MF_STRICT must be specified if we get here */
+		if (!vma_migratable(vma)) {
+			has_unmovable = true;
 			break;
+		}
+
+		/*
+		 * Do not abort immediately since there may be
+		 * temporary off LRU pages in the range.  Still
+		 * need migrate other LRU pages.
+		 */
+		if (migrate_page_add(page, qp->pagelist, flags))
+			has_unmovable = true;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
 
 	if (has_unmovable)
 		return 1;
 	return addr != end ? -EIO : 0;
 }
Oscar Salvador Oct. 16, 2020, 12:37 p.m. UTC | #4
On 2020-10-16 14:31, Michal Hocko wrote:
> I do not like the fix though. The code is really confusing. Why should
> we check for flags in each iteration of the loop when it cannot change?
> Also why should we take the ptl lock in the first place when the loop is
> broken out of immediately?

About checking the flags:

https://lore.kernel.org/linux-mm/20190320081643.3c4m5tec5vx653sn@d104.suse.de/#t
Michal Hocko Oct. 16, 2020, 1:11 p.m. UTC | #5
On Fri 16-10-20 14:37:08, osalvador@suse.de wrote:
> On 2020-10-16 14:31, Michal Hocko wrote:
> > I do not like the fix though. The code is really confusing. Why should
> > we check for flags in each iteration of the loop when it cannot change?
> > Also why should we take the ptl lock in the first place when the loop is
> > broken out of immediately?
> 
> About checking the flags:
> 
> https://lore.kernel.org/linux-mm/20190320081643.3c4m5tec5vx653sn@d104.suse.de/#t

This didn't really help. Maybe the code was different back then but
right now the code doesn't make much sense TBH. The only reason to check
inside the loop would be to have a completely unpopulated address range.
Note that MPOL_MF_STRICT is not checked explicitly, and I do not see how
it makes any difference.

Anyway this function would benefit from some uncluttering!
Michal Hocko Oct. 16, 2020, 1:15 p.m. UTC | #6
On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> On Fri 16-10-20 14:37:08, osalvador@suse.de wrote:
> > On 2020-10-16 14:31, Michal Hocko wrote:
> > > I do not like the fix though. The code is really confusing. Why should
> > > we check for flags in each iteration of the loop when it cannot change?
> > > Also why should we take the ptl lock in the first place when the loop is
> > > broken out of immediately?
> > 
> > About checking the flags:
> > 
> > https://lore.kernel.org/linux-mm/20190320081643.3c4m5tec5vx653sn@d104.suse.de/#t
> 
> This didn't really help. Maybe the code was different back then but
> right now the code doesn't make much sense TBH. The only reason to check
> inside the loop would be to have a completely unpopulated address range.
> Note that MPOL_MF_STRICT is not checked explicitly, and I do not see how
> it makes any difference.

Ohh, I have missed queue_pages_required. Let me think some more.
Michal Hocko Oct. 16, 2020, 1:42 p.m. UTC | #7
On Fri 16-10-20 15:15:32, Michal Hocko wrote:
> On Fri 16-10-20 15:11:17, Michal Hocko wrote:
> > On Fri 16-10-20 14:37:08, osalvador@suse.de wrote:
> > > On 2020-10-16 14:31, Michal Hocko wrote:
> > > > I do not like the fix though. The code is really confusing. Why should
> > > > we check for flags in each iteration of the loop when it cannot change?
> > > > Also why should we take the ptl lock in the first place when the loop is
> > > > broken out of immediately?
> > > 
> > > About checking the flags:
> > > 
> > > https://lore.kernel.org/linux-mm/20190320081643.3c4m5tec5vx653sn@d104.suse.de/#t
> > 
> > This didn't really help. Maybe the code was different back then but
> > right now the code doesn't make much sense TBH. The only reason to check
> > inside the loop would be to have a completely unpopulated address range.
> > Note that MPOL_MF_STRICT is not checked explicitly, and I do not see how
> > it makes any difference.
> 
> Ohh, I have missed queue_pages_required. Let me think some more.

OK, I finally managed to convince my Friday brain to think and grasped
what the code is intended to do. The loop is hairy and we want to
prevent spurious EIO when all the pages are on a proper node, so
the check has to be done inside the loop. Anyway, I would find the
following fix less error prone and easier to follow:
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eddbe4e56c73..8cc1fc9c4d13 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	unsigned long flags = qp->flags;
 	int ret;
 	bool has_unmovable = false;
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	cond_resched();
 
 	if (has_unmovable)
Oscar Salvador Oct. 16, 2020, 2:05 p.m. UTC | #8
On 2020-10-16 15:42, Michal Hocko wrote:
> OK, I finally managed to convince my Friday brain to think and grasped
> what the code is intended to do. The loop is hairy and we want to
> prevent spurious EIO when all the pages are on a proper node, so
> the check has to be done inside the loop. Anyway, I would find the
> following fix less error prone and easier to follow:
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index eddbe4e56c73..8cc1fc9c4d13 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
>  	unsigned long flags = qp->flags;
>  	int ret;
>  	bool has_unmovable = false;
> -	pte_t *pte;
> +	pte_t *pte, *mapped_pte;
>  	spinlock_t *ptl;
> 
>  	ptl = pmd_trans_huge_lock(pmd, vma);
> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
>  	if (pmd_trans_unstable(pmd))
>  		return 0;
> 
> -	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>  	for (; addr != end; pte++, addr += PAGE_SIZE) {
>  		if (!pte_present(*pte))
>  			continue;
> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
> unsigned long addr,
>  		} else
>  			break;
>  	}
> -	pte_unmap_unlock(pte - 1, ptl);
> +	pte_unmap_unlock(mapped_pte, ptl);
>  	cond_resched();
> 
>  	if (has_unmovable)

It is more clear to grasp, definitely.
Shijie Luo Oct. 17, 2020, 1:55 a.m. UTC | #9
On 2020/10/16 22:05, osalvador@suse.de wrote:
> On 2020-10-16 15:42, Michal Hocko wrote:
>> OK, I finally managed to convince my Friday brain to think and grasped
>> what the code is intended to do. The loop is hairy and we want to
>> prevent spurious EIO when all the pages are on a proper node, so
>> the check has to be done inside the loop. Anyway, I would find the
>> following fix less error prone and easier to follow:
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index eddbe4e56c73..8cc1fc9c4d13 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>      unsigned long flags = qp->flags;
>>      int ret;
>>      bool has_unmovable = false;
>> -    pte_t *pte;
>> +    pte_t *pte, *mapped_pte;
>>      spinlock_t *ptl;
>>
>>      ptl = pmd_trans_huge_lock(pmd, vma);
>> @@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>      if (pmd_trans_unstable(pmd))
>>          return 0;
>>
>> -    pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>> +    mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>>      for (; addr != end; pte++, addr += PAGE_SIZE) {
>>          if (!pte_present(*pte))
>>              continue;
>> @@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd,
>> unsigned long addr,
>>          } else
>>              break;
>>      }
>> -    pte_unmap_unlock(pte - 1, ptl);
>> +    pte_unmap_unlock(mapped_pte, ptl);
>>      cond_resched();
>>
>>      if (has_unmovable)
>
> It is more clear to grasp, definitely.
Yeah, this one is more comprehensible. I'll send a v2 patch, thank you.

Patch

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3fde772ef5ef..01f088630d1d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -571,7 +571,11 @@  static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+
+	if (addr >= end)
+		pte = pte - 1;
+
+	pte_unmap_unlock(pte, ptl);
 	cond_resched();
 
 	if (has_unmovable)