
fix judgment error in shmem_is_huge()

Message ID 20210908102648.2326917-2-liuyuntao10@huawei.com (mailing list archive)
State New
Series: fix judgment error in shmem_is_huge()

Commit Message

liuyuntao Sept. 8, 2021, 10:26 a.m. UTC
In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded
up correctly. When the page index points to the first page of a huge
page, round_up() does not bring it to the end of that huge page, but
leaves it at the end of the previous one.

An example:
HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).
After allocating a 3000 KB buffer, I access it at offset 2050 KB.
In shmem_is_huge(), the corresponding page index happens to be 512.
After being rounded up by HPAGE_PMD_NR, it is still 512, which is
smaller than i_size in pages (750), so shmem_is_huge() returns true.
As a result, my buffer takes an additional huge page, and that
shouldn't happen when shmem_enabled is set to within_size.
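
To make the arithmetic concrete, here is a minimal userspace sketch
(an illustration, not the kernel code; the round_up() macro below
mirrors the kernel's power-of-two version, and the constants come
from the example above):

#include <stdio.h>

/* Mirrors the kernel's round_up() for power-of-two alignment. */
#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	unsigned long index = 512;		/* page index of the access at 2050 KB */
	unsigned long i_size_pages = 750;	/* 3000 KB file in 4 KB pages */

	/* Current code: 512 rounds to 512, and 750 >= 512, so a huge page is used. */
	printf("old: index %lu vs %lu pages -> huge\n",
	       round_up(index, 512), i_size_pages);

	/* Fixed code: 513 rounds to 1024, and 750 < 1024, so no huge page. */
	printf("new: index %lu vs %lu pages -> not huge\n",
	       round_up(index + 1, 512), i_size_pages);

	return 0;
}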

Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
---
 mm/shmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Kirill A. Shutemov Sept. 8, 2021, 2:58 p.m. UTC | #1
On Wed, Sep 08, 2021 at 06:26:48PM +0800, Liu Yuntao wrote:
> In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded
> up correctly. When the page index points to the first page of a huge
> page, round_up() does not bring it to the end of that huge page, but
> leaves it at the end of the previous one.
> 
> An example:
> HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).
> After allocating a 3000 KB buffer, I access it at offset 2050 KB.
> In shmem_is_huge(), the corresponding page index happens to be 512.
> After being rounded up by HPAGE_PMD_NR, it is still 512, which is
> smaller than i_size in pages (750), so shmem_is_huge() returns true.
> As a result, my buffer takes an additional huge page, and that
> shouldn't happen when shmem_enabled is set to within_size.
> 
> Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
> Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
> ---
>  mm/shmem.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 88742953532c..5747572859d1 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -490,7 +490,7 @@ bool shmem_is_huge(struct vm_area_struct *vma,
>  	case SHMEM_HUGE_ALWAYS:
>  		return true;
>  	case SHMEM_HUGE_WITHIN_SIZE:
> -		index = round_up(index, HPAGE_PMD_NR);
> +		index = round_up(index + 1, HPAGE_PMD_NR);
>  		i_size = round_up(i_size_read(inode), PAGE_SIZE);
>  		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)

With the change, the condition can be simplified to

		if (i_size >> PAGE_SHIFT >= index)

right?

>  			return true;
> -- 
> 2.23.0
>
liuyuntao Sept. 9, 2021, 2:39 a.m. UTC | #2
On Wed, 8 Sep 2021 17:58:44 +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 08, 2021 at 06:26:48PM +0800, Liu Yuntao wrote:
> > In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded
> > up correctly. When the page index points to the first page of a huge
> > page, round_up() does not bring it to the end of that huge page, but
> > leaves it at the end of the previous one.
> > 
> > An example:
> > HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).
> > After allocating a 3000 KB buffer, I access it at offset 2050 KB.
> > In shmem_is_huge(), the corresponding page index happens to be 512.
> > After being rounded up by HPAGE_PMD_NR, it is still 512, which is
> > smaller than i_size in pages (750), so shmem_is_huge() returns true.
> > As a result, my buffer takes an additional huge page, and that
> > shouldn't happen when shmem_enabled is set to within_size.
> > 
> > Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
> > Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
> > ---
> >  mm/shmem.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 88742953532c..5747572859d1 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -490,7 +490,7 @@ bool shmem_is_huge(struct vm_area_struct *vma,
> >  	case SHMEM_HUGE_ALWAYS:
> >  		return true;
> >  	case SHMEM_HUGE_WITHIN_SIZE:
> > -		index = round_up(index, HPAGE_PMD_NR);
> > +		index = round_up(index + 1, HPAGE_PMD_NR);
> >  		i_size = round_up(i_size_read(inode), PAGE_SIZE);
> >  		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
> 
> With the change, the condition can be simplified to
> 
> 		if (i_size >> PAGE_SHIFT >= index)
> 
> right?

Yes, will add it.

> 
> >  			return true;
> > -- 
> > 2.23.0
> > 
> 
> -- 
>  Kirill A. Shutemov

Patch

diff --git a/mm/shmem.c b/mm/shmem.c
index 88742953532c..5747572859d1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -490,7 +490,7 @@ bool shmem_is_huge(struct vm_area_struct *vma,
 	case SHMEM_HUGE_ALWAYS:
 		return true;
 	case SHMEM_HUGE_WITHIN_SIZE:
-		index = round_up(index, HPAGE_PMD_NR);
+		index = round_up(index + 1, HPAGE_PMD_NR);
 		i_size = round_up(i_size_read(inode), PAGE_SIZE);
 		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
 			return true;
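
For reference, a minimal userspace model of the fixed WITHIN_SIZE
branch (a sketch assuming 4 KB pages, not the kernel code). It also
shows why the simplification suggested in the thread is safe: after
the round_up(), index is at least HPAGE_PMD_NR, so
(i_size >> PAGE_SHIFT) >= index already implies
i_size >= HPAGE_PMD_SIZE.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12			/* assumes 4 KB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define HPAGE_PMD_NR	512UL			/* 2 MB / 4 KB */
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

/* Model of the fixed SHMEM_HUGE_WITHIN_SIZE case. */
static bool within_size_huge(unsigned long index, unsigned long i_size)
{
	index = round_up(index + 1, HPAGE_PMD_NR);
	i_size = round_up(i_size, PAGE_SIZE);
	/*
	 * index is now a multiple of HPAGE_PMD_NR and at least
	 * HPAGE_PMD_NR, so this single comparison also guarantees
	 * i_size >= HPAGE_PMD_SIZE; no separate size check needed.
	 */
	return (i_size >> PAGE_SHIFT) >= index;
}

int main(void)
{
	/* 3000 KB file, access at page index 512: no longer huge. */
	printf("%d\n", within_size_huge(512, 3000 * 1024UL));
	/* 4 MB file, access at page index 512: the huge page fits. */
	printf("%d\n", within_size_huge(512, 4UL << 20));
	return 0;
}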