
mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.

Message ID 20180824063314.21981-1-aneesh.kumar@linux.ibm.com (mailing list archive)
State New, archived
Series mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.

Commit Message

Aneesh Kumar K.V Aug. 24, 2018, 6:33 a.m. UTC
When scanning for movable pages, filter out hugetlb pages if hugepage migration
is not supported. Without this we hit an infinite loop in __offline_pages where
we do
	pfn = scan_movable_pages(start_pfn, end_pfn);
	if (pfn) { /* We have movable pages */
		ret = do_migrate_range(pfn, end_pfn);
		goto repeat;
	}

We support hugetlb migration only if the hugetlb pages are at the PMD level.
Here we just check for the kernel config. The gigantic page size check is done
in page_huge_active.
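
For reference, a minimal sketch of hugepage_migration_supported() as it looked
around this kernel version (simplified from include/linux/hugetlb.h; not the
exact source). The CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION #ifdef is the kernel
config check mentioned above:

	/*
	 * Sketch only: migration is reported as supported when the
	 * architecture selects CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
	 * and the hstate backs PMD- or PGD-sized pages.
	 */
	static inline bool hugepage_migration_supported(struct hstate *h)
	{
	#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
		return huge_page_shift(h) == PMD_SHIFT ||
		       huge_page_shift(h) == PGDIR_SHIFT;
	#else
		return false;
	#endif
	}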

Acked-by: Michal Hocko <mhocko@suse.com>
Reported-by: Haren Myneni <haren@linux.vnet.ibm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 mm/memory_hotplug.c | 3 ++-
 mm/page_alloc.c     | 4 ++++
 2 files changed, 6 insertions(+), 1 deletion(-)

Comments

Michal Hocko Aug. 24, 2018, 7:58 a.m. UTC | #1
On Fri 24-08-18 12:03:14, Aneesh Kumar K.V wrote:
> When scanning for movable pages, filter out hugetlb pages if hugepage migration
> is not supported. Without this we hit an infinite loop in __offline_pages where
> we do
> 	pfn = scan_movable_pages(start_pfn, end_pfn);
> 	if (pfn) { /* We have movable pages */
> 		ret = do_migrate_range(pfn, end_pfn);
> 		goto repeat;
> 	}
> 
> We support hugetlb migration only if the hugetlb pages are at the PMD level.
> Here we just check for the kernel config. The gigantic page size check is done
> in page_huge_active.

Well, this is a bit misleading. I would say that

Fix this by checking hugepage_migration_supported both in has_unmovable_pages,
which is the primary backoff mechanism for page offlining, and, for consistency,
also in scan_movable_pages, because it doesn't make any sense to return the pfn
of a non-migratable huge page.
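
For context, a rough sketch of where that backoff sits (simplified from
__offline_pages() in mm/memory_hotplug.c of this era; not the exact source):

	/*
	 * Sketch only: __offline_pages() first tries to isolate the
	 * range; set_migratetype_isolate() consults has_unmovable_pages(),
	 * so with this patch an unmigratable hugetlb page now fails the
	 * offline early with -EBUSY instead of feeding the
	 * scan/migrate/repeat loop quoted in the changelog forever.
	 */
	ret = start_isolate_page_range(start_pfn, end_pfn,
				       MIGRATE_MOVABLE, true);
	if (ret)
		return ret;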

> Acked-by: Michal Hocko <mhocko@suse.com>
> Reported-by: Haren Myneni <haren@linux.vnet.ibm.com>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

I would add
Fixes: 72b39cfc4d75 ("mm, memory_hotplug: do not fail offlining too early")

Not because the bug was introduced by that commit, but rather because the
issue was latent before it.

My Acked-by still holds.

> ---
>  mm/memory_hotplug.c | 3 ++-
>  mm/page_alloc.c     | 4 ++++
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9eea6e809a4e..38d94b703e9d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
>  			if (__PageMovable(page))
>  				return pfn;
>  			if (PageHuge(page)) {
> -				if (page_huge_active(page))
> +				if (hugepage_migration_supported(page_hstate(page)) &&
> +				    page_huge_active(page))
>  					return pfn;
>  				else
>  					pfn = round_up(pfn + 1,
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c677c1506d73..b8d91f59b836 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7709,6 +7709,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  		 * handle each tail page individually in migration.
>  		 */
>  		if (PageHuge(page)) {
> +
> +			if (!hugepage_migration_supported(page_hstate(page)))
> +				goto unmovable;
> +
>  			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
>  			continue;
>  		}
> -- 
> 2.17.1
Naoya Horiguchi Aug. 24, 2018, 8:46 a.m. UTC | #2
On Fri, Aug 24, 2018 at 09:58:15AM +0200, Michal Hocko wrote:
> On Fri 24-08-18 12:03:14, Aneesh Kumar K.V wrote:
> > When scanning for movable pages, filter out hugetlb pages if hugepage migration
> > is not supported. Without this we hit an infinite loop in __offline_pages where
> > we do
> > 	pfn = scan_movable_pages(start_pfn, end_pfn);
> > 	if (pfn) { /* We have movable pages */
> > 		ret = do_migrate_range(pfn, end_pfn);
> > 		goto repeat;
> > 	}
> > 
> > We support hugetlb migration only if the hugetlb pages are at the PMD level.
> > Here we just check for the kernel config. The gigantic page size check is done
> > in page_huge_active.
> 
> Well, this is a bit misleading. I would say that
> 
> Fix this by checking hugepage_migration_supported both in has_unmovable_pages,
> which is the primary backoff mechanism for page offlining, and, for consistency,
> also in scan_movable_pages, because it doesn't make any sense to return the pfn
> of a non-migratable huge page.
> 
> > Acked-by: Michal Hocko <mhocko@suse.com>
> > Reported-by: Haren Myneni <haren@linux.vnet.ibm.com>
> > Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> > Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> 
> I would add
> Fixes: 72b39cfc4d75 ("mm, memory_hotplug: do not fail offlining too early")
> 
> Not because the bug was introduced by that commit, but rather because the
> issue was latent before it.
> 
> My Acked-by still holds.

Looks good to me (with Michal's update to the description).

Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

> 
> > ---
> >  mm/memory_hotplug.c | 3 ++-
> >  mm/page_alloc.c     | 4 ++++
> >  2 files changed, 6 insertions(+), 1 deletion(-)
> > 
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 9eea6e809a4e..38d94b703e9d 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> >  			if (__PageMovable(page))
> >  				return pfn;
> >  			if (PageHuge(page)) {
> > -				if (page_huge_active(page))
> > +				if (hugepage_migration_supported(page_hstate(page)) &&
> > +				    page_huge_active(page))
> >  					return pfn;
> >  				else
> >  					pfn = round_up(pfn + 1,
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c677c1506d73..b8d91f59b836 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -7709,6 +7709,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> >  		 * handle each tail page individually in migration.
> >  		 */
> >  		if (PageHuge(page)) {
> > +
> > +			if (!hugepage_migration_supported(page_hstate(page)))
> > +				goto unmovable;
> > +
> >  			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
> >  			continue;
> >  		}
> > -- 
> > 2.17.1
> 
> -- 
> Michal Hocko
> SUSE Labs
>

Patch

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9eea6e809a4e..38d94b703e9d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 			if (__PageMovable(page))
 				return pfn;
 			if (PageHuge(page)) {
-				if (page_huge_active(page))
+				if (hugepage_migration_supported(page_hstate(page)) &&
+				    page_huge_active(page))
 					return pfn;
 				else
 					pfn = round_up(pfn + 1,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c677c1506d73..b8d91f59b836 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7709,6 +7709,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
+
+			if (!hugepage_migration_supported(page_hstate(page)))
+				goto unmovable;
+
 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
 			continue;
 		}
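
As a worked example of the skip arithmetic in both hunks (illustration only,
assuming 4K base pages and a 2MB hugetlb page, i.e. compound_order(page) == 9):

	/* Illustration only; mirrors the skip arithmetic in the patch. */
	unsigned long span = 1UL << compound_order(page);	/* 512 pfns */

	/* scan_movable_pages(): jump to the first pfn past the huge page */
	pfn = round_up(pfn + 1, span);

	/*
	 * has_unmovable_pages(): same jump, minus one because the
	 * enclosing for loop increments iter before the next check.
	 */
	iter = round_up(iter + 1, span) - 1;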