
[1/3] hugetlb: simplify prep_compound_gigantic_page ref count racing code

Message ID 20210710002441.167759-2-mike.kravetz@oracle.com (mailing list archive)
State New
Series hugetlb: fix potential ref counting races

Commit Message

Mike Kravetz July 10, 2021, 12:24 a.m. UTC
Code in prep_compound_gigantic_page waits for an RCU grace period if it
notices a temporarily inflated ref count on a tail page.  This was added
to handle an identified potential race with speculative page cache
references, which can only last for an RCU grace period.  It is overly
complicated, as this situation is VERY unlikely to ever happen.  Instead,
just quickly return an error.

Also, print a warning only in prep_compound_gigantic_page instead of in
multiple callers.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

Comments

Muchun Song July 13, 2021, 6:31 a.m. UTC | #1
On Sat, Jul 10, 2021 at 8:25 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> Code in prep_compound_gigantic_page waits for a rcu grace period if it
> notices a temporarily inflated ref count on a tail page.  This was due
> to the identified potential race with speculative page cache references
> which could only last for a rcu grace period.  This is overly complicated
> as this situation is VERY unlikely to ever happen.  Instead, just quickly
> return an error.

Right. The race window is very, very small. IMHO, not complicating
the code for it is the right thing to do.

>
> Also, only print a warning in prep_compound_gigantic_page instead of
> multiple callers.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb.c | 15 +++++----------
>  1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 924553aa8f78..e59ebba63da7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1657,16 +1657,12 @@ static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
>                  * cache adding could take a ref on a 'to be' tail page.
>                  * We need to respect any increased ref count, and only set
>                  * the ref count to zero if count is currently 1.  If count
> -                * is not 1, we call synchronize_rcu in the hope that a rcu
> -                * grace period will cause ref count to drop and then retry.
> -                * If count is still inflated on retry we return an error and
> -                * must discard the pages.
> +                * is not 1, we return an error and caller must discard the
> +                * pages.

Shall we add more details about why we discard the pages?

Thanks.

>                  */
>                 if (!page_ref_freeze(p, 1)) {
> -                       pr_info("HugeTLB unexpected inflated ref count on freshly allocated page\n");
> -                       synchronize_rcu();
> -                       if (!page_ref_freeze(p, 1))
> -                               goto out_error;
> +                       pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
> +                       goto out_error;
>                 }
>                 set_page_count(p, 0);
>                 set_compound_head(p, page);
> @@ -1830,7 +1826,6 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
>                                 retry = true;
>                                 goto retry;
>                         }
> -                       pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
>                         return NULL;
>                 }
>         }
> @@ -2828,8 +2823,8 @@ static void __init gather_bootmem_prealloc(void)
>                         prep_new_huge_page(h, page, page_to_nid(page));
>                         put_page(page); /* add to the hugepage allocator */
>                 } else {
> +                       /* VERY unlikely inflated ref count on a tail page */
>                         free_gigantic_page(page, huge_page_order(h));
> -                       pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
>                 }
>
>                 /*
> --
> 2.31.1
>

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 924553aa8f78..e59ebba63da7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1657,16 +1657,12 @@  static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
 		 * cache adding could take a ref on a 'to be' tail page.
 		 * We need to respect any increased ref count, and only set
 		 * the ref count to zero if count is currently 1.  If count
-		 * is not 1, we call synchronize_rcu in the hope that a rcu
-		 * grace period will cause ref count to drop and then retry.
-		 * If count is still inflated on retry we return an error and
-		 * must discard the pages.
+		 * is not 1, we return an error and caller must discard the
+		 * pages.
 		 */
 		if (!page_ref_freeze(p, 1)) {
-			pr_info("HugeTLB unexpected inflated ref count on freshly allocated page\n");
-			synchronize_rcu();
-			if (!page_ref_freeze(p, 1))
-				goto out_error;
+			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
+			goto out_error;
 		}
 		set_page_count(p, 0);
 		set_compound_head(p, page);
@@ -1830,7 +1826,6 @@  static struct page *alloc_fresh_huge_page(struct hstate *h,
 				retry = true;
 				goto retry;
 			}
-			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
 			return NULL;
 		}
 	}
@@ -2828,8 +2823,8 @@  static void __init gather_bootmem_prealloc(void)
 			prep_new_huge_page(h, page, page_to_nid(page));
 			put_page(page); /* add to the hugepage allocator */
 		} else {
+			/* VERY unlikely inflated ref count on a tail page */
 			free_gigantic_page(page, huge_page_order(h));
-			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
 		}
 
 		/*