
[v2,04/12] mm/hugetlb: use provided ac->gfp_mask for allocation

Message ID 1590561903-13186-5-git-send-email-iamjoonsoo.kim@lge.com (mailing list archive)
State New, archived
Series clean-up the migration target allocation functions

Commit Message

Joonsoo Kim May 27, 2020, 6:44 a.m. UTC
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

The gfp_mask handling in alloc_huge_page_(node|nodemask) is slightly
changed from ASSIGN to OR. This is safe since the callers of these
functions don't pass any extra gfp_mask other than htlb_alloc_mask().

This is a preparation step for the following patches.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/hugetlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
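
To illustrate what the ASSIGN-to-OR change is preparing for, here is a minimal, hypothetical caller sketch; only the .nid and .gfp_mask fields and the alloc_huge_page_nodemask() signature are taken from this patch, everything else (the function name, the particular modifier flag) is assumed for illustration:

/*
 * Hypothetical caller of the kind later patches in the series add.
 * With the old ASSIGN, the __GFP_NORETRY set below would be silently
 * overwritten by htlb_alloc_mask(h); with OR it is preserved and
 * merged into the final allocation mask.
 */
static struct page *example_migration_target(struct hstate *h)
{
	struct alloc_control ac = {
		.nid = NUMA_NO_NODE,
		/* caller-supplied modifier; kept only with the OR variant */
		.gfp_mask = __GFP_NORETRY,
	};

	return alloc_huge_page_nodemask(h, &ac);
}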

Comments

Michal Hocko June 9, 2020, 1:26 p.m. UTC | #1
On Wed 27-05-20 15:44:55, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> The gfp_mask handling in alloc_huge_page_(node|nodemask) is slightly
> changed from ASSIGN to OR. This is safe since the callers of these
> functions don't pass any extra gfp_mask other than htlb_alloc_mask().
> 
> This is a preparation step for the following patches.

This patch on its own doesn't make much sense to me. Should it be folded
in the patch which uses that?

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  mm/hugetlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 453ba94..dabe460 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1985,7 +1985,7 @@ struct page *alloc_huge_page_node(struct hstate *h,
>  {
>  	struct page *page = NULL;
>  
> -	ac->gfp_mask = htlb_alloc_mask(h);
> +	ac->gfp_mask |= htlb_alloc_mask(h);
>  	if (ac->nid != NUMA_NO_NODE)
>  		ac->gfp_mask |= __GFP_THISNODE;
>  
> @@ -2004,7 +2004,7 @@ struct page *alloc_huge_page_node(struct hstate *h,
>  struct page *alloc_huge_page_nodemask(struct hstate *h,
>  				struct alloc_control *ac)
>  {
> -	ac->gfp_mask = htlb_alloc_mask(h);
> +	ac->gfp_mask |= htlb_alloc_mask(h);
>  
>  	spin_lock(&hugetlb_lock);
>  	if (h->free_huge_pages - h->resv_huge_pages > 0) {
> -- 
> 2.7.4
>
Joonsoo Kim June 10, 2020, 3:08 a.m. UTC | #2
On Tue, Jun 9, 2020 at 10:26 PM, Michal Hocko <mhocko@kernel.org> wrote:
>
> On Wed 27-05-20 15:44:55, Joonsoo Kim wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >
> > The gfp_mask handling in alloc_huge_page_(node|nodemask) is slightly
> > changed from ASSIGN to OR. This is safe since the callers of these
> > functions don't pass any extra gfp_mask other than htlb_alloc_mask().
> >
> > This is a preparation step for the following patches.
>
> This patch on its own doesn't make much sense to me. Should it be folded
> in the patch which uses that?

Splitting this patch was requested by Roman. :)

Anyway, the next version will not have this patch since many things will
be changed.

Thanks.

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 453ba94..dabe460 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1985,7 +1985,7 @@ struct page *alloc_huge_page_node(struct hstate *h,
 {
 	struct page *page = NULL;
 
-	ac->gfp_mask = htlb_alloc_mask(h);
+	ac->gfp_mask |= htlb_alloc_mask(h);
 	if (ac->nid != NUMA_NO_NODE)
 		ac->gfp_mask |= __GFP_THISNODE;
 
@@ -2004,7 +2004,7 @@ struct page *alloc_huge_page_node(struct hstate *h,
 struct page *alloc_huge_page_nodemask(struct hstate *h,
 				struct alloc_control *ac)
 {
-	ac->gfp_mask = htlb_alloc_mask(h);
+	ac->gfp_mask |= htlb_alloc_mask(h);
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
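
A brief note on why the OR is benign for existing callers, as the changelog argues; the sketch below just spells out the mask arithmetic under the changelog's stated assumption about what callers currently pass:

/*
 * Why OR is equivalent to the old ASSIGN for existing callers
 * (assumption, per the changelog: they either leave gfp_mask at 0,
 * i.e. a zero-initialised alloc_control, or pass htlb_alloc_mask(h)
 * itself):
 *
 *	0                  | htlb_alloc_mask(h) == htlb_alloc_mask(h)
 *	htlb_alloc_mask(h) | htlb_alloc_mask(h) == htlb_alloc_mask(h)
 *
 * Only a caller that sets additional flags sees different behaviour,
 * which is exactly what the following patches in the series rely on.
 */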