From patchwork Tue Jun 23 06:13:43 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11619825
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 3/8] mm/hugetlb: unify migration callbacks
Date: Tue, 23 Jun 2020 15:13:43 +0900
Message-Id: <1592892828-1934-4-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the
__GFP_THISNODE handling. This patch adds a gfp_mask argument to
alloc_huge_page_nodemask() and replaces the call sites of
alloc_huge_page_node() with calls to
alloc_huge_page_nodemask(..., __GFP_THISNODE).

It is safe to remove the node id check in alloc_huge_page_node() since
no caller passes NUMA_NO_NODE as the node id.
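For illustration only (not part of the diff below; h, nid and page are
hypothetical caller variables), a former alloc_huge_page_node() user maps
onto the unified callback like this:

	/* Before: node-bound allocation through the dedicated callback. */
	page = alloc_huge_page_node(h, nid);

	/*
	 * After: the same semantics through the unified callback.
	 * __GFP_THISNODE restricts the allocation to the preferred node,
	 * and a NULL nodemask imposes no further restriction. The callee
	 * ORs in htlb_alloc_mask(h) itself, so callers pass only the
	 * extra flags (or 0 for none).
	 */
	page = alloc_huge_page_nodemask(h, nid, NULL, __GFP_THISNODE);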
Signed-off-by: Joonsoo Kim
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h | 11 +++--------
 mm/hugetlb.c            | 26 +++-----------------------
 mm/mempolicy.c          |  9 +++++----
 mm/migrate.c            |  5 +++--
 4 files changed, 14 insertions(+), 37 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 50650d0..8a8b755 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -504,9 +504,8 @@ struct huge_bootmem_page {
 
 struct page *alloc_huge_page(struct vm_area_struct *vma,
 				unsigned long addr, int avoid_reserve);
-struct page *alloc_huge_page_node(struct hstate *h, int nid);
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-				nodemask_t *nmask);
+				nodemask_t *nmask, gfp_t gfp_mask);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 				unsigned long address);
 struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
@@ -759,13 +758,9 @@ static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *alloc_huge_page_node(struct hstate *h, int nid)
-{
-	return NULL;
-}
-
 static inline struct page *
-alloc_huge_page_nodemask(struct hstate *h, int preferred_nid, nodemask_t *nmask)
+alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
+			nodemask_t *nmask, gfp_t gfp_mask)
 {
 	return NULL;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d54bb7e..bd408f2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1979,30 +1979,10 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 }
 
 /* page migration callback function */
-struct page *alloc_huge_page_node(struct hstate *h, int nid)
-{
-	gfp_t gfp_mask = htlb_alloc_mask(h);
-	struct page *page = NULL;
-
-	if (nid != NUMA_NO_NODE)
-		gfp_mask |= __GFP_THISNODE;
-
-	spin_lock(&hugetlb_lock);
-	if (h->free_huge_pages - h->resv_huge_pages > 0)
-		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
-	spin_unlock(&hugetlb_lock);
-
-	if (!page)
-		page = alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
-
-	return page;
-}
-
-/* page migration callback function */
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-				nodemask_t *nmask)
+				nodemask_t *nmask, gfp_t gfp_mask)
 {
-	gfp_t gfp_mask = htlb_alloc_mask(h);
+	gfp_mask |= htlb_alloc_mask(h);
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
@@ -2031,7 +2011,7 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 
 	gfp_mask = htlb_alloc_mask(h);
 	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = alloc_huge_page_nodemask(h, node, nodemask);
+	page = alloc_huge_page_nodemask(h, node, nodemask, 0);
 	mpol_cond_put(mpol);
 
 	return page;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b9e85d4..f21cff5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1068,10 +1068,11 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 /* page allocation callback for NUMA node migration */
 struct page *alloc_new_node_page(struct page *page, unsigned long node)
 {
-	if (PageHuge(page))
-		return alloc_huge_page_node(page_hstate(compound_head(page)),
-					node);
-	else if (PageTransHuge(page)) {
+	if (PageHuge(page)) {
+		return alloc_huge_page_nodemask(
+			page_hstate(compound_head(page)), node,
+			NULL, __GFP_THISNODE);
+	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
 		thp = alloc_pages_node(node,
diff --git a/mm/migrate.c b/mm/migrate.c
index 6b5c75b..6ca9f0c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1543,10 +1543,11 @@ struct page *new_page_nodemask(struct page *page,
 	unsigned int order = 0;
 	struct page *new_page = NULL;
 
-	if (PageHuge(page))
+	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
 				page_hstate(compound_head(page)),
-				preferred_nid, nodemask);
+				preferred_nid, nodemask, 0);
+	}
 
 	if (PageTransHuge(page)) {
 		gfp_mask |= GFP_TRANSHUGE;