From patchwork Tue Jun 23 06:13:41 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11619821
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 1/8] mm/page_isolation: prefer the node of the source page
Date: Tue, 23 Jun 2020 15:13:41 +0900
Message-Id: <1592892828-1934-2-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

For locality, it is better to migrate the page to the node of the source
page rather than to the node of the CPU the caller happens to be running on.
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
 mm/page_isolation.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index f6d07c5..aec26d9 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -309,5 +309,7 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private)
 {
-	return new_page_nodemask(page, numa_node_id(), &node_states[N_MEMORY]);
+	int nid = page_to_nid(page);
+
+	return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
 }

From patchwork Tue Jun 23 06:13:42 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11619823
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 2/8] mm/migrate: move migration helper from .h to .c
Date: Tue, 23 Jun 2020 15:13:42 +0900
Message-Id: <1592892828-1934-3-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

new_page_nodemask() is not a performance-sensitive function, so move it
from the header to a .c file. This is a preparation step for a future
change.

Acked-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
 include/linux/migrate.h | 33 +++++----------------------------
 mm/migrate.c            | 29 +++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+), 28 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3e546cb..1d70b4a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -31,34 +31,6 @@ enum migrate_reason {
 /* In mm/debug.c; also keep sync with include/trace/events/migrate.h */
 extern const char *migrate_reason_names[MR_TYPES];
 
-static inline struct page *new_page_nodemask(struct page *page,
-				int preferred_nid, nodemask_t *nodemask)
-{
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
-	unsigned int order = 0;
-	struct page *new_page = NULL;
-
-	if (PageHuge(page))
-		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
-				preferred_nid, nodemask);
-
-	if (PageTransHuge(page)) {
-		gfp_mask |= GFP_TRANSHUGE;
-		order = HPAGE_PMD_ORDER;
-	}
-
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
-		gfp_mask |= __GFP_HIGHMEM;
-
-	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
-
-	if (new_page && PageTransHuge(new_page))
-		prep_transhuge_page(new_page);
-
-	return new_page;
-}
-
 #ifdef CONFIG_MIGRATION
 
 extern void putback_movable_pages(struct list_head *l);
@@ -67,6 +39,8 @@ extern int migrate_page(struct address_space *mapping,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
+extern struct page *new_page_nodemask(struct page *page,
+		int preferred_nid, nodemask_t *nodemask);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
 
@@ -85,6 +59,9 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
 		int reason)
 	{ return -ENOSYS; }
+static inline struct page *new_page_nodemask(struct page *page,
+		int preferred_nid, nodemask_t *nodemask)
+	{ return NULL; }
 
 static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
 	{ return -EBUSY; }
diff --git a/mm/migrate.c b/mm/migrate.c
index c95912f..6b5c75b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1536,6 +1536,35 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
+struct page *new_page_nodemask(struct page *page,
+		int preferred_nid, nodemask_t *nodemask)
+{
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	unsigned int order = 0;
+	struct page *new_page = NULL;
+
+	if (PageHuge(page))
+		return alloc_huge_page_nodemask(
+				page_hstate(compound_head(page)),
+				preferred_nid, nodemask);
+
+	if (PageTransHuge(page)) {
+		gfp_mask |= GFP_TRANSHUGE;
+		order = HPAGE_PMD_ORDER;
+	}
+
+	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
+		gfp_mask |= __GFP_HIGHMEM;
+
+	new_page = __alloc_pages_nodemask(gfp_mask, order,
+				preferred_nid, nodemask);
+
+	if (new_page && PageTransHuge(new_page))
+		prep_transhuge_page(new_page);
+
+	return new_page;
+}
+
 #ifdef CONFIG_NUMA
 
 static int store_status(int __user *status, int start, int value, int nr)

From patchwork Tue Jun 23 06:13:43 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11619825
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 3/8] mm/hugetlb: unify migration callbacks
Date: Tue, 23 Jun 2020 15:13:43 +0900
Message-Id: <1592892828-1934-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except their
__GFP_THISNODE handling.
This patch adds a gfp_mask argument to alloc_huge_page_nodemask() and
replaces the call sites of alloc_huge_page_node() with calls to
alloc_huge_page_nodemask(..., __GFP_THISNODE).

It is safe to remove the node id check in alloc_huge_page_node() since
no caller passes NUMA_NO_NODE as the node id.

Signed-off-by: Joonsoo Kim
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h | 11 +++--------
 mm/hugetlb.c            | 26 +++-----------------------
 mm/mempolicy.c          |  9 +++++----
 mm/migrate.c            |  5 +++--
 4 files changed, 14 insertions(+), 37 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 50650d0..8a8b755 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -504,9 +504,8 @@ struct huge_bootmem_page {
 
 struct page *alloc_huge_page(struct vm_area_struct *vma,
 				unsigned long addr, int avoid_reserve);
-struct page *alloc_huge_page_node(struct hstate *h, int nid);
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-				nodemask_t *nmask);
+				nodemask_t *nmask, gfp_t gfp_mask);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 				unsigned long address);
 struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
@@ -759,13 +758,9 @@ static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *alloc_huge_page_node(struct hstate *h, int nid)
-{
-	return NULL;
-}
-
 static inline struct page *
-alloc_huge_page_nodemask(struct hstate *h, int preferred_nid, nodemask_t *nmask)
+alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
+			nodemask_t *nmask, gfp_t gfp_mask)
 {
 	return NULL;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d54bb7e..bd408f2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1979,30 +1979,10 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 }
 
 /* page migration callback function */
-struct page *alloc_huge_page_node(struct hstate *h, int nid)
-{
-	gfp_t gfp_mask = htlb_alloc_mask(h);
-	struct page *page = NULL;
-
-	if (nid != NUMA_NO_NODE)
-		gfp_mask |= __GFP_THISNODE;
-
-	spin_lock(&hugetlb_lock);
-	if (h->free_huge_pages - h->resv_huge_pages > 0)
-		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
-	spin_unlock(&hugetlb_lock);
-
-	if (!page)
-		page = alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
-
-	return page;
-}
-
-/* page migration callback function */
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-		nodemask_t *nmask)
+		nodemask_t *nmask, gfp_t gfp_mask)
 {
-	gfp_t gfp_mask = htlb_alloc_mask(h);
+	gfp_mask |= htlb_alloc_mask(h);
 
 	spin_lock(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
@@ -2031,7 +2011,7 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 
 	gfp_mask = htlb_alloc_mask(h);
 	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
-	page = alloc_huge_page_nodemask(h, node, nodemask);
+	page = alloc_huge_page_nodemask(h, node, nodemask, 0);
 	mpol_cond_put(mpol);
 
 	return page;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b9e85d4..f21cff5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1068,10 +1068,11 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 /* page allocation callback for NUMA node migration */
 struct page *alloc_new_node_page(struct page *page, unsigned long node)
 {
-	if (PageHuge(page))
-		return alloc_huge_page_node(page_hstate(compound_head(page)),
-					node);
-	else if (PageTransHuge(page)) {
+	if (PageHuge(page)) {
+		return alloc_huge_page_nodemask(
+			page_hstate(compound_head(page)), node,
+			NULL, __GFP_THISNODE);
+	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
 		thp = alloc_pages_node(node,
diff --git a/mm/migrate.c b/mm/migrate.c
index 6b5c75b..6ca9f0c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1543,10 +1543,11 @@ struct page *new_page_nodemask(struct page *page,
 	unsigned int order = 0;
 	struct page *new_page = NULL;
 
-	if (PageHuge(page))
+	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
 				page_hstate(compound_head(page)),
-				preferred_nid, nodemask);
+				preferred_nid, nodemask, 0);
+	}
 
 	if (PageTransHuge(page)) {
 		gfp_mask |= GFP_TRANSHUGE;

From patchwork Tue Jun 23 06:13:44 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11619827
(UTC) X-FDA: 76959462678.23.trip20_0b0256826e39 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id 5D6233762A for ; Tue, 23 Jun 2020 06:14:39 +0000 (UTC) X-Spam-Summary: 2,0,0,26eca16357275e1a,d41d8cd98f00b204,js1304@gmail.com,,RULES_HIT:1:2:41:355:379:541:800:960:966:973:988:989:1260:1345:1359:1437:1605:1730:1747:1777:1792:2196:2199:2393:2559:2562:2693:2898:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:4050:4250:4321:4385:4605:5007:6119:6261:6653:7576:7903:8957:9010:9413:10004:11026:11232:11473:11658:11914:12043:12291:12296:12297:12438:12485:12517:12519:12555:12679:12683:12895:12986:14096:14394:21080:21212:21444:21451:21611:21627:21666:21740:21990:30003:30054:30064:30070,0,RBL:209.85.216.68:@gmail.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: trip20_0b0256826e39 X-Filterd-Recvd-Size: 10832 Received: from mail-pj1-f68.google.com (mail-pj1-f68.google.com [209.85.216.68]) by imf21.hostedemail.com (Postfix) with ESMTP for ; Tue, 23 Jun 2020 06:14:38 +0000 (UTC) Received: by mail-pj1-f68.google.com with SMTP id i4so1092808pjd.0 for ; Mon, 22 Jun 2020 23:14:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=Dc8LghL6mPcKqqBO+Tl511spB3nCK4Lh+EbuLfVS40I=; b=up5DFWGzS5WrjfCoJJN5xdu/dSnt80JnCIvsIR4FO584vq8RGK7mjGJlO4zGnQnNt9 NDY1nG1cVMwu4sfBRNJyFolgiS5lSDAnc63N6CgrsQqsDGoeM2Jm2F0l009uokROvN6L TcUVe/dRulxTsgXTBZ2IbIXV2QdSufrJOOF6888yhzHDBKa6m9H7zbcooNgkaNZwgvHQ 8/p5jgQyvOV5rB+VkogjzEos4W758txaS7wzQQfMCXFgtwB6CP7yGkIMdJfhsgLs6nqY miY0y/GsbyMyZ9j/9XiP7D9ZxKNn3KHoCCQ3ouGbMvYgmwqqFC/mcZW5Vr5EErUq9TLy Jv7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; 
h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Dc8LghL6mPcKqqBO+Tl511spB3nCK4Lh+EbuLfVS40I=; b=KotKFHtyRO70ihTWqJwvMgfGS5KzCprWqbmrMF+bwFI5K4SFM9HyU7inGhPtT9jfgQ ycpOQSCV/MRWkIIkESafri8U4W0UsHexXaWek+FCQsToq511bZhdalMaXnmgutV1gex5 w+Ugp42IBHucSUIEHdseipWG+TefXpzgy+m8DPBSmpOIDps4l1epTbAaar5Kaq0cx95z vhmt8gqbe/zv9V+9pZVtnaMwbabDnO3qa1r14uY6q+o7ZIVrwuTeBMl32VpL03TmfU/E IfN8+CqQ7rcAHonEGF4F5GTbqcL6tqDfpkJHbTp0Wh941STAw7HyCM8Va24Ap48QEj0q km5A== X-Gm-Message-State: AOAM532L23V2QojxsQWrFHe/XfUGS9SO6/xuGA1Hx92zDassHtAGqARd ++EvkjF00l029Fj5qOZYjjE= X-Google-Smtp-Source: ABdhPJzKX1HSJANYr7NhAx6gNraDC6+KJqFwV6wV6/d6ax9bWH8Sqo5hTpSz6Y3cF5ezGkJ0Cy/U0Q== X-Received: by 2002:a17:90a:f0d4:: with SMTP id fa20mr21701832pjb.160.1592892878004; Mon, 22 Jun 2020 23:14:38 -0700 (PDT) Received: from localhost.localdomain ([114.206.198.176]) by smtp.gmail.com with ESMTPSA id m15sm12801093pgv.45.2020.06.22.23.14.34 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Mon, 22 Jun 2020 23:14:37 -0700 (PDT) From: js1304@gmail.com X-Google-Original-From: iamjoonsoo.kim@lge.com To: Andrew Morton Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com, Vlastimil Babka , Christoph Hellwig , Roman Gushchin , Mike Kravetz , Naoya Horiguchi , Michal Hocko , Joonsoo Kim Subject: [PATCH v3 4/8] mm/hugetlb: make hugetlb migration callback CMA aware Date: Tue, 23 Jun 2020 15:13:44 +0900 Message-Id: <1592892828-1934-5-git-send-email-iamjoonsoo.kim@lge.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com> References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com> X-Rspamd-Queue-Id: 5D6233762A X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Joonsoo Kim new_non_cma_page() in gup.c which 
try to allocate migration target page requires to allocate the new page that is not on the CMA area. new_non_cma_page() implements it by removing __GFP_MOVABLE flag. This way works well for THP page or normal page but not for hugetlb page. hugetlb page allocation process consists of two steps. First is dequeing from the pool. Second is, if there is no available page on the queue, allocating from the page allocator. new_non_cma_page() can control allocation from the page allocator by specifying correct gfp flag. However, dequeing cannot be controlled until now, so, new_non_cma_page() skips dequeing completely. It is a suboptimal since new_non_cma_page() cannot utilize hugetlb pages on the queue so this patch tries to fix this situation. This patch makes the deque function on hugetlb CMA aware and skip CMA pages if newly added skip_cma argument is passed as true. Acked-by: Mike Kravetz Signed-off-by: Joonsoo Kim --- include/linux/hugetlb.h | 6 ++---- mm/gup.c | 3 ++- mm/hugetlb.c | 31 ++++++++++++++++++++++--------- mm/mempolicy.c | 2 +- mm/migrate.c | 2 +- 5 files changed, 28 insertions(+), 16 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 8a8b755..858522e 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -505,11 +505,9 @@ struct huge_bootmem_page { struct page *alloc_huge_page(struct vm_area_struct *vma, unsigned long addr, int avoid_reserve); struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid, - nodemask_t *nmask, gfp_t gfp_mask); + nodemask_t *nmask, gfp_t gfp_mask, bool skip_cma); struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma, unsigned long address); -struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask, - int nid, nodemask_t *nmask); int huge_add_to_page_cache(struct page *page, struct address_space *mapping, pgoff_t idx); @@ -760,7 +758,7 @@ static inline struct page *alloc_huge_page(struct vm_area_struct *vma, static inline struct 
page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-				nodemask_t *nmask, gfp_t gfp_mask)
+				nodemask_t *nmask, gfp_t gfp_mask, bool skip_cma)
 {
 	return NULL;
 }

diff --git a/mm/gup.c b/mm/gup.c
index 6f47697..15be281 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1630,11 +1630,12 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 #ifdef CONFIG_HUGETLB_PAGE
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
+
 		/*
 		 * We don't want to dequeue from the pool because pool pages will
 		 * mostly be from the CMA region.
 		 */
-		return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask, true);
 	}
 #endif
 	if (PageTransHuge(page)) {

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bd408f2..1410e62 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1033,13 +1033,18 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 	h->free_huge_pages_node[nid]++;
 }

-static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid, bool skip_cma)
 {
 	struct page *page;

-	list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
+	list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
+		if (skip_cma && is_migrate_cma_page(page))
+			continue;
+
 		if (!PageHWPoison(page))
 			break;
+	}
+
 	/*
 	 * if 'non-isolated free hugepage' not found on the list,
 	 * the allocation fails.
@@ -1054,7 +1059,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 }

 static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask, int nid,
-		nodemask_t *nmask)
+		nodemask_t *nmask, bool skip_cma)
 {
 	unsigned int cpuset_mems_cookie;
 	struct zonelist *zonelist;
@@ -1079,7 +1084,7 @@ static struct page *dequeue_huge_page_nodemask(struct hstate *h, gfp_t gfp_mask,
 			continue;
 		node = zone_to_nid(zone);

-		page = dequeue_huge_page_node_exact(h, node);
+		page = dequeue_huge_page_node_exact(h, node, skip_cma);
 		if (page)
 			return page;
 	}
@@ -1124,7 +1129,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
 	gfp_mask = htlb_alloc_mask(h);
 	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);

-	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
+	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask, false);
 	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
 		SetPagePrivate(page);
 		h->resv_huge_pages--;
@@ -1937,7 +1942,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
 	return page;
 }

-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
+static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
 				int nid, nodemask_t *nmask)
 {
 	struct page *page;
@@ -1980,7 +1985,7 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,

 /* page migration callback function */
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
-		nodemask_t *nmask, gfp_t gfp_mask)
+		nodemask_t *nmask, gfp_t gfp_mask, bool skip_cma)
 {
 	gfp_mask |= htlb_alloc_mask(h);
@@ -1988,7 +1993,8 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
 		struct page *page;

-		page = dequeue_huge_page_nodemask(h, gfp_mask, preferred_nid, nmask);
+		page = dequeue_huge_page_nodemask(h, gfp_mask,
+				preferred_nid, nmask, skip_cma);
 		if (page) {
 			spin_unlock(&hugetlb_lock);
 			return
page;
@@ -1996,6 +2002,13 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 	}
 	spin_unlock(&hugetlb_lock);

+	/*
+	 * To skip the memory on CMA area, we need to clear __GFP_MOVABLE.
+	 * Clearing __GFP_MOVABLE at the top of this function would also skip
+	 * the proper allocation candidates for dequeue so clearing it here.
+	 */
+	if (skip_cma)
+		gfp_mask &= ~__GFP_MOVABLE;

 	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);
 }
@@ -2011,7 +2024,7 @@ struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
 	gfp_mask = htlb_alloc_mask(h);
 	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);

-	page = alloc_huge_page_nodemask(h, node, nodemask, 0);
+	page = alloc_huge_page_nodemask(h, node, nodemask, 0, false);
 	mpol_cond_put(mpol);

 	return page;

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f21cff5..a3abf64 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1071,7 +1071,7 @@ struct page *alloc_new_node_page(struct page *page, unsigned long node)
 	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
 			page_hstate(compound_head(page)), node,
-			NULL, __GFP_THISNODE);
+			NULL, __GFP_THISNODE, false);
 	} else if (PageTransHuge(page)) {
 		struct page *thp;

diff --git a/mm/migrate.c b/mm/migrate.c
index 6ca9f0c..634f1ea 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1546,7 +1546,7 @@ struct page *new_page_nodemask(struct page *page,
 	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
 			page_hstate(compound_head(page)),
-			preferred_nid, nodemask, 0);
+			preferred_nid, nodemask, 0, false);
 	}

 	if (PageTransHuge(page)) {
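The two-step logic above (skip CMA pages while dequeueing from the pool, then clear the movable flag for the page-allocator fallback) can be sketched as a small userspace model. Everything here is illustrative: the `demo_*` names, the flag value, and the array-based free list are stand-ins for the kernel's `is_migrate_cma_page()`, `PageHWPoison()`, and `__GFP_MOVABLE` machinery.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy flag standing in for __GFP_MOVABLE. */
#define DEMO_GFP_MOVABLE 0x8u

struct demo_page {
	bool is_cma;   /* models is_migrate_cma_page() */
	bool hwpoison; /* models PageHWPoison() */
};

/* Walk a free list and return the index of the first usable page,
 * skipping CMA pages when skip_cma is set (mirrors the patched
 * dequeue loop). Returns -1 if nothing usable is found. */
static int demo_dequeue(const struct demo_page *list, int n, bool skip_cma)
{
	for (int i = 0; i < n; i++) {
		if (skip_cma && list[i].is_cma)
			continue;
		if (!list[i].hwpoison)
			return i;
	}
	return -1;
}

/* If the dequeue fails, the fallback allocation clears the movable
 * flag so the allocator cannot hand back CMA memory (mirrors the
 * "if (skip_cma) gfp_mask &= ~__GFP_MOVABLE;" hunk above). */
static unsigned int demo_fallback_gfp(unsigned int gfp_mask, bool skip_cma)
{
	if (skip_cma)
		gfp_mask &= ~DEMO_GFP_MOVABLE;
	return gfp_mask;
}
```

Clearing the flag only at the fallback, not at the top of the function, matters: the dequeue step still needs the original mask to pick the proper candidates.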
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com, Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 5/8] mm/migrate: make a standard migration target allocation function
Date: Tue, 23 Jun 2020 15:13:45 +0900
Message-Id: <1592892828-1934-6-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There are several similar functions for migration target allocation. Since there is no fundamental difference among them, it is better to keep just one rather than maintaining all the variants.
This patch implements a base migration target allocation function. In the following patches, the variants will be converted to use this function. Note that the PageHighMem() call in the previous function is replaced with an open-coded is_highmem_idx() check, which is more readable.

Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
---
 include/linux/migrate.h |  5 +++--
 mm/internal.h           |  7 +++++++
 mm/memory-failure.c     |  8 ++++++--
 mm/memory_hotplug.c     | 14 +++++++++----
 mm/migrate.c            | 21 +++++++++++++--------
 mm/page_isolation.c     |  8 ++++++--
 6 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 1d70b4a..5e9c866 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -10,6 +10,8 @@ typedef struct page *new_page_t(struct page *page, unsigned long private);
 typedef void free_page_t(struct page *page, unsigned long private);

+struct migration_target_control;
+
 /*
  * Return values from addresss_space_operations.migratepage():
  * - negative errno on page migration failure;
@@ -39,8 +41,7 @@ extern int migrate_page(struct address_space *mapping,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
-extern struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask);
+extern struct page *alloc_migration_target(struct page *page, unsigned long private);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);

diff --git a/mm/internal.h b/mm/internal.h
index 42cf0b6..f725aa8 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -614,4 +614,11 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 void setup_zone_pageset(struct zone *zone);
 extern struct page *alloc_new_node_page(struct page *page, unsigned long node);
+
+struct migration_target_control {
+	int nid;		/* preferred node id */
+	nodemask_t *nmask;
+	gfp_t gfp_mask;
+};
+
 #endif /* __MM_INTERNAL_H */

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 47b8ccb..820ea5e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1648,9 +1648,13 @@ EXPORT_SYMBOL(unpoison_memory);

 static struct page *new_page(struct page *p, unsigned long private)
 {
-	int nid = page_to_nid(p);
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(p),
+		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};

-	return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
+	return alloc_migration_target(p, (unsigned long)&mtc);
 }

 /*

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index be3c62e3..d2b65a5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1259,19 +1259,23 @@ static int scan_movable_pages(unsigned long start, unsigned long end,

 static struct page *new_node_page(struct page *page, unsigned long private)
 {
-	int nid = page_to_nid(page);
 	nodemask_t nmask = node_states[N_MEMORY];
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.nmask = &nmask,
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};

 	/*
 	 * try to allocate from a different node but reuse this node if there
 	 * are no other online nodes to be used (e.g.
we are offlining a part
	 * of the only existing node)
	 */
-	node_clear(nid, nmask);
-	if (nodes_empty(nmask))
-		node_set(nid, nmask);
+	node_clear(mtc.nid, *mtc.nmask);
+	if (nodes_empty(*mtc.nmask))
+		node_set(mtc.nid, *mtc.nmask);

-	return new_page_nodemask(page, nid, &nmask);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }

 static int

diff --git a/mm/migrate.c b/mm/migrate.c
index 634f1ea..3afff59 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1536,29 +1536,34 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }

-struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask)
+struct page *alloc_migration_target(struct page *page, unsigned long private)
 {
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	struct migration_target_control *mtc;
+	gfp_t gfp_mask;
 	unsigned int order = 0;
 	struct page *new_page = NULL;
+	int zidx;
+
+	mtc = (struct migration_target_control *)private;
+	gfp_mask = mtc->gfp_mask;

 	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
-			page_hstate(compound_head(page)),
-			preferred_nid, nodemask, 0, false);
+			page_hstate(compound_head(page)), mtc->nid,
+			mtc->nmask, gfp_mask, false);
 	}

 	if (PageTransHuge(page)) {
+		gfp_mask &= ~__GFP_RECLAIM;
 		gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
 	}
-
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
+	zidx = zone_idx(page_zone(page));
+	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;

 	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
+				mtc->nid, mtc->nmask);

 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index aec26d9..adba031 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -309,7 +309,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,

 struct page *alloc_migrate_target(struct page *page, unsigned long
private)
 {
-	int nid = page_to_nid(page);
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};

-	return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }
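The conversion pattern used throughout this patch (pack a `migration_target_control`, pass its address through the opaque `unsigned long private`, and decode it in the standard callback) can be modeled in a few lines of userspace C. All `demo_*` names below are hypothetical stand-ins; only the field names echo the kernel's `struct migration_target_control`.

```c
#include <assert.h>

/* Illustrative mirror of struct migration_target_control. */
struct demo_target_control {
	int nid;               /* preferred node id */
	unsigned int gfp_mask; /* allocation flags */
};

/* Stand-in for alloc_migration_target(): decode `private` back into the
 * control structure and report which node it would allocate on. */
static int demo_alloc_target(unsigned long private)
{
	struct demo_target_control *mtc =
		(struct demo_target_control *)private;

	return mtc->nid;
}

/* Stand-in for migrate_pages(): it only sees an allocation callback and
 * an opaque cookie, so one callback can serve every call site. */
static int demo_migrate(int (*get_new)(unsigned long), unsigned long private)
{
	return get_new(private);
}

/* Pack, forward, unpack - as the converted new_page()/new_node_page()
 * call sites now do. */
static int demo_roundtrip(int nid)
{
	struct demo_target_control mtc = { .nid = nid, .gfp_mask = 0 };

	return demo_migrate(demo_alloc_target, (unsigned long)&mtc);
}
```

The design choice here is that `migrate_pages()` keeps its existing `unsigned long private` signature, so per-caller policy moves into the control structure instead of into a zoo of near-identical callbacks.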
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com, Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 6/8] mm/gup: use a standard migration target allocation callback
Date: Tue, 23 Jun 2020 15:13:46 +0900
Message-Id: <1592892828-1934-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
From: Joonsoo Kim

There is a well-defined migration target allocation callback. It is mostly similar to new_non_cma_page(), except for the handling of CMA pages. This patch adds CMA awareness to the standard migration target allocation callback and uses it in gup.c.

Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
---
 mm/gup.c      | 57 ++++++++-------------------------------------------------
 mm/internal.h |  1 +
 mm/migrate.c  |  4 +++-
 3 files changed, 12 insertions(+), 50 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 15be281..f6124e3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1608,56 +1608,15 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }

 #ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page, unsigned long private)
+static struct page *alloc_migration_target_non_cma(struct page *page, unsigned long private)
 {
-	/*
-	 * We want to make sure we allocate the new page from the same node
-	 * as the source page.
-	 */
-	int nid = page_to_nid(page);
-	/*
-	 * Trying to allocate a page for migration. Ignore allocation
-	 * failure warnings. We don't force __GFP_THISNODE here because
-	 * this node here is the node where we have CMA reservation and
-	 * in some case these nodes will have really less non movable
-	 * allocation memory.
-	 */
-	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
-
-	if (PageHighMem(page))
-		gfp_mask |= __GFP_HIGHMEM;
-
-#ifdef CONFIG_HUGETLB_PAGE
-	if (PageHuge(page)) {
-		struct hstate *h = page_hstate(page);
-
-		/*
-		 * We don't want to dequeue from the pool because pool pages will
-		 * mostly be from the CMA region.
-		 */
-		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask, true);
-	}
-#endif
-	if (PageTransHuge(page)) {
-		struct page *thp;
-		/*
-		 * ignore allocation failure warnings
-		 */
-		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
-
-		/*
-		 * Remove the movable mask so that we don't allocate from
-		 * CMA area again.
-		 */
-		thp_gfpmask &= ~__GFP_MOVABLE;
-		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
-		if (!thp)
-			return NULL;
-		prep_transhuge_page(thp);
-		return thp;
-	}
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
+		.skip_cma = true,
+	};

-	return __alloc_pages_node(nid, gfp_mask, 0);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }

 static long check_and_migrate_cma_pages(struct task_struct *tsk,
@@ -1719,7 +1678,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);

-		if (migrate_pages(&cma_page_list, new_non_cma_page,
+		if (migrate_pages(&cma_page_list, alloc_migration_target_non_cma,
 				  NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration.
Do get_user_pages

diff --git a/mm/internal.h b/mm/internal.h
index f725aa8..fb7f7fe 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -619,6 +619,7 @@ struct migration_target_control {
 	int nid;		/* preferred node id */
 	nodemask_t *nmask;
 	gfp_t gfp_mask;
+	bool skip_cma;
 };

 #endif /* __MM_INTERNAL_H */

diff --git a/mm/migrate.c b/mm/migrate.c
index 3afff59..7c4cd74 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1550,7 +1550,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
 			page_hstate(compound_head(page)), mtc->nid,
-			mtc->nmask, gfp_mask, false);
+			mtc->nmask, gfp_mask, mtc->skip_cma);
 	}

 	if (PageTransHuge(page)) {
@@ -1561,6 +1561,8 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 	zidx = zone_idx(page_zone(page));
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
+	if (mtc->skip_cma)
+		gfp_mask &= ~__GFP_MOVABLE;

 	new_page = __alloc_pages_nodemask(gfp_mask, order,
 					  mtc->nid, mtc->nmask);
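The gfp-mask juggling that alloc_migration_target() now performs (THP pages drop the reclaim bits and gain GFP_TRANSHUGE, highmem/movable source zones add __GFP_HIGHMEM, and skip_cma clears __GFP_MOVABLE) can be condensed into one pure function. The `DEMO_GFP_*` bit values below are made up for illustration; only the decision structure mirrors the kernel code.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy gfp bits standing in for the real kernel flags. */
#define DEMO_GFP_RECLAIM   0x1u /* models __GFP_RECLAIM */
#define DEMO_GFP_MOVABLE   0x2u /* models __GFP_MOVABLE */
#define DEMO_GFP_HIGHMEM   0x4u /* models __GFP_HIGHMEM */
#define DEMO_GFP_TRANSHUGE 0x8u /* models GFP_TRANSHUGE */

/* Sketch of the flag adjustments in alloc_migration_target():
 * - THP targets drop reclaim bits and gain the transhuge flags;
 * - highmem/movable source zones allow a highmem target;
 * - skip_cma clears the movable flag so the CMA area is never used
 *   for the target page. */
static unsigned int demo_target_gfp(unsigned int gfp_mask, bool is_thp,
				    bool highmem_or_movable, bool skip_cma)
{
	if (is_thp) {
		gfp_mask &= ~DEMO_GFP_RECLAIM;
		gfp_mask |= DEMO_GFP_TRANSHUGE;
	}
	if (highmem_or_movable)
		gfp_mask |= DEMO_GFP_HIGHMEM;
	if (skip_cma)
		gfp_mask &= ~DEMO_GFP_MOVABLE;
	return gfp_mask;
}
```

Folding these adjustments into the one standard callback is what lets new_non_cma_page() shrink to a `migration_target_control` initializer with `.skip_cma = true`.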
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc:
linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com, Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 7/8] mm/mempolicy: use a standard migration target allocation callback
Date: Tue, 23 Jun 2020 15:13:47 +0900
Message-Id: <1592892828-1934-8-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is a well-defined migration target allocation callback. Use it.

Signed-off-by: Joonsoo Kim
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
---
 mm/internal.h  |  1 -
 mm/mempolicy.c | 30 ++++++------------------------
 mm/migrate.c   |  8 ++++++--
 3 files changed, 12 insertions(+), 27 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index fb7f7fe..4f9f6b6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -613,7 +613,6 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 }

 void setup_zone_pageset(struct zone *zone);
-extern struct page *alloc_new_node_page(struct page *page, unsigned long node);

 struct migration_target_control {
 	int nid;		/* preferred node id */

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a3abf64..85a3f21 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1065,28 +1065,6 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 	return 0;
 }

-/* page allocation callback for NUMA node migration */
-struct page *alloc_new_node_page(struct page *page, unsigned long node)
-{
-	if (PageHuge(page)) {
-		return alloc_huge_page_nodemask(
-			page_hstate(compound_head(page)), node,
-			NULL,
__GFP_THISNODE, false); - } else if (PageTransHuge(page)) { - struct page *thp; - - thp = alloc_pages_node(node, - (GFP_TRANSHUGE | __GFP_THISNODE), - HPAGE_PMD_ORDER); - if (!thp) - return NULL; - prep_transhuge_page(thp); - return thp; - } else - return __alloc_pages_node(node, GFP_HIGHUSER_MOVABLE | - __GFP_THISNODE, 0); -} - /* * Migrate pages from one node to a target node. * Returns error or the number of pages not migrated. @@ -1097,6 +1075,10 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest, nodemask_t nmask; LIST_HEAD(pagelist); int err = 0; + struct migration_target_control mtc = { + .nid = dest, + .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, + }; nodes_clear(nmask); node_set(source, nmask); @@ -1111,8 +1093,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest, flags | MPOL_MF_DISCONTIG_OK, &pagelist); if (!list_empty(&pagelist)) { - err = migrate_pages(&pagelist, alloc_new_node_page, NULL, dest, - MIGRATE_SYNC, MR_SYSCALL); + err = migrate_pages(&pagelist, alloc_migration_target, NULL, + (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL); if (err) putback_movable_pages(&pagelist); } diff --git a/mm/migrate.c b/mm/migrate.c index 7c4cd74..1c943b0 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1590,9 +1590,13 @@ static int do_move_pages_to_node(struct mm_struct *mm, struct list_head *pagelist, int node) { int err; + struct migration_target_control mtc = { + .nid = node, + .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE, + }; - err = migrate_pages(pagelist, alloc_new_node_page, NULL, node, - MIGRATE_SYNC, MR_SYSCALL); + err = migrate_pages(pagelist, alloc_migration_target, NULL, + (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL); if (err) putback_movable_pages(pagelist); return err; From patchwork Tue Jun 23 06:13:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Joonsoo Kim X-Patchwork-Id: 11619835 Return-Path: Received: from 
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
    Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
    Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 8/8] mm/page_alloc: remove a wrapper for alloc_migration_target()
Date: Tue, 23 Jun 2020 15:13:48 +0900
Message-Id: <1592892828-1934-9-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is a well-defined standard migration target callback. Use it directly.
Signed-off-by: Joonsoo Kim
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
---
 mm/page_alloc.c     |  9 +++++++--
 mm/page_isolation.c | 11 -----------
 2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9808339..884dfb5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8359,6 +8359,11 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned long pfn = start;
 	unsigned int tries = 0;
 	int ret = 0;
+	struct migration_target_control mtc = {
+		.nid = zone_to_nid(cc->zone),
+		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
 	migrate_prep();
 
@@ -8385,8 +8390,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 							&cc->migratepages);
 		cc->nr_migratepages -= nr_reclaimed;
 
-		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
-				NULL, 0, cc->mode, MR_CONTIG_RANGE);
+		ret = migrate_pages(&cc->migratepages, alloc_migration_target,
+				NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
 	}
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index adba031..242c031 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -306,14 +306,3 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 	return pfn < end_pfn ? -EBUSY : 0;
 }
-
-struct page *alloc_migrate_target(struct page *page, unsigned long private)
-{
-	struct migration_target_control mtc = {
-		.nid = page_to_nid(page),
-		.nmask = &node_states[N_MEMORY],
-		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
-	};
-
-	return alloc_migration_target(page, (unsigned long)&mtc);
-}