From patchwork Tue Jun 23 06:13:45 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11619829
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
    Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
    Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 5/8] mm/migrate: make a standard migration target allocation function
Date: Tue, 23 Jun 2020 15:13:45 +0900
Message-Id: <1592892828-1934-6-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There are several similar functions for migration target allocation.
Since there is no fundamental difference between them, it's better to
keep just one rather than keeping all the variants. This patch
implements the base migration target allocation function. In the
following patches, the variants will be converted to use this function.

Note that the PageHighMem() call in the previous function is changed to
an open-coded is_highmem_idx() check since it is more readable.
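As a quick illustration (an editorial sketch, not part of the patch;
all names are taken from the hunks below), the calling convention
changes from passing a node and nodemask directly to packing them,
together with an explicit gfp_mask, into a control structure:

	/* before: the GFP flags were hard-coded inside the callee */
	new_page = new_page_nodemask(page, nid, &node_states[N_MEMORY]);

	/* after: the caller states node, nodemask and GFP flags explicitly */
	struct migration_target_control mtc = {
		.nid = page_to_nid(page),
		.nmask = &node_states[N_MEMORY],
		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
	};
	new_page = alloc_migration_target(page, (unsigned long)&mtc);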
Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
---
 include/linux/migrate.h |  5 +++--
 mm/internal.h           |  7 +++++++
 mm/memory-failure.c     |  8 ++++++--
 mm/memory_hotplug.c     | 14 +++++++++-----
 mm/migrate.c            | 21 +++++++++++++--------
 mm/page_isolation.c     |  8 ++++++--
 6 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 1d70b4a..5e9c866 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -10,6 +10,8 @@
 typedef struct page *new_page_t(struct page *page, unsigned long private);
 typedef void free_page_t(struct page *page, unsigned long private);
 
+struct migration_target_control;
+
 /*
  * Return values from addresss_space_operations.migratepage():
  * - negative errno on page migration failure;
@@ -39,8 +41,7 @@ extern int migrate_page(struct address_space *mapping,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
-extern struct page *new_page_nodemask(struct page *page,
-		int preferred_nid, nodemask_t *nodemask);
+extern struct page *alloc_migration_target(struct page *page, unsigned long private);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
diff --git a/mm/internal.h b/mm/internal.h
index 42cf0b6..f725aa8 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -614,4 +614,11 @@ static inline bool is_migrate_highatomic_page(struct page *page)
 
 void setup_zone_pageset(struct zone *zone);
 extern struct page *alloc_new_node_page(struct page *page, unsigned long node);
+
+struct migration_target_control {
+	int nid;		/* preferred node id */
+	nodemask_t *nmask;
+	gfp_t gfp_mask;
+};
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 47b8ccb..820ea5e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1648,9 +1648,13 @@ EXPORT_SYMBOL(unpoison_memory);
 
 static struct page *new_page(struct page *p, unsigned long private)
 {
-	int nid = page_to_nid(p);
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(p),
+		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
-	return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
+	return alloc_migration_target(p, (unsigned long)&mtc);
 }
 
 /*
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index be3c62e3..d2b65a5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1259,19 +1259,23 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 
 static struct page *new_node_page(struct page *page, unsigned long private)
 {
-	int nid = page_to_nid(page);
 	nodemask_t nmask = node_states[N_MEMORY];
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.nmask = &nmask,
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
 	/*
 	 * try to allocate from a different node but reuse this node if there
 	 * are no other online nodes to be used (e.g. we are offlining a part
	 * of the only existing node)
 	 */
-	node_clear(nid, nmask);
-	if (nodes_empty(nmask))
-		node_set(nid, nmask);
+	node_clear(mtc.nid, *mtc.nmask);
+	if (nodes_empty(*mtc.nmask))
+		node_set(mtc.nid, *mtc.nmask);
 
-	return new_page_nodemask(page, nid, &nmask);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }
 
 static int
diff --git a/mm/migrate.c b/mm/migrate.c
index 634f1ea..3afff59 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1536,29 +1536,34 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
-struct page *new_page_nodemask(struct page *page,
-				int preferred_nid, nodemask_t *nodemask)
+struct page *alloc_migration_target(struct page *page, unsigned long private)
 {
-	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
+	struct migration_target_control *mtc;
+	gfp_t gfp_mask;
 	unsigned int order = 0;
 	struct page *new_page = NULL;
+	int zidx;
+
+	mtc = (struct migration_target_control *)private;
+	gfp_mask = mtc->gfp_mask;
 
 	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
-				page_hstate(compound_head(page)),
-				preferred_nid, nodemask, 0, false);
+				page_hstate(compound_head(page)), mtc->nid,
+				mtc->nmask, gfp_mask, false);
 	}
 
 	if (PageTransHuge(page)) {
+		gfp_mask &= ~__GFP_RECLAIM;
 		gfp_mask |= GFP_TRANSHUGE;
 		order = HPAGE_PMD_ORDER;
 	}
-
-	if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
+	zidx = zone_idx(page_zone(page));
+	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
 
 	new_page = __alloc_pages_nodemask(gfp_mask, order,
-				preferred_nid, nodemask);
+				mtc->nid, mtc->nmask);
 
 	if (new_page && PageTransHuge(new_page))
 		prep_transhuge_page(new_page);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index aec26d9..adba031 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -309,7 +309,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private)
 {
-	int nid = page_to_nid(page);
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.nmask = &node_states[N_MEMORY],
+		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
+	};
 
-	return new_page_nodemask(page, nid, &node_states[N_MEMORY]);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }
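
For reference, a hedged usage sketch (not part of this patch; the helper
name migrate_list_to_node() is hypothetical, while migrate_pages() and
its signature, MIGRATE_SYNC, and MR_MEMORY_HOTPLUG are taken from this
kernel tree). It shows how a migrate_pages() caller is expected to
drive the new allocator: the control block travels through the opaque
'private' argument and alloc_migration_target() casts it back on every
allocation.

	/* In a file under mm/ — migration_target_control is mm-internal. */
	#include <linux/migrate.h>
	#include <linux/nodemask.h>
	#include "internal.h"

	/* Hypothetical helper: migrate every page on 'pagelist' to node 'nid'. */
	static int migrate_list_to_node(struct list_head *pagelist, int nid)
	{
		struct migration_target_control mtc = {
			.nid = nid,				/* preferred node */
			.nmask = &node_states[N_MEMORY],	/* fall back to any memory node */
			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
		};

		/*
		 * mtc lives on the stack; it is only dereferenced while
		 * migrate_pages() is running, so this is safe.
		 */
		return migrate_pages(pagelist, alloc_migration_target, NULL,
				     (unsigned long)&mtc, MIGRATE_SYNC,
				     MR_MEMORY_HOTPLUG);
	}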