From patchwork Wed Jul 15 05:05:26 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11664199
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, "Aneesh Kumar K . V", Joonsoo Kim,
 stable@vger.kernel.org
Subject: [PATCH 1/4] mm/page_alloc: fix non cma alloc context
Date: Wed, 15 Jul 2020 14:05:26 +0900
Message-Id: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

Currently, excluding the CMA area from page allocation is implemented with
current_gfp_context(). This implementation has two problems. First, it does
not work on the allocation fastpath: the fastpath uses the original gfp_mask,
because current_gfp_context() was introduced to control reclaim and is only
applied on the slowpath. Second, clearing __GFP_MOVABLE has the side effect
of also excluding ZONE_MOVABLE as an allocation target.

To fix these problems, this patch changes the implementation to exclude the
CMA area through alloc_flags instead. alloc_flags is what controls the
allocation itself, so it is the natural place to express "do not allocate
from CMA".
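As an illustration (not part of the patch), the sketch below is a minimal
user-space model of the decision this patch moves into alloc_flags: ALLOC_CMA
is derived once from the task's PF_MEMALLOC_NOCMA flag and the request's
migratetype, and both the fast and slow paths then test the same bit, so
__GFP_MOVABLE (and with it ZONE_MOVABLE) is left untouched. The names mirror
kernel identifiers, but all values and helpers here are simplified stand-ins.

/*
 * Editorial sketch, not kernel code: a user-space model of deriving and
 * consuming an ALLOC_CMA-style flag.
 */
#include <stdbool.h>
#include <stdio.h>

#define PF_MEMALLOC_NOCMA 0x1   /* task has entered a "no CMA" scope */
#define ALLOC_CMA         0x1   /* allocation may fall back to CMA pageblocks */

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE };

/* Mirrors the idea of current_alloc_flags(): decide ALLOC_CMA once, up front. */
static unsigned int current_alloc_flags(unsigned int task_flags,
                                        enum migratetype mt)
{
        unsigned int alloc_flags = 0;

        if (!(task_flags & PF_MEMALLOC_NOCMA) && mt == MIGRATE_MOVABLE)
                alloc_flags |= ALLOC_CMA;
        return alloc_flags;
}

/* Both the fast and slow allocation paths test the same bit. */
static bool may_use_cma(unsigned int alloc_flags)
{
        return alloc_flags & ALLOC_CMA;
}

int main(void)
{
        unsigned int normal = 0;
        unsigned int in_nocma_scope = PF_MEMALLOC_NOCMA;

        printf("movable alloc, normal task: CMA %s\n",
               may_use_cma(current_alloc_flags(normal, MIGRATE_MOVABLE)) ?
               "allowed" : "excluded");
        printf("movable alloc, nocma scope: CMA %s\n",
               may_use_cma(current_alloc_flags(in_nocma_scope, MIGRATE_MOVABLE)) ?
               "allowed" : "excluded");
        return 0;
}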
Fixes: d7fefcc ("mm/cma: add PF flag to force non cma alloc")
Cc: <stable@vger.kernel.org>
Signed-off-by: Joonsoo Kim
---
 include/linux/sched/mm.h |  4 ----
 mm/page_alloc.c          | 27 +++++++++++++++------------
 2 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 44ad5b7..a73847a 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -191,10 +191,6 @@ static inline gfp_t current_gfp_context(gfp_t flags)
                        flags &= ~(__GFP_IO | __GFP_FS);
                else if (pflags & PF_MEMALLOC_NOFS)
                        flags &= ~__GFP_FS;
-#ifdef CONFIG_CMA
-               if (pflags & PF_MEMALLOC_NOCMA)
-                       flags &= ~__GFP_MOVABLE;
-#endif
        }
        return flags;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6416d08..cd53894 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2791,7 +2791,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
         * allocating from CMA when over half of the zone's free memory
         * is in the CMA area.
         */
-       if (migratetype == MIGRATE_MOVABLE &&
+       if (alloc_flags & ALLOC_CMA &&
            zone_page_state(zone, NR_FREE_CMA_PAGES) >
            zone_page_state(zone, NR_FREE_PAGES) / 2) {
                page = __rmqueue_cma_fallback(zone, order);
@@ -2802,7 +2802,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 retry:
        page = __rmqueue_smallest(zone, order, migratetype);
        if (unlikely(!page)) {
-               if (migratetype == MIGRATE_MOVABLE)
+               if (alloc_flags & ALLOC_CMA)
                        page = __rmqueue_cma_fallback(zone, order);

                if (!page && __rmqueue_fallback(zone, order, migratetype,
@@ -3502,11 +3502,9 @@ static inline long __zone_watermark_unusable_free(struct zone *z,
        if (likely(!alloc_harder))
                unusable_free += z->nr_reserved_highatomic;

-#ifdef CONFIG_CMA
        /* If allocation can't use CMA areas don't use free CMA pages */
-       if (!(alloc_flags & ALLOC_CMA))
+       if (IS_ENABLED(CONFIG_CMA) && !(alloc_flags & ALLOC_CMA))
                unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES);
-#endif

        return unusable_free;
 }
@@ -3693,6 +3691,16 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
        return alloc_flags;
 }

+static inline void current_alloc_flags(gfp_t gfp_mask,
+                                       unsigned int *alloc_flags)
+{
+       unsigned int pflags = READ_ONCE(current->flags);
+
+       if (!(pflags & PF_MEMALLOC_NOCMA) &&
+                       gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
+               *alloc_flags |= ALLOC_CMA;
+}
+
 /*
  * get_page_from_freelist goes through the zonelist trying to allocate
  * a page.
@@ -3706,6 +3714,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
        struct pglist_data *last_pgdat_dirty_limit = NULL;
        bool no_fallback;

+       current_alloc_flags(gfp_mask, &alloc_flags);
+
 retry:
        /*
         * Scan zonelist, looking for a zone with enough free.
@@ -4339,10 +4349,6 @@ gfp_to_alloc_flags(gfp_mask)
        } else if (unlikely(rt_task(current)) && !in_interrupt())
                alloc_flags |= ALLOC_HARDER;

-#ifdef CONFIG_CMA
-       if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
-               alloc_flags |= ALLOC_CMA;
-#endif
        return alloc_flags;
 }

@@ -4808,9 +4814,6 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
        if (should_fail_alloc_page(gfp_mask, order))
                return false;

-       if (IS_ENABLED(CONFIG_CMA) && ac->migratetype == MIGRATE_MOVABLE)
-               *alloc_flags |= ALLOC_CMA;
-
        return true;
 }

From patchwork Wed Jul 15 05:05:27 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11664201
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, "Aneesh Kumar K . V", Joonsoo Kim
Subject: [PATCH 2/4] mm/gup: restrict CMA region by using allocation scope API
Date: Wed, 15 Jul 2020 14:05:27 +0900
Message-Id: <1594789529-6206-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

We have a well-defined scope API to exclude the CMA region. Use it rather
than manipulating gfp_mask manually. With this change, __GFP_MOVABLE can stay
in gfp_mask, so ZONE_MOVABLE is also searched by the page allocator. For
hugetlb, gfp_mask is rebuilt through its regular allocation-mask filter for
migration targets, htlb_modify_alloc_mask().

Note that this can be considered a fix for commit 9a4e9f3b2d73 ("mm: update
get_user_pages_longterm to migrate pages allocated from CMA region").
However, no "Fixes" tag is added here, since the old behaviour is merely
suboptimal and does not cause any actual problem.
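As an illustration (not part of the patch), the sketch below is a user-space
model of the "allocation scope" pattern this change relies on: in the kernel,
memalloc_nocma_save()/memalloc_nocma_restore() set and clear PF_MEMALLOC_NOCMA
on current->flags, and the allocator observes that flag instead of each caller
editing gfp_mask. The names mirror the kernel helpers, but everything below is
a simplified stand-in.

/*
 * Editorial sketch, not kernel code: save/restore semantics of a per-task
 * "no CMA" allocation scope.
 */
#include <stdio.h>

#define PF_MEMALLOC_NOCMA 0x1

static _Thread_local unsigned int current_flags;

static unsigned int memalloc_nocma_save(void)
{
        unsigned int old = current_flags & PF_MEMALLOC_NOCMA;

        current_flags |= PF_MEMALLOC_NOCMA;
        return old;
}

static void memalloc_nocma_restore(unsigned int old)
{
        current_flags = (current_flags & ~PF_MEMALLOC_NOCMA) | old;
}

/* Stand-in for an allocation that must not come from CMA pageblocks. */
static void allocate_for_longterm_pin(void)
{
        printf("allocating, CMA %s\n",
               (current_flags & PF_MEMALLOC_NOCMA) ?
               "excluded by scope" : "allowed");
}

int main(void)
{
        unsigned int flags = memalloc_nocma_save();

        allocate_for_longterm_pin();    /* CMA excluded inside the scope */
        memalloc_nocma_restore(flags);
        allocate_for_longterm_pin();    /* CMA allowed again */
        return 0;
}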
Suggested-by: Michal Hocko
Signed-off-by: Joonsoo Kim
---
 include/linux/hugetlb.h |  2 ++
 mm/gup.c                | 17 ++++++++---------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6b9508d..2660b04 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -708,6 +708,8 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
        /* Some callers might want to enfoce node */
        modified_mask |= (gfp_mask & __GFP_THISNODE);

+       modified_mask |= (gfp_mask & __GFP_NOWARN);
+
        return modified_mask;
 }

diff --git a/mm/gup.c b/mm/gup.c
index 5daadae..bbd36a1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1619,10 +1619,12 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
         * Trying to allocate a page for migration. Ignore allocation
         * failure warnings. We don't force __GFP_THISNODE here because
         * this node here is the node where we have CMA reservation and
-        * in some case these nodes will have really less non movable
+        * in some case these nodes will have really less non CMA
         * allocation memory.
+        *
+        * Note that CMA region is prohibited by allocation scope.
         */
-       gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
+       gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;

        if (PageHighMem(page))
                gfp_mask |= __GFP_HIGHMEM;
@@ -1630,6 +1632,8 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 #ifdef CONFIG_HUGETLB_PAGE
        if (PageHuge(page)) {
                struct hstate *h = page_hstate(page);
+
+               gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
                /*
                 * We don't want to dequeue from the pool because pool pages will
                 * mostly be from the CMA region.
@@ -1644,11 +1648,6 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
                 */
                gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;

-               /*
-                * Remove the movable mask so that we don't allocate from
-                * CMA area again.
-                */
-               thp_gfpmask &= ~__GFP_MOVABLE;
                thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
                if (!thp)
                        return NULL;
@@ -1794,7 +1793,6 @@ static long __gup_longterm_locked(struct task_struct *tsk,
                                               vmas_tmp, NULL, gup_flags);

        if (gup_flags & FOLL_LONGTERM) {
-               memalloc_nocma_restore(flags);
                if (rc < 0)
                        goto out;

@@ -1807,9 +1805,10 @@ static long __gup_longterm_locked(struct task_struct *tsk,
                        rc = check_and_migrate_cma_pages(tsk, mm, start, rc,
                                                         pages, vmas_tmp,
                                                         gup_flags);
+out:
+               memalloc_nocma_restore(flags);
        }

-out:
        if (vmas_tmp != vmas)
                kfree(vmas_tmp);
        return rc;

From patchwork Wed Jul 15 05:05:28 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11664203
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, "Aneesh Kumar K . V", Joonsoo Kim
Subject: [PATCH 3/4] mm/hugetlb: make hugetlb migration callback CMA aware
Date: Wed, 15 Jul 2020 14:05:28 +0900
Message-Id: <1594789529-6206-3-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

new_non_cma_page() in gup.c needs to allocate a new page that is not in the
CMA area, and it does so by using the allocation scope APIs. However, there
is a work-around for hugetlb. The normal hugetlb allocation API for migration
is alloc_huge_page_nodemask(), which works in two steps: first, dequeue a
page from the hugetlb free pool; second, if no page is available there,
allocate one with the page allocator. (A simplified model of this two-step
allocation is sketched right below.)
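As an illustration (not part of the patch), the sketch below is a user-space
model of that two-step allocation, with the dequeue step made CMA aware in the
way this patch describes. All types and helpers are simplified stand-ins.

/*
 * Editorial sketch, not kernel code: dequeue from a free pool while skipping
 * CMA pages, then fall back to a fresh allocation.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct hpage { bool is_cma; bool in_use; };

static struct hpage pool[] = {          /* stand-in hugetlb free pool */
        { .is_cma = true  },            /* e.g. a page from the CMA reservation */
        { .is_cma = false },
};

/* Step 1: dequeue from the pool, skipping CMA pages when asked to. */
static struct hpage *dequeue_huge_page(bool nocma)
{
        for (size_t i = 0; i < sizeof(pool) / sizeof(pool[0]); i++) {
                if (pool[i].in_use || (nocma && pool[i].is_cma))
                        continue;
                pool[i].in_use = true;
                return &pool[i];
        }
        return NULL;
}

/* Step 2: fall back to allocating a brand-new (non-CMA) huge page. */
static struct hpage *alloc_fresh_huge_page(void)
{
        static struct hpage fresh = { .is_cma = false, .in_use = true };
        return &fresh;
}

static struct hpage *alloc_huge_page_for_migration(bool nocma)
{
        struct hpage *page = dequeue_huge_page(nocma);

        return page ? page : alloc_fresh_huge_page();
}

int main(void)
{
        struct hpage *page = alloc_huge_page_for_migration(true);

        printf("got %s page from %s\n", page->is_cma ? "CMA" : "non-CMA",
               page == &pool[1] ? "the pool" : "the fresh allocator");
        return 0;
}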
new_non_cma_page() cannot use this API, because the first step (the dequeue)
is not aware of the scope API that excludes the CMA area. So
new_non_cma_page() exports the hugetlb-internal function used for the second
step, alloc_migrate_huge_page(), to global scope and calls it directly. This
is suboptimal because hugetlb pages already on the free list cannot be
reused.

This patch fixes the situation by making the hugetlb dequeue function CMA
aware: CMA pages are skipped during the dequeue if the PF_MEMALLOC_NOCMA flag
is set on the current task.

Acked-by: Mike Kravetz
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
 include/linux/hugetlb.h |  2 --
 mm/gup.c                |  6 +-----
 mm/hugetlb.c            | 11 +++++++++--
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2660b04..fb2b5aa 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -509,8 +509,6 @@ struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
                                nodemask_t *nmask, gfp_t gfp_mask);
 struct page *alloc_huge_page_vma(struct hstate *h, struct vm_area_struct *vma,
                                unsigned long address);
-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
-                               int nid, nodemask_t *nmask);
 int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
                        pgoff_t idx);

diff --git a/mm/gup.c b/mm/gup.c
index bbd36a1..4ba822a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1634,11 +1634,7 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
                struct hstate *h = page_hstate(page);

                gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
-               /*
-                * We don't want to dequeue from the pool because pool pages will
-                * mostly be from the CMA region.
-                */
-               return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
+               return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
        }
 #endif
        if (PageTransHuge(page)) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3245aa0..514e29c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -1036,10 +1037,16 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 {
        struct page *page;
+       bool nocma = !!(READ_ONCE(current->flags) & PF_MEMALLOC_NOCMA);
+
+       list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
+               if (nocma && is_migrate_cma_page(page))
+                       continue;

-       list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
                if (!PageHWPoison(page))
                        break;
+       }
+
        /*
         * if 'non-isolated free hugepage' not found on the list,
         * the allocation fails.
@@ -1928,7 +1935,7 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
        return page;
 }

-struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
+static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask,
                                     int nid, nodemask_t *nmask)
 {
        struct page *page;

From patchwork Wed Jul 15 05:05:29 2020
X-Patchwork-Submitter: Joonsoo Kim
X-Patchwork-Id: 11664205
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
 Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
 Naoya Horiguchi, Michal Hocko, "Aneesh Kumar K . V", Joonsoo Kim
Subject: [PATCH 4/4] mm/gup: use a standard migration target allocation callback
Date: Wed, 15 Jul 2020 14:05:29 +0900
Message-Id: <1594789529-6206-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim

There is now a well-defined migration target allocation callback,
alloc_migration_target(). Use it instead of gup's own new_non_cma_page().

Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
Signed-off-by: Joonsoo Kim
---
 mm/gup.c | 54 ++++++------------------------------------------------
 1 file changed, 6 insertions(+), 48 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 4ba822a..628ca4c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1608,52 +1608,6 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }

 #ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page, unsigned long private)
-{
-       /*
-        * We want to make sure we allocate the new page from the same node
-        * as the source page.
-        */
-       int nid = page_to_nid(page);
-       /*
-        * Trying to allocate a page for migration. Ignore allocation
-        * failure warnings. We don't force __GFP_THISNODE here because
-        * this node here is the node where we have CMA reservation and
-        * in some case these nodes will have really less non CMA
-        * allocation memory.
-        *
-        * Note that CMA region is prohibited by allocation scope.
-        */
-       gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
-
-       if (PageHighMem(page))
-               gfp_mask |= __GFP_HIGHMEM;
-
-#ifdef CONFIG_HUGETLB_PAGE
-       if (PageHuge(page)) {
-               struct hstate *h = page_hstate(page);
-
-               gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
-               return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
-       }
-#endif
-       if (PageTransHuge(page)) {
-               struct page *thp;
-               /*
-                * ignore allocation failure warnings
-                */
-               gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
-
-               thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
-               if (!thp)
-                       return NULL;
-               prep_transhuge_page(thp);
-               return thp;
-       }
-
-       return __alloc_pages_node(nid, gfp_mask, 0);
-}
-
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
                                        struct mm_struct *mm,
                                        unsigned long start,
@@ -1668,6 +1622,10 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
        bool migrate_allow = true;
        LIST_HEAD(cma_page_list);
        long ret = nr_pages;
+       struct migration_target_control mtc = {
+               .nid = NUMA_NO_NODE,
+               .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
+       };

 check_again:
        for (i = 0; i < nr_pages;) {
@@ -1713,8 +1671,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
                for (i = 0; i < nr_pages; i++)
                        put_page(pages[i]);

-               if (migrate_pages(&cma_page_list, new_non_cma_page,
-                                 NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+               if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+                       (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
                        /*
                         * some of the pages failed migration. Do get_user_pages
                         * without migration.
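As an illustration (not part of the patch), the sketch below is a user-space
model of the pattern the last hunk switches to: a single standard allocation
callback driven by a control struct that is passed through migrate_pages()'s
private argument. The names mirror the kernel's migration_target_control and
alloc_migration_target, but everything here is a simplified stand-in.

/*
 * Editorial sketch, not kernel code: callback-plus-control-struct migration
 * target allocation.
 */
#include <stdio.h>

struct page { int nid; };

struct migration_target_control {
        int nid;                        /* preferred node, or -1 for "any" */
        unsigned int gfp_mask;          /* allocation constraints */
};

/* Standard target-allocation callback: behaviour driven by the control struct. */
static struct page *alloc_migration_target(struct page *old, unsigned long private)
{
        struct migration_target_control *mtc = (void *)private;
        static struct page newpage;

        newpage.nid = (mtc->nid == -1) ? old->nid : mtc->nid;
        printf("allocating target on node %d with gfp 0x%x\n",
               newpage.nid, mtc->gfp_mask);
        return &newpage;
}

/* Toy stand-in for migrate_pages(): invokes the callback for each page. */
static void migrate_pages(struct page *pages, int nr,
                          struct page *(*get_new_page)(struct page *, unsigned long),
                          unsigned long private)
{
        for (int i = 0; i < nr; i++)
                get_new_page(&pages[i], private);
}

int main(void)
{
        struct page cma_pages[] = { { .nid = 0 }, { .nid = 1 } };
        struct migration_target_control mtc = { .nid = -1, .gfp_mask = 0x10 };

        migrate_pages(cma_pages, 2, alloc_migration_target, (unsigned long)&mtc);
        return 0;
}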