From patchwork Fri Feb 21 21:53:43 2020
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 11397603
From: Rik van Riel
To: linux-kernel@vger.kernel.org, riel@fb.com
Cc: kernel-team@fb.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 mhocko@kernel.org, vbabka@suse.cz, mgorman@techsingularity.net,
 rientjes@google.com, aarcange@redhat.com, Rik van Riel
Subject: [PATCH 2/2] mm,thp,compaction,cma: allow THP migration for CMA allocations
Date: Fri, 21 Feb 2020 16:53:43 -0500
Message-Id: <3289dc5e6c4c3174999598d8293adf8ed3e93b57.1582321645.git.riel@surriel.com>
The code to implement THP migrations already exists, and the code for CMA
to clear out a region of memory already exists. Only a few small tweaks are
needed to allow CMA to move THP memory when attempting an allocation from
alloc_contig_range.

With these changes, migrating THPs from a CMA area works when allocating
a 1GB hugepage from CMA memory.

Signed-off-by: Rik van Riel
Reviewed-by: Zi Yan
---
 mm/compaction.c | 16 +++++++++-------
 mm/page_alloc.c |  6 ++++--
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 672d3c78c6ab..f3e05c91df62 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -894,12 +894,12 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/*
 		 * Regardless of being on LRU, compound pages such as THP and
-		 * hugetlbfs are not to be compacted. We can potentially save
-		 * a lot of iterations if we skip them at once. The check is
-		 * racy, but we can consider only valid values and the only
-		 * danger is skipping too much.
+		 * hugetlbfs are not to be compacted most of the time. We can
+		 * potentially save a lot of iterations if we skip them at
+		 * once. The check is racy, but we can consider only valid
+		 * values and the only danger is skipping too much.
 		 */
-		if (PageCompound(page)) {
+		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
 			if (likely(order < MAX_ORDER))
@@ -969,7 +969,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * and it's on LRU. It can only be a THP so the order
 		 * is safe to read and it's 0 for tail pages.
 		 */
-		if (unlikely(PageCompound(page))) {
+		if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
 			low_pfn += compound_nr(page) - 1;
 			goto isolate_fail;
 		}
@@ -981,7 +981,9 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (__isolate_lru_page(page, isolate_mode) != 0)
 			goto isolate_fail;
 
-		VM_BUG_ON_PAGE(PageCompound(page), page);
+		/* The whole page is taken off the LRU; skip the tail pages. */
+		if (PageCompound(page))
+			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
 		del_page_from_lru_list(page, lruvec, page_lru(page));
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a36736812596..38c8ddfcecc8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8253,14 +8253,16 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 
 		/*
 		 * Hugepages are not in LRU lists, but they're movable.
+		 * THPs are on the LRU, but need to be counted as #small pages.
 		 * We need not scan over tail pages because we don't
 		 * handle each tail page individually in migration.
 		 */
-		if (PageHuge(page)) {
+		if (PageTransHuge(page)) {
 			struct page *head = compound_head(page);
 			unsigned int skip_pages;
 
-			if (!hugepage_migration_supported(page_hstate(head)))
+			if (PageHuge(page) &&
+			    !hugepage_migration_supported(page_hstate(head)))
 				return page;
 
 			skip_pages = compound_nr(head) - (page - head);
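
For readers skimming the compaction hunks: when the scan runs on behalf of
alloc_contig_range (the alloc_contig flag introduced earlier in this series),
a compound page is no longer skipped outright; the head is isolated for
migration and low_pfn jumps past the tail pages, since migration moves the
THP as a single unit. Below is a minimal userspace C model of that control
flow -- illustration only, every type and name in it is invented for the
example, none of it is kernel code:

/*
 * toy_scan.c -- userspace toy, NOT kernel code.  It models the pfn scan
 * in isolate_migratepages_block(): without alloc_contig a compound page
 * is skipped whole; with alloc_contig its head is isolated and the scan
 * then steps over the tails (the low_pfn += compound_nr(page) - 1 above).
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	bool head;		/* head of a compound page? */
	bool tail;		/* tail of a compound page? */
	unsigned int order;	/* valid on the head only */
};

static unsigned long compound_nr(const struct toy_page *p)
{
	return 1UL << p->order;	/* base pages in the compound page */
}

static void scan_block(struct toy_page *pages, unsigned long nr,
		       bool alloc_contig)
{
	for (unsigned long pfn = 0; pfn < nr; pfn++) {
		struct toy_page *page = &pages[pfn];

		if (page->head && !alloc_contig) {
			/* old behaviour: leave the THP alone entirely */
			printf("pfn %lu: skip order-%u compound page\n",
			       pfn, page->order);
			pfn += compound_nr(page) - 1;
			continue;
		}
		if (page->head) {
			/* new behaviour: isolate the whole THP for
			 * migration, then step over its tail pages */
			printf("pfn %lu: isolate order-%u compound page\n",
			       pfn, page->order);
			pfn += compound_nr(page) - 1;
			continue;
		}
		if (!page->tail)
			printf("pfn %lu: isolate base page\n", pfn);
	}
}

int main(void)
{
	/* eight pfns; an order-2 compound page occupies pfns 1..4 */
	struct toy_page pages[8] = { 0 };

	pages[1].head = true;
	pages[1].order = 2;
	pages[2].tail = pages[3].tail = pages[4].tail = true;

	puts("without alloc_contig:");
	scan_block(pages, 8, false);
	puts("with alloc_contig:");
	scan_block(pages, 8, true);
	return 0;
}

Either way the scan pointer advances past the tail pages in one step; the
only difference is whether the head was handed to migration first.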
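
The has_unmovable_pages() hunk reuses the existing hugetlb skip computation
for THPs: if the scan enters a compound page somewhere past its head, only
the remaining tail pages need to be stepped over. The arithmetic, with
made-up numbers (again a standalone sketch, not kernel code):

/*
 * skip_demo.c -- the skip_pages arithmetic from the page_alloc.c hunk,
 * with hypothetical values.  For a 2MB THP (order 9, 512 base pages)
 * whose head sits at pfn 4096, a scan entering at pfn 4099 must skip
 * the remaining 512 - 3 = 509 pages.
 */
#include <stdio.h>

int main(void)
{
	unsigned long head_pfn = 4096;	/* pfn of the compound head */
	unsigned long scan_pfn = 4099;	/* pfn where the scan entered */
	unsigned long nr_pages = 512;	/* compound_nr() of an order-9 THP */

	/* mirrors: skip_pages = compound_nr(head) - (page - head); */
	unsigned long skip_pages = nr_pages - (scan_pfn - head_pfn);

	printf("skip %lu pages, resume at pfn %lu\n",
	       skip_pages, scan_pfn + skip_pages);
	return 0;
}

The scan resumes at pfn 4608, the first pfn past the THP, so no tail page
is examined twice.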