From patchwork Fri Mar 6 22:06:47 2020
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 11424721
Date: Fri, 6 Mar 2020 17:06:47 -0500
From: Rik van Riel
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
 Anshuman Khandual, Mel Gorman, Vlastimil Babka, Qian Cai, Roman Gushchin
Subject: [PATCH] mm,cma: remove pfn_range_valid_contig
Message-ID: <20200306170647.455a2db3@imladris.surriel.com>

The function pfn_range_valid_contig checks whether all memory in the
target area is free.
This causes unnecessary CMA failures, since alloc_contig_range will
migrate movable memory out of a target range, and has its own sanity
check early on in has_unmovable_pages, which is called from
start_isolate_page_range & set_migratetype_isolate.

Relying on that has_unmovable_pages call simplifies the CMA code and
results in an increased success rate of CMA allocations.

Signed-off-by: Rik van Riel <riel@surriel.com>
---
 mm/page_alloc.c | 47 +++--------------------------------------------
 1 file changed, 3 insertions(+), 44 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0fb3c1719625..75e84907d8c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8539,32 +8539,6 @@ static int __alloc_contig_pages(unsigned long start_pfn,
 				  gfp_mask);
 }
 
-static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
-				   unsigned long nr_pages)
-{
-	unsigned long i, end_pfn = start_pfn + nr_pages;
-	struct page *page;
-
-	for (i = start_pfn; i < end_pfn; i++) {
-		page = pfn_to_online_page(i);
-		if (!page)
-			return false;
-
-		if (page_zone(page) != z)
-			return false;
-
-		if (PageReserved(page))
-			return false;
-
-		if (page_count(page) > 0)
-			return false;
-
-		if (PageHuge(page))
-			return false;
-	}
-	return true;
-}
-
 static bool zone_spans_last_pfn(const struct zone *zone, unsigned long start_pfn,
 				unsigned long nr_pages)
 {
@@ -8605,28 +8579,13 @@ struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 	zonelist = node_zonelist(nid, gfp_mask);
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 					gfp_zone(gfp_mask), nodemask) {
-		spin_lock_irqsave(&zone->lock, flags);
-
 		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
 		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
-			if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
-				/*
-				 * We release the zone lock here because
-				 * alloc_contig_range() will also lock the zone
-				 * at some point. If there's an allocation
-				 * spinning on this lock, it may win the race
-				 * and cause alloc_contig_range() to fail...
-				 */
-				spin_unlock_irqrestore(&zone->lock, flags);
-				ret = __alloc_contig_pages(pfn, nr_pages,
-							   gfp_mask);
-				if (!ret)
-					return pfn_to_page(pfn);
-				spin_lock_irqsave(&zone->lock, flags);
-			}
+			ret = __alloc_contig_pages(pfn, nr_pages, gfp_mask);
+			if (!ret)
+				return pfn_to_page(pfn);
 			pfn += nr_pages;
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 	return NULL;
 }