Date: Fri, 6 Mar 2020 15:01:02 -0500
From: Rik van Riel
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
    Roman Gushchin, Qian Cai, Vlastimil Babka, Mel Gorman, Anshuman Khandual
Subject: [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for
    movable allocations
Message-ID: <20200306150102.3e77354b@imladris.surriel.com>
Posting this one for Roman so I can deal with any upstream feedback and
create a v2 if needed, while scratching my head over the next piece of
this puzzle :)

---8<---
From: Roman Gushchin

Currently a CMA area is barely used by the page allocator: it is only
used as a fallback for movable allocations, and kswapd tries hard to
make sure that the fallback path is not used.

This results in a system evicting memory and pushing data into swap,
while lots of CMA memory is still available. This happens despite the
fact that alloc_contig_range is perfectly capable of moving any movable
allocations out of the way of an allocation.

To use the CMA area more effectively, let's alter the rules: if the zone
has more free CMA pages than half of the total free pages in the zone,
use CMA pageblocks first and fall back to movable blocks in case of
failure.

Signed-off-by: Rik van Riel
Co-developed-by: Rik van Riel
Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
Acked-by: Minchan Kim
---
 mm/page_alloc.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c4eb750a199..0fb3c1719625 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 {
 	struct page *page;
 
+	/*
+	 * Balance movable allocations between regular and CMA areas by
+	 * allocating from CMA when over half of the zone's free memory
+	 * is in the CMA area.
+	 */
+	if (migratetype == MIGRATE_MOVABLE &&
+	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		page = __rmqueue_cma_fallback(zone, order);
+		if (page)
+			return page;
+	}
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
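
For anyone who wants to see how the threshold behaves without building a
kernel, here is a minimal userspace sketch of the heuristic the hunk above
adds. It is not kernel code: struct fake_zone, prefer_cma() and the page
counts are made up for illustration, standing in for the values the patch
reads via zone_page_state(zone, NR_FREE_CMA_PAGES) and
zone_page_state(zone, NR_FREE_PAGES).

/*
 * Illustrative userspace model of the "prefer CMA when it holds over half
 * of the zone's free memory" rule. NOT kernel code; all names and numbers
 * below are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_zone {
	unsigned long nr_free_pages;     /* total free pages in the zone */
	unsigned long nr_free_cma_pages; /* free pages in CMA pageblocks */
};

/* Mirror of the new check: prefer CMA once it exceeds half of free memory. */
static bool prefer_cma(const struct fake_zone *z)
{
	return z->nr_free_cma_pages > z->nr_free_pages / 2;
}

int main(void)
{
	/* Hypothetical snapshots of one zone as non-CMA free memory shrinks. */
	struct fake_zone snapshots[] = {
		{ .nr_free_pages = 100000, .nr_free_cma_pages = 20000 },
		{ .nr_free_pages =  60000, .nr_free_cma_pages = 20000 },
		{ .nr_free_pages =  30000, .nr_free_cma_pages = 20000 },
	};

	for (unsigned int i = 0; i < sizeof(snapshots) / sizeof(snapshots[0]); i++) {
		const struct fake_zone *z = &snapshots[i];

		printf("free=%lu cma=%lu -> %s\n",
		       z->nr_free_pages, z->nr_free_cma_pages,
		       prefer_cma(z) ? "try CMA pageblocks first"
				     : "keep CMA as fallback only");
	}
	return 0;
}

Running it shows the decision flip only in the last snapshot, once CMA holds
more than half of the remaining free memory: the cut-off keeps CMA acting as
a pure fallback while plenty of regular memory is free, and only starts
draining CMA pageblocks when they are where most of the free memory lives.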