From patchwork Thu Jun 11 12:09:47 2020
X-Patchwork-Submitter: Charan Teja Kalla
X-Patchwork-Id: 11599881
To: Andrew Morton, mgorman@techsingularity.net, linux-mm@kvack.org
Cc: LKML, vinmenon@codeaurora.org
From: Charan Teja Kalla
Subject: [PATCH] mm, page_alloc: skip ->watermark_boost for atomic order-0 allocations-fix
Message-ID: <31556793-57b1-1c21-1a9d-22674d9bd938@codeaurora.org>
Date: Thu, 11 Jun 2020 17:39:47 +0530
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

When boosting is enabled, it is observed that the rate of atomic
order-0 allocation failures is high, due to the fact that free levels
in the system are checked with the ->watermark_boost offset applied.
This is not a problem for sleepable allocations, but for atomic
allocations it looks like a regression.

This problem is seen frequently on an Android kernel running on
Snapdragon hardware with 4GB of RAM. When no extfrag event has
occurred in the system, the ->watermark_boost factor is zero, so the
watermark configuration is:

   _watermark = (
          [WMARK_MIN] = 1272,   --> ~5MB
          [WMARK_LOW] = 9067,   --> ~36MB
          [WMARK_HIGH] = 9385), --> ~38MB
   watermark_boost = 0

After launching some memory-hungry applications on Android, enough
extfrag events can occur that ->watermark_boost is raised to its
maximum, i.e. the default boost factor makes it 150% of the high
watermark:

   _watermark = (
          [WMARK_MIN] = 1272,   --> ~5MB
          [WMARK_LOW] = 9067,   --> ~36MB
          [WMARK_HIGH] = 9385), --> ~38MB
   watermark_boost = 14077,     --> ~57MB

With the default system configuration, ~2MB of free memory suffices
for an atomic order-0 allocation to succeed. But boosting raises the
min watermark to ~61MB, so the system must have at least ~23MB of free
memory for an atomic order-0 allocation to succeed (from the
calculations in zone_watermark_ok(), min = 3/4(min/2)). Yet failures
are observed even though the system has ~20MB of free memory. In
testing, this is reproducible as early as the first 300 seconds after
boot, and with lower-RAM configurations (<2GB) as early as the first
150 seconds after boot.

These failures can be avoided by excluding ->watermark_boost from the
watermark calculations for atomic order-0 allocations.
Fix-suggested-by: Mel Gorman
Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Ralph Siemsen
---
Change in linux-next: https://lore.kernel.org/patchwork/patch/1244272/

 mm/page_alloc.c | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0c435b2..18f407e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3580,7 +3580,7 @@ bool zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 
 static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 				unsigned long mark, int highest_zoneidx,
-				unsigned int alloc_flags)
+				unsigned int alloc_flags, gfp_t gfp_mask)
 {
 	long free_pages = zone_page_state(z, NR_FREE_PAGES);
 	long cma_pages = 0;
@@ -3602,8 +3602,23 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 		    mark + z->lowmem_reserve[highest_zoneidx])
 			return true;
 
-	return __zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags,
-					free_pages);
+	if (__zone_watermark_ok(z, order, mark, highest_zoneidx, alloc_flags,
+					free_pages))
+		return true;
+	/*
+	 * Ignore watermark boosting for GFP_ATOMIC order-0 allocations
+	 * when checking the min watermark. The min watermark is the
+	 * point where boosting is ignored so that kswapd is woken up
+	 * when below the low watermark.
+	 */
+	if (unlikely(!order && (gfp_mask & __GFP_ATOMIC) && z->watermark_boost
+		     && ((alloc_flags & ALLOC_WMARK_MASK) == WMARK_MIN))) {
+		mark = z->_watermark[WMARK_MIN];
+		return __zone_watermark_ok(z, order, mark, highest_zoneidx,
+					   alloc_flags, free_pages);
+	}
+
+	return false;
 }
 
 bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
@@ -3746,20 +3761,9 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
 	}
 
 	mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
-	/*
-	 * Allow GFP_ATOMIC order-0 allocations to exclude the
-	 * zone->watermark_boost in their watermark calculations.
-	 * We rely on the ALLOC_ flags set for GFP_ATOMIC requests in
-	 * gfp_to_alloc_flags() for this. Reason not to use the
-	 * GFP_ATOMIC directly is that we want to fall back to slow path
-	 * thus wake up kswapd.
-	 */
-	if (unlikely(!order && !(alloc_flags & ALLOC_WMARK_MASK) &&
-	     (alloc_flags & (ALLOC_HARDER | ALLOC_HIGH)))) {
-		mark = zone->_watermark[WMARK_MIN];
-	}
 	if (!zone_watermark_fast(zone, order, mark,
-				       ac->highest_zoneidx, alloc_flags)) {
+				       ac->highest_zoneidx, alloc_flags,
+				       gfp_mask)) {
 		int ret;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT