From patchwork Tue Oct 1 23:56:50 2019
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 11169969
From: Yang Shi
To: kirill.shutemov@linux.intel.com, ktkhai@virtuozzo.com, hannes@cmpxchg.org,
    mhocko@suse.com, hughd@google.com, shakeelb@google.com,
    rientjes@google.com, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm thp: shrink deferred split THPs harder
Date: Wed, 2 Oct 2019 07:56:50 +0800
Message-Id:
<1569974210-55366-1-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1

Deferred split THPs may accumulate with some workloads, and they only get
shrunk when memory pressure is hit.  We currently use DEFAULT_SEEKS to
determine how many objects get scanned and then split if possible, but
deferred split THPs are not like other system cache objects: over-reclaiming
an inode cache, for example, incurs extra I/O, whereas the unmapped pages of
a deferred split THP will not be accessed again.  So we could shrink them
more aggressively.

We could shrink THPs proactively even before memory pressure is hit;
however, IMHO waiting for memory pressure is still a good compromise and
trade-off.  And we do have this simpler way to shrink these objects harder
until we have to take other means to drain them proactively.

Change shrinker->seeks to 0 to shrink deferred split THPs harder.

Cc: Kirill A. Shutemov
Cc: Kirill Tkhai
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Hugh Dickins
Cc: Shakeel Butt
Cc: David Rientjes
Signed-off-by: Yang Shi
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3b78910..1d6b1f1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2955,7 +2955,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 static struct shrinker deferred_split_shrinker = {
 	.count_objects = deferred_split_count,
 	.scan_objects = deferred_split_scan,
-	.seeks = DEFAULT_SEEKS,
+	.seeks = 0,
 	.flags = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE |
 		 SHRINKER_NONSLAB,
 };