From patchwork Tue Sep 17 11:58:24 2019
From: Lin Feng <linf@wangsu.com>
To: corbet@lwn.net, mcgrof@kernel.org, akpm@linux-foundation.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: keescook@chromium.org, mchehab+samsung@kernel.org,
    mgorman@techsingularity.net, vbabka@suse.cz, mhocko@suse.com,
    ktkhai@virtuozzo.com, hannes@cmpxchg.org, linf@wangsu.com
Subject: [PATCH] [RFC] vmscan.c: add a sysctl entry for controlling memory
 reclaim IO congestion_wait length
Date: Tue, 17 Sep 2019 19:58:24 +0800
Message-Id: <20190917115824.16990-1-linf@wangsu.com>

This sysctl is named mm_reclaim_congestion_wait_jiffies and defaults to
HZ/10, unchanged from the old hard-coded value. It is in units of jiffies
and can be set in the range [1, 100], so check CONFIG_HZ before tuning.

Both the direct and the background (kswapd) page reclaim paths may fall
into msleep(100), congestion_wait(HZ/10) or wait_iff_congested(HZ/10)
while under IO pressure. The sleep length is hard-coded, and the latter
two introduce 100ms of iowait each time. So if page reclaim is relatively
active, for example under frequent high-order page reclaim, a lot of
iowait introduced by congestion_wait(HZ/10) and wait_iff_congested(HZ/10)
can show up.

The 100ms sleep length is appropriate when the backing devices are slow,
like traditional rotational disks. But when the backing devices are
high-end storage, such as high-IOPS SSDs or even faster devices, the high
iowait introduced by page reclaim is misleading, because the storage IO
utilization seen by iostat stays quite low; in that case shortening the
congestion_wait time to 1ms is likely enough for high-end SSDs. Another
benefit is that it potentially shortens the time direct reclaim is
blocked when the kernel falls into the sync reclaim path, which may
improve user application response times.

All-SSD boxes are a trend, so introduce this sysctl entry to give system
administrators a way to relieve these concerns.
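For illustration (assumed CONFIG_HZ values, not measurements from the test
box below): since the control is in jiffies, its effect in milliseconds
depends on CONFIG_HZ. On a kernel built with CONFIG_HZ=1000 the default
HZ/10 = 100 jiffies = 100ms, and an administrator of an all-SSD box could
shorten the reclaim wait to 1 jiffy (1ms) with:

  echo 1 > /proc/sys/vm/mm_reclaim_congestion_wait_jiffies

or equivalently sysctl -w vm.mm_reclaim_congestion_wait_jiffies=1, since
the entry is registered in vm_table.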
Tested:

1. Before this patch:

top - 10:10:40 up 8 days, 16:22, 4 users, load average: 2.21, 2.15, 2.10
Tasks: 718 total, 5 running, 712 sleeping, 0 stopped, 1 zombie
Cpu0  : 0.3%us, 3.4%sy, 0.0%ni, 95.3%id,  1.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1  : 1.4%us, 1.7%sy, 0.0%ni, 95.2%id,  0.0%wa, 0.0%hi, 1.7%si, 0.0%st
Cpu2  : 4.7%us, 3.3%sy, 0.0%ni, 91.0%id,  0.0%wa, 0.0%hi, 1.0%si, 0.0%st
Cpu3  : 7.0%us, 3.7%sy, 0.0%ni, 87.7%id,  1.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu4  : 1.0%us, 2.0%sy, 0.0%ni, 96.3%id,  0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu5  : 1.0%us, 2.0%sy, 0.0%ni,  1.7%id, 95.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu6  : 1.0%us, 1.3%sy, 0.0%ni, 97.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu7  : 1.3%us, 1.0%sy, 0.0%ni, 97.7%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu8  : 4.3%us, 1.3%sy, 0.0%ni, 94.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu9  : 0.7%us, 0.7%sy, 0.0%ni, 98.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu10 : 0.7%us, 1.0%sy, 0.0%ni, 98.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 1.0%us, 1.0%sy, 0.0%ni, 97.7%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu12 : 3.0%us, 1.0%sy, 0.0%ni, 95.3%id,  0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu13 : 0.3%us, 1.3%sy, 0.0%ni, 88.6%id,  9.4%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu14 : 3.3%us, 2.3%sy, 0.0%ni, 93.7%id,  0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu15 : 6.4%us, 3.0%sy, 0.0%ni, 90.2%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu16 : 2.7%us, 1.7%sy, 0.0%ni, 95.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu17 : 1.0%us, 1.7%sy, 0.0%ni, 97.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu18 : 1.3%us, 1.0%sy, 0.0%ni, 97.0%id,  0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu19 : 4.3%us, 1.7%sy, 0.0%ni, 86.0%id,  7.7%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu20 : 0.7%us, 1.3%sy, 0.0%ni, 97.7%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu21 : 0.3%us, 1.7%sy, 0.0%ni, 50.2%id, 47.5%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu22 : 0.7%us, 0.7%sy, 0.0%ni, 98.7%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu23 : 0.7%us, 0.7%sy, 0.0%ni, 98.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
2. After this patch, with mm_reclaim_congestion_wait_jiffies set to 1:

top - 10:12:19 up 8 days, 16:24, 4 users, load average: 1.32, 1.93, 2.03
Tasks: 724 total, 2 running, 721 sleeping, 0 stopped, 1 zombie
Cpu0  : 4.4%us, 3.0%sy, 0.0%ni, 90.3%id,  1.3%wa, 0.0%hi, 1.0%si, 0.0%st
Cpu1  : 2.1%us, 1.4%sy, 0.0%ni, 93.5%id,  0.7%wa, 0.0%hi, 2.4%si, 0.0%st
Cpu2  : 2.7%us, 1.0%sy, 0.0%ni, 96.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3  : 1.0%us, 1.0%sy, 0.0%ni, 97.7%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu4  : 0.7%us, 1.0%sy, 0.0%ni, 97.7%id,  0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu5  : 1.0%us, 0.7%sy, 0.0%ni, 97.7%id,  0.3%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu6  : 1.7%us, 1.0%sy, 0.0%ni, 97.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7  : 2.0%us, 0.7%sy, 0.0%ni, 94.3%id,  2.7%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu8  : 2.0%us, 0.7%sy, 0.0%ni, 97.0%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu9  : 0.7%us, 1.0%sy, 0.0%ni, 97.7%id,  0.7%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu10 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu11 : 0.7%us, 0.3%sy, 0.0%ni, 99.0%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu12 : 0.7%us, 1.0%sy, 0.0%ni, 98.0%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu13 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu14 : 1.7%us, 0.7%sy, 0.0%ni, 97.3%id,  0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu15 : 4.3%us, 1.0%sy, 0.0%ni, 94.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu16 : 1.7%us, 1.3%sy, 0.0%ni, 96.3%id,  0.0%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu17 : 2.0%us, 1.3%sy, 0.0%ni, 96.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu18 : 0.3%us, 0.3%sy, 0.0%ni, 99.3%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu19 : 1.0%us, 1.0%sy, 0.0%ni, 97.6%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu20 : 1.3%us, 0.7%sy, 0.0%ni, 97.0%id,  0.7%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu21 : 0.7%us, 0.7%sy, 0.0%ni, 98.3%id,  0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu22 : 1.0%us, 1.0%sy, 0.0%ni, 98.0%id,  0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu23 : 0.7%us, 0.3%sy, 0.0%ni, 98.3%id,  0.0%wa, 0.0%hi, 0.7%si, 0.0%st

Signed-off-by: Lin Feng <linf@wangsu.com>
---
 Documentation/admin-guide/sysctl/vm.rst | 17 +++++++++++++++++
 kernel/sysctl.c                         | 10 ++++++++++
 mm/vmscan.c                             | 12 +++++++++---
 3 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 64aeee1009ca..e4dd83731ecf 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -837,6 +837,23 @@ than the high water mark in a zone.
 
 The default value is 60.
 
+mm_reclaim_congestion_wait_jiffies
+==================================
+
+This control defines how long the kernel will wait/sleep while system
+memory is under pressure and memory reclaim is relatively active.
+Lower values decrease the kernel wait/sleep time.
+
+It is suggested to lower this value on high-end boxes where the system is
+under memory pressure but storage IO utilization is low and CPU iowait is
+high; this may also decrease user application response time.
+
+Keep this control unchanged if your box does not match the case above.
+
+The default value is HZ/10, which equals 100ms independent of how
+CONFIG_HZ is defined.
+
+
 unprivileged_userfaultfd
 ========================
 
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 078950d9605b..064a3da04789 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -114,6 +114,7 @@ extern int pid_max;
 extern int pid_max_min, pid_max_max;
 extern int percpu_pagelist_fraction;
 extern int latencytop_enabled;
+extern int mm_reclaim_congestion_wait_jiffies;
 extern unsigned int sysctl_nr_open_min, sysctl_nr_open_max;
 #ifndef CONFIG_MMU
 extern int sysctl_nr_trim_pages;
@@ -1413,6 +1414,15 @@ static struct ctl_table vm_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= &one_hundred,
 	},
+	{
+		.procname	= "mm_reclaim_congestion_wait_jiffies",
+		.data		= &mm_reclaim_congestion_wait_jiffies,
+		.maxlen		= sizeof(mm_reclaim_congestion_wait_jiffies),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ONE,
+		.extra2		= &one_hundred,
+	},
 #ifdef CONFIG_HUGETLB_PAGE
 	{
 		.procname	= "nr_hugepages",
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a6c5d0b28321..8c19afdcff95 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -165,6 +165,12 @@ struct scan_control {
  * From 0 .. 100.  Higher means more swappy.
  */
 int vm_swappiness = 60;
+
+/*
+ * From 1 .. 100.  Lower means shorter memory reclaim IO congestion wait time.
+ */
+int mm_reclaim_congestion_wait_jiffies = HZ / 10;
+
 /*
  * The total number of pages which are beyond the high watermark within all
  * zones.
@@ -1966,7 +1972,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 			return 0;
 
 		/* wait a bit for the reclaimer. */
-		msleep(100);
+		msleep(jiffies_to_msecs(mm_reclaim_congestion_wait_jiffies));
 		stalled = true;
 
 		/* We are about to die and free our memory. Return now. */
@@ -2788,7 +2794,7 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		 * faster than they are written so also forcibly stall.
 		 */
 		if (sc->nr.immediate)
-			congestion_wait(BLK_RW_ASYNC, HZ/10);
+			congestion_wait(BLK_RW_ASYNC, mm_reclaim_congestion_wait_jiffies);
 	}
 
 	/*
@@ -2807,7 +2813,7 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	 */
 	if (!sc->hibernation_mode && !current_is_kswapd() &&
 	    current_may_throttle() && pgdat_memcg_congested(pgdat, root))
-		wait_iff_congested(BLK_RW_ASYNC, HZ/10);
+		wait_iff_congested(BLK_RW_ASYNC, mm_reclaim_congestion_wait_jiffies);
 
 	} while (should_continue_reclaim(pgdat, sc->nr_reclaimed - nr_reclaimed,
 					 sc->nr_scanned - nr_scanned, sc));
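For reference, a minimal userspace sketch (not part of the patch) of how
the sysctl value maps to sleep time under common CONFIG_HZ choices.
jiffies_to_msecs_approx() below mimics the kernel's jiffies_to_msecs(),
which reduces to this exact form for CONFIG_HZ values that divide 1000:

#include <stdio.h>

/* Equivalent of the kernel's jiffies_to_msecs() when 1000 % HZ == 0. */
static unsigned int jiffies_to_msecs_approx(unsigned int j, unsigned int hz)
{
	return j * (1000 / hz);
}

int main(void)
{
	const unsigned int hz_values[] = { 100, 250, 1000 };

	for (int i = 0; i < 3; i++) {
		unsigned int hz = hz_values[i];

		/* Default is HZ/10 jiffies (always 100ms); minimum is 1 jiffy. */
		printf("CONFIG_HZ=%-4u: default %3u jiffies = %3u ms, minimum 1 jiffy = %2u ms\n",
		       hz, hz / 10, jiffies_to_msecs_approx(hz / 10, hz),
		       jiffies_to_msecs_approx(1, hz));
	}
	return 0;
}

So with CONFIG_HZ=1000 the minimum setting of 1 jiffy gives the 1ms wait
mentioned above, while a CONFIG_HZ=100 kernel cannot go below 10ms.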