From patchwork Mon Aug 17 14:08:29 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Waiman Long <longman@redhat.com>
X-Patchwork-Id: 11718345
From: Waiman Long <longman@redhat.com>
To: Andrew Morton, Johannes Weiner,
	Michal Hocko, Vladimir Davydov, Jonathan Corbet, Alexey Dobriyan,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Waiman Long <longman@redhat.com>
Subject: [RFC PATCH 6/8] memcg: Introduce additional memory control slowdown if needed
Date: Mon, 17 Aug 2020 10:08:29 -0400
Message-Id: <20200817140831.30260-7-longman@redhat.com>
In-Reply-To: <20200817140831.30260-1-longman@redhat.com>
References: <20200817140831.30260-1-longman@redhat.com>

On fast cpus backed by slow disks, repeatedly yielding the cpu with
PR_MEMACT_SLOWDOWN may not slow down memory allocation enough for memory
reclaim to catch up. In particular, when a large memory block is mmap'ed
and its pages are faulted in one at a time, the syscall delays are never
activated during the fault-in process. To be safe, an additional variable
delay of 20-5000 us is added to __mem_cgroup_over_high_action() whenever
the excess memory used is more than 1/256 of the memory limit.
Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/memcontrol.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6488f8a10d66..bddf3e659469 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2643,11 +2643,10 @@ get_rss_counter(struct mm_struct *mm, int mm_bit, u16 flags, int rss_bit)
 static bool __mem_cgroup_over_high_action(struct mem_cgroup *memcg, u8 action,
 					  u16 flags)
 {
-	unsigned long mem = 0;
+	unsigned long mem = 0, limit = 0, excess = 0;
 	bool ret = false;
 	struct mm_struct *mm = get_task_mm(current);
 	u8 signal = READ_ONCE(current->memcg_over_high_signal);
-	u32 limit;
 
 	if (!mm)
 		return true;	/* No more check is needed */
@@ -2657,9 +2656,10 @@ static bool __mem_cgroup_over_high_action(struct mem_cgroup *memcg, u8 action,
 
 	if (memcg) {
 		mem = page_counter_read(&memcg->memory);
-		limit = READ_ONCE(current->memcg_over_high_climit);
-		if (mem <= memcg->memory.high + limit)
+		limit = READ_ONCE(current->memcg_over_high_climit) +
+			memcg->memory.high;
+		if (mem <= limit)
 			goto out;
+		excess = mem - limit;
 	}
 
 	/*
@@ -2676,6 +2676,7 @@ static bool __mem_cgroup_over_high_action(struct mem_cgroup *memcg, u8 action,
 		limit = READ_ONCE(current->memcg_over_high_plimit);
 		if (mem <= limit)
 			goto out;
+		excess = mem - limit;
 	}
 
 	ret = true;
@@ -2685,10 +2686,19 @@ static bool __mem_cgroup_over_high_action(struct mem_cgroup *memcg, u8 action,
 		break;
 	case PR_MEMACT_SLOWDOWN:
 		/*
-		 * Slow down by yielding the cpu & adding delay to
-		 * memory allocation syscalls.
+		 * Slow down by yielding the cpu & adding delay to memory
+		 * allocation syscalls.
+		 *
+		 * An additional 20-5000 us of delay is added in case the
+		 * excess memory is more than 1/256 of the limit.
 		 */
 		WRITE_ONCE(current->memcg_over_limit, true);
+		limit >>= 8;
+		if (limit && (excess > limit)) {
+			int delay = min(5000UL, excess/limit * 20UL);
+
+			udelay(delay);
+		}
		set_tsk_need_resched(current);
		set_preempt_need_resched();
		break;