From patchwork Thu Mar 24 09:22:23 2022
X-Patchwork-Id: 12790580
From: "zhaoyang.huang"
To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov, ke wang, Zhaoyang Huang
Subject: [RFC PATCH] cgroup: introduce proportional protection on memcg
Date: Thu, 24 Mar 2022 17:22:23 +0800
Message-ID: <1648113743-32622-1-git-send-email-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang

The current memcg protection via min, low and high requires an absolute
estimate of the amount to protect, which can be hard to come up with on
some systems. Furthermore, usage can vary widely between scenarios
(imagine keeping a fixed 50M protected while usage grows from 100M to
300M), which makes a fixed protection less meaningful. Introduce
proportional protection, expressed relative to the memcg's highest usage
ever seen (the watermark), to overcome these constraints.
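As a worked example of the arithmetic (a user-space sketch for illustration
only, not part of the patch; prop_protect, PROP_UNIT and the 50% value are
made up here), 1024 is treated as 100%, so a group configured at 512 keeps
protecting half of its highest-ever usage: the protected amount follows the
watermark from 50M up to 150M as usage grows from 100M to 300M:

#include <stdio.h>

#define PROP_UNIT 1024ULL	/* 1024 == 100% */

/* scale the highest-ever usage (watermark) by the configured ratio */
static unsigned long long prop_protect(unsigned long long prop,
				       unsigned long long watermark)
{
	return prop * watermark / PROP_UNIT;
}

int main(void)
{
	unsigned long long mb = 1024ULL * 1024ULL;
	unsigned long long low_prop = 512;	/* 50% */

	printf("watermark 100M -> protected %lluM\n",
	       prop_protect(low_prop, 100 * mb) / mb);
	printf("watermark 300M -> protected %lluM\n",
	       prop_protect(low_prop, 300 * mb) / mb);
	return 0;
}
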
Signed-off-by: Zhaoyang Huang
---
 include/linux/page_counter.h |  3 +++
 mm/memcontrol.c              | 17 +++++++++++++----
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
index 6795913..7762629 100644
--- a/include/linux/page_counter.h
+++ b/include/linux/page_counter.h
@@ -27,6 +27,9 @@ struct page_counter {
 	unsigned long watermark;
 	unsigned long failcnt;
 
+	/* proportional protection */
+	unsigned long min_prop;
+	unsigned long low_prop;
 	/*
 	 * 'parent' is placed here to be far from 'usage' to reduce
 	 * cache false sharing, as 'usage' is written mostly while
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 508bcea..937c6ce 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6616,6 +6616,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 {
 	unsigned long usage, parent_usage;
 	struct mem_cgroup *parent;
+	unsigned long memcg_emin, memcg_elow, parent_emin, parent_elow;
 
 	if (mem_cgroup_disabled())
 		return;
@@ -6650,14 +6651,22 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 
 	parent_usage = page_counter_read(&parent->memory);
 
+	/* use proportional protect first and take 1024 as 100% */
+	memcg_emin = READ_ONCE(memcg->memory.min_prop) ?
+		READ_ONCE(memcg->memory.min_prop) * READ_ONCE(memcg->memory.watermark) / 1024 : READ_ONCE(memcg->memory.min);
+	memcg_elow = READ_ONCE(memcg->memory.low_prop) ?
+		READ_ONCE(memcg->memory.low_prop) * READ_ONCE(memcg->memory.watermark) / 1024 : READ_ONCE(memcg->memory.low);
+	parent_emin = READ_ONCE(parent->memory.min_prop) ?
+		READ_ONCE(parent->memory.min_prop) * READ_ONCE(parent->memory.watermark) / 1024 : READ_ONCE(parent->memory.emin);
+	parent_elow = READ_ONCE(parent->memory.low_prop) ?
+		READ_ONCE(parent->memory.low_prop) * READ_ONCE(parent->memory.watermark) / 1024 : READ_ONCE(parent->memory.elow);
+
 	WRITE_ONCE(memcg->memory.emin, effective_protection(usage, parent_usage,
-			READ_ONCE(memcg->memory.min),
-			READ_ONCE(parent->memory.emin),
+			memcg_emin, parent_emin,
 			atomic_long_read(&parent->memory.children_min_usage)));
 
 	WRITE_ONCE(memcg->memory.elow, effective_protection(usage, parent_usage,
-			READ_ONCE(memcg->memory.low),
-			READ_ONCE(parent->memory.elow),
+			memcg_elow, parent_elow,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
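
The diff applies the same pattern four times: when min_prop/low_prop is
non-zero, the effective protection is the watermark scaled by that ratio
(1024 == 100%); when it is zero, the code falls back to the absolute
min/low (or the parent's emin/elow), so groups that never set a
proportional value keep today's behaviour. A hypothetical helper that
restates that choice (not part of the patch; the name is invented here for
illustration):

/* prop of 0 means "not configured": fall back to the absolute value */
unsigned long prop_or_abs(unsigned long prop, unsigned long watermark,
			  unsigned long absolute)
{
	return prop ? prop * watermark / 1024 : absolute;
}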