From patchwork Mon Apr  5 17:08:24 2021
X-Patchwork-Submitter: Tim Chen
X-Patchwork-Id: 12183449
From: Tim Chen
To: Michal Hocko
Cc: Tim Chen, Johannes Weiner, Andrew Morton, Dave Hansen, Ying Huang,
    Dan Williams, David Rientjes, Shakeel Butt, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 00/11] Manage the top tier memory in a tiered memory
Date: Mon, 5 Apr 2021 10:08:24 -0700

Traditionally, all memory is DRAM.  Some DRAM might be closer/faster
than others NUMA-wise, but a byte of media has about the same cost
whether it is close or far.  But with new memory tiers such as
Persistent Memory (PMEM), there is a choice between fast/expensive DRAM
and slow/cheap PMEM.
The fast/expensive memory lives in the top tier of the memory hierarchy.

Previously, the patchset

[PATCH 00/10] [v7] Migrate Pages in lieu of discard
https://lore.kernel.org/linux-mm/20210401183216.443C4443@viggo.jf.intel.com/

provided a mechanism to demote cold pages from a DRAM node into PMEM,
and the patchset

[PATCH 0/6] [RFC v6] NUMA balancing: optimize memory placement for memory tiering system
https://lore.kernel.org/linux-mm/20210311081821.138467-1-ying.huang@intel.com/

provided a mechanism to promote hot pages in PMEM to a DRAM node,
leveraging autonuma.  Together, the two patchsets keep hot pages in DRAM
and colder pages in PMEM.

To make fine-grained cgroup based management of the precious top tier
DRAM memory possible, this patchset adds a few new features:

1. Provides memory monitors on the amount of top tier memory used per
   cgroup and by the system as a whole.
2. Applies soft limits on the top tier memory each cgroup uses.
3. Enables kswapd to demote top tier pages from cgroups with excess
   top tier memory usage.

This allows us to provision different amounts of top tier memory to
each cgroup according to the cgroup's latency needs.

The patchset is based on the cgroup v1 interface.  One shortcoming of
the v1 interface is that the limit on a cgroup is a soft limit, so a
cgroup can exceed the limit quite a bit before reclaim via page
demotion reins it in.  We are also working on a cgroup v2 control
interface that will have a max limit on the top tier memory per cgroup,
but it requires additional logic to fall back and allocate from non top
tier memory when a cgroup reaches the maximum limit.  This simpler
cgroup v1 implementation, with all its warts, is used to illustrate the
concept of cgroup based top tier memory management and serves as a
starting point for discussion.
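From an administrator's point of view, the workflow enabled by the three
features above might look like the following sketch.  Note this is purely
illustrative: the memory.toptier_* control file names are assumptions
made up for this sketch, not names taken from the patches, and the
commands are guarded so they are a no-op on a kernel without the patchset.

```shell
#!/bin/sh
# Illustrative sketch only: the memory.toptier_* file names below are
# hypothetical, chosen to mirror the existing cgroup v1 memory.* files.

CG=/sys/fs/cgroup/memory/latency_sensitive
LIMIT=$((2 * 1024 * 1024 * 1024))      # 2 GiB top tier soft limit

# Guarded so this does nothing on kernels without the cgroup v1 memory
# controller mounted or without these patches applied.
if [ -d /sys/fs/cgroup/memory ]; then
    mkdir -p "$CG" 2>/dev/null || true
    # Feature 2: apply a soft limit on this cgroup's top tier usage.
    echo "$LIMIT" > "$CG/memory.toptier_soft_limit_in_bytes" 2>/dev/null || true
    # Feature 1: monitor per-cgroup top tier usage.  Feature 3 then lets
    # kswapd demote pages to PMEM when usage exceeds the soft limit.
    cat "$CG/memory.toptier_usage_in_bytes" 2>/dev/null || true
fi

echo "requested soft limit: $LIMIT bytes"
```

Because it is only a soft limit, the cgroup may run above $LIMIT until
reclaim via demotion catches up, as described above.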
The soft limit and soft reclaim logic in this patchset will be similar
to what we would do when the high watermark for top tier usage is
reached in a cgroup v2 interface.

This patchset is applied on top of

[PATCH 00/10] [v7] Migrate Pages in lieu of discard

and

[PATCH 0/6] [RFC v6] NUMA balancing: optimize memory placement for memory tiering system

It is part of a larger patchset.  You can play with the complete set of
patches using the tree:
https://git.kernel.org/pub/scm/linux/kernel/git/vishal/tiering.git/log/?h=tiering-0.71

Tim Chen (11):
  mm: Define top tier memory node mask
  mm: Add soft memory limit for mem cgroup
  mm: Account the top tier memory usage per cgroup
  mm: Report top tier memory usage in sysfs
  mm: Add soft_limit_top_tier tree for mem cgroup
  mm: Handle top tier memory in cgroup soft limit memory tree utilities
  mm: Account the total top tier memory in use
  mm: Add toptier option for mem_cgroup_soft_limit_reclaim()
  mm: Use kswapd to demote pages when toptier memory is tight
  mm: Set toptier_scale_factor via sysctl
  mm: Wakeup kswapd if toptier memory need soft reclaim

 Documentation/admin-guide/sysctl/vm.rst |  12 +
 drivers/base/node.c                     |   2 +
 include/linux/memcontrol.h              |  20 +-
 include/linux/mm.h                      |   4 +
 include/linux/mmzone.h                  |   7 +
 include/linux/nodemask.h                |   1 +
 include/linux/vmstat.h                  |  18 ++
 kernel/sysctl.c                         |  10 +
 mm/memcontrol.c                         | 303 +++++++++++++++++++-----
 mm/memory_hotplug.c                     |   3 +
 mm/migrate.c                            |   1 +
 mm/page_alloc.c                         |  36 ++-
 mm/vmscan.c                             |  73 +++++-
 mm/vmstat.c                             |  22 +-
 14 files changed, 444 insertions(+), 68 deletions(-)
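Patch 10 adds a toptier_scale_factor sysctl (documented in
Documentation/admin-guide/sysctl/vm.rst above), which presumably tunes
how early kswapd considers top tier memory tight enough to start
demotion.  A persistent setting could be sketched as a sysctl.d
fragment; the value shown is illustrative only, and its units are an
assumption modeled on the existing watermark_scale_factor (fractions of
10,000):

```
# /etc/sysctl.d/90-tiering.conf
# Illustrative value; units assumed to mirror vm.watermark_scale_factor.
vm.toptier_scale_factor = 2000
```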