From patchwork Tue May 10 06:39:58 2022
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 12844585
From: Huang Ying
To: Peter Zijlstra, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Andrew Morton, Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi,
    Zi Yan, Wei Xu, osalvador, Shakeel Butt
Subject: [PATCH -V2 3/3 RESEND] memory tiering: adjust hot threshold automatically
Date: Tue, 10 May 2022 14:39:58 +0800
Message-Id: <20220510063958.86985-4-ying.huang@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220510063958.86985-1-ying.huang@intel.com>
References: <20220510063958.86985-1-ying.huang@intel.com>
MIME-Version: 1.0
The promotion hot threshold is workload and system configuration
dependent. So in this patch, a method to adjust the hot threshold
automatically is implemented. The basic idea is to control the number
of candidate promotion pages to match the promotion rate limit. If
the hint page fault latency of a page is less than the hot threshold,
we will try to promote the page; such a page is called a candidate
promotion page. If the number of candidate promotion pages in a
statistics interval is much larger than the promotion rate limit, the
hot threshold is decreased to reduce the number of candidate
promotion pages; if it is much smaller, the hot threshold is
increased.

To make the above method work, in each statistics interval, the total
number of pages to check (on which the hint page faults occur) and
the hot/cold distribution need to be stable. Because the page tables
are scanned linearly in NUMA balancing, while the hot/cold
distribution usually isn't uniform across the address space, the
statistics interval should be larger than the NUMA balancing scan
period. So in this patch, the max scan period is used as the
statistics interval, and it works well in our tests.

Signed-off-by: "Huang, Ying"
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Yang Shi
Cc: Zi Yan
Cc: Wei Xu
Cc: osalvador
Cc: Shakeel Butt
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
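As an illustration of the control loop described in the changelog,
here is a minimal stand-alone user-space sketch (not part of the
patch). The constants mirror numa_promotion_adjust_threshold() below;
the 60 s statistics interval, 1 s default hot threshold, 4 KB page
size, 200 MB/s rate limit, and per-interval candidate counts are all
assumed example values:

/*
 * Minimal user-space sketch of the threshold control loop, mirroring
 * numa_promotion_adjust_threshold() in the patch below.  Not kernel
 * code: the workload numbers in main() are made up for illustration.
 */
#include <stdio.h>

#define NUMA_MIGRATION_ADJUST_STEPS	16
#define MAX(a, b)	((a) > (b) ? (a) : (b))
#define MIN(a, b)	((a) < (b) ? (a) : (b))

static unsigned int numa_threshold;	/* current threshold in ms, 0 = unset */

/*
 * Run once per statistics interval.  diff_cand: candidate promotion
 * pages seen in the interval; ref_cand: candidates the rate limit
 * would allow per interval; ref_th: default hot threshold in ms.
 */
static void adjust_threshold(unsigned long diff_cand, unsigned long ref_cand,
			     unsigned int ref_th)
{
	unsigned int unit_th = ref_th * 2 / NUMA_MIGRATION_ADJUST_STEPS;
	unsigned int th = numa_threshold ? numa_threshold : ref_th;

	if (diff_cand > ref_cand * 11 / 10)	/* >10% over: tighten */
		th = MAX(th - unit_th, unit_th);
	else if (diff_cand < ref_cand * 9 / 10)	/* >10% under: relax */
		th = MIN(th + unit_th, ref_th * 2);
	numa_threshold = th;
}

int main(void)
{
	/* 200 MB/s limit, 4 KB pages -> 51200 pages/s; 60 s interval */
	unsigned long ref_cand = 51200UL * 60;
	/* candidate pages observed in five consecutive intervals */
	unsigned long cand[] = { 5000000, 4000000, 3500000, 3100000, 2900000 };

	for (int i = 0; i < 5; i++) {
		adjust_threshold(cand[i], ref_cand, 1000);
		printf("interval %d: threshold %u ms\n", i, numa_threshold);
	}
	return 0;
}

With these numbers the threshold steps 1000 -> 875 -> 750 -> 625 ms
and then settles once the candidate count falls inside the +/-10%
band, i.e. the loop converges instead of oscillating.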
 include/linux/mmzone.h |  5 +++++
 kernel/sched/core.c    | 15 ++++++++++++++
 kernel/sched/fair.c    | 46 +++++++++++++++++++++++++++++++++++++-----
 3 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f2887b1c9b0b..d542b03b9d5c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -920,6 +920,11 @@ typedef struct pglist_data {
 	unsigned long numa_nr_candidate;	/* number of promote candidate pages at
						 * rate limit start time */
 	unsigned int numa_ts;		/* promote rate limit start time in ms */
+	/* promote threshold adjusting start time in ms */
+	unsigned int numa_threshold_ts;
+	unsigned int numa_threshold;	/* promote threshold in ms */
+	/* number of promote candidate pages at numa_threshold_ts */
+	unsigned long numa_threshold_nr_candidate;
 #endif
 
 	/* Fields commonly accessed by the page reclaim scanner */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaabac3e4..671eef0c6a21 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4364,6 +4364,18 @@ void set_numabalancing_state(bool enabled)
 }
 
 #ifdef CONFIG_PROC_SYSCTL
+static void reset_memory_tiering(void)
+{
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		pgdat->numa_threshold = 0;
+		pgdat->numa_threshold_nr_candidate =
+			node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		pgdat->numa_threshold_ts = jiffies_to_msecs(jiffies);
+	}
+}
+
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -4380,6 +4392,9 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	if (err < 0)
 		return err;
 	if (write) {
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+		    (state & NUMA_BALANCING_MEMORY_TIERING))
+			reset_memory_tiering();
 		sysctl_numa_balancing_mode = state;
 		__set_numabalancing_state(state);
 	}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2975c1cbdb60..e8ba1e977708 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1491,6 +1491,35 @@ static bool numa_promotion_rate_limit(struct pglist_data *pgdat,
 	return false;
 }
 
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_promotion_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned int ref_th)
+{
+	unsigned int now, last_th_ts, th_period, unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	now = jiffies_to_msecs(jiffies);
+	th_period = sysctl_numa_balancing_scan_period_max;
+	last_th_ts = pgdat->numa_threshold_ts;
+	if (now - last_th_ts > th_period &&
+	    cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / MSEC_PER_SEC;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+		unit_th = ref_th * 2 / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->numa_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th * 2);
+		pgdat->numa_threshold_nr_candidate = nr_cand;
+		pgdat->numa_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1505,19 +1534,26 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit;
+		unsigned int latency, th, def_th;
 
 		pgdat = NODE_DATA(dst_nid);
-		if (pgdat_free_space_enough(pgdat))
+		if (pgdat_free_space_enough(pgdat)) {
+			/* workload changed, reset hot threshold */
+			pgdat->numa_threshold = 0;
 			return true;
+		}
+
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
+			(20 - PAGE_SHIFT);
+		numa_promotion_adjust_threshold(pgdat, rate_limit, def_th);
 
-		th = sysctl_numa_balancing_hot_threshold;
+		th = pgdat->numa_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency >= th)
 			return false;
 
-		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
-			(20 - PAGE_SHIFT);
 		return !numa_promotion_rate_limit(pgdat, rate_limit,
 						  thp_nr_pages(page));
 	}
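A worked example of the arithmetic above, assuming a 1000 ms default
hot threshold and a 60000 ms max scan period (illustrative values
here, not taken from this patch): with a 200 MB/s promote rate limit
and 4 KB pages, rate_limit = 200 << (20 - 12) = 51200 pages/s, so
ref_cand = 51200 * 60000 / 1000 = 3,072,000 candidate pages per
statistics interval. The step size is unit_th = 2 * 1000 /
NUMA_MIGRATION_ADJUST_STEPS = 125 ms, and the threshold is clamped to
[125 ms, 2000 ms]: it is lowered by one step when the interval's
candidates exceed 110% of ref_cand and raised by one step when they
fall below 90%, leaving a dead band in between to avoid oscillation.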