From patchwork Fri Apr 22 19:36:47 2022
X-Patchwork-Submitter: Aaron Tomlin
X-Patchwork-Id: 12824040
From: Aaron Tomlin <atomlin@redhat.com>
To: frederic@kernel.org, mtosatti@redhat.com
Cc: cl@linux.com, tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
 pauld@redhat.com, neelx@redhat.com, oleksandr@natalenko.name,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v3] tick/sched: Ensure quiet_vmstat() is called when the idle tick was stopped too
Date: Fri, 22 Apr 2022 20:36:47 +0100
Message-Id: <20220422193647.3808657-1-atomlin@redhat.com>
X-Mailer: git-send-email 2.34.1

Hi Frederic and Marcelo,

Oops, please ignore RFC patch v2 [1] since I forgot to compile test -
sorry about that! Any feedback would be appreciated. Thanks.

[1]: https://lore.kernel.org/lkml/20220422112053.3695526-1-atomlin@redhat.com/

In the context of the idle task and an adaptive-tick mode (or nohz_full)
CPU, quiet_vmstat() can be called: before stopping the idle tick, when
entering an idle state, and on exit. In particular, in the latter case,
when the idle task is required to reschedule, the idle tick can remain
stopped and the timer expiration time endless, i.e. KTIME_MAX. Before a
nohz_full CPU enters an idle state, its CPU-specific vmstat counters
should be processed to ensure the respective values have been reset and
folded into the zone-specific 'vm_stat[]'. That said, this can only occur
when the idle tick was previously stopped and reprogramming of the timer
is not required.

A customer provided evidence indicating that the idle tick was stopped,
yet CPU-specific vmstat counters still remained populated. One can only
assume quiet_vmstat() was not invoked on return to the idle loop. If I
understand correctly, this divergence might erroneously prevent a reclaim
attempt by kswapd: if the number of zone-specific free pages is below the
per-CPU drift value, then zone_page_state_snapshot() is used to compute a
more accurate view of that statistic. Thus any task blocked on the
NUMA-node-specific pfmemalloc_wait queue will be unable to make
significant progress via direct reclaim unless it is killed after being
woken up by kswapd (see throttle_direct_reclaim()).
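For intuition only, here is a minimal userspace model of the scheme
described above (this is not kernel code; NR_CPUS, THRESHOLD,
global_nr_free, diff[] and the helper names are made up for
illustration). A plain read of the global counter, the 'vm_stat[]'
analogue, understates the truth while per-CPU deltas remain unfolded,
which is what a snapshot-style read compensates for by walking every
CPU's delta:

  /*
   * Userspace model of per-CPU vmstat deltas: global_nr_free plays the
   * role of zone->vm_stat[NR_FREE_PAGES], diff[] the role of the
   * per-CPU 'vm_stat_diff[]', and nr_free_snapshot() the role of
   * zone_page_state_snapshot().
   */
  #include <stdio.h>

  #define NR_CPUS   4
  #define THRESHOLD 8                     /* per-CPU stat threshold */

  static long global_nr_free;             /* "zone->vm_stat[NR_FREE_PAGES]" */
  static long diff[NR_CPUS];              /* "per-CPU vm_stat_diff[]" */

  /* A CPU frees one page; fold into the global counter only past the threshold. */
  static void free_page_on(int cpu)
  {
          if (++diff[cpu] > THRESHOLD) {
                  global_nr_free += diff[cpu];
                  diff[cpu] = 0;
          }
  }

  /* Cheap read: the global counter only (may be stale). */
  static long nr_free_fast(void)
  {
          return global_nr_free;
  }

  /* Accurate read: the global counter plus every CPU's unfolded delta. */
  static long nr_free_snapshot(void)
  {
          long v = global_nr_free;

          for (int cpu = 0; cpu < NR_CPUS; cpu++)
                  v += diff[cpu];
          return v;
  }

  int main(void)
  {
          for (int i = 0; i < 5; i++)     /* five pages freed, below threshold */
                  free_page_on(1);

          /* Prints "fast read: 0, snapshot: 5" */
          printf("fast read: %ld, snapshot: %ld\n",
                 nr_free_fast(), nr_free_snapshot());
          return 0;
  }

In the kernel the cheap read is what most callers use, since iterating
all possible CPUs is expensive; the concern here is that an idle
nohz_full CPU can sit on such unfolded deltas indefinitely once its tick
is stopped.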
Consider the following theoretical scenario:

 1. CPU Y migrated running task A to CPU X that was in an idle state,
    i.e. waiting for an IRQ - not polling; marked the current task on
    CPU X as needing a reschedule, i.e. set TIF_NEED_RESCHED, and sent a
    reschedule IPI to CPU X (see sched_move_task())

 2. CPU X acknowledged the reschedule IPI from CPU Y; the generic idle
    loop code noticed the TIF_NEED_RESCHED flag against the idle task
    and attempted to exit the loop by calling the main scheduler
    function, i.e. __schedule(). Since the idle tick was previously
    stopped, no scheduling-clock tick would occur, so no deferred
    timers would be handled

 3. After the transition to kernel execution, task A, now running on
    CPU X, indirectly released a few pages (e.g. see
    __free_one_page()); CPU X's 'vm_stat_diff[NR_FREE_PAGES]' was
    updated and the zone-specific 'vm_stat[]' update was deferred as
    per the CPU-specific stat threshold

 4. Task A invoked exit(2) and the kernel removed the task from the
    run-queue; the idle task was selected to execute next, since there
    were no other runnable tasks assigned to the given CPU (see
    pick_next_task() and pick_next_task_idle())

 5. On return to the idle loop, since the idle tick was already stopped
    and can remain so (see [1] below), e.g. no pending soft IRQs, no
    attempt is made to zero and fold CPU X's vmstat counters, since
    reprogramming of the scheduling-clock tick is not required (see [2])

          do_idle
          {

            __current_set_polling()
            tick_nohz_idle_enter()

            while (!need_resched()) {

              local_irq_disable()

              ...

              /* No polling or broadcast event */
              cpuidle_idle_call() {

                if (cpuidle_not_available(drv, dev)) {
                  tick_nohz_idle_stop_tick()
                    __tick_nohz_idle_stop_tick(this_cpu_ptr(&tick_cpu_sched))
                    {
                      int cpu = smp_processor_id()

                      if (ts->timer_expires_base)
                        expires = ts->timer_expires
                      else if (can_stop_idle_tick(cpu, ts))
      (1) ------->      expires = tick_nohz_next_event(ts, cpu)
                      else
                        return

                      ts->idle_calls++

                      if (expires > 0LL) {

                        tick_nohz_stop_tick(ts, cpu)
                        {
                          if (ts->tick_stopped && (expires == ts->next_tick)) {
      (2) ------->          if (tick == KTIME_MAX || ts->next_tick ==
                                hrtimer_get_expires(&ts->sched_timer))
                              return
                          }
                          ...
                        }

So, the idea with this patch is to ensure refresh_cpu_vm_stats(false) is
called, when appropriate, on return to the idle loop when the idle tick
was previously stopped too. Additionally, the fold is attempted when the
scheduling-clock tick is stopped and a task, while in kernel mode,
modifies the CPU-specific 'vm_stat_diff[]' and then stays in user mode
for a long time.
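For illustration, the control-flow change can be modelled in userspace
as follows (again not kernel code; fold_delta(), stop_tick_old() and
stop_tick_new() are made-up stand-ins that only roughly mirror the
tick_nohz_stop_tick() hunk in the diff below): before the change the
fold only happened on the !tick_stopped transition, so re-entering the
stop path with the tick already stopped left the deltas behind; after
the change the fold is attempted either way:

  #include <stdbool.h>
  #include <stdio.h>

  static long global_ctr;                 /* folded, "zone-specific" counter */
  static long cpu_delta;                  /* this CPU's unfolded delta */
  static bool tick_stopped;

  /* Stand-in for quiet_vmstat()/refresh_cpu_vm_stats(false). */
  static void fold_delta(void)
  {
          global_ctr += cpu_delta;
          cpu_delta = 0;
  }

  /* Pre-patch shape: fold only when the tick is being stopped for the first time. */
  static void stop_tick_old(void)
  {
          if (!tick_stopped) {
                  fold_delta();
                  tick_stopped = true;
          }
  }

  /* Patched shape: attempt the fold whether or not the tick is already stopped. */
  static void stop_tick_new(void)
  {
          fold_delta();
          tick_stopped = true;
  }

  int main(void)
  {
          tick_stopped = true;            /* tick already stopped, as in step 5 */
          cpu_delta = 3;                  /* pages freed while task A was running */

          stop_tick_old();
          printf("old: delta left behind = %ld\n", cpu_delta);    /* prints 3 */

          stop_tick_new();
          printf("new: delta left behind = %ld\n", cpu_delta);    /* prints 0 */
          return 0;
  }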
Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
---
 include/linux/tick.h     |  9 ++-------
 kernel/time/tick-sched.c | 18 +++++++++++++++++-
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index bfd571f18cfd..4c576c9ca0a2 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
 extern void __init tick_init(void);
@@ -123,6 +122,8 @@ enum tick_dep_bits {
 #define TICK_DEP_MASK_RCU		(1 << TICK_DEP_BIT_RCU)
 #define TICK_DEP_MASK_RCU_EXP		(1 << TICK_DEP_BIT_RCU_EXP)
 
+void tick_nohz_user_enter_prepare(void);
+
 #ifdef CONFIG_NO_HZ_COMMON
 extern bool tick_nohz_enabled;
 extern bool tick_nohz_tick_stopped(void);
@@ -305,10 +306,4 @@ static inline void tick_nohz_task_switch(void)
 		__tick_nohz_task_switch();
 }
 
-static inline void tick_nohz_user_enter_prepare(void)
-{
-	if (tick_nohz_full_cpu(smp_processor_id()))
-		rcu_nocb_flush_deferred_wakeup();
-}
-
 #endif
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index d257721c68b8..c6cac2d8e8ed 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -43,6 +44,19 @@ struct tick_sched *tick_get_tick_sched(int cpu)
 	return &per_cpu(tick_cpu_sched, cpu);
 }
 
+void tick_nohz_user_enter_prepare(void)
+{
+	struct tick_sched *ts;
+
+	if (tick_nohz_full_cpu(smp_processor_id())) {
+		ts = this_cpu_ptr(&tick_cpu_sched);
+
+		if (ts->tick_stopped)
+			quiet_vmstat();
+		rcu_nocb_flush_deferred_wakeup();
+	}
+}
+
 #if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS)
 /*
  * The time, when the last jiffy update happened. Write access must hold
@@ -891,6 +905,9 @@ static void tick_nohz_stop_tick(struct tick_sched *ts, int cpu)
 		ts->do_timer_last = 0;
 	}
 
+	/* Attempt to fold when the idle tick is stopped or not */
+	quiet_vmstat();
+
 	/* Skip reprogram of event if its not changed */
 	if (ts->tick_stopped && (expires == ts->next_tick)) {
 		/* Sanity check: make sure clockevent is actually programmed */
@@ -912,7 +929,6 @@ static void tick_nohz_stop_tick(struct tick_sched *ts, int cpu)
 	 */
 	if (!ts->tick_stopped) {
 		calc_load_nohz_start();
-		quiet_vmstat();
 		ts->last_tick = hrtimer_get_expires(&ts->sched_timer);
 		ts->tick_stopped = 1;