From patchwork Tue Mar 14 18:59:24 2023
X-Patchwork-Submitter: Marcelo Tosatti
X-Patchwork-Id: 13174929
Message-ID: <20230314185951.779596601@redhat.com>
User-Agent: quilt/0.67
Date: Tue, 14 Mar 2023 15:59:24 -0300
From: Marcelo Tosatti
To: Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Russell King,
    Huacai Chen, Heiko Carstens, x86@kernel.org, Vlastimil Babka,
    Michal Hocko, Marcelo Tosatti
Subject: [PATCH v6 10/12] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely
References: <20230314185914.836510860@redhat.com>
MIME-Version: 1.0
Now that the counters are modified via cmpxchg both CPU-locally (via
the account functions) and remotely (via cpu_vm_stats_fold), it is
possible to switch vmstat_shepherd to perform the per-CPU vmstats
folding remotely.

This fixes the following two problems:

1. A customer provided evidence indicating that the idle tick was
stopped, yet CPU-specific vmstat counters remained populated. One can
only assume quiet_vmstat() was not invoked on return to the idle loop.
If I understand correctly, this divergence might erroneously prevent a
reclaim attempt by kswapd.
If the number of zone-specific free pages is below the per-CPU drift
value, then zone_page_state_snapshot() is used to compute a more
accurate view of that statistic. Thus any task blocked on the NUMA
node-specific pfmemalloc_wait queue will be unable to make significant
progress via direct reclaim unless it is killed after being woken up
by kswapd (see throttle_direct_reclaim()).

The evidence is:

- The process was trapped in throttle_direct_reclaim().
  wait_event_killable() was called to wait for the condition
  allow_direct_reclaim(pgdat) to become true for the current node.
  allow_direct_reclaim(pgdat) examined the number of free pages on the
  node via zone_page_state(), which simply returns the value in
  zone->vm_stat[NR_FREE_PAGES].

- On node #1, zone->vm_stat[NR_FREE_PAGES] was 0. However, the
  freelist on this node was not empty.

- This inconsistency in the vmstat value was caused by the per-CPU
  vmstat counters on nohz_full CPUs. Every increment/decrement of a
  vmstat is first performed on the per-CPU vmstat counter; the pooled
  diffs are then accumulated into the zone's vmstat counter in a
  timely manner. However, on nohz_full CPUs (in this customer's
  system, 48 of 52 CPUs) these pooled diffs were no longer accumulated
  once a CPU had no events on it, so the CPU slept indefinitely. I
  checked the per-CPU vmstat and found a total of 69 counts not yet
  accumulated into the zone's vmstat counter.

- In this situation, kswapd did not help the trapped process. In
  pgdat_balanced(), zone_watermark_ok_safe() examined the number of
  free pages on the node via zone_page_state_snapshot(), which
  includes the pending counts in the per-CPU vmstat. Therefore kswapd
  could correctly see the 69 free pages. Since
  zone->_watermark = {8, 20, 32}, kswapd did not act because 69 was
  greater than the high watermark of 32.

2.
With a SCHED_FIFO task that busy-loops on a given CPU, and the kworker
for that CPU at SCHED_OTHER priority, queuing work to sync the per-CPU
vmstats will either cause that work to never execute, or stalld (the
stall daemon) will boost the kworker's priority, which causes a
latency violation.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -2043,6 +2043,23 @@ static void vmstat_shepherd(struct work_
 
 static DECLARE_DEFERRABLE_WORK(shepherd, vmstat_shepherd);
 
+#ifdef CONFIG_HAVE_CMPXCHG_LOCAL
+/* Flush counters remotely if CPU uses cmpxchg to update its per-CPU counters */
+static void vmstat_shepherd(struct work_struct *w)
+{
+	int cpu;
+
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		cpu_vm_stats_fold(cpu);
+		cond_resched();
+	}
+	cpus_read_unlock();
+
+	schedule_delayed_work(&shepherd,
+		round_jiffies_relative(sysctl_stat_interval));
+}
+#else
 static void vmstat_shepherd(struct work_struct *w)
 {
 	int cpu;
@@ -2062,6 +2079,7 @@ static void vmstat_shepherd(struct work_
 	schedule_delayed_work(&shepherd,
 		round_jiffies_relative(sysctl_stat_interval));
 }
+#endif
 
 static void __init start_shepherd_timer(void)
 {