From patchwork Mon Apr 3 22:03:33 2023
X-Patchwork-Submitter: Yosry Ahmed <yosryahmed@google.com>
X-Patchwork-Id: 13198866
Subject: [PATCH mm-unstable RFC 1/5] writeback: move wb_over_bg_thresh() call outside lock section
From: Yosry Ahmed <yosryahmed@google.com>
To: Alexander Viro, Christian Brauner, Johannes Weiner, Michal Hocko,
 Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed
Date: Mon, 3 Apr 2023 22:03:33 +0000
Message-ID: <20230403220337.443510-2-yosryahmed@google.com>
In-Reply-To: <20230403220337.443510-1-yosryahmed@google.com>
References: <20230403220337.443510-1-yosryahmed@google.com>

wb_over_bg_thresh() calls mem_cgroup_wb_stats(), which invokes an rstat
flush, and rstat flushes can be expensive on large systems. Currently,
wb_writeback() calls wb_over_bg_thresh() within a lock section, so the
rstat flush has to be done atomically. On systems with a lot of
cpus/cgroups, this can cause us to disable irqs for a long time,
potentially causing problems.

Move the call to wb_over_bg_thresh() outside the lock section, in
preparation for making the rstat flush in mem_cgroup_wb_stats()
non-atomic. The list_empty(&wb->work_list) check should be okay outside
the wb->list_lock section, as the work list is protected by a separate
lock (wb->work_lock), and wb_over_bg_thresh() does not appear to modify
any of the b_* lists that wb->list_lock protects. Also, the loop seems
to already release and reacquire the lock, so this refactoring looks
safe.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Michal Koutný
Reviewed-by: Jan Kara
---
 fs/fs-writeback.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 195dc23e0d831..012357bc8daa3 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2021,7 +2021,6 @@ static long wb_writeback(struct bdi_writeback *wb,
         struct blk_plug plug;
 
         blk_start_plug(&plug);
-        spin_lock(&wb->list_lock);
         for (;;) {
                 /*
                  * Stop writeback when nr_pages has been consumed
@@ -2046,6 +2045,9 @@ static long wb_writeback(struct bdi_writeback *wb,
 
                 if (work->for_background && !wb_over_bg_thresh(wb))
                         break;
+
+                spin_lock(&wb->list_lock);
+
                 /*
                  * Kupdate and background works are special and we want to
                  * include all inodes that need writing. Livelock avoidance is
@@ -2075,13 +2077,19 @@ static long wb_writeback(struct bdi_writeback *wb,
                  * mean the overall work is done. So we keep looping as long
                  * as made some progress on cleaning pages or inodes.
                  */
-                if (progress)
+                if (progress) {
+                        spin_unlock(&wb->list_lock);
                         continue;
+                }
+
                 /*
                  * No more inodes for IO, bail
                  */
-                if (list_empty(&wb->b_more_io))
+                if (list_empty(&wb->b_more_io)) {
+                        spin_unlock(&wb->list_lock);
                         break;
+                }
+
                 /*
                  * Nothing written. Wait for some inode to
                  * become available for writeback. Otherwise
@@ -2093,9 +2101,7 @@ static long wb_writeback(struct bdi_writeback *wb,
                 spin_unlock(&wb->list_lock);
                 /* This function drops i_lock... */
                 inode_sleep_on_writeback(inode);
-                spin_lock(&wb->list_lock);
         }
-        spin_unlock(&wb->list_lock);
         blk_finish_plug(&plug);
 
         return nr_pages - work->nr_pages;
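[Archive annotation] The diff above is a critical-section-narrowing pattern:
hoist an expensive predicate (wb_over_bg_thresh()) out of the lock, then take
the lock per iteration only around the list handling that actually needs it.
As a rough userspace sketch of the same pattern -- not from the series;
pthreads stand in for the kernel spinlock, and the names over_threshold()
and write_some_inodes() are invented for the example:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int pending = 3;

/* Stand-in for wb_over_bg_thresh(): an expensive check that does not
 * need list_lock, so it runs with the lock dropped. */
static bool over_threshold(void)
{
        return pending > 0;
}

/* Stand-in for the b_* list handling that really needs list_lock. */
static int write_some_inodes(void)
{
        return pending > 0 ? pending-- : 0;
}

int main(void)
{
        for (;;) {
                /* Expensive predicate first, lock NOT held. */
                if (!over_threshold())
                        break;

                /* Take the lock only around the work that needs it,
                 * and drop it before the next iteration's check. */
                pthread_mutex_lock(&list_lock);
                int progress = write_some_inodes();
                pthread_mutex_unlock(&list_lock);

                if (!progress)
                        break;
                printf("made progress, %d left\n", pending);
        }
        return 0;
}

(Compile with: cc -pthread sketch.c)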
From patchwork Mon Apr 3 22:03:34 2023
X-Patchwork-Submitter: Yosry Ahmed <yosryahmed@google.com>
X-Patchwork-Id: 13198867
Subject: [PATCH mm-unstable RFC 2/5] memcg: flush stats non-atomically in mem_cgroup_wb_stats()
From: Yosry Ahmed <yosryahmed@google.com>
To: Alexander Viro, Christian Brauner, Johannes Weiner, Michal Hocko,
 Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed
Date: Mon, 3 Apr 2023 22:03:34 +0000
Message-ID: <20230403220337.443510-3-yosryahmed@google.com>
In-Reply-To: <20230403220337.443510-1-yosryahmed@google.com>
References: <20230403220337.443510-1-yosryahmed@google.com>

The previous patch moved the wb_over_bg_thresh()->mem_cgroup_wb_stats()
code path in wb_writeback() outside the lock section. We no longer need
to flush the stats atomically, so flush them non-atomically.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Michal Koutný
Acked-by: Shakeel Butt
---
 mm/memcontrol.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3d040a5fa7a35..bdd52fe9e7e4b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4637,11 +4637,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
         struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
         struct mem_cgroup *parent;
 
-        /*
-         * wb_writeback() takes a spinlock and calls
-         * wb_over_bg_thresh()->mem_cgroup_wb_stats(). Do not sleep.
-         */
-        mem_cgroup_flush_stats_atomic();
+        mem_cgroup_flush_stats();
 
         *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
         *pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
From patchwork Mon Apr 3 22:03:35 2023
X-Patchwork-Submitter: Yosry Ahmed <yosryahmed@google.com>
X-Patchwork-Id: 13198868
Subject: [PATCH mm-unstable RFC 3/5] memcg: calculate root usage from global state
From: Yosry Ahmed <yosryahmed@google.com>
To: Alexander Viro, Christian Brauner, Johannes Weiner, Michal Hocko,
 Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed
Date: Mon, 3 Apr 2023 22:03:35 +0000
Message-ID: <20230403220337.443510-4-yosryahmed@google.com>
In-Reply-To: <20230403220337.443510-1-yosryahmed@google.com>
References: <20230403220337.443510-1-yosryahmed@google.com>

Currently, we approximate the root usage by adding the memcg stats for
anon, file, and conditionally swap (for memsw). To read the memcg stats
we need to invoke an rstat flush, and rstat flushes can be expensive:
they scale with the number of cpus and cgroups on the system.
mem_cgroup_usage() is called from memcg_check_events()->
mem_cgroup_threshold() with irqs disabled, so such an expensive
operation with irqs disabled can cause problems.

Instead, approximate the root usage from global state. This is not 100%
accurate, but the root usage has always been ill-defined anyway.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Reviewed-by: Michal Koutný
Acked-by: Shakeel Butt
---
 mm/memcontrol.c | 24 +++++------------------
 1 file changed, 5 insertions(+), 19 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index bdd52fe9e7e4b..e7fe18c0c0ef2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3698,27 +3698,13 @@ static unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
 
         if (mem_cgroup_is_root(memcg)) {
                 /*
-                 * We can reach here from irq context through:
-                 * uncharge_batch()
-                 * |--memcg_check_events()
-                 *    |--mem_cgroup_threshold()
-                 *       |--__mem_cgroup_threshold()
-                 *          |--mem_cgroup_usage
-                 *
-                 * rstat flushing is an expensive operation that should not be
-                 * done from irq context; use stale stats in this case.
-                 * Arguably, usage threshold events are not reliable on the root
-                 * memcg anyway since its usage is ill-defined.
-                 *
-                 * Additionally, other call paths through memcg_check_events()
-                 * disable irqs, so make sure we are flushing stats atomically.
+                 * Approximate root's usage from global state. This isn't
+                 * perfect, but the root usage was always an approximation.
                  */
-                if (in_task())
-                        mem_cgroup_flush_stats_atomic();
-                val = memcg_page_state(memcg, NR_FILE_PAGES) +
-                        memcg_page_state(memcg, NR_ANON_MAPPED);
+                val = global_node_page_state(NR_FILE_PAGES) +
+                        global_node_page_state(NR_ANON_MAPPED);
                 if (swap)
-                        val += memcg_page_state(memcg, MEMCG_SWAP);
+                        val += total_swap_pages - get_nr_swap_pages();
         } else {
                 if (!swap)
                         val = page_counter_read(&memcg->memory);
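[Archive annotation] The arithmetic this patch lands on is: root usage is
approximated as file pages + mapped anon pages, plus swap in use (where swap
in use = total_swap_pages - get_nr_swap_pages()). For illustration, a small
userspace analogue that derives a similar approximation from /proc/meminfo.
This is not from the series: the Cached and AnonPages fields only roughly
correspond to NR_FILE_PAGES and NR_ANON_MAPPED, and meminfo_kb() is a helper
invented for the example.

#include <stdio.h>
#include <string.h>

/* Look up one "Key:   value kB" field from /proc/meminfo. */
static long meminfo_kb(const char *key)
{
        char line[256];
        long val = -1;
        size_t len = strlen(key);
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, key, len) == 0 && line[len] == ':') {
                        sscanf(line + len + 1, "%ld", &val);
                        break;
                }
        }
        fclose(f);
        return val;
}

int main(void)
{
        /* file cache + mapped anon, roughly NR_FILE_PAGES + NR_ANON_MAPPED */
        long usage = meminfo_kb("Cached") + meminfo_kb("AnonPages");
        /* swap in use, i.e. total_swap_pages - get_nr_swap_pages() */
        long swapped = meminfo_kb("SwapTotal") - meminfo_kb("SwapFree");

        printf("approx root usage: %ld kB (+ %ld kB swap)\n", usage, swapped);
        return 0;
}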
From patchwork Mon Apr 3 22:03:36 2023
X-Patchwork-Submitter: Yosry Ahmed <yosryahmed@google.com>
X-Patchwork-Id: 13198869
Subject: [PATCH mm-unstable RFC 4/5] memcg: remove mem_cgroup_flush_stats_atomic()
From: Yosry Ahmed <yosryahmed@google.com>
To: Alexander Viro, Christian Brauner, Johannes Weiner, Michal Hocko,
 Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed
Date: Mon, 3 Apr 2023 22:03:36 +0000
Message-ID: <20230403220337.443510-5-yosryahmed@google.com>
In-Reply-To: <20230403220337.443510-1-yosryahmed@google.com>
References: <20230403220337.443510-1-yosryahmed@google.com>

Previous patches removed all callers of mem_cgroup_flush_stats_atomic().
Remove the function and simplify the code.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
Acked-by: Shakeel Butt
---
 include/linux/memcontrol.h |  5 -----
 mm/memcontrol.c            | 24 +++++-------------------
 2 files changed, 5 insertions(+), 24 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 222d7370134c7..00a88cf947e14 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1038,7 +1038,6 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_atomic(void);
 void mem_cgroup_flush_stats_ratelimited(void);
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
@@ -1537,10 +1536,6 @@ static inline void mem_cgroup_flush_stats(void)
 {
 }
 
-static inline void mem_cgroup_flush_stats_atomic(void)
-{
-}
-
 static inline void mem_cgroup_flush_stats_ratelimited(void)
 {
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e7fe18c0c0ef2..33339106f1d9b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -638,7 +638,7 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
         }
 }
 
-static void do_flush_stats(bool atomic)
+static void do_flush_stats(void)
 {
         /*
          * We always flush the entire tree, so concurrent flushers can just
@@ -651,30 +651,16 @@ static void do_flush_stats(bool atomic)
 
         WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
 
-        if (atomic)
-                cgroup_rstat_flush_atomic(root_mem_cgroup->css.cgroup);
-        else
-                cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+        cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
 
         atomic_set(&stats_flush_threshold, 0);
         atomic_set(&stats_flush_ongoing, 0);
 }
 
-static bool should_flush_stats(void)
-{
-        return atomic_read(&stats_flush_threshold) > num_online_cpus();
-}
-
 void mem_cgroup_flush_stats(void)
 {
-        if (should_flush_stats())
-                do_flush_stats(false);
-}
-
-void mem_cgroup_flush_stats_atomic(void)
-{
-        if (should_flush_stats())
-                do_flush_stats(true);
+        if (atomic_read(&stats_flush_threshold) > num_online_cpus())
+                do_flush_stats();
 }
 
 void mem_cgroup_flush_stats_ratelimited(void)
@@ -689,7 +675,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
         /*
          * Always flush here so that flushing in latency-sensitive paths is
          * as cheap as possible.
          */
-        do_flush_stats(false);
+        do_flush_stats();
         queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
 }
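[Archive annotation] What survives this patch is a single threshold-gated
flush: updaters accumulate a count of pending updates, and
mem_cgroup_flush_stats() only pays for a real flush once that count exceeds
num_online_cpus(). A minimal C11 sketch of that gating, not from the series:
the kernel function names are reused for clarity but the bodies are
stand-ins, and the real kernel accumulates per-cpu update magnitudes rather
than bumping one global counter.

#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_long stats_flush_threshold;

/* Stand-in for cgroup_rstat_flush(root): the expensive part. */
static void do_flush_stats(void)
{
        printf("full flush\n");
        atomic_store(&stats_flush_threshold, 0);
}

/* Called on every stat update; cheap. */
static void memcg_rstat_updated(void)
{
        atomic_fetch_add(&stats_flush_threshold, 1);
}

/* Only flush once enough error has accumulated, scaled by CPU count. */
static void mem_cgroup_flush_stats(void)
{
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

        if (atomic_load(&stats_flush_threshold) > ncpus)
                do_flush_stats();
}

int main(void)
{
        for (int i = 0; i < 1000; i++)
                memcg_rstat_updated();
        mem_cgroup_flush_stats();  /* flushes: 1000 > ncpus */
        mem_cgroup_flush_stats();  /* cheap no-op: counter was reset */
        return 0;
}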
From patchwork Mon Apr 3 22:03:37 2023
X-Patchwork-Submitter: Yosry Ahmed <yosryahmed@google.com>
X-Patchwork-Id: 13198870
Subject: [PATCH mm-unstable RFC 5/5] cgroup: remove cgroup_rstat_flush_atomic()
From: Yosry Ahmed <yosryahmed@google.com>
To: Alexander Viro, Christian Brauner, Johannes Weiner, Michal Hocko,
 Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed
Date: Mon, 3 Apr 2023 22:03:37 +0000
Message-ID: <20230403220337.443510-6-yosryahmed@google.com>
In-Reply-To: <20230403220337.443510-1-yosryahmed@google.com>
References: <20230403220337.443510-1-yosryahmed@google.com>

Previous patches removed the only caller of cgroup_rstat_flush_atomic().
Remove the function and simplify the code.

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 include/linux/cgroup.h |  1 -
 kernel/cgroup/rstat.c  | 26 +++++---------------------
 2 files changed, 5 insertions(+), 22 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 885f5395fcd04..567c547cf371f 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -692,7 +692,6 @@ static inline void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen)
  */
 void cgroup_rstat_updated(struct cgroup *cgrp, int cpu);
 void cgroup_rstat_flush(struct cgroup *cgrp);
-void cgroup_rstat_flush_atomic(struct cgroup *cgrp);
 void cgroup_rstat_flush_hold(struct cgroup *cgrp);
 void cgroup_rstat_flush_release(void);
 
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index d3252b0416b69..f9ad33f117c82 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -171,7 +171,7 @@ __weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
 __diag_pop();
 
 /* see cgroup_rstat_flush() */
-static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
+static void cgroup_rstat_flush_locked(struct cgroup *cgrp)
         __releases(&cgroup_rstat_lock) __acquires(&cgroup_rstat_lock)
 {
         int cpu;
@@ -207,9 +207,8 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
                 }
                 raw_spin_unlock_irqrestore(cpu_lock, flags);
 
-                /* if @may_sleep, play nice and yield if necessary */
-                if (may_sleep && (need_resched() ||
-                                  spin_needbreak(&cgroup_rstat_lock))) {
+                /* play nice and yield if necessary */
+                if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
                         spin_unlock_irq(&cgroup_rstat_lock);
                         if (!cond_resched())
                                 cpu_relax();
@@ -236,25 +235,10 @@ __bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp)
         might_sleep();
 
         spin_lock_irq(&cgroup_rstat_lock);
-        cgroup_rstat_flush_locked(cgrp, true);
+        cgroup_rstat_flush_locked(cgrp);
         spin_unlock_irq(&cgroup_rstat_lock);
 }
 
-/**
- * cgroup_rstat_flush_atomic- atomic version of cgroup_rstat_flush()
- * @cgrp: target cgroup
- *
- * This function can be called from any context.
- */
-void cgroup_rstat_flush_atomic(struct cgroup *cgrp)
-{
-        unsigned long flags;
-
-        spin_lock_irqsave(&cgroup_rstat_lock, flags);
-        cgroup_rstat_flush_locked(cgrp, false);
-        spin_unlock_irqrestore(&cgroup_rstat_lock, flags);
-}
-
 /**
  * cgroup_rstat_flush_hold - flush stats in @cgrp's subtree and hold
  * @cgrp: target cgroup
@@ -269,7 +253,7 @@ void cgroup_rstat_flush_hold(struct cgroup *cgrp)
 {
         might_sleep();
         spin_lock_irq(&cgroup_rstat_lock);
-        cgroup_rstat_flush_locked(cgrp, true);
+        cgroup_rstat_flush_locked(cgrp);
 }
 
 /**
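[Archive annotation] With the atomic variant gone, the surviving
cgroup_rstat_flush_locked() always keeps its lock-break behavior: between
per-cpu flushes it may drop cgroup_rstat_lock and yield when the scheduler
or a lock waiter needs it. A userspace sketch of that lock-break pattern,
not from the series: a pthread mutex and sched_yield() stand in for the
kernel spinlock and cond_resched(), and unlike the kernel (which only breaks
the lock when need_resched() or spin_needbreak() says so) this sketch yields
unconditionally between chunks.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_CPUS 8

static pthread_mutex_t rstat_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for flushing one CPU's rstat updated-tree. */
static void flush_one_cpu(int cpu)
{
        printf("flushed cpu %d\n", cpu);
}

/* Called with rstat_lock held; drops and retakes it between CPUs. */
static void rstat_flush_locked(void)
{
        for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                flush_one_cpu(cpu);

                /* Lock break: let waiters in, yield to the scheduler,
                 * then reacquire before the next chunk of work. */
                pthread_mutex_unlock(&rstat_lock);
                sched_yield();
                pthread_mutex_lock(&rstat_lock);
        }
}

int main(void)
{
        pthread_mutex_lock(&rstat_lock);
        rstat_flush_locked();
        pthread_mutex_unlock(&rstat_lock);
        return 0;
}

(Compile with: cc -pthread sketch.c)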