From patchwork Wed Apr 28 09:49:46 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12228447
From: Muchun Song <songmuchun@bytedance.com>
To: willy@infradead.org, akpm@linux-foundation.org, hannes@cmpxchg.org,
	mhocko@kernel.org, vdavydov.dev@gmail.com, shakeelb@google.com,
	guro@fb.com, shy828301@gmail.com, alexs@kernel.org,
	alexander.h.duyck@linux.intel.com, richard.weiyang@gmail.com
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH 6/9] mm: list_lru: support for shrinking list lru
Date: Wed, 28 Apr 2021 17:49:46 +0800
Message-Id: <20210428094949.43579-7-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210428094949.43579-1-songmuchun@bytedance.com>
References: <20210428094949.43579-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Currently, memcg_update_all_list_lrus() can only increase the size of all
the list lrus.  This patch adds the ability to also shrink them, which
saves memory when the user wants a smaller size.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
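For illustration only, below is a minimal userspace model of the resize
policy the diff implements: growing copies every existing slot into a larger
array and may fail, while shrinking tears down the trailing entries first and
reuses the old (oversized) array if the smaller allocation fails.  All names
in this sketch are hypothetical stand-ins; the kvmalloc, RCU publication and
IRQ-safe locking details of the real code are intentionally left out.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct lru_array {
	size_t size;
	void **slots;
};

/* Grow: allocate a larger array and copy every existing slot over. */
static int lru_array_grow(struct lru_array *a, size_t new_size)
{
	void **n = calloc(new_size, sizeof(void *));

	if (!n)
		return -1;		/* growing may legitimately fail */
	memcpy(n, a->slots, a->size * sizeof(void *));
	free(a->slots);
	a->slots = n;
	a->size = new_size;
	return 0;
}

/* Shrink: drop the tail first, then try to allocate a smaller array. */
static int lru_array_shrink(struct lru_array *a, size_t new_size)
{
	void **n;

	/* the real patch tears down the trailing per-memcg lists here */
	n = calloc(new_size, sizeof(void *));
	if (!n) {
		a->size = new_size;	/* reuse the oversized old array on failure */
		return 0;
	}
	memcpy(n, a->slots, new_size * sizeof(void *));
	free(a->slots);
	a->slots = n;
	a->size = new_size;
	return 0;
}

/* Dispatch on the direction of the resize, as the diff below does. */
static int lru_array_resize(struct lru_array *a, size_t new_size)
{
	if (new_size > a->size)
		return lru_array_grow(a, new_size);
	else if (new_size < a->size)
		return lru_array_shrink(a, new_size);
	return 0;
}

int main(void)
{
	struct lru_array a = { .size = 4, .slots = calloc(4, sizeof(void *)) };

	if (!a.slots)
		return 1;
	lru_array_resize(&a, 8);	/* grow */
	lru_array_resize(&a, 2);	/* shrink */
	printf("final size: %zu\n", a.size);
	free(a.slots);
	return 0;
}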
 mm/list_lru.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 49 insertions(+), 4 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index d78dba5a6dab..3ee5239922c9 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -383,13 +383,11 @@ static void memcg_destroy_list_lru_node(struct list_lru_node *nlru)
 	kvfree(memcg_lrus);
 }
 
-static int memcg_update_list_lru_node(struct list_lru_node *nlru,
-				      int old_size, int new_size)
+static int memcg_list_lru_node_inc(struct list_lru_node *nlru,
+				   int old_size, int new_size)
 {
 	struct list_lru_memcg *old, *new;
 
-	BUG_ON(old_size > new_size);
-
 	old = rcu_dereference_protected(nlru->memcg_lrus,
 					lockdep_is_held(&list_lrus_mutex));
 	new = kvmalloc(sizeof(*new) + new_size * sizeof(void *), GFP_KERNEL);
@@ -418,11 +416,58 @@ static int memcg_update_list_lru_node(struct list_lru_node *nlru,
 	return 0;
 }
 
+/* This function always returns 0. */
+static int memcg_list_lru_node_dec(struct list_lru_node *nlru,
+				   int old_size, int new_size)
+{
+	struct list_lru_memcg *old, *new;
+
+	old = rcu_dereference_protected(nlru->memcg_lrus,
+					lockdep_is_held(&list_lrus_mutex));
+	__memcg_destroy_list_lru_node(old, new_size, old_size);
+
+	/* Reuse the old array if the allocation failures here. */
+	new = kvmalloc(sizeof(*new) + new_size * sizeof(void *), GFP_KERNEL);
+	if (!new)
+		return 0;
+
+	memcpy(&new->lru, &old->lru, new_size * sizeof(void *));
+
+	/*
+	 * The locking below allows readers that hold nlru->lock avoid taking
+	 * rcu_read_lock (see list_lru_from_memcg_idx).
+	 *
+	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
+	 * we have to use IRQ-safe primitives here to avoid deadlock.
+	 */
+	spin_lock_irq(&nlru->lock);
+	rcu_assign_pointer(nlru->memcg_lrus, new);
+	spin_unlock_irq(&nlru->lock);
+
+	kvfree_rcu(old, rcu);
+	return 0;
+}
+
+static int memcg_update_list_lru_node(struct list_lru_node *nlru,
+				      int old_size, int new_size)
+{
+	if (new_size > old_size)
+		return memcg_list_lru_node_inc(nlru, old_size, new_size);
+	else if (new_size < old_size)
+		return memcg_list_lru_node_dec(nlru, old_size, new_size);
+
+	return 0;
+}
+
 static void memcg_cancel_update_list_lru_node(struct list_lru_node *nlru,
 					      int old_size, int new_size)
 {
 	struct list_lru_memcg *memcg_lrus;
 
+	/* Nothing to do for the shrinking case. */
+	if (old_size >= new_size)
+		return;
+
 	memcg_lrus = rcu_dereference_protected(nlru->memcg_lrus,
 					       lockdep_is_held(&list_lrus_mutex));
 	/* do not bother shrinking the array back to the old size, because we