From patchwork Wed Aug 16 08:34:18 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13354735
From: Qi Zheng
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru,
 vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org,
 brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu,
 steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org,
 yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev,
 joel@joelfernandes.org, christian.koenig@amd.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org,
 Qi Zheng, Muchun Song
Subject: [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem
Date: Wed, 16 Aug 2023 16:34:18 +0800
Message-Id: <20230816083419.41088-5-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
References: <20230816083419.41088-1-zhengqi.arch@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Currently, synchronize_shrinkers() is only used by the TTM pool. It only
requires that no shrinkers run in parallel.

After we use the RCU+refcount method to implement lockless slab shrinking,
we can no longer use shrinker_rwsem or synchronize_rcu() to guarantee that
all shrinker invocations have seen an update before freeing memory.

So introduce a new pool_shrink_rwsem to implement a private
synchronize_shrinkers(), which achieves the same purpose.

Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
---
 drivers/gpu/drm/ttm/ttm_pool.c | 15 +++++++++++++++
 include/linux/shrinker.h       |  1 -
 mm/shrinker.c                  | 15 ---------------
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cddb9151d20f..713b1c0a70e1 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -74,6 +74,7 @@ static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
 static struct shrinker mm_shrinker;
+static DECLARE_RWSEM(pool_shrink_rwsem);
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
@@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
 	unsigned int num_pages;
 	struct page *p;
 
+	down_read(&pool_shrink_rwsem);
 	spin_lock(&shrinker_lock);
 	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
 	list_move_tail(&pt->shrinker_list, &shrinker_list);
@@ -329,6 +331,7 @@ static unsigned int ttm_pool_shrink(void)
 	} else {
 		num_pages = 0;
 	}
+	up_read(&pool_shrink_rwsem);
 
 	return num_pages;
 }
@@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 }
 EXPORT_SYMBOL(ttm_pool_init);
 
+/**
+ * synchronize_shrinkers - Wait for all running shrinkers to complete.
+ *
+ * This is useful to guarantee that all shrinker invocations have seen an
+ * update, before freeing memory, similar to rcu.
+ */
+static void synchronize_shrinkers(void)
+{
+	down_write(&pool_shrink_rwsem);
+	up_write(&pool_shrink_rwsem);
+}
+
 /**
  * ttm_pool_fini - Cleanup a pool
  *
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 8dc15aa37410..6b5843c3b827 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -103,7 +103,6 @@ extern int __printf(2, 3) register_shrinker(struct shrinker *shrinker,
 					    const char *fmt, ...);
 extern void unregister_shrinker(struct shrinker *shrinker);
 extern void free_prealloced_shrinker(struct shrinker *shrinker);
-extern void synchronize_shrinkers(void);
 
 #ifdef CONFIG_SHRINKER_DEBUG
 extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 043c87ccfab4..a16cd448b924 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -692,18 +692,3 @@ void unregister_shrinker(struct shrinker *shrinker)
 	shrinker->nr_deferred = NULL;
 }
 EXPORT_SYMBOL(unregister_shrinker);
-
-/**
- * synchronize_shrinkers - Wait for all running shrinkers to complete.
- *
- * This is equivalent to calling unregister_shrink() and register_shrinker(),
- * but atomically and with less overhead. This is useful to guarantee that all
- * shrinker invocations have seen an update, before freeing memory, similar to
- * rcu.
- */
-void synchronize_shrinkers(void)
-{
-	down_write(&shrinker_rwsem);
-	up_write(&shrinker_rwsem);
-}
-EXPORT_SYMBOL(synchronize_shrinkers);
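
For readers unfamiliar with the synchronization idiom used above, here is a
minimal userspace sketch (not part of the patch; pthread_rwlock stands in for
the kernel rwsem, and the function names only mirror the patch for
illustration). Each shrink pass holds the lock for read, so a writer that
simply acquires and releases the lock has waited for every pass that was
already in flight:

/*
 * Illustrative sketch only: pthread_rwlock models pool_shrink_rwsem.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t pool_shrink_rwsem = PTHREAD_RWLOCK_INITIALIZER;

/* Stand-in for ttm_pool_shrink(): the whole pass runs under the read lock. */
static void *shrink_worker(void *arg)
{
	(void)arg;
	pthread_rwlock_rdlock(&pool_shrink_rwsem);
	usleep(1000);			/* pretend to free some pages */
	pthread_rwlock_unlock(&pool_shrink_rwsem);
	return NULL;
}

/*
 * Stand-in for the private synchronize_shrinkers(): taking and dropping the
 * lock for write waits for all readers that already entered shrink_worker().
 */
static void synchronize_shrinkers(void)
{
	pthread_rwlock_wrlock(&pool_shrink_rwsem);
	pthread_rwlock_unlock(&pool_shrink_rwsem);
}

int main(void)
{
	pthread_t workers[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&workers[i], NULL, shrink_worker, NULL);

	synchronize_shrinkers();
	printf("all shrink passes that started before us have completed\n");

	for (int i = 0; i < 4; i++)
		pthread_join(workers[i], NULL);
	return 0;
}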