From patchwork Thu Aug 24 03:42:19 2023
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 13363433
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz, roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org, paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org, muchun.song@linux.dev
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Qi Zheng
Subject: [PATCH v5 00/45] use refcount+RCU method to implement lockless slab shrink
Date: Thu, 24 Aug 2023 11:42:19 +0800
Message-Id: <20230824034304.37411-1-zhengqi.arch@bytedance.com>
Hi all,

1. Background
=============

We used to implement the lockless slab shrink with SRCU [1], but then the
kernel test robot reported a -88.8% regression in the
stress-ng.ramfs.ops_per_sec test case [2], so we reverted it [3].

This patch series aims to re-implement the lockless slab shrink using the
refcount+RCU method proposed by Dave Chinner [4].

[1]. https://lore.kernel.org/lkml/20230313112819.38938-1-zhengqi.arch@bytedance.com/
[2]. https://lore.kernel.org/lkml/202305230837.db2c233f-yujie.liu@intel.com/
[3]. https://lore.kernel.org/all/20230609081518.3039120-1-qi.zheng@linux.dev/
[4]. https://lore.kernel.org/lkml/ZIJhou1d55d4H1s0@dread.disaster.area/

2. Implementation
=================

Currently, shrinker instances fall into the following three types:

a) global shrinker instances statically defined in the kernel, such as
   workingset_shadow_shrinker.

b) global shrinker instances statically defined in kernel modules, such as
   mmu_shrinker in x86.

c) shrinker instances embedded in other structures.

For case a, the memory of the shrinker instance is never freed. For case b,
the memory of the shrinker instance is freed after synchronize_rcu() when the
module is unloaded. For case c, the memory of the shrinker instance is freed
along with the structure it is embedded in.

In preparation for implementing lockless slab shrink, we need to dynamically
allocate the shrinker instances in case c, so that their memory can be freed
independently by calling kfree_rcu(). This patchset adds the following new
APIs for dynamically allocating a shrinker, and adds a private_data field to
struct shrinker to record and retrieve the original embedded structure
(see the conversion sketch at the end of this section):

1. shrinker_alloc()
2. shrinker_register()
3. shrinker_free()

In order to simplify the shrinker-related APIs and make shrinkers more
independent of other kernel mechanisms, this patchset uses the above APIs to
convert all shrinkers (including cases a and b) to dynamically allocated ones,
and then removes all of the existing APIs. This also has another advantage
mentioned by Dave Chinner:

```
The other advantage of this is that it will break all the existing
out of tree code and third party modules using the old API and will
no longer work with a kernel using lockless slab shrinkers. They need
to break (both at the source and binary levels) to stop bad things from
happening due to using uncoverted shrinkers in the new setup.
```

Then we free the shrinker by calling call_rcu(), and use
rcu_read_{lock,unlock}() to ensure that the shrinker instance is valid during
iteration. The shrinker::refcount mechanism ensures that the shrinker instance
will not be run again after unregistration, so the structure that records the
pointer to the shrinker instance can be safely freed without waiting for the
RCU read-side critical section.

This way, while implementing the lockless slab shrink, we no longer need to
block in unregister_shrinker() waiting for the RCU read-side critical section
to finish.
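To make the case c conversion concrete, here is a minimal sketch of what a
converted user could look like. my_cache, my_cache_count() and my_cache_scan()
are hypothetical names used only for illustration; the API names are the ones
introduced by this series, but the exact prototypes and flags may differ in
detail from the posted patches:

```
#include <linux/shrinker.h>

/* Hypothetical case-c user: the shrinker is now pointed to, not embedded. */
struct my_cache {
	struct shrinker *shrinker;	/* was: struct shrinker shrinker; */
	/* ... cache state ... */
};

static unsigned long my_cache_count(struct shrinker *shrink,
				    struct shrink_control *sc)
{
	struct my_cache *cache = shrink->private_data;

	/* return the number of freeable objects in @cache */
	return 0;
}

static unsigned long my_cache_scan(struct shrinker *shrink,
				   struct shrink_control *sc)
{
	struct my_cache *cache = shrink->private_data;

	/* free up to sc->nr_to_scan objects and return the number freed */
	return SHRINK_STOP;
}

static int my_cache_init(struct my_cache *cache)
{
	cache->shrinker = shrinker_alloc(0, "my-cache");
	if (!cache->shrinker)
		return -ENOMEM;

	cache->shrinker->count_objects = my_cache_count;
	cache->shrinker->scan_objects = my_cache_scan;
	cache->shrinker->private_data = cache;	/* recover the embedding struct */

	shrinker_register(cache->shrinker);
	return 0;
}

static void my_cache_destroy(struct my_cache *cache)
{
	/*
	 * Unregisters and frees the shrinker; the memory is released via RCU,
	 * so no synchronous wait for readers is needed here.
	 */
	shrinker_free(cache->shrinker);
}
```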
PATCH 1:     introduce new APIs
PATCH 2~38:  convert all shrinkers to use the new APIs
PATCH 39:    remove the old APIs
PATCH 40~41: some cleanups and preparations
PATCH 42~43: implement the lockless slab shrink
PATCH 44~45: convert shrinker_rwsem to a mutex

3. Testing
==========

3.1 slab shrink stress test
---------------------------

We can reproduce the down_read_trylock() hotspot with the following script:

```
DIR="/root/shrinker/memcg/mnt"

do_create()
{
	mkdir -p /sys/fs/cgroup/memory/test
	echo 4G > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
	for i in `seq 0 $1`;
	do
		mkdir -p /sys/fs/cgroup/memory/test/$i;
		echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
		mkdir -p $DIR/$i;
	done
}

do_mount()
{
	for i in `seq $1 $2`;
	do
		mount -t tmpfs $i $DIR/$i;
	done
}

do_touch()
{
	for i in `seq $1 $2`;
	do
		echo $$ > /sys/fs/cgroup/memory/test/$i/cgroup.procs;
		dd if=/dev/zero of=$DIR/$i/file$i bs=1M count=1 &
	done
}

case "$1" in
  touch)
	do_touch $2 $3
	;;
  test)
	do_create 4000
	do_mount 0 4000
	do_touch 0 3000
	;;
  *)
	exit 1
	;;
esac
```

Save the above script, then run the test and touch commands. We can then use
the following perf command to view the hotspots:

  perf top -U -F 999

1) Before applying this patchset:

  40.44%  [kernel]  [k] down_read_trylock
  17.59%  [kernel]  [k] up_read
  13.64%  [kernel]  [k] pv_native_safe_halt
  11.90%  [kernel]  [k] shrink_slab
   8.21%  [kernel]  [k] idr_find
   2.71%  [kernel]  [k] _find_next_bit
   1.36%  [kernel]  [k] shrink_node
   0.81%  [kernel]  [k] shrink_lruvec
   0.80%  [kernel]  [k] __radix_tree_lookup
   0.50%  [kernel]  [k] do_shrink_slab
   0.21%  [kernel]  [k] list_lru_count_one
   0.16%  [kernel]  [k] mem_cgroup_iter

2) After applying this patchset:

  60.17%  [kernel]  [k] shrink_slab
  20.42%  [kernel]  [k] pv_native_safe_halt
   3.03%  [kernel]  [k] do_shrink_slab
   2.73%  [kernel]  [k] shrink_node
   2.27%  [kernel]  [k] shrink_lruvec
   2.00%  [kernel]  [k] __rcu_read_unlock
   1.92%  [kernel]  [k] mem_cgroup_iter
   0.98%  [kernel]  [k] __rcu_read_lock
   0.91%  [kernel]  [k] osq_lock
   0.63%  [kernel]  [k] mem_cgroup_calculate_protection
   0.55%  [kernel]  [k] shrinker_put
   0.46%  [kernel]  [k] list_lru_count_one

We can see that the top perf hotspot becomes shrink_slab, which is what we
expect.
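For reference, down_read_trylock() and up_read() disappear from the profile
because the global shrink path no longer takes shrinker_rwsem at all: it walks
the shrinker list under RCU and pins each shrinker with a reference, roughly
as in the simplified sketch below. shrinker_try_get() is assumed here as the
counterpart of the shrinker_put() seen in the profile; details such as
SHRINK_EMPTY handling and the memcg path are omitted, so this is not the exact
code from the patches:

```
static unsigned long shrink_slab_lockless(gfp_t gfp_mask, int nid, int priority)
{
	struct shrinker *shrinker;
	unsigned long freed = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
		struct shrink_control sc = {
			.gfp_mask = gfp_mask,
			.nid = nid,
		};

		/* Skip shrinkers that are already being unregistered. */
		if (!shrinker_try_get(shrinker))
			continue;

		/*
		 * The reference keeps the shrinker alive, so the RCU read
		 * lock can be dropped across the (possibly sleeping) scan.
		 */
		rcu_read_unlock();

		freed += do_shrink_slab(&sc, shrinker, priority);

		rcu_read_lock();
		shrinker_put(shrinker);	/* the last ref schedules the RCU free */
	}
	rcu_read_unlock();

	return freed;
}
```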
3.2 registration and unregistration stress test
-----------------------------------------------

Run the command below to test:

  stress-ng --timeout 60 --times --verify --metrics-brief --ramfs 9 &

1) Before applying this patchset:

  setting to a 60 second run per stressor
  dispatching hogs: 9 ramfs
  stressor    bogo ops  real time  usr time  sys time  bogo ops/s   bogo ops/s
                        (secs)     (secs)    (secs)    (real time)  (usr+sys time)
  ramfs       735238    60.00      12.37     363.70    12253.05     1955.08
  for a 60.01s run time:
    1440.27s available CPU time
      12.36s user time   (  0.86%)
     363.70s system time ( 25.25%)
     376.06s total time  ( 26.11%)
  load average: 10.79 4.47 1.69
  passed: 9: ramfs (9)
  failed: 0
  skipped: 0
  successful run completed in 60.01s (1 min, 0.01 secs)

2) After applying this patchset:

  setting to a 60 second run per stressor
  dispatching hogs: 9 ramfs
  stressor    bogo ops  real time  usr time  sys time  bogo ops/s   bogo ops/s
                        (secs)     (secs)    (secs)    (real time)  (usr+sys time)
  ramfs       746698    60.00      12.45     376.16    12444.02     1921.47
  for a 60.01s run time:
    1440.28s available CPU time
      12.44s user time   (  0.86%)
     376.16s system time ( 26.12%)
     388.60s total time  ( 26.98%)
  load average: 9.01 3.85 1.49
  passed: 9: ramfs (9)
  failed: 0
  skipped: 0
  successful run completed in 60.01s (1 min, 0.01 secs)

We can see that the ops/s has hardly changed.

This series is based on next-20230823 and depends on the cleanup patchset [5].

Comments and suggestions are welcome.

[5]. https://lore.kernel.org/lkml/20230824033539.34570-1-zhengqi.arch@bytedance.com/

Thanks,
Qi

Changelog in v4 -> v5:
 - split out some cleanups
 - split the cleanup in [PATCH v4 46/48] into a separate patch
   (pointed out by Dave Chinner)
 - add more comments for the lockless algorithm (pointed out by Dave Chinner)
 - remove the shrinker_info_rcu() helper (pointed out by Dave Chinner)
 - collect Acked-bys and Reviewed-bys
 - rebase onto next-20230823

Changelog in v3 -> v4:
 - [PATCH v3 01/49] has been merged, so discard it
 - fix wrong return values in patches v3 15\16\22\27\28\29\34\40
   (pointed out by Damien Le Moal)
 - fix an uninitialized variable in [PATCH v3 04/49] (pointed out by Simon Horman)
 - fix a typo in [PATCH v3 05/49] (pointed out by Simon Horman)
 - rebase onto next-20230807

Changelog in v2 -> v3:
 - add the patch that [PATCH v3 07/49] depends on
 - move some shrinker-related function declarations to mm/internal.h
   (suggested by Muchun Song)
 - combine shrinker_free_non_registered() and shrinker_unregister() into
   shrinker_free() (suggested by Dave Chinner)
 - add missing __init and fix the return value in bch_btree_cache_alloc()
   (pointed out by Muchun Song)
 - remove an unnecessary WARN_ON() (pointed out by Steven Price)
 - go back to using completion to implement the lockless slab shrink
   (pointed out by Dave Chinner)
 - collect Acked-bys and Reviewed-bys
 - rebase onto next-20230726

Changelog in v1 -> v2:
 - implement the new APIs and convert all shrinkers to use them
   (suggested by Dave Chinner)
 - fix a UAF in [PATCH 05/29] (pointed out by Steven Price)
 - add a secondary array for shrinker_info::{map, nr_deferred}
 - re-implement the lockless slab shrink (since unifying the processing of
   global and memcg slab shrink would require modifying the startup sequence,
   as mentioned in
   https://lore.kernel.org/lkml/38b14080-4ce5-d300-8a0a-c630bca6806b@bytedance.com/,
   I finally chose to process them separately)
 - collect Acked-bys

Qi Zheng (45):
  mm: shrinker: add infrastructure for dynamically allocating shrinker
  kvm: mmu: dynamically allocate the x86-mmu shrinker
  binder: dynamically allocate the android-binder shrinker
  drm/ttm: dynamically allocate the drm-ttm_pool shrinker
  xenbus/backend: dynamically allocate the xen-backend shrinker
  erofs: dynamically allocate the erofs-shrinker
  f2fs: dynamically allocate the f2fs-shrinker
  gfs2: dynamically allocate the gfs2-glock shrinker
  gfs2: dynamically allocate the gfs2-qd shrinker
  NFSv4.2: dynamically allocate the nfs-xattr shrinkers
  nfs: dynamically allocate the nfs-acl shrinker
  nfsd: dynamically allocate the nfsd-filecache shrinker
  quota: dynamically allocate the dquota-cache shrinker
  ubifs: dynamically allocate the ubifs-slab shrinker
  rcu: dynamically allocate the rcu-lazy shrinker
  rcu: dynamically allocate the rcu-kfree shrinker
  mm: thp: dynamically allocate the thp-related shrinkers
  sunrpc: dynamically allocate the sunrpc_cred shrinker
  mm: workingset: dynamically allocate the mm-shadow shrinker
  drm/i915: dynamically allocate the i915_gem_mm shrinker
  drm/msm: dynamically allocate the drm-msm_gem shrinker
  drm/panfrost: dynamically allocate the drm-panfrost shrinker
  dm: dynamically allocate the dm-bufio shrinker
  dm zoned: dynamically allocate the dm-zoned-meta shrinker
  md/raid5: dynamically allocate the md-raid5 shrinker
  bcache: dynamically allocate the md-bcache shrinker
  vmw_balloon: dynamically allocate the vmw-balloon shrinker
  virtio_balloon: dynamically allocate the virtio-balloon shrinker
  mbcache: dynamically allocate the mbcache shrinker
  ext4: dynamically allocate the ext4-es shrinker
  jbd2,ext4: dynamically allocate the jbd2-journal shrinker
  nfsd: dynamically allocate the nfsd-client shrinker
  nfsd: dynamically allocate the nfsd-reply shrinker
  xfs: dynamically allocate the xfs-buf shrinker
  xfs: dynamically allocate the xfs-inodegc shrinker
  xfs: dynamically allocate the xfs-qm shrinker
  zsmalloc: dynamically allocate the mm-zspool shrinker
  fs: super: dynamically allocate the s_shrink
  mm: shrinker: remove old APIs
  mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred}
  mm: shrinker: rename {prealloc|unregister}_memcg_shrinker() to
    shrinker_memcg_{alloc|remove}()
  mm: shrinker: make global slab shrink lockless
  mm: shrinker: make memcg slab shrink lockless
  mm: shrinker: hold write lock to reparent shrinker nr_deferred
  mm: shrinker: convert shrinker_rwsem to mutex

 arch/x86/kvm/mmu/mmu.c | 18 +-
 drivers/android/binder_alloc.c | 31 +-
 drivers/gpu/drm/i915/gem/i915_gem_shrinker.c | 30 +-
 drivers/gpu/drm/i915/i915_drv.h | 2 +-
 drivers/gpu/drm/msm/msm_drv.c | 4 +-
 drivers/gpu/drm/msm/msm_drv.h | 4 +-
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 34 +-
 drivers/gpu/drm/panfrost/panfrost_device.h | 2 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c | 6 +-
 drivers/gpu/drm/panfrost/panfrost_gem.h | 2 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 30 +-
 drivers/gpu/drm/ttm/ttm_pool.c | 23 +-
 drivers/md/bcache/bcache.h | 2 +-
 drivers/md/bcache/btree.c | 27 +-
 drivers/md/bcache/sysfs.c | 3 +-
 drivers/md/dm-bufio.c | 28 +-
 drivers/md/dm-cache-metadata.c | 2 +-
 drivers/md/dm-zoned-metadata.c | 29 +-
 drivers/md/raid5.c | 26 +-
 drivers/md/raid5.h | 2 +-
 drivers/misc/vmw_balloon.c | 38 +-
 drivers/virtio/virtio_balloon.c | 25 +-
 drivers/xen/xenbus/xenbus_probe_backend.c | 18 +-
 fs/btrfs/super.c | 2 +-
 fs/erofs/utils.c | 20 +-
 fs/ext4/ext4.h | 2 +-
 fs/ext4/extents_status.c | 24 +-
 fs/f2fs/super.c | 32 +-
 fs/gfs2/glock.c | 20 +-
 fs/gfs2/main.c | 6 +-
 fs/gfs2/quota.c | 26 +-
 fs/gfs2/quota.h | 3 +-
 fs/jbd2/journal.c | 30 +-
 fs/kernfs/mount.c | 2 +-
 fs/mbcache.c | 23 +-
 fs/nfs/nfs42xattr.c | 87 +--
 fs/nfs/super.c | 22 +-
 fs/nfsd/filecache.c | 23 +-
 fs/nfsd/netns.h | 4 +-
 fs/nfsd/nfs4state.c | 20 +-
 fs/nfsd/nfscache.c | 31 +-
 fs/proc/root.c | 2 +-
 fs/quota/dquot.c | 18 +-
 fs/super.c | 36 +-
 fs/ubifs/super.c | 22 +-
 fs/xfs/xfs_buf.c | 25 +-
 fs/xfs/xfs_buf.h | 2 +-
 fs/xfs/xfs_icache.c | 26 +-
 fs/xfs/xfs_mount.c | 4 +-
 fs/xfs/xfs_mount.h | 2 +-
 fs/xfs/xfs_qm.c | 28 +-
 fs/xfs/xfs_qm.h | 2 +-
 include/linux/fs.h | 2 +-
 include/linux/jbd2.h | 2 +-
 include/linux/memcontrol.h | 12 +-
 include/linux/shrinker.h | 54 +-
 kernel/rcu/tree.c | 22 +-
 kernel/rcu/tree_nocb.h | 20 +-
 mm/huge_memory.c | 69 ++-
 mm/internal.h | 11 +
 mm/shrinker.c | 581 +++++++++++-------
 mm/shrinker_debug.c | 31 +-
 mm/workingset.c | 27 +-
 mm/zsmalloc.c | 28 +-
 net/sunrpc/auth.c | 21 +-
 65 files changed, 1076 insertions(+), 734 deletions(-)