From patchwork Tue Jul 25 18:57:32 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13326951
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Yu Zhao, Roman Gushchin, Johannes Weiner,
    Michal Hocko, Hugh Dickins, Nhat Pham, Yuanchu Xie,
    Suren Baghdasaryan, T. J. Mercier, Kairui Song
Mercier" , Kairui Song Subject: [RFC PATCH 4/4] workingset, lru_gen: apply refault-distance based re-activation Date: Wed, 26 Jul 2023 02:57:32 +0800 Message-ID: <20230725185733.43929-5-ryncsn@gmail.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230725185733.43929-1-ryncsn@gmail.com> References: <20230725185733.43929-1-ryncsn@gmail.com> Reply-To: Kairui Song MIME-Version: 1.0 X-Rspamd-Queue-Id: 489B440028 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: 69y4pijyqbso391nh8g6tywnp8jptxh7 X-HE-Tag: 1690311476-510198 X-HE-Meta: U2FsdGVkX182ojJ2zPvBxJjZiBieD5z0xpYwi7OL1w1zn+jGyBYtj2bOzO8yhxP4eTosDCCbTltslHPC4higHLqRoZu1NboHcgZBZpIfEWv53IrC2v/hgeNn8dXZ3xG1eqsXTxT5XPgA+1kK7ZT90wv9f96BYdTgs9ZwDZWTKlffyx+tYdf7Qzq2o7DntaAOhW4NSkp4zvrdj40sN+jg7el6FGGKMWJvPKM4c4DKBAlwVjAnDswtSZgnSHTBp4g0htPzFkonAqAUtrZr8ehdulDp6763sTScQpjoQk0JQ2I/tPGvYMh2cnlKCw2UbqFi52rNfXRUEqJ1bXUegtwKuuV7oVppDp+Lzkq57/Tmr1ojVMpm8y0bDSpBs81opw0IWTS9aFXGjmk8ukNC3ckP7RzUAqULkAGDBbiA4orWmlkA7rAPF3eUuc38gy43GCPydGewXqI0dewQJEcL8/bW+TyBp82kHDmahYd+wOVBKeh/8T8YHd/1cPA2StsHMKXdt9ECHvwdgaW6aaqWezIkFJrozVbzdfSSbPnpVWDqe5Ro1GCsaYv3Us1UGtNzGK67ap4orgp5IXxDPIRjBOcUIBVvtLS/5cM0K8cnwqQX+jT9KLab87NYeJPfYi03ilz92nLGfe/DYRrT+4aZMFQe6QO2aMx3BxduXWaDOIBMNsAMjd9x5qVI57DJZZQqs4QeVfLZYx+P9/86Fwdf/WKpBvzvPUVOeqTnjbY73JUPIbowmsCy3G/r/DsXD8HsC2h+N93XJ9vSao46eTb7bVX927ZELmQWU0eijpfAgU/tt6Q9fhJGtQ7F3XKsrZj/tqXuXNJ/UiyJxwFbjDBK6xzOeIvT9wSZDxVMOwoGx0A7ywWHfKH71Bw4FdkXzrkddEEfZ/mhXxqWd3RBmqq+W6gODVRTjPhAZZAGcw5eSpky85aZTPmhCITtAypq0u0FqsnRyk18mtmsjLRFAW9id6L lrk7Y4VJ HlmElBON/lyQel6ye+Y2u58uarbeqo2JfrzHRoGNABG9tMDh5OoMgLLPlq0WaQDZ8tNhI35xQ+odKZZIaNW1a5j1oN7R64arJ1VzmmNNX2y1YQ1edpQEet2qi/s5wifa3f6OlBYet9BAc7WGGS4JQh664POSB+atBPPJR4gAHv6Rxuk0um1lzWtp0dawVICS8YcdG2eZd6jY4QvxBEdfVImuBnQpGrs7gFpS3mnOMgQdrP3s2dDrHz1XEREhXgCL08uAQy7It31tqYYo1aWg1ldHIDlWaB7RtmCO7job8cy+emOLCwORuy7vkJamD7gZAz8125sSAt/whJSDwYslstxYHM5IM0+ses4zlmlM3HoSKFNYoy2ilBvN91PcDSIEZiuLC/QQsoctZzGORsz2eHjiPAT2i+p9P8FvLFasKrb5z8vFPAHozdvox8ARUAu5I24KLLt5uNWpJ603kzMbRYPM0VzbB9fVy1Tju6AejfX/owUh8LiwPiOxT0opWlHFeY7SiM0tWD5ATo4dHF3atfA0GATZvz9hAiQvpD6pPcAnAe0kZtnzxvtt5UswFIbalZGC628pTryQS0/dTqtmD8sykbmTqEoU33PxVcMBbtiiwwT3fkXHx1vMZZzJx7aKaZEvMc3Fdu8OqPyVGoFKnVPpX8XZr6fLCZwKNOKIIKyfCfFYug1APMDvyEPI15x5LfgSGLgjr5Niv+kUPW9fAFbYtTA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Kairui Song I noticed MGLRU not working very well on certain workflows, which is observed on some heavily stressed databases. That is when the file page workingset size exceeds total memory, and the access distance (the left-shift time of a page before it gets activated, considering LRU starts from right) of file pages also larger than total memory. All file pages are stuck on the oldest generation and getting read-in then evicted permutably. Despite anon pages being idle, they never get aged. PID controller didn't kickin until there are some minor access pattern changes. And file pages are not promoted or reused. Even though the memory can't cover the whole workingset, the refault-distance based re-activation can help hold part of the workingset in-memory to help reduce the IO workload significantly. So apply it for MGLRU as well. The updated refault-distance model fits well for MGLRU in most cases, if we just consider the last two generation as the inactive LRU and the first two generations as active LRU. 
Some minor tinkering is done to fit the logic better, and to make the
refault distance contribute to MGLRU's page tiering and PID-controlled
refault detection:

- If a tier-0 page has a qualified refault distance, promote it to a
  higher tier and send it to the second oldest generation.
- If a tier >= 1 page has a qualified refault distance, mark it as
  active and send it to the youngest generation.
- Increase the references of every page that has a qualified refault
  distance, and increase the PID-controlled refault rate of the updated
  tier.

(A short pseudo-C sketch of these rules follows the benchmark results
below.)

The following benchmark shows a major improvement. To simulate the
workload, I set up a 3-replica mongodb cluster using docker, each
replica in a standalone cgroup, set to use 5GB of cache and 10GB of
oplog, on a 32G VM. The benchmark was done using
https://github.com/apavlo/py-tpcc.git, modified to run the STOCK_LEVEL
query only, to simulate slow queries and get a stable result.

Before the patch (with 10G swap; the result doesn't change whether swap
is on or not):

$ tpcc.py --config=mongodb.config mongodb --duration=900 --warehouses=500 --clients=30
==================================================================
Execution Results after 904 seconds
------------------------------------------------------------------
                  Executed        Time (µs)        Rate
  STOCK_LEVEL     503             27150226136.4    0.02 txn/s
------------------------------------------------------------------
  TOTAL           503             27150226136.4    0.02 txn/s

$ cat /proc/vmstat | grep working
workingset_nodes 53391
workingset_refault_anon 0
workingset_refault_file 23856735
workingset_activate_anon 0
workingset_activate_file 23845737
workingset_restore_anon 0
workingset_restore_file 18280692
workingset_nodereclaim 1024

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          31837        6752         379          23       24706       24607
Swap:         10239           0       10239

After the patch (with 10G swap on the same disk; similar results with
ZRAM):

$ tpcc.py --config=mongodb.config mongodb --duration=900 --warehouses=500 --clients=30
==================================================================
Execution Results after 903 seconds
------------------------------------------------------------------
                  Executed        Time (µs)        Rate
  STOCK_LEVEL     2575            27094953498.8    0.10 txn/s
------------------------------------------------------------------
  TOTAL           2575            27094953498.8    0.10 txn/s

$ cat /proc/vmstat | grep working
workingset_nodes 78249
workingset_refault_anon 10139
workingset_refault_file 23001863
workingset_activate_anon 7238
workingset_activate_file 6718032
workingset_restore_anon 7432
workingset_restore_file 6719406
workingset_nodereclaim 9747

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          31837        7376         320           3       24140       24014
Swap:         10239        1662        8577

The performance is 5x better than before, and idle anon pages now get
swapped out as expected. Testing with lower stress also shows an
improvement.
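As promised above, a rough pseudo-C sketch of the re-activation rules
(simplified from the real diff below; "refault" means the refault
distance qualified, and the tier-0 case of landing in the second oldest
generation falls out of the folio refaulting without the active flag):

	if (refault) {
		if (refs)			/* tier >= 1: extra accesses seen */
			folio_set_active(folio);	/* goes to the youngest gen */
		if (refs != BIT(LRU_REFS_WIDTH))	/* not already at max refs */
			refault_tier = lru_tier_from_refs(refs + 1);	/* one tier up */
	}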
I also checked the benchmark with memtier/memcached and fio, using a
similar setup as in commit ac35a4902374 but scaled down to fit my test
environment:

memcached test (with 16G ramdisk as swap and 2G cgroup limit):

  memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 \
    -t 12 -B binary &

  memtier_benchmark -S /tmp/memcached.socket -P memcache_binary -n allkeys \
    --key-minimum=1 --key-maximum=24000000 --key-pattern=P:P -c 1 \
    -t 12 --ratio 1:0 --pipeline 8 -d 2000 -x 6

fio test (with 16G ramdisk on /mnt and 4G cgroup limit):

  fio -name=refault --numjobs=12 --directory=/mnt --size=1024m \
    --buffered=1 --ioengine=io_uring --iodepth=128 \
    --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
    --rw=randread --random_distribution=random --norandommap \
    --time_based --ramp_time=5m --runtime=5m --group_reporting

Before this patch:

memcached read:
            Ops/sec   Hits/sec  Misses/sec  Avg. Latency  p50 Latency  p99 Latency  p99.9 Latency     KB/sec
Best       52832.79       0.00        0.00       1.82042      1.70300      4.54300        6.27100  105641.69
Worst      46613.56       0.00        0.00       2.05686      1.77500      7.80700       11.83900   93206.05
Avg (6x)   51024.85       0.00        0.00       1.88506      1.73500      5.43900        9.47100  102026.64

fio:
read: IOPS=2211k, BW=8637MiB/s (9056MB/s)(2530GiB/300001msec)

After this patch:

memcached read:
            Ops/sec  Avg. Latency  p50 Latency  p99 Latency  p99.9 Latency     KB/sec
Best       54218.92       1.76930      1.65500      4.41500        6.27100  108413.34
Worst      47640.13       2.01495      1.74300      7.64700       11.64700   95258.72
Avg (6x)   51408.33       1.86988      1.71900      5.43900        9.34300  102793.42

fio:
read: IOPS=2166k, BW=8462MiB/s (8873MB/s)(2479GiB/300001msec)

memcached looks OK, but there is a 2% performance drop in the fio test.
After some profiling, this is mainly caused by the extra atomic
operations and new function calls; there seems to be no LRU accuracy
drop.

Signed-off-by: Kairui Song
---
 mm/workingset.c | 74 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 51 insertions(+), 23 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 126f1fec41ed..40cb0df980f7 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -185,6 +185,7 @@
 			 MEM_CGROUP_ID_SHIFT)
 #define EVICTION_BITS	(BITS_PER_LONG - (EVICTION_SHIFT))
 #define EVICTION_MASK	(~0UL >> EVICTION_SHIFT)
+#define LRU_GEN_EVICTION_BITS	(EVICTION_BITS - LRU_REFS_WIDTH - LRU_GEN_WIDTH)
 
 /*
  * Eviction timestamps need to be able to cover the full range of
@@ -195,6 +196,7 @@
  * evictions into coarser buckets by shaving off lower timestamp bits.
  */
 static unsigned int bucket_order __read_mostly;
+static unsigned int lru_gen_bucket_order __read_mostly;
 
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
 			 bool workingset)
@@ -345,10 +347,14 @@ static void *lru_gen_eviction(struct folio *folio)
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	lrugen = &lruvec->lrugen;
 	min_seq = READ_ONCE(lrugen->min_seq[type]);
+
 	token = (min_seq << LRU_REFS_WIDTH) | max(refs - 1, 0);
+	token <<= LRU_GEN_EVICTION_BITS;
+	token |= lru_eviction(lruvec, LRU_GEN_EVICTION_BITS, lru_gen_bucket_order);
 
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
+	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 
 	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
 }
@@ -363,44 +369,55 @@ static bool lru_gen_test_recent(struct lruvec *lruvec, bool file,
 	unsigned long min_seq;
 
 	min_seq = READ_ONCE(lruvec->lrugen.min_seq[file]);
+	token >>= LRU_GEN_EVICTION_BITS;
 	return (token >> LRU_REFS_WIDTH) == (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH));
 }
 
 static void lru_gen_refault(struct folio *folio, void *shadow)
 {
 	int memcgid;
-	bool recent;
+	bool refault;
 	bool workingset;
 	unsigned long token;
+	bool recent = false;
+	int refault_tier = 0;
 	int hist, tier, refs;
 	struct lruvec *lruvec;
+	struct mem_cgroup *memcg;
 	struct pglist_data *pgdat;
 	struct lru_gen_folio *lrugen;
 	int type = folio_is_file_lru(folio);
 	int delta = folio_nr_pages(folio);
 
-	rcu_read_lock();
-
 	unpack_shadow(shadow, &memcgid, &pgdat, &token, &workingset);
-	lruvec = mem_cgroup_lruvec(mem_cgroup_from_id(memcgid), pgdat);
-	if (lruvec != folio_lruvec(folio))
-		goto unlock;
+	memcg = mem_cgroup_from_id(memcgid);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	/* memcg can be NULL, go through lruvec */
+	memcg = lruvec_memcg(lruvec);
 
 	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta);
-
-	recent = lru_gen_test_recent(lruvec, type, token);
-	if (!recent)
-		goto unlock;
+	refault = lru_refault(memcg, lruvec, token, LRU_GEN_EVICTION_BITS,
+			      lru_gen_bucket_order);
+	if (lruvec == folio_lruvec(folio))
+		recent = lru_gen_test_recent(lruvec, type, token);
+	if (!recent && !refault)
+		return;
 
 	lrugen = &lruvec->lrugen;
-
 	hist = lru_hist_from_seq(READ_ONCE(lrugen->min_seq[type]));
 	/* see the comment in folio_lru_refs() */
+	token >>= LRU_GEN_EVICTION_BITS;
 	refs = (token & (BIT(LRU_REFS_WIDTH) - 1)) + workingset;
 	tier = lru_tier_from_refs(refs);
-
-	atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
-	mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
+	refault_tier = tier;
+
+	if (refault) {
+		if (refs)
+			folio_set_active(folio);
+		if (refs != BIT(LRU_REFS_WIDTH))
+			refault_tier = lru_tier_from_refs(refs + 1);
+		mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + type, delta);
+	}
 
 	/*
 	 * Count the following two cases as stalls:
@@ -409,12 +426,17 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 	 * 2. For pages accessed multiple times through file descriptors,
 	 *    numbers of accesses might have been out of the range.
 	 */
-	if (lru_gen_in_fault() || refs == BIT(LRU_REFS_WIDTH)) {
+	if (refault || lru_gen_in_fault() || refs == BIT(LRU_REFS_WIDTH)) {
 		folio_set_workingset(folio);
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
 	}
 
-unlock:
-	rcu_read_unlock();
+
+	if (recent && refault_tier == tier) {
+		atomic_long_add(delta, &lrugen->refaulted[hist][type][tier]);
+	} else {
+		atomic_long_add(delta, &lrugen->avg_total[type][refault_tier]);
+		atomic_long_add(delta, &lrugen->avg_refaulted[type][refault_tier]);
+	}
 }
 
 #else /* !CONFIG_LRU_GEN */
 
@@ -536,16 +558,15 @@ void workingset_refault(struct folio *folio, void *shadow)
 	bool workingset;
 	long nr;
 
-	if (lru_gen_enabled()) {
-		lru_gen_refault(folio, shadow);
-		return;
-	}
-
 	/* Flush stats (and potentially sleep) before holding RCU read lock */
 	mem_cgroup_flush_stats_ratelimited();
-
 	rcu_read_lock();
 
+	if (lru_gen_enabled()) {
+		lru_gen_refault(folio, shadow);
+		goto out;
+	}
+
 	/*
 	 * The activation decision for this folio is made at the level
 	 * where the eviction occurred, as that is where the LRU order
@@ -791,6 +812,13 @@ static int __init workingset_init(void)
 	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
 		EVICTION_BITS, max_order, bucket_order);
 
+#ifdef CONFIG_LRU_GEN
+	if (max_order > LRU_GEN_EVICTION_BITS)
+		lru_gen_bucket_order = max_order - LRU_GEN_EVICTION_BITS;
+	pr_info("workingset: lru_gen_timestamp_bits=%d lru_gen_bucket_order=%u\n",
+		LRU_GEN_EVICTION_BITS, lru_gen_bucket_order);
+#endif
+
 	ret = prealloc_shrinker(&workingset_shadow_shrinker, "mm-shadow");
 	if (ret)
 		goto err;
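A closing note for reviewers: after this patch the MGLRU shadow token
packs the generation information in the high bits and a coarse,
bucketed eviction timestamp in the low LRU_GEN_EVICTION_BITS. The
layout sketch below is illustrative (actual field widths depend on the
kernel config); the three assignments are the ones from
lru_gen_eviction() in the diff above:

	/*
	 * |    min_seq    |    refs - 1     |   bucketed timestamp    |
	 *   LRU_GEN_WIDTH   LRU_REFS_WIDTH    LRU_GEN_EVICTION_BITS
	 */
	token = (min_seq << LRU_REFS_WIDTH) | max(refs - 1, 0);
	token <<= LRU_GEN_EVICTION_BITS;
	token |= lru_eviction(lruvec, LRU_GEN_EVICTION_BITS, lru_gen_bucket_order);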