From patchwork Tue Mar 26 18:50:22 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13604893
From: Kairui Song
To: linux-mm@kvack.org
Cc: "Huang, Ying", Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
 Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed,
 Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou,
 Andrew Morton, linux-kernel@vger.kernel.org, Kairui Song
Subject: [RFC PATCH 00/10] mm/swap: always use swap cache for synchronization
Date: Wed, 27 Mar 2024 02:50:22 +0800
Message-ID: <20240326185032.72159-1-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
Reply-To: Kairui Song
From: Kairui Song

A month ago a bug was fixed for SWP_SYNCHRONOUS_IO swapin (the swap
cache bypass swapin path):

https://lore.kernel.org/linux-mm/20240219082040.7495-1-ryncsn@gmail.com/

Because we have to spin on the swap map on race, and the swap map is
too small to hold any more usable info, an ugly
schedule_timeout_uninterruptible(1) was added (a sketch of the pattern
follows this introduction). It was not the first time a hackish
workaround was added for cache bypass swapin, and it would not be the
last.

I did many experiments locally to see whether the swap cache bypass
path can be dropped while keeping performance comparable, and it seems
doable.

This series does the following things:
1. Remove swap cache bypass completely.
2. Apply multiple optimizations after that; these optimizations are
   either impossible or very difficult to do without dropping the
   cache bypass swapin path.
3. Use the swap cache as a synchronization layer, and unify some code
   with the page cache (filemap).

As a result, we have:
1. Comparable performance; some tests are even faster.
2. Multi-index support for the swap cache.
3. Many hackish workarounds removed, including the long-tail latency
   issue above.

I'm sending this as an RFC to collect discussion, suggestions, or
rejections early. It probably needs to be split into multiple series,
but the performance is not good until the last patch, so I think
starting by separating them may make this approach less convincing.
There are also still some (maybe further) TODO items and optimization
space if we are OK with this approach.

This is based on another series of mine, which reuses filemap code for
the swap cache:

[PATCH v2 0/4] mm/filemap: optimize folio adding and splitting
https://lore.kernel.org/linux-mm/20240325171405.99971-1-ryncsn@gmail.com/
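As context for the patches below, the workaround mentioned above has
roughly the following shape. This is a paraphrased sketch of the fix,
not the exact upstream diff:

	/*
	 * Sketch: with no swap cache folio to wait on, a racing fault
	 * can only detect the in-flight bypass swapin via the swap
	 * map and back off blindly.
	 */
	if (swapcache_prepare(entry)) {
		/*
		 * A concurrent task is swapping this entry in while
		 * bypassing the swap cache. Relax a bit to avoid a
		 * storm of repeated page faults.
		 */
		schedule_timeout_uninterruptible(1);
		goto out;
	}

Once every swapin goes through the swap cache, a racing fault gets a
real folio to lock and wait on instead of sleeping blindly.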
Patch 1/10 introduces a helper on the filemap side to be used later.

Patches 2/10 and 3/10 are cleanups that prepare for removing the swap
cache bypass swapin path.

Patch 4/10 removes the swap cache bypass swapin path, and performance
drops heavily (-28%).

Patch 5/10 applies the first optimization after the removal: since all
folios go through the swap cache now, there is no need for explicit
shadow clearing any more.

Patch 6/10 applies another optimization after cleaning up the shadow
clearing routines. The swap cache is now very much like the page
cache, so we can just reuse the page cache code and gain multi-index
support. Shadow memory usage drops a lot.

Patch 7/10 just renames __read_swap_cache_async; it will be refactored
into a key part of this series, and the current name is very confusing
to me.

Patch 8/10 makes the swap cache a synchronization layer by introducing
two helpers for adding folios to the swap cache: the caller either
succeeds or gets an existing folio to wait on (a rough sketch of this
contract follows the test results below).

Patch 9/10 applies another optimization. With the above two helpers,
swap cache lookup can be optimized to avoid false lookups, which helps
performance.

Patch 10/10 applies a major optimization for SWP_SYNCHRONOUS_IO
devices. After this commit, performance for simple swapin/swapout is
basically the same as before.

Test 1: sequential swapin/out of 30G of zero pages on ZRAM:

               Before (us)   After (us)
Swapout:       33619409      33886008
Swapin:        32393771      32465441 (- 0.2%)
Swapout (THP):  7817909       6899938 (+11.8%)
Swapin (THP):  32452387      33193479 (- 2.2%)

And after swapping out 30G with THP, radix tree node usage dropped by
a lot:

Before: radix_tree_node 73728K
After:  radix_tree_node  7056K (-94%)

Test 2: MySQL (16G buffer pool, 32G ZRAM swap, 4G memcg, zswap
disabled, THP never)

sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-user=root \
  --mysql-password=1234 --mysql-db=sb --tables=36 --table-size=2000000 \
  --threads=48 --time=300 --report-interval=10 run

Before: transactions: 4849.25 per sec
After:  transactions: 4849.40 per sec

Test 3: MySQL (16G buffer pool, NVMe swap, 4G memcg, zswap enabled,
THP never)

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 100 > /sys/module/zswap/parameters/max_pool_percent
echo 1 > /sys/module/zswap/parameters/enabled
echo y > /sys/module/zswap/parameters/shrinker_enabled

sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-user=root \
  --mysql-password=1234 --mysql-db=sb --tables=36 --table-size=2000000 \
  --threads=48 --time=600 --report-interval=10 run

Before: transactions: 1662.90 per sec
After:  transactions: 1726.52 per sec

Test 4: MySQL (16G buffer pool, NVMe swap, 4G memcg, zswap enabled,
THP always)

echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo 100 > /sys/module/zswap/parameters/max_pool_percent
echo 1 > /sys/module/zswap/parameters/enabled
echo y > /sys/module/zswap/parameters/shrinker_enabled

sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-user=root \
  --mysql-password=1234 --mysql-db=sb --tables=36 --table-size=2000000 \
  --threads=48 --time=600 --report-interval=10 run

Before: transactions: 2860.90 per sec
After:  transactions: 2802.55 per sec

Test 5: memtier / memcached (16G brd swap, 8G memcg, THP never):

memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 \
  -t 16 -B binary &

memtier_benchmark -S /tmp/memcached.socket \
  -P memcache_binary -n allkeys --key-minimum=1 \
  --key-maximum=24000000 --key-pattern=P:P -c 1 -t 16 \
  --ratio 1:0 --pipeline 8 -d 1000

Before: 106730.31 Ops/sec
After:  106360.11 Ops/sec

Test 6: memtier / memcached (16G brd swap, 8G memcg, THP always):

memcached -u nobody -m 16384 -s /tmp/memcached.socket -a 0766 \
  -t 16 -B binary &

memtier_benchmark -S /tmp/memcached.socket \
  -P memcache_binary -n allkeys --key-minimum=1 \
  --key-maximum=24000000 --key-pattern=P:P -c 1 -t 16 \
  --ratio 1:0 --pipeline 8 -d 1000

Before: 83193.11 Ops/sec
After:  82504.89 Ops/sec

These tests were run under heavy memory stress, and performance is
basically the same as before, very slightly better or worse in certain
cases. The benefits of multi-index are mostly erased by fragmentation,
and workingset node usage is slightly lower.
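To make the contract introduced in patch 8/10 more concrete, here is a
hypothetical sketch of the add-or-wait pattern. The names
swap_cache_add_or_get(), swap_cache_try_add() and swap_cache_lookup()
are illustrative only, not the actual helpers from the series:

	/*
	 * Hypothetical sketch (illustrative names): the caller hands
	 * in a newly allocated folio and either wins the right to do
	 * the swapin, or gets back the folio already handling it, to
	 * lock and wait on.
	 */
	static struct folio *swap_cache_add_or_get(struct folio *new,
						   swp_entry_t entry,
						   gfp_t gfp)
	{
		struct folio *existing;

		for (;;) {
			/* Try to install our folio for this entry. */
			if (!swap_cache_try_add(new, entry, gfp))
				return new;	/* we own the swapin */

			/* Lost the race: take whichever folio won. */
			existing = swap_cache_lookup(entry);
			if (existing)
				return existing; /* caller waits on it */

			/* Already removed again; retry the insert. */
		}
	}

The key point is that a folio in the swap cache doubles as the wait
object, replacing the blind spin on the swap map described earlier.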
Some (maybe further) TODO items if we are OK with this approach:

- I see a slight performance regression in the THP tests and could
  identify a clear hotspot with perf. My guess is that contention on
  the xa_lock is the issue (we have one xa_lock for every 64M of swap
  cache space), and THP handling needs to hold the lock longer than
  usual. Splitting the xa_lock to be more fine-grained seems a good
  solution. We have SWAP_ADDRESS_SPACE_SHIFT = 14, which is not an
  optimal value: considering XA_CHUNK_SHIFT is 6, we end up with three
  layers of XArray just for 2 extra bits. 12 would be better, always
  making use of whole XArray chunks and needing two layers at most
  (see the small arithmetic sketch appended after the diffstat). But
  duplicated address_space structs also waste more memory and
  cachelines. I see an observable performance drop (~3%) after
  changing SWAP_ADDRESS_SPACE_SHIFT to 12. It might be a good idea to
  decouple the swap cache XArray from address_space (there are too
  many users of the swap cache, so this shouldn't get too dirty).

- Actually, after patch 4/10, performance is much better for tests
  limited by a memory cgroup, until patch 10/10 applies the direct
  swap cache freeing logic for SWP_SYNCHRONOUS_IO swapin. That is
  because if the swap device is not nearly full, swapin does not clear
  the swap cache, so repeated swapout does not need to re-allocate a
  swap entry, making things faster. This may indicate that lazy
  freeing of the swap cache could benefit certain workloads and may be
  worth looking into later.

- SWP_SYNCHRONOUS_IO swapin now bypasses readahead and forcibly drops
  the swap cache after swapin is done. This can be cleaned up and
  optimized further after this series: the device type would only
  determine the readahead logic, and the swap-cache-drop check could
  be based purely on the swap count.

- The recent mTHP swapin/swapout series should have no fundamental
  conflict with this one.

Kairui Song (10):
  mm/filemap: split filemap storing logic into a standalone helper
  mm/swap: move no readahead swapin code to a stand-alone helper
  mm/swap: convert swapin_readahead to return a folio
  mm/swap: remove cache bypass swapin
  mm/swap: clean shadow only in unmap path
  mm/swap: switch to use multi index entries
  mm/swap: rename __read_swap_cache_async to swap_cache_alloc_or_get
  mm/swap: use swap cache as a synchronization layer
  mm/swap: delay the swap cache look up for swapin
  mm/swap: optimize synchronous swapin

 include/linux/swapops.h |   5 +-
 mm/filemap.c            | 161 +++++++++-----
 mm/huge_memory.c        |  78 +++----
 mm/internal.h           |   2 +
 mm/memory.c             | 133 ++++-------
 mm/shmem.c              |  44 ++--
 mm/swap.h               |  71 ++++--
 mm/swap_state.c         | 478 +++++++++++++++++++++-------------
 mm/swapfile.c           |  64 +++---
 mm/vmscan.c             |   8 +-
 mm/workingset.c         |   2 +-
 mm/zswap.c              |   4 +-
 12 files changed, 540 insertions(+), 510 deletions(-)
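Appendix: a minimal arithmetic sketch (plain userspace C, illustrative
only) of the XArray depth trade-off from the first TODO item above,
assuming XA_CHUNK_SHIFT == 6 as noted there. xa_levels() is a
hypothetical helper, not a kernel function:

	#include <stdio.h>

	/* Each XArray level resolves XA_CHUNK_SHIFT (6) bits. */
	static int xa_levels(int index_bits, int chunk_shift)
	{
		return (index_bits + chunk_shift - 1) / chunk_shift;
	}

	int main(void)
	{
		/* shift 14: ceil(14 / 6) = 3 levels, the top level
		 * covering only 2 useful bits. */
		printf("shift 14 -> %d levels\n", xa_levels(14, 6));

		/* shift 12: ceil(12 / 6) = 2 levels, every chunk
		 * fully used. */
		printf("shift 12 -> %d levels\n", xa_levels(12, 6));
		return 0;
	}

The two printed values (3 and 2) match the layer counts quoted in the
TODO item.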