From patchwork Tue Jul 30 07:13:39 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13746777
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
 hannes@cmpxchg.org, hughd@google.com, kaleshsingh@google.com,
 kasong@tencent.com, linux-kernel@vger.kernel.org, mhocko@suse.com,
 minchan@kernel.org, nphamcs@gmail.com, ryan.roberts@arm.com,
 senozhatsky@chromium.org, shakeel.butt@linux.dev, shy828301@gmail.com,
 surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org,
 xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com
Subject: [PATCH 1/1] mm: swap: add nr argument in swapcache_prepare and
 swapcache_clear to support large folios
Date: Tue, 30 Jul 2024 19:13:39 +1200
Message-Id: <20240730071339.107447-2-21cnbao@gmail.com>
In-Reply-To: <20240730071339.107447-1-21cnbao@gmail.com>
References: <20240730071339.107447-1-21cnbao@gmail.com>
From: Barry Song

Right now, swapcache_prepare() and swapcache_clear() support only a
single entry. To support large folios, we need to handle multiple swap
entries.

To optimize stack usage, we iterate twice in __swap_duplicate(): the
first time to verify that all entries are valid, and the second time to
apply the modifications to the entries.

Currently, we're using nr=1 for the existing users.

Reviewed-by: Baolin Wang
Signed-off-by: Barry Song
Acked-by: David Hildenbrand
Tested-by: Baolin Wang
---
 include/linux/swap.h |   4 +-
 mm/memory.c          |   6 +--
 mm/swap.h            |   5 ++-
 mm/swap_state.c      |   2 +-
 mm/swapfile.c        | 101 +++++++++++++++++++++++++------------------
 5 files changed, 68 insertions(+), 50 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index ba7ea95d1c57..5b920fa2315b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -480,7 +480,7 @@ extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
 extern int add_swap_count_continuation(swp_entry_t, gfp_t);
 extern void swap_shmem_alloc(swp_entry_t);
 extern int swap_duplicate(swp_entry_t);
-extern int swapcache_prepare(swp_entry_t);
+extern int swapcache_prepare(swp_entry_t entry, int nr);
 extern void swap_free_nr(swp_entry_t entry, int nr_pages);
 extern void swapcache_free_entries(swp_entry_t *entries, int n);
 extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
@@ -554,7 +554,7 @@ static inline int swap_duplicate(swp_entry_t swp)
 	return 0;
 }
 
-static inline int swapcache_prepare(swp_entry_t swp)
+static inline int swapcache_prepare(swp_entry_t swp, int nr)
 {
 	return 0;
 }
diff --git a/mm/memory.c b/mm/memory.c
index 833d2cad6eb2..b8675617a5e3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4081,7 +4081,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			 * reusing the same entry. It's undetectable as
 			 * pte_same() returns true due to entry reuse.
 			 */
-			if (swapcache_prepare(entry)) {
+			if (swapcache_prepare(entry, 1)) {
 				/* Relax a bit to prevent rapid repeated page faults */
 				schedule_timeout_uninterruptible(1);
 				goto out;
@@ -4387,7 +4387,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 out:
 	/* Clear the swap cache pin for direct swapin after PTL unlock */
 	if (need_clear_cache)
-		swapcache_clear(si, entry);
+		swapcache_clear(si, entry, 1);
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4403,7 +4403,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_put(swapcache);
 	}
 	if (need_clear_cache)
-		swapcache_clear(si, entry);
+		swapcache_clear(si, entry, 1);
 	if (si)
 		put_swap_device(si);
 	return ret;
diff --git a/mm/swap.h b/mm/swap.h
index baa1fa946b34..7c6330561d84 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -59,7 +59,7 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
-void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
@@ -120,7 +120,7 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 	return 0;
 }
 
-static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
 {
 }
 
@@ -172,4 +172,5 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
 	return 0;
 }
 #endif /* CONFIG_SWAP */
+
 #endif /* _MM_SWAP_H */
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a1726e49a5eb..b06f2a054f5a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -477,7 +477,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		/*
 		 * Swap entry may have been freed since
 		 * our caller observed it.
 		 */
-		err = swapcache_prepare(entry);
+		err = swapcache_prepare(entry, 1);
 		if (!err)
 			break;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5f73a8553371..757d38a86f56 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3363,7 +3363,7 @@ void si_swapinfo(struct sysinfo *val)
 }
 
 /*
- * Verify that a swap entry is valid and increment its swap map count.
+ * Verify that nr swap entries are valid and increment their swap map counts.
  *
  * Returns error code in following case.
  * - success -> 0
@@ -3373,60 +3373,77 @@ void si_swapinfo(struct sysinfo *val)
  * - swap-cache reference is requested but the entry is not used. -> ENOENT
  * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
  */
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
 {
 	struct swap_info_struct *p;
 	struct swap_cluster_info *ci;
 	unsigned long offset;
 	unsigned char count;
 	unsigned char has_cache;
-	int err;
+	int err, i;
 
 	p = swp_swap_info(entry);
 
 	offset = swp_offset(entry);
+	VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
 	ci = lock_cluster_or_swap_info(p, offset);
 
-	count = p->swap_map[offset];
+	err = 0;
+	for (i = 0; i < nr; i++) {
+		count = p->swap_map[offset + i];
 
-	/*
-	 * swapin_readahead() doesn't check if a swap entry is valid, so the
-	 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
-	 */
-	if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
-		err = -ENOENT;
-		goto unlock_out;
-	}
+		/*
+		 * swapin_readahead() doesn't check if a swap entry is valid, so the
+		 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
+		 */
+		if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
+			err = -ENOENT;
+			goto unlock_out;
+		}
 
-	has_cache = count & SWAP_HAS_CACHE;
-	count &= ~SWAP_HAS_CACHE;
-	err = 0;
+		has_cache = count & SWAP_HAS_CACHE;
+		count &= ~SWAP_HAS_CACHE;
 
-	if (usage == SWAP_HAS_CACHE) {
+		if (usage == SWAP_HAS_CACHE) {
+			/* set SWAP_HAS_CACHE if there is no cache and entry is used */
+			if (!has_cache && count)
+				continue;
+			else if (has_cache)	/* someone else added cache */
+				err = -EEXIST;
+			else			/* no users remaining */
+				err = -ENOENT;
 
-		/* set SWAP_HAS_CACHE if there is no cache and entry is used */
-		if (!has_cache && count)
-			has_cache = SWAP_HAS_CACHE;
-		else if (has_cache)		/* someone else added cache */
-			err = -EEXIST;
-		else				/* no users remaining */
-			err = -ENOENT;
+		} else if (count || has_cache) {
 
-	} else if (count || has_cache) {
+			if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
+				continue;
+			else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
+				err = -EINVAL;
+			else if (swap_count_continued(p, offset + i, count))
+				continue;
+			else
+				err = -ENOMEM;
+		} else
+			err = -ENOENT;		/* unused swap entry */
 
-		if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
+		if (err)
+			goto unlock_out;
+	}
+
+	for (i = 0; i < nr; i++) {
+		count = p->swap_map[offset + i];
+		has_cache = count & SWAP_HAS_CACHE;
+		count &= ~SWAP_HAS_CACHE;
+
+		if (usage == SWAP_HAS_CACHE)
+			has_cache = SWAP_HAS_CACHE;
+		else if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
 			count += usage;
-		else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
-			err = -EINVAL;
-		else if (swap_count_continued(p, offset, count))
-			count = COUNT_CONTINUED;
 		else
-			err = -ENOMEM;
-	} else
-		err = -ENOENT;			/* unused swap entry */
+			count = COUNT_CONTINUED;
 
-	if (!err)
-		WRITE_ONCE(p->swap_map[offset], count | has_cache);
+		WRITE_ONCE(p->swap_map[offset + i], count | has_cache);
+	}
 
 unlock_out:
 	unlock_cluster_or_swap_info(p, ci);
@@ -3439,7 +3456,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
  */
 void swap_shmem_alloc(swp_entry_t entry)
 {
-	__swap_duplicate(entry, SWAP_MAP_SHMEM);
+	__swap_duplicate(entry, SWAP_MAP_SHMEM, 1);
 }
 
 /*
@@ -3453,29 +3470,29 @@ int swap_duplicate(swp_entry_t entry)
 {
 	int err = 0;
 
-	while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
+	while (!err && __swap_duplicate(entry, 1, 1) == -ENOMEM)
 		err = add_swap_count_continuation(entry, GFP_ATOMIC);
 	return err;
 }
 
 /*
- * @entry: swap entry for which we allocate swap cache.
+ * @entry: first swap entry from which we allocate nr swap cache.
  *
- * Called when allocating swap cache for existing swap entry,
+ * Called when allocating swap cache for existing swap entries,
  * This can return error codes. Returns 0 at success.
  * -EEXIST means there is a swap cache.
  * Note: return code is different from swap_duplicate().
  */
-int swapcache_prepare(swp_entry_t entry)
+int swapcache_prepare(swp_entry_t entry, int nr)
 {
-	return __swap_duplicate(entry, SWAP_HAS_CACHE);
+	return __swap_duplicate(entry, SWAP_HAS_CACHE, nr);
 }
 
-void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr)
 {
 	unsigned long offset = swp_offset(entry);
 
-	cluster_swap_free_nr(si, offset, 1, SWAP_HAS_CACHE);
+	cluster_swap_free_nr(si, offset, nr, SWAP_HAS_CACHE);
 }
 
 struct swap_info_struct *swp_swap_info(swp_entry_t entry)