From patchwork Wed Feb 15 16:14:01 2023
X-Patchwork-Submitter: Thomas Hellström <thomas.hellstrom@linux.intel.com>
X-Patchwork-Id: 13141854
From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
To: dri-devel@lists.freedesktop.org
Cc: Thomas Hellström, Andrew Morton, "Matthew Wilcox (Oracle)",
    Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu, NeilBrown,
    linux-mm@kvack.org, Daniel Vetter, Christian Koenig, Dave Airlie,
    Dave Hansen, Matthew Auld, linux-graphics-maintainer@vmware.com,
    intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 12/16] mm: Add interfaces to back up and recover folio contents using swap
Date: Wed, 15 Feb 2023 17:14:01 +0100
Message-Id: <20230215161405.187368-13-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
References: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.38.1

GPU drivers have traditionally used shmem to back up GPU buffer contents
for swap on physical memory shortage.
Some integrated GPU drivers use shmem files directly as the backing
storage for their GPU buffers. Other drivers, in particular drivers that
need a write-combining caching strategy on system pages (but also drivers
for discrete GPUs in general), need to copy to shmem on anticipated memory
shortage. The latter strategy does not lend itself well to shrinker usage:
shmem memory must be allocated, and pagecache pages must be trylocked,
from reclaim context, and both operations are prone to failure. That makes
the approach fragile at best.

Add interfaces for GPU drivers to insert pages directly into the
swap cache, thereby bypassing shmem and avoiding the shmem page
allocation and locking at shrink time completely, as well as the content
copy.

Also add a kunit test for experimenting with the interface functionality.
Currently it seems PMD-size folios don't work properly; whether this is a
viable approach needs further investigation.

Cc: Andrew Morton
Cc: "Matthew Wilcox (Oracle)"
Cc: Miaohe Lin
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Peter Xu
Cc: NeilBrown
Cc: linux-mm@kvack.org
Signed-off-by: Thomas Hellström
---
 include/linux/swap.h        |  10 ++
 mm/Kconfig                  |  18 ++++
 mm/Makefile                 |   2 +
 mm/swap_backup_folio.c      | 178 ++++++++++++++++++++++++++++++++++++
 mm/swap_backup_folio_test.c | 111 ++++++++++++++++++++++
 5 files changed, 319 insertions(+)
 create mode 100644 mm/swap_backup_folio.c
 create mode 100644 mm/swap_backup_folio_test.c

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0ceed49516ad..fc38c72fe9ab 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -706,5 +706,15 @@ static inline bool mem_cgroup_swap_full(struct folio *folio)
 }
 #endif
 
+#ifdef CONFIG_SWAP_BACKUP_FOLIO
+swp_entry_t swap_backup_folio(struct folio *folio, bool writeback,
+                              gfp_t folio_gfp, gfp_t alloc_gfp);
+
+int swap_copy_folio(swp_entry_t swap, struct page *page, unsigned long index,
+                    bool killable);
+
+void swap_drop_folio(swp_entry_t swap);
+#endif
+
 #endif /* __KERNEL__*/
 #endif /* _LINUX_SWAP_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index ff7b209dec05..b9e0a40e9e1a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -191,6 +191,10 @@ config ZSMALLOC_STAT
 	  information to userspace via debugfs.
 	  If unsure, say N.
 
+config SWAP_BACKUP_FOLIO
+	bool
+	default n
+
 menu "SLAB allocator options"
 
 choice
@@ -1183,6 +1187,20 @@ config LRU_GEN_STATS
 	  This option has a per-memcg and per-node memory overhead.
 # }
 
+config SWAP_BACKUP_FOLIO_KUNIT_TEST
+	tristate "KUnit tests for swap_backup_folio() functionality" if !KUNIT_ALL_TESTS
+	depends on SWAP && KUNIT && SWAP_BACKUP_FOLIO
+	help
+	  This builds unit tests for the swap_backup_folio() functionality.
+	  This option is not useful for distributions or general kernels,
+	  but only for kernel developers working on MM swap functionality.
+
+	  For more information on KUnit and unit tests in general,
+	  please refer to the KUnit documentation in
+	  Documentation/dev-tools/kunit/.
+
+	  If in doubt, say "N".
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5b3e29..91cb9c73e16e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -138,3 +138,5 @@ obj-$(CONFIG_IO_MAPPING) += io-mapping.o
 obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
+obj-$(CONFIG_SWAP_BACKUP_FOLIO) += swap_backup_folio.o
+obj-$(CONFIG_SWAP_BACKUP_FOLIO_KUNIT_TEST) += swap_backup_folio_test.o
diff --git a/mm/swap_backup_folio.c b/mm/swap_backup_folio.c
new file mode 100644
index 000000000000..f77ca478e625
--- /dev/null
+++ b/mm/swap_backup_folio.c
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+
+#include
+#include "swap.h"
+
+/**
+ * swap_backup_folio() - Insert an isolated folio into the swap-cache.
+ * @folio: The folio to insert.
+ * @writeback: Whether to perform immediate writeback.
+ * @folio_gfp: The gfp value used when the folio was allocated. Used for
+ *             cgroup charging only.
+ * @alloc_gfp: The gfp value used for swap cache radix tree memory allocations.
+ *
+ * Insert a folio into the swap cache and get a swp_entry_t back as a reference.
+ * If the swap cache folio should be subject to immediate writeback to
+ * a swap device, @writeback should be set to true.
+ * After a call to swap_backup_folio() the caller can
+ * drop its folio reference and use swap_copy_folio() to get the folio
+ * content back, or swap_drop_folio() to drop it completely.
+ * Currently only PAGE_SIZE folios work, or if CONFIG_THP_SWAP is
+ * enabled, HPAGE_PMD_NR * PAGE_SIZE may work as well, although that
+ * needs further testing.
+ *
+ * Return: A swp_entry_t. If its .val field is zero, an error occurred.
+ */
+swp_entry_t swap_backup_folio(struct folio *folio, bool writeback,
+                              gfp_t folio_gfp, gfp_t alloc_gfp)
+{
+        swp_entry_t swap = {};
+
+        if (VM_WARN_ON_ONCE_FOLIO(folio_nr_pages(folio) != 1 &&
+                                  !(IS_ENABLED(CONFIG_THP_SWAP) &&
+                                    folio_nr_pages(folio) == HPAGE_PMD_NR),
+                                  folio))
+                return swap;
+
+        if (VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio) != 1 ||
+                                  folio_test_lru(folio) ||
+                                  folio_test_locked(folio), folio))
+                return swap;
+
+        /*
+         * Typically called from reclaim so use folio_trylock. If the folio
+         * is isolated with refcount == 1, then this trylock should always
+         * succeed.
+         */
+        if (!folio_trylock(folio))
+                return swap;
+
+        __folio_mark_uptodate(folio);
+        __folio_set_swapbacked(folio);
+
+        mem_cgroup_charge(folio, NULL, folio_gfp);
+
+        swap = folio_alloc_swap(folio);
+        if (!swap.val)
+                goto out;
+
+        if (add_to_swap_cache(folio, swap, alloc_gfp, NULL) == 0) {
+                int ret = -EINVAL;
+
+                swap_shmem_alloc(swap);
+                folio_add_lru(folio);
+                lru_add_drain();
+
+                /* Stolen from pageout(). */
+                if (writeback && folio_clear_dirty_for_io(folio)) {
+                        struct writeback_control wbc = {
+                                .sync_mode = WB_SYNC_NONE,
+                                .nr_to_write = SWAP_CLUSTER_MAX,
+                                .range_start = 0,
+                                .range_end = LLONG_MAX,
+                                .for_reclaim = 1,
+                        };
+
+                        folio_set_reclaim(folio);
+                        ret = swap_writepage(folio_page(folio, 0), &wbc);
+                        if (!folio_test_writeback(folio))
+                                folio_clear_reclaim(folio);
+                }
+
+                if (ret)
+                        folio_unlock(folio);
+                return swap;
+        }
+
+        put_swap_folio(folio, swap);
+out:
+        folio_clear_swapbacked(folio);
+        folio_mark_dirty(folio);
+        folio_unlock(folio);
+        mem_cgroup_uncharge(folio);
+
+        return swap;
+}
+EXPORT_SYMBOL(swap_backup_folio);
+
+/**
+ * swap_copy_folio() - Copy folio content that was previously backed up
+ * @swap: The swp_entry_t returned from swap_backup_folio().
+ * @to_page: The page to copy to.
+ * @index: The index of the source page in the folio represented by @swap.
+ * @killable: Whether to perform sleeping operations killable.
+ *
+ * Copies content that was previously backed up using swap_backup_folio()
+ * to the destination page @to_page. The swp_entry_t @swap is not freed, and
+ * copying can thus be done multiple times using @swap.
+ *
+ * Return: Zero on success, negative error code on error. In particular,
+ * -EINTR may be returned if a fatal signal is pending during wait for
+ * page-lock or wait for writeback and @killable is set to true.
+ */
+int swap_copy_folio(swp_entry_t swap, struct page *to_page,
+                    unsigned long index, bool killable)
+{
+        struct folio *folio = swap_cache_get_folio(swap, NULL, 0);
+        int ret = 0;
+
+        if (!folio) {
+                struct vm_fault vmf = {};
+                struct page *page;
+
+                page = swap_cluster_readahead(swap, GFP_HIGHUSER_MOVABLE, &vmf);
+                if (page)
+                        folio = page_folio(page);
+        }
+
+        if (!folio)
+                return -ENOMEM;
+
+        if (killable) {
+                ret = __folio_lock_killable(folio);
+                if (ret)
+                        goto out_err;
+        } else {
+                folio_lock(folio);
+        }
+
+        VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio) ||
+                              folio_swap_entry(folio).val != swap.val ||
+                              !folio_test_uptodate(folio), folio);
+
+        if (killable) {
+                ret = folio_wait_writeback_killable(folio);
+                if (ret)
+                        goto out_err;
+        } else {
+                folio_wait_writeback(folio);
+        }
+
+        arch_swap_restore(swap, folio);
+        folio_unlock(folio);
+
+        copy_highpage(to_page, folio_page(folio, index));
out_err:
+        folio_put(folio);
+        return ret;
+}
+EXPORT_SYMBOL(swap_copy_folio);
+
+/**
+ * swap_drop_folio - Drop a swap entry and its associated swap cache folio
+ * if any.
+ * @swap: The swap entry.
+ *
+ * Releases resources associated with a swap entry returned from
+ * swap_backup_folio().
+ */
+void swap_drop_folio(swp_entry_t swap)
+{
+        free_swap_and_cache(swap);
+}
+EXPORT_SYMBOL(swap_drop_folio);
diff --git a/mm/swap_backup_folio_test.c b/mm/swap_backup_folio_test.c
new file mode 100644
index 000000000000..34cde56d2a57
--- /dev/null
+++ b/mm/swap_backup_folio_test.c
@@ -0,0 +1,111 @@
+// SPDX-License-Identifier: MIT or GPL-2.0
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#include
+#include
+#include
+#include
+
+struct gpu_swapped_page {
+        struct list_head link;
+        swp_entry_t swap;
+};
+
+static void swap_backup_test(struct kunit *test)
+{
+        gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+        struct gpu_swapped_page *gsp, *next;
+        struct folio *folio;
+        LIST_HEAD(list);
+        long i = 0L;
+        long num_folios;
+        unsigned long avail_ram;
+
+        avail_ram = si_mem_available() << PAGE_SHIFT;
+        kunit_info(test, "Available RAM is %lu MiB.\n", avail_ram / SZ_1M);
+        num_folios = get_nr_swap_pages();
+        num_folios = min_t(long, num_folios, avail_ram >> PAGE_SHIFT);
+
+        kunit_info(test, "Trying %ld swap pages\n", num_folios);
+
+        do {
+                /*
+                 * Expect folio_alloc() (out-of-physical-memory) or
+                 * swap_backup_folio() (out-of-swap-space) to fail before
+                 * this kzalloc().
+                 */
+                gsp = kzalloc(sizeof(*gsp), GFP_KERNEL);
+                if (!gsp) {
+                        KUNIT_FAIL(test, "alloc gsp failed.\n");
+                        break;
+                }
+
+                folio = vma_alloc_folio(gfp, 0, NULL, 0, false);
+                if (!folio) {
+                        kunit_info(test, "folio_alloc failed.\n");
+                        kfree(gsp);
+                        break;
+                }
+
+                folio_mark_dirty(folio);
+
+                /* Use true instead of false here to trigger immediate writeback.
+                 */
+                gsp->swap = swap_backup_folio(folio, false, gfp,
+                                              GFP_KERNEL | __GFP_HIGH |
+                                              __GFP_NOWARN);
+                if (gsp->swap.val == 0) {
+                        kunit_info(test, "swap_backup_folio() failed.\n");
+                        folio_put(folio);
+                        kfree(gsp);
+                        break;
+                }
+
+                list_add_tail(&gsp->link, &list);
+                folio_put(folio);
+                cond_resched();
+                if (i % 1000 == 0)
+                        kunit_info(test, "Backed up %ld\n", i);
+        } while (i++ < num_folios);
+
+        i = 0;
+        list_for_each_entry_safe(gsp, next, &list, link) {
+                int ret;
+
+                folio = folio_alloc(GFP_HIGHUSER, 0);
+                if (!folio) {
+                        KUNIT_FAIL(test, "Allocation of readback folio failed.\n");
+                } else {
+                        ret = swap_copy_folio(gsp->swap, folio_page(folio, 0),
+                                              0, false);
+                        if (ret)
+                                KUNIT_FAIL(test, "swap_copy_folio() failed.\n");
+                        folio_put(folio);
+                }
+                swap_drop_folio(gsp->swap);
+                list_del(&gsp->link);
+                kfree(gsp);
+                i++;
+                cond_resched();
+                if (i % 1000 == 0)
+                        kunit_info(test, "Recovered %ld\n", i);
+        }
+
+        kunit_info(test, "Recover_total: %ld\n", i);
+}
+
+static struct kunit_case swap_backup_tests[] = {
+        KUNIT_CASE(swap_backup_test),
+        {}
+};
+
+static struct kunit_suite swap_backup_test_suite = {
+        .name = "swap_backup_folio",
+        .test_cases = swap_backup_tests,
+};
+
+kunit_test_suite(swap_backup_test_suite);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("Dual MIT/GPL");
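
---

To make the intended calling convention concrete, here is a minimal, hypothetical
driver-side sketch (not part of the patch). The struct and helper names
(struct my_backup, my_backup_pages(), my_recover_pages()) are invented for
illustration, error handling is abbreviated, and the assumptions mirror the
kunit test above: order-0 folios that are isolated, unlocked and held with a
single reference when they are handed to swap_backup_folio(). Only
swap_backup_folio(), swap_copy_folio() and swap_drop_folio() come from this
series.

        /* Illustrative sketch only -- driver-side names below are hypothetical. */
        #include <linux/swap.h>
        #include <linux/slab.h>
        #include <linux/overflow.h>

        struct my_backup {
                unsigned long nr_pages;
                swp_entry_t entries[];  /* one swap entry per backed-up page */
        };

        /* Back up @nr_pages isolated, unlocked order-0 folios on shrink. */
        static struct my_backup *my_backup_pages(struct folio **folios,
                                                 unsigned long nr_pages)
        {
                struct my_backup *backup;
                unsigned long i;

                backup = kvzalloc(struct_size(backup, entries, nr_pages),
                                  GFP_KERNEL);
                if (!backup)
                        return NULL;

                backup->nr_pages = nr_pages;
                for (i = 0; i < nr_pages; ++i) {
                        /* false: let reclaim write the folio out lazily. */
                        backup->entries[i] = swap_backup_folio(folios[i], false,
                                                               GFP_HIGHUSER_MOVABLE,
                                                               GFP_KERNEL);
                        if (!backup->entries[i].val)
                                break;  /* out of swap; driver keeps folio i */
                        /* The swap cache now holds the contents. */
                        folio_put(folios[i]);
                }

                return backup;
        }

        /* Copy contents back into newly allocated pages, then drop the swap. */
        static int my_recover_pages(struct my_backup *backup, struct page **pages)
        {
                unsigned long i;
                int ret = 0;

                for (i = 0; i < backup->nr_pages && backup->entries[i].val; ++i) {
                        /* killable == true: may return -EINTR on fatal signal. */
                        ret = swap_copy_folio(backup->entries[i], pages[i], 0, true);
                        if (ret)
                                break;
                        swap_drop_folio(backup->entries[i]);
                }

                kvfree(backup);
                return ret;
        }

As in the kunit test, the caller drops its own folio reference once
swap_backup_folio() succeeds, since the swap cache then holds the contents,
and on recovery it copies into freshly allocated pages before releasing the
entries with swap_drop_folio(). Since CONFIG_SWAP_BACKUP_FOLIO is a
non-user-visible bool, a driver using the interface would be expected to
select it from its own Kconfig.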