From patchwork Thu Nov 21 22:25:18 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13882401
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: axboe@kernel.dk, bala.seshasayee@linux.intel.com, chrisl@kernel.org,
    david@redhat.com, hannes@cmpxchg.org, kanchana.p.sridhar@intel.com,
    kasong@tencent.com, linux-block@vger.kernel.org, minchan@kernel.org,
    nphamcs@gmail.com, ryan.roberts@arm.com, senozhatsky@chromium.org,
    surenb@google.com, terrelln@fb.com, usamaarif642@gmail.com,
    v-songbaohua@oppo.com, wajdi.k.feghali@intel.com, willy@infradead.org,
    ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com,
    zhengtangquan@oppo.com, zhouchengming@bytedance.com
Subject: [PATCH RFC v3 1/4] mm: zsmalloc: support objects compressed based on multiple pages
Date: Fri, 22 Nov 2024 11:25:18 +1300
Message-Id: <20241121222521.83458-2-21cnbao@gmail.com>
In-Reply-To: <20241121222521.83458-1-21cnbao@gmail.com>
References: <20241121222521.83458-1-21cnbao@gmail.com>

From: Tangquan Zheng

This patch adds support in zsmalloc for storing objects compressed at
multi-page granularity. Previously, a large folio with nr_pages subpages
was compressed one subpage at a time, each at PAGE_SIZE granularity.
Compressing at a larger granularity conserves both memory and CPU.

The granularity is controlled by a new configuration option,
ZSMALLOC_MULTI_PAGES_ORDER, which defaults to 2 to match the minimum order
of anonymous mTHP. As a result, a large folio with 8 subpages is now split
into 2 parts instead of 8.

Introducing the multi-pages feature requires creating new size classes to
accommodate these larger objects.
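The "2 parts instead of 8" arithmetic can be sketched as follows (a minimal
user-space illustration, not part of the patch; it assumes a 4KiB page size
and the default order of 2, and the PAGE_SIZE_BYTES name is only used here
to avoid clashing with system headers):

/*
 * Rough sketch of the granularity arithmetic described above.
 * Assumes a 4KiB page size and ZSMALLOC_MULTI_PAGES_ORDER = 2.
 */
#include <stdio.h>

#define PAGE_SIZE_BYTES			4096UL
#define ZSMALLOC_MULTI_PAGES_ORDER	2
#define ZSMALLOC_MULTI_PAGES_NR		(1UL << ZSMALLOC_MULTI_PAGES_ORDER)
#define ZSMALLOC_MULTI_PAGES_SIZE	(PAGE_SIZE_BYTES * ZSMALLOC_MULTI_PAGES_NR)

int main(void)
{
	unsigned long nr_subpages = 8;			/* a 32KiB folio */
	unsigned long folio_size = nr_subpages * PAGE_SIZE_BYTES;

	/* Old behaviour: one compression unit per subpage. */
	printf("PAGE_SIZE units:  %lu\n", nr_subpages);	/* prints 8 */

	/* New behaviour: one unit per 16KiB multi-page. */
	printf("multi-page units: %lu\n",
	       folio_size / ZSMALLOC_MULTI_PAGES_SIZE);	/* prints 2 */

	return 0;
}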
Signed-off-by: Tangquan Zheng Co-developed-by: Barry Song Signed-off-by: Barry Song --- drivers/block/zram/zram_drv.c | 3 +- include/linux/zsmalloc.h | 10 +- mm/Kconfig | 18 +++ mm/zsmalloc.c | 235 ++++++++++++++++++++++++++-------- 4 files changed, 207 insertions(+), 59 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 3dee026988dc..6cb7d1e57362 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -1461,8 +1461,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize) return false; } - if (!huge_class_size) - huge_class_size = zs_huge_class_size(zram->mem_pool); + huge_class_size = zs_huge_class_size(zram->mem_pool, 0); for (index = 0; index < num_pages; index++) spin_lock_init(&zram->table[index].lock); diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h index a48cd0ffe57d..9fa3e7669557 100644 --- a/include/linux/zsmalloc.h +++ b/include/linux/zsmalloc.h @@ -33,6 +33,14 @@ enum zs_mapmode { */ }; +enum zsmalloc_type { + ZSMALLOC_TYPE_BASEPAGE, +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES + ZSMALLOC_TYPE_MULTI_PAGES, +#endif + ZSMALLOC_TYPE_MAX, +}; + struct zs_pool_stats { /* How many pages were migrated (freed) */ atomic_long_t pages_compacted; @@ -46,7 +54,7 @@ void zs_destroy_pool(struct zs_pool *pool); unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags); void zs_free(struct zs_pool *pool, unsigned long obj); -size_t zs_huge_class_size(struct zs_pool *pool); +size_t zs_huge_class_size(struct zs_pool *pool, enum zsmalloc_type type); void *zs_map_object(struct zs_pool *pool, unsigned long handle, enum zs_mapmode mm); diff --git a/mm/Kconfig b/mm/Kconfig index 33fa51d608dc..6b302b66fc0a 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -237,6 +237,24 @@ config ZSMALLOC_CHAIN_SIZE For more information, see zsmalloc documentation. +config ZSMALLOC_MULTI_PAGES + bool "support zsmalloc multiple pages" + depends on ZSMALLOC && !CONFIG_HIGHMEM + help + This option configures zsmalloc to support allocations larger than + PAGE_SIZE, enabling compression across multiple pages. The size of + these multiple pages is determined by the configured + ZSMALLOC_MULTI_PAGES_ORDER. + +config ZSMALLOC_MULTI_PAGES_ORDER + int "zsmalloc multiple pages order" + default 2 + range 1 9 + depends on ZSMALLOC_MULTI_PAGES + help + This option is used to configure zsmalloc to support the compression + of multiple pages. + menu "Slab allocator options" config SLUB diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 64b66a4d3e6e..ab57266b43f6 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -70,6 +70,12 @@ #define ZSPAGE_MAGIC 0x58 +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES +#define ZSMALLOC_MULTI_PAGES_ORDER (_AC(CONFIG_ZSMALLOC_MULTI_PAGES_ORDER, UL)) +#define ZSMALLOC_MULTI_PAGES_NR (1 << ZSMALLOC_MULTI_PAGES_ORDER) +#define ZSMALLOC_MULTI_PAGES_SIZE (PAGE_SIZE * ZSMALLOC_MULTI_PAGES_NR) +#endif + /* * This must be power of 2 and greater than or equal to sizeof(link_free). 
* These two conditions ensure that any 'struct link_free' itself doesn't @@ -120,7 +126,8 @@ #define HUGE_BITS 1 #define FULLNESS_BITS 4 -#define CLASS_BITS 8 +#define CLASS_BITS 9 +#define ISOLATED_BITS 5 #define MAGIC_VAL_BITS 8 #define ZS_MAX_PAGES_PER_ZSPAGE (_AC(CONFIG_ZSMALLOC_CHAIN_SIZE, UL)) @@ -129,7 +136,11 @@ #define ZS_MIN_ALLOC_SIZE \ MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS)) /* each chunk includes extra space to keep handle */ +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES +#define ZS_MAX_ALLOC_SIZE (ZSMALLOC_MULTI_PAGES_SIZE) +#else #define ZS_MAX_ALLOC_SIZE PAGE_SIZE +#endif /* * On systems with 4K page size, this gives 255 size classes! There is a @@ -144,9 +155,22 @@ * ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN * (reason above) */ -#define ZS_SIZE_CLASS_DELTA (PAGE_SIZE >> CLASS_BITS) -#define ZS_SIZE_CLASSES (DIV_ROUND_UP(ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE, \ - ZS_SIZE_CLASS_DELTA) + 1) + +#define ZS_PAGE_SIZE_CLASS_DELTA (PAGE_SIZE >> (CLASS_BITS - 1)) +#define ZS_PAGE_SIZE_CLASSES (DIV_ROUND_UP(PAGE_SIZE - ZS_MIN_ALLOC_SIZE, \ + ZS_PAGE_SIZE_CLASS_DELTA) + 1) + +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES +#define ZS_MULTI_PAGES_SIZE_CLASS_DELTA (ZSMALLOC_MULTI_PAGES_SIZE >> (CLASS_BITS - 1)) +#define ZS_MULTI_PAGES_SIZE_CLASSES (DIV_ROUND_UP(ZS_MAX_ALLOC_SIZE - PAGE_SIZE, \ + ZS_MULTI_PAGES_SIZE_CLASS_DELTA) + 1) +#endif + +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES +#define ZS_SIZE_CLASSES (ZS_PAGE_SIZE_CLASSES + ZS_MULTI_PAGES_SIZE_CLASSES) +#else +#define ZS_SIZE_CLASSES (ZS_PAGE_SIZE_CLASSES) +#endif /* * Pages are distinguished by the ratio of used memory (that is the ratio @@ -182,7 +206,8 @@ struct zs_size_stat { static struct dentry *zs_stat_root; #endif -static size_t huge_class_size; +/* huge_class_size[0] for page, huge_class_size[1] for multiple pages. 
*/ +static size_t huge_class_size[ZSMALLOC_TYPE_MAX]; struct size_class { spinlock_t lock; @@ -260,6 +285,29 @@ struct zspage { rwlock_t lock; }; +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES +static inline unsigned int class_size_to_zs_order(unsigned long size) +{ + unsigned int order = 0; + + /* used large order to alloc page for zspage when class_size > PAGE_SIZE */ + if (size > PAGE_SIZE) + return ZSMALLOC_MULTI_PAGES_ORDER; + + return order; +} +#else +static inline unsigned int class_size_to_zs_order(unsigned long size) +{ + return 0; +} +#endif + +static inline unsigned long class_size_to_zs_size(unsigned long size) +{ + return PAGE_SIZE * (1 << class_size_to_zs_order(size)); +} + struct mapping_area { local_lock_t lock; char *vm_buf; /* copy buffer for objects that span pages */ @@ -510,11 +558,22 @@ static int get_size_class_index(int size) { int idx = 0; +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES + if (size > PAGE_SIZE + ZS_HANDLE_SIZE) { + idx = ZS_PAGE_SIZE_CLASSES; + idx += DIV_ROUND_UP(size - PAGE_SIZE, + ZS_MULTI_PAGES_SIZE_CLASS_DELTA); + + return min_t(int, ZS_SIZE_CLASSES - 1, idx); + } +#endif + + idx = 0; if (likely(size > ZS_MIN_ALLOC_SIZE)) - idx = DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE, - ZS_SIZE_CLASS_DELTA); + idx += DIV_ROUND_UP(size - ZS_MIN_ALLOC_SIZE, + ZS_PAGE_SIZE_CLASS_DELTA); - return min_t(int, ZS_SIZE_CLASSES - 1, idx); + return min_t(int, ZS_PAGE_SIZE_CLASSES - 1, idx); } static inline void class_stat_add(struct size_class *class, int type, @@ -564,11 +623,11 @@ static int zs_stats_size_show(struct seq_file *s, void *v) unsigned long total_freeable = 0; unsigned long inuse_totals[NR_FULLNESS_GROUPS] = {0, }; - seq_printf(s, " %5s %5s %9s %9s %9s %9s %9s %9s %9s %9s %9s %9s %9s %13s %10s %10s %16s %8s\n", - "class", "size", "10%", "20%", "30%", "40%", + seq_printf(s, " %5s %5s %5s %9s %9s %9s %9s %9s %9s %9s %9s %9s %9s %9s %13s %10s %10s %16s %16s %8s\n", + "class", "size", "order", "10%", "20%", "30%", "40%", "50%", "60%", "70%", "80%", "90%", "99%", "100%", "obj_allocated", "obj_used", "pages_used", - "pages_per_zspage", "freeable"); + "pages_per_zspage", "objs_per_zspage", "freeable"); for (i = 0; i < ZS_SIZE_CLASSES; i++) { @@ -579,7 +638,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v) spin_lock(&class->lock); - seq_printf(s, " %5u %5u ", i, class->size); + seq_printf(s, " %5u %5u %5u", i, class->size, class_size_to_zs_order(class->size)); for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) { inuse_totals[fg] += class_stat_read(class, fg); seq_printf(s, "%9lu ", class_stat_read(class, fg)); @@ -594,9 +653,9 @@ static int zs_stats_size_show(struct seq_file *s, void *v) pages_used = obj_allocated / objs_per_zspage * class->pages_per_zspage; - seq_printf(s, "%13lu %10lu %10lu %16d %8lu\n", + seq_printf(s, "%13lu %10lu %10lu %16d %16d %8lu\n", obj_allocated, obj_used, pages_used, - class->pages_per_zspage, freeable); + class->pages_per_zspage, objs_per_zspage, freeable); total_objs += obj_allocated; total_used_objs += obj_used; @@ -811,7 +870,8 @@ static inline bool obj_allocated(struct page *page, void *obj, static void reset_page(struct page *page) { - __ClearPageMovable(page); + if (PageMovable(page)) + __ClearPageMovable(page); ClearPagePrivate(page); set_page_private(page, 0); page->index = 0; @@ -863,7 +923,8 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class, cache_free_zspage(pool, zspage); class_stat_sub(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage); - atomic_long_sub(class->pages_per_zspage, 
&pool->pages_allocated); + atomic_long_sub(class->pages_per_zspage * (1 << class_size_to_zs_order(class->size)), + &pool->pages_allocated); } static void free_zspage(struct zs_pool *pool, struct size_class *class, @@ -892,6 +953,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage) unsigned int freeobj = 1; unsigned long off = 0; struct page *page = get_first_page(zspage); + unsigned long page_size = class_size_to_zs_size(class->size); while (page) { struct page *next_page; @@ -903,7 +965,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage) vaddr = kmap_local_page(page); link = (struct link_free *)vaddr + off / sizeof(*link); - while ((off += class->size) < PAGE_SIZE) { + while ((off += class->size) < page_size) { link->next = freeobj++ << OBJ_TAG_BITS; link += class->size / sizeof(*link); } @@ -925,7 +987,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage) } kunmap_local(vaddr); page = next_page; - off %= PAGE_SIZE; + off %= page_size; } set_freeobj(zspage, 0); @@ -975,6 +1037,8 @@ static struct zspage *alloc_zspage(struct zs_pool *pool, struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE]; struct zspage *zspage = cache_alloc_zspage(pool, gfp); + unsigned int order = class_size_to_zs_order(class->size); + if (!zspage) return NULL; @@ -984,12 +1048,14 @@ static struct zspage *alloc_zspage(struct zs_pool *pool, for (i = 0; i < class->pages_per_zspage; i++) { struct page *page; - page = alloc_page(gfp); + if (order > 0) + gfp &= ~__GFP_MOVABLE; + page = alloc_pages(gfp | __GFP_COMP, order); if (!page) { while (--i >= 0) { dec_zone_page_state(pages[i], NR_ZSPAGES); __ClearPageZsmalloc(pages[i]); - __free_page(pages[i]); + __free_pages(pages[i], order); } cache_free_zspage(pool, zspage); return NULL; @@ -1047,7 +1113,9 @@ static void *__zs_map_object(struct mapping_area *area, struct page *pages[2], int off, int size) { size_t sizes[2]; + void *addr; char *buf = area->vm_buf; + unsigned long page_size = class_size_to_zs_size(size); /* disable page faults to match kmap_local_page() return conditions */ pagefault_disable(); @@ -1056,12 +1124,16 @@ static void *__zs_map_object(struct mapping_area *area, if (area->vm_mm == ZS_MM_WO) goto out; - sizes[0] = PAGE_SIZE - off; + sizes[0] = page_size - off; sizes[1] = size - sizes[0]; /* copy object to per-cpu buffer */ - memcpy_from_page(buf, pages[0], off, sizes[0]); - memcpy_from_page(buf + sizes[0], pages[1], 0, sizes[1]); + addr = kmap_local_page(pages[0]); + memcpy(buf, addr + off, sizes[0]); + kunmap_local(addr); + addr = kmap_local_page(pages[1]); + memcpy(buf + sizes[0], addr, sizes[1]); + kunmap_local(addr); out: return area->vm_buf; } @@ -1070,7 +1142,9 @@ static void __zs_unmap_object(struct mapping_area *area, struct page *pages[2], int off, int size) { size_t sizes[2]; + void *addr; char *buf; + unsigned long page_size = class_size_to_zs_size(size); /* no write fastpath */ if (area->vm_mm == ZS_MM_RO) @@ -1081,12 +1155,16 @@ static void __zs_unmap_object(struct mapping_area *area, size -= ZS_HANDLE_SIZE; off += ZS_HANDLE_SIZE; - sizes[0] = PAGE_SIZE - off; + sizes[0] = page_size - off; sizes[1] = size - sizes[0]; /* copy per-cpu buffer to object */ - memcpy_to_page(pages[0], off, buf, sizes[0]); - memcpy_to_page(pages[1], 0, buf + sizes[0], sizes[1]); + addr = kmap_local_page(pages[0]); + memcpy(addr + off, buf, sizes[0]); + kunmap_local(addr); + addr = kmap_local_page(pages[1]); + memcpy(addr, buf + sizes[0], sizes[1]); + kunmap_local(addr); out: /* enable page faults to 
match kunmap_local() return conditions */ @@ -1184,6 +1262,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle, struct mapping_area *area; struct page *pages[2]; void *ret; + unsigned long page_size; + unsigned long page_mask; /* * Because we use per-cpu mapping areas shared among the @@ -1208,12 +1288,14 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle, read_unlock(&pool->migrate_lock); class = zspage_class(pool, zspage); - off = offset_in_page(class->size * obj_idx); + page_size = class_size_to_zs_size(class->size); + page_mask = ~(page_size - 1); + off = (class->size * obj_idx) & ~page_mask; local_lock(&zs_map_area.lock); area = this_cpu_ptr(&zs_map_area); area->vm_mm = mm; - if (off + class->size <= PAGE_SIZE) { + if (off + class->size <= page_size) { /* this object is contained entirely within a page */ area->vm_addr = kmap_local_page(page); ret = area->vm_addr + off; @@ -1243,15 +1325,20 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle) struct size_class *class; struct mapping_area *area; + unsigned long page_size; + unsigned long page_mask; obj = handle_to_obj(handle); obj_to_location(obj, &page, &obj_idx); zspage = get_zspage(page); class = zspage_class(pool, zspage); - off = offset_in_page(class->size * obj_idx); + + page_size = class_size_to_zs_size(class->size); + page_mask = ~(page_size - 1); + off = (class->size * obj_idx) & ~page_mask; area = this_cpu_ptr(&zs_map_area); - if (off + class->size <= PAGE_SIZE) + if (off + class->size <= page_size) kunmap_local(area->vm_addr); else { struct page *pages[2]; @@ -1281,9 +1368,9 @@ EXPORT_SYMBOL_GPL(zs_unmap_object); * * Return: the size (in bytes) of the first huge zsmalloc &size_class. */ -size_t zs_huge_class_size(struct zs_pool *pool) +size_t zs_huge_class_size(struct zs_pool *pool, enum zsmalloc_type type) { - return huge_class_size; + return huge_class_size[type]; } EXPORT_SYMBOL_GPL(zs_huge_class_size); @@ -1298,13 +1385,21 @@ static unsigned long obj_malloc(struct zs_pool *pool, struct page *m_page; unsigned long m_offset; void *vaddr; + unsigned long page_size; + unsigned long page_mask; + unsigned long page_shift; class = pool->size_class[zspage->class]; obj = get_freeobj(zspage); offset = obj * class->size; - nr_page = offset >> PAGE_SHIFT; - m_offset = offset_in_page(offset); + page_size = class_size_to_zs_size(class->size); + page_shift = PAGE_SHIFT + class_size_to_zs_order(class->size); + page_mask = ~(page_size - 1); + + nr_page = offset >> page_shift; + m_offset = offset & ~page_mask; + m_page = get_first_page(zspage); for (i = 0; i < nr_page; i++) @@ -1385,12 +1480,14 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp) obj_malloc(pool, zspage, handle); newfg = get_fullness_group(class, zspage); insert_zspage(class, zspage, newfg); - atomic_long_add(class->pages_per_zspage, &pool->pages_allocated); + atomic_long_add(class->pages_per_zspage * (1 << class_size_to_zs_order(class->size)), + &pool->pages_allocated); class_stat_add(class, ZS_OBJS_ALLOCATED, class->objs_per_zspage); class_stat_add(class, ZS_OBJS_INUSE, 1); /* We completely set up zspage so mark them as movable */ - SetZsPageMovable(pool, zspage); + if (class_size_to_zs_order(class->size) == 0) + SetZsPageMovable(pool, zspage); out: spin_unlock(&class->lock); @@ -1406,9 +1503,14 @@ static void obj_free(int class_size, unsigned long obj) unsigned long f_offset; unsigned int f_objidx; void *vaddr; + unsigned long page_size; + unsigned long page_mask; obj_to_location(obj, &f_page, 
&f_objidx); - f_offset = offset_in_page(class_size * f_objidx); + page_size = class_size_to_zs_size(class_size); + page_mask = ~(page_size - 1); + + f_offset = (class_size * f_objidx) & ~page_mask; zspage = get_zspage(f_page); vaddr = kmap_local_page(f_page); @@ -1469,20 +1571,22 @@ static void zs_object_copy(struct size_class *class, unsigned long dst, void *s_addr, *d_addr; int s_size, d_size, size; int written = 0; + unsigned long page_size = class_size_to_zs_size(class->size); + unsigned long page_mask = ~(page_size - 1); s_size = d_size = class->size; obj_to_location(src, &s_page, &s_objidx); obj_to_location(dst, &d_page, &d_objidx); - s_off = offset_in_page(class->size * s_objidx); - d_off = offset_in_page(class->size * d_objidx); + s_off = (class->size * s_objidx) & ~page_mask; + d_off = (class->size * d_objidx) & ~page_mask; - if (s_off + class->size > PAGE_SIZE) - s_size = PAGE_SIZE - s_off; + if (s_off + class->size > page_size) + s_size = page_size - s_off; - if (d_off + class->size > PAGE_SIZE) - d_size = PAGE_SIZE - d_off; + if (d_off + class->size > page_size) + d_size = page_size - d_off; s_addr = kmap_local_page(s_page); d_addr = kmap_local_page(d_page); @@ -1507,7 +1611,7 @@ static void zs_object_copy(struct size_class *class, unsigned long dst, * kunmap_local(d_addr). For more details see * Documentation/mm/highmem.rst. */ - if (s_off >= PAGE_SIZE) { + if (s_off >= page_size) { kunmap_local(d_addr); kunmap_local(s_addr); s_page = get_next_page(s_page); @@ -1517,7 +1621,7 @@ static void zs_object_copy(struct size_class *class, unsigned long dst, s_off = 0; } - if (d_off >= PAGE_SIZE) { + if (d_off >= page_size) { kunmap_local(d_addr); d_page = get_next_page(d_page); d_addr = kmap_local_page(d_page); @@ -1541,11 +1645,12 @@ static unsigned long find_alloced_obj(struct size_class *class, int index = *obj_idx; unsigned long handle = 0; void *addr = kmap_local_page(page); + unsigned long page_size = class_size_to_zs_size(class->size); offset = get_first_obj_offset(page); offset += class->size * index; - while (offset < PAGE_SIZE) { + while (offset < page_size) { if (obj_allocated(page, addr + offset, &handle)) break; @@ -1765,6 +1870,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page, unsigned long handle; unsigned long old_obj, new_obj; unsigned int obj_idx; + unsigned int page_size = PAGE_SIZE; VM_BUG_ON_PAGE(!PageIsolated(page), page); @@ -1781,6 +1887,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page, */ write_lock(&pool->migrate_lock); class = zspage_class(pool, zspage); + page_size = class_size_to_zs_size(class->size); /* * the class lock protects zpage alloc/free in the zspage. @@ -1796,10 +1903,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page, * Here, any user cannot access all objects in the zspage so let's move. 
*/ d_addr = kmap_local_page(newpage); - copy_page(d_addr, s_addr); + memcpy(d_addr, s_addr, page_size); kunmap_local(d_addr); - for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE; + for (addr = s_addr + offset; addr < s_addr + page_size; addr += class->size) { if (obj_allocated(page, addr, &handle)) { @@ -2085,6 +2192,7 @@ static int calculate_zspage_chain_size(int class_size) { int i, min_waste = INT_MAX; int chain_size = 1; + unsigned long page_size = class_size_to_zs_size(class_size); if (is_power_of_2(class_size)) return chain_size; @@ -2092,7 +2200,7 @@ static int calculate_zspage_chain_size(int class_size) for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) { int waste; - waste = (i * PAGE_SIZE) % class_size; + waste = (i * page_size) % class_size; if (waste < min_waste) { min_waste = waste; chain_size = i; @@ -2138,18 +2246,33 @@ struct zs_pool *zs_create_pool(const char *name) * for merging should be larger or equal to current size. */ for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) { - int size; + unsigned int size = 0; int pages_per_zspage; int objs_per_zspage; struct size_class *class; int fullness; + int order = 0; + int idx = ZSMALLOC_TYPE_BASEPAGE; + + if (i < ZS_PAGE_SIZE_CLASSES) + size = ZS_MIN_ALLOC_SIZE + i * ZS_PAGE_SIZE_CLASS_DELTA; +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES + if (i >= ZS_PAGE_SIZE_CLASSES) + size = PAGE_SIZE + (i - ZS_PAGE_SIZE_CLASSES) * + ZS_MULTI_PAGES_SIZE_CLASS_DELTA; +#endif - size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA; if (size > ZS_MAX_ALLOC_SIZE) size = ZS_MAX_ALLOC_SIZE; - pages_per_zspage = calculate_zspage_chain_size(size); - objs_per_zspage = pages_per_zspage * PAGE_SIZE / size; +#ifdef CONFIG_ZSMALLOC_MULTI_PAGES + order = class_size_to_zs_order(size); + if (order == ZSMALLOC_MULTI_PAGES_ORDER) + idx = ZSMALLOC_TYPE_MULTI_PAGES; +#endif + + pages_per_zspage = calculate_zspage_chain_size(size); + objs_per_zspage = pages_per_zspage * PAGE_SIZE * (1 << order) / size; /* * We iterate from biggest down to smallest classes, * so huge_class_size holds the size of the first huge @@ -2157,8 +2280,8 @@ struct zs_pool *zs_create_pool(const char *name) * endup in the huge class. */ if (pages_per_zspage != 1 && objs_per_zspage != 1 && - !huge_class_size) { - huge_class_size = size; + !huge_class_size[idx]) { + huge_class_size[idx] = size; /* * The object uses ZS_HANDLE_SIZE bytes to store the * handle. We need to subtract it, because zs_malloc() @@ -2168,7 +2291,7 @@ struct zs_pool *zs_create_pool(const char *name) * class because it grows by ZS_HANDLE_SIZE extra bytes * right before class lookup. 
*/ - huge_class_size -= (ZS_HANDLE_SIZE - 1); + huge_class_size[idx] -= (ZS_HANDLE_SIZE - 1); } /*

From patchwork Thu Nov 21 22:25:19 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13882402
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: axboe@kernel.dk, bala.seshasayee@linux.intel.com, chrisl@kernel.org,
    david@redhat.com, hannes@cmpxchg.org, kanchana.p.sridhar@intel.com,
    kasong@tencent.com, linux-block@vger.kernel.org, minchan@kernel.org,
    nphamcs@gmail.com, ryan.roberts@arm.com, senozhatsky@chromium.org,
    surenb@google.com, terrelln@fb.com, usamaarif642@gmail.com,
    v-songbaohua@oppo.com, wajdi.k.feghali@intel.com, willy@infradead.org,
    ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com,
    zhengtangquan@oppo.com, zhouchengming@bytedance.com
Subject: [PATCH RFC v3 2/4] zram: support compression at the granularity of multi-pages
Date: Fri, 22 Nov 2024 11:25:19 +1300
Message-Id: <20241121222521.83458-3-21cnbao@gmail.com>
In-Reply-To: <20241121222521.83458-1-21cnbao@gmail.com>
References: <20241121222521.83458-1-21cnbao@gmail.com>

From: Tangquan Zheng

Currently, when a large folio with nr_pages subpages is submitted to zram,
it is divided into nr_pages parts, each compressed and stored individually.
Moving to a larger granularity notably improves compression ratios while
reducing CPU consumption.

This patch allows large folios to be divided at the granularity specified by
ZSMALLOC_MULTI_PAGES_ORDER, which defaults to 2. For instance, a 128KiB folio
is compressed as eight 16KiB multi-pages. The following data illustrates the
time and compressed size for typical anonymous pages gathered from Android
phones.

Signed-off-by: Tangquan Zheng
Co-developed-by: Barry Song
Signed-off-by: Barry Song
---
 drivers/block/zram/Kconfig    |   9 +
 drivers/block/zram/zcomp.c    |  17 +-
 drivers/block/zram/zcomp.h    |  12 +-
 drivers/block/zram/zram_drv.c | 449 +++++++++++++++++++++++++++++++---
 drivers/block/zram/zram_drv.h |  45 ++++
 5 files changed, 495 insertions(+), 37 deletions(-)

diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
index 402b7b175863..716e92c5fdfe 100644
--- a/drivers/block/zram/Kconfig
+++ b/drivers/block/zram/Kconfig
@@ -145,3 +145,12 @@ config ZRAM_MULTI_COMP
 	  re-compress pages using a potentially slower but more effective
 	  compression algorithm. Note, that IDLE page recompression
 	  requires ZRAM_TRACK_ENTRY_ACTIME.
+
+config ZRAM_MULTI_PAGES
+	bool "Enable multiple pages compression and decompression"
+	depends on ZRAM && ZSMALLOC_MULTI_PAGES
+	help
+	  Initially, zram divided large folios into blocks of nr_pages, each sized
+	  equal to PAGE_SIZE, for compression. This option fine-tunes zram to
+	  improve compression granularity by dividing large folios into larger
+	  parts defined by the configuration option ZSMALLOC_MULTI_PAGES_ORDER.
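As a rough user-space sketch (not part of the patch) of the chunking and
head-index rounding used later in this patch, assuming 4KiB pages and the
default CONFIG_ZSMALLOC_MULTI_PAGES_ORDER of 2 (16KiB multi-pages); the
MULTI_PAGES_* names here are local stand-ins for the ZCOMP_MULTI_PAGES_*
macros introduced below:

/*
 * Illustration only: how a 128KiB folio maps onto 16KiB multi-pages and
 * how page indexes round down to the head index of their group.
 */
#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE_BYTES		(1UL << PAGE_SHIFT)
#define MULTI_PAGES_ORDER	2
#define MULTI_PAGES_NR		(1UL << MULTI_PAGES_ORDER)
#define MULTI_PAGES_SIZE	(PAGE_SIZE_BYTES * MULTI_PAGES_NR)

int main(void)
{
	unsigned long folio_size = 128UL * 1024;	/* 128KiB folio */
	unsigned long index;

	printf("%lu bytes -> %lu chunks of %lu bytes\n", folio_size,
	       folio_size / MULTI_PAGES_SIZE, MULTI_PAGES_SIZE); /* 8 chunks */

	/* Every page index in a group shares one head (storage) index. */
	for (index = 0; index < 8; index++)
		printf("page index %2lu -> head index %2lu\n",
		       index, index & ~(MULTI_PAGES_NR - 1));

	return 0;
}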
diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c index bb514403e305..44f5b404495a 100644 --- a/drivers/block/zram/zcomp.c +++ b/drivers/block/zram/zcomp.c @@ -52,6 +52,11 @@ static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *zstrm) static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm) { +#ifdef CONFIG_ZRAM_MULTI_PAGES + unsigned long page_size = ZCOMP_MULTI_PAGES_SIZE; +#else + unsigned long page_size = PAGE_SIZE; +#endif int ret; ret = comp->ops->create_ctx(comp->params, &zstrm->ctx); @@ -62,7 +67,7 @@ static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm) * allocate 2 pages. 1 for compressed data, plus 1 extra for the * case when compressed size is larger than the original one */ - zstrm->buffer = vzalloc(2 * PAGE_SIZE); + zstrm->buffer = vzalloc(2 * page_size); if (!zstrm->buffer) { zcomp_strm_free(comp, zstrm); return -ENOMEM; @@ -119,13 +124,13 @@ void zcomp_stream_put(struct zcomp *comp) } int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm, - const void *src, unsigned int *dst_len) + const void *src, unsigned int src_len, unsigned int *dst_len) { struct zcomp_req req = { .src = src, .dst = zstrm->buffer, - .src_len = PAGE_SIZE, - .dst_len = 2 * PAGE_SIZE, + .src_len = src_len, + .dst_len = src_len * 2, }; int ret; @@ -136,13 +141,13 @@ int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm, } int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm, - const void *src, unsigned int src_len, void *dst) + const void *src, unsigned int src_len, void *dst, unsigned int dst_len) { struct zcomp_req req = { .src = src, .dst = dst, .src_len = src_len, - .dst_len = PAGE_SIZE, + .dst_len = dst_len, }; return comp->ops->decompress(comp->params, &zstrm->ctx, &req); diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h index ad5762813842..471c16be293c 100644 --- a/drivers/block/zram/zcomp.h +++ b/drivers/block/zram/zcomp.h @@ -30,6 +30,13 @@ struct zcomp_ctx { void *context; }; +#ifdef CONFIG_ZRAM_MULTI_PAGES +#define ZCOMP_MULTI_PAGES_ORDER (_AC(CONFIG_ZSMALLOC_MULTI_PAGES_ORDER, UL)) +#define ZCOMP_MULTI_PAGES_NR (1 << ZCOMP_MULTI_PAGES_ORDER) +#define ZCOMP_MULTI_PAGES_SIZE (PAGE_SIZE * ZCOMP_MULTI_PAGES_NR) +#define MULTI_PAGE_SHIFT (ZCOMP_MULTI_PAGES_ORDER + PAGE_SHIFT) +#endif + struct zcomp_strm { local_lock_t lock; /* compression buffer */ @@ -80,8 +87,9 @@ struct zcomp_strm *zcomp_stream_get(struct zcomp *comp); void zcomp_stream_put(struct zcomp *comp); int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm, - const void *src, unsigned int *dst_len); + const void *src, unsigned int src_len, unsigned int *dst_len); int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm, - const void *src, unsigned int src_len, void *dst); + const void *src, unsigned int src_len, void *dst, + unsigned int dst_len); #endif /* _ZCOMP_H_ */ diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index 6cb7d1e57362..90f87894ff3e 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -50,7 +50,7 @@ static unsigned int num_devices = 1; * Pages that compress to sizes equals or greater than this are stored * uncompressed in memory. 
*/ -static size_t huge_class_size; +static size_t huge_class_size[ZSMALLOC_TYPE_MAX]; static const struct block_device_operations zram_devops; @@ -296,11 +296,11 @@ static inline void zram_fill_page(void *ptr, unsigned long len, memset_l(ptr, value, len / sizeof(unsigned long)); } -static bool page_same_filled(void *ptr, unsigned long *element) +static bool page_same_filled(void *ptr, unsigned long *element, unsigned int page_size) { unsigned long *page; unsigned long val; - unsigned int pos, last_pos = PAGE_SIZE / sizeof(*page) - 1; + unsigned int pos, last_pos = page_size / sizeof(*page) - 1; page = (unsigned long *)ptr; val = page[0]; @@ -1426,13 +1426,40 @@ static ssize_t debug_stat_show(struct device *dev, return ret; } +#ifdef CONFIG_ZRAM_MULTI_PAGES +static ssize_t multi_pages_debug_stat_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct zram *zram = dev_to_zram(dev); + ssize_t ret = 0; + + down_read(&zram->init_lock); + ret = scnprintf(buf, PAGE_SIZE, + "zram_bio write/read multi_pages count:%8llu %8llu\n" + "zram_bio failed write/read multi_pages count%8llu %8llu\n" + "zram_bio partial write/read multi_pages count%8llu %8llu\n" + "multi_pages_miss_free %8llu\n", + (u64)atomic64_read(&zram->stats.zram_bio_write_multi_pages_count), + (u64)atomic64_read(&zram->stats.zram_bio_read_multi_pages_count), + (u64)atomic64_read(&zram->stats.multi_pages_failed_writes), + (u64)atomic64_read(&zram->stats.multi_pages_failed_reads), + (u64)atomic64_read(&zram->stats.zram_bio_write_multi_pages_partial_count), + (u64)atomic64_read(&zram->stats.zram_bio_read_multi_pages_partial_count), + (u64)atomic64_read(&zram->stats.multi_pages_miss_free)); + up_read(&zram->init_lock); + + return ret; +} +#endif static DEVICE_ATTR_RO(io_stat); static DEVICE_ATTR_RO(mm_stat); #ifdef CONFIG_ZRAM_WRITEBACK static DEVICE_ATTR_RO(bd_stat); #endif static DEVICE_ATTR_RO(debug_stat); - +#ifdef CONFIG_ZRAM_MULTI_PAGES +static DEVICE_ATTR_RO(multi_pages_debug_stat); +#endif static void zram_meta_free(struct zram *zram, u64 disksize) { size_t num_pages = disksize >> PAGE_SHIFT; @@ -1449,6 +1476,7 @@ static void zram_meta_free(struct zram *zram, u64 disksize) static bool zram_meta_alloc(struct zram *zram, u64 disksize) { size_t num_pages, index; + int i; num_pages = disksize >> PAGE_SHIFT; zram->table = vzalloc(array_size(num_pages, sizeof(*zram->table))); @@ -1461,7 +1489,10 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize) return false; } - huge_class_size = zs_huge_class_size(zram->mem_pool, 0); + for (i = 0; i < ZSMALLOC_TYPE_MAX; i++) { + if (!huge_class_size[i]) + huge_class_size[i] = zs_huge_class_size(zram->mem_pool, i); + } for (index = 0; index < num_pages; index++) spin_lock_init(&zram->table[index].lock); @@ -1476,10 +1507,17 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize) static void zram_free_page(struct zram *zram, size_t index) { unsigned long handle; + int nr_pages = 1; #ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME zram->table[index].ac_time = 0; #endif +#ifdef CONFIG_ZRAM_MULTI_PAGES + if (zram_test_flag(zram, index, ZRAM_COMP_MULTI_PAGES)) { + zram_clear_flag(zram, index, ZRAM_COMP_MULTI_PAGES); + nr_pages = ZCOMP_MULTI_PAGES_NR; + } +#endif zram_clear_flag(zram, index, ZRAM_IDLE); zram_clear_flag(zram, index, ZRAM_INCOMPRESSIBLE); @@ -1503,7 +1541,7 @@ static void zram_free_page(struct zram *zram, size_t index) */ if (zram_test_flag(zram, index, ZRAM_SAME)) { zram_clear_flag(zram, index, ZRAM_SAME); - atomic64_dec(&zram->stats.same_pages); + 
atomic64_sub(nr_pages, &zram->stats.same_pages); goto out; } @@ -1516,7 +1554,7 @@ static void zram_free_page(struct zram *zram, size_t index) atomic64_sub(zram_get_obj_size(zram, index), &zram->stats.compr_data_size); out: - atomic64_dec(&zram->stats.pages_stored); + atomic64_sub(nr_pages, &zram->stats.pages_stored); zram_set_handle(zram, index, 0); zram_set_obj_size(zram, index, 0); } @@ -1526,7 +1564,7 @@ static void zram_free_page(struct zram *zram, size_t index) * Corresponding ZRAM slot should be locked. */ static int zram_read_from_zspool(struct zram *zram, struct page *page, - u32 index) + u32 index, enum zsmalloc_type zs_type) { struct zcomp_strm *zstrm; unsigned long handle; @@ -1534,6 +1572,12 @@ static int zram_read_from_zspool(struct zram *zram, struct page *page, void *src, *dst; u32 prio; int ret; + unsigned long page_size = PAGE_SIZE; + +#ifdef CONFIG_ZRAM_MULTI_PAGES + if (zs_type == ZSMALLOC_TYPE_MULTI_PAGES) + page_size = ZCOMP_MULTI_PAGES_SIZE; +#endif handle = zram_get_handle(zram, index); if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) { @@ -1542,28 +1586,28 @@ static int zram_read_from_zspool(struct zram *zram, struct page *page, value = handle ? zram_get_element(zram, index) : 0; mem = kmap_local_page(page); - zram_fill_page(mem, PAGE_SIZE, value); + zram_fill_page(mem, page_size, value); kunmap_local(mem); return 0; } size = zram_get_obj_size(zram, index); - if (size != PAGE_SIZE) { + if (size != page_size) { prio = zram_get_priority(zram, index); zstrm = zcomp_stream_get(zram->comps[prio]); } src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO); - if (size == PAGE_SIZE) { + if (size == page_size) { dst = kmap_local_page(page); - copy_page(dst, src); + memcpy(dst, src, page_size); kunmap_local(dst); ret = 0; } else { dst = kmap_local_page(page); ret = zcomp_decompress(zram->comps[prio], zstrm, - src, size, dst); + src, size, dst, page_size); kunmap_local(dst); zcomp_stream_put(zram->comps[prio]); } @@ -1579,7 +1623,7 @@ static int zram_read_page(struct zram *zram, struct page *page, u32 index, zram_slot_lock(zram, index); if (!zram_test_flag(zram, index, ZRAM_WB)) { /* Slot should be locked through out the function call */ - ret = zram_read_from_zspool(zram, page, index); + ret = zram_read_from_zspool(zram, page, index, ZSMALLOC_TYPE_BASEPAGE); zram_slot_unlock(zram, index); } else { /* @@ -1636,13 +1680,24 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) struct zcomp_strm *zstrm; unsigned long element = 0; enum zram_pageflags flags = 0; + unsigned long page_size = PAGE_SIZE; + int huge_class_idx = ZSMALLOC_TYPE_BASEPAGE; + int nr_pages = 1; + +#ifdef CONFIG_ZRAM_MULTI_PAGES + if (folio_size(page_folio(page)) >= ZCOMP_MULTI_PAGES_SIZE) { + page_size = ZCOMP_MULTI_PAGES_SIZE; + huge_class_idx = ZSMALLOC_TYPE_MULTI_PAGES; + nr_pages = ZCOMP_MULTI_PAGES_NR; + } +#endif mem = kmap_local_page(page); - if (page_same_filled(mem, &element)) { + if (page_same_filled(mem, &element, page_size)) { kunmap_local(mem); /* Free memory associated with this sector now. 
*/ flags = ZRAM_SAME; - atomic64_inc(&zram->stats.same_pages); + atomic64_add(nr_pages, &zram->stats.same_pages); goto out; } kunmap_local(mem); @@ -1651,7 +1706,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]); src = kmap_local_page(page); ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm, - src, &comp_len); + src, page_size, &comp_len); kunmap_local(src); if (unlikely(ret)) { @@ -1661,8 +1716,8 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) return ret; } - if (comp_len >= huge_class_size) - comp_len = PAGE_SIZE; + if (comp_len >= huge_class_size[huge_class_idx]) + comp_len = page_size; /* * handle allocation has 2 paths: * a) fast path is executed with preemption disabled (for @@ -1691,7 +1746,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) if (IS_ERR_VALUE(handle)) return PTR_ERR((void *)handle); - if (comp_len != PAGE_SIZE) + if (comp_len != page_size) goto compress_again; /* * If the page is not compressible, you need to acquire the @@ -1715,10 +1770,10 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO); src = zstrm->buffer; - if (comp_len == PAGE_SIZE) + if (comp_len == page_size) src = kmap_local_page(page); memcpy(dst, src, comp_len); - if (comp_len == PAGE_SIZE) + if (comp_len == page_size) kunmap_local(src); zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); @@ -1732,7 +1787,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) zram_slot_lock(zram, index); zram_free_page(zram, index); - if (comp_len == PAGE_SIZE) { + if (comp_len == page_size) { zram_set_flag(zram, index, ZRAM_HUGE); atomic64_inc(&zram->stats.huge_pages); atomic64_inc(&zram->stats.huge_pages_since); @@ -1745,10 +1800,19 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) zram_set_handle(zram, index, handle); zram_set_obj_size(zram, index, comp_len); } + +#ifdef CONFIG_ZRAM_MULTI_PAGES + if (page_size == ZCOMP_MULTI_PAGES_SIZE) { + /* Set multi-pages compression flag for free or overwriting */ + for (int i = 0; i < ZCOMP_MULTI_PAGES_NR; i++) + zram_set_flag(zram, index + i, ZRAM_COMP_MULTI_PAGES); + } +#endif + zram_slot_unlock(zram, index); /* Update stats */ - atomic64_inc(&zram->stats.pages_stored); + atomic64_add(nr_pages, &zram->stats.pages_stored); return ret; } @@ -1861,7 +1925,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, if (comp_len_old < threshold) return 0; - ret = zram_read_from_zspool(zram, page, index); + ret = zram_read_from_zspool(zram, page, index, ZSMALLOC_TYPE_BASEPAGE); if (ret) return ret; @@ -1892,7 +1956,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, zstrm = zcomp_stream_get(zram->comps[prio]); src = kmap_local_page(page); ret = zcomp_compress(zram->comps[prio], zstrm, - src, &comp_len_new); + src, PAGE_SIZE, &comp_len_new); kunmap_local(src); if (ret) { @@ -2056,7 +2120,7 @@ static ssize_t recompress_store(struct device *dev, } } - if (threshold >= huge_class_size) + if (threshold >= huge_class_size[ZSMALLOC_TYPE_BASEPAGE]) return -EINVAL; down_read(&zram->init_lock); @@ -2178,7 +2242,7 @@ static void zram_bio_discard(struct zram *zram, struct bio *bio) bio_endio(bio); } -static void zram_bio_read(struct zram *zram, struct bio *bio) +static void zram_bio_read_page(struct zram *zram, struct bio *bio) { unsigned long start_time = 
bio_start_io_acct(bio); struct bvec_iter iter = bio->bi_iter; @@ -2209,7 +2273,7 @@ static void zram_bio_read(struct zram *zram, struct bio *bio) bio_endio(bio); } -static void zram_bio_write(struct zram *zram, struct bio *bio) +static void zram_bio_write_page(struct zram *zram, struct bio *bio) { unsigned long start_time = bio_start_io_acct(bio); struct bvec_iter iter = bio->bi_iter; @@ -2239,6 +2303,311 @@ static void zram_bio_write(struct zram *zram, struct bio *bio) bio_endio(bio); } +#ifdef CONFIG_ZRAM_MULTI_PAGES + +/* + * The index is compress by multi-pages when any index ZRAM_COMP_MULTI_PAGES flag is set. + * Return: 0 : compress by page + * > 0 : compress by multi-pages + */ +static inline int __test_multi_pages_comp(struct zram *zram, u32 index) +{ + int i; + int count = 0; + int head_index = index & ~((unsigned long)ZCOMP_MULTI_PAGES_NR - 1); + + for (i = 0; i < ZCOMP_MULTI_PAGES_NR; i++) { + if (zram_test_flag(zram, head_index + i, ZRAM_COMP_MULTI_PAGES)) + count++; + } + + return count; +} + +static inline bool want_multi_pages_comp(struct zram *zram, struct bio *bio) +{ + u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT; + + if (bio->bi_io_vec->bv_len >= ZCOMP_MULTI_PAGES_SIZE) + return true; + + zram_slot_lock(zram, index); + if (__test_multi_pages_comp(zram, index)) { + zram_slot_unlock(zram, index); + return true; + } + zram_slot_unlock(zram, index); + + return false; +} + +static inline bool test_multi_pages_comp(struct zram *zram, struct bio *bio) +{ + u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT; + + return !!__test_multi_pages_comp(zram, index); +} + +static inline bool is_multi_pages_partial_io(struct bio_vec *bvec) +{ + return bvec->bv_len != ZCOMP_MULTI_PAGES_SIZE; +} + +static int zram_read_multi_pages(struct zram *zram, struct page *page, u32 index, + struct bio *parent) +{ + int ret; + + zram_slot_lock(zram, index); + if (!zram_test_flag(zram, index, ZRAM_WB)) { + /* Slot should be locked through out the function call */ + ret = zram_read_from_zspool(zram, page, index, ZSMALLOC_TYPE_MULTI_PAGES); + zram_slot_unlock(zram, index); + } else { + /* + * The slot should be unlocked before reading from the backing + * device. + */ + zram_slot_unlock(zram, index); + + ret = read_from_bdev(zram, page, zram_get_element(zram, index), + parent); + } + + /* Should NEVER happen. Return bio error if it does. */ + if (WARN_ON(ret < 0)) + pr_err("Decompression failed! err=%d, page=%u\n", ret, index); + + return ret; +} + +static int zram_read_partial_from_zspool(struct zram *zram, struct page *page, + u32 index, enum zsmalloc_type zs_type, int offset) +{ + struct zcomp_strm *zstrm; + unsigned long handle; + unsigned int size; + void *src, *dst; + u32 prio; + int ret; + unsigned long page_size = PAGE_SIZE; + +#ifdef CONFIG_ZRAM_MULTI_PAGES + if (zs_type == ZSMALLOC_TYPE_MULTI_PAGES) + page_size = ZCOMP_MULTI_PAGES_SIZE; +#endif + + handle = zram_get_handle(zram, index); + if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) { + unsigned long value; + void *mem; + + value = handle ? 
zram_get_element(zram, index) : 0; + mem = kmap_local_page(page); + atomic64_inc(&zram->stats.zram_bio_read_multi_pages_partial_count); + zram_fill_page(mem, PAGE_SIZE, value); + kunmap_local(mem); + return 0; + } + + size = zram_get_obj_size(zram, index); + + if (size != page_size) { + prio = zram_get_priority(zram, index); + zstrm = zcomp_stream_get(zram->comps[prio]); + } + + src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO); + if (size == page_size) { + dst = kmap_local_page(page); + atomic64_inc(&zram->stats.zram_bio_read_multi_pages_partial_count); + memcpy(dst, src + offset, PAGE_SIZE); + kunmap_local(dst); + ret = 0; + } else { + dst = kmap_local_page(page); + /* use zstrm->buffer to store decompress thp and copy page to dst */ + atomic64_inc(&zram->stats.zram_bio_read_multi_pages_partial_count); + ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, zstrm->buffer, page_size); + memcpy(dst, zstrm->buffer + offset, PAGE_SIZE); + kunmap_local(dst); + zcomp_stream_put(zram->comps[prio]); + } + zs_unmap_object(zram->mem_pool, handle); + return ret; +} + +/* + * Use a temporary buffer to decompress the page, as the decompressor + * always expects a full page for the output. + */ +static int zram_bvec_read_multi_pages_partial(struct zram *zram, struct page *page, u32 index, + struct bio *parent, int offset) +{ + int ret; + + zram_slot_lock(zram, index); + if (!zram_test_flag(zram, index, ZRAM_WB)) { + /* Slot should be locked through out the function call */ + ret = zram_read_partial_from_zspool(zram, page, index, ZSMALLOC_TYPE_MULTI_PAGES, offset); + zram_slot_unlock(zram, index); + } else { + /* + * The slot should be unlocked before reading from the backing + * device. + */ + zram_slot_unlock(zram, index); + + ret = read_from_bdev(zram, page, zram_get_element(zram, index), + parent); + } + + /* Should NEVER happen. Return bio error if it does. */ + if (WARN_ON(ret < 0)) + pr_err("Decompression failed! err=%d, page=%u offset=%d\n", ret, index, offset); + + return ret; +} + +static int zram_bvec_read_multi_pages(struct zram *zram, struct bio_vec *bvec, + u32 index, int offset, struct bio *bio) +{ + if (is_multi_pages_partial_io(bvec)) + return zram_bvec_read_multi_pages_partial(zram, bvec->bv_page, index, bio, offset); + return zram_read_multi_pages(zram, bvec->bv_page, index, bio); +} + +/* + * This is a partial IO. Read the full page before writing the changes. 
+ */ +static int zram_bvec_write_multi_pages_partial(struct zram *zram, struct bio_vec *bvec, + u32 index, int offset, struct bio *bio) +{ + struct page *page = alloc_pages(GFP_NOIO | __GFP_COMP, ZCOMP_MULTI_PAGES_ORDER); + int ret; + void *src, *dst; + + if (!page) + return -ENOMEM; + + ret = zram_read_multi_pages(zram, page, index, bio); + if (!ret) { + src = kmap_local_page(bvec->bv_page); + dst = kmap_local_page(page); + memcpy(dst + offset, src + bvec->bv_offset, bvec->bv_len); + kunmap_local(dst); + kunmap_local(src); + + atomic64_inc(&zram->stats.zram_bio_write_multi_pages_partial_count); + ret = zram_write_page(zram, page, index); + } + __free_pages(page, ZCOMP_MULTI_PAGES_ORDER); + return ret; +} + +static int zram_bvec_write_multi_pages(struct zram *zram, struct bio_vec *bvec, + u32 index, int offset, struct bio *bio) +{ + if (is_multi_pages_partial_io(bvec)) + return zram_bvec_write_multi_pages_partial(zram, bvec, index, offset, bio); + return zram_write_page(zram, bvec->bv_page, index); +} + + +static void zram_bio_read_multi_pages(struct zram *zram, struct bio *bio) +{ + unsigned long start_time = bio_start_io_acct(bio); + struct bvec_iter iter = bio->bi_iter; + + do { + /* Use head index, and other indexes are used as offset */ + u32 index = (iter.bi_sector >> SECTORS_PER_PAGE_SHIFT) & + ~((unsigned long)ZCOMP_MULTI_PAGES_NR - 1); + u32 offset = (iter.bi_sector & (SECTORS_PER_MULTI_PAGE - 1)) << SECTOR_SHIFT; + struct bio_vec bv = multi_pages_bio_iter_iovec(bio, iter); + + atomic64_add(1, &zram->stats.zram_bio_read_multi_pages_count); + bv.bv_len = min_t(u32, bv.bv_len, ZCOMP_MULTI_PAGES_SIZE - offset); + + if (zram_bvec_read_multi_pages(zram, &bv, index, offset, bio) < 0) { + atomic64_inc(&zram->stats.multi_pages_failed_reads); + bio->bi_status = BLK_STS_IOERR; + break; + } + flush_dcache_page(bv.bv_page); + + zram_slot_lock(zram, index); + zram_accessed(zram, index); + zram_slot_unlock(zram, index); + + bio_advance_iter_single(bio, &iter, bv.bv_len); + } while (iter.bi_size); + + bio_end_io_acct(bio, start_time); + bio_endio(bio); +} + +static void zram_bio_write_multi_pages(struct zram *zram, struct bio *bio) +{ + unsigned long start_time = bio_start_io_acct(bio); + struct bvec_iter iter = bio->bi_iter; + + do { + /* Use head index, and other indexes are used as offset */ + u32 index = (iter.bi_sector >> SECTORS_PER_PAGE_SHIFT) & + ~((unsigned long)ZCOMP_MULTI_PAGES_NR - 1); + u32 offset = (iter.bi_sector & (SECTORS_PER_MULTI_PAGE - 1)) << SECTOR_SHIFT; + struct bio_vec bv = multi_pages_bio_iter_iovec(bio, iter); + + bv.bv_len = min_t(u32, bv.bv_len, ZCOMP_MULTI_PAGES_SIZE - offset); + + atomic64_add(1, &zram->stats.zram_bio_write_multi_pages_count); + if (zram_bvec_write_multi_pages(zram, &bv, index, offset, bio) < 0) { + atomic64_inc(&zram->stats.multi_pages_failed_writes); + bio->bi_status = BLK_STS_IOERR; + break; + } + + zram_slot_lock(zram, index); + zram_accessed(zram, index); + zram_slot_unlock(zram, index); + + bio_advance_iter_single(bio, &iter, bv.bv_len); + } while (iter.bi_size); + + bio_end_io_acct(bio, start_time); + bio_endio(bio); +} +#else +static inline bool test_multi_pages_comp(struct zram *zram, struct bio *bio) +{ + return false; +} + +static inline bool want_multi_pages_comp(struct zram *zram, struct bio *bio) +{ + return false; +} +static void zram_bio_read_multi_pages(struct zram *zram, struct bio *bio) {} +static void zram_bio_write_multi_pages(struct zram *zram, struct bio *bio) {} +#endif + +static void zram_bio_read(struct zram *zram, struct 
bio *bio) +{ + if (test_multi_pages_comp(zram, bio)) + zram_bio_read_multi_pages(zram, bio); + else + zram_bio_read_page(zram, bio); +} + +static void zram_bio_write(struct zram *zram, struct bio *bio) +{ + if (want_multi_pages_comp(zram, bio)) + zram_bio_write_multi_pages(zram, bio); + else + zram_bio_write_page(zram, bio); +} + /* * Handler function for all zram I/O requests. */ @@ -2276,6 +2645,25 @@ static void zram_slot_free_notify(struct block_device *bdev, return; } +#ifdef CONFIG_ZRAM_MULTI_PAGES + int comp_count = __test_multi_pages_comp(zram, index); + + if (comp_count > 1) { + zram_clear_flag(zram, index, ZRAM_COMP_MULTI_PAGES); + zram_slot_unlock(zram, index); + return; + } else if (comp_count == 1) { + zram_clear_flag(zram, index, ZRAM_COMP_MULTI_PAGES); + zram_slot_unlock(zram, index); + /*only need to free head index*/ + index &= ~((unsigned long)ZCOMP_MULTI_PAGES_NR - 1); + if (!zram_slot_trylock(zram, index)) { + atomic64_inc(&zram->stats.multi_pages_miss_free); + return; + } + } +#endif + zram_free_page(zram, index); zram_slot_unlock(zram, index); } @@ -2493,6 +2881,9 @@ static struct attribute *zram_disk_attrs[] = { #endif &dev_attr_io_stat.attr, &dev_attr_mm_stat.attr, +#ifdef CONFIG_ZRAM_MULTI_PAGES + &dev_attr_multi_pages_debug_stat.attr, +#endif #ifdef CONFIG_ZRAM_WRITEBACK &dev_attr_bd_stat.attr, #endif diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h index 134be414e210..ac4eb4f39cb7 100644 --- a/drivers/block/zram/zram_drv.h +++ b/drivers/block/zram/zram_drv.h @@ -28,6 +28,10 @@ #define ZRAM_SECTOR_PER_LOGICAL_BLOCK \ (1 << (ZRAM_LOGICAL_BLOCK_SHIFT - SECTOR_SHIFT)) +#ifdef CONFIG_ZRAM_MULTI_PAGES +#define SECTORS_PER_MULTI_PAGE_SHIFT (MULTI_PAGE_SHIFT - SECTOR_SHIFT) +#define SECTORS_PER_MULTI_PAGE (1 << SECTORS_PER_MULTI_PAGE_SHIFT) +#endif /* * ZRAM is mainly used for memory efficiency so we want to keep memory @@ -38,7 +42,15 @@ * * We use BUILD_BUG_ON() to make sure that zram pageflags don't overflow. */ + +#ifdef CONFIG_ZRAM_MULTI_PAGES +#define ZRAM_FLAG_SHIFT (PAGE_SHIFT + \ + CONFIG_ZSMALLOC_MULTI_PAGES_ORDER + 1) +#else #define ZRAM_FLAG_SHIFT (PAGE_SHIFT + 1) +#endif + +#define ENABLE_HUGEPAGE_ZRAM_DEBUG 1 /* Only 2 bits are allowed for comp priority index */ #define ZRAM_COMP_PRIORITY_MASK 0x3 @@ -55,6 +67,10 @@ enum zram_pageflags { ZRAM_COMP_PRIORITY_BIT1, /* First bit of comp priority index */ ZRAM_COMP_PRIORITY_BIT2, /* Second bit of comp priority index */ +#ifdef CONFIG_ZRAM_MULTI_PAGES + ZRAM_COMP_MULTI_PAGES, /* Compressed by multi-pages */ +#endif + __NR_ZRAM_PAGEFLAGS, }; @@ -90,6 +106,16 @@ struct zram_stats { atomic64_t bd_reads; /* no. of reads from backing device */ atomic64_t bd_writes; /* no. 
of writes from backing device */
 #endif
+
+#ifdef CONFIG_ZRAM_MULTI_PAGES
+	atomic64_t zram_bio_write_multi_pages_count;
+	atomic64_t zram_bio_read_multi_pages_count;
+	atomic64_t multi_pages_failed_writes;
+	atomic64_t multi_pages_failed_reads;
+	atomic64_t zram_bio_write_multi_pages_partial_count;
+	atomic64_t zram_bio_read_multi_pages_partial_count;
+	atomic64_t multi_pages_miss_free;
+#endif
 };
 
 #ifdef CONFIG_ZRAM_MULTI_COMP
@@ -141,4 +167,23 @@ struct zram {
 #endif
 	atomic_t pp_in_progress;
 };
+
+#ifdef CONFIG_ZRAM_MULTI_PAGES
+#define multi_pages_bvec_iter_offset(bvec, iter) \
+	(mp_bvec_iter_offset((bvec), (iter)) % ZCOMP_MULTI_PAGES_SIZE)
+
+#define multi_pages_bvec_iter_len(bvec, iter) \
+	min_t(unsigned int, mp_bvec_iter_len((bvec), (iter)), \
+	      ZCOMP_MULTI_PAGES_SIZE - bvec_iter_offset((bvec), (iter)))
+
+#define multi_pages_bvec_iter_bvec(bvec, iter) \
+((struct bio_vec) { \
+	.bv_page	= bvec_iter_page((bvec), (iter)), \
+	.bv_len		= multi_pages_bvec_iter_len((bvec), (iter)), \
+	.bv_offset	= multi_pages_bvec_iter_offset((bvec), (iter)), \
+})
+
+#define multi_pages_bio_iter_iovec(bio, iter) \
+	multi_pages_bvec_iter_bvec((bio)->bi_io_vec, (iter))
+#endif
 #endif
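To make the sector arithmetic used by the multi-page read/write loops and the macros above easier to follow, here is a minimal userspace sketch. It assumes 4 KiB pages and the order-2 block size used as the default in this series; the constants mirror the kernel macros but are redefined locally so the example builds on its own.

#include <stdio.h>

#define SECTOR_SHIFT            9
#define PAGE_SHIFT              12                      /* assumed 4 KiB pages */
#define SECTORS_PER_PAGE_SHIFT  (PAGE_SHIFT - SECTOR_SHIFT)
#define MULTI_PAGES_ORDER       2                       /* assumed CONFIG_ZSMALLOC_MULTI_PAGES_ORDER */
#define MULTI_PAGES_NR          (1 << MULTI_PAGES_ORDER)
#define SECTORS_PER_MULTI_PAGE  (1 << (PAGE_SHIFT + MULTI_PAGES_ORDER - SECTOR_SHIFT))

int main(void)
{
	unsigned long sector = 40;	/* an arbitrary bio sector */

	/* 4 KiB page index addressed by this sector */
	unsigned long index = sector >> SECTORS_PER_PAGE_SHIFT;
	/* head index of the order-2 block that owns this page */
	unsigned long head = index & ~((unsigned long)MULTI_PAGES_NR - 1);
	/* byte offset of the sector inside that 16 KiB block */
	unsigned long offset = (sector & (SECTORS_PER_MULTI_PAGE - 1)) << SECTOR_SHIFT;

	/* prints: index=5 head=4 offset=4096 */
	printf("index=%lu head=%lu offset=%lu\n", index, head, offset);
	return 0;
}

Every slot in a block shares the same head index, which is why only the head index carries a compressed object and the remaining indexes are treated purely as offsets.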
From patchwork Thu Nov 21 22:25:20 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13882403
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: axboe@kernel.dk, bala.seshasayee@linux.intel.com, chrisl@kernel.org, david@redhat.com, hannes@cmpxchg.org, kanchana.p.sridhar@intel.com, kasong@tencent.com, linux-block@vger.kernel.org, minchan@kernel.org, nphamcs@gmail.com, ryan.roberts@arm.com, senozhatsky@chromium.org, surenb@google.com,
 terrelln@fb.com, usamaarif642@gmail.com, v-songbaohua@oppo.com, wajdi.k.feghali@intel.com, willy@infradead.org, ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com, zhengtangquan@oppo.com, zhouchengming@bytedance.com
Subject: [PATCH RFC v3 3/4] zram: backend_zstd: Adjust estimated_src_size to accommodate multi-page compression
Date: Fri, 22 Nov 2024 11:25:20 +1300
Message-Id: <20241121222521.83458-4-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20241121222521.83458-1-21cnbao@gmail.com>
References: <20241121222521.83458-1-21cnbao@gmail.com>
MIME-Version: 1.0

From: Barry Song

If we continue using PAGE_SIZE as the estimated_src_size, we won't benefit from the reduced CPU usage and improved compression ratio brought by larger block compression.
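To see what the larger hint actually changes, here is a minimal userspace sketch, an illustration rather than kernel code. It uses libzstd's experimental ZSTD_getCParams(), which is roughly what the kernel's zstd_get_params()/zstd_get_cparams() wrappers resolve to, to compare the parameters chosen for a 4 KiB source hint against a 16 KiB one; level 3 and the 16 KiB figure are arbitrary choices for the example.

#define ZSTD_STATIC_LINKING_ONLY	/* ZSTD_getCParams() is in the experimental API */
#include <zstd.h>
#include <stdio.h>

static void show(unsigned long long src_size_hint)
{
	ZSTD_compressionParameters p = ZSTD_getCParams(3 /* level */, src_size_hint, 0 /* no dict */);

	printf("hint=%llu windowLog=%u chainLog=%u hashLog=%u\n",
	       src_size_hint, p.windowLog, p.chainLog, p.hashLog);
}

int main(void)
{
	show(4096);	/* PAGE_SIZE hint: window shrunk to roughly one page */
	show(16384);	/* multi-page hint: window can span the whole 16 KiB block */
	return 0;
}

With a 4 KiB hint, zstd clamps the match window to about one page, so matches that cross page boundaries inside a 16 KiB block can never be exploited; the larger hint is what lets multi-page compression pay off.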
Signed-off-by: Barry Song
---
 drivers/block/zram/backend_zstd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/zram/backend_zstd.c b/drivers/block/zram/backend_zstd.c
index 1184c0036f44..e126615eeff2 100644
--- a/drivers/block/zram/backend_zstd.c
+++ b/drivers/block/zram/backend_zstd.c
@@ -70,12 +70,12 @@ static int zstd_setup_params(struct zcomp_params *params)
 	if (params->level == ZCOMP_PARAM_NO_LEVEL)
 		params->level = zstd_default_clevel();
 
-	zp->cprm = zstd_get_params(params->level, PAGE_SIZE);
+	zp->cprm = zstd_get_params(params->level, ZCOMP_MULTI_PAGES_SIZE);
 
 	zp->custom_mem.customAlloc = zstd_custom_alloc;
 	zp->custom_mem.customFree = zstd_custom_free;
 
-	prm = zstd_get_cparams(params->level, PAGE_SIZE,
+	prm = zstd_get_cparams(params->level, ZCOMP_MULTI_PAGES_SIZE,
 			       params->dict_sz);
 
 	zp->cdict = zstd_create_cdict_byreference(params->dict,
@@ -137,7 +137,7 @@ static int zstd_create(struct zcomp_params *params, struct zcomp_ctx *ctx)
 	ctx->context = zctx;
 
 	if (params->dict_sz == 0) {
-		prm = zstd_get_params(params->level, PAGE_SIZE);
+		prm = zstd_get_params(params->level, ZCOMP_MULTI_PAGES_SIZE);
 		sz = zstd_cctx_workspace_bound(&prm.cParams);
 		zctx->cctx_mem = vzalloc(sz);
 		if (!zctx->cctx_mem)
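The larger hint also has a memory cost: the per-stream workspace sized by zstd_cctx_workspace_bound() above grows with the bigger parameters. A small userspace sketch of that trade-off, using libzstd's experimental estimator as a stand-in for the kernel helper (again an assumption-laden illustration, not part of the patch):

#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>
#include <stdio.h>

int main(void)
{
	ZSTD_compressionParameters small = ZSTD_getCParams(3, 4096, 0);
	ZSTD_compressionParameters big   = ZSTD_getCParams(3, 4 * 4096, 0);

	/* the workspace bound grows with windowLog/chainLog/hashLog */
	printf("cctx size, 4 KiB hint:  %zu bytes\n",
	       ZSTD_estimateCCtxSize_usingCParams(small));
	printf("cctx size, 16 KiB hint: %zu bytes\n",
	       ZSTD_estimateCCtxSize_usingCParams(big));
	return 0;
}

The extra bytes are paid once per compression stream, which is why it is reasonable to accept them in exchange for the ratio and CPU gains of 16 KiB blocks.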
From patchwork Thu Nov 21 22:25:21 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13882404
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: axboe@kernel.dk, bala.seshasayee@linux.intel.com, chrisl@kernel.org, david@redhat.com, hannes@cmpxchg.org, kanchana.p.sridhar@intel.com, kasong@tencent.com, linux-block@vger.kernel.org, minchan@kernel.org,
 nphamcs@gmail.com, ryan.roberts@arm.com, senozhatsky@chromium.org, surenb@google.com, terrelln@fb.com, usamaarif642@gmail.com, v-songbaohua@oppo.com, wajdi.k.feghali@intel.com, willy@infradead.org, ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com, zhengtangquan@oppo.com, zhouchengming@bytedance.com, Chuanhua Han
Subject: [PATCH RFC v3 4/4] mm: fall back to four small folios if mTHP allocation fails
Date: Fri, 22 Nov 2024 11:25:21 +1300
Message-Id: <20241121222521.83458-5-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20241121222521.83458-1-21cnbao@gmail.com>
References: <20241121222521.83458-1-21cnbao@gmail.com>
MIME-Version: 1.0

From: Chuanhua Han

The swapfile can compress/decompress at a granularity of 4 pages, reducing CPU usage and improving the compression ratio. However, if allocating an mTHP fails and we fall back to a single small folio, the entire large block must still be decompressed. This results in a 16 KiB area requiring 4 page faults, where each fault decompresses 16 KiB but retrieves only 4 KiB of data from the block. To address this inefficiency, we instead fall back to 4 small folios, ensuring that each decompression occurs only once.

Allowing swap_read_folio() to decompress and read into an array of 4 folios would be extremely complex, requiring extensive changes throughout the stack, including swap_read_folio, zeromap, zswap, and final swap implementations like zRAM. In contrast, having these components fill a large folio with 4 subpages is much simpler.

To avoid a full-stack modification, we introduce a per-CPU order-2 large folio as a buffer. This buffer is used for swap_read_folio(), after which the data is copied into the 4 small folios.
Finally, in do_swap_page(), all these small folios are mapped. Co-developed-by: Chuanhua Han Signed-off-by: Chuanhua Han Signed-off-by: Barry Song --- mm/memory.c | 203 +++++++++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 192 insertions(+), 11 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index 209885a4134f..e551570c1425 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4042,6 +4042,15 @@ static struct folio *__alloc_swap_folio(struct vm_fault *vmf) return folio; } +#define BATCH_SWPIN_ORDER 2 +#define BATCH_SWPIN_COUNT (1 << BATCH_SWPIN_ORDER) +#define BATCH_SWPIN_SIZE (PAGE_SIZE << BATCH_SWPIN_ORDER) + +struct batch_swpin_buffer { + struct folio *folio; + struct mutex mutex; +}; + #ifdef CONFIG_TRANSPARENT_HUGEPAGE static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) { @@ -4120,7 +4129,101 @@ static inline unsigned long thp_swap_suitable_orders(pgoff_t swp_offset, return orders; } -static struct folio *alloc_swap_folio(struct vm_fault *vmf) +static DEFINE_PER_CPU(struct batch_swpin_buffer, swp_buf); + +static int __init batch_swpin_buffer_init(void) +{ + int ret, cpu; + struct batch_swpin_buffer *buf; + + for_each_possible_cpu(cpu) { + buf = per_cpu_ptr(&swp_buf, cpu); + buf->folio = (struct folio *)alloc_pages_node(cpu_to_node(cpu), + GFP_KERNEL | __GFP_COMP, BATCH_SWPIN_ORDER); + if (!buf->folio) { + ret = -ENOMEM; + goto err; + } + mutex_init(&buf->mutex); + } + return 0; + +err: + for_each_possible_cpu(cpu) { + buf = per_cpu_ptr(&swp_buf, cpu); + if (buf->folio) { + folio_put(buf->folio); + buf->folio = NULL; + } + } + return ret; +} +core_initcall(batch_swpin_buffer_init); + +static struct folio *alloc_batched_swap_folios(struct vm_fault *vmf, + struct batch_swpin_buffer **buf, struct folio **folios, + swp_entry_t entry) +{ + unsigned long haddr = ALIGN_DOWN(vmf->address, BATCH_SWPIN_SIZE); + struct batch_swpin_buffer *sbuf = raw_cpu_ptr(&swp_buf); + struct folio *folio = sbuf->folio; + unsigned long addr; + int i; + + if (unlikely(!folio)) + return NULL; + + for (i = 0; i < BATCH_SWPIN_COUNT; i++) { + addr = haddr + i * PAGE_SIZE; + folios[i] = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vmf->vma, addr); + if (!folios[i]) + goto err; + if (mem_cgroup_swapin_charge_folio(folios[i], vmf->vma->vm_mm, + GFP_KERNEL, entry)) + goto err; + } + + mutex_lock(&sbuf->mutex); + *buf = sbuf; +#ifdef CONFIG_MEMCG + folio->memcg_data = (*folios)->memcg_data; +#endif + return folio; + +err: + for (i--; i >= 0; i--) + folio_put(folios[i]); + return NULL; +} + +static void fill_batched_swap_folios(struct vm_fault *vmf, + void *shadow, struct batch_swpin_buffer *buf, + struct folio *folio, struct folio **folios) +{ + unsigned long haddr = ALIGN_DOWN(vmf->address, BATCH_SWPIN_SIZE); + unsigned long addr; + int i; + + for (i = 0; i < BATCH_SWPIN_COUNT; i++) { + addr = haddr + i * PAGE_SIZE; + __folio_set_locked(folios[i]); + __folio_set_swapbacked(folios[i]); + if (shadow) + workingset_refault(folios[i], shadow); + folio_add_lru(folios[i]); + copy_user_highpage(&folios[i]->page, folio_page(folio, i), + addr, vmf->vma); + if (folio_test_uptodate(folio)) + folio_mark_uptodate(folios[i]); + } + + folio->flags &= ~(PAGE_FLAGS_CHECK_AT_PREP & ~(1UL << PG_head)); + mutex_unlock(&buf->mutex); +} + +static struct folio *alloc_swap_folio(struct vm_fault *vmf, + struct batch_swpin_buffer **buf, + struct folio **folios) { struct vm_area_struct *vma = vmf->vma; unsigned long orders; @@ -4180,6 +4283,9 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf) 
pte_unmap_unlock(pte, ptl); + if (!orders) + goto fallback; + /* Try allocating the highest of the remaining orders. */ gfp = vma_thp_gfp_mask(vma); while (orders) { @@ -4194,14 +4300,29 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf) order = next_order(&orders, order); } + /* + * During swap-out, a THP might have been compressed into multiple + * order-2 blocks to optimize CPU usage and compression ratio. + * Attempt to batch swap-in 4 smaller folios to ensure they are + * decompressed together as a single unit only once. + */ + return alloc_batched_swap_folios(vmf, buf, folios, entry); + fallback: return __alloc_swap_folio(vmf); } #else /* !CONFIG_TRANSPARENT_HUGEPAGE */ -static struct folio *alloc_swap_folio(struct vm_fault *vmf) +static struct folio *alloc_swap_folio(struct vm_fault *vmf, + struct batch_swpin_buffer **buf, + struct folio **folios) { return __alloc_swap_folio(vmf); } +static inline void fill_batched_swap_folios(struct vm_fault *vmf, + void *shadow, struct batch_swpin_buffer *buf, + struct folio *folio, struct folio **folios) +{ +} #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq); @@ -4216,6 +4337,8 @@ static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq); */ vm_fault_t do_swap_page(struct vm_fault *vmf) { + struct folio *folios[BATCH_SWPIN_COUNT] = { NULL }; + struct batch_swpin_buffer *buf = NULL; struct vm_area_struct *vma = vmf->vma; struct folio *swapcache, *folio = NULL; DECLARE_WAITQUEUE(wait, current); @@ -4228,7 +4351,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) pte_t pte; vm_fault_t ret = 0; void *shadow = NULL; - int nr_pages; + int nr_pages, i; unsigned long page_idx; unsigned long address; pte_t *ptep; @@ -4296,7 +4419,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1) { /* skip swapcache */ - folio = alloc_swap_folio(vmf); + folio = alloc_swap_folio(vmf, &buf, folios); if (folio) { __folio_set_locked(folio); __folio_set_swapbacked(folio); @@ -4327,10 +4450,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) mem_cgroup_swapin_uncharge_swap(entry, nr_pages); shadow = get_shadow_from_swap_cache(entry); - if (shadow) + if (shadow && !buf) workingset_refault(folio, shadow); - - folio_add_lru(folio); + if (!buf) + folio_add_lru(folio); /* To provide entry to swap_read_folio() */ folio->swap = entry; @@ -4361,6 +4484,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) count_vm_event(PGMAJFAULT); count_memcg_event_mm(vma->vm_mm, PGMAJFAULT); page = folio_file_page(folio, swp_offset(entry)); + /* + * Copy data into batched small folios from the large + * folio buffer + */ + if (buf) { + fill_batched_swap_folios(vmf, shadow, buf, folio, folios); + folio = folios[0]; + page = &folios[0]->page; + goto do_map; + } } else if (PageHWPoison(page)) { /* * hwpoisoned dirty swapcache pages are kept for killing @@ -4415,6 +4548,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) lru_add_drain(); } +do_map: folio_throttle_swaprate(folio, GFP_KERNEL); /* @@ -4431,8 +4565,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) } /* allocated large folios for SWP_SYNCHRONOUS_IO */ - if (folio_test_large(folio) && !folio_test_swapcache(folio)) { - unsigned long nr = folio_nr_pages(folio); + if ((folio_test_large(folio) || buf) && !folio_test_swapcache(folio)) { + unsigned long nr = buf ? 
BATCH_SWPIN_COUNT : folio_nr_pages(folio); unsigned long folio_start = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE); unsigned long idx = (vmf->address - folio_start) / PAGE_SIZE; pte_t *folio_ptep = vmf->pte - idx; @@ -4527,6 +4661,42 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) } } + /* Batched mapping of allocated small folios for SWP_SYNCHRONOUS_IO */ + if (buf) { + for (i = 0; i < nr_pages; i++) + arch_swap_restore(swp_entry(swp_type(entry), + swp_offset(entry) + i), folios[i]); + swap_free_nr(entry, nr_pages); + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); + add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages); + rmap_flags |= RMAP_EXCLUSIVE; + for (i = 0; i < nr_pages; i++) { + unsigned long addr = address + i * PAGE_SIZE; + + pte = mk_pte(&folios[i]->page, vma->vm_page_prot); + if (pte_swp_soft_dirty(vmf->orig_pte)) + pte = pte_mksoft_dirty(pte); + if (pte_swp_uffd_wp(vmf->orig_pte)) + pte = pte_mkuffd_wp(pte); + if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) && + !pte_needs_soft_dirty_wp(vma, pte)) { + pte = pte_mkwrite(pte, vma); + if ((vmf->flags & FAULT_FLAG_WRITE) && (i == page_idx)) { + pte = pte_mkdirty(pte); + vmf->flags &= ~FAULT_FLAG_WRITE; + } + } + flush_icache_page(vma, &folios[i]->page); + folio_add_new_anon_rmap(folios[i], vma, addr, rmap_flags); + set_pte_at(vma->vm_mm, addr, ptep + i, pte); + arch_do_swap_page_nr(vma->vm_mm, vma, addr, pte, pte, 1); + if (i == page_idx) + vmf->orig_pte = pte; + folio_unlock(folios[i]); + } + goto wp_page; + } + /* * Some architectures may have to restore extra metadata to the page * when reading from swap. This metadata may be indexed by swap entry @@ -4612,6 +4782,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) folio_put(swapcache); } +wp_page: if (vmf->flags & FAULT_FLAG_WRITE) { ret |= do_wp_page(vmf); if (ret & VM_FAULT_ERROR) @@ -4638,9 +4809,19 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) if (vmf->pte) pte_unmap_unlock(vmf->pte, vmf->ptl); out_page: - folio_unlock(folio); + if (!buf) { + folio_unlock(folio); + } else { + for (i = 0; i < BATCH_SWPIN_COUNT; i++) + folio_unlock(folios[i]); + } out_release: - folio_put(folio); + if (!buf) { + folio_put(folio); + } else { + for (i = 0; i < BATCH_SWPIN_COUNT; i++) + folio_put(folios[i]); + } if (folio != swapcache && swapcache) { folio_unlock(swapcache); folio_put(swapcache);