From patchwork Mon Dec 18 11:50:32 2023
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13496759
From: Chengming Zhou
Date: Mon, 18 Dec 2023 11:50:32 +0000
Subject: [PATCH v3 2/6] mm/zswap: reuse dstmem when decompress
Message-Id: <20231213-zswap-dstmem-v3-2-4eac09b94ece@bytedance.com>
References: <20231213-zswap-dstmem-v3-0-4eac09b94ece@bytedance.com>
In-Reply-To: <20231213-zswap-dstmem-v3-0-4eac09b94ece@bytedance.com>
To: Seth Jennings, Yosry Ahmed, Vitaly Wool, Dan Streetman, Johannes Weiner, Chris Li, Andrew Morton, Nhat Pham
Cc: Chris Li, Yosry Ahmed, linux-kernel@vger.kernel.org, Chengming Zhou, linux-mm@kvack.org, Nhat Pham

In the !zpool_can_sleep_mapped() case, such as zsmalloc, we currently have to copy the entry->handle memory into a temporary buffer allocated with kmalloc on every decompress. We can instead reuse the per-compressor dstmem and avoid the allocation each time, since dstmem is per-CPU for each compressor and already protected by the per-CPU acomp_ctx mutex.
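For illustration only (not part of the patch): a minimal user-space sketch of the buffer-reuse pattern described above, with a pthread mutex standing in for the per-CPU acomp_ctx mutex; every name in it is invented for the example. The point it shows is that the mutex has to be taken before copying into the shared dstmem, because that buffer is reused by every caller of the same context.

/*
 * Illustrative sketch only -- not kernel code.  Models the pattern this
 * patch adopts: one preallocated per-context buffer ("dstmem") reused
 * under the context mutex, instead of a kmalloc'd temporary per call.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define EXAMPLE_PAGE_SIZE 4096

struct example_ctx {
	pthread_mutex_t mutex;	/* stands in for acomp_ctx->mutex */
	unsigned char *dstmem;	/* preallocated once, reused by every call */
};

static int example_ctx_init(struct example_ctx *ctx)
{
	ctx->dstmem = malloc(EXAMPLE_PAGE_SIZE);
	if (!ctx->dstmem)
		return -1;
	return pthread_mutex_init(&ctx->mutex, NULL);
}

/*
 * The caller hands in a "mapped" source that may not stay valid across
 * a sleep (the !zpool_can_sleep_mapped() case).  Instead of allocating
 * a temporary buffer, take the context mutex first and copy into the
 * shared dstmem; the lock must be held before the copy because dstmem
 * is shared with every other user of this context.
 */
static void example_decompress(struct example_ctx *ctx,
			       const unsigned char *mapped_src, size_t len,
			       int mapping_can_sleep)
{
	const unsigned char *src = mapped_src;

	pthread_mutex_lock(&ctx->mutex);
	if (!mapping_can_sleep) {
		memcpy(ctx->dstmem, mapped_src, len);
		src = ctx->dstmem;
		/* ... the real code unmaps the source handle here ... */
	}
	/* ... decompress from src into the destination page ... */
	(void)src;
	pthread_mutex_unlock(&ctx->mutex);
}

Compared with the kmalloc approach there is no per-call allocation and thus no allocation-failure path, which is also why the freeentry error label can go away in zswap_load() below.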
Reviewed-by: Nhat Pham
Acked-by: Chris Li
Reviewed-by: Yosry Ahmed
Signed-off-by: Chengming Zhou
---
 mm/zswap.c | 44 ++++++++++++--------------------------------
 1 file changed, 12 insertions(+), 32 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 976f278aa507..6b872744e962 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1417,19 +1417,13 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	struct crypto_acomp_ctx *acomp_ctx;
 	struct zpool *pool = zswap_find_zpool(entry);
 	bool page_was_allocated;
-	u8 *src, *tmp = NULL;
+	u8 *src;
 	unsigned int dlen;
 	int ret;
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_NONE,
 	};
 
-	if (!zpool_can_sleep_mapped(pool)) {
-		tmp = kmalloc(PAGE_SIZE, GFP_KERNEL);
-		if (!tmp)
-			return -ENOMEM;
-	}
-
 	/* try to allocate swap cache page */
 	mpol = get_task_policy(current);
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
@@ -1465,15 +1459,15 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* decompress */
 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
 	dlen = PAGE_SIZE;
+	mutex_lock(acomp_ctx->mutex);
 
 	src = zpool_map_handle(pool, entry->handle, ZPOOL_MM_RO);
 	if (!zpool_can_sleep_mapped(pool)) {
-		memcpy(tmp, src, entry->length);
-		src = tmp;
+		memcpy(acomp_ctx->dstmem, src, entry->length);
+		src = acomp_ctx->dstmem;
 		zpool_unmap_handle(pool, entry->handle);
 	}
 
-	mutex_lock(acomp_ctx->mutex);
 	sg_init_one(&input, src, entry->length);
 	sg_init_table(&output, 1);
 	sg_set_page(&output, page, PAGE_SIZE, 0);
@@ -1482,9 +1476,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	dlen = acomp_ctx->req->dlen;
 	mutex_unlock(acomp_ctx->mutex);
 
-	if (!zpool_can_sleep_mapped(pool))
-		kfree(tmp);
-	else
+	if (zpool_can_sleep_mapped(pool))
 		zpool_unmap_handle(pool, entry->handle);
 
 	BUG_ON(ret);
@@ -1508,9 +1500,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	return ret;
 
 fail:
-	if (!zpool_can_sleep_mapped(pool))
-		kfree(tmp);
-
 	/*
 	 * If we get here because the page is already in swapcache, a
 	 * load may be happening concurrently. It is safe and okay to
@@ -1771,7 +1760,7 @@ bool zswap_load(struct folio *folio)
 	struct zswap_entry *entry;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
-	u8 *src, *dst, *tmp;
+	u8 *src, *dst;
 	struct zpool *zpool;
 	unsigned int dlen;
 	bool ret;
@@ -1796,26 +1785,19 @@ bool zswap_load(struct folio *folio)
 	}
 
 	zpool = zswap_find_zpool(entry);
-	if (!zpool_can_sleep_mapped(zpool)) {
-		tmp = kmalloc(entry->length, GFP_KERNEL);
-		if (!tmp) {
-			ret = false;
-			goto freeentry;
-		}
-	}
 
 	/* decompress */
 	dlen = PAGE_SIZE;
-	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+	mutex_lock(acomp_ctx->mutex);
 
+	src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
 	if (!zpool_can_sleep_mapped(zpool)) {
-		memcpy(tmp, src, entry->length);
-		src = tmp;
+		memcpy(acomp_ctx->dstmem, src, entry->length);
+		src = acomp_ctx->dstmem;
 		zpool_unmap_handle(zpool, entry->handle);
 	}
 
-	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
-	mutex_lock(acomp_ctx->mutex);
 	sg_init_one(&input, src, entry->length);
 	sg_init_table(&output, 1);
 	sg_set_page(&output, page, PAGE_SIZE, 0);
@@ -1826,15 +1808,13 @@ bool zswap_load(struct folio *folio)
 
 	if (zpool_can_sleep_mapped(zpool))
 		zpool_unmap_handle(zpool, entry->handle);
-	else
-		kfree(tmp);
 
 	ret = true;
 stats:
 	count_vm_event(ZSWPIN);
 	if (entry->objcg)
 		count_objcg_event(entry->objcg, ZSWPIN);
-freeentry:
+
 	spin_lock(&tree->lock);
 	if (ret && zswap_exclusive_loads_enabled) {
 		zswap_invalidate_entry(tree, entry);