From patchwork Mon Jan 27 07:59:31 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13951058
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [RFC PATCH 6/6] zram: switch over to zshandle mapping API
Date: Mon, 27 Jan 2025 16:59:31 +0900
Message-ID: <20250127080254.1302026-7-senozhatsky@chromium.org>
In-Reply-To: <20250127080254.1302026-1-senozhatsky@chromium.org>
References: <20250127080254.1302026-1-senozhatsky@chromium.org>

Use the new zsmalloc handle mapping API so that zram read() becomes
preemptible.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zcomp.c    |   4 +-
 drivers/block/zram/zcomp.h    |   2 +
 drivers/block/zram/zram_drv.c | 103 ++++++++++++++++++----------------
 3 files changed, 61 insertions(+), 48 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index efd5919808d9..9b373ab1ee0b 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -45,6 +45,7 @@ static const struct zcomp_ops *backends[] = {
 static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *strm)
 {
         comp->ops->destroy_ctx(&strm->ctx);
+        vfree(strm->handle_mem_copy);
         vfree(strm->buffer);
         kfree(strm);
 }
@@ -66,12 +67,13 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
                 return NULL;
         }
 
+        strm->handle_mem_copy = vzalloc(PAGE_SIZE);
         /*
          * allocate 2 pages. 1 for compressed data, plus 1 extra in case if
          * compressed data is larger than the original one.
          */
         strm->buffer = vzalloc(2 * PAGE_SIZE);
-        if (!strm->buffer) {
+        if (!strm->buffer || !strm->handle_mem_copy) {
                 zcomp_strm_free(comp, strm);
                 return NULL;
         }
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index 62330829db3f..f003f09820a5 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -34,6 +34,8 @@ struct zcomp_strm {
         struct list_head entry;
         /* compression buffer */
         void *buffer;
+        /* handle object memory copy */
+        void *handle_mem_copy;
         struct zcomp_ctx ctx;
 };

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9c72beb86ab0..120055b11520 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1558,37 +1558,43 @@ static int read_same_filled_page(struct zram *zram, struct page *page,
 static int read_incompressible_page(struct zram *zram, struct page *page,
                                     u32 index)
 {
-        unsigned long handle;
-        void *src, *dst;
+        struct zs_handle_mapping hm;
+        void *dst;
 
-        handle = zram_get_handle(zram, index);
-        src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
+        hm.handle = zram_get_handle(zram, index);
+        hm.mode = ZS_MM_RO;
+
+        zs_map_handle(zram->mem_pool, &hm);
         dst = kmap_local_page(page);
-        copy_page(dst, src);
+        copy_page(dst, hm.handle_mem);
         kunmap_local(dst);
-        zs_unmap_object(zram->mem_pool, handle);
+        zs_unmap_handle(zram->mem_pool, &hm);
 
         return 0;
 }
 
 static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 {
+        struct zs_handle_mapping hm;
         struct zcomp_strm *zstrm;
-        unsigned long handle;
         unsigned int size;
-        void *src, *dst;
+        void *dst;
         int ret, prio;
 
-        handle = zram_get_handle(zram, index);
         size = zram_get_obj_size(zram, index);
         prio = zram_get_priority(zram, index);
 
         zstrm = zcomp_stream_get(zram->comps[prio]);
-        src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
+        hm.handle = zram_get_handle(zram, index);
+        hm.mode = ZS_MM_RO;
+        hm.local_copy = zstrm->handle_mem_copy;
+
+        zs_map_handle(zram->mem_pool, &hm);
         dst = kmap_local_page(page);
-        ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
+        ret = zcomp_decompress(zram->comps[prio], zstrm,
+                               hm.handle_mem, size, dst);
         kunmap_local(dst);
-        zs_unmap_object(zram->mem_pool, handle);
+        zs_unmap_handle(zram->mem_pool, &hm);
         zcomp_stream_put(zram->comps[prio], zstrm);
 
         return ret;
@@ -1683,33 +1689,34 @@ static int write_same_filled_page(struct zram *zram, unsigned long fill,
 static int write_incompressible_page(struct zram *zram, struct page *page,
                                      u32 index)
 {
-        unsigned long handle;
-        void *src, *dst;
+        struct zs_handle_mapping hm;
+        void *src;
 
         /*
          * This function is called from preemptible context so we don't need
          * to do optimistic and fallback to pessimistic handle allocation,
          * like we do for compressible pages.
          */
-        handle = zs_malloc(zram->mem_pool, PAGE_SIZE,
-                           GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
-        if (IS_ERR_VALUE(handle))
-                return PTR_ERR((void *)handle);
+        hm.handle = zs_malloc(zram->mem_pool, PAGE_SIZE,
+                              GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
+        if (IS_ERR_VALUE(hm.handle))
+                return PTR_ERR((void *)hm.handle);
 
         if (!zram_can_store_page(zram)) {
-                zs_free(zram->mem_pool, handle);
+                zs_free(zram->mem_pool, hm.handle);
                 return -ENOMEM;
         }
 
-        dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
+        hm.mode = ZS_MM_WO;
+        zs_map_handle(zram->mem_pool, &hm);
         src = kmap_local_page(page);
-        memcpy(dst, src, PAGE_SIZE);
+        memcpy(hm.handle_mem, src, PAGE_SIZE);
         kunmap_local(src);
-        zs_unmap_object(zram->mem_pool, handle);
+        zs_unmap_handle(zram->mem_pool, &hm);
 
         zram_slot_write_lock(zram, index);
         zram_set_flag(zram, index, ZRAM_HUGE);
-        zram_set_handle(zram, index, handle);
+        zram_set_handle(zram, index, hm.handle);
         zram_set_obj_size(zram, index, PAGE_SIZE);
         zram_slot_write_unlock(zram, index);
 
@@ -1724,9 +1731,9 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
         int ret = 0;
-        unsigned long handle;
+        struct zs_handle_mapping hm;
         unsigned int comp_len;
-        void *dst, *mem;
+        void *mem;
         struct zcomp_strm *zstrm;
         unsigned long element;
         bool same_filled;
@@ -1758,25 +1765,26 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
                 return write_incompressible_page(zram, page, index);
         }
 
-        handle = zs_malloc(zram->mem_pool, comp_len,
-                           GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
-        if (IS_ERR_VALUE(handle))
-                return PTR_ERR((void *)handle);
+        hm.handle = zs_malloc(zram->mem_pool, comp_len,
+                              GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
+        if (IS_ERR_VALUE(hm.handle))
+                return PTR_ERR((void *)hm.handle);
 
         if (!zram_can_store_page(zram)) {
                 zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
-                zs_free(zram->mem_pool, handle);
+                zs_free(zram->mem_pool, hm.handle);
                 return -ENOMEM;
         }
 
-        dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
-
-        memcpy(dst, zstrm->buffer, comp_len);
+        hm.mode = ZS_MM_WO;
+        hm.local_copy = zstrm->handle_mem_copy;
+        zs_map_handle(zram->mem_pool, &hm);
+        memcpy(hm.handle_mem, zstrm->buffer, comp_len);
+        zs_unmap_handle(zram->mem_pool, &hm);
         zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
-        zs_unmap_object(zram->mem_pool, handle);
 
         zram_slot_write_lock(zram, index);
-        zram_set_handle(zram, index, handle);
+        zram_set_handle(zram, index, hm.handle);
         zram_set_obj_size(zram, index, comp_len);
         zram_slot_write_unlock(zram, index);
 
@@ -1875,14 +1883,14 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
                            u32 prio_max)
 {
         struct zcomp_strm *zstrm = NULL;
+        struct zs_handle_mapping hm;
         unsigned long handle_old;
-        unsigned long handle_new;
         unsigned int comp_len_old;
         unsigned int comp_len_new;
         unsigned int class_index_old;
         unsigned int class_index_new;
         u32 num_recomps = 0;
-        void *src, *dst;
+        void *src;
         int ret;
 
         handle_old = zram_get_handle(zram, index);
@@ -2000,34 +2008,35 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
         /* zsmalloc handle allocation can schedule, unlock slot's bucket */
         zram_slot_write_unlock(zram, index);
 
-        handle_new = zs_malloc(zram->mem_pool, comp_len_new,
-                               GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
+        hm.handle = zs_malloc(zram->mem_pool, comp_len_new,
+                              GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
 
         zram_slot_write_lock(zram, index);
 
         /*
          * If we couldn't allocate memory for recompressed object then bail
          * out and simply keep the old (existing) object in mempool.
          */
-        if (IS_ERR_VALUE(handle_new)) {
+        if (IS_ERR_VALUE(hm.handle)) {
                 zcomp_stream_put(zram->comps[prio], zstrm);
-                return PTR_ERR((void *)handle_new);
+                return PTR_ERR((void *)hm.handle);
         }
 
         /* Slot has been modified concurrently */
         if (!zram_test_flag(zram, index, ZRAM_PP_SLOT)) {
                 zcomp_stream_put(zram->comps[prio], zstrm);
-                zs_free(zram->mem_pool, handle_new);
+                zs_free(zram->mem_pool, hm.handle);
                 return 0;
         }
 
-        dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO);
-        memcpy(dst, zstrm->buffer, comp_len_new);
+        hm.mode = ZS_MM_WO;
+        hm.local_copy = zstrm->handle_mem_copy;
+        zs_map_handle(zram->mem_pool, &hm);
+        memcpy(hm.handle_mem, zstrm->buffer, comp_len_new);
+        zs_unmap_handle(zram->mem_pool, &hm);
         zcomp_stream_put(zram->comps[prio], zstrm);
-        zs_unmap_object(zram->mem_pool, handle_new);
-
         zram_free_page(zram, index);
-        zram_set_handle(zram, index, handle_new);
+        zram_set_handle(zram, index, hm.handle);
         zram_set_obj_size(zram, index, comp_len_new);
         zram_set_priority(zram, index, prio);
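
For reference, the caller-side pattern this patch switches to is roughly the
sketch below. It simply mirrors read_compressed_page() from the diff above;
zram_read_mapped() is a made-up helper name used only for illustration, and
struct zs_handle_mapping, zs_map_handle() and zs_unmap_handle() are the
zsmalloc interfaces this series introduces, not part of the current mainline
zsmalloc API:

/*
 * Illustrative sketch only -- mirrors read_compressed_page() above.
 * zram_read_mapped() is a hypothetical helper; the zs_handle_mapping API
 * is assumed to come from the earlier patches in this series.
 */
static int zram_read_mapped(struct zram *zram, u32 index,
                            struct zcomp_strm *zstrm, void *dst)
{
        struct zs_handle_mapping hm;
        unsigned int size;
        int ret, prio;

        size = zram_get_obj_size(zram, index);
        prio = zram_get_priority(zram, index);

        hm.handle = zram_get_handle(zram, index);
        hm.mode = ZS_MM_RO;
        /* per-stream buffer that zs_map_handle() may copy the object into */
        hm.local_copy = zstrm->handle_mem_copy;

        zs_map_handle(zram->mem_pool, &hm);
        /* hm.handle_mem points at the mapped object (or the local copy) */
        ret = zcomp_decompress(zram->comps[prio], zstrm,
                               hm.handle_mem, size, dst);
        zs_unmap_handle(zram->mem_pool, &hm);

        return ret;
}

The per-stream handle_mem_copy buffer allocated in zcomp_strm_alloc()
presumably allows zs_map_handle() to hand out a copy instead of keeping a
pinned mapping, which, per the changelog, is what makes the read path
preemptible.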