From patchwork Wed Feb 12 06:27:00 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13971030
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Kairui Song, Minchan Kim, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCH v5 02/18] zram: permit preemption with active compression stream
Date: Wed, 12 Feb 2025 15:27:00 +0900
Message-ID: <20250212063153.179231-3-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.502.g6dc24dfdaf-goog
In-Reply-To: <20250212063153.179231-1-senozhatsky@chromium.org>
References: <20250212063153.179231-1-senozhatsky@chromium.org>
MIME-Version: 1.0
Currently, per-CPU stream access is done from a non-preemptible
(atomic) section, which imposes the same atomicity requirements on
compression backends as the entry spin-lock does, and makes it
impossible to use algorithms that can schedule/wait/sleep during
compression and decompression.

Switch to a preemptible per-CPU model, similar to the one used in
zswap.  Instead of a per-CPU local lock, each stream carries a mutex
which is locked for the entire time zram uses the stream for
compression or decompression, so that the cpu-dead event waits for
zram to stop using a particular per-CPU stream and release it.

Suggested-by: Yosry Ahmed
Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zcomp.c    | 36 ++++++++++++++++++++++++++----------
 drivers/block/zram/zcomp.h    |  6 +++---
 drivers/block/zram/zram_drv.c | 20 ++++++++++----------
 3 files changed, 39 insertions(+), 23 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index bb514403e305..e83dd9a80a81 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -54,6 +55,7 @@ static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm)
 {
         int ret;
 
+        mutex_init(&zstrm->lock);
         ret = comp->ops->create_ctx(comp->params, &zstrm->ctx);
         if (ret)
                 return ret;
@@ -109,13 +111,29 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-        local_lock(&comp->stream->lock);
-        return this_cpu_ptr(comp->stream);
+        for (;;) {
+                struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
+
+                /*
+                 * Inspired by zswap
+                 *
+                 * stream is returned with ->mutex locked which prevents
+                 * cpu_dead() from releasing this stream under us, however
+                 * there is still a race window between raw_cpu_ptr() and
+                 * mutex_lock(), during which we could have been migrated
+                 * to a CPU that has already destroyed its stream. If so
+                 * then unlock and re-try on the current CPU.
+                 */
+                mutex_lock(&zstrm->lock);
+                if (likely(zstrm->buffer))
+                        return zstrm;
+                mutex_unlock(&zstrm->lock);
+        }
 }
 
-void zcomp_stream_put(struct zcomp *comp)
+void zcomp_stream_put(struct zcomp_strm *zstrm)
 {
-        local_unlock(&comp->stream->lock);
+        mutex_unlock(&zstrm->lock);
 }
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -151,12 +169,9 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
 int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 {
         struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-        struct zcomp_strm *zstrm;
+        struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
         int ret;
 
-        zstrm = per_cpu_ptr(comp->stream, cpu);
-        local_lock_init(&zstrm->lock);
-
         ret = zcomp_strm_init(comp, zstrm);
         if (ret)
                 pr_err("Can't allocate a compression stream\n");
@@ -166,10 +181,11 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
 {
         struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-        struct zcomp_strm *zstrm;
+        struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 
-        zstrm = per_cpu_ptr(comp->stream, cpu);
+        mutex_lock(&zstrm->lock);
         zcomp_strm_free(comp, zstrm);
+        mutex_unlock(&zstrm->lock);
         return 0;
 }
 
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index ad5762813842..23b8236b9090 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -3,7 +3,7 @@
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
 
-#include
+#include
 
 #define ZCOMP_PARAM_NO_LEVEL    INT_MIN
 
@@ -31,7 +31,7 @@ struct zcomp_ctx {
 };
 
 struct zcomp_strm {
-        local_lock_t lock;
+        struct mutex lock;
         /* compression buffer */
         void *buffer;
         struct zcomp_ctx ctx;
@@ -77,7 +77,7 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp_strm *zstrm);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
                    const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3708436f1d1f..43f460a45e3e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1608,7 +1608,7 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
         ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
         kunmap_local(dst);
         zs_unmap_object(zram->mem_pool, handle);
-        zcomp_stream_put(zram->comps[prio]);
+        zcomp_stream_put(zstrm);
 
         return ret;
 }
@@ -1769,14 +1769,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
         kunmap_local(mem);
 
         if (unlikely(ret)) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zstrm);
                 pr_err("Compression failed! err=%d\n", ret);
                 zs_free(zram->mem_pool, handle);
                 return ret;
         }
 
         if (comp_len >= huge_class_size) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zstrm);
                 return write_incompressible_page(zram, page, index);
         }
 
@@ -1800,7 +1800,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
                            __GFP_HIGHMEM |
                            __GFP_MOVABLE);
         if (IS_ERR_VALUE(handle)) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zstrm);
                 atomic64_inc(&zram->stats.writestall);
                 handle = zs_malloc(zram->mem_pool, comp_len,
                                    GFP_NOIO | __GFP_HIGHMEM |
@@ -1812,7 +1812,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
         }
 
         if (!zram_can_store_page(zram)) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zstrm);
                 zs_free(zram->mem_pool, handle);
                 return -ENOMEM;
         }
@@ -1820,7 +1820,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 
         dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
         memcpy(dst, zstrm->buffer, comp_len);
-        zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+        zcomp_stream_put(zstrm);
         zs_unmap_object(zram->mem_pool, handle);
 
         zram_slot_lock(zram, index);
@@ -1979,7 +1979,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
         kunmap_local(src);
 
         if (ret) {
-                zcomp_stream_put(zram->comps[prio]);
+                zcomp_stream_put(zstrm);
                 return ret;
         }
 
@@ -1989,7 +1989,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
         /* Continue until we make progress */
         if (class_index_new >= class_index_old ||
             (threshold && comp_len_new >= threshold)) {
-                zcomp_stream_put(zram->comps[prio]);
+                zcomp_stream_put(zstrm);
                 continue;
         }
 
@@ -2047,13 +2047,13 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
                                __GFP_HIGHMEM |
                                __GFP_MOVABLE);
         if (IS_ERR_VALUE(handle_new)) {
-                zcomp_stream_put(zram->comps[prio]);
+                zcomp_stream_put(zstrm);
                 return PTR_ERR((void *)handle_new);
         }
 
         dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO);
         memcpy(dst, zstrm->buffer, comp_len_new);
-        zcomp_stream_put(zram->comps[prio]);
+        zcomp_stream_put(zstrm);
 
         zs_unmap_object(zram->mem_pool, handle_new);
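
For context, below is a minimal usage sketch (illustrative only, not part of
this patch) of the calling pattern after the conversion: zcomp_stream_get()
now returns the per-CPU stream with its mutex held, so the caller may sleep
while the backend compresses, and zcomp_stream_put() takes the stream itself
rather than the struct zcomp.  The wrapper name zram_example_compress() is
hypothetical and exists only for this sketch.

/* Illustrative sketch only -- not part of this patch. */
static int zram_example_compress(struct zram *zram, const void *src,
                                 unsigned int *comp_len)
{
        struct zcomp_strm *zstrm;
        int ret;

        /* Returns with zstrm->lock (a mutex) held; preemption stays enabled. */
        zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
        ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
                             src, comp_len);
        /* Release the stream itself; callers no longer pass the struct zcomp. */
        zcomp_stream_put(zstrm);
        return ret;
}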