From patchwork Fri Feb 21 22:25:33 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13986375
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Sebastian Andrzej Siewior,
 Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Sergey Senozhatsky
Subject: [PATCH v8 02/17] zram: permit preemption with active compression stream
Date: Sat, 22 Feb 2025 07:25:33 +0900
Message-ID: <20250221222958.2225035-3-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.601.g30ceb7b040-goog
In-Reply-To: <20250221222958.2225035-1-senozhatsky@chromium.org>
References: <20250221222958.2225035-1-senozhatsky@chromium.org>
MIME-Version: 1.0

Currently, per-CPU stream access is done from a non-preemptible (atomic)
section, which imposes the same atomicity requirements on compression
backends as the entry spin-lock does, and makes it impossible to use
algorithms that can schedule/wait/sleep during compression and
decompression.

Switch to a preemptible per-CPU model, similar to the one used in zswap.
Instead of a per-CPU local lock, each stream carries a mutex which is
locked for the entire time zram uses it for compression or decompression,
so that the cpu-dead event waits for zram to stop using a particular
per-CPU stream and release it.

Suggested-by: Yosry Ahmed
Reviewed-by: Yosry Ahmed
Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zcomp.c    | 41 +++++++++++++++++++++++++----------
 drivers/block/zram/zcomp.h    |  6 ++---
 drivers/block/zram/zram_drv.c | 20 ++++++++---------
 3 files changed, 42 insertions(+), 25 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index bb514403e305..53e4c37441be 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -6,7 +6,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
@@ -109,13 +109,29 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	local_lock(&comp->stream->lock);
-	return this_cpu_ptr(comp->stream);
+	for (;;) {
+		struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
+
+		/*
+		 * Inspired by zswap
+		 *
+		 * stream is returned with ->mutex locked which prevents
+		 * cpu_dead() from releasing this stream under us, however
+		 * there is still a race window between raw_cpu_ptr() and
+		 * mutex_lock(), during which we could have been migrated
+		 * from a CPU that has already destroyed its stream. If
+		 * so then unlock and re-try on the current CPU.
+		 */
+		mutex_lock(&zstrm->lock);
+		if (likely(zstrm->buffer))
+			return zstrm;
+		mutex_unlock(&zstrm->lock);
+	}
 }
 
-void zcomp_stream_put(struct zcomp *comp)
+void zcomp_stream_put(struct zcomp_strm *zstrm)
 {
-	local_unlock(&comp->stream->lock);
+	mutex_unlock(&zstrm->lock);
 }
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -151,12 +167,9 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
 int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
+	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 	int ret;
 
-	zstrm = per_cpu_ptr(comp->stream, cpu);
-	local_lock_init(&zstrm->lock);
-
 	ret = zcomp_strm_init(comp, zstrm);
 	if (ret)
 		pr_err("Can't allocate a compression stream\n");
@@ -166,16 +179,17 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
+	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 
-	zstrm = per_cpu_ptr(comp->stream, cpu);
+	mutex_lock(&zstrm->lock);
 	zcomp_strm_free(comp, zstrm);
+	mutex_unlock(&zstrm->lock);
 	return 0;
 }
 
 static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 {
-	int ret;
+	int ret, cpu;
 
 	comp->stream = alloc_percpu(struct zcomp_strm);
 	if (!comp->stream)
@@ -186,6 +200,9 @@ static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 	if (ret)
 		goto cleanup;
 
+	for_each_possible_cpu(cpu)
+		mutex_init(&per_cpu_ptr(comp->stream, cpu)->lock);
+
 	ret = cpuhp_state_add_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
 	if (ret < 0)
 		goto cleanup;
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index ad5762813842..23b8236b9090 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -3,7 +3,7 @@
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
 
-#include
+#include
 
 #define ZCOMP_PARAM_NO_LEVEL INT_MIN
 
@@ -31,7 +31,7 @@ struct zcomp_ctx {
 };
 
 struct zcomp_strm {
-	local_lock_t lock;
+	struct mutex lock;
 	/* compression buffer */
 	void *buffer;
 	struct zcomp_ctx ctx;
@@ -77,7 +77,7 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp_strm *zstrm);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
 		   const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 37c5651305c2..1b5bb206239f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1613,7 +1613,7 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 	ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
 	kunmap_local(dst);
 	zs_unmap_object(zram->mem_pool, handle);
-	zcomp_stream_put(zram->comps[prio]);
+	zcomp_stream_put(zstrm);
 
 	return ret;
 }
@@ -1774,14 +1774,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	kunmap_local(mem);
 
 	if (unlikely(ret)) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zstrm);
err=%d\n", ret); zs_free(zram->mem_pool, handle); return ret; } if (comp_len >= huge_class_size) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); return write_incompressible_page(zram, page, index); } @@ -1805,7 +1805,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) __GFP_HIGHMEM | __GFP_MOVABLE); if (IS_ERR_VALUE(handle)) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); atomic64_inc(&zram->stats.writestall); handle = zs_malloc(zram->mem_pool, comp_len, GFP_NOIO | __GFP_HIGHMEM | @@ -1817,7 +1817,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) } if (!zram_can_store_page(zram)) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); zs_free(zram->mem_pool, handle); return -ENOMEM; } @@ -1825,7 +1825,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO); memcpy(dst, zstrm->buffer, comp_len); - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); zs_unmap_object(zram->mem_pool, handle); zram_slot_lock(zram, index); @@ -1984,7 +1984,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, kunmap_local(src); if (ret) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); return ret; } @@ -1994,7 +1994,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, /* Continue until we make progress */ if (class_index_new >= class_index_old || (threshold && comp_len_new >= threshold)) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); continue; } @@ -2052,13 +2052,13 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, __GFP_HIGHMEM | __GFP_MOVABLE); if (IS_ERR_VALUE(handle_new)) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); return PTR_ERR((void *)handle_new); } dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO); memcpy(dst, zstrm->buffer, comp_len_new); - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); zs_unmap_object(zram->mem_pool, handle_new);