From patchwork Thu Jan 30 11:10:47 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13954483
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Sergey Senozhatsky
Subject: [PATCHv3 02/11] zram: do not use per-CPU compression streams
Date: Thu, 30 Jan 2025 20:10:47 +0900
Message-ID: <20250130111105.2861324-3-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
In-Reply-To: <20250130111105.2861324-1-senozhatsky@chromium.org>
References: <20250130111105.2861324-1-senozhatsky@chromium.org>
MIME-Version: 1.0

Similarly to the per-entry spin-lock, per-CPU compression streams also have
a number of shortcomings.

First, per-CPU stream access has to be done from a non-preemptible (atomic)
section, which imposes the same atomicity requirements on compression
backends as the entry spin-lock does and makes it impossible to use
algorithms that can schedule/wait/sleep during compression and
decompression.

Second, per-CPU streams noticeably increase memory usage (in fact, wastage)
of secondary compression streams. The problem is that secondary compression
streams are allocated per-CPU, just like the primary streams are, yet we
never use more than one secondary stream at a time, because recompression
is a single-threaded action. This means that the remaining
num_online_cpus() - 1 streams are allocated for nothing, and that this
happens for every priority level (we can have several secondary compression
algorithms). Depending on the algorithm this may lead to significant memory
wastage; in addition, each stream also carries a workmem buffer (2 physical
pages).

Instead of per-CPU streams, maintain a list of idle compression streams and
allocate new streams on demand (something that we used to do many years
ago), so that zram read() and write() become non-atomic, which eases the
requirements on compression algorithm implementations. This also means that
we now need only one secondary stream per priority level.

Signed-off-by: Sergey Senozhatsky
---
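Note (illustration only, not part of the patch): zcomp_stream_get() now
takes a stream from an idle list, allocates a new one on demand while the
total stays below num_online_cpus(), and otherwise sleeps until
zcomp_stream_put() returns one. Below is a minimal, self-contained
userspace sketch of that pattern; it is not the zram code itself. The
pthread mutex/condvar stand in for the kernel spinlock/wait_event, and the
names stream_get()/stream_put()/MAX_STREAMS are invented for the example.

/*
 * Userspace sketch of the idle-stream-list idea from the changelog:
 * a lock-protected idle list, on-demand allocation up to a cap, and
 * sleeping waiters once the cap is reached.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_STREAMS 4	/* stands in for num_online_cpus() */

struct stream {
	struct stream *next;	/* idle-list linkage */
	void *buffer;		/* would hold the compression workmem */
};

static struct stream *idle_list;
static unsigned int avail_streams;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t have_idle = PTHREAD_COND_INITIALIZER;

/* Grab an idle stream, allocate a new one below the cap, otherwise sleep. */
static struct stream *stream_get(void)
{
	struct stream *s;

	pthread_mutex_lock(&lock);
	for (;;) {
		if (idle_list) {
			s = idle_list;
			idle_list = s->next;
			break;
		}
		if (avail_streams < MAX_STREAMS) {
			avail_streams++;
			pthread_mutex_unlock(&lock);
			s = calloc(1, sizeof(*s));	/* on-demand allocation */
			if (s)
				return s;
			pthread_mutex_lock(&lock);
			avail_streams--;	/* allocation failed, undo */
		}
		/* cap reached (or allocation failed): wait for a put() */
		pthread_cond_wait(&have_idle, &lock);
	}
	pthread_mutex_unlock(&lock);
	return s;
}

/* Return a stream to the idle list and wake up one waiter. */
static void stream_put(struct stream *s)
{
	pthread_mutex_lock(&lock);
	s->next = idle_list;
	idle_list = s;
	pthread_mutex_unlock(&lock);
	pthread_cond_signal(&have_idle);
}

int main(void)
{
	struct stream *a = stream_get();
	struct stream *b = stream_get();

	stream_put(a);
	stream_put(b);
	printf("allocated %u stream(s) on demand\n", avail_streams);
	return 0;
}

Unlike this sketch, the real zcomp_stream_put() in the diff below also
frees a returned stream instead of caching it when more streams exist than
the current limit allows; that detail is omitted here for brevity.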
 drivers/block/zram/zcomp.c    | 164 +++++++++++++++++++---------------
 drivers/block/zram/zcomp.h    |  17 ++--
 drivers/block/zram/zram_drv.c |  29 +++---
 include/linux/cpuhotplug.h    |   1 -
 4 files changed, 109 insertions(+), 102 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index bb514403e305..982c769d5831 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -6,7 +6,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -43,31 +43,40 @@ static const struct zcomp_ops *backends[] = {
 	NULL
 };
 
-static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *zstrm)
+static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *strm)
 {
-	comp->ops->destroy_ctx(&zstrm->ctx);
-	vfree(zstrm->buffer);
-	zstrm->buffer = NULL;
+	comp->ops->destroy_ctx(&strm->ctx);
+	vfree(strm->buffer);
+	kfree(strm);
 }
 
-static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm)
+static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
 {
+	struct zcomp_strm *strm;
 	int ret;
 
-	ret = comp->ops->create_ctx(comp->params, &zstrm->ctx);
-	if (ret)
-		return ret;
+	strm = kzalloc(sizeof(*strm), GFP_KERNEL);
+	if (!strm)
+		return NULL;
+
+	INIT_LIST_HEAD(&strm->entry);
+
+	ret = comp->ops->create_ctx(comp->params, &strm->ctx);
+	if (ret) {
+		kfree(strm);
+		return NULL;
+	}
 
 	/*
-	 * allocate 2 pages. 1 for compressed data, plus 1 extra for the
-	 * case when compressed size is larger than the original one
+	 * allocate 2 pages. 1 for compressed data, plus 1 extra in case if
+	 * compressed data is larger than the original one.
 	 */
-	zstrm->buffer = vzalloc(2 * PAGE_SIZE);
-	if (!zstrm->buffer) {
-		zcomp_strm_free(comp, zstrm);
-		return -ENOMEM;
+	strm->buffer = vzalloc(2 * PAGE_SIZE);
+	if (!strm->buffer) {
+		zcomp_strm_free(comp, strm);
+		return NULL;
 	}
-	return 0;
+	return strm;
 }
 
 static const struct zcomp_ops *lookup_backend_ops(const char *comp)
@@ -109,13 +118,59 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	local_lock(&comp->stream->lock);
-	return this_cpu_ptr(comp->stream);
+	struct zcomp_strm *strm;
+
+	might_sleep();
+
+	while (1) {
+		spin_lock(&comp->strm_lock);
+		if (!list_empty(&comp->idle_strm)) {
+			strm = list_first_entry(&comp->idle_strm,
+						struct zcomp_strm,
+						entry);
+			list_del(&strm->entry);
+			spin_unlock(&comp->strm_lock);
+			return strm;
+		}
+
+		/* cannot allocate new stream, wait for an idle one */
+		if (comp->avail_strm >= num_online_cpus()) {
+			spin_unlock(&comp->strm_lock);
+			wait_event(comp->strm_wait,
+				   !list_empty(&comp->idle_strm));
+			continue;
+		}
+
+		/* allocate new stream */
+		comp->avail_strm++;
+		spin_unlock(&comp->strm_lock);
+
+		strm = zcomp_strm_alloc(comp);
+		if (strm)
+			break;
+
+		spin_lock(&comp->strm_lock);
+		comp->avail_strm--;
+		spin_unlock(&comp->strm_lock);
+		wait_event(comp->strm_wait, !list_empty(&comp->idle_strm));
+	}
+
+	return strm;
 }
 
-void zcomp_stream_put(struct zcomp *comp)
+void zcomp_stream_put(struct zcomp *comp, struct zcomp_strm *strm)
 {
-	local_unlock(&comp->stream->lock);
+	spin_lock(&comp->strm_lock);
+	if (comp->avail_strm <= num_online_cpus()) {
+		list_add(&strm->entry, &comp->idle_strm);
+		spin_unlock(&comp->strm_lock);
+		wake_up(&comp->strm_wait);
+		return;
+	}
+
+	comp->avail_strm--;
+	spin_unlock(&comp->strm_lock);
+	zcomp_strm_free(comp, strm);
 }
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -148,61 +203,19 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
 	return comp->ops->decompress(comp->params, &zstrm->ctx, &req);
 }
 
-int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
-{
-	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
-	int ret;
-
-	zstrm = per_cpu_ptr(comp->stream, cpu);
-	local_lock_init(&zstrm->lock);
-
-	ret = zcomp_strm_init(comp, zstrm);
-	if (ret)
-		pr_err("Can't allocate a compression stream\n");
-	return ret;
-}
-
-int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
-{
-	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
-
-	zstrm = per_cpu_ptr(comp->stream, cpu);
-	zcomp_strm_free(comp, zstrm);
-	return 0;
-}
-
-static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
-{
-	int ret;
-
-	comp->stream = alloc_percpu(struct zcomp_strm);
-	if (!comp->stream)
-		return -ENOMEM;
-
-	comp->params = params;
-	ret = comp->ops->setup_params(comp->params);
-	if (ret)
-		goto cleanup;
-
-	ret = cpuhp_state_add_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
-	if (ret < 0)
-		goto cleanup;
-
-	return 0;
-
-cleanup:
-	comp->ops->release_params(comp->params);
-	free_percpu(comp->stream);
-	return ret;
-}
-
 void zcomp_destroy(struct zcomp *comp)
 {
-	cpuhp_state_remove_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
+	struct zcomp_strm *strm;
+
+	while (!list_empty(&comp->idle_strm)) {
+		strm = list_first_entry(&comp->idle_strm,
+					struct zcomp_strm,
+					entry);
+		list_del(&strm->entry);
+		zcomp_strm_free(comp, strm);
+	}
+
 	comp->ops->release_params(comp->params);
-	free_percpu(comp->stream);
 	kfree(comp);
 }
 
@@ -229,7 +242,12 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params)
 		return ERR_PTR(-EINVAL);
 	}
 
-	error = zcomp_init(comp, params);
+	INIT_LIST_HEAD(&comp->idle_strm);
+	init_waitqueue_head(&comp->strm_wait);
+	spin_lock_init(&comp->strm_lock);
+
+	comp->params = params;
+	error = comp->ops->setup_params(comp->params);
 	if (error) {
 		kfree(comp);
 		return ERR_PTR(error);
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index ad5762813842..62330829db3f 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -3,10 +3,10 @@
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
 
-#include
-
 #define ZCOMP_PARAM_NO_LEVEL INT_MIN
 
+#include
+
 /*
  * Immutable driver (backend) parameters. The driver may attach private
  * data to it (e.g. driver representation of the dictionary, etc.).
@@ -31,7 +31,7 @@ struct zcomp_ctx {
 };
 
 struct zcomp_strm {
-	local_lock_t lock;
+	struct list_head entry;
 	/* compression buffer */
 	void *buffer;
 	struct zcomp_ctx ctx;
@@ -60,16 +60,15 @@ struct zcomp_ops {
 	const char *name;
 };
 
-/* dynamic per-device compression frontend */
 struct zcomp {
-	struct zcomp_strm __percpu *stream;
+	struct list_head idle_strm;
+	spinlock_t strm_lock;
+	u32 avail_strm;
+	wait_queue_head_t strm_wait;
 	const struct zcomp_ops *ops;
 	struct zcomp_params *params;
-	struct hlist_node node;
 };
 
-int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node);
-int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node);
 ssize_t zcomp_available_show(const char *comp, char *buf);
 
 bool zcomp_available_algorithm(const char *comp);
@@ -77,7 +76,7 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp *comp, struct zcomp_strm *strm);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
 		   const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a8d22ae2a066..9ba3f8d97310 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -31,7 +31,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
@@ -1603,7 +1602,7 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 	ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
 	kunmap_local(dst);
 	zs_unmap_object(zram->mem_pool, handle);
-	zcomp_stream_put(zram->comps[prio]);
+	zcomp_stream_put(zram->comps[prio], zstrm);
 
 	return ret;
 }
@@ -1764,14 +1763,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	kunmap_local(mem);
 
 	if (unlikely(ret)) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
err=%d\n", ret); zs_free(zram->mem_pool, handle); return ret; } if (comp_len >= huge_class_size) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm); return write_incompressible_page(zram, page, index); } @@ -1795,7 +1794,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) __GFP_HIGHMEM | __GFP_MOVABLE); if (IS_ERR_VALUE(handle)) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm); atomic64_inc(&zram->stats.writestall); handle = zs_malloc(zram->mem_pool, comp_len, GFP_NOIO | __GFP_HIGHMEM | @@ -1807,7 +1806,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) } if (!zram_can_store_page(zram)) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm); zs_free(zram->mem_pool, handle); return -ENOMEM; } @@ -1815,7 +1814,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO); memcpy(dst, zstrm->buffer, comp_len); - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm); zs_unmap_object(zram->mem_pool, handle); zram_slot_write_lock(zram, index); @@ -1974,7 +1973,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, kunmap_local(src); if (ret) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zram->comps[prio], zstrm); return ret; } @@ -1984,7 +1983,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, /* Continue until we make progress */ if (class_index_new >= class_index_old || (threshold && comp_len_new >= threshold)) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zram->comps[prio], zstrm); continue; } @@ -2042,13 +2041,13 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, __GFP_HIGHMEM | __GFP_MOVABLE); if (IS_ERR_VALUE(handle_new)) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zram->comps[prio], zstrm); return PTR_ERR((void *)handle_new); } dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO); memcpy(dst, zstrm->buffer, comp_len_new); - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zram->comps[prio], zstrm); zs_unmap_object(zram->mem_pool, handle_new); @@ -2796,7 +2795,6 @@ static void destroy_devices(void) zram_debugfs_destroy(); idr_destroy(&zram_index_idr); unregister_blkdev(zram_major, "zram"); - cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE); } static int __init zram_init(void) @@ -2806,15 +2804,9 @@ static int __init zram_init(void) BUILD_BUG_ON(__NR_ZRAM_PAGEFLAGS > sizeof(zram_te.flags) * 8); - ret = cpuhp_setup_state_multi(CPUHP_ZCOMP_PREPARE, "block/zram:prepare", - zcomp_cpu_up_prepare, zcomp_cpu_dead); - if (ret < 0) - return ret; - ret = class_register(&zram_control_class); if (ret) { pr_err("Unable to register zram-control class\n"); - cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE); return ret; } @@ -2823,7 +2815,6 @@ static int __init zram_init(void) if (zram_major <= 0) { pr_err("Unable to get major number\n"); class_unregister(&zram_control_class); - cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE); return -EBUSY; } diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 6cc5e484547c..092ace7db8ee 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -119,7 +119,6 @@ enum cpuhp_state { CPUHP_MM_ZS_PREPARE, CPUHP_MM_ZSWP_POOL_PREPARE, 
 	CPUHP_KVM_PPC_BOOK3S_PREPARE,
-	CPUHP_ZCOMP_PREPARE,
 	CPUHP_TIMERS_PREPARE,
 	CPUHP_TMIGR_PREPARE,
 	CPUHP_MIPS_SOC_PREPARE,