From patchwork Mon Mar 24 06:45:41 2025
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Minchan Kim, Brian Geffon
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [RFC PATCH] zram: introduce writeback_ext interface
Date: Mon, 24 Mar 2025 15:45:41 +0900
Message-ID: <20250324065029.3077450-1-senozhatsky@chromium.org>
writeback_ext is going to replace the existing writeback interface in the
future. For starters, writeback_ext extends page writeback with page index
ranges. The legacy interface can write only one page at a time:

  echo page_index=100 > zram0/writeback
  ...
  echo page_index=200 > zram0/writeback
  echo page_index=500 > zram0/writeback
  ...
  echo page_index=700 > zram0/writeback

One obvious downside is that it increases the number of syscalls. A less
obvious but significantly more important downside is that, when given only
one page to post-process, zram cannot perform optimal target selection.
This becomes critical when writeback_limit is enabled, because under
writeback_limit we want to guarantee the highest memory savings.

The new interface supports multiple page index ranges in a single call:

  echo page_index_range=100-200 \
       page_index_range=500-700 > zram0/writeback_ext

This gives zram a chance to apply an optimal target selection strategy on
each iteration of the writeback loop.

Apart from that, the new file unifies parameter passing and resembles other
"modern" zram device attributes (e.g. recompression), while the old
interface used a mixed scheme: value-less parameters for the writeback mode
and a key=value format for page_index.

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
Alternatively, I can just extend writeback and say something like
"starting from 6.16 it also accepts index ranges" in zram's documentation.

 Documentation/ABI/testing/sysfs-block-zram  |   8 +
 Documentation/admin-guide/blockdev/zram.rst |  26 ++
 drivers/block/zram/zram_drv.c               | 332 +++++++++++++-------
 3 files changed, 261 insertions(+), 105 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
index 36c57de0a10a..17b43bba57ab 100644
--- a/Documentation/ABI/testing/sysfs-block-zram
+++ b/Documentation/ABI/testing/sysfs-block-zram
@@ -105,6 +105,7 @@ Contact:	Minchan Kim
 Description:
 		The writeback file is write-only and trigger idle and/or
 		huge page writeback to backing device.
+		This file is deprecated, please use writeback_ext.
 
 What:		/sys/block/zram/bd_stat
 Date:		November 2018
@@ -150,3 +151,10 @@ Contact:	Sergey Senozhatsky
 Description:
 		The algorithm_params file is write-only and is used to setup
 		compression algorithm parameters.
+
+What:		/sys/block/zram/writeback_ext
+Date:		March 2025
+Contact:	Sergey Senozhatsky
+Description:
+		The writeback_ext file is write-only and is used for extended
+		writeback (e.g. a writeback of page index ranges).
diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
index 9bdb30901a93..6571891d033a 100644
--- a/Documentation/admin-guide/blockdev/zram.rst
+++ b/Documentation/admin-guide/blockdev/zram.rst
@@ -320,6 +320,9 @@ Optional Feature
 writeback
 ---------
 
+Note that `/sys/block/zramX/writeback` is considered deprecated
+and is superseded by `/sys/block/zramX/writeback_ext`.
+
 With CONFIG_ZRAM_WRITEBACK, zram can write idle/incompressible page
 to backing storage rather than keeping it in memory. To use the
 feature, admin should set up backing device via::
@@ -417,6 +420,29 @@ budget in next setting is user's job.
 If admin wants to measure writeback count in a certain period, they could
 know it via /sys/block/zram0/bd_stat's 3rd column.
 
+writeback_ext
+-------------
+
+This interface is expected to replace the writeback interface in the future.
+There are several key differences.
+
+First, this interface switches to the `key=value` format for all parameters,
+which simplifies future extension. For example, in order to write back all
+huge pages the writeback type now needs to be specified using the `type=`
+parameter::
+
+	echo "type=huge" > /sys/block/zramX/writeback_ext
+
+Second, this interface supports `page_index_range` parameters (multiple
+ranges can be provided at once), which specify `LOW-HIGH` ranges of pages
+to be written back. This reduces the number of syscalls, but, more
+importantly, it enables an optimal post-processing target selection
+strategy. Usage example::
+
+	echo "page_index_range=1-100" > /sys/block/zramX/writeback_ext
+
+The rest (writeback_limit, etc.) remains the same.
+
 recompression
 -------------
 
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fda7d8624889..7d9a6332a189 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -734,114 +734,19 @@ static void read_from_bdev_async(struct zram *zram, struct page *page,
 	submit_bio(bio);
 }
 
-#define PAGE_WB_SIG "page_index="
-
-#define PAGE_WRITEBACK 0
-#define HUGE_WRITEBACK (1<<0)
-#define IDLE_WRITEBACK (1<<1)
-#define INCOMPRESSIBLE_WRITEBACK (1<<2)
-
-static int scan_slots_for_writeback(struct zram *zram, u32 mode,
-				    unsigned long nr_pages,
-				    unsigned long index,
-				    struct zram_pp_ctl *ctl)
-{
-	for (; nr_pages != 0; index++, nr_pages--) {
-		bool ok = true;
-
-		zram_slot_lock(zram, index);
-		if (!zram_allocated(zram, index))
-			goto next;
-
-		if (zram_test_flag(zram, index, ZRAM_WB) ||
-		    zram_test_flag(zram, index, ZRAM_SAME))
-			goto next;
-
-		if (mode & IDLE_WRITEBACK &&
-		    !zram_test_flag(zram, index, ZRAM_IDLE))
-			goto next;
-		if (mode & HUGE_WRITEBACK &&
-		    !zram_test_flag(zram, index, ZRAM_HUGE))
-			goto next;
-		if (mode & INCOMPRESSIBLE_WRITEBACK &&
-		    !zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
-			goto next;
-
-		ok = place_pp_slot(zram, ctl, index);
-next:
-		zram_slot_unlock(zram, index);
-		if (!ok)
-			break;
-	}
-
-	return 0;
-}
-
-static ssize_t writeback_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t len)
+static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 {
-	struct zram *zram = dev_to_zram(dev);
-	unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
-	struct zram_pp_ctl *ctl = NULL;
+	unsigned long blk_idx = 0;
+	struct page *page = NULL;
 	struct zram_pp_slot *pps;
-	unsigned long index = 0;
-	struct bio bio;
 	struct bio_vec bio_vec;
-	struct page *page = NULL;
-	ssize_t ret = len;
-	int mode, err;
-	unsigned long blk_idx = 0;
-
-	if (sysfs_streq(buf, "idle"))
-		mode = IDLE_WRITEBACK;
-	else if (sysfs_streq(buf, "huge"))
-		mode = HUGE_WRITEBACK;
-	else if (sysfs_streq(buf, "huge_idle"))
-		mode = IDLE_WRITEBACK | HUGE_WRITEBACK;
-	else if (sysfs_streq(buf, "incompressible"))
-		mode = INCOMPRESSIBLE_WRITEBACK;
-	else {
-		if (strncmp(buf, PAGE_WB_SIG, sizeof(PAGE_WB_SIG) - 1))
-			return -EINVAL;
-
-		if (kstrtol(buf + sizeof(PAGE_WB_SIG) - 1, 10, &index) || index >=
-				nr_pages)
-			return -EINVAL;
-
-		nr_pages = 1;
-		mode = PAGE_WRITEBACK;
-	}
-
-	down_read(&zram->init_lock);
-	if (!init_done(zram)) {
-		ret = -EINVAL;
-		goto release_init_lock;
-	}
-
-	/* Do not permit concurrent post-processing actions. */
-	if (atomic_xchg(&zram->pp_in_progress, 1)) {
-		up_read(&zram->init_lock);
-		return -EAGAIN;
-	}
-
-	if (!zram->backing_dev) {
-		ret = -ENODEV;
-		goto release_init_lock;
-	}
+	struct bio bio;
+	int ret = 0, err;
+	u32 index;
 
 	page = alloc_page(GFP_KERNEL);
-	if (!page) {
-		ret = -ENOMEM;
-		goto release_init_lock;
-	}
-
-	ctl = init_pp_ctl();
-	if (!ctl) {
-		ret = -ENOMEM;
-		goto release_init_lock;
-	}
-
-	scan_slots_for_writeback(zram, mode, nr_pages, index, ctl);
+	if (!page)
+		return -ENOMEM;
 
 	while ((pps = select_pp_slot(ctl))) {
 		spin_lock(&zram->wb_limit_lock);
@@ -929,10 +834,217 @@ static ssize_t writeback_store(struct device *dev,
 	if (blk_idx)
 		free_block_bdev(zram, blk_idx);
-
-release_init_lock:
 	if (page)
 		__free_page(page);
+
+	return ret;
+}
+
+#define PAGE_WRITEBACK 0
+#define HUGE_WRITEBACK (1 << 0)
+#define IDLE_WRITEBACK (1 << 1)
+#define INCOMPRESSIBLE_WRITEBACK (1 << 2)
+
+static int parse_page_index(char *val, unsigned long nr_pages,
+			    unsigned long *lo, unsigned long *hi)
+{
+	int ret;
+
+	ret = kstrtoul(val, 10, lo);
+	if (ret)
+		return ret;
+	*hi = *lo + 1;
+	if (*lo >= nr_pages || *hi > nr_pages)
+		return -ERANGE;
+	return 0;
+}
+
+static int parse_page_index_range(char *val, unsigned long nr_pages,
+				  unsigned long *lo, unsigned long *hi)
+{
+	char *delim;
+	int ret;
+
+	delim = strchr(val, '-');
+	if (!delim)
+		return -EINVAL;
+
+	*delim = 0x00;
+	ret = kstrtoul(val, 10, lo);
+	if (ret)
+		return ret;
+	if (*lo >= nr_pages)
+		return -ERANGE;
+
+	ret = kstrtoul(delim + 1, 10, hi);
+	if (ret)
+		return ret;
+	if (*hi >= nr_pages || *lo > *hi)
+		return -ERANGE;
+	*hi += 1;
+	return 0;
+}
+
+static int parse_mode(char *val, u32 *mode)
+{
+	*mode = 0;
+
+	if (!strcmp(val, "idle"))
+		*mode = IDLE_WRITEBACK;
+	if (!strcmp(val, "huge"))
+		*mode = HUGE_WRITEBACK;
+	if (!strcmp(val, "huge_idle"))
+		*mode = IDLE_WRITEBACK | HUGE_WRITEBACK;
+	if (!strcmp(val, "incompressible"))
+		*mode = INCOMPRESSIBLE_WRITEBACK;
+
+	if (*mode == 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int scan_slots_for_writeback(struct zram *zram, u32 mode,
+				    unsigned long lo, unsigned long hi,
+				    struct zram_pp_ctl *ctl)
+{
+	u32 index = lo;
+
+	while (index < hi) {
+		bool ok = true;
+
+		zram_slot_lock(zram, index);
+		if (!zram_allocated(zram, index))
+			goto next;
+
+		if (zram_test_flag(zram, index, ZRAM_WB) ||
+		    zram_test_flag(zram, index, ZRAM_SAME))
+			goto next;
+
+		if (mode & IDLE_WRITEBACK &&
+		    !zram_test_flag(zram, index, ZRAM_IDLE))
+			goto next;
+		if (mode & HUGE_WRITEBACK &&
+		    !zram_test_flag(zram, index, ZRAM_HUGE))
+			goto next;
+		if (mode & INCOMPRESSIBLE_WRITEBACK &&
+		    !zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
+			goto next;
+
+		ok = place_pp_slot(zram, ctl, index);
+next:
+		zram_slot_unlock(zram, index);
+		if (!ok)
+			break;
+		index++;
+	}
+
+	return 0;
+}
+
+static ssize_t writeback_ext_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t len)
+{
+	struct zram *zram = dev_to_zram(dev);
+	u64 nr_pages = zram->disksize >> PAGE_SHIFT;
+	unsigned long lo = 0, hi = nr_pages;
+	struct zram_pp_ctl *ctl = NULL;
+	char *args, *param, *val;
+	ssize_t ret = len;
+	u32 mode = 0;
+	int err;
+
+	down_read(&zram->init_lock);
+	if (!init_done(zram)) {
+		up_read(&zram->init_lock);
+		return -EINVAL;
+	}
+
+	/* Do not permit concurrent post-processing actions. */
+	if (atomic_xchg(&zram->pp_in_progress, 1)) {
+		up_read(&zram->init_lock);
+		return -EAGAIN;
+	}
+
+	if (!zram->backing_dev) {
+		ret = -ENODEV;
+		goto release_init_lock;
+	}
+
+	ctl = init_pp_ctl();
+	if (!ctl) {
+		ret = -ENOMEM;
+		goto release_init_lock;
+	}
+
+	args = skip_spaces(buf);
+	while (*args) {
+		args = next_arg(args, &param, &val);
+
+		/*
+		 * Workaround to support the old writeback interface. Will be
+		 * reworked once the (deprecated) writeback interface is gone.
+		 *
+		 * The old writeback interface has a minor inconsistency and
+		 * requires key=value only for the page_index parameter, while
+		 * the writeback mode is a valueless parameter.
+		 *
+		 * This is not the case anymore and now all parameters are
+		 * required to have values; however, we need to support the
+		 * legacy writeback interface, which is why we check whether
+		 * we can recognize a valueless parameter as a (legacy)
+		 * writeback mode.
+		 */
+		if (!val || !*val) {
+			err = parse_mode(param, &mode);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			break;
+		}
+
+		if (!strcmp(param, "type")) {
+			err = parse_mode(val, &mode);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			break;
+		}
+
+		if (!strcmp(param, "page_index")) {
+			err = parse_page_index(val, nr_pages, &lo, &hi);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			break;
+		}
+
+		/* There can be several page index ranges */
+		if (!strcmp(param, "page_index_range")) {
+			err = parse_page_index_range(val, nr_pages, &lo, &hi);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			continue;
+		}
+	}
+
+	err = zram_writeback_slots(zram, ctl);
+	if (err)
+		ret = err;
+
+release_init_lock:
 	release_pp_ctl(zram, ctl);
 	atomic_set(&zram->pp_in_progress, 0);
 	up_read(&zram->init_lock);
@@ -940,6 +1052,14 @@ static ssize_t writeback_store(struct device *dev,
 	return ret;
 }
 
+static ssize_t writeback_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t len)
+{
+	pr_info_once("deprecated attr: consider using writeback_ext\n");
+	return writeback_ext_store(dev, attr, buf, len);
+}
+
 struct zram_work {
 	struct work_struct work;
 	struct zram *zram;
@@ -2490,6 +2610,7 @@ static DEVICE_ATTR_RW(backing_dev);
 static DEVICE_ATTR_WO(writeback);
 static DEVICE_ATTR_RW(writeback_limit);
 static DEVICE_ATTR_RW(writeback_limit_enable);
+static DEVICE_ATTR_WO(writeback_ext);
 #endif
 #ifdef CONFIG_ZRAM_MULTI_COMP
 static DEVICE_ATTR_RW(recomp_algorithm);
@@ -2511,6 +2632,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_writeback.attr,
 	&dev_attr_writeback_limit.attr,
 	&dev_attr_writeback_limit_enable.attr,
+	&dev_attr_writeback_ext.attr,
 #endif
 	&dev_attr_io_stat.attr,
 	&dev_attr_mm_stat.attr,
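
For reviewers who want to poke at the proposed interface from userspace,
here is a minimal illustrative sketch (not part of the patch) that submits
several page index ranges in a single write(2) call, which is exactly the
case the new target-selection logic is meant to optimize. It assumes a
zram0 device built with CONFIG_ZRAM_WRITEBACK and an already configured
backing device; the sysfs path follows the attribute added by this patch
and the range values are made up for the example.

/* Illustrative only, not part of the patch. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *attr = "/sys/block/zram0/writeback_ext";
	/* Several ranges in one buffer: one syscall, one post-processing run. */
	const char *req = "page_index_range=100-200 page_index_range=500-700";
	int fd;

	fd = open(attr, O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (write(fd, req, strlen(req)) < 0) {
		perror("write");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}

This is equivalent to the echo example in the updated documentation, but it
makes the "multiple ranges, one write call" property of the interface
explicit.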