From patchwork Wed Apr 9 11:23:48 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 14044494
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Dan Carpenter, Richard Chang, Brian Geffon, Minchan Kim,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv4] zram: modernize writeback interface
Date: Wed, 9 Apr 2025 20:23:48 +0900
Message-ID: <20250409112611.1154282-1-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.49.0.504.g3bcea36a83-goog

The writeback interface supports a page_index=N parameter which performs
writeback of the given page.  Since we rarely need to write back just one
single page, the typical use case involves a number of writeback calls,
each performing writeback of one page:

echo page_index=100 > zram0/writeback
...
echo page_index=200 > zram0/writeback
echo page_index=500 > zram0/writeback
...
echo page_index=700 > zram0/writeback

One obvious downside of this is that it increases the number of syscalls.
A less obvious, but significantly more important, downside is that when
given only one page to post-process, zram cannot perform an optimal target
selection.  This becomes a critical limitation when writeback_limit is
enabled, because under writeback_limit we want to guarantee the highest
memory savings, hence we first need to write back pages that release the
highest amount of zsmalloc pool memory.

This patch adds a page_indexes=LOW-HIGH parameter to the writeback
interface:

echo page_indexes=100-200 page_indexes=500-700 > zram0/writeback

This gives zram a chance to apply an optimal target selection strategy on
each iteration of the writeback loop.

We also now permit multiple page_index parameters per call (previously
zram would recognize only one page_index) and a mix of single pages and
page ranges:

echo page_index=42 page_index=99 page_indexes=100-200 \
     page_indexes=500-700 > zram0/writeback

Apart from that, the patch also unifies parameter passing so that it
resembles other "modern" zram device attributes (e.g. recompression),
whereas the old interface used a mixed scheme: value-less parameters for
the mode and a key=value format for page_index.  We still support the
"old" value-less format for compatibility reasons.

Reviewed-by: Brian Geffon
Signed-off-by: Sergey Senozhatsky
---
v4: fixed uninitialized variable in zram_writeback_slots (Dan)

 Documentation/admin-guide/blockdev/zram.rst |  17 ++
 drivers/block/zram/zram_drv.c               | 320 +++++++++++++-------
 2 files changed, 232 insertions(+), 105 deletions(-)

diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
index 9bdb30901a93..b8d36134a151 100644
--- a/Documentation/admin-guide/blockdev/zram.rst
+++ b/Documentation/admin-guide/blockdev/zram.rst
@@ -369,6 +369,23 @@ they could write a page index into the interface::
 
 	echo "page_index=1251" > /sys/block/zramX/writeback
 
+In Linux 6.16 this interface underwent some rework.  First, the interface
+now supports `key=value` format for all of its parameters (`type=huge_idle`,
+etc.).  Second, support for `page_indexes` was introduced, which specifies a
+`LOW-HIGH` range (or ranges) of pages to be written back.  This reduces the
+number of syscalls, but, more importantly, enables an optimal post-processing
+target selection strategy.
+
+Usage example::
+
+	echo "type=idle" > /sys/block/zramX/writeback
+	echo "page_indexes=1-100 page_indexes=200-300" > \
+		/sys/block/zramX/writeback
+
+We also now permit multiple page_index params per call and a mix of
+single pages and page ranges::
+
+	echo page_index=42 page_index=99 page_indexes=100-200 \
+		page_indexes=500-700 > /sys/block/zramX/writeback
+
 If there are lots of write IO with flash device, potentially, it has
 flash wearout problem so that admin needs to design write limitation
 to guarantee storage health for entire product life.
 
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fda7d8624889..31332155e845 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -734,114 +734,19 @@ static void read_from_bdev_async(struct zram *zram, struct page *page,
 	submit_bio(bio);
 }
 
-#define PAGE_WB_SIG "page_index="
-
-#define PAGE_WRITEBACK 0
-#define HUGE_WRITEBACK (1<<0)
-#define IDLE_WRITEBACK (1<<1)
-#define INCOMPRESSIBLE_WRITEBACK (1<<2)
-
-static int scan_slots_for_writeback(struct zram *zram, u32 mode,
-                                    unsigned long nr_pages,
-                                    unsigned long index,
-                                    struct zram_pp_ctl *ctl)
+static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 {
-        for (; nr_pages != 0; index++, nr_pages--) {
-                bool ok = true;
-
-                zram_slot_lock(zram, index);
-                if (!zram_allocated(zram, index))
-                        goto next;
-
-                if (zram_test_flag(zram, index, ZRAM_WB) ||
-                    zram_test_flag(zram, index, ZRAM_SAME))
-                        goto next;
-
-                if (mode & IDLE_WRITEBACK &&
-                    !zram_test_flag(zram, index, ZRAM_IDLE))
-                        goto next;
-                if (mode & HUGE_WRITEBACK &&
-                    !zram_test_flag(zram, index, ZRAM_HUGE))
-                        goto next;
-                if (mode & INCOMPRESSIBLE_WRITEBACK &&
-                    !zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
-                        goto next;
-
-                ok = place_pp_slot(zram, ctl, index);
-next:
-                zram_slot_unlock(zram, index);
-                if (!ok)
-                        break;
-        }
-
-        return 0;
-}
-
-static ssize_t writeback_store(struct device *dev,
-                struct device_attribute *attr, const char *buf, size_t len)
-{
-        struct zram *zram = dev_to_zram(dev);
-        unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
-        struct zram_pp_ctl *ctl = NULL;
+        unsigned long blk_idx = 0;
+        struct page *page = NULL;
         struct zram_pp_slot *pps;
-        unsigned long index = 0;
-        struct bio bio;
         struct bio_vec bio_vec;
-        struct page *page = NULL;
-        ssize_t ret = len;
-        int mode, err;
-        unsigned long blk_idx = 0;
-
-        if (sysfs_streq(buf, "idle"))
-                mode = IDLE_WRITEBACK;
-        else if (sysfs_streq(buf, "huge"))
-                mode = HUGE_WRITEBACK;
-        else if (sysfs_streq(buf, "huge_idle"))
-                mode = IDLE_WRITEBACK | HUGE_WRITEBACK;
-        else if (sysfs_streq(buf, "incompressible"))
-                mode = INCOMPRESSIBLE_WRITEBACK;
-        else {
-                if (strncmp(buf, PAGE_WB_SIG, sizeof(PAGE_WB_SIG) - 1))
-                        return -EINVAL;
-
-                if (kstrtol(buf + sizeof(PAGE_WB_SIG) - 1, 10, &index) ||
-                    index >= nr_pages)
-                        return -EINVAL;
-
-                nr_pages = 1;
-                mode = PAGE_WRITEBACK;
-        }
-
-        down_read(&zram->init_lock);
-        if (!init_done(zram)) {
-                ret = -EINVAL;
-                goto release_init_lock;
-        }
-
-        /* Do not permit concurrent post-processing actions. */
-        if (atomic_xchg(&zram->pp_in_progress, 1)) {
-                up_read(&zram->init_lock);
-                return -EAGAIN;
-        }
-
-        if (!zram->backing_dev) {
-                ret = -ENODEV;
-                goto release_init_lock;
-        }
+        struct bio bio;
+        int ret = 0, err;
+        u32 index;
 
         page = alloc_page(GFP_KERNEL);
-        if (!page) {
-                ret = -ENOMEM;
-                goto release_init_lock;
-        }
-
-        ctl = init_pp_ctl();
-        if (!ctl) {
-                ret = -ENOMEM;
-                goto release_init_lock;
-        }
-
-        scan_slots_for_writeback(zram, mode, nr_pages, index, ctl);
+        if (!page)
+                return -ENOMEM;
 
         while ((pps = select_pp_slot(ctl))) {
                 spin_lock(&zram->wb_limit_lock);
@@ -929,10 +834,215 @@ static ssize_t writeback_store(struct device *dev,
 
         if (blk_idx)
                 free_block_bdev(zram, blk_idx);
-
-release_init_lock:
         if (page)
                 __free_page(page);
+
+        return ret;
+}
+
+#define PAGE_WRITEBACK 0
+#define HUGE_WRITEBACK (1 << 0)
+#define IDLE_WRITEBACK (1 << 1)
+#define INCOMPRESSIBLE_WRITEBACK (1 << 2)
+
+static int parse_page_index(char *val, unsigned long nr_pages,
+                            unsigned long *lo, unsigned long *hi)
+{
+        int ret;
+
+        ret = kstrtoul(val, 10, lo);
+        if (ret)
+                return ret;
+        if (*lo >= nr_pages)
+                return -ERANGE;
+        *hi = *lo + 1;
+        return 0;
+}
+
+static int parse_page_indexes(char *val, unsigned long nr_pages,
+                              unsigned long *lo, unsigned long *hi)
+{
+        char *delim;
+        int ret;
+
+        delim = strchr(val, '-');
+        if (!delim)
+                return -EINVAL;
+
+        *delim = 0x00;
+        ret = kstrtoul(val, 10, lo);
+        if (ret)
+                return ret;
+        if (*lo >= nr_pages)
+                return -ERANGE;
+
+        ret = kstrtoul(delim + 1, 10, hi);
+        if (ret)
+                return ret;
+        if (*hi >= nr_pages || *lo > *hi)
+                return -ERANGE;
+        *hi += 1;
+        return 0;
+}
+
+static int parse_mode(char *val, u32 *mode)
+{
+        *mode = 0;
+
+        if (!strcmp(val, "idle"))
+                *mode = IDLE_WRITEBACK;
+        if (!strcmp(val, "huge"))
+                *mode = HUGE_WRITEBACK;
+        if (!strcmp(val, "huge_idle"))
+                *mode = IDLE_WRITEBACK | HUGE_WRITEBACK;
+        if (!strcmp(val, "incompressible"))
+                *mode = INCOMPRESSIBLE_WRITEBACK;
+
+        if (*mode == 0)
+                return -EINVAL;
+        return 0;
+}
+
+static int scan_slots_for_writeback(struct zram *zram, u32 mode,
+                                    unsigned long lo, unsigned long hi,
+                                    struct zram_pp_ctl *ctl)
+{
+        u32 index = lo;
+
+        while (index < hi) {
+                bool ok = true;
+
+                zram_slot_lock(zram, index);
+                if (!zram_allocated(zram, index))
+                        goto next;
+
+                if (zram_test_flag(zram, index, ZRAM_WB) ||
+                    zram_test_flag(zram, index, ZRAM_SAME))
+                        goto next;
+
+                if (mode & IDLE_WRITEBACK &&
+                    !zram_test_flag(zram, index, ZRAM_IDLE))
+                        goto next;
+                if (mode & HUGE_WRITEBACK &&
+                    !zram_test_flag(zram, index, ZRAM_HUGE))
+                        goto next;
+                if (mode & INCOMPRESSIBLE_WRITEBACK &&
+                    !zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
+                        goto next;
+
+                ok = place_pp_slot(zram, ctl, index);
+next:
+                zram_slot_unlock(zram, index);
+                if (!ok)
+                        break;
+                index++;
+        }
+
+        return 0;
+}
+
+static ssize_t writeback_store(struct device *dev,
+                               struct device_attribute *attr,
+                               const char *buf, size_t len)
+{
+        struct zram *zram = dev_to_zram(dev);
+        u64 nr_pages = zram->disksize >> PAGE_SHIFT;
+        unsigned long lo = 0, hi = nr_pages;
+        struct zram_pp_ctl *ctl = NULL;
+        char *args, *param, *val;
+        ssize_t ret = len;
+        int err, mode = 0;
+
+        down_read(&zram->init_lock);
+        if (!init_done(zram)) {
+                up_read(&zram->init_lock);
+                return -EINVAL;
+        }
+
+        /* Do not permit concurrent post-processing actions. */
+        if (atomic_xchg(&zram->pp_in_progress, 1)) {
+                up_read(&zram->init_lock);
+                return -EAGAIN;
+        }
+
+        if (!zram->backing_dev) {
+                ret = -ENODEV;
+                goto release_init_lock;
+        }
+
+        ctl = init_pp_ctl();
+        if (!ctl) {
+                ret = -ENOMEM;
+                goto release_init_lock;
+        }
+
+        args = skip_spaces(buf);
+        while (*args) {
+                args = next_arg(args, &param, &val);
+
+                /*
+                 * Workaround to support the old writeback interface.
+                 *
+                 * The old writeback interface has a minor inconsistency and
+                 * requires key=value only for page_index parameter, while the
+                 * writeback mode is a valueless parameter.
+                 *
+                 * This is not the case anymore and now all parameters are
+                 * required to have values, however, we need to support the
+                 * legacy writeback interface format so we check if we can
+                 * recognize a valueless parameter as the (legacy) writeback
+                 * mode.
+                 */
+                if (!val || !*val) {
+                        err = parse_mode(param, &mode);
+                        if (err) {
+                                ret = err;
+                                goto release_init_lock;
+                        }
+
+                        scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+                        break;
+                }
+
+                if (!strcmp(param, "type")) {
+                        err = parse_mode(val, &mode);
+                        if (err) {
+                                ret = err;
+                                goto release_init_lock;
+                        }
+
+                        scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+                        break;
+                }
+
+                if (!strcmp(param, "page_index")) {
+                        err = parse_page_index(val, nr_pages, &lo, &hi);
+                        if (err) {
+                                ret = err;
+                                goto release_init_lock;
+                        }
+
+                        scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+                        continue;
+                }
+
+                if (!strcmp(param, "page_indexes")) {
+                        err = parse_page_indexes(val, nr_pages, &lo, &hi);
+                        if (err) {
+                                ret = err;
+                                goto release_init_lock;
+                        }
+
+                        scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+                        continue;
+                }
+        }
+
+        err = zram_writeback_slots(zram, ctl);
+        if (err)
+                ret = err;
+
+release_init_lock:
         release_pp_ctl(zram, ctl);
         atomic_set(&zram->pp_in_progress, 0);
         up_read(&zram->init_lock);
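
For reference, a quick usage sketch of the reworked interface (illustration
only, not part of the patch; the device name, range values and limit value
below are made up, and a configured backing_dev is assumed):

	# reserve a writeback budget so zram can pick the best candidates
	echo 1 > /sys/block/zram0/writeback_limit_enable
	echo 4096 > /sys/block/zram0/writeback_limit
	# one syscall: two individual pages plus two page ranges
	echo "page_index=42 page_index=99 page_indexes=100-200 page_indexes=500-700" > \
		/sys/block/zram0/writeback
	# the legacy value-less format keeps working
	echo idle > /sys/block/zram0/writeback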