From patchwork Fri Mar 7 12:01:03 2025
X-Patchwork-Submitter: =?utf-8?b?UXVuLXdlaSBMaW4gKOael+e+pOW0tCk=?=
X-Patchwork-Id: 14006389
From: Qun-Wei Lin <qun-wei.lin@mediatek.com>
To: Jens Axboe, Minchan Kim, Sergey Senozhatsky, Vishal Verma,
	Dan Williams, Dave Jiang, Ira Weiny, Andrew Morton,
	Matthias Brugger, AngeloGioacchino Del Regno, Chris Li,
	Ryan Roberts, "Huang, Ying", Kairui Song, Dan Schatzberg,
	Barry Song, Al Viro
Cc: Casper Li, Chinwen Chang, Andrew Yang, James Hsu,
	Qun-Wei Lin <qun-wei.lin@mediatek.com>
Subject: [PATCH 1/2] mm: Split BLK_FEAT_SYNCHRONOUS and SWP_SYNCHRONOUS_IO
 into separate read and write flags
Date: Fri, 7 Mar 2025 20:01:03 +0800
Message-ID: <20250307120141.1566673-2-qun-wei.lin@mediatek.com>
In-Reply-To: <20250307120141.1566673-1-qun-wei.lin@mediatek.com>
References: <20250307120141.1566673-1-qun-wei.lin@mediatek.com>
MIME-Version: 1.0
This patch splits the BLK_FEAT_SYNCHRONOUS feature flag into two
separate flags: BLK_FEAT_READ_SYNCHRONOUS and BLK_FEAT_WRITE_SYNCHRONOUS.
Similarly, the SWP_SYNCHRONOUS_IO flag is split into
SWP_READ_SYNCHRONOUS_IO and SWP_WRITE_SYNCHRONOUS_IO. These changes are
motivated by the need to better accommodate certain swap devices that
support synchronous read operations but asynchronous write operations.
The existing BLK_FEAT_SYNCHRONOUS and SWP_SYNCHRONOUS_IO flags are not
sufficient for these devices, as they enforce synchronous behavior for
both read and write operations.

Signed-off-by: Qun-Wei Lin <qun-wei.lin@mediatek.com>
---
 drivers/block/brd.c           |  3 ++-
 drivers/block/zram/zram_drv.c |  5 +++--
 drivers/nvdimm/btt.c          |  3 ++-
 drivers/nvdimm/pmem.c         |  5 +++--
 include/linux/blkdev.h        | 24 ++++++++++++++++--------
 include/linux/swap.h          | 31 ++++++++++++++++---------------
 mm/memory.c                   |  4 ++--
 mm/page_io.c                  |  6 +++---
 mm/swapfile.c                 |  7 +++++--
 9 files changed, 52 insertions(+), 36 deletions(-)

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 292f127cae0a..66920b9d4701 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -370,7 +370,8 @@ static int brd_alloc(int i)
 		.max_hw_discard_sectors	= UINT_MAX,
 		.max_discard_segments	= 1,
 		.discard_granularity	= PAGE_SIZE,
-		.features		= BLK_FEAT_SYNCHRONOUS |
+		.features		= BLK_FEAT_READ_SYNCHRONOUS |
+					  BLK_FEAT_WRITE_SYNCHRONOUS |
 					  BLK_FEAT_NOWAIT,
 	};
 
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3dee026988dc..2e1a70f2f4bd 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2535,8 +2535,9 @@ static int zram_add(void)
 #if ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE
 		.max_write_zeroes_sectors	= UINT_MAX,
 #endif
-		.features			= BLK_FEAT_STABLE_WRITES |
-						  BLK_FEAT_SYNCHRONOUS,
+		.features			= BLK_FEAT_STABLE_WRITES |
+						  BLK_FEAT_READ_SYNCHRONOUS |
+						  BLK_FEAT_WRITE_SYNCHRONOUS,
 	};
 	struct zram *zram;
 	int ret, device_id;
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 423dcd190906..1665d98f51af 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1501,7 +1501,8 @@ static int btt_blk_init(struct btt *btt)
 		.logical_block_size	= btt->sector_size,
 		.max_hw_sectors		= UINT_MAX,
 		.max_integrity_segments	= 1,
-		.features		= BLK_FEAT_SYNCHRONOUS,
+		.features		= BLK_FEAT_READ_SYNCHRONOUS |
+					  BLK_FEAT_WRITE_SYNCHRONOUS,
 	};
 	int rc;
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index d81faa9d89c9..81a57d7ca746 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -455,8 +455,9 @@ static int pmem_attach_disk(struct device *dev,
 		.logical_block_size	= pmem_sector_size(ndns),
 		.physical_block_size	= PAGE_SIZE,
 		.max_hw_sectors		= UINT_MAX,
-		.features		= BLK_FEAT_WRITE_CACHE |
-					  BLK_FEAT_SYNCHRONOUS,
+		.features		= BLK_FEAT_WRITE_CACHE |
+					  BLK_FEAT_READ_SYNCHRONOUS |
+					  BLK_FEAT_WRITE_SYNCHRONOUS,
 	};
 	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 08a727b40816..3070f2e9d862 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -305,20 +305,23 @@ typedef unsigned int __bitwise blk_features_t;
 /* don't modify data until writeback is done */
 #define BLK_FEAT_STABLE_WRITES		((__force blk_features_t)(1u << 5))
 
-/* always completes in submit context */
-#define BLK_FEAT_SYNCHRONOUS		((__force blk_features_t)(1u << 6))
+/* read operations always complete in submit context */
+#define BLK_FEAT_READ_SYNCHRONOUS	((__force blk_features_t)(1u << 6))
+
+/* write operations always complete in submit context */
+#define BLK_FEAT_WRITE_SYNCHRONOUS	((__force blk_features_t)(1u << 7))
 
 /* supports REQ_NOWAIT */
-#define BLK_FEAT_NOWAIT			((__force blk_features_t)(1u << 7))
+#define BLK_FEAT_NOWAIT			((__force blk_features_t)(1u << 8))
 
 /* supports DAX */
-#define BLK_FEAT_DAX			((__force blk_features_t)(1u << 8))
+#define BLK_FEAT_DAX			((__force blk_features_t)(1u << 9))
 
 /* supports I/O polling */
-#define BLK_FEAT_POLL			((__force blk_features_t)(1u << 9))
+#define BLK_FEAT_POLL			((__force blk_features_t)(1u << 10))
 
 /* is a zoned device */
-#define BLK_FEAT_ZONED			((__force blk_features_t)(1u << 10))
+#define BLK_FEAT_ZONED			((__force blk_features_t)(1u << 11))
 
 /* supports PCI(e) p2p requests */
 #define BLK_FEAT_PCI_P2PDMA		((__force blk_features_t)(1u << 12))
@@ -1321,9 +1324,14 @@ static inline bool bdev_nonrot(struct block_device *bdev)
 	return blk_queue_nonrot(bdev_get_queue(bdev));
 }
 
-static inline bool bdev_synchronous(struct block_device *bdev)
+static inline bool bdev_read_synchronous(struct block_device *bdev)
+{
+	return bdev->bd_disk->queue->limits.features & BLK_FEAT_READ_SYNCHRONOUS;
+}
+
+static inline bool bdev_write_synchronous(struct block_device *bdev)
 {
-	return bdev->bd_disk->queue->limits.features & BLK_FEAT_SYNCHRONOUS;
+	return bdev->bd_disk->queue->limits.features & BLK_FEAT_WRITE_SYNCHRONOUS;
 }
 
 static inline bool bdev_stable_writes(struct block_device *bdev)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f3e0ac20c2e8..2068b6973648 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -205,21 +205,22 @@ struct swap_extent {
 	 offsetof(union swap_header, info.badpages)) / sizeof(int))
 
 enum {
-	SWP_USED	= (1 << 0),	/* is slot in swap_info[] used? */
-	SWP_WRITEOK	= (1 << 1),	/* ok to write to this swap? */
-	SWP_DISCARDABLE = (1 << 2),	/* blkdev support discard */
-	SWP_DISCARDING	= (1 << 3),	/* now discarding a free cluster */
-	SWP_SOLIDSTATE	= (1 << 4),	/* blkdev seeks are cheap */
-	SWP_CONTINUED	= (1 << 5),	/* swap_map has count continuation */
-	SWP_BLKDEV	= (1 << 6),	/* its a block device */
-	SWP_ACTIVATED	= (1 << 7),	/* set after swap_activate success */
-	SWP_FS_OPS	= (1 << 8),	/* swapfile operations go through fs */
-	SWP_AREA_DISCARD = (1 << 9),	/* single-time swap area discards */
-	SWP_PAGE_DISCARD = (1 << 10),	/* freed swap page-cluster discards */
-	SWP_STABLE_WRITES = (1 << 11),	/* no overwrite PG_writeback pages */
-	SWP_SYNCHRONOUS_IO = (1 << 12),	/* synchronous IO is efficient */
-	/* add others here before... */
-	SWP_SCANNING	= (1 << 14),	/* refcount in scan_swap_map */
+	SWP_USED		= (1 << 0),	/* is slot in swap_info[] used? */
+	SWP_WRITEOK		= (1 << 1),	/* ok to write to this swap? */
+	SWP_DISCARDABLE		= (1 << 2),	/* blkdev support discard */
+	SWP_DISCARDING		= (1 << 3),	/* now discarding a free cluster */
+	SWP_SOLIDSTATE		= (1 << 4),	/* blkdev seeks are cheap */
+	SWP_CONTINUED		= (1 << 5),	/* swap_map has count continuation */
+	SWP_BLKDEV		= (1 << 6),	/* its a block device */
+	SWP_ACTIVATED		= (1 << 7),	/* set after swap_activate success */
+	SWP_FS_OPS		= (1 << 8),	/* swapfile operations go through fs */
+	SWP_AREA_DISCARD	= (1 << 9),	/* single-time swap area discards */
+	SWP_PAGE_DISCARD	= (1 << 10),	/* freed swap page-cluster discards */
+	SWP_STABLE_WRITES	= (1 << 11),	/* no overwrite PG_writeback pages */
+	SWP_READ_SYNCHRONOUS_IO	= (1 << 12),	/* synchronous read IO is efficient */
+	SWP_WRITE_SYNCHRONOUS_IO = (1 << 13),	/* synchronous write IO is efficient */
+	SWP_SCANNING		= (1 << 14),	/* refcount in scan_swap_map */
+	/* add others here before... */
 };
 
 #define SWAP_CLUSTER_MAX 32UL
diff --git a/mm/memory.c b/mm/memory.c
index 75c2dfd04f72..56c864d5d787 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4293,7 +4293,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swapcache = folio;
 
 	if (!folio) {
-		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+		if (data_race(si->flags & SWP_READ_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
 			folio = alloc_swap_folio(vmf);
@@ -4430,7 +4430,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_nomap;
 	}
 
-	/* allocated large folios for SWP_SYNCHRONOUS_IO */
+	/* allocated large folios for SWP_READ_SYNCHRONOUS_IO */
 	if (folio_test_large(folio) && !folio_test_swapcache(folio)) {
 		unsigned long nr = folio_nr_pages(folio);
 		unsigned long folio_start = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
diff --git a/mm/page_io.c b/mm/page_io.c
index 4b4ea8e49cf6..d692eafdd90c 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -465,10 +465,10 @@ void __swap_writepage(struct folio *folio, struct writeback_control *wbc)
 		swap_writepage_fs(folio, wbc);
 	/*
 	 * ->flags can be updated non-atomicially (scan_swap_map_slots),
-	 * but that will never affect SWP_SYNCHRONOUS_IO, so the data_race
+	 * but that will never affect SWP_WRITE_SYNCHRONOUS_IO, so the data_race
 	 * is safe.
 	 */
-	else if (data_race(sis->flags & SWP_SYNCHRONOUS_IO))
+	else if (data_race(sis->flags & SWP_WRITE_SYNCHRONOUS_IO))
 		swap_writepage_bdev_sync(folio, wbc, sis);
 	else
 		swap_writepage_bdev_async(folio, wbc, sis);
@@ -616,7 +616,7 @@ static void swap_read_folio_bdev_async(struct folio *folio,
 void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
 {
 	struct swap_info_struct *sis = swp_swap_info(folio->swap);
-	bool synchronous = sis->flags & SWP_SYNCHRONOUS_IO;
+	bool synchronous = sis->flags & SWP_READ_SYNCHRONOUS_IO;
 	bool workingset = folio_test_workingset(folio);
 	unsigned long pflags;
 	bool in_thrashing;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b0a9071cfe1d..902e5698af44 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3488,8 +3488,11 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	if (si->bdev && bdev_stable_writes(si->bdev))
 		si->flags |= SWP_STABLE_WRITES;
 
-	if (si->bdev && bdev_synchronous(si->bdev))
-		si->flags |= SWP_SYNCHRONOUS_IO;
+	if (si->bdev && bdev_read_synchronous(si->bdev))
+		si->flags |= SWP_READ_SYNCHRONOUS_IO;
+
+	if (si->bdev && bdev_write_synchronous(si->bdev))
+		si->flags |= SWP_WRITE_SYNCHRONOUS_IO;
 
 	if (si->bdev && bdev_nonrot(si->bdev)) {
 		si->flags |= SWP_SOLIDSTATE;

From patchwork Fri Mar 7 12:01:04 2025
X-Patchwork-Submitter: =?utf-8?b?UXVuLXdlaSBMaW4gKOael+e+pOW0tCk=?=
X-Patchwork-Id: 14006391
From: Qun-Wei Lin <qun-wei.lin@mediatek.com>
To: Jens Axboe, Minchan Kim, Sergey Senozhatsky, Vishal Verma,
	Dan Williams, Dave Jiang, Ira Weiny, Andrew Morton,
	Matthias Brugger, AngeloGioacchino Del Regno, Chris Li,
	Ryan Roberts, "Huang, Ying", Kairui Song, Dan Schatzberg,
	Barry Song, Al Viro
Cc: Casper Li, Chinwen Chang, Andrew Yang, James Hsu,
	Qun-Wei Lin <qun-wei.lin@mediatek.com>
Subject: [PATCH 2/2] kcompressd: Add Kcompressd for accelerated zram
 compression
Date: Fri, 7 Mar 2025 20:01:04 +0800
Message-ID: <20250307120141.1566673-3-qun-wei.lin@mediatek.com>
In-Reply-To: <20250307120141.1566673-1-qun-wei.lin@mediatek.com>
References: <20250307120141.1566673-1-qun-wei.lin@mediatek.com>
MIME-Version: 1.0
Introduced Kcompressd to offload zram page compression, improving
system efficiency by handling compression separately from memory
reclaiming. Added necessary configurations and dependencies.
Signed-off-by: Qun-Wei Lin <qun-wei.lin@mediatek.com>
---
 drivers/block/zram/Kconfig      |  11 ++
 drivers/block/zram/Makefile     |   3 +-
 drivers/block/zram/kcompressd.c | 340 ++++++++++++++++++++++++++++++++
 drivers/block/zram/kcompressd.h |  25 +++
 drivers/block/zram/zram_drv.c   |  22 ++-
 5 files changed, 397 insertions(+), 4 deletions(-)
 create mode 100644 drivers/block/zram/kcompressd.c
 create mode 100644 drivers/block/zram/kcompressd.h

diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
index 402b7b175863..f0a1b574f770 100644
--- a/drivers/block/zram/Kconfig
+++ b/drivers/block/zram/Kconfig
@@ -145,3 +145,14 @@ config ZRAM_MULTI_COMP
 	  re-compress pages using a potentially slower but more effective
 	  compression algorithm. Note, that IDLE page recompression
 	  requires ZRAM_TRACK_ENTRY_ACTIME.
+
+config KCOMPRESSD
+	tristate "Kcompressd: Accelerated zram compression"
+	depends on ZRAM
+	help
+	  Kcompressd creates multiple daemons to accelerate the compression of pages
+	  in zram, offloading this time-consuming task from the zram driver.
+
+	  This approach improves system efficiency by handling page compression separately,
+	  which was originally done by kswapd or direct reclaim.
+
diff --git a/drivers/block/zram/Makefile b/drivers/block/zram/Makefile
index 0fdefd576691..23baa5dfceb9 100644
--- a/drivers/block/zram/Makefile
+++ b/drivers/block/zram/Makefile
@@ -9,4 +9,5 @@
 zram-$(CONFIG_ZRAM_BACKEND_ZSTD)	+= backend_zstd.o
 zram-$(CONFIG_ZRAM_BACKEND_DEFLATE)	+= backend_deflate.o
 zram-$(CONFIG_ZRAM_BACKEND_842)		+= backend_842.o
 
-obj-$(CONFIG_ZRAM)	+=	zram.o
+obj-$(CONFIG_ZRAM)	+=	zram.o
+obj-$(CONFIG_KCOMPRESSD)	+=	kcompressd.o
diff --git a/drivers/block/zram/kcompressd.c b/drivers/block/zram/kcompressd.c
new file mode 100644
index 000000000000..195b7e386869
--- /dev/null
+++ b/drivers/block/zram/kcompressd.c
@@ -0,0 +1,340 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2024 MediaTek Inc.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "kcompressd.h"
+
+#define INIT_QUEUE_SIZE		4096
+#define DEFAULT_NR_KCOMPRESSD	4
+
+static atomic_t enable_kcompressd;
+static unsigned int nr_kcompressd;
+static unsigned int queue_size_per_kcompressd;
+static struct kcompress *kcompress;
+
+enum run_state {
+	KCOMPRESSD_NOT_STARTED = 0,
+	KCOMPRESSD_RUNNING,
+	KCOMPRESSD_SLEEPING,
+};
+
+struct kcompressd_para {
+	wait_queue_head_t *kcompressd_wait;
+	struct kfifo *write_fifo;
+	atomic_t *running;
+};
+
+static struct kcompressd_para *kcompressd_para;
+static BLOCKING_NOTIFIER_HEAD(kcompressd_notifier_list);
+
+struct write_work {
+	void *mem;
+	struct bio *bio;
+	compress_callback cb;
+};
+
+int kcompressd_enabled(void)
+{
+	return likely(atomic_read(&enable_kcompressd));
+}
+EXPORT_SYMBOL(kcompressd_enabled);
+
+static void kcompressd_try_to_sleep(struct kcompressd_para *p)
+{
+	DEFINE_WAIT(wait);
+
+	if (!kfifo_is_empty(p->write_fifo))
+		return;
+
+	if (freezing(current) || kthread_should_stop())
+		return;
+
+	atomic_set(p->running, KCOMPRESSD_SLEEPING);
+	prepare_to_wait(p->kcompressd_wait, &wait, TASK_INTERRUPTIBLE);
+
+	/*
+	 * After a short sleep, check if it was a premature sleep. If not, then
+	 * go fully to sleep until explicitly woken up.
+	 */
+	if (!kthread_should_stop() && kfifo_is_empty(p->write_fifo))
+		schedule();
+
+	finish_wait(p->kcompressd_wait, &wait);
+	atomic_set(p->running, KCOMPRESSD_RUNNING);
+}
+
+static int kcompressd(void *para)
+{
+	struct task_struct *tsk = current;
+	struct kcompressd_para *p = (struct kcompressd_para *)para;
+
+	tsk->flags |= PF_MEMALLOC | PF_KSWAPD;
+	set_freezable();
+
+	while (!kthread_should_stop()) {
+		bool ret;
+
+		kcompressd_try_to_sleep(p);
+		ret = try_to_freeze();
+		if (kthread_should_stop())
+			break;
+
+		if (ret)
+			continue;
+
+		while (!kfifo_is_empty(p->write_fifo)) {
+			struct write_work entry;
+
+			if (sizeof(struct write_work) == kfifo_out(p->write_fifo,
+						&entry, sizeof(struct write_work))) {
+				entry.cb(entry.mem, entry.bio);
+				bio_put(entry.bio);
+			}
+		}
+	}
+
+	tsk->flags &= ~(PF_MEMALLOC | PF_KSWAPD);
+	atomic_set(p->running, KCOMPRESSD_NOT_STARTED);
+	return 0;
+}
+
+static int init_write_queue(void)
+{
+	int i;
+	unsigned int queue_len = queue_size_per_kcompressd * sizeof(struct write_work);
+
+	for (i = 0; i < nr_kcompressd; i++) {
+		if (kfifo_alloc(&kcompress[i].write_fifo,
+					queue_len, GFP_KERNEL)) {
+			pr_err("Failed to alloc kfifo %d\n", i);
+			return -ENOMEM;
+		}
+	}
+	return 0;
+}
+
+static void clean_bio_queue(int idx)
+{
+	struct write_work entry;
+
+	while (sizeof(struct write_work) == kfifo_out(&kcompress[idx].write_fifo,
+				&entry, sizeof(struct write_work))) {
+		bio_put(entry.bio);
+		entry.cb(entry.mem, entry.bio);
+	}
+	kfifo_free(&kcompress[idx].write_fifo);
+}
+
+static int kcompress_update(void)
+{
+	int i;
+	int ret;
+
+	kcompress = kvmalloc_array(nr_kcompressd, sizeof(struct kcompress), GFP_KERNEL);
+	if (!kcompress)
+		return -ENOMEM;
+
+	kcompressd_para = kvmalloc_array(nr_kcompressd, sizeof(struct kcompressd_para), GFP_KERNEL);
+	if (!kcompressd_para)
+		return -ENOMEM;
+
+	ret = init_write_queue();
+	if (ret) {
+		pr_err("Initialization of writing to FIFOs failed!!\n");
+		return ret;
+	}
+
+	for (i = 0; i < nr_kcompressd; i++) {
+		init_waitqueue_head(&kcompress[i].kcompressd_wait);
+		kcompressd_para[i].kcompressd_wait = &kcompress[i].kcompressd_wait;
+		kcompressd_para[i].write_fifo = &kcompress[i].write_fifo;
+		kcompressd_para[i].running = &kcompress[i].running;
+	}
+
+	return 0;
+}
+
+static void stop_all_kcompressd_thread(void)
+{
+	int i;
+
+	for (i = 0; i < nr_kcompressd; i++) {
+		kthread_stop(kcompress[i].kcompressd);
+		kcompress[i].kcompressd = NULL;
+		clean_bio_queue(i);
+	}
+}
+
+static int do_nr_kcompressd_handler(const char *val,
+		const struct kernel_param *kp)
+{
+	int ret;
+
+	atomic_set(&enable_kcompressd, false);
+
+	stop_all_kcompressd_thread();
+
+	ret = param_set_int(val, kp);
+	if (ret) {
+		pr_err("Invalid number of kcompressd.\n");
+		return -EINVAL;
+	}
+
+	ret = init_write_queue();
+	if (ret) {
+		pr_err("Initialization of writing to FIFOs failed!!\n");
+		return ret;
+	}
+
+	atomic_set(&enable_kcompressd, true);
+
+	return 0;
+}
+
+static const struct kernel_param_ops param_ops_change_nr_kcompressd = {
+	.set = &do_nr_kcompressd_handler,
+	.get = &param_get_uint,
+	.free = NULL,
+};
+
+module_param_cb(nr_kcompressd, &param_ops_change_nr_kcompressd,
+		&nr_kcompressd, 0644);
+MODULE_PARM_DESC(nr_kcompressd, "Number of pre-created daemon for page compression");
+
+static int do_queue_size_per_kcompressd_handler(const char *val,
+		const struct kernel_param *kp)
+{
+	int ret;
+
+	atomic_set(&enable_kcompressd, false);
+
+	stop_all_kcompressd_thread();
+
+	ret = param_set_int(val, kp);
+	if (ret) {
+		pr_err("Invalid queue size for kcompressd.\n");
+		return -EINVAL;
+	}
+
+	ret = init_write_queue();
+	if (ret) {
+		pr_err("Initialization of writing to FIFOs failed!!\n");
+		return ret;
+	}
+
+	pr_info("Queue size for kcompressd was changed: %d\n", queue_size_per_kcompressd);
+
+	atomic_set(&enable_kcompressd, true);
+	return 0;
+}
+
+static const struct kernel_param_ops param_ops_change_queue_size_per_kcompressd = {
+	.set = &do_queue_size_per_kcompressd_handler,
+	.get = &param_get_uint,
+	.free = NULL,
+};
+
+module_param_cb(queue_size_per_kcompressd, &param_ops_change_queue_size_per_kcompressd,
+		&queue_size_per_kcompressd, 0644);
+MODULE_PARM_DESC(queue_size_per_kcompressd,
+		"Size of queue for kcompressd");
+
+int schedule_bio_write(void *mem, struct bio *bio, compress_callback cb)
+{
+	int i;
+	bool submit_success = false;
+	size_t sz_work = sizeof(struct write_work);
+
+	struct write_work entry = {
+		.mem = mem,
+		.bio = bio,
+		.cb = cb
+	};
+
+	if (unlikely(!atomic_read(&enable_kcompressd)))
+		return -EBUSY;
+
+	if (!nr_kcompressd || !current_is_kswapd())
+		return -EBUSY;
+
+	bio_get(bio);
+
+	for (i = 0; i < nr_kcompressd; i++) {
+		submit_success =
+			(kfifo_avail(&kcompress[i].write_fifo) >= sz_work) &&
+			(sz_work == kfifo_in(&kcompress[i].write_fifo, &entry, sz_work));
+
+		if (submit_success) {
+			switch (atomic_read(&kcompress[i].running)) {
+			case KCOMPRESSD_NOT_STARTED:
+				atomic_set(&kcompress[i].running, KCOMPRESSD_RUNNING);
+				kcompress[i].kcompressd = kthread_run(kcompressd,
+						&kcompressd_para[i], "kcompressd:%d", i);
+				if (IS_ERR(kcompress[i].kcompressd)) {
+					atomic_set(&kcompress[i].running, KCOMPRESSD_NOT_STARTED);
+					pr_warn("Failed to start kcompressd:%d\n", i);
+					clean_bio_queue(i);
+				}
+				break;
+			case KCOMPRESSD_RUNNING:
+				break;
+			case KCOMPRESSD_SLEEPING:
+				wake_up_interruptible(&kcompress[i].kcompressd_wait);
+				break;
+			}
+			return 0;
+		}
+	}
+
+	bio_put(bio);
+	return -EBUSY;
+}
+EXPORT_SYMBOL(schedule_bio_write);
+
+static int __init kcompressd_init(void)
+{
+	int ret;
+
+	nr_kcompressd = DEFAULT_NR_KCOMPRESSD;
+	queue_size_per_kcompressd = INIT_QUEUE_SIZE;
+
+	ret = kcompress_update();
+	if (ret) {
+		pr_err("Init kcompressd failed!\n");
+		return ret;
+	}
+
+	atomic_set(&enable_kcompressd, true);
+	blocking_notifier_call_chain(&kcompressd_notifier_list, 0, NULL);
+	return 0;
+}
+
+static void __exit kcompressd_exit(void)
+{
+	atomic_set(&enable_kcompressd, false);
+	stop_all_kcompressd_thread();
+
+	kvfree(kcompress);
+	kvfree(kcompressd_para);
+}
+
+module_init(kcompressd_init);
+module_exit(kcompressd_exit);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Qun-Wei Lin <qun-wei.lin@mediatek.com>");
+MODULE_DESCRIPTION("Separate the page compression from the memory reclaiming");
+
diff --git a/drivers/block/zram/kcompressd.h b/drivers/block/zram/kcompressd.h
new file mode 100644
index 000000000000..2fe0b424a7af
--- /dev/null
+++ b/drivers/block/zram/kcompressd.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2024 MediaTek Inc.
+ */
+
+#ifndef _KCOMPRESSD_H_
+#define _KCOMPRESSD_H_
+
+#include
+#include
+#include
+
+typedef void (*compress_callback)(void *mem, struct bio *bio);
+
+struct kcompress {
+	struct task_struct *kcompressd;
+	wait_queue_head_t kcompressd_wait;
+	struct kfifo write_fifo;
+	atomic_t running;
+};
+
+int kcompressd_enabled(void);
+int schedule_bio_write(void *mem, struct bio *bio, compress_callback cb);
+#endif
+
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 2e1a70f2f4bd..bcd63ecb6ff2 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -35,6 +35,7 @@
 #include
 #include
 
+#include "kcompressd.h"
 #include "zram_drv.h"
 
 static DEFINE_IDR(zram_index_idr);
@@ -2240,6 +2241,15 @@ static void zram_bio_write(struct zram *zram, struct bio *bio)
 	bio_endio(bio);
 }
 
+#if IS_ENABLED(CONFIG_KCOMPRESSD)
+static void zram_bio_write_callback(void *mem, struct bio *bio)
+{
+	struct zram *zram = (struct zram *)mem;
+
+	zram_bio_write(zram, bio);
+}
+#endif
+
 /*
  * Handler function for all zram I/O requests.
  */
@@ -2252,6 +2262,10 @@ static void zram_submit_bio(struct bio *bio)
 		zram_bio_read(zram, bio);
 		break;
 	case REQ_OP_WRITE:
+#if IS_ENABLED(CONFIG_KCOMPRESSD)
+		if (kcompressd_enabled() && !schedule_bio_write(zram, bio, zram_bio_write_callback))
+			break;
+#endif
 		zram_bio_write(zram, bio);
 		break;
 	case REQ_OP_DISCARD:
@@ -2535,9 +2549,11 @@ static int zram_add(void)
 #if ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE
 		.max_write_zeroes_sectors	= UINT_MAX,
 #endif
-		.features			= BLK_FEAT_STABLE_WRITES |
-						  BLK_FEAT_READ_SYNCHRONOUS |
-						  BLK_FEAT_WRITE_SYNCHRONOUS,
+		.features			= BLK_FEAT_STABLE_WRITES
+						| BLK_FEAT_READ_SYNCHRONOUS
+#if !IS_ENABLED(CONFIG_KCOMPRESSD)
+						| BLK_FEAT_WRITE_SYNCHRONOUS,
+#endif
 	};
 	struct zram *zram;
 	int ret, device_id;