[v1,5/6] zram: remove zram_rw_page

Message ID 1502175024-28338-6-git-send-email-minchan@kernel.org (mailing list archive)
State New, archived

Commit Message

Minchan Kim Aug. 8, 2017, 6:50 a.m. UTC
With on-stack bio, the rw_page interface doesn't provide a clear
performance benefit for zram and carries a maintenance burden, so remove
the last user so that rw_page can be removed completely.

Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/block/zram/zram_drv.c | 52 -------------------------------------------
 1 file changed, 52 deletions(-)
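
For readers who haven't followed the earlier patches in this series: the
on-stack bio approach replaces the rw_page hook with a synchronous bio
built on the caller's stack. Below is a minimal sketch against the block
API of this era (~v4.13); the function name and structure are
illustrative, not taken from the patchset:

static int read_page_sync(struct block_device *bdev, sector_t sector,
			  struct page *page)
{
	struct bio_vec bvec;
	struct bio bio;

	/* No allocation: the bio and its single bvec live on the stack. */
	bio_init(&bio, &bvec, 1);
	bio.bi_bdev = bdev;	/* pre-v4.14; later kernels use bio_set_dev() */
	bio.bi_iter.bi_sector = sector;
	bio.bi_opf = REQ_OP_READ;
	bio_add_page(&bio, page, PAGE_SIZE, 0);

	/* Waits for completion in place, so no bi_end_io callback is needed. */
	return submit_bio_wait(&bio);
}

Because submit_bio_wait() sleeps until the I/O finishes, this pattern
gives the same synchronous semantics rw_page provided, without the
separate driver entry point.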

Comments

Sergey Senozhatsky Aug. 8, 2017, 7:02 a.m. UTC | #1
On (08/08/17 15:50), Minchan Kim wrote:
> With on-stack bio, the rw_page interface doesn't provide a clear
> performance benefit for zram and carries a maintenance burden, so remove
> the last user so that rw_page can be removed completely.

OK, never really liked it; I think we had that conversation before.

as far as I remember, zram_rw_page() was the reason we had to do some
tricks with init_lock to make lockdep happy. maybe now we can "simplify"
things back.


> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

	-ss
Minchan Kim Aug. 8, 2017, 8:13 a.m. UTC | #2
Hi Sergey,

On Tue, Aug 08, 2017 at 04:02:26PM +0900, Sergey Senozhatsky wrote:
> On (08/08/17 15:50), Minchan Kim wrote:
> > With on-stack bio, the rw_page interface doesn't provide a clear
> > performance benefit for zram and carries a maintenance burden, so remove
> > the last user so that rw_page can be removed completely.
> 
> OK, never really liked it; I think we had that conversation before.
> 
> as far as I remember, zram_rw_page() was the reason we had to do some
> tricks with init_lock to make lockdep happy. maybe now we can "simplify"
> things back.

I cannot remember. Blame my brain. ;-)

Anyway, making things simpler is always welcome.
Could you send a patch once this patchset settles down?

> 
> 
> > Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

Thanks for the review!
Sergey Senozhatsky Aug. 8, 2017, 8:23 a.m. UTC | #3
Hello Minchan,

On (08/08/17 17:13), Minchan Kim wrote:
> Hi Sergey,
> 
> On Tue, Aug 08, 2017 at 04:02:26PM +0900, Sergey Senozhatsky wrote:
> > On (08/08/17 15:50), Minchan Kim wrote:
> > > With on-stack bio, the rw_page interface doesn't provide a clear
> > > performance benefit for zram and carries a maintenance burden, so
> > > remove the last user so that rw_page can be removed completely.
> > 
> > OK, never really liked it; I think we had that conversation before.
> > 
> > as far as I remember, zram_rw_page() was the reason we had to do some
> > tricks with init_lock to make lockdep happy. maybe now we can "simplify"
> > things back.
> 
> I cannot remember. Blame my brain. ;-)

no worries. I didn't remember it clearly either, hence the "maybe" part.

commit 08eee69fcf6baea543a2b4d2a2fcba0e61aa3160
Author: Minchan Kim

    zram: remove init_lock in zram_make_request
    
    An admin could reset zram while I/O operations are in flight, so we
    have used zram->init_lock as a read-side lock in the I/O path to
    prevent sudden zram meta freeing.

    However, the init_lock is really troublesome.  We can't call
    zram_meta_alloc under init_lock due to a lockdep splat, because
    zram_rw_page is one of the functions in the reclaim path and takes it
    as a read lock while other places in process context take it as a
    write lock.  So we have allocated outside the lock to avoid the
    lockdep warning, but that's not good for readability, and finally I
    met another lockdep splat between init_lock and cpu_hotplug, from
    kmem_cache_destroy during zsmalloc compaction.  :(

    Yes, ideally we would remove zram's horrible init_lock from the rw
    path.  This patch removes it from the rw path and instead adds an
    atomic refcount for meta lifetime management and a completion to free
    meta in process context.  It's important to free meta in process
    context because some of the resource destruction needs a mutex, which
    could already be held if we released the resource in reclaim context,
    so it would deadlock again.

    As a bonus, we can remove the init_done check in the rw path because
    zram_meta_get takes over that role.

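For reference, the scheme that commit describes boils down to the
following pattern. This is a simplified sketch with field names
abbreviated from the real driver code, not a verbatim excerpt:

/* I/O path: pin meta before use; fail if a reset already dropped it. */
static bool zram_meta_get(struct zram *zram)
{
	return atomic_inc_not_zero(&zram->refcount);
}

static void zram_meta_put(struct zram *zram)
{
	/* The final put wakes the process-context resetter below. */
	if (atomic_dec_and_test(&zram->refcount))
		complete(&zram->io_done);
}

/* Reset path (process context): drop the initial reference, wait for
 * in-flight I/O to drain, then free meta with mutexes safely usable. */
static void zram_reset_wait(struct zram *zram)
{
	zram_meta_put(zram);
	wait_for_completion(&zram->io_done);
	/* meta can be freed here */
}
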
> Anyway, making things simpler is always welcome.
> Could you send a patch once this patchset settles down?

well, if it improves anything after all :)

	-ss
Matthew Wilcox Aug. 8, 2017, 3:48 p.m. UTC | #4
On Tue, Aug 08, 2017 at 05:23:50PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> 
> On (08/08/17 17:13), Minchan Kim wrote:
> > Hi Sergey,
> > 
> > On Tue, Aug 08, 2017 at 04:02:26PM +0900, Sergey Senozhatsky wrote:
> > > On (08/08/17 15:50), Minchan Kim wrote:
> > > > With on-stack bio, the rw_page interface doesn't provide a clear
> > > > performance benefit for zram and carries a maintenance burden, so
> > > > remove the last user so that rw_page can be removed completely.
> > > 
> > > OK, never really liked it; I think we had that conversation before.
> > > 
> > > as far as I remember, zram_rw_page() was the reason we had to do some
> > > tricks with init_lock to make lockdep happy. maybe now we can "simplify"
> > > things back.
> > 
> > I cannot remember. Blame my brain. ;-)
> 
> no worries. I didn't remember it clearly either, hence the "maybe" part.
> 
> commit 08eee69fcf6baea543a2b4d2a2fcba0e61aa3160
> Author: Minchan Kim
> 
>     zram: remove init_lock in zram_make_request
>     
>     An admin could reset zram while I/O operations are in flight, so we
>     have used zram->init_lock as a read-side lock in the I/O path to
>     prevent sudden zram meta freeing.
> 
>     However, the init_lock is really troublesome.  We can't call
>     zram_meta_alloc under init_lock due to a lockdep splat, because
>     zram_rw_page is one of the functions in the reclaim path and takes it
>     as a read lock while other places in process context take it as a
>     write lock.  So we have allocated outside the lock to avoid the
>     lockdep warning, but that's not good for readability, and finally I
>     met another lockdep splat between init_lock and cpu_hotplug, from
>     kmem_cache_destroy during zsmalloc compaction.  :(

I don't think this patch is going to change anything with respect to the
use of init_lock.  You're still going to be called in the reclaim path,
no longer through rw_page, but through the bio path instead.
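
To make that concrete: the swap helpers of this era tried the rw_page
hook first and fell back to a regular bio. A rough sketch of the read
side, simplified from mm/page_io.c around v4.13 (not verbatim; the real
function carries extra details such as polling support):

int swap_readpage(struct page *page)
{
	struct swap_info_struct *sis = page_swap_info(page);
	struct bio *bio;
	int ret;

	/* Fast path: let the driver's rw_page handle the page, if present. */
	ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);
	if (!ret)
		return 0;

	/*
	 * Fallback: the ordinary bio path. With rw_page gone from zram,
	 * this branch always runs, still from reclaim context.
	 */
	bio = bio_alloc(GFP_KERNEL, 1);
	bio->bi_bdev = sis->bdev;
	bio->bi_iter.bi_sector = swap_page_sector(page);
	bio->bi_end_io = end_swap_bio_read;
	bio_add_page(bio, page, PAGE_SIZE, 0);
	bio_set_op_attrs(bio, REQ_OP_READ, 0);
	submit_bio(bio);
	return 0;
}

So the locking context Matthew mentions is unchanged: the I/O is still
issued from reclaim, just always through the bio branch.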

Patch

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3eda88d0ca95..9620163308fa 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1268,57 +1268,6 @@  static void zram_slot_free_notify(struct block_device *bdev,
 	atomic64_inc(&zram->stats.notify_free);
 }
 
-static int zram_rw_page(struct block_device *bdev, sector_t sector,
-		       struct page *page, bool is_write)
-{
-	int offset, ret;
-	u32 index;
-	struct zram *zram;
-	struct bio_vec bv;
-
-	if (PageTransHuge(page))
-		return -ENOTSUPP;
-	zram = bdev->bd_disk->private_data;
-
-	if (!valid_io_request(zram, sector, PAGE_SIZE)) {
-		atomic64_inc(&zram->stats.invalid_io);
-		ret = -EINVAL;
-		goto out;
-	}
-
-	index = sector >> SECTORS_PER_PAGE_SHIFT;
-	offset = (sector & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
-
-	bv.bv_page = page;
-	bv.bv_len = PAGE_SIZE;
-	bv.bv_offset = 0;
-
-	ret = zram_bvec_rw(zram, &bv, index, offset, is_write, NULL);
-out:
-	/*
-	 * If I/O fails, just return error(ie, non-zero) without
-	 * calling page_endio.
-	 * It causes resubmit the I/O with bio request by upper functions
-	 * of rw_page(e.g., swap_readpage, __swap_writepage) and
-	 * bio->bi_end_io does things to handle the error
-	 * (e.g., SetPageError, set_page_dirty and extra works).
-	 */
-	if (unlikely(ret < 0))
-		return ret;
-
-	switch (ret) {
-	case 0:
-		page_endio(page, is_write, 0);
-		break;
-	case 1:
-		ret = 0;
-		break;
-	default:
-		WARN_ON(1);
-	}
-	return ret;
-}
-
 static void zram_reset_device(struct zram *zram)
 {
 	struct zcomp *comp;
@@ -1460,7 +1409,6 @@  static int zram_open(struct block_device *bdev, fmode_t mode)
 static const struct block_device_operations zram_devops = {
 	.open = zram_open,
 	.swap_slot_free_notify = zram_slot_free_notify,
-	.rw_page = zram_rw_page,
 	.owner = THIS_MODULE
 };