From patchwork Sun Dec 7 06:45:19 2014
X-Patchwork-Submitter: Eric Wheeler
X-Patchwork-Id: 5454701
X-Patchwork-Delegate: snitzer@redhat.com
From: Eric Wheeler
Date: Sat, 6 Dec 2014 22:45:19 -0800 (PST)
To: LVM2 development
Cc: Jens Axboe, dm-devel@redhat.com, ejt@redhat.com
Subject: Re: [dm-devel] [lvm-devel] dm thin: optimize away writing all zeroes
 to unprovisioned blocks

Introduce bio_is_zero_filled() and use it to optimize away writing all
zeroes to unprovisioned blocks. Subsequent reads to the associated
unprovisioned blocks will be zero filled.

bio_is_zero_filled() now works with unaligned bvec data.

Signed-off-by: Eric Wheeler
Cc: Mike Snitzer
Cc: Jens Axboe
---

> Also, attached is the patch that supports uintptr_t word sized 0-checks.

Re-sending. I think I've fixed my MUA's tendency to break patches.

 block/bio.c          | 67 ++++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/md/dm-thin.c | 10 +++++++
 include/linux/bio.h  |  1 +
 3 files changed, 78 insertions(+), 0 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 8c2e55e..9100d35 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -511,6 +511,73 @@ void zero_fill_bio(struct bio *bio)
 }
 EXPORT_SYMBOL(zero_fill_bio);
 
+bool bio_is_zero_filled(struct bio *bio)
+{
+	unsigned i, count;
+	unsigned long flags;
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	bio_for_each_segment(bv, bio, iter) {
+		char *data = bvec_kmap_irq(&bv, &flags);
+		char *p = data;
+		uintptr_t *parch;
+		int left = bv.bv_len;
+
+		if (unlikely( data == NULL ))
+			continue;
+
+
+		/* check unaligned bytes at the beginning of p */
+		if (unlikely( ( (uintptr_t)p & (sizeof(uintptr_t)-1) ) != 0 )) {
+			count = sizeof(uintptr_t) - ( (uintptr_t)p & (sizeof(uintptr_t)-1) );
+			for (i = 0; i < count; i++) {
+				if (*p) {
+					bvec_kunmap_irq(data, &flags);
+					return false;
+				}
+				p++;
+			}
+			left -= count;
+		}
+
+		/* we should be word aligned now */
+		BUG_ON(unlikely( ((uintptr_t)p & (sizeof(uintptr_t)-1) ) != 0 ));
+
+		/* now check in word-sized chunks */
+		parch = (uintptr_t*)p;
+		count = left >> ilog2(sizeof(uintptr_t)); /* count = left / sizeof(uintptr_t) */;
+		for (i = 0; i < count; i++) {
+			if (*parch) {
+				bvec_kunmap_irq(data, &flags);
+				return false;
+			}
+			parch++;
+		}
+		left -= count << ilog2(sizeof(uintptr_t)); /* left -= count*sizeof(uintptr_t) */
+
+		/* check remaining unaligned values at the end */
+		p = (char*)parch;
+		if (unlikely(left > 0))
+		{
+			for (i = 0; i < left; i++) {
+				if (*p) {
+					bvec_kunmap_irq(data, &flags);
+					return false;
+				}
+				p++;
+			}
+			left = 0;
+		}
+
+		bvec_kunmap_irq(data, &flags);
+		BUG_ON(unlikely( left > 0 ));
+		BUG_ON(unlikely( data+bv.bv_len != p ));
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(bio_is_zero_filled);
+
 /**
  * bio_put - release a reference to a bio
  * @bio: bio to release reference to
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index fc9c848..6a0c2c0 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1258,6 +1258,16 @@ static void provision_block(struct thin_c *tc, struct bio *bio, dm_block_t block
 		return;
 	}
 
+	/*
+	 * Optimize away writes of all zeroes, subsequent reads to
+	 * associated unprovisioned blocks will be zero filled.
+	 */
+	if (unlikely(bio_is_zero_filled(bio))) {
+		cell_defer_no_holder(tc, cell);
+		bio_endio(bio, 0);
+		return;
+	}
+
 	r = alloc_data_block(tc, &data_block);
 	switch (r) {
 	case 0:
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 5a64576..abb46f7 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -419,6 +419,7 @@ extern struct bio *bio_copy_user_iov(struct request_queue *, int, int, gfp_t);
 extern int bio_uncopy_user(struct bio *);
 void zero_fill_bio(struct bio *bio);
+bool bio_is_zero_filled(struct bio *bio);
 extern struct bio_vec *bvec_alloc(gfp_t, int, unsigned long *, mempool_t *);
 extern void bvec_free(mempool_t *, struct bio_vec *, unsigned int);
 extern unsigned int bvec_nr_vecs(unsigned short idx);
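
For anyone who wants to poke at the scanning strategy outside the kernel, here
is a minimal userspace sketch of the same three-phase check the patch performs
per bvec segment: byte-wise over the unaligned head, uintptr_t-wide over the
aligned middle, byte-wise over the tail. buf_is_zero_filled() and the little
test in main() are hypothetical illustration only; they are not part of the
patch and carry none of the bvec mapping or irq-flag handling.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Userspace analogue of the per-segment scan in bio_is_zero_filled(). */
static bool buf_is_zero_filled(const char *data, size_t len)
{
	const char *p = data;
	const uintptr_t *pword;
	size_t count;
	size_t left = len;

	/* byte-wise until p is aligned to sizeof(uintptr_t) */
	while (left && ((uintptr_t)p & (sizeof(uintptr_t) - 1))) {
		if (*p)
			return false;
		p++;
		left--;
	}

	/* word-wise over the aligned middle of the buffer */
	pword = (const uintptr_t *)p;
	for (count = left / sizeof(uintptr_t); count; count--) {
		if (*pword)
			return false;
		pword++;
	}
	left &= sizeof(uintptr_t) - 1;

	/* byte-wise over any unaligned tail */
	for (p = (const char *)pword; left; left--) {
		if (*p)
			return false;
		p++;
	}

	return true;
}

int main(void)
{
	char buf[4096];

	memset(buf, 0, sizeof(buf));

	/* deliberately misaligned start, much like a nonzero bv_offset */
	printf("zeroed buffer:    %d\n",
	       buf_is_zero_filled(buf + 3, sizeof(buf) - 7));

	buf[100] = 1;
	printf("one nonzero byte: %d\n",
	       buf_is_zero_filled(buf + 3, sizeof(buf) - 7));

	return 0;
}

Nothing clever here; it just mirrors the three phases of the kernel loop so the
head/tail alignment handling can be exercised with arbitrary offsets and lengths.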