From patchwork Sat Aug 1 06:58:10 2015
X-Patchwork-Submitter: Ming Lin
X-Patchwork-Id: 6921911
X-Patchwork-Delegate: snitzer@redhat.com
Message-ID: <1438412290.26596.14.camel@hasee>
From: Ming Lin
To: Mike Snitzer
Cc: Jens Axboe, Christoph Hellwig, Al Viro, Ming Lei, Neil Brown,
 Alasdair Kergon, Kent Overstreet, Dongsu Park, Lars Ellenberg,
 Jiri Kosina, Geoff Levand, Jim Paris, Joshua Morris, Philip Kelleher,
 Minchan Kim, Nitin Gupta, Oleg Drokin, Andreas Dilger, lkml,
 dm-devel@redhat.com, drbd-user@lists.linbit.com
Date: Fri, 31 Jul 2015 23:58:10 -0700
In-Reply-To: <20150731213831.GA16464@redhat.com>
References: <1436168690-32102-1-git-send-email-mlin@kernel.org>
 <20150731192337.GA8907@redhat.com>
 <20150731213831.GA16464@redhat.com>
Subject: Re: [dm-devel] [PATCH v5 01/11] block: make generic_make_request
 handle arbitrarily sized bios
On Fri, 2015-07-31 at 17:38 -0400, Mike Snitzer wrote:
> On Fri, Jul 31 2015 at  5:19pm -0400,
> Ming Lin wrote:
>
> > On Fri, Jul 31, 2015 at 12:23 PM, Mike Snitzer wrote:
> > > On Mon, Jul 06 2015 at  3:44P -0400,
> > > Ming Lin wrote:
> > >
> > >> From: Kent Overstreet
> > >>
> > >> The way the block layer is currently written, it goes to great lengths
> > >> to avoid having to split bios; upper layer code (such as bio_add_page())
> > >> checks what the underlying device can handle and tries to always create
> > >> bios that don't need to be split.
> > >>
> > >> But this approach becomes unwieldy and eventually breaks down with
> > >> stacked devices and devices with dynamic limits, and it adds a lot of
> > >> complexity.  If the block layer could split bios as needed, we could
> > >> eliminate a lot of complexity elsewhere - particularly in stacked
> > >> drivers.  Code that creates bios can then create whatever size bios are
> > >> convenient, and more importantly stacked drivers don't have to deal with
> > >> both their own bio size limitations and the limitations of the
> > >> (potentially multiple) devices underneath them.  In the future this will
> > >> let us delete merge_bvec_fn and a bunch of other code.
> > >>
> > >> We do this by adding calls to blk_queue_split() to the various
> > >> make_request functions that need it - a few can already handle arbitrary
> > >> size bios.  Note that we add the call _after_ any call to
> > >> blk_queue_bounce(); this means that blk_queue_split() and
> > >> blk_recalc_rq_segments() don't need to be concerned with bouncing
> > >> affecting segment merging.
> > >>
> > >> Some make_request_fn() callbacks were simple enough to audit and verify
> > >> they don't need blk_queue_split() calls.  The skipped ones are:
> > >>
> > >>  * nfhd_make_request (arch/m68k/emu/nfblock.c)
> > >>  * axon_ram_make_request (arch/powerpc/sysdev/axonram.c)
> > >>  * simdisk_make_request (arch/xtensa/platforms/iss/simdisk.c)
> > >>  * brd_make_request (ramdisk - drivers/block/brd.c)
> > >>  * mtip_submit_request (drivers/block/mtip32xx/mtip32xx.c)
> > >>  * loop_make_request
> > >>  * null_queue_bio
> > >>  * bcache's make_request fns
> > >>
> > >> Some others are almost certainly safe to remove now, but will be left
> > >> for future patches.
> > >>
> > >> Cc: Jens Axboe
> > >> Cc: Christoph Hellwig
> > >> Cc: Al Viro
> > >> Cc: Ming Lei
> > >> Cc: Neil Brown
> > >> Cc: Alasdair Kergon
> > >> Cc: Mike Snitzer
> > >> Cc: dm-devel@redhat.com
> > >> Cc: Lars Ellenberg
> > >> Cc: drbd-user@lists.linbit.com
> > >> Cc: Jiri Kosina
> > >> Cc: Geoff Levand
> > >> Cc: Jim Paris
> > >> Cc: Joshua Morris
> > >> Cc: Philip Kelleher
> > >> Cc: Minchan Kim
> > >> Cc: Nitin Gupta
> > >> Cc: Oleg Drokin
> > >> Cc: Andreas Dilger
> > >> Acked-by: NeilBrown (for the 'md/md.c' bits)
> > >> Signed-off-by: Kent Overstreet
> > >> [dpark: skip more mq-based drivers, resolve merge conflicts, etc.]
> > >> Signed-off-by: Dongsu Park
> > >> Signed-off-by: Ming Lin
> > > ...
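For anyone reading without the full series handy: below is a minimal sketch of
the conversion pattern the quoted commit message describes, written against the
v5 interface blk_queue_split(q, &bio, bs).  The function and helper names
(example_make_request, example_handle_bio) are invented for illustration; this
is not code from the patchset.

#include <linux/blkdev.h>

/* Invented, driver-specific I/O path; declared only so the sketch is
 * syntactically complete. */
static void example_handle_bio(struct request_queue *q, struct bio *bio);

/*
 * Hypothetical make_request_fn showing the pattern: bounce first, then
 * let the block core split the bio to fit this queue's limits, so the
 * driver never sees a bio larger than it can handle.
 */
static void example_make_request(struct request_queue *q, struct bio *bio)
{
	/* Split _after_ bouncing, so blk_queue_split() and segment
	 * accounting never have to worry about bounced pages. */
	blk_queue_bounce(q, &bio);
	blk_queue_split(q, &bio, q->bio_split);

	example_handle_bio(q, bio);	/* driver-specific processing */
}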
> > >> diff --git a/block/blk-merge.c b/block/blk-merge.c
> > >> index 30a0d9f..3707f30 100644
> > >> --- a/block/blk-merge.c
> > >> +++ b/block/blk-merge.c
> > >> @@ -9,12 +9,158 @@
> > >>
> > >>  #include "blk.h"
> > >>
> > >> +static struct bio *blk_bio_discard_split(struct request_queue *q,
> > >> +					 struct bio *bio,
> > >> +					 struct bio_set *bs)
> > >> +{
> > >> +	unsigned int max_discard_sectors, granularity;
> > >> +	int alignment;
> > >> +	sector_t tmp;
> > >> +	unsigned split_sectors;
> > >> +
> > >> +	/* Zero-sector (unknown) and one-sector granularities are the same. */
> > >> +	granularity = max(q->limits.discard_granularity >> 9, 1U);
> > >> +
> > >> +	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
> > >> +	max_discard_sectors -= max_discard_sectors % granularity;
> > >> +
> > >> +	if (unlikely(!max_discard_sectors)) {
> > >> +		/* XXX: warn */
> > >> +		return NULL;
> > >> +	}
> > >> +
> > >> +	if (bio_sectors(bio) <= max_discard_sectors)
> > >> +		return NULL;
> > >> +
> > >> +	split_sectors = max_discard_sectors;
> > >> +
> > >> +	/*
> > >> +	 * If the next starting sector would be misaligned, stop the discard at
> > >> +	 * the previous aligned sector.
> > >> +	 */
> > >> +	alignment = (q->limits.discard_alignment >> 9) % granularity;
> > >> +
> > >> +	tmp = bio->bi_iter.bi_sector + split_sectors - alignment;
> > >> +	tmp = sector_div(tmp, granularity);
> > >> +
> > >> +	if (split_sectors > tmp)
> > >> +		split_sectors -= tmp;
> > >> +
> > >> +	return bio_split(bio, split_sectors, GFP_NOIO, bs);
> > >> +}
> > >
> > > This code to stop the discard at the previous aligned sector could be
> > > the reason why I have 2 device-mapper-test-suite tests in the
> > > 'thin-provisioning' testsuite failing due to this patchset:
> >
> > I'm setting up the testsuite to debug.
>
> OK, once setup, to run the 2 tests in question directly you'd do
> something like:
>
> dmtest run --suite thin-provisioning -n discard_a_fragmented_device
>
> dmtest run --suite thin-provisioning -n discard_fully_provisioned_device_benchmark
>
> Again, these tests pass without this patchset.

It's caused by patch 4.  When the discard size is >= 4G,
bio->bi_iter.bi_size (an unsigned int byte count) overflows.

Below is the new patch.

Christoph,
Could you also help to review it?

We still do the "misaligned" check in blkdev_issue_discard(), so the
same code in blk_bio_discard_split() was removed.  Please see:
https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/commit/?h=block-generic-req&id=dcc5d9c41

I have updated both patches 1 & 4 on my tree.
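As background on that overflow (an illustration, not part of the patch):
bi_iter.bi_size is an unsigned int holding the bio's size in bytes, so a
single bio describing 4 GiB or more wraps around to a small value.  A
standalone userspace sketch of the arithmetic:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* bi_iter.bi_size is an unsigned int holding a byte count. */
	uint64_t discard_bytes = 4ULL << 30;              /* a 4 GiB discard */
	uint32_t bi_size = (uint32_t)discard_bytes;       /* truncates to 0  */

	/* UINT_MAX >> 9 is the largest sector count whose byte count
	 * still fits in 32 bits: (UINT_MAX >> 9) << 9 == 0xFFFFFE00. */
	uint32_t max_sectors = UINT32_MAX >> 9;           /* 8388607 sectors */

	printf("bi_size after a 4 GiB discard: %u\n", bi_size);     /* 0 */
	printf("max sectors per bio: %u (%llu bytes)\n",
	       max_sectors, (unsigned long long)max_sectors << 9);
	return 0;
}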
commit 9607f737de9c4ca1a81655c320a61c287bf77bf5
Author: Ming Lin
Date:   Fri May 22 00:46:56 2015 -0700

    block: remove split code in blkdev_issue_discard

    The split code in blkdev_issue_discard() can go away now that any
    driver that cares does the split; all we have to do is make sure the
    bio size doesn't overflow.

    Signed-off-by: Ming Lin
---
 block/blk-lib.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 7688ee3..b9e2fca 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -43,7 +43,7 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	DECLARE_COMPLETION_ONSTACK(wait);
 	struct request_queue *q = bdev_get_queue(bdev);
 	int type = REQ_WRITE | REQ_DISCARD;
-	unsigned int max_discard_sectors, granularity;
+	unsigned int granularity;
 	int alignment;
 	struct bio_batch bb;
 	struct bio *bio;
@@ -60,17 +60,6 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	granularity = max(q->limits.discard_granularity >> 9, 1U);
 	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
 
-	/*
-	 * Ensure that max_discard_sectors is of the proper
-	 * granularity, so that requests stay aligned after a split.
-	 */
-	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
-	max_discard_sectors -= max_discard_sectors % granularity;
-	if (unlikely(!max_discard_sectors)) {
-		/* Avoid infinite loop below. Being cautious never hurts. */
-		return -EOPNOTSUPP;
-	}
-
 	if (flags & BLKDEV_DISCARD_SECURE) {
 		if (!blk_queue_secdiscard(q))
 			return -EOPNOTSUPP;
@@ -92,7 +81,8 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 			break;
 		}
 
-		req_sects = min_t(sector_t, nr_sects, max_discard_sectors);
+		/* Make sure bi_size doesn't overflow */
+		req_sects = min_t(sector_t, nr_sects, UINT_MAX >> 9);
 
 		/*
 		 * If splitting a request, and the next starting sector would be
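The hunk above is truncated mid-comment, right where the retained
"misaligned" check begins.  For reference, the alignment arithmetic it
shares with the earlier blk_bio_discard_split() hunk can be checked in
isolation.  A standalone userspace sketch with invented example values,
modelling sector_div(tmp, g) as a plain 64-bit tmp % g:

#include <stdint.h>
#include <stdio.h>

/*
 * Mirror of the split-point computation quoted earlier from
 * blk_bio_discard_split(): if the next starting sector would be
 * misaligned, stop the discard at the previous aligned sector.
 */
int main(void)
{
	uint64_t bi_sector     = 1000;  /* bio start, 512-byte sectors     */
	uint32_t granularity   = 128;   /* discard granularity, in sectors */
	uint32_t alignment     = 0;     /* discard_alignment % granularity */
	uint32_t split_sectors = 8192;  /* candidate split size            */

	/* How far past an aligned boundary would the next bio start? */
	uint64_t rem = (bi_sector + split_sectors - alignment) % granularity;

	if (split_sectors > rem)
		split_sectors -= (uint32_t)rem;

	/* 1000 + 8088 = 9088 = 71 * 128: the next sub-bio starts aligned */
	printf("split at %u sectors; next bio starts at sector %llu\n",
	       split_sectors,
	       (unsigned long long)(bi_sector + split_sectors));
	return 0;
}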