From patchwork Fri May 22 07:51:03 2015
X-Patchwork-Id: 6472811
Message-ID: <555EDFE7.2020304@kernel.org>
Date: Fri, 22 May 2015 00:51:03 -0700
From: Ming Lin
To: Christoph Hellwig, Jeff Moyer
Cc: Mike Snitzer, Ming Lei, dm-devel@redhat.com, Joshua Morris,
 Alasdair Kergon, Lars Ellenberg, Philip Kelleher, Christoph Hellwig,
 Kent Overstreet, Nitin Gupta, Oleg Drokin, Al Viro, Jens Axboe,
 Andreas Dilger, Geoff Levand, Jiri Kosina, linux-kernel@vger.kernel.org,
 Jim Paris, Minchan Kim, Dongsu Park, drbd-user@lists.linbit.com
References: <1430980461-5235-1-git-send-email-mlin@kernel.org>
 <1430980461-5235-2-git-send-email-mlin@kernel.org>
 <20150518172259.GA8116@lst.de>
In-Reply-To: <20150518172259.GA8116@lst.de>
Subject: Re: [dm-devel] [PATCH v3 01/11] block: make generic_make_request
 handle arbitrarily sized bios
On 05/18/2015 10:22 AM, Christoph Hellwig wrote:
> On Mon, May 18, 2015 at 12:52:03PM -0400, Jeff Moyer wrote:
>>> +	return bio_split(bio, split_sectors, GFP_NOIO, bs);
>>
>> Much of this function is cut-n-paste from blk-lib.c.  Is there any way
>> to factor it out?
>
> The code in blk-lib.c can go away now that any driver that cares
> does the split.
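
For context, once the submission path owns the split, the logic Jeff
flagged lives in exactly one place. Below is a minimal sketch of such a
helper, pieced together from the quoted bio_split() call and the
granularity code the patch below removes. The name, the signature, and
the omission of bdev_discard_alignment() handling are simplifications
for illustration, not the exact code from this series:

static struct bio *blk_bio_discard_split(struct request_queue *q,
                                         struct bio *bio, struct bio_set *bs)
{
        unsigned int max_discard_sectors, granularity;

        /* Zero-sector (unknown) and one-sector granularities are the same. */
        granularity = max(q->limits.discard_granularity >> 9, 1U);

        /*
         * Keep the split a multiple of the granularity so the pieces
         * stay aligned, as the loop in blk-lib.c used to guarantee.
         */
        max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
        max_discard_sectors -= max_discard_sectors % granularity;

        if (bio_sectors(bio) <= max_discard_sectors)
                return NULL;    /* fits in one request, no split needed */

        /* Chop off an aligned front piece; the remainder is resubmitted. */
        return bio_split(bio, max_discard_sectors, GFP_NOIO, bs);
}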
How about the patch below?

From 23aec71f3e44a21aece46f7939715e022b7d3daa Mon Sep 17 00:00:00 2001
From: Ming Lin
Date: Fri, 22 May 2015 00:46:56 -0700
Subject: [PATCH] block: remove split code in blkdev_issue_discard

The split code in blkdev_issue_discard() can go away now that any
driver that cares does the split.

Signed-off-by: Ming Lin
---
 block/blk-lib.c | 73 +++++++++++----------------------------------------
 1 file changed, 14 insertions(+), 59 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 7688ee3..3bf3c4a 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -43,34 +43,17 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	DECLARE_COMPLETION_ONSTACK(wait);
 	struct request_queue *q = bdev_get_queue(bdev);
 	int type = REQ_WRITE | REQ_DISCARD;
-	unsigned int max_discard_sectors, granularity;
-	int alignment;
 	struct bio_batch bb;
 	struct bio *bio;
 	int ret = 0;
 	struct blk_plug plug;
 
-	if (!q)
+	if (!q || !nr_sects)
 		return -ENXIO;
 
 	if (!blk_queue_discard(q))
 		return -EOPNOTSUPP;
 
-	/* Zero-sector (unknown) and one-sector granularities are the same. */
-	granularity = max(q->limits.discard_granularity >> 9, 1U);
-	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
-
-	/*
-	 * Ensure that max_discard_sectors is of the proper
-	 * granularity, so that requests stay aligned after a split.
-	 */
-	max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9);
-	max_discard_sectors -= max_discard_sectors % granularity;
-	if (unlikely(!max_discard_sectors)) {
-		/* Avoid infinite loop below.  Being cautious never hurts. */
-		return -EOPNOTSUPP;
-	}
-
 	if (flags & BLKDEV_DISCARD_SECURE) {
 		if (!blk_queue_secdiscard(q))
 			return -EOPNOTSUPP;
@@ -82,52 +65,24 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 	bb.wait = &wait;
 
 	blk_start_plug(&plug);
-	while (nr_sects) {
-		unsigned int req_sects;
-		sector_t end_sect, tmp;
-
-		bio = bio_alloc(gfp_mask, 1);
-		if (!bio) {
-			ret = -ENOMEM;
-			break;
-		}
+	bio = bio_alloc(gfp_mask, 1);
+	if (!bio) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
-		req_sects = min_t(sector_t, nr_sects, max_discard_sectors);
-
-		/*
-		 * If splitting a request, and the next starting sector would be
-		 * misaligned, stop the discard at the previous aligned sector.
-		 */
-		end_sect = sector + req_sects;
-		tmp = end_sect;
-		if (req_sects < nr_sects &&
-		    sector_div(tmp, granularity) != alignment) {
-			end_sect = end_sect - alignment;
-			sector_div(end_sect, granularity);
-			end_sect = end_sect * granularity + alignment;
-			req_sects = end_sect - sector;
-		}
+	bio->bi_iter.bi_sector = sector;
+	bio->bi_end_io = bio_batch_end_io;
+	bio->bi_bdev = bdev;
+	bio->bi_private = &bb;
 
-		bio->bi_iter.bi_sector = sector;
-		bio->bi_end_io = bio_batch_end_io;
-		bio->bi_bdev = bdev;
-		bio->bi_private = &bb;
+	bio->bi_iter.bi_size = nr_sects << 9;
 
-		bio->bi_iter.bi_size = req_sects << 9;
-		nr_sects -= req_sects;
-		sector = end_sect;
+	atomic_inc(&bb.done);
+	submit_bio(type, bio);
 
-		atomic_inc(&bb.done);
-		submit_bio(type, bio);
-
-		/*
-		 * We can loop for a long time in here, if someone does
-		 * full device discards (like mkfs).  Be nice and allow
-		 * us to schedule out to avoid softlocking if preempt
-		 * is disabled.
-		 */
-		cond_resched();
-	}
+out:
 	blk_finish_plug(&plug);
 
 	/* Wait for bios in-flight */
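
With the loop gone, a discard of any size goes down as a single bio.
A hypothetical caller (device and sizes purely illustrative), say
discarding the first 8 MiB of a device, would just do:

	/* 16384 sectors * 512 bytes = 8 MiB */
	int err = blkdev_issue_discard(bdev, 0, 16384, GFP_KERNEL, 0);

generic_make_request() (patch 01/11 of this series) then splits that
bio against the queue's discard limits, so the granularity and
alignment bookkeeping no longer needs to live in every submitter.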