From patchwork Mon Jan 21 08:18:04 2019
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10773297
X-Patchwork-Delegate: snitzer@redhat.com
From: Ming Lei
To: Jens Axboe
Cc: Mike Snitzer, linux-mm@kvack.org, dm-devel@redhat.com,
	Christoph Hellwig, Sagi Grimberg, "Darrick J. Wong",
	Omar Sandoval, cluster-devel@redhat.com, linux-ext4@vger.kernel.org,
	Kent Overstreet, Boaz Harrosh, Gao Xiang, Coly Li,
	linux-raid@vger.kernel.org, Bob Peterson, linux-bcache@vger.kernel.org,
	Alexander Viro, Dave Chinner, David Sterba, Ming Lei,
	linux-block@vger.kernel.org, Theodore Ts'o, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-btrfs@vger.kernel.org
Date: Mon, 21 Jan 2019 16:18:04 +0800
Message-Id: <20190121081805.32727-18-ming.lei@redhat.com>
In-Reply-To: <20190121081805.32727-1-ming.lei@redhat.com>
References: <20190121081805.32727-1-ming.lei@redhat.com>
Subject: [dm-devel] [PATCH V14 17/18] block: kill QUEUE_FLAG_NO_SG_MERGE

Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
splitting"), the physical segment count is mainly figured out in
blk_queue_split() for the fast path, and the BIO_SEG_VALID flag is set
there too.

Now only blk_recount_segments() and blk_recalc_rq_segments() use this
flag.

blk_recount_segments() is bypassed in the fast path because
BIO_SEG_VALID is already set by blk_queue_split().

As for the remaining user, blk_recalc_rq_segments():

- it runs in the partial-completion branch of blk_update_request(),
  which is an unusual case;

- it runs in blk_cloned_rq_check_limits(), which is still not a big
  problem if the flag is killed, since dm-rq is the only user.

Multi-page bvec is enabled now, so not doing S/G merging is rather
pointless with the current setup of the I/O path: it isn't going to
save a significant number of cycles.
Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 block/blk-merge.c      | 31 ++++++-------------------------
 block/blk-mq-debugfs.c |  1 -
 block/blk-mq.c         |  3 ---
 drivers/md/dm-table.c  | 13 -------------
 include/linux/blkdev.h |  1 -
 5 files changed, 6 insertions(+), 43 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 8a498f29636f..b990853f6de7 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -358,8 +358,7 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
 EXPORT_SYMBOL(blk_queue_split);
 
 static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
-					     struct bio *bio,
-					     bool no_sg_merge)
+					     struct bio *bio)
 {
 	struct bio_vec bv, bvprv = { NULL };
 	int prev = 0;
@@ -385,13 +384,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
 		bio_for_each_mp_bvec(bv, bio, iter) {
-			/*
-			 * If SG merging is disabled, each bio vector is
-			 * a segment
-			 */
-			if (no_sg_merge)
-				goto new_segment;
-
 			if (prev) {
 				if (seg_size + bv.bv_len
 				    > queue_max_segment_size(q))
@@ -421,27 +413,16 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 
 void blk_recalc_rq_segments(struct request *rq)
 {
-	bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE,
-			&rq->q->queue_flags);
-
-	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio,
-			no_sg_merge);
+	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
 }
 
 void blk_recount_segments(struct request_queue *q, struct bio *bio)
 {
-	unsigned short seg_cnt = bio_segments(bio);
-
-	if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
-			(seg_cnt < queue_max_segments(q)))
-		bio->bi_phys_segments = seg_cnt;
-	else {
-		struct bio *nxt = bio->bi_next;
+	struct bio *nxt = bio->bi_next;
 
-		bio->bi_next = NULL;
-		bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false);
-		bio->bi_next = nxt;
-	}
+	bio->bi_next = NULL;
+	bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
+	bio->bi_next = nxt;
 
 	bio_set_flag(bio, BIO_SEG_VALID);
 }
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 90d68760af08..2f9a11ef5bad 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -128,7 +128,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(DEAD),
 	QUEUE_FLAG_NAME(INIT_DONE),
-	QUEUE_FLAG_NAME(NO_SG_MERGE),
 	QUEUE_FLAG_NAME(POLL),
 	QUEUE_FLAG_NAME(WC),
 	QUEUE_FLAG_NAME(FUA),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 3ba37b9e15e9..fa45817a7e62 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2829,9 +2829,6 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	    set->map[HCTX_TYPE_POLL].nr_queues)
 		blk_queue_flag_set(QUEUE_FLAG_POLL, q);
 
-	if (!(set->flags & BLK_MQ_F_SG_MERGE))
-		blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
-
 	q->sg_reserved_size = INT_MAX;
 
 	INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 4b1be754cc41..ba9481f1bf3c 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1698,14 +1698,6 @@ static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
 	return q && !blk_queue_add_random(q);
 }
 
-static int queue_supports_sg_merge(struct dm_target *ti, struct dm_dev *dev,
-				   sector_t start, sector_t len, void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
-}
-
 static bool dm_table_all_devices_attribute(struct dm_table *t,
 					   iterate_devices_callout_fn func)
 {
@@ -1902,11 +1894,6 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (!dm_table_supports_write_zeroes(t))
 		q->limits.max_write_zeroes_sectors = 0;
 
-	if (dm_table_all_devices_attribute(t, queue_supports_sg_merge))
-		blk_queue_flag_clear(QUEUE_FLAG_NO_SG_MERGE, q);
-	else
-		blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
-
 	dm_table_verify_integrity(t);
 
 	/*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6ebae3ee8f44..2d4cdc1a7bac 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -588,7 +588,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_FORCE	15	/* force complete on same CPU */
 #define QUEUE_FLAG_DEAD		16	/* queue tear-down finished */
 #define QUEUE_FLAG_INIT_DONE	17	/* queue is initialized */
-#define QUEUE_FLAG_NO_SG_MERGE	18	/* don't attempt to merge SG segments*/
 #define QUEUE_FLAG_POLL		19	/* IO polling enabled if set */
 #define QUEUE_FLAG_WC		20	/* Write back caching */
 #define QUEUE_FLAG_FUA		21	/* device supports FUA writes */