From patchwork Tue May 21 07:01:40 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10953059
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, linux-block@vger.kernel.org, Hannes Reinecke
Subject: [PATCH 1/4] block: don't decrement nr_phys_segments for physically contigous segments
Date: Tue, 21 May 2019 09:01:40 +0200
Message-Id: <20190521070143.22631-2-hch@lst.de>
In-Reply-To: <20190521070143.22631-1-hch@lst.de>
References: <20190521070143.22631-1-hch@lst.de>

Currently ll_merge_requests_fn, unlike all other merge functions,
reduces nr_phys_segments by one if the last segment of the previous
request and the first segment of the next request are physically
contiguous.
While this seems like a nice solution to avoid building requests with
more segments than necessary, it causes a mismatch between the segments
actually present in the request and those iterated over by the bvec
iterators, including __rq_for_each_bio.  This can for example
incorrectly trigger the single segment optimization in the nvme-pci
driver, and might lead to a mismatching nr_phys_segments count when the
number of segments is recalculated while inserting a cloned request.

We could possibly work around this by making the bvec iterators take
the front and back segment size into account, but that would require
moving them from the bio to the bio_iter and spreading this mess over
all users of bvecs.  Or we could simply remove this optimization under
the assumption that most users already build good enough bvecs, and
that the bio merge path never cared about this optimization either.
The latter is what this patch does.

Fixes: dff824b2aadb ("nvme-pci: optimize mapping of small single segment requests")
Signed-off-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
Reviewed-by: Ming Lei
---
 block/blk-merge.c | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 21e87a714a73..80a5a0facb87 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -358,7 +358,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	unsigned front_seg_size;
 	struct bio *fbio, *bbio;
 	struct bvec_iter iter;
-	bool new_bio = false;
 
 	if (!bio)
 		return 0;
@@ -379,31 +378,12 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
 		bio_for_each_bvec(bv, bio, iter) {
-			if (new_bio) {
-				if (seg_size + bv.bv_len
-				    > queue_max_segment_size(q))
-					goto new_segment;
-				if (!biovec_phys_mergeable(q, &bvprv, &bv))
-					goto new_segment;
-
-				seg_size += bv.bv_len;
-
-				if (nr_phys_segs == 1 && seg_size >
-				    front_seg_size)
-					front_seg_size = seg_size;
-
-				continue;
-			}
-new_segment:
 			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
 					&front_seg_size, NULL, UINT_MAX);
-			new_bio = false;
 		}
 		bbio = bio;
-		if (likely(bio->bi_iter.bi_size)) {
+		if (likely(bio->bi_iter.bi_size))
 			bvprv = bv;
-			new_bio = true;
-		}
 	}
 
 	fbio->bi_seg_front_size = front_seg_size;
@@ -725,7 +705,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 			req->bio->bi_seg_front_size = seg_size;
 		if (next->nr_phys_segments == 1)
 			next->biotail->bi_seg_back_size = seg_size;
-		total_phys_segments--;
 	}
 
 	if (total_phys_segments > queue_max_segments(q))
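For a concrete picture of the mismatch the patch above describes, here is a
standalone sketch (not kernel code; all names and values are made up for the
example): the old merge path decrements the physical segment count when two
adjacent bvecs happen to be physically contiguous, while any loop over the
bvecs still visits both entries, so a fast path keyed off a segment count of
one can fire for a request whose bvec iteration yields two elements.

/*
 * Standalone sketch (not kernel code) of the nr_phys_segments mismatch:
 * merging decrements the count for physically contiguous bvecs, but
 * iterating the bvecs still visits both entries.
 */
#include <stdio.h>

struct bvec {
	unsigned long phys;	/* physical address of the buffer */
	unsigned int len;	/* length in bytes */
};

int main(void)
{
	/* last bvec of the previous request, first bvec of the next one */
	struct bvec prev_tail = { .phys = 0x1000, .len = 512 };
	struct bvec next_head = { .phys = 0x1200, .len = 512 };
	unsigned int nr_phys_segments = 2;
	unsigned int visited;

	/* old ll_merge_requests_fn behaviour: contiguous -> count one less */
	if (prev_tail.phys + prev_tail.len == next_head.phys)
		nr_phys_segments--;

	/* anything walking the bvecs (a bio_for_each_bvec-style loop)
	 * still sees two entries */
	visited = 2;

	printf("nr_phys_segments=%u, bvecs visited=%u\n",
	       nr_phys_segments, visited);

	/* a "single segment" fast path keyed off nr_phys_segments == 1
	 * would fire here even though the iteration produced two bvecs */
	return 0;
}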
From patchwork Tue May 21 07:01:41 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10953061
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, linux-block@vger.kernel.org, Hannes Reinecke
Subject: [PATCH 2/4] block: force an unlimited segment size on queues with a virt boundary
Date: Tue, 21 May 2019 09:01:41 +0200
Message-Id: <20190521070143.22631-3-hch@lst.de>
In-Reply-To: <20190521070143.22631-1-hch@lst.de>
References: <20190521070143.22631-1-hch@lst.de>

We currently fail to update the front/back segment size in the bio when
deciding to allow an otherwise gappy segment to a device with a virt
boundary.  The reason why this did not cause problems is that devices
with a virt boundary fundamentally don't use segments as we know them,
and thus don't care.  Make that assumption formal by forcing an
unlimited segment size in this case.
Fixes: f6970f83ef79 ("block: don't check if adjacent bvecs in one bio can be mergeable")
Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
Reviewed-by: Hannes Reinecke
---
 block/blk-settings.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 3facc41476be..2ae348c101a0 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -310,6 +310,9 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
 		       __func__, max_size);
 	}
 
+	/* see blk_queue_virt_boundary() for the explanation */
+	WARN_ON_ONCE(q->limits.virt_boundary_mask);
+
 	q->limits.max_segment_size = max_size;
 }
 EXPORT_SYMBOL(blk_queue_max_segment_size);
@@ -742,6 +745,14 @@ EXPORT_SYMBOL(blk_queue_segment_boundary);
 void blk_queue_virt_boundary(struct request_queue *q, unsigned long mask)
 {
 	q->limits.virt_boundary_mask = mask;
+
+	/*
+	 * Devices that require a virtual boundary do not support scatter/gather
+	 * I/O natively, but instead require a descriptor list entry for each
+	 * page (which might not be identical to the Linux PAGE_SIZE).  Because
+	 * of that they are not limited by our notion of "segment size".
+	 */
+	q->limits.max_segment_size = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_queue_virt_boundary);
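To make the "descriptor per page, no segment size" point above concrete, here
is a small standalone sketch (not kernel or driver code; the boundary value
and all names are made up): a device with a 4K virt boundary consumes one
descriptor per boundary-sized chunk, so the length of a physically contiguous
segment never enters the picture.

/*
 * Standalone sketch of why a virt_boundary device has no real "max
 * segment size": the hardware wants one descriptor per boundary-sized
 * chunk (roughly how NVMe PRP lists work), regardless of how long a
 * physically contiguous segment is.
 */
#include <stdint.h>
#include <stdio.h>

#define VIRT_BOUNDARY_MASK 0xfffULL	/* 4K - 1, NVMe-style */

static void emit_descriptors(uint64_t dma_addr, uint64_t len)
{
	while (len) {
		/* bytes remaining until the next boundary */
		uint64_t chunk = (VIRT_BOUNDARY_MASK + 1) -
				 (dma_addr & VIRT_BOUNDARY_MASK);

		if (chunk > len)
			chunk = len;
		printf("descriptor: addr=0x%llx len=%llu\n",
		       (unsigned long long)dma_addr,
		       (unsigned long long)chunk);
		dma_addr += chunk;
		len -= chunk;
	}
}

int main(void)
{
	/* one 12K physically contiguous "segment" still needs 3 descriptors */
	emit_descriptors(0x100000, 12288);
	return 0;
}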
From patchwork Tue May 21 07:01:42 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10953063
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, linux-block@vger.kernel.org, Hannes Reinecke
Subject: [PATCH 3/4] block: remove the segment size check in bio_will_gap
Date: Tue, 21 May 2019 09:01:42 +0200
Message-Id: <20190521070143.22631-4-hch@lst.de>
In-Reply-To: <20190521070143.22631-1-hch@lst.de>
References: <20190521070143.22631-1-hch@lst.de>

We fundamentally do not have a maximum segment size for devices with a
virt boundary.  So don't bother checking it, especially given that the
existing checks didn't work properly to start with: we never fully
update the front/back segment size and miss the bi_seg_front_size that
would have been required for some cases.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei
Reviewed-by: Hannes Reinecke
---
 block/blk-merge.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 80a5a0facb87..eee2c02c50ce 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -12,23 +12,6 @@
 
 #include "blk.h"
 
-/*
- * Check if the two bvecs from two bios can be merged to one segment. If yes,
- * no need to check gap between the two bios since the 1st bio and the 1st bvec
- * in the 2nd bio can be handled in one segment.
- */
-static inline bool bios_segs_mergeable(struct request_queue *q,
-		struct bio *prev, struct bio_vec *prev_last_bv,
-		struct bio_vec *next_first_bv)
-{
-	if (!biovec_phys_mergeable(q, prev_last_bv, next_first_bv))
-		return false;
-	if (prev->bi_seg_back_size + next_first_bv->bv_len >
-			queue_max_segment_size(q))
-		return false;
-	return true;
-}
-
 static inline bool bio_will_gap(struct request_queue *q,
 		struct request *prev_rq, struct bio *prev, struct bio *next)
 {
@@ -60,7 +43,7 @@ static inline bool bio_will_gap(struct request_queue *q,
 	 */
 	bio_get_last_bvec(prev, &pb);
 	bio_get_first_bvec(next, &nb);
-	if (bios_segs_mergeable(q, prev, &pb, &nb))
+	if (biovec_phys_mergeable(q, &pb, &nb))
 		return false;
 	return __bvec_gap_to_prev(q, &pb, nb.bv_offset);
 }
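As a rough illustration of what remains of the gap check after the patch
above (a standalone model, not the kernel implementation; the helper below is
loosely based on __bvec_gap_to_prev and all names and values are made up):
with a virt boundary the only question is whether two consecutive buffers
would leave a hole inside a boundary-sized window, and no segment-size term
appears anywhere.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct bvec {
	uint32_t offset;	/* offset of the buffer within its page */
	uint32_t len;		/* length in bytes */
};

/* loosely modelled on __bvec_gap_to_prev(): a gap exists unless the next
 * buffer starts boundary-aligned and the previous one ends on the boundary */
static bool gap_to_prev(uint64_t boundary_mask, const struct bvec *prev,
			uint32_t next_offset)
{
	return (next_offset & boundary_mask) ||
	       ((prev->offset + prev->len) & boundary_mask);
}

int main(void)
{
	const uint64_t mask = 0xfff;	/* 4K virt boundary, NVMe-style */
	struct bvec full_page = { .offset = 0, .len = 4096 };
	struct bvec half_page = { .offset = 0, .len = 2048 };

	printf("full page then aligned buffer: gap=%d\n",
	       gap_to_prev(mask, &full_page, 0));	/* 0: no gap */
	printf("half page then aligned buffer: gap=%d\n",
	       gap_to_prev(mask, &half_page, 0));	/* 1: gap */
	return 0;
}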
From patchwork Tue May 21 07:01:43 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10953065
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, linux-block@vger.kernel.org, Hannes Reinecke
Subject: [PATCH 4/4] block: remove the bi_seg_{front,back}_size fields in struct bio
Date: Tue, 21 May 2019 09:01:43 +0200
Message-Id: <20190521070143.22631-5-hch@lst.de>
In-Reply-To: <20190521070143.22631-1-hch@lst.de>
References: <20190521070143.22631-1-hch@lst.de>

At this point these fields aren't used for anything, so we can remove
them.

Signed-off-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
Reviewed-by: Ming Lei
---
 block/blk-merge.c         | 94 +++++----------------------------
 include/linux/blk_types.h |  7 ---
 2 files changed, 12 insertions(+), 89 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index eee2c02c50ce..17713d7d98d5 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -162,8 +162,7 @@ static unsigned get_max_segment_size(struct request_queue *q,
  * variables.
  */
 static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
-		unsigned *nsegs, unsigned *last_seg_size,
-		unsigned *front_seg_size, unsigned *sectors, unsigned max_segs)
+		unsigned *nsegs, unsigned *sectors, unsigned max_segs)
 {
 	unsigned len = bv->bv_len;
 	unsigned total_len = 0;
@@ -185,28 +184,12 @@ static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
 			break;
 	}
 
-	if (!new_nsegs)
-		return !!len;
-
-	/* update front segment size */
-	if (!*nsegs) {
-		unsigned first_seg_size;
-
-		if (new_nsegs == 1)
-			first_seg_size = get_max_segment_size(q, bv->bv_offset);
-		else
-			first_seg_size = queue_max_segment_size(q);
-
-		if (*front_seg_size < first_seg_size)
-			*front_seg_size = first_seg_size;
+	if (new_nsegs) {
+		*nsegs += new_nsegs;
+		if (sectors)
+			*sectors += total_len >> 9;
 	}
 
-	/* update other varibles */
-	*last_seg_size = seg_size;
-	*nsegs += new_nsegs;
-	if (sectors)
-		*sectors += total_len >> 9;
-
 	/* split in the middle of the bvec if len != 0 */
 	return !!len;
 }
@@ -218,8 +201,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 {
 	struct bio_vec bv, bvprv, *bvprvp = NULL;
 	struct bvec_iter iter;
-	unsigned seg_size = 0, nsegs = 0, sectors = 0;
-	unsigned front_seg_size = bio->bi_seg_front_size;
+	unsigned nsegs = 0, sectors = 0;
 	bool do_split = true;
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
@@ -243,8 +225,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 				/* split in the middle of bvec */
 				bv.bv_len = (max_sectors - sectors) << 9;
 				bvec_split_segs(q, &bv, &nsegs,
-						&seg_size,
-						&front_seg_size,
 						&sectors, max_segs);
 			}
 			goto split;
@@ -258,12 +238,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 
 		if (bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
 			nsegs++;
-			seg_size = bv.bv_len;
 			sectors += bv.bv_len >> 9;
-			if (nsegs == 1 && seg_size > front_seg_size)
-				front_seg_size = seg_size;
-		} else if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
-				&front_seg_size, &sectors, max_segs)) {
+		} else if (bvec_split_segs(q, &bv, &nsegs, &sectors,
+				max_segs)) {
 			goto split;
 		}
 	}
@@ -278,10 +255,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 			bio = new;
 	}
 
-	bio->bi_seg_front_size = front_seg_size;
-	if (seg_size > bio->bi_seg_back_size)
-		bio->bi_seg_back_size = seg_size;
-
 	return do_split ? new : NULL;
 }
 
@@ -336,17 +309,13 @@ EXPORT_SYMBOL(blk_queue_split);
 static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 					     struct bio *bio)
 {
-	struct bio_vec uninitialized_var(bv), bvprv = { NULL };
-	unsigned int seg_size, nr_phys_segs;
-	unsigned front_seg_size;
-	struct bio *fbio, *bbio;
+	unsigned int nr_phys_segs = 0;
 	struct bvec_iter iter;
+	struct bio_vec bv;
 
 	if (!bio)
 		return 0;
 
-	front_seg_size = bio->bi_seg_front_size;
-
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
@@ -356,23 +325,11 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 		return 1;
 	}
 
-	fbio = bio;
-	seg_size = 0;
-	nr_phys_segs = 0;
 	for_each_bio(bio) {
-		bio_for_each_bvec(bv, bio, iter) {
-			bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
-					&front_seg_size, NULL, UINT_MAX);
-		}
-		bbio = bio;
-		if (likely(bio->bi_iter.bi_size))
-			bvprv = bv;
+		bio_for_each_bvec(bv, bio, iter)
+			bvec_split_segs(q, &bv, &nr_phys_segs, NULL, UINT_MAX);
 	}
 
-	fbio->bi_seg_front_size = front_seg_size;
-	if (seg_size > bbio->bi_seg_back_size)
-		bbio->bi_seg_back_size = seg_size;
-
 	return nr_phys_segs;
 }
 
@@ -392,24 +349,6 @@ void blk_recount_segments(struct request_queue *q, struct bio *bio)
 	bio_set_flag(bio, BIO_SEG_VALID);
 }
 
-static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
-		struct bio *nxt)
-{
-	struct bio_vec end_bv = { NULL }, nxt_bv;
-
-	if (bio->bi_seg_back_size + nxt->bi_seg_front_size >
-	    queue_max_segment_size(q))
-		return 0;
-
-	if (!bio_has_data(bio))
-		return 1;
-
-	bio_get_last_bvec(bio, &end_bv);
-	bio_get_first_bvec(nxt, &nxt_bv);
-
-	return biovec_phys_mergeable(q, &end_bv, &nxt_bv);
-}
-
 static inline struct scatterlist *blk_next_sg(struct scatterlist **sg,
 		struct scatterlist *sglist)
 {
@@ -669,8 +608,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 				struct request *next)
 {
 	int total_phys_segments;
-	unsigned int seg_size =
-		req->biotail->bi_seg_back_size + next->bio->bi_seg_front_size;
 
 	if (req_gap_back_merge(req, next->bio))
 		return 0;
@@ -683,13 +620,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
 		return 0;
 
 	total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
-	if (blk_phys_contig_segment(q, req->biotail, next->bio)) {
-		if (req->nr_phys_segments == 1)
-			req->bio->bi_seg_front_size = seg_size;
-		if (next->nr_phys_segments == 1)
-			next->biotail->bi_seg_back_size = seg_size;
-	}
-
 	if (total_phys_segments > queue_max_segments(q))
 		return 0;
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index be418275763c..95202f80676c 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -159,13 +159,6 @@ struct bio {
 	 */
 	unsigned int		bi_phys_segments;
 
-	/*
-	 * To keep track of the max segment size, we account for the
-	 * sizes of the first and last mergeable segments in this bio.
-	 */
-	unsigned int		bi_seg_front_size;
-	unsigned int		bi_seg_back_size;
-
 	struct bvec_iter	bi_iter;
 
 	atomic_t		__bi_remaining;