From patchwork Mon May 13 06:37:45 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940367
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 01/10] block: don't decrement nr_phys_segments for physically contiguous segments
Date: Mon, 13 May 2019 08:37:45 +0200
Message-Id: <20190513063754.1520-2-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

Currently ll_merge_requests_fn, unlike all other merge functions, reduces nr_phys_segments by one if the last segment of the previous request and the first segment of the next request are contiguous.
While this seems like a nice solution to avoid building requests that are smaller than they could be, it causes a mismatch between the segments actually present in the request and those iterated over by the bvec iterators, including __rq_for_each_bio. This could cause overwrites of too-small kmalloc allocations in any driver using ranged discards, or falsely trigger the single-segment optimization in the nvme-pci driver.

We could possibly work around this by making the bvec iterators take the front and back segment size into account, but that would require moving them from the bio to the bvec_iter and spreading this mess over all users of bvecs. Or we could simply remove this optimization under the assumption that most users already build good enough bvecs, and that the bio merge path never cared about this optimization either. The latter is what this patch does.

Signed-off-by: Christoph Hellwig

--- block/blk-merge.c | 23 +---------------------- 1 file changed, 1 insertion(+), 22 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index 21e87a714a73..80a5a0facb87 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -358,7 +358,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, unsigned front_seg_size; struct bio *fbio, *bbio; struct bvec_iter iter; - bool new_bio = false; if (!bio) return 0; @@ -379,31 +378,12 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, nr_phys_segs = 0; for_each_bio(bio) { bio_for_each_bvec(bv, bio, iter) { - if (new_bio) { - if (seg_size + bv.bv_len - > queue_max_segment_size(q)) - goto new_segment; - if (!biovec_phys_mergeable(q, &bvprv, &bv)) - goto new_segment; - - seg_size += bv.bv_len; - - if (nr_phys_segs == 1 && seg_size > - front_seg_size) - front_seg_size = seg_size; - - continue; - } -new_segment: bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size, &front_seg_size, NULL, UINT_MAX); - new_bio = false; } bbio = bio; - if (likely(bio->bi_iter.bi_size)) { + if (likely(bio->bi_iter.bi_size)) bvprv = bv; - new_bio = true; - } } fbio->bi_seg_front_size = front_seg_size; @@ -725,7 +705,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, req->bio->bi_seg_front_size = seg_size; if (next->nr_phys_segments == 1) next->biotail->bi_seg_back_size = seg_size; - total_phys_segments--; } if (total_phys_segments > queue_max_segments(q))
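To make the failure mode concrete: a driver implementing ranged discards typically allocates one descriptor per physical segment and then fills one per bio. A minimal sketch of that pattern follows; struct hypot_range and its fields are made up for illustration, but the allocation/iteration mismatch is exactly the one described above:

/*
 * Hypothetical ranged-discard setup: one descriptor is allocated per
 * physical segment, but one is written per bio, so a nr_phys_segments
 * that was decremented below the number of bios overflows the buffer.
 */
static int fill_discard_ranges(struct request *req)
{
	struct hypot_range *range;	/* made-up hardware descriptor */
	struct bio *bio;
	int i = 0;

	range = kmalloc_array(blk_rq_nr_phys_segments(req),
			      sizeof(*range), GFP_ATOMIC);
	if (!range)
		return -ENOMEM;

	__rq_for_each_bio(bio, req) {
		range[i].lba = bio->bi_iter.bi_sector;
		range[i].nlb = bio->bi_iter.bi_size >> 9;
		i++;
	}

	/* ... hand "range" to the hardware, then release it ... */
	kfree(range);
	return 0;
}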
From patchwork Mon May 13 06:37:46 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940369
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 02/10] block: force an unlimited segment size on queues with a virt boundary
Date: Mon, 13 May 2019 08:37:46 +0200
Message-Id: <20190513063754.1520-3-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

We currently fail to update the front/back segment size in the bio when deciding to allow an otherwise gappy segment to a device with a virt boundary. The reason this did not cause problems is that devices with a virt boundary fundamentally don't use segments as we know them and thus don't care. Make that assumption formal by forcing an unlimited segment size in this case.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei

--- block/blk-settings.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/block/blk-settings.c b/block/blk-settings.c index 3facc41476be..2ae348c101a0 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -310,6 +310,9 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size) __func__, max_size); } + /* see blk_queue_virt_boundary() for the explanation */ + WARN_ON_ONCE(q->limits.virt_boundary_mask); + q->limits.max_segment_size = max_size; } EXPORT_SYMBOL(blk_queue_max_segment_size); @@ -742,6 +745,14 @@ EXPORT_SYMBOL(blk_queue_segment_boundary); void blk_queue_virt_boundary(struct request_queue *q, unsigned long mask) { q->limits.virt_boundary_mask = mask; + + /* + * Devices that require a virtual boundary do not support scatter/gather + * I/O natively, but instead require a descriptor list entry for each + * page (which might not be identical to the Linux PAGE_SIZE). Because + * of that they are not limited by our notion of "segment size". + */ + q->limits.max_segment_size = UINT_MAX; } EXPORT_SYMBOL(blk_queue_virt_boundary);
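For context, this is roughly how such a device consumes a request: the driver emits one descriptor per device page of each segment, so queue_max_segment_size() never constrains anything. A hedged sketch, assuming a hypothetical 4k device page and a descriptor list sized elsewhere:

/*
 * Hypothetical descriptor setup for a device with a virt boundary:
 * every device-page-sized chunk of every segment gets its own list
 * entry, which is why "segment size" has no meaning here.
 */
#define DEV_PAGE_SIZE	4096	/* device page, not necessarily PAGE_SIZE */

static void fill_desc_list(struct request *req, __le64 *desc)
{
	struct req_iterator iter;
	struct bio_vec bv;
	unsigned int i = 0, off;

	rq_for_each_segment(bv, req, iter)
		for (off = 0; off < bv.bv_len; off += DEV_PAGE_SIZE)
			desc[i++] = cpu_to_le64(page_to_phys(bv.bv_page) +
						bv.bv_offset + off);
}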
From patchwork Mon May 13 06:37:47 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940371
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 03/10] block: remove the segment size check in bio_will_gap
Date: Mon, 13 May 2019 08:37:47 +0200
Message-Id: <20190513063754.1520-4-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

We fundamentally do not have a maximum segment size for devices with a virt boundary.
So don't bother checking it, especially given that the existing checks didn't work properly to start with: we never update bi_seg_back_size after a successful merge, and front merges would have had to check bi_seg_front_size anyway.

Signed-off-by: Christoph Hellwig
Reviewed-by: Ming Lei

--- block/blk-merge.c | 19 +------------------ 1 file changed, 1 insertion(+), 18 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index 80a5a0facb87..eee2c02c50ce 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -12,23 +12,6 @@ #include "blk.h" -/* - * Check if the two bvecs from two bios can be merged to one segment. If yes, - * no need to check gap between the two bios since the 1st bio and the 1st bvec - * in the 2nd bio can be handled in one segment. - */ -static inline bool bios_segs_mergeable(struct request_queue *q, - struct bio *prev, struct bio_vec *prev_last_bv, - struct bio_vec *next_first_bv) -{ - if (!biovec_phys_mergeable(q, prev_last_bv, next_first_bv)) - return false; - if (prev->bi_seg_back_size + next_first_bv->bv_len > - queue_max_segment_size(q)) - return false; - return true; -} - static inline bool bio_will_gap(struct request_queue *q, struct request *prev_rq, struct bio *prev, struct bio *next) { @@ -60,7 +43,7 @@ static inline bool bio_will_gap(struct request_queue *q, */ bio_get_last_bvec(prev, &pb); bio_get_first_bvec(next, &nb); - if (bios_segs_mergeable(q, prev, &pb, &nb)) + if (biovec_phys_mergeable(q, &pb, &nb)) return false; return __bvec_gap_to_prev(q, &pb, nb.bv_offset); }
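What remains in bio_will_gap is only the virt boundary test itself, which boils down to an alignment check on the two bvec offsets. A simplified sketch of that logic (the real __bvec_gap_to_prev also special-cases SG-mergeable bvecs):

/*
 * Simplified form of the remaining gap check: on a queue with a virt
 * boundary, two bvecs "gap" unless the first ends, and the second
 * starts, on a boundary-aligned address.
 */
static bool bvecs_gap(unsigned long virt_boundary_mask,
		      const struct bio_vec *prev, const struct bio_vec *next)
{
	unsigned long prev_end = prev->bv_offset + prev->bv_len;

	return ((prev_end | next->bv_offset) & virt_boundary_mask) != 0;
}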
From patchwork Mon May 13 06:37:48 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940373
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 04/10] block: remove the bi_seg_{front,back}_size fields in struct bio
Date: Mon, 13 May 2019 08:37:48 +0200
Message-Id: <20190513063754.1520-5-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

At this point these fields aren't used for anything, so we can remove them.

Signed-off-by: Christoph Hellwig

--- block/blk-merge.c | 94 +++++---------------------------------- include/linux/blk_types.h | 7 --- 2 files changed, 12 insertions(+), 89 deletions(-) diff --git a/block/blk-merge.c b/block/blk-merge.c index eee2c02c50ce..17713d7d98d5 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -162,8 +162,7 @@ static unsigned get_max_segment_size(struct request_queue *q, * variables. */ static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv, - unsigned *nsegs, unsigned *last_seg_size, - unsigned *front_seg_size, unsigned *sectors, unsigned max_segs) + unsigned *nsegs, unsigned *sectors, unsigned max_segs) { unsigned len = bv->bv_len; unsigned total_len = 0; @@ -185,28 +184,12 @@ static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv, break; } - if (!new_nsegs) - return !!len; - - /* update front segment size */ - if (!*nsegs) { - unsigned first_seg_size; - - if (new_nsegs == 1) - first_seg_size = get_max_segment_size(q, bv->bv_offset); - else - first_seg_size = queue_max_segment_size(q); - - if (*front_seg_size < first_seg_size) - *front_seg_size = first_seg_size; + if (new_nsegs) { + *nsegs += new_nsegs; + if (sectors) + *sectors += total_len >> 9; } - /* update other varibles */ - *last_seg_size = seg_size; - *nsegs += new_nsegs; - if (sectors) - *sectors += total_len >> 9; - /* split in the middle of the bvec if len != 0 */ return !!len; } @@ -218,8 +201,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, { struct bio_vec bv, bvprv, *bvprvp = NULL; struct bvec_iter iter; - unsigned seg_size = 0, nsegs = 0, sectors = 0; - unsigned front_seg_size = bio->bi_seg_front_size; + unsigned nsegs = 0, sectors = 0; bool do_split = true; struct bio *new = NULL; const unsigned max_sectors = get_max_io_size(q, bio); @@ -243,8 +225,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, /* split in the middle of bvec */ bv.bv_len = (max_sectors - sectors) << 9; bvec_split_segs(q, &bv, &nsegs, - &seg_size, - &front_seg_size, &sectors, max_segs); } goto split; @@ -258,12 +238,9 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, if (bv.bv_offset + bv.bv_len <= PAGE_SIZE) { nsegs++; - seg_size = bv.bv_len; sectors
+= bv.bv_len >> 9; - if (nsegs == 1 && seg_size > front_seg_size) - front_seg_size = seg_size; - } else if (bvec_split_segs(q, &bv, &nsegs, &seg_size, - &front_seg_size, &sectors, max_segs)) { + } else if (bvec_split_segs(q, &bv, &nsegs, &sectors, + max_segs)) { goto split; } } @@ -278,10 +255,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, bio = new; } - bio->bi_seg_front_size = front_seg_size; - if (seg_size > bio->bi_seg_back_size) - bio->bi_seg_back_size = seg_size; - return do_split ? new : NULL; } @@ -336,17 +309,13 @@ EXPORT_SYMBOL(blk_queue_split); static unsigned int __blk_recalc_rq_segments(struct request_queue *q, struct bio *bio) { - struct bio_vec uninitialized_var(bv), bvprv = { NULL }; - unsigned int seg_size, nr_phys_segs; - unsigned front_seg_size; - struct bio *fbio, *bbio; + unsigned int nr_phys_segs = 0; struct bvec_iter iter; + struct bio_vec bv; if (!bio) return 0; - front_seg_size = bio->bi_seg_front_size; - switch (bio_op(bio)) { case REQ_OP_DISCARD: case REQ_OP_SECURE_ERASE: @@ -356,23 +325,11 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, return 1; } - fbio = bio; - seg_size = 0; - nr_phys_segs = 0; for_each_bio(bio) { - bio_for_each_bvec(bv, bio, iter) { - bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size, - &front_seg_size, NULL, UINT_MAX); - } - bbio = bio; - if (likely(bio->bi_iter.bi_size)) - bvprv = bv; + bio_for_each_bvec(bv, bio, iter) + bvec_split_segs(q, &bv, &nr_phys_segs, NULL, UINT_MAX); } - fbio->bi_seg_front_size = front_seg_size; - if (seg_size > bbio->bi_seg_back_size) - bbio->bi_seg_back_size = seg_size; - return nr_phys_segs; } @@ -392,24 +349,6 @@ void blk_recount_segments(struct request_queue *q, struct bio *bio) bio_set_flag(bio, BIO_SEG_VALID); } -static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio, - struct bio *nxt) -{ - struct bio_vec end_bv = { NULL }, nxt_bv; - - if (bio->bi_seg_back_size + nxt->bi_seg_front_size > - queue_max_segment_size(q)) - return 0; - - if (!bio_has_data(bio)) - return 1; - - bio_get_last_bvec(bio, &end_bv); - bio_get_first_bvec(nxt, &nxt_bv); - - return biovec_phys_mergeable(q, &end_bv, &nxt_bv); -} - static inline struct scatterlist *blk_next_sg(struct scatterlist **sg, struct scatterlist *sglist) { @@ -669,8 +608,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, struct request *next) { int total_phys_segments; - unsigned int seg_size = - req->biotail->bi_seg_back_size + next->bio->bi_seg_front_size; if (req_gap_back_merge(req, next->bio)) return 0; @@ -683,13 +620,6 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, return 0; total_phys_segments = req->nr_phys_segments + next->nr_phys_segments; - if (blk_phys_contig_segment(q, req->biotail, next->bio)) { - if (req->nr_phys_segments == 1) - req->bio->bi_seg_front_size = seg_size; - if (next->nr_phys_segments == 1) - next->biotail->bi_seg_back_size = seg_size; - } - if (total_phys_segments > queue_max_segments(q)) return 0; diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index be418275763c..95202f80676c 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -159,13 +159,6 @@ struct bio { */ unsigned int bi_phys_segments; - /* - * To keep track of the max segment size, we account for the - * sizes of the first and last mergeable segments in this bio.
- */ - unsigned int bi_seg_front_size; - unsigned int bi_seg_back_size; - struct bvec_iter bi_iter; atomic_t __bi_remaining;

From patchwork Mon May 13 06:37:49 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940375
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 05/10] block: initialize the write priority in blk_rq_bio_prep
Date: Mon, 13 May 2019 08:37:49 +0200
Message-Id: <20190513063754.1520-6-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

The priority field also makes sense for passthrough requests, so initialize it in blk_rq_bio_prep.
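With this, a passthrough submitter only needs to set the priority on the bio and it ends up on the request when the bio is attached. A minimal usage sketch (assuming rq and bio were allocated elsewhere; blk_rq_append_bio ends up in blk_rq_bio_prep for the first bio):

/* The bio's priority now propagates into rq->ioprio automatically. */
static int submit_prio_bio(struct request *rq, struct bio *bio)
{
	bio->bi_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
	return blk_rq_append_bio(rq, &bio);
}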
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni

--- block/blk-core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/block/blk-core.c b/block/blk-core.c index 419d600e6637..7fb394dd3e11 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -716,7 +716,6 @@ void blk_init_request_from_bio(struct request *req, struct bio *bio) req->cmd_flags |= REQ_FAILFAST_MASK; req->__sector = bio->bi_iter.bi_sector; - req->ioprio = bio_prio(bio); req->write_hint = bio->bi_write_hint; blk_rq_bio_prep(req->q, req, bio); } @@ -1494,6 +1493,7 @@ void blk_rq_bio_prep(struct request_queue *q, struct request *rq, rq->__data_len = bio->bi_iter.bi_size; rq->bio = rq->biotail = bio; + rq->ioprio = bio_prio(bio); if (bio->bi_disk) rq->rq_disk = bio->bi_disk;

From patchwork Mon May 13 06:37:50 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940377
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 06/10] block: remove blk_init_request_from_bio
Date: Mon, 13 May 2019 08:37:50 +0200
Message-Id: <20190513063754.1520-7-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>
lightnvm should never have used this function, as it sends passthrough requests; switch it to blk_rq_append_bio like all the other passthrough request users. Inline blk_init_request_from_bio into the only remaining caller.

Signed-off-by: Christoph Hellwig

--- block/blk-core.c | 11 ----------- block/blk-mq.c | 7 ++++++- drivers/nvme/host/lightnvm.c | 2 +- include/linux/blkdev.h | 1 - 4 files changed, 7 insertions(+), 14 deletions(-) diff --git a/block/blk-core.c b/block/blk-core.c index 7fb394dd3e11..b46ea531cb07 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -710,17 +710,6 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, return false; } -void blk_init_request_from_bio(struct request *req, struct bio *bio) -{ - if (bio->bi_opf & REQ_RAHEAD) - req->cmd_flags |= REQ_FAILFAST_MASK; - - req->__sector = bio->bi_iter.bi_sector; - req->write_hint = bio->bi_write_hint; - blk_rq_bio_prep(req->q, req, bio); -} -EXPORT_SYMBOL_GPL(blk_init_request_from_bio); - static void handle_bad_sector(struct bio *bio, sector_t maxsector) { char b[BDEVNAME_SIZE]; diff --git a/block/blk-mq.c b/block/blk-mq.c index 08a6248d8536..c0e5132d9103 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1765,7 +1765,12 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule) static void blk_mq_bio_to_request(struct request *rq, struct bio *bio) { - blk_init_request_from_bio(rq, bio); + if (bio->bi_opf & REQ_RAHEAD) + rq->cmd_flags |= REQ_FAILFAST_MASK; + + rq->__sector = bio->bi_iter.bi_sector; + rq->write_hint = bio->bi_write_hint; + blk_rq_bio_prep(rq->q, rq, bio); blk_account_io_start(rq, true); } diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 949e29e1d782..6cc050894d3a 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -660,7 +660,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q, rq->cmd_flags &= ~REQ_FAILFAST_DRIVER; if (rqd->bio) - blk_init_request_from_bio(rq, rqd->bio); + blk_rq_append_bio(rq, &rqd->bio); else rq->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM); diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 1aafeb923e7b..05bc85cdbc25 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -823,7 +823,6 @@ extern void blk_unregister_queue(struct gendisk *disk); extern blk_qc_t generic_make_request(struct bio *bio); extern blk_qc_t direct_make_request(struct bio *bio); extern void blk_rq_init(struct request_queue *q, struct request *rq); -extern void blk_init_request_from_bio(struct request *req, struct bio *bio); extern void blk_put_request(struct request *); extern struct request *blk_get_request(struct request_queue *, unsigned int op, blk_mq_req_flags_t flags);
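Note that blk_rq_append_bio() takes a struct bio ** because bounce buffering may substitute the bio. A hedged sketch of the generic passthrough pattern lightnvm is switched to (function name and opcode choice are illustrative):

/* Allocate a passthrough request and attach a caller-built bio. */
static struct request *pt_alloc_request(struct request_queue *q,
					struct bio *bio)
{
	struct request *rq;
	int ret;

	rq = blk_get_request(q, REQ_OP_DRV_IN, 0);
	if (IS_ERR(rq))
		return rq;

	if (bio) {
		ret = blk_rq_append_bio(rq, &bio);
		if (ret) {
			blk_put_request(rq);
			return ERR_PTR(ret);
		}
	}
	return rq;
}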
From patchwork Mon May 13 06:37:51 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940379
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 07/10] block: remove the bi_phys_segments field in struct bio
Date: Mon, 13 May 2019 08:37:51 +0200
Message-Id: <20190513063754.1520-8-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

We only need the number of segments in the blk-mq submission path. Remove the field from struct bio, and return it from a variant of blk_queue_split instead so that it can be passed as an argument to those functions that need the value. This also means we stop recounting segments except for cloning and partial segments. To keep the number of arguments in this hot path down, remove the pointless struct request_queue argument from any function that had it and grew a nr_segs argument.
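For out-of-tree callers that relied on bio_phys_segments(), the equivalent count can be taken straight from the bvecs, the same way blk_rq_append_bio() now does it. A sketch (this deliberately ignores the per-queue limits that bvec_split_segs() applies when a bvec must be split):

/* Count physical segments the way the submission path now does:
 * one per (possibly multi-page) bvec. */
static unsigned int count_bio_segs(struct bio *bio)
{
	struct bvec_iter iter;
	struct bio_vec bv;
	unsigned int nr_segs = 0;

	bio_for_each_bvec(bv, bio, iter)
		nr_segs++;
	return nr_segs;
}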
Signed-off-by: Christoph Hellwig --- Documentation/block/biodoc.txt | 1 - block/bfq-iosched.c | 5 ++- block/bio.c | 15 +------ block/blk-core.c | 32 +++++++-------- block/blk-map.c | 10 ++++- block/blk-merge.c | 75 ++++++++++++---------------------- block/blk-mq-sched.c | 26 +++++++----- block/blk-mq-sched.h | 10 +++-- block/blk-mq.c | 23 ++++++----- block/blk.h | 23 ++++++----- block/kyber-iosched.c | 5 ++- block/mq-deadline.c | 5 ++- drivers/md/raid5.c | 1 - include/linux/bio.h | 1 - include/linux/blk-mq.h | 2 +- include/linux/blk_types.h | 6 --- include/linux/blkdev.h | 1 - include/linux/elevator.h | 2 +- 18 files changed, 106 insertions(+), 137 deletions(-) diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt index ac18b488cb5e..31c177663ed5 100644 --- a/Documentation/block/biodoc.txt +++ b/Documentation/block/biodoc.txt @@ -436,7 +436,6 @@ struct bio { struct bvec_iter bi_iter; /* current index into bio_vec array */ unsigned int bi_size; /* total size in bytes */ - unsigned short bi_phys_segments; /* segments after physaddr coalesce*/ unsigned short bi_hw_segments; /* segments after DMA remapping */ unsigned int bi_max; /* max bio_vecs we can hold used as index into pool */ diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index f8d430f88d25..a6bf842cbe16 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -2027,7 +2027,8 @@ static void bfq_remove_request(struct request_queue *q, } -static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio) +static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio, + unsigned int nr_segs) { struct request_queue *q = hctx->queue; struct bfq_data *bfqd = q->elevator->elevator_data; @@ -2050,7 +2051,7 @@ static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio) bfqd->bio_bfqq = NULL; bfqd->bio_bic = bic; - ret = blk_mq_sched_try_merge(q, bio, &free); + ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free); if (free) blk_mq_free_request(free); diff --git a/block/bio.c b/block/bio.c index 683cbb40f051..d550d36392e9 100644 --- a/block/bio.c +++ b/block/bio.c @@ -558,14 +558,6 @@ void bio_put(struct bio *bio) } EXPORT_SYMBOL(bio_put); -int bio_phys_segments(struct request_queue *q, struct bio *bio) -{ - if (unlikely(!bio_flagged(bio, BIO_SEG_VALID))) - blk_recount_segments(q, bio); - - return bio->bi_phys_segments; -} - /** * __bio_clone_fast - clone a bio that shares the original bio's biovec * @bio: destination bio @@ -739,7 +731,7 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio, if (bio_full(bio)) return 0; - if (bio->bi_phys_segments >= queue_max_segments(q)) + if (bio->bi_vcnt >= queue_max_segments(q)) return 0; bvec = &bio->bi_io_vec[bio->bi_vcnt]; @@ -749,8 +741,6 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio, bio->bi_vcnt++; done: bio->bi_iter.bi_size += len; - bio->bi_phys_segments = bio->bi_vcnt; - bio_set_flag(bio, BIO_SEG_VALID); return len; } @@ -1910,10 +1900,7 @@ void bio_trim(struct bio *bio, int offset, int size) if (offset == 0 && size == bio->bi_iter.bi_size) return; - bio_clear_flag(bio, BIO_SEG_VALID); - bio_advance(bio, offset << 9); - bio->bi_iter.bi_size = size; if (bio_integrity(bio)) diff --git a/block/blk-core.c b/block/blk-core.c index b46ea531cb07..84118411626c 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -573,15 +573,15 @@ void blk_put_request(struct request *req) } EXPORT_SYMBOL(blk_put_request); -bool bio_attempt_back_merge(struct request_queue *q, struct request *req, - struct bio *bio) 
+bool bio_attempt_back_merge(struct request *req, struct bio *bio, + unsigned int nr_segs) { const int ff = bio->bi_opf & REQ_FAILFAST_MASK; - if (!ll_back_merge_fn(q, req, bio)) + if (!ll_back_merge_fn(req, bio, nr_segs)) return false; - trace_block_bio_backmerge(q, req, bio); + trace_block_bio_backmerge(req->q, req, bio); if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff) blk_rq_set_mixed_merge(req); @@ -594,15 +594,15 @@ bool bio_attempt_back_merge(struct request_queue *q, struct request *req, return true; } -bool bio_attempt_front_merge(struct request_queue *q, struct request *req, - struct bio *bio) +bool bio_attempt_front_merge(struct request *req, struct bio *bio, + unsigned int nr_segs) { const int ff = bio->bi_opf & REQ_FAILFAST_MASK; - if (!ll_front_merge_fn(q, req, bio)) + if (!ll_front_merge_fn(req, bio, nr_segs)) return false; - trace_block_bio_frontmerge(q, req, bio); + trace_block_bio_frontmerge(req->q, req, bio); if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff) blk_rq_set_mixed_merge(req); @@ -644,6 +644,7 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req, * blk_attempt_plug_merge - try to merge with %current's plugged list * @q: request_queue new bio is being queued at * @bio: new bio being queued + * @nr_segs: number of segments in @bio * @same_queue_rq: pointer to &struct request that gets filled in when * another request associated with @q is found on the plug list * (optional, may be %NULL) @@ -662,7 +663,7 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req, * Caller must ensure !blk_queue_nomerges(q) beforehand. */ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, - struct request **same_queue_rq) + unsigned int nr_segs, struct request **same_queue_rq) { struct blk_plug *plug; struct request *rq; @@ -691,10 +692,10 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio, switch (blk_try_merge(rq, bio)) { case ELEVATOR_BACK_MERGE: - merged = bio_attempt_back_merge(q, rq, bio); + merged = bio_attempt_back_merge(rq, bio, nr_segs); break; case ELEVATOR_FRONT_MERGE: - merged = bio_attempt_front_merge(q, rq, bio); + merged = bio_attempt_front_merge(rq, bio, nr_segs); break; case ELEVATOR_DISCARD_MERGE: merged = bio_attempt_discard_merge(q, rq, bio); @@ -1472,14 +1473,9 @@ bool blk_update_request(struct request *req, blk_status_t error, } EXPORT_SYMBOL_GPL(blk_update_request); -void blk_rq_bio_prep(struct request_queue *q, struct request *rq, - struct bio *bio) +void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs) { - if (bio_has_data(bio)) - rq->nr_phys_segments = bio_phys_segments(q, bio); - else if (bio_op(bio) == REQ_OP_DISCARD) - rq->nr_phys_segments = 1; - + rq->nr_phys_segments = nr_segs; rq->__data_len = bio->bi_iter.bi_size; rq->bio = rq->biotail = bio; rq->ioprio = bio_prio(bio); diff --git a/block/blk-map.c b/block/blk-map.c index db9373bd31ac..3a62e471d81b 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -18,13 +18,19 @@ int blk_rq_append_bio(struct request *rq, struct bio **bio) { struct bio *orig_bio = *bio; + struct bvec_iter iter; + struct bio_vec bv; + unsigned int nr_segs = 0; blk_queue_bounce(rq->q, bio); + bio_for_each_bvec(bv, *bio, iter) + nr_segs++; + if (!rq->bio) { - blk_rq_bio_prep(rq->q, rq, *bio); + blk_rq_bio_prep(rq, *bio, nr_segs); } else { - if (!ll_back_merge_fn(rq->q, rq, *bio)) { + if (!ll_back_merge_fn(rq, *bio, nr_segs)) { if (orig_bio != *bio) { bio_put(*bio); *bio = orig_bio; diff --git a/block/blk-merge.c 
b/block/blk-merge.c index 17713d7d98d5..72b4fd89a22d 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -258,32 +258,29 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, return do_split ? new : NULL; } -void blk_queue_split(struct request_queue *q, struct bio **bio) +void __blk_queue_split(struct request_queue *q, struct bio **bio, + unsigned int *nr_segs) { - struct bio *split, *res; - unsigned nsegs; + struct bio *split; switch (bio_op(*bio)) { case REQ_OP_DISCARD: case REQ_OP_SECURE_ERASE: - split = blk_bio_discard_split(q, *bio, &q->bio_split, &nsegs); + split = blk_bio_discard_split(q, *bio, &q->bio_split, nr_segs); break; case REQ_OP_WRITE_ZEROES: - split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split, &nsegs); + split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split, + nr_segs); break; case REQ_OP_WRITE_SAME: - split = blk_bio_write_same_split(q, *bio, &q->bio_split, &nsegs); + split = blk_bio_write_same_split(q, *bio, &q->bio_split, + nr_segs); break; default: - split = blk_bio_segment_split(q, *bio, &q->bio_split, &nsegs); + split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs); break; } - /* physical segments can be figured out during splitting */ - res = split ? split : *bio; - res->bi_phys_segments = nsegs; - bio_set_flag(res, BIO_SEG_VALID); - if (split) { /* there isn't chance to merge the splitted bio */ split->bi_opf |= REQ_NOMERGE; @@ -304,6 +301,13 @@ void blk_queue_split(struct request_queue *q, struct bio **bio) *bio = split; } } + +void blk_queue_split(struct request_queue *q, struct bio **bio) +{ + unsigned int nr_segs; + + __blk_queue_split(q, bio, &nr_segs); +} EXPORT_SYMBOL(blk_queue_split); static unsigned int __blk_recalc_rq_segments(struct request_queue *q, @@ -338,17 +342,6 @@ void blk_recalc_rq_segments(struct request *rq) rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio); } -void blk_recount_segments(struct request_queue *q, struct bio *bio) -{ - struct bio *nxt = bio->bi_next; - - bio->bi_next = NULL; - bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio); - bio->bi_next = nxt; - - bio_set_flag(bio, BIO_SEG_VALID); -} - static inline struct scatterlist *blk_next_sg(struct scatterlist **sg, struct scatterlist *sglist) { @@ -519,16 +512,13 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq, } EXPORT_SYMBOL(blk_rq_map_sg); -static inline int ll_new_hw_segment(struct request_queue *q, - struct request *req, - struct bio *bio) +static inline int ll_new_hw_segment(struct request *req, struct bio *bio, + unsigned int nr_phys_segs) { - int nr_phys_segs = bio_phys_segments(q, bio); - - if (req->nr_phys_segments + nr_phys_segs > queue_max_segments(q)) + if (req->nr_phys_segments + nr_phys_segs > queue_max_segments(req->q)) goto no_merge; - if (blk_integrity_merge_bio(q, req, bio) == false) + if (blk_integrity_merge_bio(req->q, req, bio) == false) goto no_merge; /* @@ -539,12 +529,11 @@ static inline int ll_new_hw_segment(struct request_queue *q, return 1; no_merge: - req_set_nomerge(q, req); + req_set_nomerge(req->q, req); return 0; } -int ll_back_merge_fn(struct request_queue *q, struct request *req, - struct bio *bio) +int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs) { if (req_gap_back_merge(req, bio)) return 0; @@ -553,21 +542,15 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req, return 0; if (blk_rq_sectors(req) + bio_sectors(bio) > blk_rq_get_max_sectors(req, blk_rq_pos(req))) { - req_set_nomerge(q, req); + req_set_nomerge(req->q, req); 
return 0; } - if (!bio_flagged(req->biotail, BIO_SEG_VALID)) - blk_recount_segments(q, req->biotail); - if (!bio_flagged(bio, BIO_SEG_VALID)) - blk_recount_segments(q, bio); - return ll_new_hw_segment(q, req, bio); + return ll_new_hw_segment(req, bio, nr_segs); } -int ll_front_merge_fn(struct request_queue *q, struct request *req, - struct bio *bio) +int ll_front_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs) { - if (req_gap_front_merge(req, bio)) return 0; if (blk_integrity_rq(req) && @@ -575,15 +558,11 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req, return 0; if (blk_rq_sectors(req) + bio_sectors(bio) > blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) { - req_set_nomerge(q, req); + req_set_nomerge(req->q, req); return 0; } - if (!bio_flagged(bio, BIO_SEG_VALID)) - blk_recount_segments(q, bio); - if (!bio_flagged(req->bio, BIO_SEG_VALID)) - blk_recount_segments(q, req->bio); - return ll_new_hw_segment(q, req, bio); + return ll_new_hw_segment(req, bio, nr_segs); } static bool req_attempt_discard_merge(struct request_queue *q, struct request *req, diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c index 74c6bb871f7e..72124d76b96a 100644 --- a/block/blk-mq-sched.c +++ b/block/blk-mq-sched.c @@ -224,7 +224,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx) } bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio, - struct request **merged_request) + unsigned int nr_segs, struct request **merged_request) { struct request *rq; @@ -232,7 +232,7 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio, case ELEVATOR_BACK_MERGE: if (!blk_mq_sched_allow_merge(q, rq, bio)) return false; - if (!bio_attempt_back_merge(q, rq, bio)) + if (!bio_attempt_back_merge(rq, bio, nr_segs)) return false; *merged_request = attempt_back_merge(q, rq); if (!*merged_request) @@ -241,7 +241,7 @@ bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio, case ELEVATOR_FRONT_MERGE: if (!blk_mq_sched_allow_merge(q, rq, bio)) return false; - if (!bio_attempt_front_merge(q, rq, bio)) + if (!bio_attempt_front_merge(rq, bio, nr_segs)) return false; *merged_request = attempt_front_merge(q, rq); if (!*merged_request) @@ -260,7 +260,7 @@ EXPORT_SYMBOL_GPL(blk_mq_sched_try_merge); * of them. 
 */
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
-		struct bio *bio)
+		struct bio *bio, unsigned int nr_segs)
 {
 	struct request *rq;
 	int checked = 8;
@@ -277,11 +277,13 @@ bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
 		switch (blk_try_merge(rq, bio)) {
 		case ELEVATOR_BACK_MERGE:
 			if (blk_mq_sched_allow_merge(q, rq, bio))
-				merged = bio_attempt_back_merge(q, rq, bio);
+				merged = bio_attempt_back_merge(rq, bio,
+						nr_segs);
 			break;
 		case ELEVATOR_FRONT_MERGE:
 			if (blk_mq_sched_allow_merge(q, rq, bio))
-				merged = bio_attempt_front_merge(q, rq, bio);
+				merged = bio_attempt_front_merge(rq, bio,
+						nr_segs);
 			break;
 		case ELEVATOR_DISCARD_MERGE:
 			merged = bio_attempt_discard_merge(q, rq, bio);
@@ -304,13 +306,14 @@ EXPORT_SYMBOL_GPL(blk_mq_bio_list_merge);
  */
 static bool blk_mq_attempt_merge(struct request_queue *q,
 				 struct blk_mq_hw_ctx *hctx,
-				 struct blk_mq_ctx *ctx, struct bio *bio)
+				 struct blk_mq_ctx *ctx, struct bio *bio,
+				 unsigned int nr_segs)
 {
 	enum hctx_type type = hctx->type;

 	lockdep_assert_held(&ctx->lock);

-	if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio)) {
+	if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
 		ctx->rq_merged++;
 		return true;
 	}
@@ -318,7 +321,8 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
 	return false;
 }

-bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct elevator_queue *e = q->elevator;
 	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
@@ -328,7 +332,7 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
 	if (e && e->type->ops.bio_merge) {
 		blk_mq_put_ctx(ctx);
-		return e->type->ops.bio_merge(hctx, bio);
+		return e->type->ops.bio_merge(hctx, bio, nr_segs);
 	}

 	type = hctx->type;
@@ -336,7 +340,7 @@
 	    !list_empty_careful(&ctx->rq_lists[type])) {
 		/* default per sw-queue merge */
 		spin_lock(&ctx->lock);
-		ret = blk_mq_attempt_merge(q, hctx, ctx, bio);
+		ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
 		spin_unlock(&ctx->lock);
 	}

diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index c7bdb52367ac..a1e5850ffb1d 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -12,8 +12,9 @@ void blk_mq_sched_assign_ioc(struct request *rq);
 void blk_mq_sched_request_inserted(struct request *rq);
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
-		struct request **merged_request);
-bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio);
+		unsigned int nr_segs, struct request **merged_request);
+bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+		unsigned int nr_segs);
 bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq);
 void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx);
 void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx);
@@ -30,12 +31,13 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e);
 void blk_mq_exit_sched(struct request_queue *q, struct elevator_queue *e);

 static inline bool
-blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
+blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
+		unsigned int nr_segs)
 {
 	if (blk_queue_nomerges(q) || !bio_mergeable(bio))
 		return false;

-	return __blk_mq_sched_bio_merge(q, bio);
+	return __blk_mq_sched_bio_merge(q, bio, nr_segs);
 }

 static inline bool
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c0e5132d9103..dbb58c50654b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1763,14 +1763,15 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	}
 }

-static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
+static void blk_mq_bio_to_request(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
 {
 	if (bio->bi_opf & REQ_RAHEAD)
 		rq->cmd_flags |= REQ_FAILFAST_MASK;

 	rq->__sector = bio->bi_iter.bi_sector;
 	rq->write_hint = bio->bi_write_hint;
-	blk_rq_bio_prep(rq->q, rq, bio);
+	blk_rq_bio_prep(rq, bio, nr_segs);

 	blk_account_io_start(rq, true);
 }
@@ -1940,20 +1941,20 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	struct request *rq;
 	struct blk_plug *plug;
 	struct request *same_queue_rq = NULL;
+	unsigned int nr_segs;
 	blk_qc_t cookie;

 	blk_queue_bounce(q, &bio);
-
-	blk_queue_split(q, &bio);
+	__blk_queue_split(q, &bio, &nr_segs);

 	if (!bio_integrity_prep(bio))
 		return BLK_QC_T_NONE;

 	if (!is_flush_fua && !blk_queue_nomerges(q) &&
-	    blk_attempt_plug_merge(q, bio, &same_queue_rq))
+	    blk_attempt_plug_merge(q, bio, nr_segs, &same_queue_rq))
 		return BLK_QC_T_NONE;

-	if (blk_mq_sched_bio_merge(q, bio))
+	if (blk_mq_sched_bio_merge(q, bio, nr_segs))
 		return BLK_QC_T_NONE;

 	rq_qos_throttle(q, bio);
@@ -1976,7 +1977,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	plug = current->plug;
 	if (unlikely(is_flush_fua)) {
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);

 		/* bypass scheduler for flush rq */
 		blk_insert_flush(rq);
@@ -1990,7 +1991,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		struct request *last = NULL;

 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);

 		if (!request_count)
 			trace_block_plug(q);
@@ -2005,7 +2006,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)

 		blk_add_rq_to_plug(plug, rq);
 	} else if (plug && !blk_queue_nomerges(q)) {
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);

 		/*
 		 * We do limited plugging. If the bio can be merged, do that.
@@ -2034,11 +2035,11 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	} else if ((q->nr_hw_queues > 1 && is_sync) ||
 			(!q->elevator && !data.hctx->dispatch_busy)) {
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
 	} else {
 		blk_mq_put_ctx(data.ctx);
-		blk_mq_bio_to_request(rq, bio);
+		blk_mq_bio_to_request(rq, bio, nr_segs);
 		blk_mq_sched_insert_request(rq, false, true, true);
 	}

diff --git a/block/blk.h b/block/blk.h
index e27fd1512e4b..a18d0b9fe353 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -51,8 +51,7 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q,
 void blk_free_flush_queue(struct blk_flush_queue *q);

 void blk_exit_queue(struct request_queue *q);
-void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
-		struct bio *bio);
+void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs);
 void blk_freeze_queue(struct request_queue *q);

 static inline void blk_queue_enter_live(struct request_queue *q)
@@ -154,14 +153,14 @@ static inline bool bio_integrity_endio(struct bio *bio)
 unsigned long blk_rq_timeout(unsigned long timeout);
 void blk_add_timer(struct request *req);

-bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
-		struct bio *bio);
-bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
-		struct bio *bio);
+bool bio_attempt_front_merge(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
+bool bio_attempt_back_merge(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
 bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
 		struct bio *bio);
 bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
-		struct request **same_queue_rq);
+		unsigned int nr_segs, struct request **same_queue_rq);

 void blk_account_io_start(struct request *req, bool new_io);
 void blk_account_io_completion(struct request *req, unsigned int bytes);
@@ -195,10 +194,12 @@ static inline int blk_should_fake_timeout(struct request_queue *q)
 }
 #endif

-int ll_back_merge_fn(struct request_queue *q, struct request *req,
-		struct bio *bio);
-int ll_front_merge_fn(struct request_queue *q, struct request *req,
-		struct bio *bio);
+void __blk_queue_split(struct request_queue *q, struct bio **bio,
+		unsigned int *nr_segs);
+int ll_back_merge_fn(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
+int ll_front_merge_fn(struct request *req, struct bio *bio,
+		unsigned int nr_segs);
 struct request *attempt_back_merge(struct request_queue *q, struct request *rq);
 struct request *attempt_front_merge(struct request_queue *q, struct request *rq);
 int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index c3b05119cebd..3c2602601741 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -562,7 +562,8 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
 	}
 }

-static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
+static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct kyber_hctx_data *khd = hctx->sched_data;
 	struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue);
@@ -572,7 +573,7 @@ static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
 	bool merged;

 	spin_lock(&kcq->lock);
-	merged = blk_mq_bio_list_merge(hctx->queue, rq_list, bio);
+	merged = blk_mq_bio_list_merge(hctx->queue, rq_list, bio, nr_segs);
 	spin_unlock(&kcq->lock);
 	blk_mq_put_ctx(ctx);

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 1876f5712bfd..b8a682b5a1bb 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -469,7 +469,8 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
 	return ELEVATOR_NO_MERGE;
 }

-static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
+static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
+		unsigned int nr_segs)
 {
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
@@ -477,7 +478,7 @@ static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio)
 	bool ret;

 	spin_lock(&dd->lock);
-	ret = blk_mq_sched_try_merge(q, bio, &free);
+	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
 	spin_unlock(&dd->lock);

 	if (free)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 7fde645d2e90..16358fdd1850 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5259,7 +5259,6 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
 	rcu_read_unlock();
 	raid_bio->bi_next = (void*)rdev;
 	bio_set_dev(align_bi, rdev->bdev);
-	bio_clear_flag(align_bi, BIO_SEG_VALID);

 	if (is_badblock(rdev, align_bi->bi_iter.bi_sector,
 			bio_sectors(align_bi),
diff --git a/include/linux/bio.h b/include/linux/bio.h
index ea73df36529a..e0f3b8898a81 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -408,7 +408,6 @@ static inline void bio_wouldblock_error(struct bio *bio)
 }

 struct request_queue;
-extern int bio_phys_segments(struct request_queue *, struct bio *);

 extern int submit_bio_wait(struct bio *bio);
 extern void bio_advance(struct bio *, unsigned);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 15d1aa53d96c..3fa1fa59f9b2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -306,7 +306,7 @@ void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs
 bool blk_mq_complete_request(struct request *rq);
 void blk_mq_complete_request_sync(struct request *rq);
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
-		struct bio *bio);
+		struct bio *bio, unsigned int nr_segs);
 bool blk_mq_queue_stopped(struct request_queue *q);
 void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
 void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 95202f80676c..6a53799c3fe2 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -154,11 +154,6 @@ struct bio {
 	blk_status_t bi_status;
 	u8 bi_partno;

-	/* Number of segments in this BIO after
-	 * physical address coalescing is performed.
-	 */
-	unsigned int bi_phys_segments;
-
 	struct bvec_iter bi_iter;

 	atomic_t __bi_remaining;
@@ -210,7 +205,6 @@ struct bio {
  */
 enum {
 	BIO_NO_PAGE_REF,	/* don't put release vec pages */
-	BIO_SEG_VALID,		/* bi_phys_segments valid */
 	BIO_CLONED,		/* doesn't own data */
 	BIO_BOUNCED,		/* bio is a bounce bio */
 	BIO_USER_MAPPED,	/* contains user pages */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05bc85cdbc25..96edfdf34cd3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -836,7 +836,6 @@ extern blk_status_t blk_insert_cloned_request(struct request_queue *q,
 					      struct request *rq);
 extern int blk_rq_append_bio(struct request *rq, struct bio **bio);
 extern void blk_queue_split(struct request_queue *, struct bio **);
-extern void blk_recount_segments(struct request_queue *, struct bio *);
 extern int scsi_verify_blk_ioctl(struct block_device *, unsigned int);
 extern int scsi_cmd_blk_ioctl(struct block_device *, fmode_t,
 			      unsigned int, void __user *);
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 6e8bc53740f0..169bb2e02516 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -34,7 +34,7 @@ struct elevator_mq_ops {
 	void (*depth_updated)(struct blk_mq_hw_ctx *);

 	bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
-	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *);
+	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *, unsigned int);
 	int (*request_merge)(struct request_queue *q, struct request **, struct bio *);
 	void (*request_merged)(struct request_queue *, struct request *, enum elv_merge);
 	void (*requests_merged)(struct request_queue *, struct request *, struct request *);
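[Editorial aside: the elevator_mq_ops change above is the only interface change an I/O scheduler sees. A minimal sketch of a converted bio_merge hook, modeled on the dd_bio_merge conversion in this patch; the "foo" scheduler, its foo_data layout, and its lock are hypothetical:

	struct foo_data {
		spinlock_t lock;	/* hypothetical per-queue lock */
	};

	static bool foo_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
			unsigned int nr_segs)
	{
		struct request_queue *q = hctx->queue;
		struct foo_data *fd = q->elevator->elevator_data;
		struct request *free = NULL;
		bool ret;

		spin_lock(&fd->lock);
		/* nr_segs is simply forwarded; merging no longer has to
		 * recount the bio's segments */
		ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
		spin_unlock(&fd->lock);

		if (free)
			blk_mq_free_request(free);
		return ret;
	}

The segment count computed once at split time flows all the way down, which is the point of threading nr_segs through every merge path.]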
From patchwork Mon May 13 06:37:52 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940381
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 08/10] block: simplify blk_recalc_rq_segments
Date: Mon, 13 May 2019 08:37:52 +0200
Message-Id: <20190513063754.1520-9-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

Return the segment count and let the callers assign it, which makes
the code a little more obvious.  Also pass the request instead of q
plus bio chain, allowing for the use of rq_for_each_bvec.

Signed-off-by: Christoph Hellwig
---
 block/blk-core.c  |  4 ++--
 block/blk-merge.c | 21 ++++++---------------
 block/blk.h       |  2 +-
 3 files changed, 9 insertions(+), 18 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 84118411626c..c894c9887dca 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1198,7 +1198,7 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
 	 * Recalculate it to check the request correctly on this queue's
 	 * limitation.
 	 */
-	blk_recalc_rq_segments(rq);
+	rq->nr_phys_segments = blk_recalc_rq_segments(rq);
 	if (rq->nr_phys_segments > queue_max_segments(q)) {
 		printk(KERN_ERR "%s: over max segments limit.\n", __func__);
 		return -EIO;
@@ -1466,7 +1466,7 @@ bool blk_update_request(struct request *req, blk_status_t error,
 		}

 		/* recalculate the number of segments */
-		blk_recalc_rq_segments(req);
+		req->nr_phys_segments = blk_recalc_rq_segments(req);
 	}

 	return true;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 72b4fd89a22d..2ea21ffd5f72 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -310,17 +310,16 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
 }
 EXPORT_SYMBOL(blk_queue_split);

-static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
-		struct bio *bio)
+unsigned int blk_recalc_rq_segments(struct request *rq)
 {
 	unsigned int nr_phys_segs = 0;
-	struct bvec_iter iter;
+	struct req_iterator iter;
 	struct bio_vec bv;

-	if (!bio)
+	if (!rq->bio)
 		return 0;

-	switch (bio_op(bio)) {
+	switch (bio_op(rq->bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
@@ -329,19 +328,11 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 		return 1;
 	}

-	for_each_bio(bio) {
-		bio_for_each_bvec(bv, bio, iter)
-			bvec_split_segs(q, &bv, &nr_phys_segs, NULL, UINT_MAX);
-	}
-
+	rq_for_each_bvec(bv, rq, iter)
+		bvec_split_segs(rq->q, &bv, &nr_phys_segs, NULL, UINT_MAX);
 	return nr_phys_segs;
 }

-void blk_recalc_rq_segments(struct request *rq)
-{
-	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
-}
-
 static inline struct scatterlist *blk_next_sg(struct scatterlist **sg,
 		struct scatterlist *sglist)
 {
diff --git a/block/blk.h b/block/blk.h
index a18d0b9fe353..5352cdb876a6 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -204,7 +204,7 @@ struct request *attempt_back_merge(struct request_queue *q, struct request *rq);
 struct request *attempt_front_merge(struct request_queue *q, struct request *rq);
 int blk_attempt_req_merge(struct request_queue *q, struct request *rq,
 			  struct request *next);
-void blk_recalc_rq_segments(struct request *rq);
+unsigned int blk_recalc_rq_segments(struct request *rq);
 void blk_rq_set_mixed_merge(struct request *rq);
 bool blk_rq_merge_ok(struct request *rq, struct bio *bio);
 enum elv_merge blk_try_merge(struct request *rq, struct bio *bio);
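[Editorial aside: since blk_recalc_rq_segments now walks the whole request with rq_for_each_bvec, the same iterator pattern works for any per-request bvec walk. A minimal sketch, assuming only the existing req_iterator machinery; the byte-counting helper itself is made up for illustration:

	static unsigned int count_rq_bytes(struct request *rq)
	{
		struct req_iterator iter;
		struct bio_vec bv;
		unsigned int bytes = 0;

		/* visits every bvec across the request's whole bio chain;
		 * with multi-page bvecs each bv may span several pages */
		rq_for_each_bvec(bv, rq, iter)
			bytes += bv.bv_len;

		return bytes;
	}

This is exactly the shape of the new counting loop above, minus the split accounting done by bvec_split_segs.]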
From patchwork Mon May 13 06:37:53 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940383
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 09/10] block: untangle the end of blk_bio_segment_split
Date: Mon, 13 May 2019 08:37:53 +0200
Message-Id: <20190513063754.1520-10-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

Now that we don't need to assign the front/back segment sizes, we can
duplicate the segs assignment for the split vs no-split cases and
remove a whole chunk of boilerplate code.

Signed-off-by: Christoph Hellwig
---
 block/blk-merge.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2ea21ffd5f72..ca45eb51c669 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -202,8 +202,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	struct bio_vec bv, bvprv, *bvprvp = NULL;
 	struct bvec_iter iter;
 	unsigned nsegs = 0, sectors = 0;
-	bool do_split = true;
-	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
 	const unsigned max_segs = queue_max_segments(q);

@@ -245,17 +243,11 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		}
 	}

-	do_split = false;
+	*segs = nsegs;
+	return NULL;
 split:
 	*segs = nsegs;
-
-	if (do_split) {
-		new = bio_split(bio, sectors, GFP_NOIO, bs);
-		if (new)
-			bio = new;
-	}
-
-	return do_split ? new : NULL;
+	return bio_split(bio, sectors, GFP_NOIO, bs);
 }

 void __blk_queue_split(struct request_queue *q, struct bio **bio,
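[Editorial aside: blk_bio_segment_split now returns NULL when no split is needed and the split-off front part otherwise, so the caller only has to chain and resubmit the remainder. A rough sketch of that caller-side contract, approximating __blk_queue_split from this series; error paths and the op-specific splitters are omitted, so treat it as orientation rather than the literal code:

	static void queue_split_sketch(struct request_queue *q, struct bio **bio,
			unsigned int *nr_segs)
	{
		struct bio *split;

		split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs);
		if (split) {
			/* the split-off front part must not be merged again */
			split->bi_opf |= REQ_NOMERGE;
			bio_chain(split, *bio);
			/* resubmit the remainder, carry on with the front */
			generic_make_request(*bio);
			*bio = split;
		}
	}

Note how *nr_segs is filled in on both paths, which is what lets blk_mq_make_request pass an always-valid segment count to the merge code.]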
From patchwork Mon May 13 06:37:54 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10940385
From: Christoph Hellwig
To: axboe@fb.com
Cc: ming.lei@redhat.com, Matias Bjorling, linux-block@vger.kernel.org
Subject: [PATCH 10/10] block: mark blk_rq_bio_prep as inline
Date: Mon, 13 May 2019 08:37:54 +0200
Message-Id: <20190513063754.1520-11-hch@lst.de>
In-Reply-To: <20190513063754.1520-1-hch@lst.de>
References: <20190513063754.1520-1-hch@lst.de>

This function just has a few trivial assignments and only two callers,
one of which is in the fast path.  Move it to a header and mark it
inline.
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-core.c | 11 -----------
 block/blk.h      | 13 ++++++++++++-
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c894c9887dca..9405388ac658 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1473,17 +1473,6 @@ bool blk_update_request(struct request *req, blk_status_t error,
 }
 EXPORT_SYMBOL_GPL(blk_update_request);

-void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs)
-{
-	rq->nr_phys_segments = nr_segs;
-	rq->__data_len = bio->bi_iter.bi_size;
-	rq->bio = rq->biotail = bio;
-	rq->ioprio = bio_prio(bio);
-
-	if (bio->bi_disk)
-		rq->rq_disk = bio->bi_disk;
-}
-
 #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
 /**
  * rq_flush_dcache_pages - Helper function to flush all pages in a request
diff --git a/block/blk.h b/block/blk.h
index 5352cdb876a6..cbb0995ed17e 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -51,7 +51,6 @@ struct blk_flush_queue *blk_alloc_flush_queue(struct request_queue *q,
 void blk_free_flush_queue(struct blk_flush_queue *q);

 void blk_exit_queue(struct request_queue *q);
-void blk_rq_bio_prep(struct request *rq, struct bio *bio, unsigned int nr_segs);
 void blk_freeze_queue(struct request_queue *q);

 static inline void blk_queue_enter_live(struct request_queue *q)
@@ -100,6 +99,18 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 	return __bvec_gap_to_prev(q, bprv, offset);
 }

+static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
+{
+	rq->nr_phys_segments = nr_segs;
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio->bi_disk)
+		rq->rq_disk = bio->bi_disk;
+}
+
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
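[Editorial aside: taken together with patch 08, the blk-mq fast path now reduces to straight-line stores. A sketch of what blk_mq_bio_to_request effectively becomes once the compiler inlines the helper; this is an illustrative expansion assembled from the diffs in this series, not a literal patch:

	static void blk_mq_bio_to_request_inlined_view(struct request *rq,
			struct bio *bio, unsigned int nr_segs)
	{
		if (bio->bi_opf & REQ_RAHEAD)
			rq->cmd_flags |= REQ_FAILFAST_MASK;

		rq->__sector = bio->bi_iter.bi_sector;
		rq->write_hint = bio->bi_write_hint;

		/* body of the now-inline blk_rq_bio_prep */
		rq->nr_phys_segments = nr_segs;
		rq->__data_len = bio->bi_iter.bi_size;
		rq->bio = rq->biotail = bio;
		rq->ioprio = bio_prio(bio);
		if (bio->bi_disk)
			rq->rq_disk = bio->bi_disk;

		blk_account_io_start(rq, true);
	}

No function call, no segment recount: the count computed once in __blk_queue_split is stored directly into the request.]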