From patchwork Wed Nov 21 03:23:09 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691603
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Theodore Ts'o, Omar Sandoval, Sagi Grimberg,
    Dave Chinner, Kent Overstreet, Mike Snitzer, dm-devel@redhat.com,
    Alexander Viro, linux-fsdevel@vger.kernel.org, Shaohua Li,
    linux-raid@vger.kernel.org, David Sterba, linux-btrfs@vger.kernel.org,
    "Darrick J. Wong", linux-xfs@vger.kernel.org, Gao Xiang,
    Christoph Hellwig, linux-ext4@vger.kernel.org, Coly Li,
    linux-bcache@vger.kernel.org, Boaz Harrosh, Bob Peterson,
    cluster-devel@redhat.com, Ming Lei
Subject: [PATCH V11 01/19] block: don't use bio->bi_vcnt to figure out segment number
Date: Wed, 21 Nov 2018 11:23:09 +0800
Message-Id: <20181121032327.8434-2-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

It is wrong to use bio->bi_vcnt to figure out how many segments there
are in the bio, even when the CLONED flag isn't set on the bio, because
the bio may have been split or advanced.

So always use bio_segments() in blk_recount_segments(). This shouldn't
cause any performance loss now, because the physical segment number is
figured out in blk_queue_split() and BIO_SEG_VALID is set at the same
time, since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
splitting").

Reviewed-by: Christoph Hellwig
Fixes: 76d8137a3113 ("blk-merge: recaculate segment if it isn't less than max segments")
Signed-off-by: Ming Lei
---
 block/blk-merge.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index b1df622cbd85..f52400ce2187 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -368,13 +368,7 @@ void blk_recalc_rq_segments(struct request *rq)
 
 void blk_recount_segments(struct request_queue *q, struct bio *bio)
 {
-        unsigned short seg_cnt;
-
-        /* estimate segment number by bi_vcnt for non-cloned bio */
-        if (bio_flagged(bio, BIO_CLONED))
-                seg_cnt = bio_segments(bio);
-        else
-                seg_cnt = bio->bi_vcnt;
+        unsigned short seg_cnt = bio_segments(bio);
 
         if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
                         (seg_cnt < queue_max_segments(q)))
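To make the failure mode concrete, a minimal sketch (the bio layout is
assumed for illustration; bio_advance() and bio_segments() are the real
interfaces):

/*
 * Sketch: a non-cloned bio built over four single-page bvecs, then
 * advanced by one page (e.g. after a partial completion or a split).
 */
struct bio *bio;                /* assume: 4 bvecs, 4 * PAGE_SIZE bytes */

bio_advance(bio, PAGE_SIZE);    /* consume the first page */

/*
 * bio->bi_vcnt is still 4 -- it describes the bvec table, not the
 * remaining I/O -- while bio_segments(bio) walks bi_iter and correctly
 * reports 3. So bi_vcnt must not be used for the segment count even
 * when BIO_CLONED is not set.
 */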
From patchwork Wed Nov 21 03:23:10 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691615
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 02/19] block: introduce multi-page bvec helpers
Date: Wed, 21 Nov 2018 11:23:10 +0800
Message-Id: <20181121032327.8434-3-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

This patch introduces 'segment_iter_*' helpers for multi-page bvec
support.

The new helpers treat one bvec as a real multi-page segment, which may
include more than one page. The existing bvec_iter_* helpers are the
interfaces of the current bvec iterator, which drivers, filesystems,
dm and so on treat as single-page. The new helpers build single-page
bvecs in flight, so current bio/bvec users won't be broken and need no
change.

Some multi-page bvec background:

- bvecs stored in bio->bi_io_vec are always multi-page style

- a bvec (struct bio_vec) represents one physically contiguous I/O
  buffer; the buffer may include more than one page after multi-page
  bvec is supported, and all the pages represented by one bvec are
  physically contiguous. Before multi-page bvec support, at most one
  page was included in one bvec; we call that a single-page bvec.

- .bv_page of the bvec points to the 1st page in the multi-page bvec

- .bv_offset of the bvec is the offset of the buffer in the bvec

The effect on current drivers/filesystems/dm/bcache/...:

- almost everyone assumes that one bvec only includes one single page,
  so we keep the single-page interfaces unchanged; for example,
  bio_for_each_segment() still returns single-page bvecs

- bio_for_each_segment_all() will return single-page bvecs too

- during iteration, the iterator variable (struct bvec_iter) is always
  updated in multi-page bvec style, and bvec_iter_advance() is kept
  unchanged

- the returned (copied) single-page bvec is built in flight by the
  bvec helpers from the stored multi-page bvec
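A worked illustration of the helpers introduced below (all numbers are
assumed, with PAGE_SIZE = 4096 = 0x1000):

/*
 * Assume a multi-page bvec covering three physically contiguous pages:
 *         bv_page = P0, bv_offset = 0x800, bv_len = 0x2800
 * and an iterator with bi_bvec_done = 0x1000 and ample bi_size.
 *
 *         segment_iter_offset()   = 0x800 + 0x1000              = 0x1800
 *         segment_iter_len()      = 0x2800 - 0x1000             = 0x1800
 *         segment_iter_page_idx() = 0x1800 / 0x1000             = 1 (page P1)
 *         bvec_iter_offset()      = 0x1800 % 0x1000             = 0x800
 *         bvec_iter_len()         = min(0x1800, 0x1000 - 0x800) = 0x800
 *         bvec_iter_page()        = nth_page(P0, 1)             = P1
 *
 * i.e. bvec_iter_bvec() hands out the single-page bvec
 * { P1, 0x800, 0x800 } without touching the stored multi-page bvec.
 */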
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 include/linux/bvec.h | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 02c73c6aa805..ed90bbf4c9c9 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -23,6 +23,7 @@
 #include <linux/kernel.h>
 #include <linux/bug.h>
 #include <linux/errno.h>
+#include <linux/mm.h>
 
 /*
  * was unsigned short, but we might as well be ready for > 64kB I/O pages
@@ -50,16 +51,35 @@ struct bvec_iter {
  */
 #define __bvec_iter_bvec(bvec, iter)        (&(bvec)[(iter).bi_idx])
 
-#define bvec_iter_page(bvec, iter)                                \
+#define segment_iter_page(bvec, iter)                             \
         (__bvec_iter_bvec((bvec), (iter))->bv_page)
 
-#define bvec_iter_len(bvec, iter)                                 \
+#define segment_iter_len(bvec, iter)                              \
         min((iter).bi_size,                                       \
             __bvec_iter_bvec((bvec), (iter))->bv_len - (iter).bi_bvec_done)
 
-#define bvec_iter_offset(bvec, iter)                              \
+#define segment_iter_offset(bvec, iter)                           \
         (__bvec_iter_bvec((bvec), (iter))->bv_offset + (iter).bi_bvec_done)
 
+#define segment_iter_page_idx(bvec, iter)                         \
+        (segment_iter_offset((bvec), (iter)) / PAGE_SIZE)
+
+/*
+ * of single-page segment.
+ *
+ * These helpers are for building single-page bvec in flight.
+ */
+#define bvec_iter_offset(bvec, iter)                              \
+        (segment_iter_offset((bvec), (iter)) % PAGE_SIZE)
+
+#define bvec_iter_len(bvec, iter)                                 \
+        min_t(unsigned, segment_iter_len((bvec), (iter)),         \
+              PAGE_SIZE - bvec_iter_offset((bvec), (iter)))
+
+#define bvec_iter_page(bvec, iter)                                \
+        nth_page(segment_iter_page((bvec), (iter)),               \
+                 segment_iter_page_idx((bvec), (iter)))
+
 #define bvec_iter_bvec(bvec, iter)                                \
 ((struct bio_vec) {                                               \
         .bv_page        = bvec_iter_page((bvec), (iter)),         \

From patchwork Wed Nov 21 03:23:11 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691623
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 03/19] block: introduce bio_for_each_bvec()
Date: Wed, 21 Nov 2018 11:23:11 +0800
Message-Id: <20181121032327.8434-4-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

This helper is used to iterate over multi-page bvecs in the bio split
& merge code.
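A usage sketch of the new iterator against the existing single-page one
(process_page() and process_buffer() are made-up consumers, not part of
the patch):

struct bio_vec bv;
struct bvec_iter iter;

/* unchanged: one single-page segment per iteration */
bio_for_each_segment(bv, bio, iter)
        process_page(bv.bv_page, bv.bv_offset, bv.bv_len);

/* new: one full multi-page bvec per iteration */
bio_for_each_bvec(bv, bio, iter)
        process_buffer(bv.bv_page, bv.bv_offset, bv.bv_len);
        /* here bv.bv_len may exceed PAGE_SIZE; bv.bv_page is the 1st page */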
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 include/linux/bio.h  | 25 ++++++++++++++++++++++---
 include/linux/bvec.h | 36 +++++++++++++++++++++++++++++-------
 2 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index 056fb627edb3..7560209d6a8a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -76,6 +76,9 @@
 #define bio_data_dir(bio) \
         (op_is_write(bio_op(bio)) ? WRITE : READ)
 
+#define bio_iter_mp_iovec(bio, iter)                              \
+        segment_iter_bvec((bio)->bi_io_vec, (iter))
+
 /*
  * Check whether this bio carries any data or not. A NULL bio is allowed.
  */
@@ -135,18 +138,24 @@ static inline bool bio_full(struct bio *bio)
 #define bio_for_each_segment_all(bvl, bio, i)                     \
         for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++)
 
-static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
-                                    unsigned bytes)
+static inline void __bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
+                                      unsigned bytes, unsigned max_seg_len)
 {
         iter->bi_sector += bytes >> 9;
 
         if (bio_no_advance_iter(bio))
                 iter->bi_size -= bytes;
         else
-                bvec_iter_advance(bio->bi_io_vec, iter, bytes);
+                __bvec_iter_advance(bio->bi_io_vec, iter, bytes, max_seg_len);
                 /* TODO: It is reasonable to complete bio with error here. */
 }
 
+static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
+                                    unsigned bytes)
+{
+        __bio_advance_iter(bio, iter, bytes, PAGE_SIZE);
+}
+
 #define __bio_for_each_segment(bvl, bio, iter, start)             \
         for (iter = (start);                                      \
              (iter).bi_size &&                                    \
@@ -156,6 +165,16 @@ static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
 #define bio_for_each_segment(bvl, bio, iter)                      \
         __bio_for_each_segment(bvl, bio, iter, (bio)->bi_iter)
 
+#define __bio_for_each_bvec(bvl, bio, iter, start)                \
+        for (iter = (start);                                      \
+             (iter).bi_size &&                                    \
+                ((bvl = bio_iter_mp_iovec((bio), (iter))), 1);    \
+             __bio_advance_iter((bio), &(iter), (bvl).bv_len, BVEC_MAX_LEN))
+
+/* returns one real segment(multi-page bvec) each time */
+#define bio_for_each_bvec(bvl, bio, iter)                         \
+        __bio_for_each_bvec(bvl, bio, iter, (bio)->bi_iter)
+
 #define bio_iter_last(bvec, iter) ((iter).bi_size == (bvec).bv_len)
 
 static inline unsigned bio_segments(struct bio *bio)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index ed90bbf4c9c9..b279218c5c4d 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -25,6 +25,8 @@
 #include <linux/errno.h>
 #include <linux/mm.h>
 
+#define BVEC_MAX_LEN  ((unsigned int)-1)
+
 /*
  * was unsigned short, but we might as well be ready for > 64kB I/O pages
  */
@@ -87,8 +89,15 @@ struct bvec_iter {
         .bv_offset      = bvec_iter_offset((bvec), (iter)),       \
 })
 
-static inline bool bvec_iter_advance(const struct bio_vec *bv,
-                struct bvec_iter *iter, unsigned bytes)
+#define segment_iter_bvec(bvec, iter)                             \
+((struct bio_vec) {                                               \
+        .bv_page        = segment_iter_page((bvec), (iter)),      \
+        .bv_len         = segment_iter_len((bvec), (iter)),       \
+        .bv_offset      = segment_iter_offset((bvec), (iter)),    \
+})
+
+static inline bool __bvec_iter_advance(const struct bio_vec *bv,
+                struct bvec_iter *iter, unsigned bytes, unsigned max_seg_len)
 {
         if (WARN_ONCE(bytes > iter->bi_size,
                      "Attempted to advance past end of bvec iter\n")) {
@@ -97,12 +106,18 @@ static inline bool bvec_iter_advance(const struct bio_vec *bv,
         }
 
         while (bytes) {
-                unsigned iter_len = bvec_iter_len(bv, *iter);
-                unsigned len = min(bytes, iter_len);
+                unsigned segment_len = segment_iter_len(bv, *iter);
 
-                bytes -= len;
-                iter->bi_size -= len;
-                iter->bi_bvec_done += len;
+                if (max_seg_len < BVEC_MAX_LEN)
+                        segment_len = min_t(unsigned, segment_len,
+                                            max_seg_len -
+                                            bvec_iter_offset(bv, *iter));
+
+                segment_len = min(bytes, segment_len);
+
+                bytes -= segment_len;
+                iter->bi_size -= segment_len;
+                iter->bi_bvec_done += segment_len;
 
                 if (iter->bi_bvec_done == __bvec_iter_bvec(bv, *iter)->bv_len) {
                         iter->bi_bvec_done = 0;
@@ -136,6 +151,13 @@ static inline bool bvec_iter_rewind(const struct bio_vec *bv,
         return true;
 }
 
+static inline bool bvec_iter_advance(const struct bio_vec *bv,
+                                     struct bvec_iter *iter,
+                                     unsigned bytes)
+{
+        return __bvec_iter_advance(bv, iter, bytes, PAGE_SIZE);
+}
+
 #define for_each_bvec(bvl, bio_vec, iter, start)                  \
         for (iter = (start);                                      \
              (iter).bi_size &&                                    \

From patchwork Wed Nov 21 03:23:12 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691639
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 04/19] block: use bio_for_each_bvec() to compute multi-page bvec count
Date: Wed, 21 Nov 2018 11:23:12 +0800
Message-Id: <20181121032327.8434-5-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

First, it is more efficient to use bio_for_each_bvec() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many multi-page bvecs there are in the bio.

Second, once bio_for_each_bvec() is used, a bvec may need to be split
because its length can be much longer than the max segment size, so we
have to split the big bvec into several segments.

Third, when splitting a multi-page bvec into segments, the max segment
limit may be reached, so the bio split needs to be considered in this
situation too.
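To see what the splitting below computes, a worked example around the
new bvec_split_segs() (all queue limits are assumed for illustration):

/*
 * Assume queue_max_segment_size(q) = 64K, queue_max_segments(q) = 128,
 * no virt boundary, *nsegs = 0, and one 192K multi-page bvec.
 *
 * The loop in bvec_split_segs() then runs three times:
 *         pass 1: seg_size = 64K, new_nsegs = 1, len = 128K
 *         pass 2: seg_size = 64K, new_nsegs = 2, len = 64K
 *         pass 3: seg_size = 64K, new_nsegs = 3, len = 0
 * so this single bvec accounts for three physical segments, and the
 * return value (len != 0) tells the caller whether a queue limit was
 * hit in the middle of the bvec, i.e. whether the bio itself must be
 * split at that point.
 */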
Signed-off-by: Ming Lei
---
 block/blk-merge.c | 87 +++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 68 insertions(+), 19 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index f52400ce2187..ec0b93fa1ff8 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -161,6 +161,54 @@ static inline unsigned get_max_io_size(struct request_queue *q,
         return sectors;
 }
 
+/*
+ * Split the bvec @bv into segments, and update all kinds of
+ * variables.
+ */
+static bool bvec_split_segs(struct request_queue *q, struct bio_vec *bv,
+                unsigned *nsegs, unsigned *last_seg_size,
+                unsigned *front_seg_size, unsigned *sectors)
+{
+        unsigned len = bv->bv_len;
+        unsigned total_len = 0;
+        unsigned new_nsegs = 0, seg_size = 0;
+
+        /*
+         * A multi-page bvec may be too big to hold in one segment,
+         * so the current bvec has to be split into multiple
+         * segments.
+         */
+        while (len && new_nsegs + *nsegs < queue_max_segments(q)) {
+                seg_size = min(queue_max_segment_size(q), len);
+
+                new_nsegs++;
+                total_len += seg_size;
+                len -= seg_size;
+
+                if ((bv->bv_offset + total_len) & queue_virt_boundary(q))
+                        break;
+        }
+
+        /* update front segment size */
+        if (!*nsegs) {
+                unsigned first_seg_size = seg_size;
+
+                if (new_nsegs > 1)
+                        first_seg_size = queue_max_segment_size(q);
+                if (*front_seg_size < first_seg_size)
+                        *front_seg_size = first_seg_size;
+        }
+
+        /* update other variables */
+        *last_seg_size = seg_size;
+        *nsegs += new_nsegs;
+        if (sectors)
+                *sectors += total_len >> 9;
+
+        /* split in the middle of the bvec if len != 0 */
+        return !!len;
+}
+
 static struct bio *blk_bio_segment_split(struct request_queue *q,
                                          struct bio *bio,
                                          struct bio_set *bs,
@@ -174,7 +222,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
         struct bio *new = NULL;
         const unsigned max_sectors = get_max_io_size(q, bio);
 
-        bio_for_each_segment(bv, bio, iter) {
+        bio_for_each_bvec(bv, bio, iter) {
                 /*
                  * If the queue doesn't support SG gaps and adding this
                  * offset would create a gap, disallow it.
@@ -189,8 +237,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
                          */
                         if (nsegs < queue_max_segments(q) &&
                             sectors < max_sectors) {
-                                nsegs++;
-                                sectors = max_sectors;
+                                /* split in the middle of bvec */
+                                bv.bv_len = (max_sectors - sectors) << 9;
+                                bvec_split_segs(q, &bv, &nsegs,
+                                                &seg_size,
+                                                &front_seg_size,
+                                                &sectors);
                         }
                         goto split;
                 }
@@ -212,14 +264,12 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
                 if (nsegs == queue_max_segments(q))
                         goto split;
 
-                if (nsegs == 1 && seg_size > front_seg_size)
-                        front_seg_size = seg_size;
-
-                nsegs++;
                 bvprv = bv;
                 bvprvp = &bvprv;
-                seg_size = bv.bv_len;
-                sectors += bv.bv_len >> 9;
+
+                if (bvec_split_segs(q, &bv, &nsegs, &seg_size,
+                                    &front_seg_size, &sectors))
+                        goto split;
         }
 
@@ -233,8 +283,6 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
                 bio = new;
         }
 
-        if (nsegs == 1 && seg_size > front_seg_size)
-                front_seg_size = seg_size;
         bio->bi_seg_front_size = front_seg_size;
         if (seg_size > bio->bi_seg_back_size)
                 bio->bi_seg_back_size = seg_size;
@@ -297,6 +345,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
         struct bio_vec bv, bvprv = { NULL };
         int cluster, prev = 0;
         unsigned int seg_size, nr_phys_segs;
+        unsigned front_seg_size = bio->bi_seg_front_size;
         struct bio *fbio, *bbio;
         struct bvec_iter iter;
 
@@ -317,7 +366,7 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
         seg_size = 0;
         nr_phys_segs = 0;
         for_each_bio(bio) {
-                bio_for_each_segment(bv, bio, iter) {
+                bio_for_each_bvec(bv, bio, iter) {
                         /*
                          * If SG merging is disabled, each bio vector is
                          * a segment
@@ -337,20 +386,20 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
                                 continue;
                         }
 new_segment:
-                        if (nr_phys_segs == 1 && seg_size >
-                            fbio->bi_seg_front_size)
-                                fbio->bi_seg_front_size = seg_size;
+                        if (nr_phys_segs == 1 && seg_size > front_seg_size)
+                                front_seg_size = seg_size;
 
-                        nr_phys_segs++;
                         bvprv = bv;
                         prev = 1;
-                        seg_size = bv.bv_len;
+                        bvec_split_segs(q, &bv, &nr_phys_segs, &seg_size,
+                                        &front_seg_size, NULL);
                 }
                 bbio = bio;
         }
 
-        if (nr_phys_segs == 1 && seg_size > fbio->bi_seg_front_size)
-                fbio->bi_seg_front_size = seg_size;
+        if (nr_phys_segs == 1 && seg_size > front_seg_size)
+                front_seg_size = seg_size;
+        fbio->bi_seg_front_size = front_seg_size;
         if (seg_size > bbio->bi_seg_back_size)
                 bbio->bi_seg_back_size = seg_size;

From patchwork Wed Nov 21 03:23:13 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691649
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 05/19] block: use bio_for_each_bvec() to map sg
Date: Wed, 21 Nov 2018 11:23:13 +0800
Message-Id: <20181121032327.8434-6-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

It is more efficient to use bio_for_each_bvec() to map sg; meanwhile
we have to consider splitting multi-page bvecs, as done in
blk_bio_segment_split().
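A worked example of the per-bvec sg mapping added below (numbers
assumed; PAGE_SIZE = 4096):

/*
 * Assume a multi-page bvec { bv_page = P0, bv_offset = 0x800,
 * bv_len = 0x2000 } and a 64K max segment size. blk_bvec_map_sg()
 * emits a single sg entry:
 *         sg_set_page(*sg, P0, 0x2000, 0x800)
 * which is fine even though it crosses a page boundary, because the
 * pages behind a bvec are physically contiguous. Only when bv_len
 * exceeded queue_max_segment_size(q) would the loop emit several sg
 * entries, advancing nth_page()/offset by what is already mapped.
 */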
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 block/blk-merge.c | 68 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 48 insertions(+), 20 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index ec0b93fa1ff8..8829c51b4e75 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -455,6 +455,52 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
         return biovec_phys_mergeable(q, &end_bv, &nxt_bv);
 }
 
+static struct scatterlist *blk_next_sg(struct scatterlist **sg,
+                struct scatterlist *sglist)
+{
+        if (!*sg)
+                return sglist;
+
+        /*
+         * If the driver previously mapped a shorter list, we could see a
+         * termination bit prematurely unless it fully inits the sg table
+         * on each mapping. We KNOW that there must be more entries here
+         * or the driver would be buggy, so force clear the termination bit
+         * to avoid doing a full sg_init_table() in drivers for each command.
+         */
+        sg_unmark_end(*sg);
+        return sg_next(*sg);
+}
+
+static unsigned blk_bvec_map_sg(struct request_queue *q,
+                struct bio_vec *bvec, struct scatterlist *sglist,
+                struct scatterlist **sg)
+{
+        unsigned nbytes = bvec->bv_len;
+        unsigned nsegs = 0, total = 0;
+
+        while (nbytes > 0) {
+                unsigned seg_size;
+                struct page *pg;
+                unsigned offset, idx;
+
+                *sg = blk_next_sg(sg, sglist);
+
+                seg_size = min(nbytes, queue_max_segment_size(q));
+                offset = (total + bvec->bv_offset) % PAGE_SIZE;
+                idx = (total + bvec->bv_offset) / PAGE_SIZE;
+                pg = nth_page(bvec->bv_page, idx);
+
+                sg_set_page(*sg, pg, seg_size, offset);
+
+                total += seg_size;
+                nbytes -= seg_size;
+                nsegs++;
+        }
+
+        return nsegs;
+}
+
 static inline void
 __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
                      struct scatterlist *sglist, struct bio_vec *bvprv,
@@ -472,25 +518,7 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
                 (*sg)->length += nbytes;
         } else {
 new_segment:
-                if (!*sg)
-                        *sg = sglist;
-                else {
-                        /*
-                         * If the driver previously mapped a shorter
-                         * list, we could see a termination bit
-                         * prematurely unless it fully inits the sg
-                         * table on each mapping. We KNOW that there
-                         * must be more entries here or the driver
-                         * would be buggy, so force clear the
-                         * termination bit to avoid doing a full
-                         * sg_init_table() in drivers for each command.
-                         */
-                        sg_unmark_end(*sg);
-                        *sg = sg_next(*sg);
-                }
-
-                sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset);
-                (*nsegs)++;
+                (*nsegs) += blk_bvec_map_sg(q, bvec, sglist, sg);
         }
         *bvprv = *bvec;
 }
@@ -512,7 +540,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
         int cluster = blk_queue_cluster(q), nsegs = 0;
 
         for_each_bio(bio)
-                bio_for_each_segment(bvec, bio, iter)
+                bio_for_each_bvec(bvec, bio, iter)
                         __blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
                                              &nsegs, &cluster);

From patchwork Wed Nov 21 03:23:14 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691663
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 06/19] block: introduce bvec_last_segment()
Date: Wed, 21 Nov 2018 11:23:14 +0800
Message-Id: <20181121032327.8434-7-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

BTRFS and guard_bio_eod() need to get the last single-page segment
from one multi-page bvec, so introduce this helper to make them happy.
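A worked example of the helper introduced below (assuming 4096-byte
pages; the bvec values are made up):

/*
 * Assume a bvec { bv_page = P0, bv_offset = 0x800, bv_len = 0x2000 }.
 * The buffer ends at total = 0x2800, so
 *         last_page = (0x2800 - 1) / 0x1000 = 2.
 * Since bv_offset (0x800) < last_page * PAGE_SIZE (0x2000), the last
 * segment starts at the page boundary:
 *         seg->bv_page   = nth_page(P0, 2)
 *         seg->bv_offset = 0
 *         seg->bv_len    = 0x2800 - 0x2000 = 0x800
 * i.e. the final 0x800 bytes, which live entirely in the third page.
 */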
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 include/linux/bvec.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index b279218c5c4d..b37d13a79a7d 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -173,4 +173,26 @@ static inline bool bvec_iter_advance(const struct bio_vec *bv,
         .bi_bvec_done   = 0,                                      \
 }
 
+/*
+ * Get the last single-page segment from the multi-page bvec and store it
+ * in @seg
+ */
+static inline void bvec_last_segment(const struct bio_vec *bvec,
+                                     struct bio_vec *seg)
+{
+        unsigned total = bvec->bv_offset + bvec->bv_len;
+        unsigned last_page = (total - 1) / PAGE_SIZE;
+
+        seg->bv_page = nth_page(bvec->bv_page, last_page);
+
+        /* the whole segment is inside the last page */
+        if (bvec->bv_offset >= last_page * PAGE_SIZE) {
+                seg->bv_offset = bvec->bv_offset % PAGE_SIZE;
+                seg->bv_len = bvec->bv_len;
+        } else {
+                seg->bv_offset = 0;
+                seg->bv_len = total - last_page * PAGE_SIZE;
+        }
+}
+
 #endif /* __LINUX_BVEC_ITER_H */

From patchwork Wed Nov 21 03:23:15 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691675
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 07/19] fs/buffer.c: use bvec iterator to truncate the bio
Date: Wed, 21 Nov 2018 11:23:15 +0800
Message-Id: <20181121032327.8434-8-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

Once multi-page bvec is enabled, the last bvec may include more than
one page; this patch uses bvec_last_segment() to truncate the bio.

Reviewed-by: Omar Sandoval
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
 fs/buffer.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 1286c2b95498..fa37ad52e962 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3032,7 +3032,10 @@ void guard_bio_eod(int op, struct bio *bio)
 
         /* ..and clear the end of the buffer for reads */
         if (op == REQ_OP_READ) {
-                zero_user(bvec->bv_page, bvec->bv_offset + bvec->bv_len,
+                struct bio_vec bv;
+
+                bvec_last_segment(bvec, &bv);
+                zero_user(bv.bv_page, bv.bv_offset + bv.bv_len,
                                 truncated_bytes);
         }
 }
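To see why the indirection is needed, a sketch with assumed numbers
(not from the patch):

/*
 * Assume the last bvec spans two 4K pages with bv_offset = 0 and,
 * after guard_bio_eod() has trimmed it, bv_len = 0x1800. The bytes to
 * clear start 0x800 into the *second* page, but bvec->bv_page points
 * at the first one, so zeroing relative to bvec->bv_page would hit
 * the wrong page. bvec_last_segment() yields
 *         bv = { nth_page(bvec->bv_page, 1), .bv_offset = 0,
 *                .bv_len = 0x800 },
 * so zero_user(bv.bv_page, bv.bv_offset + bv.bv_len, truncated_bytes)
 * clears the truncated range in the page that actually contains it.
 */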
From patchwork Wed Nov 21 03:23:16 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691689
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 08/19] btrfs: use bvec_last_segment to get bio's last page
Date: Wed, 21 Nov 2018 11:23:16 +0800
Message-Id: <20181121032327.8434-9-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

Prepare for supporting multi-page bvecs.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 fs/btrfs/extent_io.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d228f706ff3e..5d5965297e7e 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2720,11 +2720,12 @@ static int __must_check submit_one_bio(struct bio *bio, int mirror_num,
 {
         blk_status_t ret = 0;
         struct bio_vec *bvec = bio_last_bvec_all(bio);
-        struct page *page = bvec->bv_page;
+        struct bio_vec bv;
         struct extent_io_tree *tree = bio->bi_private;
         u64 start;
 
-        start = page_offset(page) + bvec->bv_offset;
+        bvec_last_segment(bvec, &bv);
+        start = page_offset(bv.bv_page) + bv.bv_offset;
 
         bio->bi_private = NULL;
From patchwork Wed Nov 21 03:23:17 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691697
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 09/19] btrfs: move bio_pages_all() to btrfs
Date: Wed, 21 Nov 2018 11:23:17 +0800
Message-Id: <20181121032327.8434-10-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

BTRFS is the only user of this helper, so move it into btrfs and
implement it via bio_for_each_segment_all(), since bio->bi_vcnt may
not equal the number of pages once multi-page bvec is enabled.

Signed-off-by: Ming Lei
---
 fs/btrfs/extent_io.c | 14 +++++++++++++-
 include/linux/bio.h  |  6 ------
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 5d5965297e7e..874bb9aeebdc 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2348,6 +2348,18 @@ struct bio *btrfs_create_repair_bio(struct inode *inode, struct bio *failed_bio,
         return bio;
 }
 
+static unsigned btrfs_bio_pages_all(struct bio *bio)
+{
+        unsigned i;
+        struct bio_vec *bv;
+
+        WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
+
+        bio_for_each_segment_all(bv, bio, i)
+                ;
+        return i;
+}
+
 /*
  * this is a generic handler for readpage errors (default
  * readpage_io_failed_hook). if other copies exist, read those and write back
@@ -2368,7 +2380,7 @@ static int bio_readpage_error(struct bio *failed_bio, u64 phy_offset,
         int read_mode = 0;
         blk_status_t status;
         int ret;
-        unsigned failed_bio_pages = bio_pages_all(failed_bio);
+        unsigned failed_bio_pages = btrfs_bio_pages_all(failed_bio);
 
         BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE);
 
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 7560209d6a8a..9d6284f53c07 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -282,12 +282,6 @@ static inline void bio_get_last_bvec(struct bio *bio, struct bio_vec *bv)
                 bv->bv_len = iter.bi_bvec_done;
 }
 
-static inline unsigned bio_pages_all(struct bio *bio)
-{
-        WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
-        return bio->bi_vcnt;
-}
-
 static inline struct bio_vec *bio_first_bvec_all(struct bio *bio)
 {
         WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
From patchwork Wed Nov 21 03:23:18 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691705
From: Ming Lei
To: Jens Axboe
Subject: [PATCH V11 10/19] block: loop: pass multi-page bvec to iov_iter
Date: Wed, 21 Nov 2018 11:23:18 +0800
Message-Id: <20181121032327.8434-11-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>

iov_iter is implemented on top of the bvec iterator helpers, so it is
safe to pass multi-page bvecs to it, and this is much more efficient
than passing one page in each bvec.
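A sketch of the resulting pattern (the request layout is assumed;
rq_for_each_bvec() is the helper added below):

struct req_iterator rq_iter;
struct bio_vec tmp;
int nr_bvec = 0;

/* count multi-page bvecs, not single-page segments */
rq_for_each_bvec(tmp, rq, rq_iter)
        nr_bvec++;

/*
 * nr_bvec can be much smaller than what rq_for_each_segment() would
 * count, yet the iov_iter built over these bvecs covers exactly the
 * same bytes -- iov_iter only requires each bvec to be contiguous,
 * not page-sized.
 */
iov_iter_bvec(&iter, rw, bvec, nr_bvec, blk_rq_bytes(rq));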
Signed-off-by: Ming Lei
Reviewed-by: Christoph Hellwig
---
 drivers/block/loop.c   | 20 ++++++++++----------
 include/linux/blkdev.h |  4 ++++
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 176ab1f28eca..e3683211f12d 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -510,21 +510,22 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
                      loff_t pos, bool rw)
 {
         struct iov_iter iter;
+        struct req_iterator rq_iter;
         struct bio_vec *bvec;
         struct request *rq = blk_mq_rq_from_pdu(cmd);
         struct bio *bio = rq->bio;
         struct file *file = lo->lo_backing_file;
+        struct bio_vec tmp;
         unsigned int offset;
-        int segments = 0;
+        int nr_bvec = 0;
         int ret;
 
+        rq_for_each_bvec(tmp, rq, rq_iter)
+                nr_bvec++;
+
         if (rq->bio != rq->biotail) {
-                struct req_iterator iter;
-                struct bio_vec tmp;
 
-                __rq_for_each_bio(bio, rq)
-                        segments += bio_segments(bio);
-                bvec = kmalloc_array(segments, sizeof(struct bio_vec),
+                bvec = kmalloc_array(nr_bvec, sizeof(struct bio_vec),
                                      GFP_NOIO);
                 if (!bvec)
                         return -EIO;
@@ -533,10 +534,10 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
                 /*
                  * The bios of the request may be started from the middle of
                  * the 'bvec' because of bio splitting, so we can't directly
-                 * copy bio->bi_iov_vec to new bvec. The rq_for_each_segment
+                 * copy bio->bi_iov_vec to new bvec. The rq_for_each_bvec
                  * API will take care of all details for us.
                  */
-                rq_for_each_segment(tmp, rq, iter) {
+                rq_for_each_bvec(tmp, rq, rq_iter) {
                         *bvec = tmp;
                         bvec++;
                 }
@@ -550,11 +551,10 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
                  */
                 offset = bio->bi_iter.bi_bvec_done;
                 bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
-                segments = bio_segments(bio);
         }
         atomic_set(&cmd->ref, 2);
 
-        iov_iter_bvec(&iter, rw, bvec, segments, blk_rq_bytes(rq));
+        iov_iter_bvec(&iter, rw, bvec, nr_bvec, blk_rq_bytes(rq));
         iter.iov_offset = offset;
 
         cmd->iocb.ki_pos = pos;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1ad6eafc43f2..a281b6737b61 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -805,6 +805,10 @@ struct req_iterator {
         __rq_for_each_bio(_iter.bio, _rq)                         \
                 bio_for_each_segment(bvl, _iter.bio, _iter.iter)
 
+#define rq_for_each_bvec(bvl, _rq, _iter)                         \
+        __rq_for_each_bio(_iter.bio, _rq)                         \
+                bio_for_each_bvec(bvl, _iter.bio, _iter.iter)
+
 #define rq_iter_last(bvec, _iter)                                 \
                 (_iter.bio->bi_next == NULL &&                    \
                  bio_iter_last(bvec, _iter.iter))
9EE512B40C for ; Wed, 21 Nov 2018 03:27:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728033AbeKUOAD (ORCPT ); Wed, 21 Nov 2018 09:00:03 -0500 Received: from mx1.redhat.com ([209.132.183.28]:51428 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727092AbeKUOAC (ORCPT ); Wed, 21 Nov 2018 09:00:02 -0500 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 71CCD637FC; Wed, 21 Nov 2018 03:27:31 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id CAA9D60BF6; Wed, 21 Nov 2018 03:27:18 +0000 (UTC) From: Ming Lei To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 11/19] bcache: avoid to use bio_for_each_segment_all() in bch_bio_alloc_pages() Date: Wed, 21 Nov 2018 11:23:19 +0800 Message-Id: <20181121032327.8434-12-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.28]); Wed, 21 Nov 2018 03:27:31 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP bch_bio_alloc_pages() is always called on one new bio, so it is safe to access the bvec table directly. Given it is the only kind of this case, open code the bvec table access since bio_for_each_segment_all() will be changed to support for iterating over multipage bvec. Acked-by: Coly Li Signed-off-by: Ming Lei Reviewed-by: Christoph Hellwig --- drivers/md/bcache/util.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c index 20eddeac1531..62fb917f7a4f 100644 --- a/drivers/md/bcache/util.c +++ b/drivers/md/bcache/util.c @@ -270,7 +270,11 @@ int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask) int i; struct bio_vec *bv; - bio_for_each_segment_all(bv, bio, i) { + /* + * This is called on freshly new bio, so it is safe to access the + * bvec table directly. 
+ */ + for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) { bv->bv_page = alloc_page(gfp_mask); if (!bv->bv_page) { while (--bv >= bio->bi_io_vec) From patchwork Wed Nov 21 03:23:20 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 10691741 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AD9F713BB for ; Wed, 21 Nov 2018 03:27:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9B16E2B40C for ; Wed, 21 Nov 2018 03:27:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8F2402B413; Wed, 21 Nov 2018 03:27:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2E1F22B40C for ; Wed, 21 Nov 2018 03:27:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728174AbeKUOAT (ORCPT ); Wed, 21 Nov 2018 09:00:19 -0500 Received: from mx1.redhat.com ([209.132.183.28]:59468 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727069AbeKUOAR (ORCPT ); Wed, 21 Nov 2018 09:00:17 -0500 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id D103E3084243; Wed, 21 Nov 2018 03:27:45 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8D3B610021B6; Wed, 21 Nov 2018 03:27:33 +0000 (UTC) From: Ming Lei To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 12/19] block: allow bio_for_each_segment_all() to iterate over multi-page bvec Date: Wed, 21 Nov 2018 11:23:20 +0800 Message-Id: <20181121032327.8434-13-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.40]); Wed, 21 Nov 2018 03:27:46 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This patch introduces one extra iterator variable to bio_for_each_segment_all(), so that bio_for_each_segment_all() can iterate over multi-page bvecs.
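For illustration, here is a minimal sketch of what the conversion looks like at a typical call site (the helper below is hypothetical and not taken from the diff; only the on-stack 'struct bvec_iter_all' and the extra macro argument are new, and the loop body still sees one single-page bvec per iteration):

	/* hypothetical user of the reworked iterator, for illustration only */
	static void example_put_bio_pages(struct bio *bio)
	{
		struct bio_vec *bvec;
		int i;
		struct bvec_iter_all iter_all;	/* new: holds the per-page iteration state */

		/* was: bio_for_each_segment_all(bvec, bio, i) */
		bio_for_each_segment_all(bvec, bio, i, iter_all)
			put_page(bvec->bv_page);
	}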
Given this is just a mechanical and simple change for all bio_for_each_segment_all() users, this patch makes the tree-wide change in a single patch, so that we can avoid using a temporary helper for the conversion. Signed-off-by: Ming Lei Reviewed-by: Christoph Hellwig --- block/bio.c | 27 ++++++++++++++++++--------- block/bounce.c | 6 ++++-- drivers/md/bcache/btree.c | 3 ++- drivers/md/dm-crypt.c | 3 ++- drivers/md/raid1.c | 3 ++- drivers/staging/erofs/data.c | 3 ++- drivers/staging/erofs/unzip_vle.c | 3 ++- fs/block_dev.c | 6 ++++-- fs/btrfs/compression.c | 3 ++- fs/btrfs/disk-io.c | 3 ++- fs/btrfs/extent_io.c | 12 ++++++++---- fs/btrfs/inode.c | 6 ++++-- fs/btrfs/raid56.c | 3 ++- fs/crypto/bio.c | 3 ++- fs/direct-io.c | 4 +++- fs/exofs/ore.c | 3 ++- fs/exofs/ore_raid.c | 3 ++- fs/ext4/page-io.c | 3 ++- fs/ext4/readpage.c | 3 ++- fs/f2fs/data.c | 9 ++++++--- fs/gfs2/lops.c | 6 ++++-- fs/gfs2/meta_io.c | 3 ++- fs/iomap.c | 6 ++++-- fs/mpage.c | 3 ++- fs/xfs/xfs_aops.c | 5 +++-- include/linux/bio.h | 11 +++++++++-- include/linux/bvec.h | 31 +++++++++++++++++++++++++++++++ 27 files changed, 128 insertions(+), 46 deletions(-) diff --git a/block/bio.c b/block/bio.c index 4f4d9884443b..2680aa42a625 100644 --- a/block/bio.c +++ b/block/bio.c @@ -1073,8 +1073,9 @@ static int bio_copy_from_iter(struct bio *bio, struct iov_iter *iter) { int i; struct bio_vec *bvec; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { ssize_t ret; ret = copy_page_from_iter(bvec->bv_page, @@ -1104,8 +1105,9 @@ static int bio_copy_to_iter(struct bio *bio, struct iov_iter iter) { int i; struct bio_vec *bvec; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { ssize_t ret; ret = copy_page_to_iter(bvec->bv_page, @@ -1127,8 +1129,9 @@ void bio_free_pages(struct bio *bio) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) __free_page(bvec->bv_page); } EXPORT_SYMBOL(bio_free_pages); @@ -1295,6 +1298,7 @@ struct bio *bio_map_user_iov(struct request_queue *q, struct bio *bio; int ret; struct bio_vec *bvec; + struct bvec_iter_all iter_all; if (!iov_iter_count(iter)) return ERR_PTR(-EINVAL); @@ -1368,7 +1372,7 @@ struct bio *bio_map_user_iov(struct request_queue *q, return bio; out_unmap: - bio_for_each_segment_all(bvec, bio, j) { + bio_for_each_segment_all(bvec, bio, j, iter_all) { put_page(bvec->bv_page); } bio_put(bio); @@ -1379,11 +1383,12 @@ static void __bio_unmap_user(struct bio *bio) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; /* * make sure we dirty pages we wrote to */ - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { if (bio_data_dir(bio) == READ) set_page_dirty_lock(bvec->bv_page); @@ -1475,8 +1480,9 @@ static void bio_copy_kern_endio_read(struct bio *bio) char *p = bio->bi_private; struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { memcpy(p, page_address(bvec->bv_page), bvec->bv_len); p += bvec->bv_len; } @@ -1585,8 +1591,9 @@ void bio_set_pages_dirty(struct bio *bio) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { if (!PageCompound(bvec->bv_page)) set_page_dirty_lock(bvec->bv_page);
} @@ -1597,8 +1604,9 @@ static void bio_release_pages(struct bio *bio) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) put_page(bvec->bv_page); } @@ -1645,8 +1653,9 @@ void bio_check_pages_dirty(struct bio *bio) struct bio_vec *bvec; unsigned long flags; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page)) goto defer; } diff --git a/block/bounce.c b/block/bounce.c index 559c55bda040..7338041e3042 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -165,11 +165,12 @@ static void bounce_end_io(struct bio *bio, mempool_t *pool) struct bio_vec *bvec, orig_vec; int i; struct bvec_iter orig_iter = bio_orig->bi_iter; + struct bvec_iter_all iter_all; /* * free up bounce indirect pages used */ - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { orig_vec = bio_iter_iovec(bio_orig, orig_iter); if (bvec->bv_page != orig_vec.bv_page) { dec_zone_page_state(bvec->bv_page, NR_BOUNCE); @@ -293,6 +294,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig, bool bounce = false; int sectors = 0; bool passthrough = bio_is_passthrough(*bio_orig); + struct bvec_iter_all iter_all; bio_for_each_segment(from, *bio_orig, iter) { if (i++ < BIO_MAX_PAGES) @@ -312,7 +314,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig, bio = bounce_clone_bio(*bio_orig, GFP_NOIO, passthrough ? NULL : &bounce_bio_set); - bio_for_each_segment_all(to, bio, i) { + bio_for_each_segment_all(to, bio, i, iter_all) { struct page *page = to->bv_page; if (page_to_pfn(page) <= q->limits.bounce_pfn) diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c index 3f4211b5cd33..6242ae4e2127 100644 --- a/drivers/md/bcache/btree.c +++ b/drivers/md/bcache/btree.c @@ -427,8 +427,9 @@ static void do_btree_node_write(struct btree *b) int j; struct bio_vec *bv; void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1)); + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bv, b->bio, j) + bio_for_each_segment_all(bv, b->bio, j, iter_all) memcpy(page_address(bv->bv_page), base + j * PAGE_SIZE, PAGE_SIZE); diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index b8eec515a003..a0dcf28c01b5 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1447,8 +1447,9 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone) { unsigned int i; struct bio_vec *bv; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bv, clone, i) { + bio_for_each_segment_all(bv, clone, i, iter_all) { BUG_ON(!bv->bv_page); mempool_free(bv->bv_page, &cc->page_pool); } diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index 1d54109071cc..6f74a3b06c7e 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -2114,13 +2114,14 @@ static void process_checks(struct r1bio *r1_bio) struct page **spages = get_resync_pages(sbio)->pages; struct bio_vec *bi; int page_len[RESYNC_PAGES] = { 0 }; + struct bvec_iter_all iter_all; if (sbio->bi_end_io != end_sync_read) continue; /* Now we can 'fixup' the error value */ sbio->bi_status = 0; - bio_for_each_segment_all(bi, sbio, j) + bio_for_each_segment_all(bi, sbio, j, iter_all) page_len[j] = bi->bv_len; if (!status) { diff --git a/drivers/staging/erofs/data.c b/drivers/staging/erofs/data.c index 6384f73e5418..96240ceca02a 100644 
--- a/drivers/staging/erofs/data.c +++ b/drivers/staging/erofs/data.c @@ -20,8 +20,9 @@ static inline void read_endio(struct bio *bio) int i; struct bio_vec *bvec; const blk_status_t err = bio->bi_status; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; /* page is already locked */ diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c index 79d3ba62b298..41a8a9399863 100644 --- a/drivers/staging/erofs/unzip_vle.c +++ b/drivers/staging/erofs/unzip_vle.c @@ -731,11 +731,12 @@ static inline void z_erofs_vle_read_endio(struct bio *bio) const blk_status_t err = bio->bi_status; unsigned int i; struct bio_vec *bvec; + struct bvec_iter_all iter_all; #ifdef EROFS_FS_HAS_MANAGED_CACHE struct address_space *mngda = NULL; #endif - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; bool cachemngd = false; diff --git a/fs/block_dev.c b/fs/block_dev.c index 4d79bc80fb41..6f505982e6b1 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -197,6 +197,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter, ssize_t ret; blk_qc_t qc; int i; + struct bvec_iter_all iter_all; if ((pos | iov_iter_alignment(iter)) & (bdev_logical_block_size(bdev) - 1)) @@ -246,7 +247,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter, } __set_current_state(TASK_RUNNING); - bio_for_each_segment_all(bvec, &bio, i) { + bio_for_each_segment_all(bvec, &bio, i, iter_all) { if (should_dirty && !PageCompound(bvec->bv_page)) set_page_dirty_lock(bvec->bv_page); put_page(bvec->bv_page); @@ -314,8 +315,9 @@ static void blkdev_bio_end_io(struct bio *bio) } else { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) put_page(bvec->bv_page); bio_put(bio); } diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 2955a4ea2fa8..602a74c645c3 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -162,13 +162,14 @@ static void end_compressed_bio_read(struct bio *bio) } else { int i; struct bio_vec *bvec; + struct bvec_iter_all iter_all; /* * we have verified the checksum already, set page * checked so the end_io handlers know about it */ ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, cb->orig_bio, i) + bio_for_each_segment_all(bvec, cb->orig_bio, i, iter_all) SetPageChecked(bvec->bv_page); bio_endio(cb->orig_bio); diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 3f0b6d1936e8..9f2a31bea08d 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -819,9 +819,10 @@ static blk_status_t btree_csum_one_bio(struct bio *bio) struct bio_vec *bvec; struct btrfs_root *root; int i, ret = 0; + struct bvec_iter_all iter_all; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { root = BTRFS_I(bvec->bv_page->mapping->host)->root; ret = csum_dirty_buffer(root->fs_info, bvec->bv_page); if (ret) diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 874bb9aeebdc..9373eb8ade06 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -2352,10 +2352,11 @@ static unsigned btrfs_bio_pages_all(struct bio *bio) { unsigned i; struct bio_vec *bv; + struct bvec_iter_all iter_all; WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bv, bio, i) + 
bio_for_each_segment_all(bv, bio, i, iter_all) ; return i; } @@ -2457,9 +2458,10 @@ static void end_bio_extent_writepage(struct bio *bio) u64 start; u64 end; int i; + struct bvec_iter_all iter_all; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; struct inode *inode = page->mapping->host; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -2528,9 +2530,10 @@ static void end_bio_extent_readpage(struct bio *bio) int mirror; int ret; int i; + struct bvec_iter_all iter_all; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; struct inode *inode = page->mapping->host; struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); @@ -3682,9 +3685,10 @@ static void end_bio_extent_buffer_writepage(struct bio *bio) struct bio_vec *bvec; struct extent_buffer *eb; int i, done; + struct bvec_iter_all iter_all; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; eb = (struct extent_buffer *)page->private; diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 9ea4c6f0352f..2850fca6cc44 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -7822,6 +7822,7 @@ static void btrfs_retry_endio_nocsum(struct bio *bio) struct bio_vec *bvec; struct extent_io_tree *io_tree, *failure_tree; int i; + struct bvec_iter_all iter_all; if (bio->bi_status) goto end; @@ -7833,7 +7834,7 @@ static void btrfs_retry_endio_nocsum(struct bio *bio) done->uptodate = 1; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) clean_io_failure(BTRFS_I(inode)->root->fs_info, failure_tree, io_tree, done->start, bvec->bv_page, btrfs_ino(BTRFS_I(inode)), 0); @@ -7912,6 +7913,7 @@ static void btrfs_retry_endio(struct bio *bio) int uptodate; int ret; int i; + struct bvec_iter_all iter_all; if (bio->bi_status) goto end; @@ -7925,7 +7927,7 @@ static void btrfs_retry_endio(struct bio *bio) failure_tree = &BTRFS_I(inode)->io_failure_tree; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { ret = __readpage_endio_check(inode, io_bio, i, bvec->bv_page, bvec->bv_offset, done->start, bvec->bv_len); diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index df41d7049936..e33a99871d60 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -1443,10 +1443,11 @@ static void set_bio_pages_uptodate(struct bio *bio) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; ASSERT(!bio_flagged(bio, BIO_CLONED)); - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) SetPageUptodate(bvec->bv_page); } diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c index 0959044c5cee..5759bcd018cd 100644 --- a/fs/crypto/bio.c +++ b/fs/crypto/bio.c @@ -30,8 +30,9 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done) { struct bio_vec *bv; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment_all(bv, bio, i, iter_all) { struct page *page = bv->bv_page; int ret = fscrypt_decrypt_page(page->mapping->host, page, PAGE_SIZE, 0, page->index); diff --git a/fs/direct-io.c b/fs/direct-io.c index ea07d5a34317..5904fc2e180c 100644 --- a/fs/direct-io.c +++ b/fs/direct-io.c @@ 
-551,7 +551,9 @@ static blk_status_t dio_bio_complete(struct dio *dio, struct bio *bio) if (dio->is_async && dio->op == REQ_OP_READ && dio->should_dirty) { bio_check_pages_dirty(bio); /* transfers ownership */ } else { - bio_for_each_segment_all(bvec, bio, i) { + struct bvec_iter_all iter_all; + + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; if (dio->op == REQ_OP_READ && !PageCompound(page) && diff --git a/fs/exofs/ore.c b/fs/exofs/ore.c index 5331a15a61f1..24a8e34882e9 100644 --- a/fs/exofs/ore.c +++ b/fs/exofs/ore.c @@ -420,8 +420,9 @@ static void _clear_bio(struct bio *bio) { struct bio_vec *bv; unsigned i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment_all(bv, bio, i, iter_all) { unsigned this_count = bv->bv_len; if (likely(PAGE_SIZE == this_count)) diff --git a/fs/exofs/ore_raid.c b/fs/exofs/ore_raid.c index 199590f36203..e83bab54b03e 100644 --- a/fs/exofs/ore_raid.c +++ b/fs/exofs/ore_raid.c @@ -468,11 +468,12 @@ static void _mark_read4write_pages_uptodate(struct ore_io_state *ios, int ret) /* loop on all devices all pages */ for (d = 0; d < ios->numdevs; d++) { struct bio *bio = ios->per_dev[d].bio; + struct bvec_iter_all iter_all; if (!bio) continue; - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment_all(bv, bio, i, iter_all) { struct page *page = bv->bv_page; SetPageUptodate(page); diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c index db7590178dfc..0644b4e7d6d4 100644 --- a/fs/ext4/page-io.c +++ b/fs/ext4/page-io.c @@ -63,8 +63,9 @@ static void ext4_finish_bio(struct bio *bio) { int i; struct bio_vec *bvec; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; #ifdef CONFIG_EXT4_FS_ENCRYPTION struct page *data_page = NULL; diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c index f461d75ac049..b0d9537bc797 100644 --- a/fs/ext4/readpage.c +++ b/fs/ext4/readpage.c @@ -72,6 +72,7 @@ static void mpage_end_io(struct bio *bio) { struct bio_vec *bv; int i; + struct bvec_iter_all iter_all; if (ext4_bio_encrypted(bio)) { if (bio->bi_status) { @@ -81,7 +82,7 @@ static void mpage_end_io(struct bio *bio) return; } } - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment_all(bv, bio, i, iter_all) { struct page *page = bv->bv_page; if (!bio->bi_status) { diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index b293cb3e27a2..d28f482a0d52 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -87,8 +87,9 @@ static void __read_end_io(struct bio *bio) struct page *page; struct bio_vec *bv; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment_all(bv, bio, i, iter_all) { page = bv->bv_page; /* PG_error was set if any post_read step failed */ @@ -164,13 +165,14 @@ static void f2fs_write_end_io(struct bio *bio) struct f2fs_sb_info *sbi = bio->bi_private; struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; if (time_to_inject(sbi, FAULT_WRITE_IO)) { f2fs_show_injection_info(FAULT_WRITE_IO); bio->bi_status = BLK_STS_IOERR; } - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; enum count_type type = WB_DATA_TYPE(page); @@ -347,6 +349,7 @@ static bool __has_merged_page(struct f2fs_bio_info *io, struct inode *inode, struct bio_vec *bvec; struct page *target; int i; + struct bvec_iter_all iter_all; if (!io->bio) return false; @@ -354,7 +357,7 @@ static bool 
__has_merged_page(struct f2fs_bio_info *io, struct inode *inode, if (!inode && !page && !ino) return true; - bio_for_each_segment_all(bvec, io->bio, i) { + bio_for_each_segment_all(bvec, io->bio, i, iter_all) { if (bvec->bv_page->mapping) target = bvec->bv_page; diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c index 4c7069b8f3c1..f2f165620161 100644 --- a/fs/gfs2/lops.c +++ b/fs/gfs2/lops.c @@ -168,7 +168,8 @@ u64 gfs2_log_bmap(struct gfs2_sbd *sdp) * that is pinned in the pagecache. */ -static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp, struct bio_vec *bvec, +static void gfs2_end_log_write_bh(struct gfs2_sbd *sdp, + struct bio_vec *bvec, blk_status_t error) { struct buffer_head *bh, *next; @@ -207,6 +208,7 @@ static void gfs2_end_log_write(struct bio *bio) struct bio_vec *bvec; struct page *page; int i; + struct bvec_iter_all iter_all; if (bio->bi_status) { fs_err(sdp, "Error %d writing to journal, jid=%u\n", @@ -214,7 +216,7 @@ static void gfs2_end_log_write(struct bio *bio) wake_up(&sdp->sd_logd_waitq); } - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { page = bvec->bv_page; if (page_has_buffers(page)) gfs2_end_log_write_bh(sdp, bvec, bio->bi_status); diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c index be9c0bf697fe..3201342404a7 100644 --- a/fs/gfs2/meta_io.c +++ b/fs/gfs2/meta_io.c @@ -190,8 +190,9 @@ static void gfs2_meta_read_endio(struct bio *bio) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) { + bio_for_each_segment_all(bvec, bio, i, iter_all) { struct page *page = bvec->bv_page; struct buffer_head *bh = page_buffers(page); unsigned int len = bvec->bv_len; diff --git a/fs/iomap.c b/fs/iomap.c index b0462b363bad..f5fb8bf75cc8 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -262,8 +262,9 @@ iomap_read_end_io(struct bio *bio) int error = blk_status_to_errno(bio->bi_status); struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) iomap_read_page_end_io(bvec, error); bio_put(bio); } @@ -1541,8 +1542,9 @@ static void iomap_dio_bio_end_io(struct bio *bio) } else { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, i) + bio_for_each_segment_all(bvec, bio, i, iter_all) put_page(bvec->bv_page); bio_put(bio); } diff --git a/fs/mpage.c b/fs/mpage.c index c820dc9bebab..3f19da75178b 100644 --- a/fs/mpage.c +++ b/fs/mpage.c @@ -48,8 +48,9 @@ static void mpage_end_io(struct bio *bio) { struct bio_vec *bv; int i; + struct bvec_iter_all iter_all; - bio_for_each_segment_all(bv, bio, i) { + bio_for_each_segment_all(bv, bio, i, iter_all) { struct page *page = bv->bv_page; page_endio(page, bio_op(bio), blk_status_to_errno(bio->bi_status)); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 338b9d9984e0..1f1829e506e8 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -62,7 +62,7 @@ xfs_find_daxdev_for_inode( static void xfs_finish_page_writeback( struct inode *inode, - struct bio_vec *bvec, + struct bio_vec *bvec, int error) { struct iomap_page *iop = to_iomap_page(bvec->bv_page); @@ -98,6 +98,7 @@ xfs_destroy_ioend( for (bio = &ioend->io_inline_bio; bio; bio = next) { struct bio_vec *bvec; int i; + struct bvec_iter_all iter_all; /* * For the last bio, bi_private points to the ioend, so we @@ -109,7 +110,7 @@ xfs_destroy_ioend( next = bio->bi_private; /* walk each page on bio, ending page IO on them */ - bio_for_each_segment_all(bvec, bio, 
i) + bio_for_each_segment_all(bvec, bio, i, iter_all) xfs_finish_page_writeback(inode, bvec, error); bio_put(bio); } diff --git a/include/linux/bio.h b/include/linux/bio.h index 9d6284f53c07..7edad188568a 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -131,12 +131,19 @@ static inline bool bio_full(struct bio *bio) return bio->bi_vcnt >= bio->bi_max_vecs; } +#define bvec_for_each_segment(bv, bvl, i, iter_all) \ + for (bv = bvec_init_iter_all(&iter_all); \ + (iter_all.done < (bvl)->bv_len) && \ + (bvec_next_segment((bvl), &iter_all), 1); \ + iter_all.done += bv->bv_len, i += 1) + /* * drivers should _never_ use the all version - the bio may have been split * before it got to the driver and the driver won't own all of it */ -#define bio_for_each_segment_all(bvl, bio, i) \ - for (i = 0, bvl = (bio)->bi_io_vec; i < (bio)->bi_vcnt; i++, bvl++) +#define bio_for_each_segment_all(bvl, bio, i, iter_all) \ + for (i = 0, iter_all.idx = 0; iter_all.idx < (bio)->bi_vcnt; iter_all.idx++) \ + bvec_for_each_segment(bvl, &((bio)->bi_io_vec[iter_all.idx]), i, iter_all) static inline void __bio_advance_iter(struct bio *bio, struct bvec_iter *iter, unsigned bytes, unsigned max_seg_len) diff --git a/include/linux/bvec.h b/include/linux/bvec.h index b37d13a79a7d..cab36d838ed0 100644 --- a/include/linux/bvec.h +++ b/include/linux/bvec.h @@ -47,6 +47,12 @@ struct bvec_iter { current bvec */ }; +struct bvec_iter_all { + struct bio_vec bv; + int idx; + unsigned done; +}; + /* * various member access, note that bio_data should of course not be used * on highmem page vectors @@ -173,6 +179,31 @@ static inline bool bvec_iter_advance(const struct bio_vec *bv, .bi_bvec_done = 0, \ } +static inline struct bio_vec *bvec_init_iter_all(struct bvec_iter_all *iter_all) +{ + iter_all->bv.bv_page = NULL; + iter_all->done = 0; + + return &iter_all->bv; +} + +/* used for chunk_for_each_segment */ +static inline void bvec_next_segment(const struct bio_vec *bvec, + struct bvec_iter_all *iter_all) +{ + struct bio_vec *bv = &iter_all->bv; + + if (bv->bv_page) { + bv->bv_page += 1; + bv->bv_offset = 0; + } else { + bv->bv_page = bvec->bv_page; + bv->bv_offset = bvec->bv_offset; + } + bv->bv_len = min_t(unsigned int, PAGE_SIZE - bv->bv_offset, + bvec->bv_len - iter_all->done); +} + /* * Get the last single-page segment from the multi-page bvec and store it * in @seg From patchwork Wed Nov 21 03:23:21 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 10691747 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 33DFA16B1 for ; Wed, 21 Nov 2018 03:28:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 217A32B40C for ; Wed, 21 Nov 2018 03:28:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 149D92B413; Wed, 21 Nov 2018 03:28:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8A8B12B40C for ; Wed, 21 Nov 2018 03:28:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by 
vger.kernel.org via listexpand id S1728243AbeKUOA2 (ORCPT ); Wed, 21 Nov 2018 09:00:28 -0500 Received: from mx1.redhat.com ([209.132.183.28]:44128 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726967AbeKUOA1 (ORCPT ); Wed, 21 Nov 2018 09:00:27 -0500 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com [10.5.11.15]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 81F973083390; Wed, 21 Nov 2018 03:27:55 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id 091845D75D; Wed, 21 Nov 2018 03:27:47 +0000 (UTC) From: Ming Lei To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 13/19] block: move bounce_clone_bio into bio.c Date: Wed, 21 Nov 2018 11:23:21 +0800 Message-Id: <20181121032327.8434-14-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.44]); Wed, 21 Nov 2018 03:27:55 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP We will reuse bounce_clone_bio() for cloning bio in case of !blk_queue_cluster(q), so move this helper into bio.c and rename it as bio_clone_bioset(). No function change. Signed-off-by: Ming Lei --- block/bio.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++ block/blk.h | 2 ++ block/bounce.c | 70 +--------------------------------------------------------- 3 files changed, 72 insertions(+), 69 deletions(-) diff --git a/block/bio.c b/block/bio.c index 2680aa42a625..0f1635b9ec50 100644 --- a/block/bio.c +++ b/block/bio.c @@ -647,6 +647,75 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs) } EXPORT_SYMBOL(bio_clone_fast); +/* block core only helper */ +struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask, + struct bio_set *bs) +{ + struct bvec_iter iter; + struct bio_vec bv; + struct bio *bio; + + /* + * Pre immutable biovecs, __bio_clone() used to just do a memcpy from + * bio_src->bi_io_vec to bio->bi_io_vec. + * + * We can't do that anymore, because: + * + * - The point of cloning the biovec is to produce a bio with a biovec + * the caller can modify: bi_idx and bi_bvec_done should be 0. + * + * - The original bio could've had more than BIO_MAX_PAGES biovecs; if + * we tried to clone the whole thing bio_alloc_bioset() would fail. + * But the clone should succeed as long as the number of biovecs we + * actually need to allocate is fewer than BIO_MAX_PAGES. 
+ * + * - Lastly, bi_vcnt should not be looked at or relied upon by code + * that does not own the bio - reason being drivers don't use it for + * iterating over the biovec anymore, so expecting it to be kept up + * to date (i.e. for clones that share the parent biovec) is just + * asking for trouble and would force extra work on + * __bio_clone_fast() anyways. + */ + + bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs); + if (!bio) + return NULL; + bio->bi_disk = bio_src->bi_disk; + bio->bi_opf = bio_src->bi_opf; + bio->bi_ioprio = bio_src->bi_ioprio; + bio->bi_write_hint = bio_src->bi_write_hint; + bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; + bio->bi_iter.bi_size = bio_src->bi_iter.bi_size; + + switch (bio_op(bio)) { + case REQ_OP_DISCARD: + case REQ_OP_SECURE_ERASE: + case REQ_OP_WRITE_ZEROES: + break; + case REQ_OP_WRITE_SAME: + bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0]; + break; + default: + bio_for_each_segment(bv, bio_src, iter) + bio->bi_io_vec[bio->bi_vcnt++] = bv; + break; + } + + if (bio_integrity(bio_src)) { + int ret; + + ret = bio_integrity_clone(bio, bio_src, gfp_mask); + if (ret < 0) { + bio_put(bio); + return NULL; + } + } + + bio_clone_blkcg_association(bio, bio_src); + + return bio; +} + /** * bio_add_pc_page - attempt to add page to bio * @q: the target queue diff --git a/block/blk.h b/block/blk.h index 816a9abb87cd..31c0e45aba3a 100644 --- a/block/blk.h +++ b/block/blk.h @@ -336,6 +336,8 @@ static inline int blk_iolatency_init(struct request_queue *q) { return 0; } struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp); +struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask, struct bio_set *bs); + #ifdef CONFIG_BLK_DEV_ZONED void blk_queue_free_zone_bitmaps(struct request_queue *q); #else diff --git a/block/bounce.c b/block/bounce.c index 7338041e3042..4947c36173b2 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -215,74 +215,6 @@ static void bounce_end_io_read_isa(struct bio *bio) __bounce_end_io_read(bio, &isa_page_pool); } -static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask, - struct bio_set *bs) -{ - struct bvec_iter iter; - struct bio_vec bv; - struct bio *bio; - - /* - * Pre immutable biovecs, __bio_clone() used to just do a memcpy from - * bio_src->bi_io_vec to bio->bi_io_vec. - * - * We can't do that anymore, because: - * - * - The point of cloning the biovec is to produce a bio with a biovec - * the caller can modify: bi_idx and bi_bvec_done should be 0. - * - * - The original bio could've had more than BIO_MAX_PAGES biovecs; if - * we tried to clone the whole thing bio_alloc_bioset() would fail. - * But the clone should succeed as long as the number of biovecs we - * actually need to allocate is fewer than BIO_MAX_PAGES. - * - * - Lastly, bi_vcnt should not be looked at or relied upon by code - * that does not own the bio - reason being drivers don't use it for - * iterating over the biovec anymore, so expecting it to be kept up - * to date (i.e. for clones that share the parent biovec) is just - * asking for trouble and would force extra work on - * __bio_clone_fast() anyways. 
- */ - - bio = bio_alloc_bioset(gfp_mask, bio_segments(bio_src), bs); - if (!bio) - return NULL; - bio->bi_disk = bio_src->bi_disk; - bio->bi_opf = bio_src->bi_opf; - bio->bi_ioprio = bio_src->bi_ioprio; - bio->bi_write_hint = bio_src->bi_write_hint; - bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector; - bio->bi_iter.bi_size = bio_src->bi_iter.bi_size; - - switch (bio_op(bio)) { - case REQ_OP_DISCARD: - case REQ_OP_SECURE_ERASE: - case REQ_OP_WRITE_ZEROES: - break; - case REQ_OP_WRITE_SAME: - bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0]; - break; - default: - bio_for_each_segment(bv, bio_src, iter) - bio->bi_io_vec[bio->bi_vcnt++] = bv; - break; - } - - if (bio_integrity(bio_src)) { - int ret; - - ret = bio_integrity_clone(bio, bio_src, gfp_mask); - if (ret < 0) { - bio_put(bio); - return NULL; - } - } - - bio_clone_blkcg_association(bio, bio_src); - - return bio; -} - static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig, mempool_t *pool) { @@ -311,7 +243,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig, generic_make_request(*bio_orig); *bio_orig = bio; } - bio = bounce_clone_bio(*bio_orig, GFP_NOIO, passthrough ? NULL : + bio = bio_clone_bioset(*bio_orig, GFP_NOIO, passthrough ? NULL : &bounce_bio_set); bio_for_each_segment_all(to, bio, i, iter_all) { From patchwork Wed Nov 21 03:23:22 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 10691763 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 78B5013BB for ; Wed, 21 Nov 2018 03:28:22 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 664D82B412 for ; Wed, 21 Nov 2018 03:28:22 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 599FE2B415; Wed, 21 Nov 2018 03:28:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DEEB72B412 for ; Wed, 21 Nov 2018 03:28:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728294AbeKUOAt (ORCPT ); Wed, 21 Nov 2018 09:00:49 -0500 Received: from mx1.redhat.com ([209.132.183.28]:37826 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726705AbeKUOAs (ORCPT ); Wed, 21 Nov 2018 09:00:48 -0500 Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com [10.5.11.16]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id CD8E4309706B; Wed, 21 Nov 2018 03:28:15 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id B383B17188; Wed, 21 Nov 2018 03:27:57 +0000 (UTC) From: Ming Lei To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , 
linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split Date: Wed, 21 Nov 2018 11:23:22 +0800 Message-Id: <20181121032327.8434-15-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.43]); Wed, 21 Nov 2018 03:28:16 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP We will enable multi-page bvec soon, but non-cluster queue can't handle the multi-page bvec at all. This patch borrows bounce's idea to clone new single-page bio for non-cluster queue, and moves its handling out of blk_bio_segment_split(). Signed-off-by: Ming Lei Signed-off-by: Christoph Hellwig --- block/Makefile | 3 ++- block/blk-merge.c | 6 ++++- block/blk.h | 2 ++ block/non-cluster.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 79 insertions(+), 2 deletions(-) create mode 100644 block/non-cluster.c diff --git a/block/Makefile b/block/Makefile index eee1b4ceecf9..e07d59438c4b 100644 --- a/block/Makefile +++ b/block/Makefile @@ -9,7 +9,8 @@ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-sysfs.o \ blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \ blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \ genhd.o partition-generic.o ioprio.o \ - badblocks.o partitions/ blk-rq-qos.o + badblocks.o partitions/ blk-rq-qos.o \ + non-cluster.o obj-$(CONFIG_BOUNCE) += bounce.o obj-$(CONFIG_BLK_SCSI_REQUEST) += scsi_ioctl.o diff --git a/block/blk-merge.c b/block/blk-merge.c index 8829c51b4e75..7c44216c1b58 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -247,7 +247,7 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, goto split; } - if (bvprvp && blk_queue_cluster(q)) { + if (bvprvp) { if (seg_size + bv.bv_len > queue_max_segment_size(q)) goto new_segment; if (!biovec_phys_mergeable(q, bvprvp, &bv)) @@ -307,6 +307,10 @@ void blk_queue_split(struct request_queue *q, struct bio **bio) split = blk_bio_write_same_split(q, *bio, &q->bio_split, &nsegs); break; default: + if (!blk_queue_cluster(q)) { + blk_queue_non_cluster_bio(q, bio); + return; + } split = blk_bio_segment_split(q, *bio, &q->bio_split, &nsegs); break; } diff --git a/block/blk.h b/block/blk.h index 31c0e45aba3a..6fc5821ced55 100644 --- a/block/blk.h +++ b/block/blk.h @@ -338,6 +338,8 @@ struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp); struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask, struct bio_set *bs); +void blk_queue_non_cluster_bio(struct request_queue *q, struct bio **bio_orig); + #ifdef CONFIG_BLK_DEV_ZONED void blk_queue_free_zone_bitmaps(struct request_queue *q); #else diff --git a/block/non-cluster.c b/block/non-cluster.c new file mode 100644 index 000000000000..9c2910be9404 --- /dev/null +++ b/block/non-cluster.c @@ -0,0 +1,70 @@ +// SPDX-License-Identifier: GPL-2.0 +/* non-cluster handling for block devices */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "blk.h" + +static 
struct bio_set non_cluster_bio_set, non_cluster_bio_split; + +static __init int init_non_cluster_bioset(void) +{ + WARN_ON(bioset_init(&non_cluster_bio_set, BIO_POOL_SIZE, 0, + BIOSET_NEED_BVECS)); + WARN_ON(bioset_integrity_create(&non_cluster_bio_set, BIO_POOL_SIZE)); + WARN_ON(bioset_init(&non_cluster_bio_split, BIO_POOL_SIZE, 0, 0)); + + return 0; +} +__initcall(init_non_cluster_bioset); + +static void non_cluster_end_io(struct bio *bio) +{ + struct bio *bio_orig = bio->bi_private; + + bio_orig->bi_status = bio->bi_status; + bio_endio(bio_orig); + bio_put(bio); +} + +void blk_queue_non_cluster_bio(struct request_queue *q, struct bio **bio_orig) +{ + struct bio *bio; + struct bvec_iter iter; + struct bio_vec from; + unsigned i = 0; + unsigned sectors = 0; + unsigned short max_segs = min_t(unsigned short, BIO_MAX_PAGES, + queue_max_segments(q)); + + bio_for_each_segment(from, *bio_orig, iter) { + if (i++ < max_segs) + sectors += from.bv_len >> 9; + else + break; + } + + if (sectors < bio_sectors(*bio_orig)) { + bio = bio_split(*bio_orig, sectors, GFP_NOIO, + &non_cluster_bio_split); + bio_chain(bio, *bio_orig); + generic_make_request(*bio_orig); + *bio_orig = bio; + } + bio = bio_clone_bioset(*bio_orig, GFP_NOIO, &non_cluster_bio_set); + + bio->bi_phys_segments = bio_segments(bio); + bio_set_flag(bio, BIO_SEG_VALID); + bio->bi_end_io = non_cluster_end_io; + + bio->bi_private = *bio_orig; + *bio_orig = bio; +} From patchwork Wed Nov 21 03:23:23 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 10691779 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CA87716B1 for ; Wed, 21 Nov 2018 03:28:40 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CBD7D2AB6F for ; Wed, 21 Nov 2018 03:28:39 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id BF2C82B440; Wed, 21 Nov 2018 03:28:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 663F92AB6F for ; Wed, 21 Nov 2018 03:28:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727815AbeKUOBD (ORCPT ); Wed, 21 Nov 2018 09:01:03 -0500 Received: from mx1.redhat.com ([209.132.183.28]:44324 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726705AbeKUOBC (ORCPT ); Wed, 21 Nov 2018 09:01:02 -0500 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 9E45D3083390; Wed, 21 Nov 2018 03:28:30 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id 072B117A6A; Wed, 21 Nov 2018 03:28:17 +0000 (UTC) From: Ming Lei To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , 
dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 15/19] block: enable multipage bvecs Date: Wed, 21 Nov 2018 11:23:23 +0800 Message-Id: <20181121032327.8434-16-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.44]); Wed, 21 Nov 2018 03:28:31 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP This patch pulls the trigger for multi-page bvecs. Signed-off-by: Ming Lei --- block/bio.c | 32 +++++++++++++++++++++++++++----- fs/iomap.c | 2 +- fs/xfs/xfs_aops.c | 2 +- 3 files changed, 29 insertions(+), 7 deletions(-) diff --git a/block/bio.c b/block/bio.c index 0f1635b9ec50..854676edc438 100644 --- a/block/bio.c +++ b/block/bio.c @@ -823,7 +823,7 @@ EXPORT_SYMBOL(bio_add_pc_page); * @len: length of the data to add * @off: offset of the data in @page * - * Try to add the data at @page + @off to the last bvec of @bio. This is a + * Try to add the data at @page + @off to the last page of @bio. This is a * a useful optimisation for file systems with a block size smaller than the * page size. * @@ -836,10 +836,13 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page, return false; if (bio->bi_vcnt > 0) { - struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1]; + struct bio_vec bv; + struct bio_vec *seg = &bio->bi_io_vec[bio->bi_vcnt - 1]; - if (page == bv->bv_page && off == bv->bv_offset + bv->bv_len) { - bv->bv_len += len; + bvec_last_segment(seg, &bv); + + if (page == bv.bv_page && off == bv.bv_offset + bv.bv_len) { + seg->bv_len += len; bio->bi_iter.bi_size += len; return true; } @@ -848,6 +851,25 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page, } EXPORT_SYMBOL_GPL(__bio_try_merge_page); +static bool bio_try_merge_segment(struct bio *bio, struct page *page, + unsigned int len, unsigned int off) +{ + if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED))) + return false; + + if (bio->bi_vcnt > 0) { + struct bio_vec *seg = &bio->bi_io_vec[bio->bi_vcnt - 1]; + + if (page_to_phys(seg->bv_page) + seg->bv_offset + seg->bv_len == + page_to_phys(page) + off) { + seg->bv_len += len; + bio->bi_iter.bi_size += len; + return true; + } + } + return false; +} + /** * __bio_add_page - add page to a bio in a new segment * @bio: destination bio @@ -888,7 +910,7 @@ EXPORT_SYMBOL_GPL(__bio_add_page); int bio_add_page(struct bio *bio, struct page *page, unsigned int len, unsigned int offset) { - if (!__bio_try_merge_page(bio, page, len, offset)) { + if (!bio_try_merge_segment(bio, page, len, offset)) { if (bio_full(bio)) return 0; __bio_add_page(bio, page, len, offset); diff --git a/fs/iomap.c b/fs/iomap.c index f5fb8bf75cc8..ccc2ba115f4d 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -344,7 +344,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, ctx->bio->bi_end_io = iomap_read_end_io; } - __bio_add_page(ctx->bio, page, plen, poff); + bio_add_page(ctx->bio, page, plen, poff); done: /* * Move the caller 
beyond our range so that it keeps making progress. diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 1f1829e506e8..5c2190216614 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -621,7 +621,7 @@ xfs_add_to_ioend( atomic_inc(&iop->write_count); if (bio_full(wpc->ioend->io_bio)) xfs_chain_bio(wpc->ioend, wbc, bdev, sector); - __bio_add_page(wpc->ioend->io_bio, page, len, poff); + bio_add_page(wpc->ioend->io_bio, page, len, poff); } wpc->ioend->io_size += len; From patchwork Wed Nov 21 03:23:24 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 10691793 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1F98B13BB for ; Wed, 21 Nov 2018 03:28:57 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id F3B4C2AB6F for ; Wed, 21 Nov 2018 03:28:56 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id E74382B4A2; Wed, 21 Nov 2018 03:28:56 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 868C62AB6F for ; Wed, 21 Nov 2018 03:28:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727843AbeKUOBT (ORCPT ); Wed, 21 Nov 2018 09:01:19 -0500 Received: from mx1.redhat.com ([209.132.183.28]:43026 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727004AbeKUOBS (ORCPT ); Wed, 21 Nov 2018 09:01:18 -0500 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com [10.5.11.14]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 013083082E4D; Wed, 21 Nov 2018 03:28:47 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id C40FF7E383; Wed, 21 Nov 2018 03:28:32 +0000 (UTC) From: Ming Lei To: Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . 
Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 16/19] block: always define BIO_MAX_PAGES as 256 Date: Wed, 21 Nov 2018 11:23:24 +0800 Message-Id: <20181121032327.8434-17-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.46]); Wed, 21 Nov 2018 03:28:47 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to increase BIO_MAX_PAGES for it. CONFIG_THP_SWAP needs to split one THP into normal pages and adds them all to one bio. With multipage-bvec, it just takes one bvec to hold them all. Reviewed-by: Christoph Hellwig Signed-off-by: Ming Lei --- include/linux/bio.h | 8 -------- 1 file changed, 8 deletions(-) diff --git a/include/linux/bio.h b/include/linux/bio.h index 7edad188568a..e5b975fa0558 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -34,15 +34,7 @@ #define BIO_BUG_ON #endif -#ifdef CONFIG_THP_SWAP -#if HPAGE_PMD_NR > 256 -#define BIO_MAX_PAGES HPAGE_PMD_NR -#else #define BIO_MAX_PAGES 256 -#endif -#else -#define BIO_MAX_PAGES 256 -#endif #define bio_prio(bio) (bio)->bi_ioprio #define bio_set_prio(bio, prio) ((bio)->bi_ioprio = prio) From patchwork Wed Nov 21 03:23:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 10691807 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CCEFE16B1 for ; Wed, 21 Nov 2018 03:29:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BBBCD2B4D4 for ; Wed, 21 Nov 2018 03:29:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id AF2682B4E7; Wed, 21 Nov 2018 03:29:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 69D442B4D4 for ; Wed, 21 Nov 2018 03:29:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728383AbeKUOBa (ORCPT ); Wed, 21 Nov 2018 09:01:30 -0500 Received: from mx1.redhat.com ([209.132.183.28]:51938 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727004AbeKUOB3 (ORCPT ); Wed, 21 Nov 2018 09:01:29 -0500 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mx1.redhat.com (Postfix) with ESMTPS id 13802356D3; Wed, 21 Nov 2018 03:28:58 +0000 (UTC) Received: from localhost (ovpn-8-21.pek2.redhat.com [10.72.8.21]) by smtp.corp.redhat.com (Postfix) with ESMTP id 36DC810021B6; Wed, 21 Nov 2018 03:28:48 +0000 (UTC) From: Ming Lei To: 
Jens Axboe Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg , Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com, Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li , linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org, "Darrick J . Wong" , linux-xfs@vger.kernel.org, Gao Xiang , Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li , linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson , cluster-devel@redhat.com, Ming Lei Subject: [PATCH V11 17/19] block: document usage of bio iterator helpers Date: Wed, 21 Nov 2018 11:23:25 +0800 Message-Id: <20181121032327.8434-18-ming.lei@redhat.com> In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com> References: <20181121032327.8434-1-ming.lei@redhat.com> X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.30]); Wed, 21 Nov 2018 03:28:58 +0000 (UTC) Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Now that multi-page bvecs are supported, some helpers return data page by page while others return it segment by segment, so this patch documents the usage. Signed-off-by: Ming Lei --- Documentation/block/biovecs.txt | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/Documentation/block/biovecs.txt b/Documentation/block/biovecs.txt index 25689584e6e0..bb008f7afb05 100644 --- a/Documentation/block/biovecs.txt +++ b/Documentation/block/biovecs.txt @@ -117,3 +117,27 @@ Other implications: size limitations and the limitations of the underlying devices. Thus there's no need to define ->merge_bvec_fn() callbacks for individual block drivers. + +Usage of helpers: +================= + +* The following helpers whose names have the suffix of "_all" can only be used +on non-BIO_CLONED bio. They are usually used by filesystem code. Drivers +shouldn't use them because the bio may have been split before it reached the +driver. + + bio_for_each_segment_all() + bio_first_bvec_all() + bio_first_page_all() + bio_last_bvec_all() + +* The following helpers iterate over single-page bvecs. The passed 'struct +bio_vec' will contain a single-page IO vector during the iteration + + bio_for_each_segment() + bio_for_each_segment_all() + +* The following helper iterates over multi-page bvecs.
From patchwork Wed Nov 21 03:23:26 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691819
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg ,
 Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com,
 Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li ,
 linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org,
 "Darrick J. Wong" , linux-xfs@vger.kernel.org, Gao Xiang ,
 Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li ,
 linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson ,
 cluster-devel@redhat.com, Ming Lei
Subject: [PATCH V11 18/19] block: kill QUEUE_FLAG_NO_SG_MERGE
Date: Wed, 21 Nov 2018 11:23:26 +0800
Message-Id: <20181121032327.8434-19-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>
X-Mailing-List: linux-xfs@vger.kernel.org

Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after
splitting"), the physical segment number is mainly figured out in
blk_queue_split() for the fast path, and the BIO_SEG_VALID flag is set
there too. Now only blk_recount_segments() and blk_recalc_rq_segments()
use this flag.
Basically blk_recount_segments() is bypassed in the fast path given
BIO_SEG_VALID is set in blk_queue_split(). As for the remaining callers
of blk_recalc_rq_segments():

- it runs in the partial-completion branch of blk_update_request(),
  which is an unusual case

- it runs in blk_cloned_rq_check_limits(); still not a big problem if
  the flag is killed, since dm-rq is the only user

Multi-page bvec is enabled now, so not doing S/G merging is rather
pointless with the current setup of the I/O path: it isn't going to
save a significant number of cycles.

Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 block/blk-merge.c      | 31 ++++++-------------------------
 block/blk-mq-debugfs.c |  1 -
 block/blk-mq.c         |  3 ---
 drivers/md/dm-table.c  | 13 -------------
 include/linux/blkdev.h |  1 -
 5 files changed, 6 insertions(+), 43 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7c44216c1b58..8fcac7855a45 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -343,8 +343,7 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
 EXPORT_SYMBOL(blk_queue_split);

 static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
-					     struct bio *bio,
-					     bool no_sg_merge)
+					     struct bio *bio)
 {
 	struct bio_vec bv, bvprv = { NULL };
 	int cluster, prev = 0;
@@ -371,13 +370,6 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
 	nr_phys_segs = 0;
 	for_each_bio(bio) {
 		bio_for_each_bvec(bv, bio, iter) {
-			/*
-			 * If SG merging is disabled, each bio vector is
-			 * a segment
-			 */
-			if (no_sg_merge)
-				goto new_segment;
-
 			if (prev && cluster) {
 				if (seg_size + bv.bv_len
 				    > queue_max_segment_size(q))
@@ -412,27 +404,16 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,

 void blk_recalc_rq_segments(struct request *rq)
 {
-	bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE,
-			&rq->q->queue_flags);
-
-	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio,
-			no_sg_merge);
+	rq->nr_phys_segments = __blk_recalc_rq_segments(rq->q, rq->bio);
 }

 void blk_recount_segments(struct request_queue *q, struct bio *bio)
 {
-	unsigned short seg_cnt = bio_segments(bio);
-
-	if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) &&
-			(seg_cnt < queue_max_segments(q)))
-		bio->bi_phys_segments = seg_cnt;
-	else {
-		struct bio *nxt = bio->bi_next;
+	struct bio *nxt = bio->bi_next;

-		bio->bi_next = NULL;
-		bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false);
-		bio->bi_next = nxt;
-	}
+	bio->bi_next = NULL;
+	bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio);
+	bio->bi_next = nxt;

 	bio_set_flag(bio, BIO_SEG_VALID);
 }
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index a32bb79d6c95..d752fe4461af 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -127,7 +127,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(DEAD),
 	QUEUE_FLAG_NAME(INIT_DONE),
-	QUEUE_FLAG_NAME(NO_SG_MERGE),
 	QUEUE_FLAG_NAME(POLL),
 	QUEUE_FLAG_NAME(WC),
 	QUEUE_FLAG_NAME(FUA),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 32b246ed44c0..0375c3bd410e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2755,9 +2755,6 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,

 	q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;

-	if (!(set->flags & BLK_MQ_F_SG_MERGE))
-		blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
-
 	q->sg_reserved_size = INT_MAX;

 	INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 844f7d0f2ef8..a41832cf0c98 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1698,14 +1698,6 @@ static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
 	return q && !blk_queue_add_random(q);
 }

-static int queue_supports_sg_merge(struct dm_target *ti, struct dm_dev *dev,
-				   sector_t start, sector_t len, void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
-}
-
 static bool dm_table_all_devices_attribute(struct dm_table *t,
 					   iterate_devices_callout_fn func)
 {
@@ -1902,11 +1894,6 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (!dm_table_supports_write_zeroes(t))
 		q->limits.max_write_zeroes_sectors = 0;

-	if (dm_table_all_devices_attribute(t, queue_supports_sg_merge))
-		blk_queue_flag_clear(QUEUE_FLAG_NO_SG_MERGE, q);
-	else
-		blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
-
 	dm_table_verify_integrity(t);

 	/*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index a281b6737b61..8e05966ffe94 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -590,7 +590,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_FORCE  15	/* force complete on same CPU */
 #define QUEUE_FLAG_DEAD        16	/* queue tear-down finished */
 #define QUEUE_FLAG_INIT_DONE   17	/* queue is initialized */
-#define QUEUE_FLAG_NO_SG_MERGE 18	/* don't attempt to merge SG segments*/
 #define QUEUE_FLAG_POLL        19	/* IO polling enabled if set */
 #define QUEUE_FLAG_WC          20	/* Write back caching */
 #define QUEUE_FLAG_FUA         21	/* device supports FUA writes */
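For reference, the now-unconditional counting rule can be sketched in
userspace as follows. This is a simplification, not the kernel function: the
real __blk_recalc_rq_segments() also honors clustering and segment-boundary
masks, and count_phys_segments()/max_seg_size are illustrative names standing
in for the function and queue_max_segment_size(q).

	/*
	 * Sketch only: adjacent, physically contiguous vectors fuse into
	 * one physical segment until the segment-size limit is reached,
	 * which is why skipping the merge no longer saves anything.
	 */
	#include <stdbool.h>

	struct vec { unsigned long addr; unsigned int len; };

	static unsigned int count_phys_segments(const struct vec *v, int nr,
						unsigned int max_seg_size)
	{
		unsigned int nr_segs = 0, seg_size = 0;
		int i;

		for (i = 0; i < nr; i++) {
			bool contig = i &&
				v[i - 1].addr + v[i - 1].len == v[i].addr;

			if (contig && seg_size + v[i].len <= max_seg_size) {
				seg_size += v[i].len;	/* merge into segment */
			} else {
				nr_segs++;		/* start a new segment */
				seg_size = v[i].len;
			}
		}
		return nr_segs;
	}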
From patchwork Wed Nov 21 03:23:27 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10691831
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Theodore Ts'o , Omar Sandoval , Sagi Grimberg ,
 Dave Chinner , Kent Overstreet , Mike Snitzer , dm-devel@redhat.com,
 Alexander Viro , linux-fsdevel@vger.kernel.org, Shaohua Li ,
 linux-raid@vger.kernel.org, David Sterba , linux-btrfs@vger.kernel.org,
 "Darrick J. Wong" , linux-xfs@vger.kernel.org, Gao Xiang ,
 Christoph Hellwig , linux-ext4@vger.kernel.org, Coly Li ,
 linux-bcache@vger.kernel.org, Boaz Harrosh , Bob Peterson ,
 cluster-devel@redhat.com, Ming Lei
Subject: [PATCH V11 19/19] block: kill BLK_MQ_F_SG_MERGE
Date: Wed, 21 Nov 2018 11:23:27 +0800
Message-Id: <20181121032327.8434-20-ming.lei@redhat.com>
In-Reply-To: <20181121032327.8434-1-ming.lei@redhat.com>
References: <20181121032327.8434-1-ming.lei@redhat.com>
X-Mailing-List: linux-xfs@vger.kernel.org

QUEUE_FLAG_NO_SG_MERGE has been killed, so kill BLK_MQ_F_SG_MERGE too.

Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
 block/blk-mq-debugfs.c       | 1 -
 drivers/block/loop.c         | 2 +-
 drivers/block/nbd.c          | 2 +-
 drivers/block/rbd.c          | 2 +-
 drivers/block/skd_main.c     | 1 -
 drivers/block/xen-blkfront.c | 2 +-
 drivers/md/dm-rq.c           | 2 +-
 drivers/mmc/core/queue.c     | 3 +--
 drivers/scsi/scsi_lib.c      | 2 +-
 include/linux/blk-mq.h       | 1 -
 10 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index d752fe4461af..a6ec055b54fa 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -249,7 +249,6 @@ static const char *const alloc_policy_name[] = {
 static const char *const hctx_flag_name[] = {
 	HCTX_FLAG_NAME(SHOULD_MERGE),
 	HCTX_FLAG_NAME(TAG_SHARED),
-	HCTX_FLAG_NAME(SG_MERGE),
 	HCTX_FLAG_NAME(BLOCKING),
 	HCTX_FLAG_NAME(NO_SCHED),
 };
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index e3683211f12d..4cf5486689de 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1906,7 +1906,7 @@ static int loop_add(struct loop_device **l, int i)
 	lo->tag_set.queue_depth = 128;
 	lo->tag_set.numa_node = NUMA_NO_NODE;
 	lo->tag_set.cmd_size = sizeof(struct loop_cmd);
-	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	lo->tag_set.driver_data = lo;

 	err = blk_mq_alloc_tag_set(&lo->tag_set);
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 08696f5f00bb..999c94de78e5 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1570,7 +1570,7 @@ static int nbd_dev_add(int index)
 	nbd->tag_set.numa_node = NUMA_NO_NODE;
 	nbd->tag_set.cmd_size = sizeof(struct nbd_cmd);
 	nbd->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
-		BLK_MQ_F_SG_MERGE | BLK_MQ_F_BLOCKING;
+		BLK_MQ_F_BLOCKING;
 	nbd->tag_set.driver_data = nbd;

 	err = blk_mq_alloc_tag_set(&nbd->tag_set);
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 8e5140bbf241..3dfd300b5283 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -3988,7 +3988,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	rbd_dev->tag_set.ops = &rbd_mq_ops;
 	rbd_dev->tag_set.queue_depth = rbd_dev->opts->queue_depth;
 	rbd_dev->tag_set.numa_node = NUMA_NO_NODE;
-	rbd_dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	rbd_dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	rbd_dev->tag_set.nr_hw_queues = 1;
 	rbd_dev->tag_set.cmd_size = sizeof(struct work_struct);
diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index a10d5736d8f7..a7040f9a1b1b 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -2843,7 +2843,6 @@ static int skd_cons_disk(struct skd_device *skdev)
 		skdev->sgs_per_request * sizeof(struct scatterlist);
 	skdev->tag_set.numa_node = NUMA_NO_NODE;
 	skdev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
-		BLK_MQ_F_SG_MERGE |
 		BLK_ALLOC_POLICY_TO_MQ_FLAG(BLK_TAG_ALLOC_FIFO);
 	skdev->tag_set.driver_data = skdev;
 	rc = blk_mq_alloc_tag_set(&skdev->tag_set);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 0ed4b200fa58..d43a5677ccbc 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -977,7 +977,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	} else
 		info->tag_set.queue_depth = BLK_RING_SIZE(info);
 	info->tag_set.numa_node = NUMA_NO_NODE;
-	info->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	info->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	info->tag_set.cmd_size = sizeof(struct blkif_req);
 	info->tag_set.driver_data = info;
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 1f1fe9a618ea..afbac62a02a2 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -536,7 +536,7 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	md->tag_set->ops = &dm_mq_ops;
 	md->tag_set->queue_depth = dm_get_blk_mq_queue_depth();
 	md->tag_set->numa_node = md->numa_node_id;
-	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
 	md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
 	md->tag_set->driver_data = md;
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 35cc138b096d..cc19e71c71d4 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -410,8 +410,7 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
 	else
 		mq->tag_set.queue_depth = MMC_QUEUE_DEPTH;
 	mq->tag_set.numa_node = NUMA_NO_NODE;
-	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE |
-		BLK_MQ_F_BLOCKING;
+	mq->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
 	mq->tag_set.nr_hw_queues = 1;
 	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
 	mq->tag_set.driver_data = mq;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 0df15cb738d2..4091a67d23e5 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1890,7 +1890,7 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	shost->tag_set.queue_depth = shost->can_queue;
 	shost->tag_set.cmd_size = cmd_size;
 	shost->tag_set.numa_node = NUMA_NO_NODE;
-	shost->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+	shost->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	shost->tag_set.flags |=
 		BLK_ALLOC_POLICY_TO_MQ_FLAG(shost->hostt->tag_alloc_policy);
 	shost->tag_set.driver_data = shost;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 929e8abc5535..ca7389d7e04f 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -211,7 +211,6 @@ struct blk_mq_ops {
 enum {
 	BLK_MQ_F_SHOULD_MERGE	= 1 << 0,
 	BLK_MQ_F_TAG_SHARED	= 1 << 1,
-	BLK_MQ_F_SG_MERGE	= 1 << 2,
 	BLK_MQ_F_BLOCKING	= 1 << 5,
 	BLK_MQ_F_NO_SCHED	= 1 << 6,
 	BLK_MQ_F_ALLOC_POLICY_START_BIT = 8,
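After patches 18 and 19, a driver's tag_set initialization loses nothing but
the flag bit. A minimal hedged sketch follows; my_mq_ops, my_cmd,
my_init_tag_set() and the queue depth are illustrative placeholders, not
names from this series:

	/*
	 * Sketch of a blk-mq driver's tag_set setup once BLK_MQ_F_SG_MERGE
	 * is gone: S/G merging is always on, so there is no bit to decide
	 * about any more.
	 */
	#include <linux/blk-mq.h>

	struct my_cmd { int dummy; };		/* illustrative per-request data */
	static const struct blk_mq_ops my_mq_ops; /* illustrative; fill in .queue_rq etc. */
	static struct blk_mq_tag_set tag_set;

	static int my_init_tag_set(void)
	{
		tag_set.ops = &my_mq_ops;
		tag_set.nr_hw_queues = 1;
		tag_set.queue_depth = 128;
		tag_set.numa_node = NUMA_NO_NODE;
		tag_set.cmd_size = sizeof(struct my_cmd);
		/* BLK_MQ_F_SG_MERGE no longer exists; only SHOULD_MERGE is left */
		tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
		return blk_mq_alloc_tag_set(&tag_set);
	}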