From patchwork Sat Jun 9 12:30:03 2018
X-Patchwork-Submitter: Ming Lei <ming.lei@redhat.com>
X-Patchwork-Id: 10455697
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Alexander Viro, Kent Overstreet
Cc: David Sterba, Huang Ying, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
 Filipe Manana, Randy Dunlap, Ming Lei
Subject: [PATCH V6 19/30] md/dm/bcache: convert to
 bio_for_each_chunk_segment_all and bio_for_each_chunk_all
Date: Sat, 9 Jun 2018 20:30:03 +0800
Message-Id: <20180609123014.8861-20-ming.lei@redhat.com>
In-Reply-To: <20180609123014.8861-1-ming.lei@redhat.com>
References: <20180609123014.8861-1-ming.lei@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

bio_for_each_segment_all() can no longer be used once multipage bvecs
are enabled, so these users have to be converted to the new helpers.

In bch_bio_alloc_pages(), bio_for_each_chunk_all() is fine because that
helper may only be used on a freshly allocated bio, whose bvec table it
fills in. The other sites only read each segment and do not need to
update the bvec table, so they are converted to
bio_for_each_chunk_segment_all().
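For illustration only (not part of the patch), here is a minimal sketch of
the read-only iteration style used by the conversions below, assuming the
bio_for_each_chunk_segment_all()/struct bvec_chunk_iter interface introduced
earlier in this series; the zero_fill_bio_pages() helper is a made-up
example, not existing kernel code:

	/*
	 * Walk every single-page segment of a bio without modifying the
	 * bvec table; assumes the pages are not highmem, so page_address()
	 * is valid.
	 */
	static void zero_fill_bio_pages(struct bio *bio)
	{
		struct bio_vec *bv;
		struct bvec_chunk_iter citer;
		int i;

		bio_for_each_chunk_segment_all(bv, bio, i, citer)
			memset(page_address(bv->bv_page) + bv->bv_offset, 0,
			       bv->bv_len);
	}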
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/bcache/btree.c | 3 ++-
 drivers/md/bcache/util.c  | 2 +-
 drivers/md/dm-crypt.c     | 3 ++-
 drivers/md/raid1.c        | 3 ++-
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 2a0968c04e21..dc0747c37bdf 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -423,8 +423,9 @@ static void do_btree_node_write(struct btree *b)
 		int j;
 		struct bio_vec *bv;
 		void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1));
+		struct bvec_chunk_iter citer;
 
-		bio_for_each_segment_all(bv, b->bio, j)
+		bio_for_each_chunk_segment_all(bv, b->bio, j, citer)
 			memcpy(page_address(bv->bv_page),
 			       base + j * PAGE_SIZE, PAGE_SIZE);
 
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index fc479b026d6d..2f05199f7edb 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -268,7 +268,7 @@ int bch_bio_alloc_pages(struct bio *bio, gfp_t gfp_mask)
 	int i;
 	struct bio_vec *bv;
 
-	bio_for_each_segment_all(bv, bio, i) {
+	bio_for_each_chunk_all(bv, bio, i) {
 		bv->bv_page = alloc_page(gfp_mask);
 		if (!bv->bv_page) {
 			while (--bv >= bio->bi_io_vec)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index da02f4d8e4b9..637ef1b1dc43 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1450,8 +1450,9 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone)
 {
 	unsigned int i;
 	struct bio_vec *bv;
+	struct bvec_chunk_iter citer;
 
-	bio_for_each_segment_all(bv, clone, i) {
+	bio_for_each_chunk_segment_all(bv, clone, i, citer) {
 		BUG_ON(!bv->bv_page);
 		mempool_free(bv->bv_page, &cc->page_pool);
 	}
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index bad28520719b..2a4f1037c680 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2116,13 +2116,14 @@ static void process_checks(struct r1bio *r1_bio)
 		struct page **spages = get_resync_pages(sbio)->pages;
 		struct bio_vec *bi;
 		int page_len[RESYNC_PAGES] = { 0 };
+		struct bvec_chunk_iter citer;
 
 		if (sbio->bi_end_io != end_sync_read)
 			continue;
 		/* Now we can 'fixup' the error value */
 		sbio->bi_status = 0;
 
-		bio_for_each_segment_all(bi, sbio, j)
+		bio_for_each_chunk_segment_all(bi, sbio, j, citer)
 			page_len[j] = bi->bv_len;
 
 		if (!status) {