From patchwork Wed Feb 2 16:00:57 2022
X-Patchwork-Id: 12733223
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 01/13] drbd: set ->bi_bdev in drbd_req_new
Date: Wed, 2 Feb 2022 17:00:57 +0100
Message-Id: <20220202160109.108149-2-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Make sure the newly allocated bio has the correct bi_bdev set from the
start.

Signed-off-by: Christoph Hellwig
---
 drivers/block/drbd/drbd_req.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index 3235532ae0778..8d44e96c4c4ef 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -31,6 +31,7 @@ static struct drbd_request *drbd_req_new(struct drbd_device *device, struct bio
 	memset(req, 0, sizeof(*req));
 	req->private_bio = bio_clone_fast(bio_src, GFP_NOIO, &drbd_io_bio_set);
+	bio_set_dev(req->private_bio, device->ldev->backing_bdev);
 	req->private_bio->bi_private = req;
 	req->private_bio->bi_end_io = drbd_request_endio;
@@ -1151,8 +1152,6 @@ drbd_submit_req_private_bio(struct drbd_request *req)
 	else
 		type = DRBD_FAULT_DT_RD;
-	bio_set_dev(bio, device->ldev->backing_bdev);
-
 	/* State may have changed since we grabbed our reference on the
 	 * ->ldev member. Double check, and short-circuit to endio.
 	 * In case the last activity log transaction failed to get on

From patchwork Wed Feb 2 16:00:58 2022
X-Patchwork-Id: 12733224
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 02/13] dm: add a clone_to_tio helper
Date: Wed, 2 Feb 2022 17:00:58 +0100
Message-Id: <20220202160109.108149-3-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Add a helper to stop open coding the container_of operations to get from
the clone bio to the tio structure.
Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index fa596b654c99c..5543e18f3c3bc 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -79,10 +79,14 @@ struct clone_info {
 #define DM_IO_BIO_OFFSET \
 	(offsetof(struct dm_target_io, clone) + offsetof(struct dm_io, tio))
+static inline struct dm_target_io *clone_to_tio(struct bio *clone)
+{
+	return container_of(clone, struct dm_target_io, clone);
+}
+
 void *dm_per_bio_data(struct bio *bio, size_t data_size)
 {
-	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
-	if (!tio->inside_dm_io)
+	if (!clone_to_tio(bio)->inside_dm_io)
 		return (char *)bio - DM_TARGET_IO_BIO_OFFSET - data_size;
 	return (char *)bio - DM_IO_BIO_OFFSET - data_size;
 }
@@ -477,10 +481,7 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
 u64 dm_start_time_ns_from_clone(struct bio *bio)
 {
-	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
-	struct dm_io *io = tio->io;
-
-	return jiffies_to_nsecs(io->start_time);
+	return jiffies_to_nsecs(clone_to_tio(bio)->io->start_time);
 }
 EXPORT_SYMBOL_GPL(dm_start_time_ns_from_clone);
@@ -521,7 +522,7 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
 	clone = bio_alloc_bioset(NULL, 0, 0, GFP_NOIO, &md->io_bs);
-	tio = container_of(clone, struct dm_target_io, clone);
+	tio = clone_to_tio(clone);
 	tio->inside_dm_io = true;
 	tio->io = NULL;
@@ -557,7 +558,7 @@ static struct dm_target_io *alloc_tio(struct clone_info *ci, struct dm_target *t
 		if (!clone)
 			return NULL;
-		tio = container_of(clone, struct dm_target_io, clone);
+		tio = clone_to_tio(clone);
 		tio->inside_dm_io = false;
 	}
@@ -878,7 +879,7 @@ static bool swap_bios_limit(struct dm_target *ti, struct bio *bio)
 static void clone_endio(struct bio *bio)
 {
 	blk_status_t error = bio->bi_status;
-	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
+	struct dm_target_io *tio = clone_to_tio(bio);
 	struct dm_io *io = tio->io;
 	struct mapped_device *md = tio->io->md;
 	dm_endio_fn endio = tio->ti->type->end_io;
@@ -1084,7 +1085,7 @@ static int dm_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
  */
 void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
 {
-	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
+	struct dm_target_io *tio = clone_to_tio(bio);
 	unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT;
 	BUG_ON(bio->bi_opf & REQ_PREFLUSH);
@@ -1257,10 +1258,8 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 		if (bio_nr == num_bios)
 			return;
-		while ((bio = bio_list_pop(blist))) {
-			tio = container_of(bio, struct dm_target_io, clone);
-			free_tio(tio);
-		}
+		while ((bio = bio_list_pop(blist)))
+			free_tio(clone_to_tio(bio));
 	}
 }
@@ -1282,14 +1281,11 @@ static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 {
 	struct bio_list blist = BIO_EMPTY_LIST;
 	struct bio *bio;
-	struct dm_target_io *tio;
 	alloc_multiple_bios(&blist, ci, ti, num_bios);
-	while ((bio = bio_list_pop(&blist))) {
-		tio = container_of(bio, struct dm_target_io, clone);
-		__clone_and_map_simple_bio(ci, tio, len);
-	}
+	while ((bio = bio_list_pop(&blist)))
+		__clone_and_map_simple_bio(ci, clone_to_tio(bio), len);
 }
 static int __send_empty_flush(struct clone_info *ci)
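
The clone_to_tio() helper added above is the standard container_of pattern: given a pointer to a member embedded in a larger structure, recover the enclosing structure. Below is a minimal, self-contained userspace sketch of the same idea; the struct names are simplified stand-ins for the dm types, not the real kernel definitions.

#include <stddef.h>
#include <stdio.h>

/* Userspace equivalent of the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins for struct bio / struct dm_target_io. */
struct bio {
	unsigned int bi_size;
};

struct dm_target_io {
	int magic;
	struct bio clone;	/* embedded member, as in dm */
};

/* Same shape as the clone_to_tio() helper added by the patch. */
static inline struct dm_target_io *clone_to_tio(struct bio *clone)
{
	return container_of(clone, struct dm_target_io, clone);
}

int main(void)
{
	struct dm_target_io tio = { .magic = 42, .clone = { .bi_size = 512 } };
	struct bio *clone = &tio.clone;	/* only the embedded bio is passed around */

	/* Recover the wrapper from the embedded member. */
	printf("magic=%d size=%u\n", clone_to_tio(clone)->magic, clone->bi_size);
	return 0;
}

Keeping the member/type pairing behind one helper is what lets the later patches in this series pass the embedded bio around instead of the tio wrapper.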
From patchwork Wed Feb 2 16:00:59 2022
X-Patchwork-Id: 12733225
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 03/13] dm: fold clone_bio into __clone_and_map_data_bio
Date: Wed, 2 Feb 2022 17:00:59 +0100
Message-Id: <20220202160109.108149-4-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Fold clone_bio into its only caller to prepare for refactoring.

Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 43 +++++++++++++++++--------------------------
 1 file changed, 17 insertions(+), 26 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 5543e18f3c3bc..9384d250a3e4e 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1190,17 +1190,22 @@ static void bio_setup_sector(struct bio *bio, sector_t sector, unsigned len)
 /*
  * Creates a bio that consists of range of complete bvecs.
  */
-static int clone_bio(struct dm_target_io *tio, struct bio *bio,
-		     sector_t sector, unsigned len)
+static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
+				    sector_t sector, unsigned *len)
 {
-	struct bio *clone = &tio->clone;
+	struct bio *bio = ci->bio, *clone;
+	struct dm_target_io *tio;
 	int r;
+	tio = alloc_tio(ci, ti, 0, GFP_NOIO);
+	tio->len_ptr = len;
+
+	clone = &tio->clone;
 	__bio_clone_fast(clone, bio);
 	r = bio_crypt_clone(clone, bio, GFP_NOIO);
 	if (r < 0)
-		return r;
+		goto free_tio;
 	if (bio_integrity(bio)) {
 		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
@@ -1208,21 +1213,26 @@ static int clone_bio(struct dm_target_io *tio, struct bio *bio,
 			DMWARN("%s: the target %s doesn't support integrity data.",
 				dm_device_name(tio->io->md),
 				tio->ti->type->name);
-			return -EIO;
+			r = -EIO;
+			goto free_tio;
 		}
 		r = bio_integrity_clone(clone, bio, GFP_NOIO);
 		if (r < 0)
-			return r;
+			goto free_tio;
 	}
 	bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
-	clone->bi_iter.bi_size = to_bytes(len);
+	clone->bi_iter.bi_size = to_bytes(*len);
 	if (bio_integrity(bio))
 		bio_integrity_trim(clone);
+	__map_bio(tio);
 	return 0;
+free_tio:
+	free_tio(tio);
+	return r;
 }
 static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
@@ -1313,25 +1323,6 @@ static int __send_empty_flush(struct clone_info *ci)
 	return 0;
 }
-static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
-				    sector_t sector, unsigned *len)
-{
-	struct bio *bio = ci->bio;
-	struct dm_target_io *tio;
-	int r;
-
-	tio = alloc_tio(ci, ti, 0, GFP_NOIO);
-	tio->len_ptr = len;
-	r = clone_bio(tio, bio, sector, *len);
-	if (r < 0) {
-		free_tio(tio);
-		return r;
-	}
-	__map_bio(tio);
-
-	return 0;
-}
-
 static int __send_changing_extent_only(struct clone_info *ci, struct dm_target *ti,
 				       unsigned num_bios)
 {
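
Folding clone_bio() into its caller lets a single error label release everything allocated so far instead of each caller repeating the cleanup. The following stand-alone sketch shows that allocate, set up, unwind-on-error shape with goto labels; the names are invented for illustration and this is not the dm code itself.

#include <stdlib.h>
#include <string.h>

struct clone {
	char *data;
	size_t len;
};

/* Allocate, then perform a setup step that may fail; the error labels free
 * everything acquired so far, the shape __clone_and_map_data_bio takes on
 * after the fold. */
static struct clone *clone_and_setup(const char *src, size_t len, int fail_setup)
{
	struct clone *c = malloc(sizeof(*c));

	if (!c)
		return NULL;
	c->data = malloc(len);
	if (!c->data)
		goto free_clone;

	memcpy(c->data, src, len);
	c->len = len;
	if (fail_setup)		/* stands in for a failing crypt/integrity clone */
		goto free_data;
	return c;

free_data:
	free(c->data);
free_clone:
	free(c);
	return NULL;
}

int main(void)
{
	struct clone *c = clone_and_setup("abc", 3, 0);

	if (c) {
		free(c->data);
		free(c);
	}
	return 0;
}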
From patchwork Wed Feb 2 16:01:00 2022
X-Patchwork-Id: 12733226
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 04/13] dm: fold __send_duplicate_bios into __clone_and_map_simple_bio
Date: Wed, 2 Feb 2022 17:01:00 +0100
Message-Id: <20220202160109.108149-5-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Fold __send_duplicate_bios into its only caller to prepare for
refactoring.

Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 9384d250a3e4e..2527b287ead0f 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1273,29 +1273,24 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 	}
 }
-static void __clone_and_map_simple_bio(struct clone_info *ci,
-				       struct dm_target_io *tio, unsigned *len)
-{
-	struct bio *clone = &tio->clone;
-
-	tio->len_ptr = len;
-
-	__bio_clone_fast(clone, ci->bio);
-	if (len)
-		bio_setup_sector(clone, ci->sector, *len);
-	__map_bio(tio);
-}
-
 static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 				  unsigned num_bios, unsigned *len)
 {
 	struct bio_list blist = BIO_EMPTY_LIST;
-	struct bio *bio;
+	struct bio *clone;
 	alloc_multiple_bios(&blist, ci, ti, num_bios);
-	while ((bio = bio_list_pop(&blist)))
-		__clone_and_map_simple_bio(ci, clone_to_tio(bio), len);
+	while ((clone = bio_list_pop(&blist))) {
+		struct dm_target_io *tio = clone_to_tio(clone);
+
+		tio->len_ptr = len;
+
+		__bio_clone_fast(clone, ci->bio);
+		if (len)
+			bio_setup_sector(clone, ci->sector, *len);
+		__map_bio(tio);
+	}
 }
 static int __send_empty_flush(struct clone_info *ci)
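
alloc_multiple_bios() and __send_duplicate_bios() hand clones around through a struct bio_list, an intrusive singly linked list threaded through the bios themselves. A small userspace sketch of that pattern follows, with simplified stand-in types rather than the block layer definitions.

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins: the next pointer lives inside the item itself. */
struct bio {
	int nr;
	struct bio *bi_next;
};

struct bio_list {
	struct bio *head, *tail;
};

static void bio_list_add(struct bio_list *bl, struct bio *bio)
{
	bio->bi_next = NULL;
	if (bl->tail)
		bl->tail->bi_next = bio;
	else
		bl->head = bio;
	bl->tail = bio;
}

static struct bio *bio_list_pop(struct bio_list *bl)
{
	struct bio *bio = bl->head;

	if (bio) {
		bl->head = bio->bi_next;
		if (!bl->head)
			bl->tail = NULL;
		bio->bi_next = NULL;
	}
	return bio;
}

int main(void)
{
	struct bio_list bl = { NULL, NULL };
	struct bio *bio;

	for (int i = 0; i < 3; i++) {
		bio = malloc(sizeof(*bio));
		bio->nr = i;
		bio_list_add(&bl, bio);
	}
	/* Drain the list the way __send_duplicate_bios() drains its blist. */
	while ((bio = bio_list_pop(&bl))) {
		printf("mapping clone %d\n", bio->nr);
		free(bio);
	}
	return 0;
}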
From patchwork Wed Feb 2 16:01:01 2022
X-Patchwork-Id: 12733227
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 05/13] dm: move cloning the bio into alloc_tio
Date: Wed, 2 Feb 2022 17:01:01 +0100
Message-Id: <20220202160109.108149-6-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Move the call to __bio_clone_fast and the assignment of ->len_ptr from
the callers into alloc_tio to prepare for changes to the bio clone API.

Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 2527b287ead0f..90341b7fa5809 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -545,7 +545,7 @@ static void free_io(struct mapped_device *md, struct dm_io *io)
 }
 static struct dm_target_io *alloc_tio(struct clone_info *ci, struct dm_target *ti,
-		unsigned target_bio_nr, gfp_t gfp_mask)
+		unsigned target_bio_nr, unsigned *len, gfp_t gfp_mask)
 {
 	struct dm_target_io *tio;
@@ -561,11 +561,13 @@ static struct dm_target_io *alloc_tio(struct clone_info *ci, struct dm_target *t
 		tio = clone_to_tio(clone);
 		tio->inside_dm_io = false;
 	}
+	__bio_clone_fast(&tio->clone, ci->bio);
 	tio->magic = DM_TIO_MAGIC;
 	tio->io = ci->io;
 	tio->ti = ti;
 	tio->target_bio_nr = target_bio_nr;
+	tio->len_ptr = len;
 	return tio;
 }
@@ -1197,11 +1199,8 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 	struct dm_target_io *tio;
 	int r;
-	tio = alloc_tio(ci, ti, 0, GFP_NOIO);
-	tio->len_ptr = len;
-
+	tio = alloc_tio(ci, ti, 0, len, GFP_NOIO);
 	clone = &tio->clone;
-	__bio_clone_fast(clone, bio);
 	r = bio_crypt_clone(clone, bio, GFP_NOIO);
 	if (r < 0)
@@ -1236,7 +1235,8 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 }
 static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
-		struct dm_target *ti, unsigned num_bios)
+		struct dm_target *ti, unsigned num_bios,
+		unsigned *len)
 {
 	struct dm_target_io *tio;
 	int try;
@@ -1245,7 +1245,7 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 		return;
 	if (num_bios == 1) {
-		tio = alloc_tio(ci, ti, 0, GFP_NOIO);
+		tio = alloc_tio(ci, ti, 0, len, GFP_NOIO);
 		bio_list_add(blist, &tio->clone);
 		return;
 	}
@@ -1257,7 +1257,8 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 		if (try)
 			mutex_lock(&ci->io->md->table_devices_lock);
 		for (bio_nr = 0; bio_nr < num_bios; bio_nr++) {
-			tio = alloc_tio(ci, ti, bio_nr, try ? GFP_NOIO : GFP_NOWAIT);
+			tio = alloc_tio(ci, ti, bio_nr, len,
+					try ? GFP_NOIO : GFP_NOWAIT);
 			if (!tio)
 				break;
@@ -1279,14 +1280,11 @@ static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 	struct bio_list blist = BIO_EMPTY_LIST;
 	struct bio *clone;
-	alloc_multiple_bios(&blist, ci, ti, num_bios);
+	alloc_multiple_bios(&blist, ci, ti, num_bios, len);
 	while ((clone = bio_list_pop(&blist))) {
 		struct dm_target_io *tio = clone_to_tio(clone);
-		tio->len_ptr = len;
-
-		__bio_clone_fast(clone, ci->bio);
 		if (len)
 			bio_setup_sector(clone, ci->sector, *len);
 		__map_bio(tio);

From patchwork Wed Feb 2 16:01:02 2022
X-Patchwork-Id: 12733228
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 06/13] dm: pass the bio instead of tio to __map_bio
Date: Wed, 2 Feb 2022 17:01:02 +0100
Message-Id: <20220202160109.108149-7-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

This simplifies the callers a bit.
Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 90341b7fa5809..a43d280e9bc54 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1117,11 +1117,11 @@ static noinline void __set_swap_bios_limit(struct mapped_device *md, int latch)
 	mutex_unlock(&md->swap_bios_lock);
 }
-static void __map_bio(struct dm_target_io *tio)
+static void __map_bio(struct bio *clone)
 {
+	struct dm_target_io *tio = clone_to_tio(clone);
 	int r;
 	sector_t sector;
-	struct bio *clone = &tio->clone;
 	struct dm_io *io = tio->io;
 	struct dm_target *ti = tio->ti;
@@ -1227,7 +1227,7 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 	if (bio_integrity(bio))
 		bio_integrity_trim(clone);
-	__map_bio(tio);
+	__map_bio(clone);
 	return 0;
 free_tio:
 	free_tio(tio);
@@ -1283,11 +1283,9 @@ static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 	alloc_multiple_bios(&blist, ci, ti, num_bios, len);
 	while ((clone = bio_list_pop(&blist))) {
-		struct dm_target_io *tio = clone_to_tio(clone);
-
 		if (len)
 			bio_setup_sector(clone, ci->sector, *len);
-		__map_bio(tio);
+		__map_bio(clone);
 	}
 }
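
Patches 6 and 7 move the internal API over to the embedded clone bio: allocation hands back &tio->clone and the free side takes the clone and looks the wrapper back up. The sketch below shows that API shape in plain userspace C, reusing the container_of idiom from the earlier example; the names are illustrative only, not the dm code.

#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct bio { unsigned int bi_size; };

struct dm_target_io {
	int target_bio_nr;
	struct bio clone;		/* embedded member */
};

static struct dm_target_io *clone_to_tio(struct bio *clone)
{
	return container_of(clone, struct dm_target_io, clone);
}

/* Allocation hands back the embedded bio, like the reworked alloc_tio(). */
static struct bio *alloc_tio(int nr)
{
	struct dm_target_io *tio = calloc(1, sizeof(*tio));

	if (!tio)
		return NULL;
	tio->target_bio_nr = nr;
	return &tio->clone;
}

/* The free side takes the bio and recovers the wrapper, like free_tio(). */
static void free_tio(struct bio *clone)
{
	free(clone_to_tio(clone));
}

int main(void)
{
	struct bio *clone = alloc_tio(7);

	if (clone) {
		printf("bio_nr=%d\n", clone_to_tio(clone)->target_bio_nr);
		free_tio(clone);
	}
	return 0;
}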
From patchwork Wed Feb 2 16:01:03 2022
X-Patchwork-Id: 12733229
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 07/13] dm: return the clone bio from alloc_tio
Date: Wed, 2 Feb 2022 17:01:03 +0100
Message-Id: <20220202160109.108149-8-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Return the clone bio embedded into the tio, as that is what the callers
actually want. The same applies to the free side.

Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index a43d280e9bc54..c05b6ff1bb957 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -544,7 +544,7 @@ static void free_io(struct mapped_device *md, struct dm_io *io)
 	bio_put(&io->tio.clone);
 }
-static struct dm_target_io *alloc_tio(struct clone_info *ci, struct dm_target *ti,
+static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
 		unsigned target_bio_nr, unsigned *len, gfp_t gfp_mask)
 {
 	struct dm_target_io *tio;
@@ -569,14 +569,14 @@ static struct dm_target_io *alloc_tio(struct clone_info *ci, struct dm_target *t
 	tio->target_bio_nr = target_bio_nr;
 	tio->len_ptr = len;
-	return tio;
+	return &tio->clone;
 }
-static void free_tio(struct dm_target_io *tio)
+static void free_tio(struct bio *clone)
 {
-	if (tio->inside_dm_io)
+	if (clone_to_tio(clone)->inside_dm_io)
 		return;
-	bio_put(&tio->clone);
+	bio_put(clone);
 }
 /*
@@ -932,7 +932,7 @@ static void clone_endio(struct bio *bio)
 		up(&md->swap_bios_semaphore);
 	}
-	free_tio(tio);
+	free_tio(bio);
 	dm_io_dec_pending(io, error);
 }
@@ -1166,7 +1166,7 @@ static void __map_bio(struct bio *clone)
 			struct mapped_device *md = io->md;
 			up(&md->swap_bios_semaphore);
 		}
-		free_tio(tio);
+		free_tio(clone);
 		dm_io_dec_pending(io, BLK_STS_IOERR);
 		break;
 	case DM_MAPIO_REQUEUE:
@@ -1174,7 +1174,7 @@ static void __map_bio(struct bio *clone)
 			struct mapped_device *md = io->md;
 			up(&md->swap_bios_semaphore);
 		}
-		free_tio(tio);
+		free_tio(clone);
 		dm_io_dec_pending(io, BLK_STS_DM_REQUEUE);
 		break;
 	default:
@@ -1196,17 +1196,17 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 		sector_t sector, unsigned *len)
 {
 	struct bio *bio = ci->bio, *clone;
-	struct dm_target_io *tio;
 	int r;
-	tio = alloc_tio(ci, ti, 0, len, GFP_NOIO);
-	clone = &tio->clone;
+	clone = alloc_tio(ci, ti, 0, len, GFP_NOIO);
 	r = bio_crypt_clone(clone, bio, GFP_NOIO);
 	if (r < 0)
 		goto free_tio;
 	if (bio_integrity(bio)) {
+		struct dm_target_io *tio = clone_to_tio(clone);
+
 		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
 			     !dm_target_passes_integrity(tio->ti->type))) {
 			DMWARN("%s: the target %s doesn't support integrity data.",
@@ -1230,7 +1230,7 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 	__map_bio(clone);
 	return 0;
 free_tio:
-	free_tio(tio);
+	free_tio(clone);
 	return r;
 }
@@ -1238,31 +1238,30 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 		struct dm_target *ti, unsigned num_bios,
 		unsigned *len)
 {
-	struct dm_target_io *tio;
+	struct bio *bio;
 	int try;
 	if (!num_bios)
 		return;
 	if (num_bios == 1) {
-		tio = alloc_tio(ci, ti, 0, len, GFP_NOIO);
-		bio_list_add(blist, &tio->clone);
+		bio = alloc_tio(ci, ti, 0, len, GFP_NOIO);
+		bio_list_add(blist, bio);
 		return;
 	}
 	for (try = 0; try < 2; try++) {
 		int bio_nr;
-		struct bio *bio;
 		if (try)
 			mutex_lock(&ci->io->md->table_devices_lock);
 		for (bio_nr = 0; bio_nr < num_bios; bio_nr++) {
-			tio = alloc_tio(ci, ti, bio_nr, len,
+			bio = alloc_tio(ci, ti, bio_nr, len,
 					try ? GFP_NOIO : GFP_NOWAIT);
-			if (!tio)
+			if (!bio)
 				break;
-			bio_list_add(blist, &tio->clone);
+			bio_list_add(blist, bio);
 		}
 		if (try)
 			mutex_unlock(&ci->io->md->table_devices_lock);
@@ -1270,7 +1269,7 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 			return;
 		while ((bio = bio_list_pop(blist)))
-			free_tio(clone_to_tio(bio));
+			free_tio(bio);
 	}
 }

From patchwork Wed Feb 2 16:01:04 2022
X-Patchwork-Id: 12733230
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 08/13] dm: simplify the single bio fast path in __send_duplicate_bios
Date: Wed, 2 Feb 2022 17:01:04 +0100
Message-Id: <20220202160109.108149-9-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Most targets just need a single flush bio. Open code that case in
__send_duplicate_bios without the need to add the bio to a list.
Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c05b6ff1bb957..78df75f57288b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1241,15 +1241,6 @@ static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
 	struct bio *bio;
 	int try;
-	if (!num_bios)
-		return;
-
-	if (num_bios == 1) {
-		bio = alloc_tio(ci, ti, 0, len, GFP_NOIO);
-		bio_list_add(blist, bio);
-		return;
-	}
-
 	for (try = 0; try < 2; try++) {
 		int bio_nr;
@@ -1279,12 +1270,23 @@ static void __send_duplicate_bios(struct clone_info *ci, struct dm_target *ti,
 	struct bio_list blist = BIO_EMPTY_LIST;
 	struct bio *clone;
-	alloc_multiple_bios(&blist, ci, ti, num_bios, len);
-
-	while ((clone = bio_list_pop(&blist))) {
+	switch (num_bios) {
+	case 0:
+		break;
+	case 1:
+		clone = alloc_tio(ci, ti, 0, len, GFP_NOIO);
 		if (len)
 			bio_setup_sector(clone, ci->sector, *len);
 		__map_bio(clone);
+		break;
+	default:
+		alloc_multiple_bios(&blist, ci, ti, num_bios, len);
+		while ((clone = bio_list_pop(&blist))) {
+			if (len)
+				bio_setup_sector(clone, ci->sector, *len);
+			__map_bio(clone);
+		}
+		break;
 	}
 }
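
The rewritten __send_duplicate_bios() special-cases the common single-bio case so it never touches the list. As a rough sketch of that control flow only, with invented helper functions standing in for allocation and mapping:

#include <stdio.h>

/* Stand-ins for allocating one clone and submitting it. */
static int alloc_one(void)		{ return 1; }
static void map_one(int clone)		{ printf("mapping clone %d\n", clone); }

/* 0/1/N split: the single-clone fast path skips batch handling entirely,
 * mirroring the switch added to __send_duplicate_bios(). */
static void send_duplicates(unsigned int num)
{
	switch (num) {
	case 0:
		break;			/* nothing to do */
	case 1:
		map_one(alloc_one());	/* fast path: no list management */
		break;
	default:
		/* slow path: allocate and map a whole batch */
		for (unsigned int i = 0; i < num; i++)
			map_one(alloc_one());
		break;
	}
}

int main(void)
{
	send_duplicates(1);
	send_duplicates(3);
	return 0;
}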
From patchwork Wed Feb 2 16:01:05 2022
X-Patchwork-Id: 12733231
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 09/13] dm-cache: remove __remap_to_origin_clear_discard
Date: Wed, 2 Feb 2022 17:01:05 +0100
Message-Id: <20220202160109.108149-10-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Fold __remap_to_origin_clear_discard into the two callers to prepare
for bio cloning refactoring.

Signed-off-by: Christoph Hellwig
---
 drivers/md/dm-cache-target.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index 447d030036d18..1c37fe028e531 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -744,21 +744,14 @@ static void check_if_tick_bio_needed(struct cache *cache, struct bio *bio)
 	spin_unlock_irq(&cache->lock);
 }
-static void __remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
-					    dm_oblock_t oblock, bool bio_has_pbd)
-{
-	if (bio_has_pbd)
-		check_if_tick_bio_needed(cache, bio);
-	remap_to_origin(cache, bio);
-	if (bio_data_dir(bio) == WRITE)
-		clear_discard(cache, oblock_to_dblock(cache, oblock));
-}
-
 static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
 					  dm_oblock_t oblock)
 {
 	// FIXME: check_if_tick_bio_needed() is called way too much through this interface
-	__remap_to_origin_clear_discard(cache, bio, oblock, true);
+	check_if_tick_bio_needed(cache, bio);
+	remap_to_origin(cache, bio);
+	if (bio_data_dir(bio) == WRITE)
+		clear_discard(cache, oblock_to_dblock(cache, oblock));
 }
 static void remap_to_cache_dirty(struct cache *cache, struct bio *bio,
@@ -831,11 +824,10 @@ static void remap_to_origin_and_cache(struct cache *cache, struct bio *bio,
 	BUG_ON(!origin_bio);
 	bio_chain(origin_bio, bio);
-	/*
-	 * Passing false to __remap_to_origin_clear_discard() skips
-	 * all code that might use per_bio_data (since clone doesn't have it)
-	 */
-	__remap_to_origin_clear_discard(cache, origin_bio, oblock, false);
+
+	remap_to_origin(cache, origin_bio);
+	if (bio_data_dir(origin_bio) == WRITE)
+		clear_discard(cache, oblock_to_dblock(cache, oblock));
 	submit_bio(origin_bio);
 	remap_to_cache(cache, bio, cblock);
From patchwork Wed Feb 2 16:01:06 2022
X-Patchwork-Id: 12733232
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 10/13] block: clone crypto and integrity data in __bio_clone_fast
Date: Wed, 2 Feb 2022 17:01:06 +0100
Message-Id: <20220202160109.108149-11-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

__bio_clone_fast should also clone integrity and crypto data, as a clone
without those is incomplete. Right now the only caller that can actually
support crypto and integrity data (dm) does it manually for the one call
chain that supports these, but we'd better do it properly in the core.

Note that all callers except for the above-mentioned one don't need to
handle failure at all, given that the integrity and crypto clones are
based on mempool allocations that won't fail for sleeping allocations.

Signed-off-by: Christoph Hellwig
---
 block/bio-integrity.c       |  1 -
 block/bio.c                 | 26 +++++++++++++-------------
 block/blk-crypto.c          |  1 -
 drivers/md/bcache/request.c |  2 +-
 drivers/md/dm.c             | 33 ++++++----------------------------
 drivers/md/md-multipath.c   |  2 +-
 include/linux/bio.h         |  2 +-
 7 files changed, 22 insertions(+), 45 deletions(-)

diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index d251147154592..bd54532200650 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -420,7 +420,6 @@ int bio_integrity_clone(struct bio *bio, struct bio *bio_src,
 	return 0;
 }
-EXPORT_SYMBOL(bio_integrity_clone);
 int bioset_integrity_create(struct bio_set *bs, int pool_size)
 {
diff --git a/block/bio.c b/block/bio.c
index 2e19ca600fcdb..b5840ef549164 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -730,6 +730,7 @@ EXPORT_SYMBOL(bio_put);
  * __bio_clone_fast - clone a bio that shares the original bio's biovec
  * @bio: destination bio
  * @bio_src: bio to clone
+ * @gfp: allocation flags
  *
  * Clone a &bio. Caller will own the returned bio, but not
  * the actual data it points to. Reference count of returned
@@ -737,7 +738,7 @@ EXPORT_SYMBOL(bio_put);
  *
  * Caller must ensure that @bio_src is not freed before @bio.
  */
-void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
+int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp)
 {
 	WARN_ON_ONCE(bio->bi_pool && bio->bi_max_vecs);
@@ -759,6 +760,13 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
 	bio_clone_blkg_association(bio, bio_src);
 	blkcg_bio_issue_init(bio);
+
+	if (bio_crypt_clone(bio, bio_src, gfp) < 0)
+		return -ENOMEM;
+	if (bio_integrity(bio_src) &&
+	    bio_integrity_clone(bio, bio_src, gfp) < 0)
+		return -ENOMEM;
+	return 0;
 }
 EXPORT_SYMBOL(__bio_clone_fast);
@@ -778,20 +786,12 @@ struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs)
 	if (!b)
 		return NULL;
-	__bio_clone_fast(b, bio);
-
-	if (bio_crypt_clone(b, bio, gfp_mask) < 0)
-		goto err_put;
-
-	if (bio_integrity(bio) &&
-	    bio_integrity_clone(b, bio, gfp_mask) < 0)
-		goto err_put;
+	if (__bio_clone_fast(b, bio, gfp_mask < 0)) {
+		bio_put(b);
+		return NULL;
+	}
 	return b;
-
-err_put:
-	bio_put(b);
-	return NULL;
 }
 EXPORT_SYMBOL(bio_clone_fast);
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index ec9efeeeca918..773dae4c329ba 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -111,7 +111,6 @@ int __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask)
 	*dst->bi_crypt_context = *src->bi_crypt_context;
 	return 0;
 }
-EXPORT_SYMBOL_GPL(__bio_crypt_clone);
 /* Increments @dun by @inc, treating @dun as a multi-limb integer. */
 void bio_crypt_dun_increment(u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 7ba59d08ed870..574b02b94f1a4 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -686,7 +686,7 @@ static void do_bio_hook(struct search *s,
 	struct bio *bio = &s->bio.bio;
 	bio_init(bio, NULL, NULL, 0, 0);
-	__bio_clone_fast(bio, orig_bio);
+	__bio_clone_fast(bio, orig_bio, GFP_NOIO);
 	/*
 	 * bi_end_io can be set separately somewhere else, e.g. the
 	 * variants in,
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 78df75f57288b..0f8796159379e 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -561,7 +561,12 @@ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
 		tio = clone_to_tio(clone);
 		tio->inside_dm_io = false;
 	}
-	__bio_clone_fast(&tio->clone, ci->bio);
+
+	if (__bio_clone_fast(&tio->clone, ci->bio, gfp_mask) < 0) {
+		if (ci->io->tio.io)
+			bio_put(&tio->clone);
+		return NULL;
+	}
 	tio->magic = DM_TIO_MAGIC;
 	tio->io = ci->io;
@@ -1196,31 +1201,8 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 		sector_t sector, unsigned *len)
 {
 	struct bio *bio = ci->bio, *clone;
-	int r;
 	clone = alloc_tio(ci, ti, 0, len, GFP_NOIO);
-
-	r = bio_crypt_clone(clone, bio, GFP_NOIO);
-	if (r < 0)
-		goto free_tio;
-
-	if (bio_integrity(bio)) {
-		struct dm_target_io *tio = clone_to_tio(clone);
-
-		if (unlikely(!dm_target_has_integrity(tio->ti->type) &&
-			     !dm_target_passes_integrity(tio->ti->type))) {
-			DMWARN("%s: the target %s doesn't support integrity data.",
-				dm_device_name(tio->io->md),
-				tio->ti->type->name);
-			r = -EIO;
-			goto free_tio;
-		}
-
-		r = bio_integrity_clone(clone, bio, GFP_NOIO);
-		if (r < 0)
-			goto free_tio;
-	}
-
 	bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));
 	clone->bi_iter.bi_size = to_bytes(*len);
@@ -1229,9 +1211,6 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
 	__map_bio(clone);
 	return 0;
-free_tio:
-	free_tio(clone);
-	return r;
 }
 static void alloc_multiple_bios(struct bio_list *blist, struct clone_info *ci,
diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c
index 5e15940634d85..010c759c741ad 100644
--- a/drivers/md/md-multipath.c
+++ b/drivers/md/md-multipath.c
@@ -122,7 +122,7 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio)
 	multipath = conf->multipaths + mp_bh->path;
 	bio_init(&mp_bh->bio, NULL, NULL, 0, 0);
-	__bio_clone_fast(&mp_bh->bio, bio);
+	__bio_clone_fast(&mp_bh->bio, bio, GFP_NOIO);
 	mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset;
 	bio_set_dev(&mp_bh->bio, multipath->rdev->bdev);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 18cfe5bb41ea8..b814361c957b0 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -413,7 +413,7 @@ struct bio *bio_alloc_kiocb(struct kiocb *kiocb, struct block_device *bdev,
 struct bio *bio_kmalloc(gfp_t gfp_mask, unsigned short nr_iovecs);
 extern void bio_put(struct bio *);
-extern void __bio_clone_fast(struct bio *, struct bio *);
+int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp);
 extern struct bio *bio_clone_fast(struct bio *, gfp_t, struct bio_set *);
 extern struct bio_set fs_bio_set;
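
With the crypto and integrity duplication moved into __bio_clone_fast, the helper can now fail and returns an int, and the allocating wrapper drops its bio when the in-place clone fails. The following userspace sketch shows that calling convention with invented types; only the -ENOMEM convention is borrowed from errno.h, and this is not the block layer code.

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

struct buf {
	char *payload;		/* optional extra state, like crypt/integrity data */
	size_t len;
};

/* In-place clone that can fail: returns 0 or a negative errno, like the
 * reworked __bio_clone_fast(). */
static int buf_clone(struct buf *dst, const struct buf *src)
{
	dst->len = src->len;
	dst->payload = NULL;
	if (src->payload) {
		dst->payload = malloc(src->len);
		if (!dst->payload)
			return -ENOMEM;
		memcpy(dst->payload, src->payload, src->len);
	}
	return 0;
}

/* Allocating wrapper: on failure it releases what it allocated, the way
 * bio_clone_fast() drops its bio when the in-place clone fails. */
static struct buf *buf_clone_alloc(const struct buf *src)
{
	struct buf *b = malloc(sizeof(*b));

	if (!b)
		return NULL;
	if (buf_clone(b, src) < 0) {
		free(b);
		return NULL;
	}
	return b;
}

int main(void)
{
	char data[] = "data";
	struct buf src = { .payload = data, .len = sizeof(data) };
	struct buf *copy = buf_clone_alloc(&src);

	if (copy) {
		printf("cloned: %s\n", copy->payload);
		free(copy->payload);
		free(copy);
	}
	return 0;
}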
From patchwork Wed Feb 2 16:01:07 2022
X-Patchwork-Id: 12733233
From: Christoph Hellwig
To: Jens Axboe
Cc: Pavel Begunkov, Mike Snitzer, Philipp Reisner, Lars Ellenberg,
 linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com
Subject: [PATCH 11/13] dm: use bio_clone_fast in alloc_io/alloc_tio
Date: Wed, 2 Feb 2022 17:01:07 +0100
Message-Id: <20220202160109.108149-12-hch@lst.de>
In-Reply-To: <20220202160109.108149-1-hch@lst.de>

Replace the open-coded bio_clone_fast implementations with the actual
helper. Note that the bio allocated as part of the dm_io structure in
alloc_io is only actually used later, in alloc_tio, which makes this
earlier cloning of the information safe.
Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 0f8796159379e..862564a5df74b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -520,7 +520,7 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
 	struct dm_target_io *tio;
 	struct bio *clone;
-	clone = bio_alloc_bioset(NULL, 0, 0, GFP_NOIO, &md->io_bs);
+	clone = bio_clone_fast(bio, GFP_NOIO, &md->io_bs);
 	tio = clone_to_tio(clone);
 	tio->inside_dm_io = true;
@@ -553,8 +553,8 @@ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
 		/* the dm_target_io embedded in ci->io is available */
 		tio = &ci->io->tio;
 	} else {
-		struct bio *clone = bio_alloc_bioset(NULL, 0, 0, gfp_mask,
-						     &ci->io->md->bs);
+		struct bio *clone = bio_clone_fast(ci->bio, gfp_mask,
+						   &ci->io->md->bs);
 		if (!clone)
 			return NULL;
@@ -562,12 +562,6 @@ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti,
 		tio->inside_dm_io = false;
 	}
-	if (__bio_clone_fast(&tio->clone, ci->bio, gfp_mask) < 0) {
-		if (ci->io->tio.io)
-			bio_put(&tio->clone);
-		return NULL;
-	}
-
 	tio->magic = DM_TIO_MAGIC;
 	tio->io = ci->io;
 	tio->ti = ti;
References: <20220202160109.108149-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org All callers of __bio_clone_fast initialize the bio first. Move that initialization into __bio_clone_fast instead. Signed-off-by: Christoph Hellwig --- block/bio.c | 75 ++++++++++++++++++++----------------- drivers/md/bcache/request.c | 1 - drivers/md/md-multipath.c | 1 - 3 files changed, 40 insertions(+), 37 deletions(-) diff --git a/block/bio.c b/block/bio.c index b5840ef549164..22b18ba6bb1c5 100644 --- a/block/bio.c +++ b/block/bio.c @@ -726,37 +726,16 @@ void bio_put(struct bio *bio) } EXPORT_SYMBOL(bio_put); -/** - * __bio_clone_fast - clone a bio that shares the original bio's biovec - * @bio: destination bio - * @bio_src: bio to clone - * @gfp: allocation flags - * - * Clone a &bio. Caller will own the returned bio, but not - * the actual data it points to. Reference count of returned - * bio will be one. - * - * Caller must ensure that @bio_src is not freed before @bio. - */ -int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp) +static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp) { - WARN_ON_ONCE(bio->bi_pool && bio->bi_max_vecs); - - /* - * most users will be overriding ->bi_bdev with a new target, - * so we don't set nor calculate new physical/hw segment counts here - */ - bio->bi_bdev = bio_src->bi_bdev; bio_set_flag(bio, BIO_CLONED); if (bio_flagged(bio_src, BIO_THROTTLED)) bio_set_flag(bio, BIO_THROTTLED); if (bio_flagged(bio_src, BIO_REMAPPED)) bio_set_flag(bio, BIO_REMAPPED); - bio->bi_opf = bio_src->bi_opf; bio->bi_ioprio = bio_src->bi_ioprio; bio->bi_write_hint = bio_src->bi_write_hint; bio->bi_iter = bio_src->bi_iter; - bio->bi_io_vec = bio_src->bi_io_vec; bio_clone_blkg_association(bio, bio_src); blkcg_bio_issue_init(bio); @@ -768,33 +747,59 @@ int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp) return -ENOMEM; return 0; } -EXPORT_SYMBOL(__bio_clone_fast); /** - * bio_clone_fast - clone a bio that shares the original bio's biovec - * @bio: bio to clone - * @gfp_mask: allocation priority - * @bs: bio_set to allocate from + * bio_clone_fast - clone a bio that shares the original bio's biovec + * @bio_src: bio to clone from + * @gfp: allocation priority + * @bs: bio_set to allocate from + * + * Allocate a new bio that is a clone of @bio_src. The caller owns the returned + * bio, but not the actual data it points to. * - * Like __bio_clone_fast, only also allocates the returned bio + * The caller must ensure that the return bio is not freed before @bio_src. 
*/ -struct bio *bio_clone_fast(struct bio *bio, gfp_t gfp_mask, struct bio_set *bs) +struct bio *bio_clone_fast(struct bio *bio_src, gfp_t gfp, struct bio_set *bs) { - struct bio *b; + struct bio *bio; - b = bio_alloc_bioset(NULL, 0, 0, gfp_mask, bs); - if (!b) + bio = bio_alloc_bioset(bio_src->bi_bdev, 0, bio_src->bi_opf, gfp, bs); + if (!bio) return NULL; - if (__bio_clone_fast(b, bio, gfp_mask) < 0) { - bio_put(b); + if (__bio_clone(bio, bio_src, gfp) < 0) { + bio_put(bio); return NULL; } + bio->bi_io_vec = bio_src->bi_io_vec; - return b; + return bio; } EXPORT_SYMBOL(bio_clone_fast); +/** + * __bio_clone_fast - clone a bio that shares the original bio's biovec + * @bio: bio to clone into + * @bio_src: bio to clone from + * @gfp: allocation priority + * + * Initialize a new bio in caller provided memory that is a clone of @bio_src. + * The caller owns the returned bio, but not the actual data it points to. + * + * The caller must ensure that @bio_src is not freed before @bio. + */ +int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp) +{ + int ret; + + bio_init(bio, bio_src->bi_bdev, bio_src->bi_io_vec, 0, bio_src->bi_opf); + ret = __bio_clone(bio, bio_src, gfp); + if (ret) + bio_uninit(bio); + return ret; +} +EXPORT_SYMBOL(__bio_clone_fast); + const char *bio_devname(struct bio *bio, char *buf) { return bdevname(bio->bi_bdev, buf); diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index 574b02b94f1a4..d2cb853bf9173 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -685,7 +685,6 @@ static void do_bio_hook(struct search *s, { struct bio *bio = &s->bio.bio; - bio_init(bio, NULL, NULL, 0, 0); __bio_clone_fast(bio, orig_bio, GFP_NOIO); /* * bi_end_io can be set separately somewhere else, e.g.
the diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c index 010c759c741ad..483a5500f83cd 100644 --- a/drivers/md/md-multipath.c +++ b/drivers/md/md-multipath.c @@ -121,7 +121,6 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio) } multipath = conf->multipaths + mp_bh->path; - bio_init(&mp_bh->bio, NULL, NULL, 0, 0); __bio_clone_fast(&mp_bh->bio, bio, GFP_NOIO); mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset;
From patchwork Wed Feb 2 16:01:09 2022 From: Christoph Hellwig To: Jens Axboe Cc: Pavel Begunkov , Mike Snitzer , Philipp Reisner , Lars Ellenberg , linux-block@vger.kernel.org, dm-devel@redhat.com, drbd-dev@lists.linbit.com Subject: [PATCH 13/13] block: pass a block_device to bio_clone_fast Date: Wed, 2 Feb 2022 17:01:09 +0100 Message-Id: <20220202160109.108149-14-hch@lst.de> In-Reply-To: <20220202160109.108149-1-hch@lst.de> References: <20220202160109.108149-1-hch@lst.de>
Pass a block_device to bio_clone_fast and __bio_clone_fast and give the functions more suitable names.
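As a rough before/after sketch (not part of the patch; "target_bdev" and "bs" are placeholders for whatever device and bio_set a driver clones onto), the callers in the diff below all move from the two-step clone-then-retarget pattern to a single call that takes the target device up front:

#include <linux/bio.h>

/* old pattern: clone first, point the clone at the lower device afterwards */
static struct bio *remap_clone_old(struct bio *bio_src,
                struct block_device *target_bdev, struct bio_set *bs)
{
        struct bio *clone = bio_clone_fast(bio_src, GFP_NOIO, bs);

        if (!clone)
                return NULL;
        bio_set_dev(clone, target_bdev);        /* separate second step */
        return clone;
}

/* new pattern: the target device is part of the allocation itself */
static struct bio *remap_clone_new(struct bio *bio_src,
                struct block_device *target_bdev, struct bio_set *bs)
{
        return bio_alloc_clone(target_bdev, bio_src, GFP_NOIO, bs);
}

Knowing the device at clone time is also what allows __bio_clone (in the hunk below) to carry BIO_REMAPPED over only when the clone stays on the same bdev.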
Signed-off-by: Christoph Hellwig --- Documentation/block/biodoc.rst | 5 ----- block/bio.c | 31 +++++++++++++++++------------ block/blk-mq.c | 4 ++-- block/bounce.c | 3 +-- drivers/block/drbd/drbd_req.c | 4 ++-- drivers/block/drbd/drbd_worker.c | 4 ++-- drivers/block/pktcdvd.c | 4 ++-- drivers/md/bcache/request.c | 5 +++-- drivers/md/dm-cache-target.c | 4 ++-- drivers/md/dm-crypt.c | 11 +++++------ drivers/md/dm-zoned-target.c | 3 +-- drivers/md/dm.c | 6 +++--- drivers/md/md-faulty.c | 4 ++-- drivers/md/md-multipath.c | 3 +-- drivers/md/md.c | 5 +++-- drivers/md/raid1.c | 34 ++++++++++++++++---------------- drivers/md/raid10.c | 16 +++++++-------- drivers/md/raid5.c | 4 ++-- fs/btrfs/extent_io.c | 4 ++-- include/linux/bio.h | 6 ++++-- 20 files changed, 80 insertions(+), 80 deletions(-) diff --git a/Documentation/block/biodoc.rst b/Documentation/block/biodoc.rst index 2098477851a4b..4fbc367e62f95 100644 --- a/Documentation/block/biodoc.rst +++ b/Documentation/block/biodoc.rst @@ -663,11 +663,6 @@ to i/o submission, if the bio fields are likely to be accessed after the i/o is issued (since the bio may otherwise get freed in case i/o completion happens in the meantime). -The bio_clone_fast() routine may be used to duplicate a bio, where the clone -shares the bio_vec_list with the original bio (i.e. both point to the -same bio_vec_list). This would typically be used for splitting i/o requests -in lvm or md. - 3.2 Generic bio helper Routines ------------------------------- diff --git a/block/bio.c b/block/bio.c index 22b18ba6bb1c5..ec453e325b6ae 100644 --- a/block/bio.c +++ b/block/bio.c @@ -731,7 +731,8 @@ static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp) bio_set_flag(bio, BIO_CLONED); if (bio_flagged(bio_src, BIO_THROTTLED)) bio_set_flag(bio, BIO_THROTTLED); - if (bio_flagged(bio_src, BIO_REMAPPED)) + if (bio->bi_bdev == bio_src->bi_bdev && + bio_flagged(bio_src, BIO_REMAPPED)) bio_set_flag(bio, BIO_REMAPPED); bio->bi_ioprio = bio_src->bi_ioprio; bio->bi_write_hint = bio_src->bi_write_hint; @@ -749,7 +750,8 @@ static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp) } /** - * bio_clone_fast - clone a bio that shares the original bio's biovec + * bio_alloc_clone - clone a bio that shares the original bio's biovec + * @bdev: block_device to clone onto * @bio_src: bio to clone from * @gfp: allocation priority * @bs: bio_set to allocate from @@ -759,11 +761,12 @@ static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp) * * The caller must ensure that the return bio is not freed before @bio_src. */ -struct bio *bio_clone_fast(struct bio *bio_src, gfp_t gfp, struct bio_set *bs) +struct bio *bio_alloc_clone(struct block_device *bdev, struct bio *bio_src, + gfp_t gfp, struct bio_set *bs) { struct bio *bio; - bio = bio_alloc_bioset(bio_src->bi_bdev, 0, bio_src->bi_opf, gfp, bs); + bio = bio_alloc_bioset(bdev, 0, bio_src->bi_opf, gfp, bs); if (!bio) return NULL; @@ -775,10 +778,11 @@ struct bio *bio_clone_fast(struct bio *bio_src, gfp_t gfp, struct bio_set *bs) return bio; } -EXPORT_SYMBOL(bio_clone_fast); +EXPORT_SYMBOL(bio_alloc_clone); /** - * __bio_clone_fast - clone a bio that shares the original bio's biovec + * bio_init_clone - clone a bio that shares the original bio's biovec + * @bdev: block_device to clone onto * @bio: bio to clone into * @bio_src: bio to clone from * @gfp: allocation priority @@ -788,17 +792,18 @@ EXPORT_SYMBOL(bio_clone_fast); * * The caller must ensure that @bio_src is not freed before @bio. 
*/ -int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp) +int bio_init_clone(struct block_device *bdev, struct bio *bio, + struct bio *bio_src, gfp_t gfp) { int ret; - bio_init(bio, bio_src->bi_bdev, bio_src->bi_io_vec, 0, bio_src->bi_opf); + bio_init(bio, bdev, bio_src->bi_io_vec, 0, bio_src->bi_opf); ret = __bio_clone(bio, bio_src, gfp); if (ret) bio_uninit(bio); return ret; } -EXPORT_SYMBOL(__bio_clone_fast); +EXPORT_SYMBOL(bio_init_clone); const char *bio_devname(struct bio *bio, char *buf) { @@ -1570,7 +1575,7 @@ struct bio *bio_split(struct bio *bio, int sectors, if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND)) return NULL; - split = bio_clone_fast(bio, gfp, bs); + split = bio_alloc_clone(bio->bi_bdev, bio, gfp, bs); if (!split) return NULL; @@ -1665,9 +1670,9 @@ EXPORT_SYMBOL(bioset_exit); * Note that the bio must be embedded at the END of that structure always, * or things will break badly. * If %BIOSET_NEED_BVECS is set in @flags, a separate pool will be allocated - * for allocating iovecs. This pool is not needed e.g. for bio_clone_fast(). - * If %BIOSET_NEED_RESCUER is set, a workqueue is created which can be used to - * dispatch queued requests when the mempool runs out of space. + * for allocating iovecs. This pool is not needed e.g. for bio_init_clone(). + * If %BIOSET_NEED_RESCUER is set, a workqueue is created which can be used + * to dispatch queued requests when the mempool runs out of space. * */ int bioset_init(struct bio_set *bs, diff --git a/block/blk-mq.c b/block/blk-mq.c index 1adfe4824ef5e..4b868e792ba4a 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -2975,10 +2975,10 @@ int blk_rq_prep_clone(struct request *rq, struct request *rq_src, bs = &fs_bio_set; __rq_for_each_bio(bio_src, rq_src) { - bio = bio_clone_fast(bio_src, gfp_mask, bs); + bio = bio_alloc_clone(rq->q->disk->part0, bio_src, gfp_mask, + bs); if (!bio) goto free_and_out; - bio->bi_bdev = rq->q->disk->part0; if (bio_ctr && bio_ctr(bio, bio_src, data)) goto free_and_out; diff --git a/block/bounce.c b/block/bounce.c index 330ddde25b460..3fd3bc6fd5dbb 100644 --- a/block/bounce.c +++ b/block/bounce.c @@ -162,8 +162,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src) * that does not own the bio - reason being drivers don't use it for * iterating over the biovec anymore, so expecting it to be kept up * to date (i.e. for clones that share the parent biovec) is just - * asking for trouble and would force extra work on - * __bio_clone_fast() anyways. + * asking for trouble and would force extra work. 
*/ bio = bio_alloc_bioset(bio_src->bi_bdev, bio_segments(bio_src), bio_src->bi_opf, GFP_NOIO, &bounce_bio_set); diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c index 8d44e96c4c4ef..c00ae8619519e 100644 --- a/drivers/block/drbd/drbd_req.c +++ b/drivers/block/drbd/drbd_req.c @@ -30,8 +30,8 @@ static struct drbd_request *drbd_req_new(struct drbd_device *device, struct bio return NULL; memset(req, 0, sizeof(*req)); - req->private_bio = bio_clone_fast(bio_src, GFP_NOIO, &drbd_io_bio_set); - bio_set_dev(req->private_bio, device->ldev->backing_bdev); + req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, bio_src, + GFP_NOIO, &drbd_io_bio_set); req->private_bio->bi_private = req; req->private_bio->bi_end_io = drbd_request_endio; diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c index 64563bfdf0da0..a5e04b38006b6 100644 --- a/drivers/block/drbd/drbd_worker.c +++ b/drivers/block/drbd/drbd_worker.c @@ -1523,9 +1523,9 @@ int w_restart_disk_io(struct drbd_work *w, int cancel) if (bio_data_dir(req->master_bio) == WRITE && req->rq_state & RQ_IN_ACT_LOG) drbd_al_begin_io(device, &req->i); - req->private_bio = bio_clone_fast(req->master_bio, GFP_NOIO, + req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, + req->master_bio, GFP_NOIO, &drbd_io_bio_set); - bio_set_dev(req->private_bio, device->ldev->backing_bdev); req->private_bio->bi_private = req; req->private_bio->bi_end_io = drbd_request_endio; submit_bio_noacct(req->private_bio); diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c index 3aa5954429462..be749c686feb7 100644 --- a/drivers/block/pktcdvd.c +++ b/drivers/block/pktcdvd.c @@ -2294,12 +2294,12 @@ static void pkt_end_io_read_cloned(struct bio *bio) static void pkt_make_request_read(struct pktcdvd_device *pd, struct bio *bio) { - struct bio *cloned_bio = bio_clone_fast(bio, GFP_NOIO, &pkt_bio_set); + struct bio *cloned_bio = + bio_alloc_clone(pd->bdev, bio, GFP_NOIO, &pkt_bio_set); struct packet_stacked_data *psd = mempool_alloc(&psd_pool, GFP_NOIO); psd->pd = pd; psd->bio = bio; - bio_set_dev(cloned_bio, pd->bdev); cloned_bio->bi_private = psd; cloned_bio->bi_end_io = pkt_end_io_read_cloned; pd->stats.secs_r += bio_sectors(bio); diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index d2cb853bf9173..6869e010475a3 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -685,7 +685,7 @@ static void do_bio_hook(struct search *s, { struct bio *bio = &s->bio.bio; - __bio_clone_fast(bio, orig_bio, GFP_NOIO); + bio_init_clone(bio->bi_bdev, bio, orig_bio, GFP_NOIO); /* * bi_end_io can be set separately somewhere else, e.g. 
the * variants in, @@ -1036,7 +1036,8 @@ static void cached_dev_write(struct cached_dev *dc, struct search *s) closure_bio_submit(s->iop.c, flush, cl); } } else { - s->iop.bio = bio_clone_fast(bio, GFP_NOIO, &dc->disk.bio_split); + s->iop.bio = bio_alloc_clone(bio->bi_bdev, bio, GFP_NOIO, + &dc->disk.bio_split); /* I/O request sent to backing device */ bio->bi_end_io = backing_request_endio; closure_bio_submit(s->iop.c, bio, cl); diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c index 1c37fe028e531..89fdfb49d564e 100644 --- a/drivers/md/dm-cache-target.c +++ b/drivers/md/dm-cache-target.c @@ -819,13 +819,13 @@ static void issue_op(struct bio *bio, void *context) static void remap_to_origin_and_cache(struct cache *cache, struct bio *bio, dm_oblock_t oblock, dm_cblock_t cblock) { - struct bio *origin_bio = bio_clone_fast(bio, GFP_NOIO, &cache->bs); + struct bio *origin_bio = bio_alloc_clone(cache->origin_dev->bdev, bio, + GFP_NOIO, &cache->bs); BUG_ON(!origin_bio); bio_chain(origin_bio, bio); - remap_to_origin(cache, origin_bio); if (bio_data_dir(origin_bio) == WRITE) clear_discard(cache, oblock_to_dblock(cache, oblock)); submit_bio(origin_bio); diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index f7e4435b7439a..a5006cb6ee8ad 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -1834,17 +1834,16 @@ static int kcryptd_io_read(struct dm_crypt_io *io, gfp_t gfp) struct bio *clone; /* - * We need the original biovec array in order to decrypt - * the whole bio data *afterwards* -- thanks to immutable - * biovecs we don't need to worry about the block layer - * modifying the biovec array; so leverage bio_clone_fast(). + * We need the original biovec array in order to decrypt the whole bio + * data *afterwards* -- thanks to immutable biovecs we don't need to + * worry about the block layer modifying the biovec array; so leverage + * bio_alloc_clone(). 
*/ - clone = bio_clone_fast(io->base_bio, gfp, &cc->bs); + clone = bio_alloc_clone(cc->dev->bdev, io->base_bio, gfp, &cc->bs); if (!clone) return 1; clone->bi_private = io; clone->bi_end_io = crypt_endio; - bio_set_dev(clone, cc->dev->bdev); crypt_inc_pending(io); diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c index 166c4e9d99c97..a3f6d3ef38174 100644 --- a/drivers/md/dm-zoned-target.c +++ b/drivers/md/dm-zoned-target.c @@ -125,11 +125,10 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone, if (dev->flags & DMZ_BDEV_DYING) return -EIO; - clone = bio_clone_fast(bio, GFP_NOIO, &dmz->bio_set); + clone = bio_alloc_clone(dev->bdev, bio, GFP_NOIO, &dmz->bio_set); if (!clone) return -ENOMEM; - bio_set_dev(clone, dev->bdev); bioctx->dev = dev; clone->bi_iter.bi_sector = dmz_start_sect(dmz->metadata, zone) + dmz_blk2sect(chunk_block); diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 862564a5df74b..ab9cc91931f99 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -520,7 +520,7 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio) struct dm_target_io *tio; struct bio *clone; - clone = bio_clone_fast(bio, GFP_NOIO, &md->io_bs); + clone = bio_alloc_clone(bio->bi_bdev, bio, GFP_NOIO, &md->io_bs); tio = clone_to_tio(clone); tio->inside_dm_io = true; @@ -553,8 +553,8 @@ static struct bio *alloc_tio(struct clone_info *ci, struct dm_target *ti, /* the dm_target_io embedded in ci->io is available */ tio = &ci->io->tio; } else { - struct bio *clone = bio_clone_fast(ci->bio, gfp_mask, - &ci->io->md->bs); + struct bio *clone = bio_alloc_clone(ci->bio->bi_bdev, ci->bio, + gfp_mask, &ci->io->md->bs); if (!clone) return NULL; diff --git a/drivers/md/md-faulty.c b/drivers/md/md-faulty.c index c0dc6f2ef4a3d..50ad818978a43 100644 --- a/drivers/md/md-faulty.c +++ b/drivers/md/md-faulty.c @@ -205,9 +205,9 @@ static bool faulty_make_request(struct mddev *mddev, struct bio *bio) } } if (failit) { - struct bio *b = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set); + struct bio *b = bio_alloc_clone(conf->rdev->bdev, bio, GFP_NOIO, + &mddev->bio_set); - bio_set_dev(b, conf->rdev->bdev); b->bi_private = bio; b->bi_end_io = faulty_fail; bio = b; diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c index 483a5500f83cd..97fb948e3e741 100644 --- a/drivers/md/md-multipath.c +++ b/drivers/md/md-multipath.c @@ -121,10 +121,9 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio) } multipath = conf->multipaths + mp_bh->path; - __bio_clone_fast(&mp_bh->bio, bio, GFP_NOIO); + bio_init_clone(multipath->rdev->bdev, &mp_bh->bio, bio, GFP_NOIO); mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset; - bio_set_dev(&mp_bh->bio, multipath->rdev->bdev); mp_bh->bio.bi_opf |= REQ_FAILFAST_TRANSPORT; mp_bh->bio.bi_end_io = multipath_end_request; mp_bh->bio.bi_private = mp_bh; diff --git a/drivers/md/md.c b/drivers/md/md.c index 0a89f072dae0d..f88a9e948f3eb 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -8634,13 +8634,14 @@ static void md_end_io_acct(struct bio *bio) */ void md_account_bio(struct mddev *mddev, struct bio **bio) { + struct block_device *bdev = (*bio)->bi_bdev; struct md_io_acct *md_io_acct; struct bio *clone; - if (!blk_queue_io_stat((*bio)->bi_bdev->bd_disk->queue)) + if (!blk_queue_io_stat(bdev->bd_disk->queue)) return; - clone = bio_clone_fast(*bio, GFP_NOIO, &mddev->io_acct_set); + clone = bio_alloc_clone(bdev, *bio, GFP_NOIO, &mddev->io_acct_set); md_io_acct = container_of(clone, struct md_io_acct, 
bio_clone); md_io_acct->orig_bio = *bio; md_io_acct->start_time = bio_start_io_acct(*bio); diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index e7710fb5befb4..c3288d46948de 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -1320,13 +1320,13 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio, if (!r1bio_existed && blk_queue_io_stat(bio->bi_bdev->bd_disk->queue)) r1_bio->start_time = bio_start_io_acct(bio); - read_bio = bio_clone_fast(bio, gfp, &mddev->bio_set); + read_bio = bio_alloc_clone(mirror->rdev->bdev, bio, gfp, + &mddev->bio_set); r1_bio->bios[rdisk] = read_bio; read_bio->bi_iter.bi_sector = r1_bio->sector + mirror->rdev->data_offset; - bio_set_dev(read_bio, mirror->rdev->bdev); read_bio->bi_end_io = raid1_end_read_request; bio_set_op_attrs(read_bio, op, do_sync); if (test_bit(FailFast, &mirror->rdev->flags) && @@ -1546,24 +1546,25 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio, first_clone = 0; } - if (r1_bio->behind_master_bio) - mbio = bio_clone_fast(r1_bio->behind_master_bio, - GFP_NOIO, &mddev->bio_set); - else - mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set); - if (r1_bio->behind_master_bio) { + mbio = bio_alloc_clone(rdev->bdev, + r1_bio->behind_master_bio, + GFP_NOIO, &mddev->bio_set); if (test_bit(CollisionCheck, &rdev->flags)) wait_for_serialization(rdev, r1_bio); if (test_bit(WriteMostly, &rdev->flags)) atomic_inc(&r1_bio->behind_remaining); - } else if (mddev->serialize_policy) - wait_for_serialization(rdev, r1_bio); + } else { + mbio = bio_alloc_clone(rdev->bdev, bio, GFP_NOIO, + &mddev->bio_set); + + if (mddev->serialize_policy) + wait_for_serialization(rdev, r1_bio); + } r1_bio->bios[i] = mbio; mbio->bi_iter.bi_sector = (r1_bio->sector + rdev->data_offset); - bio_set_dev(mbio, rdev->bdev); mbio->bi_end_io = raid1_end_write_request; mbio->bi_opf = bio_op(bio) | (bio->bi_opf & (REQ_SYNC | REQ_FUA)); if (test_bit(FailFast, &rdev->flags) && @@ -2416,12 +2417,12 @@ static int narrow_write_error(struct r1bio *r1_bio, int i) /* Write at 'sector' for 'sectors'*/ if (test_bit(R1BIO_BehindIO, &r1_bio->state)) { - wbio = bio_clone_fast(r1_bio->behind_master_bio, - GFP_NOIO, - &mddev->bio_set); + wbio = bio_alloc_clone(rdev->bdev, + r1_bio->behind_master_bio, + GFP_NOIO, &mddev->bio_set); } else { - wbio = bio_clone_fast(r1_bio->master_bio, GFP_NOIO, - &mddev->bio_set); + wbio = bio_alloc_clone(rdev->bdev, r1_bio->master_bio, + GFP_NOIO, &mddev->bio_set); } bio_set_op_attrs(wbio, REQ_OP_WRITE, 0); @@ -2430,7 +2431,6 @@ static int narrow_write_error(struct r1bio *r1_bio, int i) bio_trim(wbio, sector - r1_bio->sector, sectors); wbio->bi_iter.bi_sector += rdev->data_offset; - bio_set_dev(wbio, rdev->bdev); if (submit_bio_wait(wbio) < 0) /* failure! 
*/ diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index da07bcbc06d08..5dd2e17e1d0ea 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -1208,14 +1208,13 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio, if (blk_queue_io_stat(bio->bi_bdev->bd_disk->queue)) r10_bio->start_time = bio_start_io_acct(bio); - read_bio = bio_clone_fast(bio, gfp, &mddev->bio_set); + read_bio = bio_alloc_clone(rdev->bdev, bio, gfp, &mddev->bio_set); r10_bio->devs[slot].bio = read_bio; r10_bio->devs[slot].rdev = rdev; read_bio->bi_iter.bi_sector = r10_bio->devs[slot].addr + choose_data_offset(r10_bio, rdev); - bio_set_dev(read_bio, rdev->bdev); read_bio->bi_end_io = raid10_end_read_request; bio_set_op_attrs(read_bio, op, do_sync); if (test_bit(FailFast, &rdev->flags) && @@ -1255,7 +1254,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio, } else rdev = conf->mirrors[devnum].rdev; - mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set); + mbio = bio_alloc_clone(rdev->bdev, bio, GFP_NOIO, &mddev->bio_set); if (replacement) r10_bio->devs[n_copy].repl_bio = mbio; else @@ -1263,7 +1262,6 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio, mbio->bi_iter.bi_sector = (r10_bio->devs[n_copy].addr + choose_data_offset(r10_bio, rdev)); - bio_set_dev(mbio, rdev->bdev); mbio->bi_end_io = raid10_end_write_request; bio_set_op_attrs(mbio, op, do_sync | do_fua); if (!replacement && test_bit(FailFast, @@ -1812,7 +1810,8 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio) */ if (r10_bio->devs[disk].bio) { struct md_rdev *rdev = conf->mirrors[disk].rdev; - mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set); + mbio = bio_alloc_clone(bio->bi_bdev, bio, GFP_NOIO, + &mddev->bio_set); mbio->bi_end_io = raid10_end_discard_request; mbio->bi_private = r10_bio; r10_bio->devs[disk].bio = mbio; @@ -1825,7 +1824,8 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio) } if (r10_bio->devs[disk].repl_bio) { struct md_rdev *rrdev = conf->mirrors[disk].replacement; - rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set); + rbio = bio_alloc_clone(bio->bi_bdev, bio, GFP_NOIO, + &mddev->bio_set); rbio->bi_end_io = raid10_end_discard_request; rbio->bi_private = r10_bio; r10_bio->devs[disk].repl_bio = rbio; @@ -2892,12 +2892,12 @@ static int narrow_write_error(struct r10bio *r10_bio, int i) if (sectors > sect_to_write) sectors = sect_to_write; /* Write at 'sector' for 'sectors' */ - wbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set); + wbio = bio_alloc_clone(rdev->bdev, bio, GFP_NOIO, + &mddev->bio_set); bio_trim(wbio, sector - bio->bi_iter.bi_sector, sectors); wsector = r10_bio->devs[i].addr + (sector - r10_bio->sector); wbio->bi_iter.bi_sector = wsector + choose_data_offset(r10_bio, rdev); - bio_set_dev(wbio, rdev->bdev); bio_set_op_attrs(wbio, REQ_OP_WRITE, 0); if (submit_bio_wait(wbio) < 0) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 7c119208a2143..8891aaba65964 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -5438,14 +5438,14 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio) return 0; } - align_bio = bio_clone_fast(raid_bio, GFP_NOIO, &mddev->io_acct_set); + align_bio = bio_alloc_clone(rdev->bdev, raid_bio, GFP_NOIO, + &mddev->io_acct_set); md_io_acct = container_of(align_bio, struct md_io_acct, bio_clone); raid_bio->bi_next = (void *)rdev; if (blk_queue_io_stat(raid_bio->bi_bdev->bd_disk->queue)) md_io_acct->start_time = bio_start_io_acct(raid_bio); 
md_io_acct->orig_bio = raid_bio; - bio_set_dev(align_bio, rdev->bdev); align_bio->bi_end_io = raid5_align_endio; align_bio->bi_private = md_io_acct; align_bio->bi_iter.bi_sector = sector; diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c index 421d921a05716..dee86911a4bef 100644 --- a/fs/btrfs/extent_io.c +++ b/fs/btrfs/extent_io.c @@ -3154,7 +3154,7 @@ struct bio *btrfs_bio_clone(struct bio *bio) struct bio *new; /* Bio allocation backed by a bioset does not fail */ - new = bio_clone_fast(bio, GFP_NOFS, &btrfs_bioset); + new = bio_alloc_clone(bio->bi_bdev, bio, GFP_NOFS, &btrfs_bioset); bbio = btrfs_bio(new); btrfs_bio_init(bbio); bbio->iter = bio->bi_iter; @@ -3169,7 +3169,7 @@ struct bio *btrfs_bio_clone_partial(struct bio *orig, u64 offset, u64 size) ASSERT(offset <= UINT_MAX && size <= UINT_MAX); /* this will never fail when it's backed by a bioset */ - bio = bio_clone_fast(orig, GFP_NOFS, &btrfs_bioset); + bio = bio_alloc_clone(orig->bi_bdev, orig, GFP_NOFS, &btrfs_bioset); ASSERT(bio); bbio = btrfs_bio(bio); diff --git a/include/linux/bio.h b/include/linux/bio.h index b814361c957b0..7523aba4ddf7c 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -413,8 +413,10 @@ struct bio *bio_alloc_kiocb(struct kiocb *kiocb, struct block_device *bdev, struct bio *bio_kmalloc(gfp_t gfp_mask, unsigned short nr_iovecs); extern void bio_put(struct bio *); -int __bio_clone_fast(struct bio *bio, struct bio *bio_src, gfp_t gfp); -extern struct bio *bio_clone_fast(struct bio *, gfp_t, struct bio_set *); +struct bio *bio_alloc_clone(struct block_device *bdev, struct bio *bio_src, + gfp_t gfp, struct bio_set *bs); +int bio_init_clone(struct block_device *bdev, struct bio *bio, + struct bio *bio_src, gfp_t gfp); extern struct bio_set fs_bio_set;
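For completeness, a hedged sketch of the embedded-bio variant after the rename (the structure and field names here are placeholders; md-multipath and bcache in the diff above are the real users): a driver that embeds the clone in its own per-request structure initializes it in place with bio_init_clone() instead of bio_init() followed by __bio_clone_fast():

#include <linux/bio.h>

struct my_request {
        struct bio clone;       /* embedded clone, no separate bio allocation */
        struct bio *orig;
};

static int my_setup_clone(struct my_request *rq, struct bio *orig,
                struct block_device *lower_bdev)
{
        rq->orig = orig;
        /* initializes rq->clone in place and points it at lower_bdev */
        return bio_init_clone(lower_bdev, &rq->clone, orig, GFP_NOIO);
}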