From patchwork Sun Oct 30 15:31:17 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13025098
From: Christoph Hellwig
To: Jens Axboe, Alasdair Kergon, Mike Snitzer
Cc: Yu Kuai,
	dm-devel@redhat.com, linux-block@vger.kernel.org
Subject: [PATCH 5/7] dm: track per-add_disk holder relations in DM
Date: Sun, 30 Oct 2022 16:31:17 +0100
Message-Id: <20221030153120.1045101-6-hch@lst.de>
In-Reply-To: <20221030153120.1045101-1-hch@lst.de>
References: <20221030153120.1045101-1-hch@lst.de>

dm is a bit special in that it opens the underlying devices. Commit
89f871af1b26 ("dm: delay registering the gendisk") tried to accommodate
that by allowing the holder to be added to the list before add_gendisk
and then just adding it to sysfs once add_disk is called. But that leads
to really odd lifetime and error-handling problems, as we can't know the
state of the kobjects and don't unwind properly.

To fix this, switch to registering all existing table_devices with the
holder code right after add_disk, and removing them again before calling
del_gendisk.

Fixes: 89f871af1b26 ("dm: delay registering the gendisk")
Reported-by: Yu Kuai
Signed-off-by: Christoph Hellwig
---
 drivers/md/dm.c | 45 +++++++++++++++++++++++++++++++++++++--------
 1 file changed, 37 insertions(+), 8 deletions(-)

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 2917700b1e15c..7b0d6dc957549 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -751,9 +751,16 @@ static struct table_device *open_table_device(struct mapped_device *md,
 		goto out_free_td;
 	}
 
-	r = bd_link_disk_holder(bdev, dm_disk(md));
-	if (r)
-		goto out_blkdev_put;
+	/*
+	 * We can be called before the dm disk is added. In that case we can't
+	 * register the holder relation here. It will be done once add_disk was
+	 * called.
+	 */
+	if (md->disk->slave_dir) {
+		r = bd_link_disk_holder(bdev, md->disk);
+		if (r)
+			goto out_blkdev_put;
+	}
 
 	td->dm_dev.mode = mode;
 	td->dm_dev.bdev = bdev;
@@ -774,7 +781,8 @@ static struct table_device *open_table_device(struct mapped_device *md,
  */
 static void close_table_device(struct table_device *td, struct mapped_device *md)
 {
-	bd_unlink_disk_holder(td->dm_dev.bdev, dm_disk(md));
+	if (md->disk->slave_dir)
+		bd_unlink_disk_holder(td->dm_dev.bdev, md->disk);
 	blkdev_put(td->dm_dev.bdev, td->dm_dev.mode | FMODE_EXCL);
 	put_dax(td->dm_dev.dax_dev);
 	list_del(&td->list);
@@ -1951,7 +1959,13 @@ static void cleanup_mapped_device(struct mapped_device *md)
 		md->disk->private_data = NULL;
 		spin_unlock(&_minor_lock);
 		if (dm_get_md_type(md) != DM_TYPE_NONE) {
+			struct table_device *td;
+
 			dm_sysfs_exit(md);
+			list_for_each_entry(td, &md->table_devices, list) {
+				bd_unlink_disk_holder(td->dm_dev.bdev,
+						      md->disk);
+			}
 			del_gendisk(md->disk);
 		}
 		dm_queue_destroy_crypto_profile(md->queue);
@@ -2284,6 +2298,7 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 {
 	enum dm_queue_mode type = dm_table_get_type(t);
 	struct queue_limits limits;
+	struct table_device *td;
 	int r;
 
 	switch (type) {
@@ -2316,13 +2331,27 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 	if (r)
 		return r;
 
-	r = dm_sysfs_init(md);
-	if (r) {
-		del_gendisk(md->disk);
-		return r;
+	/*
+	 * Register the holder relationship for devices added before the disk
+	 * was live.
+	 */
+	list_for_each_entry(td, &md->table_devices, list) {
+		r = bd_link_disk_holder(td->dm_dev.bdev, md->disk);
+		if (r)
+			goto out_undo_holders;
 	}
+
+	r = dm_sysfs_init(md);
+	if (r)
+		goto out_undo_holders;
 	md->type = type;
 	return 0;
+
+out_undo_holders:
+	list_for_each_entry_continue_reverse(td, &md->table_devices, list)
+		bd_unlink_disk_holder(td->dm_dev.bdev, md->disk);
+	del_gendisk(md->disk);
+	return r;
 }
 
 struct mapped_device *dm_get_md(dev_t dev)