From patchwork Thu Jun 29 06:26:01 2023
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 13296643
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe, linux-nvme@lists.infradead.org,
	Christoph Hellwig, Keith Busch, linux-scsi@vger.kernel.org,
	"Martin K. Petersen"
Subject: [PATCH 4/5] block: virtio_blk: Set zone limits before revalidating zones
Date: Thu, 29 Jun 2023 15:26:01 +0900
Message-ID: <20230629062602.234913-5-dlemoal@kernel.org>
In-Reply-To: <20230629062602.234913-1-dlemoal@kernel.org>
References: <20230629062602.234913-1-dlemoal@kernel.org>
X-Mailing-List: linux-block@vger.kernel.org

In virtblk_probe_zoned_device(), call blk_queue_chunk_sectors() and
blk_queue_max_zone_append_sectors() to set the device zone size and the
maximum zone append sector limit, respectively, before executing
blk_revalidate_disk_zones(), so that this function can check the zone
limits.
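
For clarity, the resulting order of operations in
virtblk_probe_zoned_device() is sketched below. This is an abridged
sketch, not the full function: the zoned-model checks, discard setup
and error messages are omitted, and wg is assumed to already hold the
write granularity read earlier in the function.

	/*
	 * Abridged sketch of the patched flow: all zone limits are
	 * set before the zones are revalidated.
	 */
	blk_queue_chunk_sectors(q, vblk->zone_sectors);	/* zone size */

	virtio_cread(vdev, struct virtio_blk_config,
		     zoned.max_append_sectors, &v);
	if (!v || (v << SECTOR_SHIFT) < wg)	/* invalid append limit */
		return -ENODEV;
	blk_queue_max_zone_append_sectors(q, v);

	/* Revalidate last, so it can check the limits set above. */
	return blk_revalidate_disk_zones(vblk->disk, NULL);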
Signed-off-by: Damien Le Moal
---
 drivers/block/virtio_blk.c | 35 ++++++++++++++++-------------------
 1 file changed, 16 insertions(+), 19 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b47358da92a2..7d9c9f9d2ae9 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -751,7 +751,6 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 {
 	u32 v, wg;
 	u8 model;
-	int ret;
 
 	virtio_cread(vdev, struct virtio_blk_config,
 		     zoned.model, &model);
@@ -806,6 +805,7 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 			vblk->zone_sectors);
 		return -ENODEV;
 	}
+	blk_queue_chunk_sectors(q, vblk->zone_sectors);
 	dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors);
 
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
@@ -814,26 +814,23 @@ static int virtblk_probe_zoned_device(struct virtio_device *vdev,
 		blk_queue_max_discard_sectors(q, 0);
 	}
 
-	ret = blk_revalidate_disk_zones(vblk->disk, NULL);
-	if (!ret) {
-		virtio_cread(vdev, struct virtio_blk_config,
-			     zoned.max_append_sectors, &v);
-		if (!v) {
-			dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
-			return -ENODEV;
-		}
-		if ((v << SECTOR_SHIFT) < wg) {
-			dev_err(&vdev->dev,
-				"write granularity %u exceeds max_append_sectors %u limit\n",
-				wg, v);
-			return -ENODEV;
-		}
-
-		blk_queue_max_zone_append_sectors(q, v);
-		dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
+	virtio_cread(vdev, struct virtio_blk_config,
+		     zoned.max_append_sectors, &v);
+	if (!v) {
+		dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
+		return -ENODEV;
+	}
+	if ((v << SECTOR_SHIFT) < wg) {
+		dev_err(&vdev->dev,
+			"write granularity %u exceeds max_append_sectors %u limit\n",
+			wg, v);
+		return -ENODEV;
 	}
 
-	return ret;
+	blk_queue_max_zone_append_sectors(q, v);
+	dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
+
+	return blk_revalidate_disk_zones(vblk->disk, NULL);
 }
 
 #else