From patchwork Tue May 25 02:25:29 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12277631
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 01/11] block: improve handling of all zones reset operation
Date: Tue, 25 May 2021 11:25:29 +0900
Message-Id: <20210525022539.119661-2-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210525022539.119661-1-damien.lemoal@wdc.com>
References: <20210525022539.119661-1-damien.lemoal@wdc.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-block@vger.kernel.org

SCSI, ZNS and null_blk zoned devices support resetting all zones using
a single command (REQ_OP_ZONE_RESET_ALL), as indicated using the device
request queue flag QUEUE_FLAG_ZONE_RESETALL. This flag is not set for
device mapper targets creating zoned devices. In this case, a user
request for resetting all zones of a device is processed in
blkdev_zone_mgmt() by issuing a REQ_OP_ZONE_RESET operation for each
zone of the device. This leads to different behaviors of the
BLKRESETZONE ioctl() depending on the target device's support for the
reset-all operation. E.g.

blkzone reset /dev/sdX

will reset all zones of a SCSI device using a single command that will
ignore conventional, read-only or offline zones. But a dm-linear device
including conventional, read-only or offline zones cannot be reset in
the same manner, as some of the single zone reset operations issued by
blkdev_zone_mgmt() will fail. E.g.:

blkzone reset /dev/dm-Y
blkzone: /dev/dm-0: BLKRESETZONE ioctl failed: Remote I/O error

To simplify application and tool development, unify the behavior of the
all-zone reset operation by modifying blkdev_zone_mgmt() to not issue a
zone reset operation for conventional, read-only and offline zones,
thus mimicking what an actual reset-all device command does on a device
supporting REQ_OP_ZONE_RESET_ALL. This emulation is done using the new
function blkdev_zone_reset_all_emulated(). The zones needing a reset
are identified using a bitmap that is initialized using a zone report.
Since empty zones do not need a reset, also ignore these zones. The
function blkdev_zone_reset_all() is introduced for block devices
natively supporting reset-all operations. blkdev_zone_mgmt() is
modified to call either function to execute an all-zone reset request.

Signed-off-by: Damien Le Moal
[hch: split into multiple functions]
Signed-off-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Himanshu Madhani
---
 block/blk-zoned.c | 119 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 92 insertions(+), 27 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 250cb76ee615..f47f688b6ea6 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -161,18 +161,89 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
 }
 EXPORT_SYMBOL_GPL(blkdev_report_zones);
 
-static inline bool blkdev_allow_reset_all_zones(struct block_device *bdev,
-						sector_t sector,
-						sector_t nr_sectors)
+static inline unsigned long *blk_alloc_zone_bitmap(int node,
+						   unsigned int nr_zones)
 {
-	if (!blk_queue_zone_resetall(bdev_get_queue(bdev)))
-		return false;
+	return kcalloc_node(BITS_TO_LONGS(nr_zones), sizeof(unsigned long),
+			    GFP_NOIO, node);
+}
 
+static int blk_zone_need_reset_cb(struct blk_zone *zone, unsigned int idx,
+				  void *data)
+{
 	/*
-	 * REQ_OP_ZONE_RESET_ALL can be executed only if the number of sectors
-	 * of the applicable zone range is the entire disk.
+	 * For an all-zones reset, ignore conventional, empty, read-only
+	 * and offline zones.
 	 */
-	return !sector && nr_sectors == get_capacity(bdev->bd_disk);
+	switch (zone->cond) {
+	case BLK_ZONE_COND_NOT_WP:
+	case BLK_ZONE_COND_EMPTY:
+	case BLK_ZONE_COND_READONLY:
+	case BLK_ZONE_COND_OFFLINE:
+		return 0;
+	default:
+		set_bit(idx, (unsigned long *)data);
+		return 0;
+	}
+}
+
+static int blkdev_zone_reset_all_emulated(struct block_device *bdev,
+					  gfp_t gfp_mask)
+{
+	struct request_queue *q = bdev_get_queue(bdev);
+	sector_t capacity = get_capacity(bdev->bd_disk);
+	sector_t zone_sectors = blk_queue_zone_sectors(q);
+	unsigned long *need_reset;
+	struct bio *bio = NULL;
+	sector_t sector = 0;
+	int ret;
+
+	need_reset = blk_alloc_zone_bitmap(q->node, q->nr_zones);
+	if (!need_reset)
+		return -ENOMEM;
+
+	ret = bdev->bd_disk->fops->report_zones(bdev->bd_disk, 0,
+				q->nr_zones, blk_zone_need_reset_cb,
+				need_reset);
+	if (ret < 0)
+		goto out_free_need_reset;
+
+	ret = 0;
+	while (sector < capacity) {
+		if (!test_bit(blk_queue_zone_no(q, sector), need_reset)) {
+			sector += zone_sectors;
+			continue;
+		}
+
+		bio = blk_next_bio(bio, 0, gfp_mask);
+		bio_set_dev(bio, bdev);
+		bio->bi_opf = REQ_OP_ZONE_RESET | REQ_SYNC;
+		bio->bi_iter.bi_sector = sector;
+		sector += zone_sectors;
+
+		/* This may take a while, so be nice to others */
+		cond_resched();
+	}
+
+	if (bio) {
+		ret = submit_bio_wait(bio);
+		bio_put(bio);
+	}
+
+out_free_need_reset:
+	kfree(need_reset);
+	return ret;
+}
+
+static int blkdev_zone_reset_all(struct block_device *bdev, gfp_t gfp_mask)
+{
+	struct bio bio;
+
+	bio_init(&bio, NULL, 0);
+	bio_set_dev(&bio, bdev);
+	bio.bi_opf = REQ_OP_ZONE_RESET_ALL | REQ_SYNC;
+
+	return submit_bio_wait(&bio);
 }
 
 /**
@@ -200,7 +271,7 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
 	sector_t capacity = get_capacity(bdev->bd_disk);
 	sector_t end_sector = sector + nr_sectors;
 	struct bio *bio = NULL;
-	int ret;
+	int ret = 0;
 
 	if (!blk_queue_is_zoned(q))
 		return -EOPNOTSUPP;
@@ -222,20 +293,21 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
 	if ((nr_sectors & (zone_sectors - 1)) && end_sector != capacity)
 		return -EINVAL;
 
+	/*
+	 * In the case of a zone reset operation over all zones,
+	 * REQ_OP_ZONE_RESET_ALL can be used with devices supporting this
+	 * command. For other devices, we emulate this command behavior by
+	 * identifying the zones needing a reset.
+	 */
+	if (op == REQ_OP_ZONE_RESET && sector == 0 && nr_sectors == capacity) {
+		if (!blk_queue_zone_resetall(q))
+			return blkdev_zone_reset_all_emulated(bdev, gfp_mask);
+		return blkdev_zone_reset_all(bdev, gfp_mask);
+	}
+
 	while (sector < end_sector) {
 		bio = blk_next_bio(bio, 0, gfp_mask);
 		bio_set_dev(bio, bdev);
-
-		/*
-		 * Special case for the zone reset operation that reset all
-		 * zones, this is useful for applications like mkfs.
-		 */
-		if (op == REQ_OP_ZONE_RESET &&
-		    blkdev_allow_reset_all_zones(bdev, sector, nr_sectors)) {
-			bio->bi_opf = REQ_OP_ZONE_RESET_ALL | REQ_SYNC;
-			break;
-		}
-
 		bio->bi_opf = op | REQ_SYNC;
 		bio->bi_iter.bi_sector = sector;
 		sector += zone_sectors;
@@ -396,13 +468,6 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	return ret;
 }
 
-static inline unsigned long *blk_alloc_zone_bitmap(int node,
-						   unsigned int nr_zones)
-{
-	return kcalloc_node(BITS_TO_LONGS(nr_zones), sizeof(unsigned long),
-			    GFP_NOIO, node);
-}
-
 void blk_queue_free_zone_bitmaps(struct request_queue *q)
 {
 	kfree(q->conv_zones_bitmap);
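
For readers skimming the series, a minimal sketch (not part of the patch;
the helper name and context are made up) of how an in-kernel caller would
request an all-zones reset through blkdev_zone_mgmt(). With this change the
call behaves the same whether or not the underlying queue has
QUEUE_FLAG_ZONE_RESETALL set:

#include <linux/blkdev.h>

/* Illustrative helper only: reset every zone of a zoned block device. */
static int reset_all_zones_example(struct block_device *bdev)
{
	/*
	 * A reset spanning sector 0 to the device capacity now takes the
	 * REQ_OP_ZONE_RESET_ALL path on devices supporting it, and the
	 * emulated per-zone path (skipping conventional, empty, read-only
	 * and offline zones) otherwise.
	 */
	return blkdev_zone_mgmt(bdev, REQ_OP_ZONE_RESET, 0,
				get_capacity(bdev->bd_disk), GFP_KERNEL);
}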
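
And a small userspace sketch (again not part of the patch; device path and
error handling are illustrative only) of the request whose behavior this
patch unifies: an all-zones reset issued through the BLKRESETZONE ioctl with
a zone range covering the whole device, which is essentially what
"blkzone reset" does:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* BLKGETSIZE64 */
#include <linux/blkzoned.h>	/* BLKRESETZONE, struct blk_zone_range */

int main(int argc, char **argv)
{
	struct blk_zone_range range;
	unsigned long long size;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <zoned block device>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Device capacity in bytes, converted to 512B sectors */
	if (ioctl(fd, BLKGETSIZE64, &size) < 0) {
		perror("BLKGETSIZE64");
		close(fd);
		return 1;
	}

	/* Reset every zone: sector 0 up to the device capacity */
	range.sector = 0;
	range.nr_sectors = size >> 9;

	if (ioctl(fd, BLKRESETZONE, &range) < 0) {
		perror("BLKRESETZONE");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}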