From patchwork Fri May 21 03:01:09 2021
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 12271733
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v3 01/11] block: improve handling of
all zones reset operation Date: Fri, 21 May 2021 12:01:09 +0900 Message-Id: <20210521030119.1209035-2-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org SCSI, ZNS and null_blk zoned devices support resetting all zones using a single command (REQ_OP_ZONE_RESET_ALL), as indicated using the device request queue flag QUEUE_FLAG_ZONE_RESETALL. This flag is not set for device mapper targets creating zoned devices. In this case, a user request for resetting all zones of a device is processed in blkdev_zone_mgmt() by issuing a REQ_OP_ZONE_RESET operation for each zone of the device. This leads to different behaviors of the BLKRESETZONE ioctl() depending on the target device support for the reset all operation. E.g. blkzone reset /dev/sdX will reset all zones of a SCSI device using a single command that will ignore conventional, read-only or offline zones. But a dm-linear device including conventional, read-only or offline zones cannot be reset in the same manner as some of the single zone reset operations issued by blkdev_zone_mgmt() will fail. E.g.: blkzone reset /dev/dm-Y blkzone: /dev/dm-0: BLKRESETZONE ioctl failed: Remote I/O error To simplify applications and tools development, unify the behavior of the all-zone reset operation by modifying blkdev_zone_mgmt() to not issue a zone reset operation for conventional, read-only and offline zones, thus mimicking what an actual reset-all device command does on a device supporting REQ_OP_ZONE_RESET_ALL. This emulation is done using the new function blkdev_zone_reset_all_emulated(). The zones needing a reset are identified using a bitmap that is initialized using a zone report. Since empty zones do not need a reset, also ignore these zones. The function blkdev_zone_reset_all() is introduced for block devices natively supporting reset all operations. blkdev_zone_mgmt() is modified to call either function to execute an all zone reset request. Signed-off-by: Damien Le Moal [hch: split into multiple functions] Signed-off-by: Christoph Hellwig Reviewed-by: Hannes Reinecke Reviewed-by: Johannes Thumshirn --- block/blk-zoned.c | 117 +++++++++++++++++++++++++++++++++++----------- 1 file changed, 90 insertions(+), 27 deletions(-) diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 250cb76ee615..0ded16b0f713 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -161,18 +161,87 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector, } EXPORT_SYMBOL_GPL(blkdev_report_zones); -static inline bool blkdev_allow_reset_all_zones(struct block_device *bdev, - sector_t sector, - sector_t nr_sectors) +static inline unsigned long *blk_alloc_zone_bitmap(int node, + unsigned int nr_zones) { - if (!blk_queue_zone_resetall(bdev_get_queue(bdev))) - return false; + return kcalloc_node(BITS_TO_LONGS(nr_zones), sizeof(unsigned long), + GFP_NOIO, node); +} +static int blk_zone_need_reset_cb(struct blk_zone *zone, unsigned int idx, + void *data) +{ /* - * REQ_OP_ZONE_RESET_ALL can be executed only if the number of sectors - * of the applicable zone range is the entire disk. + * For an all-zones reset, ignore conventional, empty, read-only + * and offline zones. 
*/ - return !sector && nr_sectors == get_capacity(bdev->bd_disk); + switch (zone->cond) { + case BLK_ZONE_COND_NOT_WP: + case BLK_ZONE_COND_EMPTY: + case BLK_ZONE_COND_READONLY: + case BLK_ZONE_COND_OFFLINE: + return 0; + default: + set_bit(idx, (unsigned long *)data); + return 0; + } +} + +static int blkdev_zone_reset_all_emulated(struct block_device *bdev, + gfp_t gfp_mask) +{ + struct request_queue *q = bdev_get_queue(bdev); + sector_t capacity = get_capacity(bdev->bd_disk); + sector_t zone_sectors = blk_queue_zone_sectors(q); + unsigned long *need_reset; + struct bio *bio = NULL; + sector_t sector; + int ret; + + need_reset = blk_alloc_zone_bitmap(q->node, q->nr_zones); + if (!need_reset) + return -ENOMEM; + + ret = bdev->bd_disk->fops->report_zones(bdev->bd_disk, 0, + q->nr_zones, blk_zone_need_reset_cb, + need_reset); + if (ret < 0) + goto out_free_need_reset; + + ret = 0; + while (sector < capacity) { + if (!test_bit(blk_queue_zone_no(q, sector), need_reset)) { + sector += zone_sectors; + continue; + } + bio = blk_next_bio(bio, 0, gfp_mask); + bio_set_dev(bio, bdev); + bio->bi_opf = REQ_OP_ZONE_RESET | REQ_SYNC; + bio->bi_iter.bi_sector = sector; + sector += zone_sectors; + + /* This may take a while, so be nice to others */ + cond_resched(); + } + + if (bio) { + ret = submit_bio_wait(bio); + bio_put(bio); + } + +out_free_need_reset: + kfree(need_reset); + return ret; +} + +static int blkdev_zone_reset_all(struct block_device *bdev, gfp_t gfp_mask) +{ + struct bio bio; + + bio_init(&bio, NULL, 0); + bio_set_dev(&bio, bdev); + bio.bi_opf = REQ_OP_ZONE_RESET_ALL | REQ_SYNC; + return submit_bio_wait(&bio); } /** @@ -200,7 +269,7 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op, sector_t capacity = get_capacity(bdev->bd_disk); sector_t end_sector = sector + nr_sectors; struct bio *bio = NULL; - int ret; + int ret = 0; if (!blk_queue_is_zoned(q)) return -EOPNOTSUPP; @@ -222,20 +291,21 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op, if ((nr_sectors & (zone_sectors - 1)) && end_sector != capacity) return -EINVAL; + /* + * In the case of a zone reset operation over all zones, + * REQ_OP_ZONE_RESET_ALL can be used with devices supporting this + * command. For other devices, we emulate this command behavior by + * identifying the zones needing a reset. + */ + if (op == REQ_OP_ZONE_RESET && sector == 0 && nr_sectors == capacity) { + if (!blk_queue_zone_resetall(q)) + return blkdev_zone_reset_all_emulated(bdev, gfp_mask); + return blkdev_zone_reset_all(bdev, gfp_mask); + } + while (sector < end_sector) { bio = blk_next_bio(bio, 0, gfp_mask); bio_set_dev(bio, bdev); - - /* - * Special case for the zone reset operation that reset all - * zones, this is useful for applications like mkfs. 
- */ - if (op == REQ_OP_ZONE_RESET && - blkdev_allow_reset_all_zones(bdev, sector, nr_sectors)) { - bio->bi_opf = REQ_OP_ZONE_RESET_ALL | REQ_SYNC; - break; - } - bio->bi_opf = op | REQ_SYNC; bio->bi_iter.bi_sector = sector; sector += zone_sectors; @@ -396,13 +466,6 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode, return ret; } -static inline unsigned long *blk_alloc_zone_bitmap(int node, - unsigned int nr_zones) -{ - return kcalloc_node(BITS_TO_LONGS(nr_zones), sizeof(unsigned long), - GFP_NOIO, node); -} - void blk_queue_free_zone_bitmaps(struct request_queue *q) { kfree(q->conv_zones_bitmap); From patchwork Fri May 21 03:01:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271735 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55135C433B4 for ; Fri, 21 May 2021 03:01:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 398526121E for ; Fri, 21 May 2021 03:01:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233354AbhEUDCq (ORCPT ); Thu, 20 May 2021 23:02:46 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232046AbhEUDCq (ORCPT ); Thu, 20 May 2021 23:02:46 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1621566083; x=1653102083; h=from:to:subject:date:message-id:in-reply-to:references: mime-version:content-transfer-encoding; bh=x1WpI3Z+8cyaMWROHXaks7b5keOWUGnB3d9FR3KBb4M=; b=S+i0c/yQtYgjgirUQ8aIrgBmq6bD+3rviHTsFlJ5a+50BLrEiW9dXCMN A6EdV5Q6br3Fi2Cu5Fluqm2NI/ucGrUNxYBODFLOdjfD2aP6dhPXgvoTg qbKIyy1Pep/kQVuaF3+M3pbsUbnymz0WEg0rmVbd/a3sQ2Of4jqH4tGNv Oq6GwpOzjqlCoXvVtVB4SQeTfp8HTPdmFxLytcwLYxnjmHvR4iFrKcPaJ 1npGek+mNg5a/I8fcz5i4PevqWaU+UuoDR7GKH9gK1t25Z9giT+CQxOLs ndSPDt3xmKumQRUJbRrh+U8AKeHO1LdKFJlL21b9HtGDNKryTfV00X7He Q==; IronPort-SDR: 7hlujYTfXI242d7q/HR7Jk1XzfNZa9D4+Hzs5hIH7N68QN7y1DIk+j2tGZfSLA9cIIiHYIeZCc O1t23FpYEaqjYdJWGsPBHjMIgH3XZysxuWUDUJkl4wuIuyM5JRlw4AtIUHgcVzeoR0zGKLMMnX JWl5VFHdhx/Uyghd5aBMPC6yZHViEu7qfdWDDp/OSk8S1D0QyxHfZFAzESbGt4soH3op9X43xz bxAKiwTE8raBrH9rRc47iJfiMGNRUWvZgo7kzu0b0vtAhxBqYIOvgQxXnrLtzQj4JuHf0e84pv oDw= X-IronPort-AV: E=Sophos;i="5.82,313,1613404800"; d="scan'208";a="168943804" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 21 May 2021 11:01:23 +0800 IronPort-SDR: +TtpdBLNLMnRQaIFOtjAkt+qybqBnCaPVy9uP8SYYURUmSAMREI5sZ5qWSSbYze1iZtubkKV/t vxGiswtPDJQVTj5iYAJ7VT2qg64m0VzvNRxa2IeRZIPo034n2GB285VBOALYvgCbSwMS2jMIHk aZBcQParPqb5XWEeOUg7FBvF3GubqicpcZeu+rsteAAfR5nrzypEFXjeCz8yLG93RSV17hBidp oFDEURttxGAz2KWrEMIGBRfltClga1hz7Y5zl3b89g3Wo+N6reI+ZFfRvRUa0zjREuETouDYWb NoqBh2KYrJK1dmtP7sHnZv2z Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 May 2021 19:40:59 -0700 
IronPort-SDR: eWE035UCu7Mks9ONY6f7nljDD8+7jYU6zjDemN2JpwWJFMSe/S67w+4WR+rDth3H/WR2gT/hhQ iJcLtA02Z/cObCMfSnvmnlE64WjzemeoOMN5fvfoaCbrsWKym3JxyqoWA+h07lvOPqa6emT9Fb VG8WP9ZumCyBDPclidyMt09I47nJddBJCuozI4dghjhV0mNkvRT+2WiGTj5reahz8wZkIPHXCU 8G6gbbAFOtJtvgBoMohnJA/+QYw35B3Y0KCNwyOgijdgych/BR7Aadj4vwd2vwYoPy8s6gDZFd cmM= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 20 May 2021 20:01:23 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 02/11] block: introduce bio zone helpers Date: Fri, 21 May 2021 12:01:10 +0900 Message-Id: <20210521030119.1209035-3-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Introduce the helper functions bio_zone_no() and bio_zone_is_seq(). Both are the BIO counterparts of the request helpers blk_rq_zone_no() and blk_rq_zone_is_seq(), respectively returning the number of the target zone of a bio and true if the BIO target zone is sequential. Signed-off-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Johannes Thumshirn Reviewed-by: Chaitanya Kulkarni --- include/linux/blkdev.h | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index f69c75bd6d27..2db0f376f5d9 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -1008,6 +1008,18 @@ static inline unsigned int blk_rq_stats_sectors(const struct request *rq) /* Helper to convert BLK_ZONE_ZONE_XXX to its string format XXX */ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond); +static inline unsigned int bio_zone_no(struct bio *bio) +{ + return blk_queue_zone_no(bdev_get_queue(bio->bi_bdev), + bio->bi_iter.bi_sector); +} + +static inline unsigned int bio_zone_is_seq(struct bio *bio) +{ + return blk_queue_zone_is_seq(bdev_get_queue(bio->bi_bdev), + bio->bi_iter.bi_sector); +} + static inline unsigned int blk_rq_zone_no(struct request *rq) { return blk_queue_zone_no(rq->q, blk_rq_pos(rq)); From patchwork Fri May 21 03:01:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271737 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5A70C43460 for ; Fri, 21 May 2021 03:01:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 865706128A for ; Fri, 21 May 2021 03:01:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233511AbhEUDCs (ORCPT ); Thu, 20 May 2021 23:02:48 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232046AbhEUDCr (ORCPT ); Thu, 20 May 2021 23:02:47 -0400 DKIM-Signature: v=1; a=rsa-sha256; 
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v3 03/11] block: introduce BIO_ZONE_WRITE_LOCKED bio flag
Date: Fri, 21 May 2021 12:01:11 +0900
Message-Id: <20210521030119.1209035-4-damien.lemoal@wdc.com>
In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com>
References: <20210521030119.1209035-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Introduce the BIO flag BIO_ZONE_WRITE_LOCKED to indicate that a BIO owns the write lock of the zone it is targeting. This is the counterpart of the struct request flag RQF_ZONE_WRITE_LOCKED.

This new BIO flag is reserved for now for zone write locking control for device mapper targets exposing a zoned block device. Since in this case the lock flag must not be propagated to the struct request that will be used to process the BIO, a BIO private flag is used rather than changing the RQF_ZONE_WRITE_LOCKED request flag into a common REQ_XXX flag that could be used for both BIOs and requests. This avoids conflicts down the stack with the block IO scheduler zone write locking (in mq-deadline).
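As an illustration (hypothetical example, not code from this patch), a user of the new flag would drive it through the generic bio flag accessors from <linux/bio.h>:

/* Hypothetical sketch: mark a bio as owning its target zone write lock
 * before issuing it, and drop the flag once the write completes. The
 * function names are made up for illustration only. */
static void example_zone_write_lock_bio(struct bio *bio)
{
	bio_set_flag(bio, BIO_ZONE_WRITE_LOCKED);
}

static void example_zone_write_unlock_bio(struct bio *bio)
{
	if (bio_flagged(bio, BIO_ZONE_WRITE_LOCKED))
		bio_clear_flag(bio, BIO_ZONE_WRITE_LOCKED);
}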
Signed-off-by: Damien Le Moal Reviewed-by: Christoph Hellwig Reviewed-by: Hannes Reinecke Reviewed-by: Chaitanya Kulkarni --- include/linux/blk_types.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index db026b6ec15a..e5cf12f102a2 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -304,6 +304,7 @@ enum { BIO_CGROUP_ACCT, /* has been accounted to a cgroup */ BIO_TRACKED, /* set if bio goes through the rq_qos path */ BIO_REMAPPED, + BIO_ZONE_WRITE_LOCKED, /* Owns a zoned device zone write lock */ BIO_FLAG_LAST }; From patchwork Fri May 21 03:01:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271739 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3487C433ED for ; Fri, 21 May 2021 03:01:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C6CAB6128A for ; Fri, 21 May 2021 03:01:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233706AbhEUDCt (ORCPT ); Thu, 20 May 2021 23:02:49 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232046AbhEUDCs (ORCPT ); Thu, 20 May 2021 23:02:48 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1621566086; x=1653102086; h=from:to:subject:date:message-id:in-reply-to:references: mime-version:content-transfer-encoding; bh=ufPSlVbwPGym1w8qq3VpiWPqELmfMohbbGux1r6mm7k=; b=fY+gI7fU2ZhOsTljc7179NjdIZojTiUJI4duIs7BEZTHdXiJR6dQu+3H yKFpoUB0ePzJ4eM8G1D3PVshIQ/A414biYMefm8/UiJG9dkHUMIEOnkvO 2SWwEsxjDRscu85cl3GyOwvVkbnXYGyFM2IxR1I1CAv4383YsGpGUEo+W RRAfbS7juSsoMXoUrhAZs2xKlcLi9nhxmFvk7Uzb8+gpGS850wARts0JS Rj12gDTlDoYXTYqzpgzCWUJ9SS7MZuUA/9xH3AGYAGd0fjCoWp/Wk7LGU qQUwGefohrf9Eqfg/KLPCF/bMzZFpxi8jTolfXfizIETKS8Klmuj4uGwy g==; IronPort-SDR: ceSJHM50IRaKAoga1DYm5WdVo+sM8QMqgJyU3aLpA4lWOd4HwtMtVCPtRQ+2Zah/VstS/D6Z0t jgc/XJnTvZxzpJ4XkzyoZeP+Hh1HVbAQJuhfoC9/hTPXkUOw6uXbJRhEzmMuUFnsDDwV3yVTjj QvgwN2p5K+T+RYMi1Jx1ZmYZNe8iqLFkKBk/Ss/XHbIFUFE9o7BafwIPR3AtEVT41z7RZdS/NU m3nDnDKQSxDl5cEQSQWoqAj4ixa/K4MmHbiWl2AsFZ8iyIxweJRE22JuSAubChB92WzSR9PbPV 8YQ= X-IronPort-AV: E=Sophos;i="5.82,313,1613404800"; d="scan'208";a="168943821" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 21 May 2021 11:01:26 +0800 IronPort-SDR: i8aOKU/PkR1GRBUmpZX+sT3ct6lqDZh+XOhJaT6xKFSVZsAvMdfxcE1MbrTss70BOQBHbKDEg9 7YXCoxOMieIZijrazBT66c9qxRdZh+8m4AXkHq7Y45H/dG7vtsn4K8FufPi5xgul8Xd8WZRFcD 68w/kv4Y8NReFgoddRuVzb8Otn7FHICekYCE826wjqPGjOgpOpUujxJKwxsMWXzDcoFcYXlSo1 KzVYWytyFX7mPakmiQqw1JRtpcsbXdpDHJ+jJK9qtiM1t0N+Z3L3ASru4inArVKaLuB4TXeGtH L3ZLzQDfJRW8JMAQxblNwZXJ Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 May 2021 19:41:02 -0700 IronPort-SDR: 
0rXFbAARo+VBtr5JRENEYPHlGpVr3hVF0f/SSFtwxSu9j0wbFEysCpN3hFtPOBkbKZwaVJT1h1 Th87fWx/CNKLTWDVs1UBk8pC7kB/VEu057k0lhIn2lJ81cZue8HyNUsFeUn6eGTz+RhITO1onq S6OalEtQQTiluUhkcB76VTR9lurzyhS/c8x+7UIrAb5duuOcY3O8EoryDiERSb1nhDZlwIGh99 3aD0v/0Sor0/Yx0H/IZGHiE6tLARLZa6iombSzX09yTJhU0NoYi2o111mUCkFFnQFjRFwzzfwx b80= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 20 May 2021 20:01:26 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 04/11] dm: Fix dm_accept_partial_bio() Date: Fri, 21 May 2021 12:01:12 +0900 Message-Id: <20210521030119.1209035-5-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Fix dm_accept_partial_bio() to actually check that zone management commands are not passed as explained in the function documentation comment. Also, since a zone append operation cannot be split, add REQ_OP_ZONE_APPEND as a forbidden command. White lines are added around the group of BUG_ON() calls to make the code more legible. Signed-off-by: Damien Le Moal Reviewed-by: Johannes Thumshirn Reviewed-by: Hannes Reinecke --- drivers/md/dm.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/md/dm.c b/drivers/md/dm.c index ca2aedd8ee7d..a9211575bfed 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -1237,8 +1237,9 @@ static int dm_dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff, /* * A target may call dm_accept_partial_bio only from the map routine. It is - * allowed for all bio types except REQ_PREFLUSH, REQ_OP_ZONE_RESET, - * REQ_OP_ZONE_OPEN, REQ_OP_ZONE_CLOSE and REQ_OP_ZONE_FINISH. + * allowed for all bio types except REQ_PREFLUSH, zone management operations + * (REQ_OP_ZONE_RESET, REQ_OP_ZONE_OPEN, REQ_OP_ZONE_CLOSE and + * REQ_OP_ZONE_FINISH) and zone append writes. 
* * dm_accept_partial_bio informs the dm that the target only wants to process * additional n_sectors sectors of the bio and the rest of the data should be @@ -1268,9 +1269,13 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors) { struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone); unsigned bi_size = bio->bi_iter.bi_size >> SECTOR_SHIFT; + BUG_ON(bio->bi_opf & REQ_PREFLUSH); + BUG_ON(op_is_zone_mgmt(bio_op(bio))); + BUG_ON(bio_op(bio) == REQ_OP_ZONE_APPEND); BUG_ON(bi_size > *tio->len_ptr); BUG_ON(n_sectors > bi_size); + *tio->len_ptr -= bi_size - n_sectors; bio->bi_iter.bi_size = n_sectors << SECTOR_SHIFT; } From patchwork Fri May 21 03:01:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271741 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B768C433B4 for ; Fri, 21 May 2021 03:01:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 56DF66128A for ; Fri, 21 May 2021 03:01:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237936AbhEUDCu (ORCPT ); Thu, 20 May 2021 23:02:50 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235553AbhEUDCu (ORCPT ); Thu, 20 May 2021 23:02:50 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1621566088; x=1653102088; h=from:to:subject:date:message-id:in-reply-to:references: mime-version:content-transfer-encoding; bh=qwMSHzXDr1xSUztltcFrjPV9BIHY9llHie0d63Ir2Pg=; b=dC05c+eFZHLDuMNDOr78bjFu+OiPpIe0+R03suiHvXMavGqrQRHqZSvx GvoQ343pQtgWgrK/yDpCTRT1dYKuVs/4OLQ2FYJqum5waZBC077fOZWhE Wjn/lK5VLdC4Co9uRklNs9xH8msCc9jkFv3JpDTjDivf2TnrDaBdOEhhh TCMgNrkfzV29/1HLKcxyiTMN5twj74dYg35iF2Cpelj5KMWPd533thv7z lvQb9QWgGSaFppLkdNWf0pE57ZdlIYznIZrxCTNWL5GuNlJaIPKnytMu8 LZd8UZONpNBq28nw6d9w6Uc+FuV6BKAQtU8yyszGEZQ+Vf2aV53Ki2aOc A==; IronPort-SDR: /Hue5h/pooGcfekzR0TXYUI3XfOdugW9iYVKPEULQq4k6Wg0/13wZwGVK1gPsQGQVi46CWZK3H t40CfKxhkvzUtpuSbJz1Vt0LT48saztLoTNLT+vAdOLv4/RRiL7rMBRW7oIRrYYfUh3MGfpl3O /eSGncEQdJGaaVlL6xUXFL43WcaoGeaYqgb6tI9VUYH7xg18Unux28JfkRSrVvRTeyFPU03mu4 uGJfdzEsPZSzTTuOHak9bgMwR6zQt3pU3FJCMouwwdHWoqHckhe1ckyUaBVG8r6BfSaoeyMLWD JBU= X-IronPort-AV: E=Sophos;i="5.82,313,1613404800"; d="scan'208";a="168943830" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 21 May 2021 11:01:27 +0800 IronPort-SDR: 3HEpnoNR6yKR0coTvQYwoetNU0ukdBM0lmtbs7oo14EpvdCrp10/XV5NY3D66boyyv1yYJ2CSJ RwAZiY0SZ+vM7jwwvMNA4kr9CoQo8AbqhcppYX+LCx8NVxCbBQuM4d6Proc7hPTTHzwDTxjlxw vJD2q+IYj8r7rXZ6AXKhhe9XMABz+BJ5l+reRHKa+E9ftf8QCNEzSbua+X8+aZjr23xEU/INU5 ug6NCrxTnLTvpdBWPmeezZMz6IuIQ/wc4VBs2ccsbVhTUsiDQhX5/GYp2Seg/7nPWpG4X0UOls VJJKkt4+cdaaesVbwCzTpmJo Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 
20 May 2021 19:41:04 -0700 IronPort-SDR: 4AZpCJ5RZBShdzujQpwtsu4LG365nyts0TnD3MLehk3UQSjrGyAeZeIs+UkRrvRn2uL9L/4Rs6 5fBm7zZeVXVfrs3yB8wyrd+c1+y6Dc9pZ5WlODloIQmu+FtX3XsJ//CFegDiA9GrY4b/nGMUGi KFXAD1F4e4vEw5t5ToT2wl1BXmmBJy5TFd4yMx0gxvkiRJ5SSjK3t6R3a1zUmzeFVHCA+FNSny mXUTq6JuBobwx31k/kA+irggjxfW3QBKEIL91onw1issbC1v+SnkA1OMjsXsNSYwhoSRix63d7 PbQ= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 20 May 2021 20:01:27 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 05/11] dm: cleanup device_area_is_invalid() Date: Fri, 21 May 2021 12:01:13 +0900 Message-Id: <20210521030119.1209035-6-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org In device_area_is_invalid(), use bdev_is_zoned() instead of open coding the test on the zoned model returned by bdev_zoned_model(). Signed-off-by: Damien Le Moal Reviewed-by: Johannes Thumshirn Reviewed-by: Hannes Reinecke --- drivers/md/dm-table.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index ee47a332b462..21fd9cd4da32 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -249,7 +249,7 @@ static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev, * If the target is mapped to zoned block device(s), check * that the zones are not partially mapped. */ - if (bdev_zoned_model(bdev) != BLK_ZONED_NONE) { + if (bdev_is_zoned(bdev)) { unsigned int zone_sectors = bdev_zone_sectors(bdev); if (start & (zone_sectors - 1)) { From patchwork Fri May 21 03:01:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271743 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38267C43460 for ; Fri, 21 May 2021 03:01:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 103196121E for ; Fri, 21 May 2021 03:01:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235553AbhEUDCw (ORCPT ); Thu, 20 May 2021 23:02:52 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232046AbhEUDCw (ORCPT ); Thu, 20 May 2021 23:02:52 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1621566089; x=1653102089; h=from:to:subject:date:message-id:in-reply-to:references: mime-version:content-transfer-encoding; bh=5DsU3z9vFCVUXranzeld/Dhd8errTp9UbrB7wUKuLEY=; b=AOPv4NnEodfjWebPufwrI+aQIjGahseh7aLtIN6LKp9708bLIOTdBrI7 /Wkah3NYJmhtpjJWxHheRlyC1ZSuKz5mq9ZFOgwkFJ1PvvXl9W26CycK5 cG4sVScog2Q/2876A4RAGTOibTFD+LzjcegLipC+rfHgm3JKMHe/TPpcO 
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v3 06/11] dm: move zone related code to dm-zone.c
Date: Fri, 21 May 2021 12:01:14 +0900
Message-Id: <20210521030119.1209035-7-damien.lemoal@wdc.com>
In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com>
References: <20210521030119.1209035-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Move the core and table code used for zoned targets, currently guarded by #ifdef CONFIG_BLK_DEV_ZONED, to the new file dm-zone.c. This file is compiled only when CONFIG_BLK_DEV_ZONED is enabled. The small helper dm_set_zones_restrictions() is introduced to initialize the zone attributes of a mapped device request queue from dm_table_set_restrictions().
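One detail worth spelling out: when CONFIG_BLK_DEV_ZONED is disabled, dm.h (see the hunk below) defines dm_blk_report_zones as NULL rather than providing a stub function, which works because the symbol is only ever used as a method pointer. A sketch of the assumed call-site shape (the actual dm_blk_dops definition lives in dm.c and is not shown in these hunks):

/* Assumed shape, for illustration: the NULL fallback is valid because
 * dm_blk_report_zones only fills in the block_device_operations table. */
static const struct block_device_operations dm_blk_dops = {
	/* ... other methods ... */
	.report_zones	= dm_blk_report_zones,	/* NULL when !CONFIG_BLK_DEV_ZONED */
};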
Signed-off-by: Damien Le Moal Reviewed-by: Johannes Thumshirn Reviewed-by: Hannes Reinecke --- drivers/md/Makefile | 4 ++ drivers/md/dm-table.c | 14 ++---- drivers/md/dm-zone.c | 102 ++++++++++++++++++++++++++++++++++++++++++ drivers/md/dm.c | 78 -------------------------------- drivers/md/dm.h | 11 +++++ 5 files changed, 120 insertions(+), 89 deletions(-) create mode 100644 drivers/md/dm-zone.c diff --git a/drivers/md/Makefile b/drivers/md/Makefile index ef7ddc27685c..a74aaf8b1445 100644 --- a/drivers/md/Makefile +++ b/drivers/md/Makefile @@ -92,6 +92,10 @@ ifeq ($(CONFIG_DM_UEVENT),y) dm-mod-objs += dm-uevent.o endif +ifeq ($(CONFIG_BLK_DEV_ZONED),y) +dm-mod-objs += dm-zone.o +endif + ifeq ($(CONFIG_DM_VERITY_FEC),y) dm-verity-objs += dm-verity-fec.o endif diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 21fd9cd4da32..dd9f648ab598 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -2064,17 +2064,9 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, dm_table_any_dev_attr(t, device_is_not_random, NULL)) blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); - /* - * For a zoned target, the number of zones should be updated for the - * correct value to be exposed in sysfs queue/nr_zones. For a BIO based - * target, this is all that is needed. - */ -#ifdef CONFIG_BLK_DEV_ZONED - if (blk_queue_is_zoned(q)) { - WARN_ON_ONCE(queue_is_mq(q)); - q->nr_zones = blkdev_nr_zones(t->md->disk); - } -#endif + /* For a zoned target, setup the zones related queue attributes */ + if (blk_queue_is_zoned(q)) + dm_set_zones_restrictions(t, q); dm_update_keyslot_manager(q, t); blk_queue_update_readahead(q); diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c new file mode 100644 index 000000000000..3243c42b7951 --- /dev/null +++ b/drivers/md/dm-zone.c @@ -0,0 +1,102 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + */ + +#include + +#include "dm-core.h" + +/* + * User facing dm device block device report zone operation. This calls the + * report_zones operation for each target of a device table. This operation is + * generally implemented by targets using dm_report_zones(). + */ +int dm_blk_report_zones(struct gendisk *disk, sector_t sector, + unsigned int nr_zones, report_zones_cb cb, void *data) +{ + struct mapped_device *md = disk->private_data; + struct dm_table *map; + int srcu_idx, ret; + struct dm_report_zones_args args = { + .next_sector = sector, + .orig_data = data, + .orig_cb = cb, + }; + + if (dm_suspended_md(md)) + return -EAGAIN; + + map = dm_get_live_table(md, &srcu_idx); + if (!map) { + ret = -EIO; + goto out; + } + + do { + struct dm_target *tgt; + + tgt = dm_table_find_target(map, args.next_sector); + if (WARN_ON_ONCE(!tgt->type->report_zones)) { + ret = -EIO; + goto out; + } + + args.tgt = tgt; + ret = tgt->type->report_zones(tgt, &args, + nr_zones - args.zone_idx); + if (ret < 0) + goto out; + } while (args.zone_idx < nr_zones && + args.next_sector < get_capacity(disk)); + + ret = args.zone_idx; +out: + dm_put_live_table(md, srcu_idx); + return ret; +} + +int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data) +{ + struct dm_report_zones_args *args = data; + sector_t sector_diff = args->tgt->begin - args->start; + + /* + * Ignore zones beyond the target range. + */ + if (zone->start >= args->start + args->tgt->len) + return 0; + + /* + * Remap the start sector and write pointer position of the zone + * to match its position in the target range. 
+ */ + zone->start += sector_diff; + if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) { + if (zone->cond == BLK_ZONE_COND_FULL) + zone->wp = zone->start + zone->len; + else if (zone->cond == BLK_ZONE_COND_EMPTY) + zone->wp = zone->start; + else + zone->wp += sector_diff; + } + + args->next_sector = zone->start + zone->len; + return args->orig_cb(zone, args->zone_idx++, args->orig_data); +} +EXPORT_SYMBOL_GPL(dm_report_zones_cb); + +void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) +{ + if (!blk_queue_is_zoned(q)) + return; + + /* + * For a zoned target, the number of zones should be updated for the + * correct value to be exposed in sysfs queue/nr_zones. For a BIO based + * target, this is all that is needed. + */ + WARN_ON_ONCE(queue_is_mq(q)); + q->nr_zones = blkdev_nr_zones(t->md->disk); +} + diff --git a/drivers/md/dm.c b/drivers/md/dm.c index a9211575bfed..45d2dc2ee844 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -444,84 +444,6 @@ static int dm_blk_getgeo(struct block_device *bdev, struct hd_geometry *geo) return dm_get_geometry(md, geo); } -#ifdef CONFIG_BLK_DEV_ZONED -int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data) -{ - struct dm_report_zones_args *args = data; - sector_t sector_diff = args->tgt->begin - args->start; - - /* - * Ignore zones beyond the target range. - */ - if (zone->start >= args->start + args->tgt->len) - return 0; - - /* - * Remap the start sector and write pointer position of the zone - * to match its position in the target range. - */ - zone->start += sector_diff; - if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL) { - if (zone->cond == BLK_ZONE_COND_FULL) - zone->wp = zone->start + zone->len; - else if (zone->cond == BLK_ZONE_COND_EMPTY) - zone->wp = zone->start; - else - zone->wp += sector_diff; - } - - args->next_sector = zone->start + zone->len; - return args->orig_cb(zone, args->zone_idx++, args->orig_data); -} -EXPORT_SYMBOL_GPL(dm_report_zones_cb); - -static int dm_blk_report_zones(struct gendisk *disk, sector_t sector, - unsigned int nr_zones, report_zones_cb cb, void *data) -{ - struct mapped_device *md = disk->private_data; - struct dm_table *map; - int srcu_idx, ret; - struct dm_report_zones_args args = { - .next_sector = sector, - .orig_data = data, - .orig_cb = cb, - }; - - if (dm_suspended_md(md)) - return -EAGAIN; - - map = dm_get_live_table(md, &srcu_idx); - if (!map) { - ret = -EIO; - goto out; - } - - do { - struct dm_target *tgt; - - tgt = dm_table_find_target(map, args.next_sector); - if (WARN_ON_ONCE(!tgt->type->report_zones)) { - ret = -EIO; - goto out; - } - - args.tgt = tgt; - ret = tgt->type->report_zones(tgt, &args, - nr_zones - args.zone_idx); - if (ret < 0) - goto out; - } while (args.zone_idx < nr_zones && - args.next_sector < get_capacity(disk)); - - ret = args.zone_idx; -out: - dm_put_live_table(md, srcu_idx); - return ret; -} -#else -#define dm_blk_report_zones NULL -#endif /* CONFIG_BLK_DEV_ZONED */ - static int dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx, struct block_device **bdev) { diff --git a/drivers/md/dm.h b/drivers/md/dm.h index b441ad772c18..fdf1536a4b62 100644 --- a/drivers/md/dm.h +++ b/drivers/md/dm.h @@ -100,6 +100,17 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t); */ #define dm_target_hybrid(t) (dm_target_bio_based(t) && dm_target_request_based(t)) +/* + * Zoned targets related functions. 
+ */ +void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q); +#ifdef CONFIG_BLK_DEV_ZONED +int dm_blk_report_zones(struct gendisk *disk, sector_t sector, + unsigned int nr_zones, report_zones_cb cb, void *data); +#else +#define dm_blk_report_zones NULL +#endif + /*----------------------------------------------------------------- * A registry of target types. *---------------------------------------------------------------*/ From patchwork Fri May 21 03:01:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271745 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8CC87C433B4 for ; Fri, 21 May 2021 03:01:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6CBA96108C for ; Fri, 21 May 2021 03:01:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232046AbhEUDCx (ORCPT ); Thu, 20 May 2021 23:02:53 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234712AbhEUDCx (ORCPT ); Thu, 20 May 2021 23:02:53 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1621566091; x=1653102091; h=from:to:subject:date:message-id:in-reply-to:references: mime-version:content-transfer-encoding; bh=shPpSeP296J2qygw6NO0KVC+guYs/L89/b5ZZys6J5g=; b=DShdwlIQqzX/TBDL9PBD1+bBW2a0we+t+cSr9vtLzHZ4sIiNKZBah2ep TyLHw//DVm1QyC4qTGkkcEBoNIY/8SOee530UfTHB0MKcVZTK7cdmd3BM z3splmG3EsuhsmTnyTGFfm/XlWnlKQ40gAh54na1pv1cETUQq/acsqK3u DLg2wDJPhKHbNTUxiwietWU/Qiis5HI2RFewq7EStlrjiADN9isiHoqGG KtZAbErBaDyKCOIoWg2oIYFqJ3Xr+ue6TnT1SL/2E6xjcIgICSA79xa9+ Z8S3urFSiZJR42EIodK7YEs1SGAPj03tIBY6Quoi2gwqKZWxApax7I6mL w==; IronPort-SDR: Si3QHv7ILCzM7iIxDXzGlbW4Yt4oJd5dX+w3pI02BU64Uy58Cd1yuPwWJsiZWc/xElhpzCrsbH cwbWBmajj9/FAS9TIuyx1JclPqiGmL3Ljnd1WgJ7GwPcQjr/hqnnEMG56ekOkUZBCZIofYmsal ZRpfPp9YiQxrp5cBJXhZLhrTioc5ceLAuXiiHOdM7joV79K6sXHjZ1KnknEZzkXD/OEJrW84Lq dYOrdyE/t73jsEEyF6pCm7ihwFw8ShRDRhZlfuC8M8F/yPQLxmP0ZgD1HY1f5Ws4v3CI65Nw15 GSA= X-IronPort-AV: E=Sophos;i="5.82,313,1613404800"; d="scan'208";a="168943860" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 21 May 2021 11:01:30 +0800 IronPort-SDR: pXRWFodA1KIXe4Oh51/I+0uMet1O95WbzlB/XLdGlNugVJ3J7iwbuZI0Csby57v3Pv0braTDKl 7CkpLyGz1T/lR4dlsQU96HCGyr5CDqDynBa8eGDW5zuKOVfUHBJZXNrR4ACTXdqXXdlnMN20pS IugFjb/piujAJTuUKOvweFwbroZN1r73Hsrt2L6dZlbmqeZZGqeOTy8BlXYbDTjIDB/N+/HMEB zABdE9pZq03HLnjtNjd4l+sKCbFhkprEcPkoYbWx8RCy7Q+nf8IXZ6UKCVqCtaTDxYdK+rxGYV zxjnWC4cu3yTqdPax3ZGly2v Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 May 2021 19:41:07 -0700 IronPort-SDR: XTzp68aMOYTDkar2IQ8AuQcI3JQX77oPn7SA8j/h+8A/9jF7cuJS1owUIgiHJBx6rOTuqHiRK4 7mbOvkzUZov5f5P9QhcAmIIa5E+LK+B/9c5SpWJOACEfliqcT03VDrKY1DhAPQWyD+DyyB4IzU 
I5VRTuvaCWBsXYcfmZTcGK0aWHTCNxq7QSjxqSv4g9qoUtjrAjIVBV4B0jg8NdiA0bafjs0yzs DGKA7wCKnOobWrXlrB80PE7YEYtiXVHbG7YrrlEv8JfiFJ5fd/zFjecEWOe2XILUrhce3FpKpq fa4= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 20 May 2021 20:01:30 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 07/11] dm: Introduce dm_report_zones() Date: Fri, 21 May 2021 12:01:15 +0900 Message-Id: <20210521030119.1209035-8-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org To simplify the implementation of the report_zones operation of a zoned target, introduce the function dm_report_zones() to set a target mapping start sector in struct dm_report_zones_args and call blkdev_report_zones(). This new function is exported and the report zones callback function dm_report_zones_cb() is not. dm-linear, dm-flakey and dm-crypt are modified to use dm_report_zones(). Signed-off-by: Damien Le Moal Reviewed-by: Johannes Thumshirn Reviewed-by: Hannes Reinecke --- drivers/md/dm-crypt.c | 7 +++---- drivers/md/dm-flakey.c | 7 +++---- drivers/md/dm-linear.c | 7 +++---- drivers/md/dm-zone.c | 23 ++++++++++++++++++++--- include/linux/device-mapper.h | 3 ++- 5 files changed, 31 insertions(+), 16 deletions(-) diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index b0ab080f2567..f410ceee51d7 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -3138,11 +3138,10 @@ static int crypt_report_zones(struct dm_target *ti, struct dm_report_zones_args *args, unsigned int nr_zones) { struct crypt_config *cc = ti->private; - sector_t sector = cc->start + dm_target_offset(ti, args->next_sector); - args->start = cc->start; - return blkdev_report_zones(cc->dev->bdev, sector, nr_zones, - dm_report_zones_cb, args); + return dm_report_zones(cc->dev->bdev, cc->start, + cc->start + dm_target_offset(ti, args->next_sector), + args, nr_zones); } #else #define crypt_report_zones NULL diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c index b7fee9936f05..5877220c01ed 100644 --- a/drivers/md/dm-flakey.c +++ b/drivers/md/dm-flakey.c @@ -463,11 +463,10 @@ static int flakey_report_zones(struct dm_target *ti, struct dm_report_zones_args *args, unsigned int nr_zones) { struct flakey_c *fc = ti->private; - sector_t sector = flakey_map_sector(ti, args->next_sector); - args->start = fc->start; - return blkdev_report_zones(fc->dev->bdev, sector, nr_zones, - dm_report_zones_cb, args); + return dm_report_zones(fc->dev->bdev, fc->start, + flakey_map_sector(ti, args->next_sector), + args, nr_zones); } #else #define flakey_report_zones NULL diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c index 92db0f5e7f28..c91f1e2e2f65 100644 --- a/drivers/md/dm-linear.c +++ b/drivers/md/dm-linear.c @@ -140,11 +140,10 @@ static int linear_report_zones(struct dm_target *ti, struct dm_report_zones_args *args, unsigned int nr_zones) { struct linear_c *lc = ti->private; - sector_t sector = linear_map_sector(ti, args->next_sector); - args->start = lc->start; - return blkdev_report_zones(lc->dev->bdev, sector, nr_zones, - dm_report_zones_cb, args); + return dm_report_zones(lc->dev->bdev, lc->start, + linear_map_sector(ti, args->next_sector), + args, nr_zones); } #else #define linear_report_zones NULL 
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c index 3243c42b7951..b42474043249 100644 --- a/drivers/md/dm-zone.c +++ b/drivers/md/dm-zone.c @@ -56,7 +56,8 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector, return ret; } -int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data) +static int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, + void *data) { struct dm_report_zones_args *args = data; sector_t sector_diff = args->tgt->begin - args->start; @@ -84,7 +85,24 @@ int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data) args->next_sector = zone->start + zone->len; return args->orig_cb(zone, args->zone_idx++, args->orig_data); } -EXPORT_SYMBOL_GPL(dm_report_zones_cb); + +/* + * Helper for drivers of zoned targets to implement struct target_type + * report_zones operation. + */ +int dm_report_zones(struct block_device *bdev, sector_t start, sector_t sector, + struct dm_report_zones_args *args, unsigned int nr_zones) +{ + /* + * Set the target mapping start sector first so that + * dm_report_zones_cb() can correctly remap zone information. + */ + args->start = start; + + return blkdev_report_zones(bdev, sector, nr_zones, + dm_report_zones_cb, args); +} +EXPORT_SYMBOL_GPL(dm_report_zones); void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) { @@ -99,4 +117,3 @@ void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) WARN_ON_ONCE(queue_is_mq(q)); q->nr_zones = blkdev_nr_zones(t->md->disk); } - diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h index ff700fb6ce1d..caea0a079d2d 100644 --- a/include/linux/device-mapper.h +++ b/include/linux/device-mapper.h @@ -478,7 +478,8 @@ struct dm_report_zones_args { /* must be filled by ->report_zones before calling dm_report_zones_cb */ sector_t start; }; -int dm_report_zones_cb(struct blk_zone *zone, unsigned int idx, void *data); +int dm_report_zones(struct block_device *bdev, sector_t start, sector_t sector, + struct dm_report_zones_args *args, unsigned int nr_zones); #endif /* CONFIG_BLK_DEV_ZONED */ /* From patchwork Fri May 21 03:01:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271747 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22959C43460 for ; Fri, 21 May 2021 03:01:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 052086108D for ; Fri, 21 May 2021 03:01:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238729AbhEUDCz (ORCPT ); Thu, 20 May 2021 23:02:55 -0400 Received: from esa5.hgst.iphmx.com ([216.71.153.144]:25389 "EHLO esa5.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234712AbhEUDCz (ORCPT ); Thu, 20 May 2021 23:02:55 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1621566092; x=1653102092; 
From: Damien Le Moal
To: dm-devel@redhat.com, Mike Snitzer, linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v3 08/11] dm: Forbid requeue of writes to zones
Date: Fri, 21 May 2021 12:01:16 +0900
Message-Id: <20210521030119.1209035-9-damien.lemoal@wdc.com>
In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com>
References: <20210521030119.1209035-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

A target map method requesting the requeue of a bio with DM_MAPIO_REQUEUE, or completing it with DM_ENDIO_REQUEUE, can cause unaligned write errors if the bio is a write operation targeting a sequential zone. If a zoned target requests such a requeue, warn about it and kill the IO.

The function dm_is_zone_write() is introduced to detect write operations to zoned targets.

This change does not affect the target drivers supporting zoned devices and exposing a zoned device, namely dm-crypt, dm-linear and dm-flakey, as none of these targets ever requests a requeue.
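To make the new constraint concrete (hypothetical example, not code from this series), a zoned target's ->end_io must now complete failed zone writes instead of asking dm core for a requeue:

/* Hypothetical target end_io: with this change, returning DM_ENDIO_REQUEUE
 * for a write to a sequential zone triggers a WARN in dm core and the bio
 * is failed with BLK_STS_IOERR, so only non-zone-write bios may be requeued. */
static int example_end_io(struct dm_target *ti, struct bio *bio,
			  blk_status_t *error)
{
	if (*error == BLK_STS_AGAIN &&
	    !(op_is_write(bio_op(bio)) && bio_zone_is_seq(bio)))
		return DM_ENDIO_REQUEUE;	/* reads etc. may still be retried */

	return DM_ENDIO_DONE;			/* zone writes: surface the error */
}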
Signed-off-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Johannes Thumshirn --- drivers/md/dm-zone.c | 17 +++++++++++++++++ drivers/md/dm.c | 18 +++++++++++++++--- drivers/md/dm.h | 5 +++++ 3 files changed, 37 insertions(+), 3 deletions(-) diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c index b42474043249..edc3bbb45637 100644 --- a/drivers/md/dm-zone.c +++ b/drivers/md/dm-zone.c @@ -104,6 +104,23 @@ int dm_report_zones(struct block_device *bdev, sector_t start, sector_t sector, } EXPORT_SYMBOL_GPL(dm_report_zones); +bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) +{ + struct request_queue *q = md->queue; + + if (!blk_queue_is_zoned(q)) + return false; + + switch (bio_op(bio)) { + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + return !op_is_flush(bio->bi_opf) && bio_sectors(bio); + default: + return false; + } +} + void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) { if (!blk_queue_is_zoned(q)) diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 45d2dc2ee844..4426019a89cc 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -846,11 +846,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error) * Target requested pushing back the I/O. */ spin_lock_irqsave(&md->deferred_lock, flags); - if (__noflush_suspending(md)) + if (__noflush_suspending(md) && + !WARN_ON_ONCE(dm_is_zone_write(md, bio))) /* NOTE early return due to BLK_STS_DM_REQUEUE below */ bio_list_add_head(&md->deferred, io->orig_bio); else - /* noflush suspend was interrupted. */ + /* + * noflush suspend was interrupted or this is + * a write to a zoned target. + */ io->status = BLK_STS_IOERR; spin_unlock_irqrestore(&md->deferred_lock, flags); } @@ -947,7 +951,15 @@ static void clone_endio(struct bio *bio) int r = endio(tio->ti, bio, &error); switch (r) { case DM_ENDIO_REQUEUE: - error = BLK_STS_DM_REQUEUE; + /* + * Requeuing writes to a sequential zone of a zoned + * target will break the sequential write pattern: + * fail such IO. 
+ */ + if (WARN_ON_ONCE(dm_is_zone_write(md, bio))) + error = BLK_STS_IOERR; + else + error = BLK_STS_DM_REQUEUE; fallthrough; case DM_ENDIO_DONE: break; diff --git a/drivers/md/dm.h b/drivers/md/dm.h index fdf1536a4b62..39c243258e24 100644 --- a/drivers/md/dm.h +++ b/drivers/md/dm.h @@ -107,8 +107,13 @@ void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q); #ifdef CONFIG_BLK_DEV_ZONED int dm_blk_report_zones(struct gendisk *disk, sector_t sector, unsigned int nr_zones, report_zones_cb cb, void *data); +bool dm_is_zone_write(struct mapped_device *md, struct bio *bio); #else #define dm_blk_report_zones NULL +static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) +{ + return false; +} #endif /*-----------------------------------------------------------------
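As an illustration (not part of the patch), a minimal hypothetical end_io callback for a bio-based zoned target might now look like the sketch below; the function name and the BLK_STS_RESOURCE retry policy are assumptions, only the return codes and the zone check follow the patch above. With this change, returning DM_ENDIO_REQUEUE for a write to a sequential zone would trip the WARN_ON_ONCE() added to clone_endio() and the bio would be failed with BLK_STS_IOERR, so such writes must be completed or failed, never requeued.

	/*
	 * Hypothetical end_io of a bio-based zoned target (sketch only).
	 * Transient errors may still be requeued for reads and for bios
	 * targeting conventional zones, but never for writes to
	 * sequential zones.
	 */
	static int example_zoned_end_io(struct dm_target *ti, struct bio *bio,
					blk_status_t *error)
	{
		if (*error == BLK_STS_RESOURCE &&
		    (!op_is_write(bio_op(bio)) || !bio_zone_is_seq(bio)))
			return DM_ENDIO_REQUEUE;

		/* Writes to sequential zones are not requeued: let the error propagate. */
		return DM_ENDIO_DONE;
	}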
From patchwork Fri May 21 03:01:17 2021 X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271749
From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 09/11] dm: rearrange core declarations Date: Fri, 21 May 2021 12:01:17 +0900 Message-Id: <20210521030119.1209035-10-damien.lemoal@wdc.com> In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com>

Move the definitions of struct dm_target_io, struct dm_io and of the bits of the flags field of struct mapped_device from dm.c to dm-core.h to make them usable from dm-zone.c. For the same reason, declare dec_pending() in dm-core.h after renaming it to dm_io_dec_pending(). And for symmetry of the function names, introduce the inline helper dm_io_inc_pending() instead of directly using atomic_inc() calls.

Signed-off-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Johannes Thumshirn --- drivers/md/dm-core.h | 52 ++++++++++++++++++++++++++++++++++++++ drivers/md/dm.c | 59 ++++++-------------------------------------- 2 files changed, 59 insertions(+), 52 deletions(-) diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h index 5953ff2bd260..cfabc1c91f9f 100644 --- a/drivers/md/dm-core.h +++ b/drivers/md/dm-core.h @@ -116,6 +116,19 @@ struct mapped_device { struct srcu_struct io_barrier; }; +/* + * Bits for the flags field of struct mapped_device. + */ +#define DMF_BLOCK_IO_FOR_SUSPEND 0 +#define DMF_SUSPENDED 1 +#define DMF_FROZEN 2 +#define DMF_FREEING 3 +#define DMF_DELETING 4 +#define DMF_NOFLUSH_SUSPENDING 5 +#define DMF_DEFERRED_REMOVE 6 +#define DMF_SUSPENDED_INTERNALLY 7 +#define DMF_POST_SUSPENDING 8 + void disable_discard(struct mapped_device *md); void disable_write_same(struct mapped_device *md); void disable_write_zeroes(struct mapped_device *md); @@ -173,6 +186,45 @@ struct dm_table { #endif }; +/* + * One of these is allocated per clone bio. + */ +#define DM_TIO_MAGIC 7282014 +struct dm_target_io { + unsigned int magic; + struct dm_io *io; + struct dm_target *ti; + unsigned int target_bio_nr; + unsigned int *len_ptr; + bool inside_dm_io; + struct bio clone; +}; + +/* + * One of these is allocated per original bio. + * It contains the first clone used for that original.
+ */ +#define DM_IO_MAGIC 5191977 +struct dm_io { + unsigned int magic; + struct mapped_device *md; + blk_status_t status; + atomic_t io_count; + struct bio *orig_bio; + unsigned long start_time; + spinlock_t endio_lock; + struct dm_stats_aux stats_aux; + /* last member of dm_target_io is 'struct bio' */ + struct dm_target_io tio; +}; + +static inline void dm_io_inc_pending(struct dm_io *io) +{ + atomic_inc(&io->io_count); +} + +void dm_io_dec_pending(struct dm_io *io, blk_status_t error); + static inline struct completion *dm_get_completion_from_kobject(struct kobject *kobj) { return &container_of(kobj, struct dm_kobject_holder, kobj)->completion; diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 4426019a89cc..563504163b74 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -74,38 +74,6 @@ struct clone_info { unsigned sector_count; }; -/* - * One of these is allocated per clone bio. - */ -#define DM_TIO_MAGIC 7282014 -struct dm_target_io { - unsigned magic; - struct dm_io *io; - struct dm_target *ti; - unsigned target_bio_nr; - unsigned *len_ptr; - bool inside_dm_io; - struct bio clone; -}; - -/* - * One of these is allocated per original bio. - * It contains the first clone used for that original. - */ -#define DM_IO_MAGIC 5191977 -struct dm_io { - unsigned magic; - struct mapped_device *md; - blk_status_t status; - atomic_t io_count; - struct bio *orig_bio; - unsigned long start_time; - spinlock_t endio_lock; - struct dm_stats_aux stats_aux; - /* last member of dm_target_io is 'struct bio' */ - struct dm_target_io tio; -}; - #define DM_TARGET_IO_BIO_OFFSET (offsetof(struct dm_target_io, clone)) #define DM_IO_BIO_OFFSET \ (offsetof(struct dm_target_io, clone) + offsetof(struct dm_io, tio)) @@ -137,19 +105,6 @@ EXPORT_SYMBOL_GPL(dm_bio_get_target_bio_nr); #define MINOR_ALLOCED ((void *)-1) -/* - * Bits for the md->flags field. - */ -#define DMF_BLOCK_IO_FOR_SUSPEND 0 -#define DMF_SUSPENDED 1 -#define DMF_FROZEN 2 -#define DMF_FREEING 3 -#define DMF_DELETING 4 -#define DMF_NOFLUSH_SUSPENDING 5 -#define DMF_DEFERRED_REMOVE 6 -#define DMF_SUSPENDED_INTERNALLY 7 -#define DMF_POST_SUSPENDING 8 - #define DM_NUMA_NODE NUMA_NO_NODE static int dm_numa_node = DM_NUMA_NODE; @@ -825,7 +780,7 @@ static int __noflush_suspending(struct mapped_device *md) * Decrements the number of outstanding ios that a bio has been * cloned into, completing the original io if necc. */ -static void dec_pending(struct dm_io *io, blk_status_t error) +void dm_io_dec_pending(struct dm_io *io, blk_status_t error) { unsigned long flags; blk_status_t io_error; @@ -978,7 +933,7 @@ static void clone_endio(struct bio *bio) } free_tio(tio); - dec_pending(io, error); + dm_io_dec_pending(io, error); } /* @@ -1247,7 +1202,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) * anything, the target has assumed ownership of * this io. 
*/ - atomic_inc(&io->io_count); + dm_io_inc_pending(io); sector = clone->bi_iter.bi_sector; if (unlikely(swap_bios_limit(ti, clone))) { @@ -1273,7 +1228,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) up(&md->swap_bios_semaphore); } free_tio(tio); - dec_pending(io, BLK_STS_IOERR); + dm_io_dec_pending(io, BLK_STS_IOERR); break; case DM_MAPIO_REQUEUE: if (unlikely(swap_bios_limit(ti, clone))) { @@ -1281,7 +1236,7 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) up(&md->swap_bios_semaphore); } free_tio(tio); - dec_pending(io, BLK_STS_DM_REQUEUE); + dm_io_dec_pending(io, BLK_STS_DM_REQUEUE); break; default: DMWARN("unimplemented target map return value: %d", r); @@ -1570,7 +1525,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, if (bio->bi_opf & REQ_PREFLUSH) { error = __send_empty_flush(&ci); - /* dec_pending submits any data associated with flush */ + /* dm_io_dec_pending submits any data associated with flush */ } else if (op_is_zone_mgmt(bio_op(bio))) { ci.bio = bio; ci.sector_count = 0; @@ -1611,7 +1566,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md, } /* drop the extra reference count */ - dec_pending(ci.io, errno_to_blk_status(error)); + dm_io_dec_pending(ci.io, errno_to_blk_status(error)); return ret; }
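Beyond the rename, the point of exporting these helpers is that code outside dm.c can now pin a struct dm_io across work that may complete it. A minimal sketch of that pattern, as dm-zone.c relies on it in the next patch (the surrounding function is hypothetical):

	/* Sketch: hold a struct dm_io reference across work that may complete it. */
	static void example_do_work(struct dm_io *io)
	{
		/* Pin the io so it cannot be completed and freed under us. */
		dm_io_inc_pending(io);

		/* ... submit or remap clone bios, which may complete immediately ... */

		/* Drop our reference; the io completes once the count reaches zero. */
		dm_io_dec_pending(io, BLK_STS_OK);
	}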
d="scan'208";a="168943908" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 21 May 2021 11:01:35 +0800 IronPort-SDR: QxZJ3niULS8JHpnx7Hx+cY2MzfVBvycgGOVGNnopVQPuRhXM+BZPfXLzwE1+P/5Vs0gGZE8uCW okC/fEsqhbVlNfgs9L2QlBvAB+6FCtHMn+SWq4TnyZd2EXn29ju+UbQGp+KnbZLuh9B4JY+7K3 yzwt8nfD3XJLT2iRrA8hYQ0W3Jw/WqmnxWOJObkKP4WT/alSaq1778H7VNhil5b4Roq7Vc3pzK ofX7cWCl0evMSTAuwiQwMHcNhf9Tx1JWZZAx0Eqtmu7O45GGe2N1ufqZ1hH7H0OaBQ2KCWVuwd DKizrc454r+WGG/pBoiUnQXG Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 20 May 2021 19:41:11 -0700 IronPort-SDR: QvlevTU1Yh5wDn1sHaP84zn0IvcDNr7Zt1nurGrrkpD/1FP9Oxfzt5Ij8WLpseI848gfZHlIhZ yEAXItCrwqTLTKVfX4nOkcP7lywOx0addoPriIVHqvRk7i30IU7qT78Im1R95qAompiNA9QR1q 9tAXw0RFht+ltMDsIvsPXZfqh1FKP4yhfFWW++Pl380bc0JZ5nHyym96LJ4C+lPNyYlOGtsyjv 7OaIx3IPXAq+JQan5wMWFqxtAOaApxyC0V6XInF/+djnRXqYh27u+oNwVzLKrFmq3fKGKFVNNO aO8= WDCIronportException: Internal Received: from washi.fujisawa.hgst.com ([10.149.53.254]) by uls-op-cesaip02.wdc.com with ESMTP; 20 May 2021 20:01:35 -0700 From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 10/11] dm: introduce zone append emulation Date: Fri, 21 May 2021 12:01:18 +0900 Message-Id: <20210521030119.1209035-11-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org For zoned targets that cannot support zone append operations, implement an emulation using regular write operations. If the original BIO submitted by the user is a zone append operation, change its clone into a regular write operation directed at the target zone write pointer position. To do so, an array of write pointer offsets (write pointer position relative to the start of a zone) is added to struct mapped_device. All operations that modify a sequential zone write pointer (writes, zone reset, zone finish and zone append) are intersepted in __map_bio() and processed using the new functions dm_zone_map_bio(). Detection of the target ability to natively support zone append operations is done from dm_table_set_restrictions() by calling the function dm_set_zones_restrictions(). A target that does not support zone append operation, either by explicitly declaring it using the new struct dm_target field zone_append_not_supported, or because the device table contains a non-zoned device, has its mapped device marked with the new flag DMF_ZONE_APPEND_EMULATED. The helper function dm_emulate_zone_append() is introduced to test a mapped device for this new flag. Atomicity of the zones write pointer tracking and updates is done using a zone write locking mechanism based on a bitmap. This is similar to the block layer method but based on BIOs rather than struct request. A zone write lock is taken in dm_zone_map_bio() for any clone BIO with an operation type that changes the BIO target zone write pointer position. The zone write lock is released if the clone BIO is failed before submission or when dm_zone_endio() is called when the clone BIO completes. 
The zone write lock bitmap of the mapped device, together with a bitmap indicating zone types (conv_zones_bitmap) and the write pointer offset array (zwp_offset) are allocated and initialized with a full device zone report in dm_set_zones_restrictions() using the function dm_revalidate_zones(). For failed operations that may have modified a zone write pointer, the zone write pointer offset is marked as invalid in dm_zone_endio(). Zones with an invalid write pointer offset are checked and the write pointer updated using an internal report zone operation when the faulty zone is accessed again by the user. All functions added for this emulation have a minimal overhead for zoned targets natively supporting zone append operations. Regular device targets are also not affected. The added code also does not impact builds with CONFIG_BLK_DEV_ZONED disabled by stubbing out all dm zone related functions. Signed-off-by: Damien Le Moal --- drivers/md/dm-core.h | 13 + drivers/md/dm-table.c | 19 +- drivers/md/dm-zone.c | 580 ++++++++++++++++++++++++++++++++-- drivers/md/dm.c | 38 ++- drivers/md/dm.h | 16 +- include/linux/device-mapper.h | 6 + 6 files changed, 618 insertions(+), 54 deletions(-) diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h index cfabc1c91f9f..edc1553c4eea 100644 --- a/drivers/md/dm-core.h +++ b/drivers/md/dm-core.h @@ -114,6 +114,11 @@ struct mapped_device { bool init_tio_pdu:1; struct srcu_struct io_barrier; + +#ifdef CONFIG_BLK_DEV_ZONED + unsigned int nr_zones; + unsigned int *zwp_offset; +#endif }; /* @@ -128,6 +133,7 @@ struct mapped_device { #define DMF_DEFERRED_REMOVE 6 #define DMF_SUSPENDED_INTERNALLY 7 #define DMF_POST_SUSPENDING 8 +#define DMF_EMULATE_ZONE_APPEND 9 void disable_discard(struct mapped_device *md); void disable_write_same(struct mapped_device *md); @@ -143,6 +149,13 @@ static inline struct dm_stats *dm_get_stats(struct mapped_device *md) return &md->stats; } +static inline bool dm_emulate_zone_append(struct mapped_device *md) +{ + if (blk_queue_is_zoned(md->queue)) + return test_bit(DMF_EMULATE_ZONE_APPEND, &md->flags); + return false; +} + #define DM_TABLE_MAX_DEPTH 16 struct dm_table { diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index dd9f648ab598..21fdccfb16cf 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -1981,11 +1981,12 @@ static int device_requires_stable_pages(struct dm_target *ti, return blk_queue_stable_writes(q); } -void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, - struct queue_limits *limits) +int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + struct queue_limits *limits) { bool wc = false, fua = false; int page_size = PAGE_SIZE; + int r; /* * Copy table's limits to the DM device's request_queue @@ -2064,12 +2065,20 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, dm_table_any_dev_attr(t, device_is_not_random, NULL)) blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); - /* For a zoned target, setup the zones related queue attributes */ - if (blk_queue_is_zoned(q)) - dm_set_zones_restrictions(t, q); + /* + * For a zoned target, setup the zones related queue attributes + * and resources necessary for zone append emulation if necessary. 
+ */ + if (blk_queue_is_zoned(q)) { + r = dm_set_zones_restrictions(t, q); + if (r) + return r; + } dm_update_keyslot_manager(q, t); blk_queue_update_readahead(q); + + return 0; } unsigned int dm_table_get_num_targets(struct dm_table *t) diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c index edc3bbb45637..c2f26949f5ee 100644 --- a/drivers/md/dm-zone.c +++ b/drivers/md/dm-zone.c @@ -4,55 +4,72 @@ */ #include +#include +#include +#include #include "dm-core.h" +#define DM_MSG_PREFIX "zone" + +#define DM_ZONE_INVALID_WP_OFST UINT_MAX + /* - * User facing dm device block device report zone operation. This calls the - * report_zones operation for each target of a device table. This operation is - * generally implemented by targets using dm_report_zones(). + * For internal zone reports bypassing the top BIO submission path. */ -int dm_blk_report_zones(struct gendisk *disk, sector_t sector, - unsigned int nr_zones, report_zones_cb cb, void *data) +static int dm_blk_do_report_zones(struct mapped_device *md, struct dm_table *t, + sector_t sector, unsigned int nr_zones, + report_zones_cb cb, void *data) { - struct mapped_device *md = disk->private_data; - struct dm_table *map; - int srcu_idx, ret; + struct gendisk *disk = md->disk; + int ret; struct dm_report_zones_args args = { .next_sector = sector, .orig_data = data, .orig_cb = cb, }; - if (dm_suspended_md(md)) - return -EAGAIN; - - map = dm_get_live_table(md, &srcu_idx); - if (!map) { - ret = -EIO; - goto out; - } - do { struct dm_target *tgt; - tgt = dm_table_find_target(map, args.next_sector); - if (WARN_ON_ONCE(!tgt->type->report_zones)) { - ret = -EIO; - goto out; - } + tgt = dm_table_find_target(t, args.next_sector); + if (WARN_ON_ONCE(!tgt->type->report_zones)) + return -EIO; args.tgt = tgt; ret = tgt->type->report_zones(tgt, &args, nr_zones - args.zone_idx); if (ret < 0) - goto out; + return ret; } while (args.zone_idx < nr_zones && args.next_sector < get_capacity(disk)); - ret = args.zone_idx; -out: + return args.zone_idx; +} + +/* + * User facing dm device block device report zone operation. This calls the + * report_zones operation for each target of a device table. This operation is + * generally implemented by targets using dm_report_zones(). 
+ */ +int dm_blk_report_zones(struct gendisk *disk, sector_t sector, + unsigned int nr_zones, report_zones_cb cb, void *data) +{ + struct mapped_device *md = disk->private_data; + struct dm_table *map; + int srcu_idx, ret; + + if (dm_suspended_md(md)) + return -EAGAIN; + + map = dm_get_live_table(md, &srcu_idx); + if (!map) + return -EIO; + + ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb, data); + dm_put_live_table(md, srcu_idx); + return ret; } @@ -121,16 +138,517 @@ bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) } } -void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) +void dm_cleanup_zoned_dev(struct mapped_device *md) { - if (!blk_queue_is_zoned(q)) - return; + struct request_queue *q = md->queue; + + if (q) { + kfree(q->conv_zones_bitmap); + q->conv_zones_bitmap = NULL; + kfree(q->seq_zones_wlock); + q->seq_zones_wlock = NULL; + } + + kvfree(md->zwp_offset); + md->zwp_offset = NULL; + md->nr_zones = 0; +} + +static unsigned int dm_get_zone_wp_offset(struct blk_zone *zone) +{ + switch (zone->cond) { + case BLK_ZONE_COND_IMP_OPEN: + case BLK_ZONE_COND_EXP_OPEN: + case BLK_ZONE_COND_CLOSED: + return zone->wp - zone->start; + case BLK_ZONE_COND_FULL: + return zone->len; + case BLK_ZONE_COND_EMPTY: + case BLK_ZONE_COND_NOT_WP: + case BLK_ZONE_COND_OFFLINE: + case BLK_ZONE_COND_READONLY: + default: + /* + * Conventional, offline and read-only zones do not have a valid + * write pointer. Use 0 as for an empty zone. + */ + return 0; + } +} + +static int dm_zone_revalidate_cb(struct blk_zone *zone, unsigned int idx, + void *data) +{ + struct mapped_device *md = data; + struct request_queue *q = md->queue; + + switch (zone->type) { + case BLK_ZONE_TYPE_CONVENTIONAL: + if (!q->conv_zones_bitmap) { + q->conv_zones_bitmap = + kcalloc(BITS_TO_LONGS(q->nr_zones), + sizeof(unsigned long), GFP_NOIO); + if (!q->conv_zones_bitmap) + return -ENOMEM; + } + set_bit(idx, q->conv_zones_bitmap); + break; + case BLK_ZONE_TYPE_SEQWRITE_REQ: + case BLK_ZONE_TYPE_SEQWRITE_PREF: + if (!q->seq_zones_wlock) { + q->seq_zones_wlock = + kcalloc(BITS_TO_LONGS(q->nr_zones), + sizeof(unsigned long), GFP_NOIO); + if (!q->seq_zones_wlock) + return -ENOMEM; + } + if (!md->zwp_offset) { + md->zwp_offset = + kvcalloc(q->nr_zones, sizeof(unsigned int), + GFP_NOIO); + if (!md->zwp_offset) + return -ENOMEM; + } + md->zwp_offset[idx] = dm_get_zone_wp_offset(zone); + + break; + default: + DMERR("Invalid zone type 0x%x at sectors %llu", + (int)zone->type, zone->start); + return -ENODEV; + } + + return 0; +} + +/* + * Revalidate the zones of a mapped device to initialize resource necessary + * for zone append emulation. Note that we cannot simply use the block layer + * blk_revalidate_disk_zones() function here as the mapped device is suspended + * (this is called from __bind() context). + */ +static int dm_revalidate_zones(struct mapped_device *md, struct dm_table *t) +{ + struct request_queue *q = md->queue; + int ret; + + /* + * Check if something changed. If yes, cleanup the current resources + * and reallocate everything. 
+ */ + if (!q->nr_zones || q->nr_zones != md->nr_zones) + dm_cleanup_zoned_dev(md); + if (md->nr_zones) + return 0; + + /* Scan all zones to initialize everything */ + ret = dm_blk_do_report_zones(md, t, 0, q->nr_zones, + dm_zone_revalidate_cb, md); + if (ret < 0) + goto err; + if (ret != q->nr_zones) { + ret = -EIO; + goto err; + } + + md->nr_zones = q->nr_zones; + + return 0; + +err: + DMERR("Revalidate zones failed %d", ret); + dm_cleanup_zoned_dev(md); + return ret; +} + +static int device_not_zone_append_capable(struct dm_target *ti, + struct dm_dev *dev, sector_t start, + sector_t len, void *data) +{ + return !blk_queue_is_zoned(bdev_get_queue(dev->bdev)); +} + +static bool dm_table_supports_zone_append(struct dm_table *t) +{ + struct dm_target *ti; + unsigned int i; + + for (i = 0; i < dm_table_get_num_targets(t); i++) { + ti = dm_table_get_target(t, i); + + if (ti->emulate_zone_append) + return false; + + if (!ti->type->iterate_devices || + ti->type->iterate_devices(ti, device_not_zone_append_capable, NULL)) + return false; + } + + return true; +} + +int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q) +{ + struct mapped_device *md = t->md; /* * For a zoned target, the number of zones should be updated for the - * correct value to be exposed in sysfs queue/nr_zones. For a BIO based - * target, this is all that is needed. + * correct value to be exposed in sysfs queue/nr_zones. */ WARN_ON_ONCE(queue_is_mq(q)); - q->nr_zones = blkdev_nr_zones(t->md->disk); + q->nr_zones = blkdev_nr_zones(md->disk); + + /* Check if zone append is natively supported */ + if (dm_table_supports_zone_append(t)) { + clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags); + dm_cleanup_zoned_dev(md); + return 0; + } + + /* + * Mark the mapped device as needing zone append emulation and + * initialize the emulation resources once the capacity is set. + */ + set_bit(DMF_EMULATE_ZONE_APPEND, &md->flags); + if (!get_capacity(md->disk)) + return 0; + + return dm_revalidate_zones(md, t); +} + +static int dm_update_zone_wp_offset_cb(struct blk_zone *zone, unsigned int idx, + void *data) +{ + unsigned int *wp_offset = data; + + *wp_offset = dm_get_zone_wp_offset(zone); + + return 0; +} + +static int dm_update_zone_wp_offset(struct mapped_device *md, unsigned int zno, + unsigned int *wp_ofst) +{ + sector_t sector = zno * blk_queue_zone_sectors(md->queue); + unsigned int noio_flag; + struct dm_table *t; + int srcu_idx, ret; + + t = dm_get_live_table(md, &srcu_idx); + if (!t) + return -EIO; + + /* + * Ensure that all memory allocations in this context are done as if + * GFP_NOIO was specified. + */ + noio_flag = memalloc_noio_save(); + ret = dm_blk_do_report_zones(md, t, sector, 1, + dm_update_zone_wp_offset_cb, wp_ofst); + memalloc_noio_restore(noio_flag); + + dm_put_live_table(md, srcu_idx); + + if (ret != 1) + return -EIO; + + return 0; +} + +/* + * First phase of BIO mapping for targets with zone append emulation: + * check all BIO that change a zone writer pointer and change zone + * append operations into regular write operations. + */ +static bool dm_zone_map_bio_begin(struct mapped_device *md, + struct bio *orig_bio, struct bio *clone) +{ + sector_t zsectors = blk_queue_zone_sectors(md->queue); + unsigned int zno = bio_zone_no(orig_bio); + unsigned int zwp_offset = READ_ONCE(md->zwp_offset[zno]); + + /* + * If the target zone is in an error state, recover by inspecting the + * zone to get its current write pointer position. 
Note that since the + * target zone is already locked, a BIO issuing context should never + * see the zone write in the DM_ZONE_UPDATING_WP_OFST state. + */ + if (zwp_offset == DM_ZONE_INVALID_WP_OFST) { + if (dm_update_zone_wp_offset(md, zno, &zwp_offset)) + return false; + WRITE_ONCE(md->zwp_offset[zno], zwp_offset); + } + + switch (bio_op(orig_bio)) { + case REQ_OP_ZONE_RESET: + case REQ_OP_ZONE_FINISH: + return true; + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + /* Writes must be aligned to the zone write pointer */ + if ((clone->bi_iter.bi_sector & (zsectors - 1)) != zwp_offset) + return false; + break; + case REQ_OP_ZONE_APPEND: + /* + * Change zone append operations into a non-mergeable regular + * writes directed at the current write pointer position of the + * target zone. + */ + clone->bi_opf = REQ_OP_WRITE | REQ_NOMERGE | + (orig_bio->bi_opf & (~REQ_OP_MASK)); + clone->bi_iter.bi_sector = + orig_bio->bi_iter.bi_sector + zwp_offset; + break; + default: + DMWARN_LIMIT("Invalid BIO operation"); + return false; + } + + /* Cannot write to a full zone */ + if (zwp_offset >= zsectors) + return false; + + return true; +} + +/* + * Second phase of BIO mapping for targets with zone append emulation: + * update the zone write pointer offset array to account for the additional + * data written to a zone. Note that at this point, the remapped clone BIO + * may already have completed, so we do not touch it. + */ +static blk_status_t dm_zone_map_bio_end(struct mapped_device *md, + struct bio *orig_bio, + unsigned int nr_sectors) +{ + unsigned int zno = bio_zone_no(orig_bio); + unsigned int zwp_offset = READ_ONCE(md->zwp_offset[zno]); + + /* The clone BIO may already have been completed and failed */ + if (zwp_offset == DM_ZONE_INVALID_WP_OFST) + return BLK_STS_IOERR; + + /* Update the zone wp offset */ + switch (bio_op(orig_bio)) { + case REQ_OP_ZONE_RESET: + WRITE_ONCE(md->zwp_offset[zno], 0); + return BLK_STS_OK; + case REQ_OP_ZONE_FINISH: + WRITE_ONCE(md->zwp_offset[zno], + blk_queue_zone_sectors(md->queue)); + return BLK_STS_OK; + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + WRITE_ONCE(md->zwp_offset[zno], zwp_offset + nr_sectors); + return BLK_STS_OK; + case REQ_OP_ZONE_APPEND: + /* + * Check that the target did not truncate the write operation + * emulating a zone append. 
+ */ + if (nr_sectors != bio_sectors(orig_bio)) { + DMWARN_LIMIT("Truncated write for zone append"); + return BLK_STS_IOERR; + } + WRITE_ONCE(md->zwp_offset[zno], zwp_offset + nr_sectors); + return BLK_STS_OK; + default: + DMWARN_LIMIT("Invalid BIO operation"); + return BLK_STS_IOERR; + } +} + +static inline void dm_zone_lock(struct request_queue *q, + unsigned int zno, struct bio *clone) +{ + if (WARN_ON_ONCE(bio_flagged(clone, BIO_ZONE_WRITE_LOCKED))) + return; + + wait_on_bit_lock_io(q->seq_zones_wlock, zno, TASK_UNINTERRUPTIBLE); + bio_set_flag(clone, BIO_ZONE_WRITE_LOCKED); +} + +static inline void dm_zone_unlock(struct request_queue *q, + unsigned int zno, struct bio *clone) +{ + if (!bio_flagged(clone, BIO_ZONE_WRITE_LOCKED)) + return; + + WARN_ON_ONCE(!test_bit(zno, q->seq_zones_wlock)); + clear_bit_unlock(zno, q->seq_zones_wlock); + smp_mb__after_atomic(); + wake_up_bit(q->seq_zones_wlock, zno); + + bio_clear_flag(clone, BIO_ZONE_WRITE_LOCKED); +} + +static bool dm_need_zone_wp_tracking(struct bio *orig_bio) +{ + /* + * Special processing is not needed for operations that do not need the + * zone write lock, that is, all operations that target conventional + * zones and all operations that do not modify directly a sequential + * zone write pointer. + */ + if (op_is_flush(orig_bio->bi_opf) && !bio_sectors(orig_bio)) + return false; + switch (bio_op(orig_bio)) { + case REQ_OP_WRITE_ZEROES: + case REQ_OP_WRITE_SAME: + case REQ_OP_WRITE: + case REQ_OP_ZONE_RESET: + case REQ_OP_ZONE_FINISH: + case REQ_OP_ZONE_APPEND: + return bio_zone_is_seq(orig_bio); + default: + return false; + } +} + +/* + * Special IO mapping for targets needing zone append emulation. + */ +int dm_zone_map_bio(struct dm_target_io *tio) +{ + struct dm_io *io = tio->io; + struct dm_target *ti = tio->ti; + struct mapped_device *md = io->md; + struct request_queue *q = md->queue; + struct bio *orig_bio = io->orig_bio; + struct bio *clone = &tio->clone; + unsigned int zno; + blk_status_t sts; + int r; + + /* + * IOs that do not change a zone write pointer do not need + * any additional special processing. + */ + if (!dm_need_zone_wp_tracking(orig_bio)) + return ti->type->map(ti, clone); + + /* Lock the target zone */ + zno = bio_zone_no(orig_bio); + dm_zone_lock(q, zno, clone); + + /* + * Check that the bio and the target zone write pointer offset are + * both valid, and if the bio is a zone append, remap it to a write. + */ + if (!dm_zone_map_bio_begin(md, orig_bio, clone)) { + dm_zone_unlock(q, zno, clone); + return DM_MAPIO_KILL; + } + + /* + * The target map function may issue and complete the IO quickly. + * Take an extra reference on the IO to make sure it does disappear + * until we run dm_zone_map_bio_end(). + */ + dm_io_inc_pending(io); + + /* Let the target do its work */ + r = ti->type->map(ti, clone); + switch (r) { + case DM_MAPIO_SUBMITTED: + /* + * The target submitted the clone BIO. The target zone will + * be unlocked on completion of the clone. + */ + sts = dm_zone_map_bio_end(md, orig_bio, *tio->len_ptr); + break; + case DM_MAPIO_REMAPPED: + /* + * The target only remapped the clone BIO. In case of error, + * unlock the target zone here as the clone will not be + * submitted. 
+ */ + sts = dm_zone_map_bio_end(md, orig_bio, *tio->len_ptr); + if (sts != BLK_STS_OK) + dm_zone_unlock(q, zno, clone); + break; + case DM_MAPIO_REQUEUE: + case DM_MAPIO_KILL: + default: + dm_zone_unlock(q, zno, clone); + sts = BLK_STS_IOERR; + break; + } + + /* Drop the extra reference on the IO */ + dm_io_dec_pending(io, sts); + + if (sts != BLK_STS_OK) + return DM_MAPIO_KILL; + + return r; +} + +/* + * IO completion callback called from clone_endio(). + */ +void dm_zone_endio(struct dm_io *io, struct bio *clone) +{ + struct mapped_device *md = io->md; + struct request_queue *q = md->queue; + struct bio *orig_bio = io->orig_bio; + unsigned int zwp_offset; + unsigned int zno; + + /* + * For targets that do not emulate zone append, we only need to + * handle native zone-append bios. + */ + if (!dm_emulate_zone_append(md)) { + /* + * Get the offset within the zone of the written sector + * and add that to the original bio sector position. + */ + if (clone->bi_status == BLK_STS_OK && + bio_op(clone) == REQ_OP_ZONE_APPEND) { + sector_t mask = (sector_t)blk_queue_zone_sectors(q) - 1; + + orig_bio->bi_iter.bi_sector += + clone->bi_iter.bi_sector & mask; + } + + return; + } + + /* + * For targets that do emulate zone append, if the clone BIO does not + * own the target zone write lock, we have nothing to do. + */ + if (!bio_flagged(clone, BIO_ZONE_WRITE_LOCKED)) + return; + + zno = bio_zone_no(orig_bio); + + if (clone->bi_status != BLK_STS_OK) { + /* + * BIOs that modify a zone write pointer may leave the zone + * in an unknown state in case of failure (e.g. the write + * pointer was only partially advanced). In this case, set + * the target zone write pointer as invalid unless it is + * already being updated. + */ + WRITE_ONCE(md->zwp_offset[zno], DM_ZONE_INVALID_WP_OFST); + } else if (bio_op(orig_bio) == REQ_OP_ZONE_APPEND) { + /* + * Get the written sector for zone append operation that were + * emulated using regular write operations. + */ + zwp_offset = READ_ONCE(md->zwp_offset[zno]); + if (WARN_ON_ONCE(zwp_offset < bio_sectors(orig_bio))) + WRITE_ONCE(md->zwp_offset[zno], + DM_ZONE_INVALID_WP_OFST); + else + orig_bio->bi_iter.bi_sector += + zwp_offset - bio_sectors(orig_bio); + } + + dm_zone_unlock(q, zno, clone); } diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 563504163b74..6c570abc1417 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -875,7 +875,6 @@ static void clone_endio(struct bio *bio) struct dm_io *io = tio->io; struct mapped_device *md = tio->io->md; dm_endio_fn endio = tio->ti->type->end_io; - struct bio *orig_bio = io->orig_bio; struct request_queue *q = bio->bi_bdev->bd_disk->queue; if (unlikely(error == BLK_STS_TARGET)) { @@ -890,17 +889,8 @@ static void clone_endio(struct bio *bio) disable_write_zeroes(md); } - /* - * For zone-append bios get offset in zone of the written - * sector and add that to the original bio sector pos. - */ - if (bio_op(orig_bio) == REQ_OP_ZONE_APPEND) { - sector_t written_sector = bio->bi_iter.bi_sector; - struct request_queue *q = orig_bio->bi_bdev->bd_disk->queue; - u64 mask = (u64)blk_queue_zone_sectors(q) - 1; - - orig_bio->bi_iter.bi_sector += written_sector & mask; - } + if (blk_queue_is_zoned(q)) + dm_zone_endio(io, bio); if (endio) { int r = endio(tio->ti, bio, &error); @@ -1213,7 +1203,16 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) down(&md->swap_bios_semaphore); } - r = ti->type->map(ti, clone); + /* + * Check if the IO needs a special mapping due to zone append emulation + * on zoned target. 
In this case, dm_zone_map_begin() calls the target + * map operation. + */ + if (dm_emulate_zone_append(io->md)) + r = dm_zone_map_bio(tio); + else + r = ti->type->map(ti, clone); + switch (r) { case DM_MAPIO_SUBMITTED: break; @@ -1711,6 +1710,7 @@ static void cleanup_mapped_device(struct mapped_device *md) mutex_destroy(&md->swap_bios_lock); dm_mq_cleanup_mapped_device(md); + dm_cleanup_zoned_dev(md); } /* @@ -1956,11 +1956,16 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t, goto out; } + ret = dm_table_set_restrictions(t, q, limits); + if (ret) { + old_map = ERR_PTR(ret); + goto out; + } + old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock)); rcu_assign_pointer(md->map, (void *)t); md->immutable_target_type = dm_table_get_immutable_target_type(t); - dm_table_set_restrictions(t, q, limits); if (old_map) dm_sync_table(md); @@ -2079,7 +2084,10 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t) DMERR("Cannot calculate initial queue limits"); return r; } - dm_table_set_restrictions(t, md->queue, &limits); + r = dm_table_set_restrictions(t, md->queue, &limits); + if (r) + return r; + blk_register_queue(md->disk); return 0; diff --git a/drivers/md/dm.h b/drivers/md/dm.h index 39c243258e24..742d9c80efe1 100644 --- a/drivers/md/dm.h +++ b/drivers/md/dm.h @@ -45,6 +45,8 @@ struct dm_dev_internal { struct dm_table; struct dm_md_mempools; +struct dm_target_io; +struct dm_io; /*----------------------------------------------------------------- * Internal table functions. @@ -56,8 +58,8 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector); bool dm_table_has_no_data_devices(struct dm_table *table); int dm_calculate_queue_limits(struct dm_table *table, struct queue_limits *limits); -void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, - struct queue_limits *limits); +int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + struct queue_limits *limits); struct list_head *dm_table_get_devices(struct dm_table *t); void dm_table_presuspend_targets(struct dm_table *t); void dm_table_presuspend_undo_targets(struct dm_table *t); @@ -103,17 +105,25 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t); /* * Zoned targets related functions. */ -void dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q); +int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q); +void dm_zone_endio(struct dm_io *io, struct bio *clone); #ifdef CONFIG_BLK_DEV_ZONED +void dm_cleanup_zoned_dev(struct mapped_device *md); int dm_blk_report_zones(struct gendisk *disk, sector_t sector, unsigned int nr_zones, report_zones_cb cb, void *data); bool dm_is_zone_write(struct mapped_device *md, struct bio *bio); +int dm_zone_map_bio(struct dm_target_io *io); #else +static inline void dm_cleanup_zoned_dev(struct mapped_device *md) {} #define dm_blk_report_zones NULL static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio) { return false; } +static inline int dm_zone_map_bio(struct dm_target_io *tio) +{ + return DM_MAPIO_KILL; +} #endif /*----------------------------------------------------------------- diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h index caea0a079d2d..7457d49acf9a 100644 --- a/include/linux/device-mapper.h +++ b/include/linux/device-mapper.h @@ -361,6 +361,12 @@ struct dm_target { * Set if we need to limit the number of in-flight bios when swapping. 
*/ bool limit_swap_bios:1; + + /* + * Set if this target implements a zoned device and needs emulation of + * zone append operations using regular writes. + */ + bool emulate_zone_append:1; }; void *dm_per_bio_data(struct bio *bio, size_t data_size);
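For reference, a target opts in to this emulation from its constructor by setting the new dm_target field, which is exactly what dm-crypt does in the following patch. A minimal, hypothetical constructor fragment (target name and setup details are assumptions):

	static int example_zoned_ctr(struct dm_target *ti, unsigned int argc, char **argv)
	{
		/* ... parse arguments and set up ti->private ... */

		/*
		 * This target cannot pass REQ_OP_ZONE_APPEND down to its devices,
		 * so ask DM core to emulate it: dm_zone_map_bio() will turn each
		 * zone append clone into a regular write at the tracked write
		 * pointer offset and report the written sector back in the
		 * original bio on completion.
		 */
		ti->emulate_zone_append = true;

		return 0;
	}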
From patchwork Fri May 21 03:01:19 2021 X-Patchwork-Submitter: Damien Le Moal X-Patchwork-Id: 12271751
From: Damien Le Moal To: dm-devel@redhat.com, Mike Snitzer , linux-block@vger.kernel.org, Jens Axboe Subject: [PATCH v3 11/11] dm crypt: Fix zoned block device support Date: Fri, 21 May 2021 12:01:19 +0900 Message-Id: <20210521030119.1209035-12-damien.lemoal@wdc.com> In-Reply-To: <20210521030119.1209035-1-damien.lemoal@wdc.com> References: <20210521030119.1209035-1-damien.lemoal@wdc.com>

Zone append BIOs (REQ_OP_ZONE_APPEND) always specify the start sector of the zone to be written instead of the actual sector location to write. The write location is determined by the device and returned to the host upon completion of the operation. This interface, while simple and efficient for writing into sequential zones of a zoned block device, is incompatible with the use of sector values to calculate a cypher block IV. All data written in a zone ends up using the same IV values corresponding to the first sectors of the zone, but read operations will specify any sector within the zone, resulting in an IV mismatch between encryption and decryption. To solve this problem, report to DM core that zone append operations are not supported. This results in zone append operations being emulated using regular write operations.

Reported-by: Shin'ichiro Kawasaki Signed-off-by: Damien Le Moal Reviewed-by: Hannes Reinecke Reviewed-by: Johannes Thumshirn --- drivers/md/dm-crypt.c | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index f410ceee51d7..50f4cbd600d5 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -3280,14 +3280,28 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) } cc->start = tmpll; - /* - * For zoned block devices, we need to preserve the issuer write - * ordering. To do so, disable write workqueues and force inline - * encryption completion. - */ if (bdev_is_zoned(cc->dev->bdev)) { + /* + * For zoned block devices, we need to preserve the issuer write + * ordering. To do so, disable write workqueues and force inline + * encryption completion. + */ set_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags); set_bit(DM_CRYPT_WRITE_INLINE, &cc->flags); + + /* + * All zone append writes to a zone of a zoned block device will + * have the same BIO sector, the start of the zone. When the + * cypher IV mode uses sector values, all data targeting a + * zone will be encrypted using the first sector numbers of the + * zone. This will not result in write errors but will + * cause most reads to fail as reads will use the sector values + * for the actual data locations, resulting in IV mismatch. + * To avoid this problem, ask DM core to emulate zone append + * operations with regular writes. + */ + DMDEBUG("Zone append operations will be emulated"); + ti->emulate_zone_append = true; } if (crypt_integrity_aead(cc) || cc->integrity_iv_size) {