From patchwork Wed Nov 11 05:16:40 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896217
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 1/9] null_blk: Fix zone size initialization
Date: Wed, 11 Nov 2020 14:16:40 +0900
Message-Id: <20201111051648.635300-2-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

A null_blk device with zoned mode enabled is currently initialized with
a number of zones equal to the device capacity divided by the zone size,
without considering whether the device capacity is a multiple of the
zone size. If the zone size is not a divisor of the capacity, the zones
end up not covering the entire capacity, potentially resulting in
out-of-bounds accesses to the zone array.

Fix this by adding one last, smaller zone with a size equal to the
remainder of the disk capacity divided by the zone size if the capacity
is not a multiple of the zone size. For such a smaller last zone, the
zone capacity is also limited so that it does not exceed the smaller
zone size.

Reported-by: Naohiro Aota
Fixes: ca4b2a011948 ("null_blk: add zone support")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk_zoned.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index beb34b4f76b0..1d0370d91fe7 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -6,8 +6,7 @@
 #define CREATE_TRACE_POINTS
 #include "null_blk_trace.h"
 
-/* zone_size in MBs to sectors. */
-#define ZONE_SIZE_SHIFT		11
+#define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
 
 static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
 {
@@ -16,7 +15,7 @@ static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
 
 int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 {
-	sector_t dev_size = (sector_t)dev->size * 1024 * 1024;
+	sector_t dev_capacity_sects, zone_capacity_sects;
 	sector_t sector = 0;
 	unsigned int i;
 
@@ -38,9 +37,13 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		return -EINVAL;
 	}
 
-	dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT;
-	dev->nr_zones = dev_size >>
-				(SECTOR_SHIFT + ilog2(dev->zone_size_sects));
+	zone_capacity_sects = MB_TO_SECTS(dev->zone_capacity);
+	dev_capacity_sects = MB_TO_SECTS(dev->size);
+	dev->zone_size_sects = MB_TO_SECTS(dev->zone_size);
+	dev->nr_zones = dev_capacity_sects >> ilog2(dev->zone_size_sects);
+	if (dev_capacity_sects & (dev->zone_size_sects - 1))
+		dev->nr_zones++;
+
 	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
 				    GFP_KERNEL | __GFP_ZERO);
 	if (!dev->zones)
@@ -101,8 +104,12 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		struct blk_zone *zone = &dev->zones[i];
 
 		zone->start = zone->wp = sector;
-		zone->len = dev->zone_size_sects;
-		zone->capacity = dev->zone_capacity << ZONE_SIZE_SHIFT;
+		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
+			zone->len = dev_capacity_sects - zone->start;
+		else
+			zone->len = dev->zone_size_sects;
+		zone->capacity =
+			min_t(sector_t, zone->len, zone_capacity_sects);
 		zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ;
 		zone->cond = BLK_ZONE_COND_EMPTY;
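
As a quick illustration of the sizing logic above, the stand-alone
user-space sketch below recomputes the zone count for a capacity that is
not a multiple of the zone size. The values and the program itself are
illustrative only (the MB_TO_SECTS() helper mirrors the macro added by
the patch, but this is not driver code):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t sector_t;

#define SECTOR_SHIFT	9
#define SZ_1M		(1024 * 1024)
#define MB_TO_SECTS(mb)	(((sector_t)(mb) * SZ_1M) >> SECTOR_SHIFT)

int main(void)
{
	/* Illustrative values: 500 MB device, 64 MB zones. */
	sector_t dev_capacity_sects = MB_TO_SECTS(500);
	sector_t zone_size_sects = MB_TO_SECTS(64);
	unsigned int nr_zones = dev_capacity_sects / zone_size_sects;
	sector_t last_len = zone_size_sects;

	/* Add one smaller zone covering the remainder, as the patch does. */
	if (dev_capacity_sects % zone_size_sects) {
		nr_zones++;
		last_len = dev_capacity_sects % zone_size_sects;
	}

	printf("nr_zones = %u, last zone = %llu sectors (%llu MB)\n",
	       nr_zones, (unsigned long long)last_len,
	       (unsigned long long)(last_len >> (20 - SECTOR_SHIFT)));
	return 0;
}

With these values the device gets 7 full zones plus one 52 MB zone, and
the last zone's capacity is clamped to its (smaller) length.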

From patchwork Wed Nov 11 05:16:41 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896223
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 2/9] null_blk: Fail zone append to conventional zones
Date: Wed, 11 Nov 2020 14:16:41 +0900
Message-Id: <20201111051648.635300-3-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

Conventional zones do not have a write pointer and so cannot accept
zone append writes. Make sure to fail any zone append write command
issued to a conventional zone.

Reported-by: Naohiro Aota
Fixes: e0489ed5daeb ("null_blk: Support REQ_OP_ZONE_APPEND")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk_zoned.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 1d0370d91fe7..172f720b8d63 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -339,8 +339,11 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 
 	trace_nullb_zone_op(cmd, zno, zone->cond);
 
-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
+		if (append)
+			return BLK_STS_IOERR;
 		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
+	}
 
 	null_lock_zone(dev, zno);

From patchwork Wed Nov 11 05:16:42 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896227
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 3/9] null_blk: Align max_hw_sectors to blocksize
Date: Wed, 11 Nov 2020 14:16:42 +0900
Message-Id: <20201111051648.635300-4-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

null_blk always uses the default BLK_SAFE_MAX_SECTORS value for the
max_hw_sectors and max_sectors queue limits, resulting in a maximum
request size of 127 sectors. When the blocksize setting is larger than
the default 512B, this maximum request size is not aligned to the block
size. To emulate a real device more accurately, fix this by setting the
max_hw_sectors and max_sectors queue limits to a value that is aligned
to the block size.
Signed-off-by: Damien Le Moal
---
 drivers/block/null_blk_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index 4685ea401d5b..b77a506a4ae4 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1866,6 +1866,9 @@ static int null_add_dev(struct nullb_device *dev)
 
 	blk_queue_logical_block_size(nullb->q, dev->blocksize);
 	blk_queue_physical_block_size(nullb->q, dev->blocksize);
+	blk_queue_max_hw_sectors(nullb->q,
+				 round_down(queue_max_hw_sectors(nullb->q),
+					    dev->blocksize >> SECTOR_SHIFT));
 
 	null_config_discard(nullb);
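
For illustration, the effect of the alignment can be reproduced with a
small stand-alone program. The starting limit and block size below are
only example values, and the round_down() macro assumes a power-of-two
alignment, as in the kernel:

#include <stdio.h>

/* Same semantics as the kernel's round_down() for power-of-two y. */
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	unsigned int max_hw_sectors = 255;	/* example starting limit */
	unsigned int blocksize = 4096;		/* bytes, example setting */
	unsigned int sects_per_block = blocksize >> 9;
	unsigned int aligned = round_down(max_hw_sectors, sects_per_block);

	/* 255 sectors is not a multiple of 8; the limit becomes 248. */
	printf("%u sectors -> %u sectors (multiple of %u)\n",
	       max_hw_sectors, aligned, sects_per_block);
	return 0;
}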

From patchwork Wed Nov 11 05:16:43 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896225
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 4/9] null_blk: improve zone locking
Date: Wed, 11 Nov 2020 14:16:43 +0900
Message-Id: <20201111051648.635300-5-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

With memory backing disabled, using a single spinlock to protect zone
information and zone resource management prevents the parallel
execution, on multiple queues, of IO requests targeting different
zones. Furthermore, regardless of the use of memory backing, if a
null_blk device is created without limits on the number of open and
active zones, accounting for zone resource management is not necessary.

From these observations, zone locking is changed as follows to improve
performance:

1) The zone_lock spinlock is renamed zone_res_lock and used only if
   zone resource management is necessary, that is, if either
   zone_max_open or zone_max_active is not 0. This is indicated using
   the new boolean need_zone_res_mgmt in the nullb_device structure.
   null_zone_write() is modified to reduce the amount of code executed
   with the zone_res_lock spinlock held. null_zone_valid_read_len() is
   also modified to avoid taking the zone lock before calling
   null_process_cmd() for read operations in null_process_zoned_cmd().

2) With memory backing disabled, per-zone locking is changed to a
   spinlock per zone.

With these changes, fio performance with zonemode=zbd for 4K random
read and random write on a dual socket (24 cores per socket) machine
using the none scheduler is as follows:

before patch:
  write (psync x 96 jobs)      = 465 KIOPS
  read (libaio@qd=8 x 96 jobs) = 1361 KIOPS
after patch:
  write (psync x 96 jobs)      = 468 KIOPS
  read (libaio@qd=8 x 96 jobs) = 3340 KIOPS

Write performance remains mostly unchanged but read performance more
than doubles. Performance when using the mq-deadline scheduler is not
changed by this patch as mq-deadline becomes the bottleneck for a
multi-queue device.
Signed-off-by: Damien Le Moal
---
 drivers/block/null_blk.h       |   5 +-
 drivers/block/null_blk_zoned.c | 192 +++++++++++++++++++++------------
 2 files changed, 126 insertions(+), 71 deletions(-)

diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index c24d9b5ad81a..4c101c39c3d1 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -47,8 +47,9 @@ struct nullb_device {
 	unsigned int nr_zones_closed;
 	struct blk_zone *zones;
 	sector_t zone_size_sects;
-	spinlock_t zone_lock;
-	unsigned long *zone_locks;
+	bool need_zone_res_mgmt;
+	spinlock_t zone_res_lock;
+	void *zone_locks;
 
 	unsigned long size; /* device size in MB */
 	unsigned long completion_nsec; /* time in ns to complete a request */
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 172f720b8d63..2630aeda757d 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -56,13 +56,15 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 	 * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
 	 * implies that the queue is marked with BLK_MQ_F_BLOCKING.
 	 */
-	spin_lock_init(&dev->zone_lock);
-	if (dev->memory_backed) {
+	spin_lock_init(&dev->zone_res_lock);
+	if (dev->memory_backed)
 		dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
-		if (!dev->zone_locks) {
-			kvfree(dev->zones);
-			return -ENOMEM;
-		}
+	else
+		dev->zone_locks = kmalloc_array(dev->nr_zones,
+						sizeof(spinlock_t), GFP_KERNEL);
+	if (!dev->zone_locks) {
+		kvfree(dev->zones);
+		return -ENOMEM;
 	}
 
 	if (dev->zone_nr_conv >= dev->nr_zones) {
@@ -86,10 +88,14 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		dev->zone_max_open = 0;
 		pr_info("zone_max_open limit disabled, limit >= zone count\n");
 	}
+	dev->need_zone_res_mgmt = dev->zone_max_active || dev->zone_max_open;
 
 	for (i = 0; i < dev->zone_nr_conv; i++) {
 		struct blk_zone *zone = &dev->zones[i];
 
+		if (!dev->memory_backed)
+			spin_lock_init((spinlock_t *)dev->zone_locks + i);
+
 		zone->start = sector;
 		zone->len = dev->zone_size_sects;
 		zone->capacity = zone->len;
@@ -103,6 +109,9 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
 		struct blk_zone *zone = &dev->zones[i];
 
+		if (!dev->memory_backed)
+			spin_lock_init((spinlock_t *)dev->zone_locks + i);
+
 		zone->start = zone->wp = sector;
 		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
 			zone->len = dev_capacity_sects - zone->start;
@@ -147,23 +156,41 @@ int null_register_zoned_dev(struct nullb *nullb)
 
 void null_free_zoned_dev(struct nullb_device *dev)
 {
-	bitmap_free(dev->zone_locks);
+	if (dev->memory_backed)
+		bitmap_free(dev->zone_locks);
+	else
+		kfree(dev->zone_locks);
 	kvfree(dev->zones);
 }
 
+#define null_lock_zone_res(dev, flags)					\
+	do {								\
+		if ((dev)->need_zone_res_mgmt)				\
+			spin_lock_irqsave(&(dev)->zone_res_lock,	\
+					  (flags));			\
+	} while (0)
+
+#define null_unlock_zone_res(dev, flags)				\
+	do {								\
+		if ((dev)->need_zone_res_mgmt)				\
+			spin_unlock_irqrestore(&(dev)->zone_res_lock,	\
+					       (flags));		\
+	} while (0)
+
 static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
 {
 	if (dev->memory_backed)
 		wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
-	spin_lock_irq(&dev->zone_lock);
+	else
+		spin_lock_irq((spinlock_t *)dev->zone_locks + zno);
 }
 
 static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
 {
-	spin_unlock_irq(&dev->zone_lock);
-
 	if (dev->memory_backed)
 		clear_and_wake_up_bit(zno, dev->zone_locks);
+	else
+		spin_unlock_irq((spinlock_t *)dev->zone_locks + zno);
 }
 
 int null_report_zones(struct gendisk *disk, sector_t sector,
@@ -224,11 +251,9 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 	return (zone->wp - sector) << SECTOR_SHIFT;
 }
 
-static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *zone)
+static blk_status_t __null_close_zone(struct nullb_device *dev,
+				      struct blk_zone *zone)
 {
-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
-		return BLK_STS_IOERR;
-
 	switch (zone->cond) {
 	case BLK_ZONE_COND_CLOSED:
 		/* close operation on closed is not an error */
@@ -261,7 +286,7 @@ static void null_close_first_imp_zone(struct nullb_device *dev)
 
 	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
 		if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) {
-			null_close_zone(dev, &dev->zones[i]);
+			__null_close_zone(dev, &dev->zones[i]);
 			return;
 		}
 	}
@@ -335,6 +360,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 	struct nullb_device *dev = cmd->nq->dev;
 	unsigned int zno = null_zone_no(dev, sector);
 	struct blk_zone *zone = &dev->zones[zno];
+	unsigned long flags;
 	blk_status_t ret;
 
 	trace_nullb_zone_op(cmd, zno, zone->cond);
@@ -347,24 +373,10 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 
 	null_lock_zone(dev, zno);
 
-	switch (zone->cond) {
-	case BLK_ZONE_COND_FULL:
+	if (zone->cond == BLK_ZONE_COND_FULL) {
 		/* Cannot write to a full zone */
 		ret = BLK_STS_IOERR;
 		goto unlock;
-	case BLK_ZONE_COND_EMPTY:
-	case BLK_ZONE_COND_CLOSED:
-		ret = null_check_zone_resources(dev, zone);
-		if (ret != BLK_STS_OK)
-			goto unlock;
-		break;
-	case BLK_ZONE_COND_IMP_OPEN:
-	case BLK_ZONE_COND_EXP_OPEN:
-		break;
-	default:
-		/* Invalid zone condition */
-		ret = BLK_STS_IOERR;
-		goto unlock;
 	}
 
 	/*
@@ -389,37 +401,43 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 		goto unlock;
 	}
 
-	if (zone->cond == BLK_ZONE_COND_CLOSED) {
-		dev->nr_zones_closed--;
-		dev->nr_zones_imp_open++;
-	} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
-		dev->nr_zones_imp_open++;
+	if (zone->cond == BLK_ZONE_COND_CLOSED ||
+	    zone->cond == BLK_ZONE_COND_EMPTY) {
+		null_lock_zone_res(dev, flags);
+
+		ret = null_check_zone_resources(dev, zone);
+		if (ret != BLK_STS_OK) {
+			null_unlock_zone_res(dev, flags);
+			goto unlock;
+		}
+		if (zone->cond == BLK_ZONE_COND_CLOSED) {
+			dev->nr_zones_closed--;
+			dev->nr_zones_imp_open++;
+		} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
+			dev->nr_zones_imp_open++;
+		}
+
+		if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
+			zone->cond = BLK_ZONE_COND_IMP_OPEN;
+
+		null_unlock_zone_res(dev, flags);
 	}
 
-	if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
-		zone->cond = BLK_ZONE_COND_IMP_OPEN;
-
-	/*
-	 * Memory backing allocation may sleep: release the zone_lock spinlock
-	 * to avoid scheduling in atomic context. Zone operation atomicity is
-	 * still guaranteed through the zone_locks bitmap.
-	 */
-	if (dev->memory_backed)
-		spin_unlock_irq(&dev->zone_lock);
 	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
-	if (dev->memory_backed)
-		spin_lock_irq(&dev->zone_lock);
-
 	if (ret != BLK_STS_OK)
 		goto unlock;
 
 	zone->wp += nr_sectors;
 	if (zone->wp == zone->start + zone->capacity) {
+		null_lock_zone_res(dev, flags);
 		if (zone->cond == BLK_ZONE_COND_EXP_OPEN)
 			dev->nr_zones_exp_open--;
 		else if (zone->cond == BLK_ZONE_COND_IMP_OPEN)
 			dev->nr_zones_imp_open--;
 		zone->cond = BLK_ZONE_COND_FULL;
+		null_unlock_zone_res(dev, flags);
 	}
+
 	ret = BLK_STS_OK;
 
 unlock:
@@ -430,19 +448,22 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 
 static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zone)
 {
-	blk_status_t ret;
+	blk_status_t ret = BLK_STS_OK;
+	unsigned long flags;
 
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
+	null_lock_zone_res(dev, flags);
+
 	switch (zone->cond) {
 	case BLK_ZONE_COND_EXP_OPEN:
 		/* open operation on exp open is not an error */
-		return BLK_STS_OK;
+		goto unlock;
 	case BLK_ZONE_COND_EMPTY:
 		ret = null_check_zone_resources(dev, zone);
 		if (ret != BLK_STS_OK)
-			return ret;
+			goto unlock;
 		break;
 	case BLK_ZONE_COND_IMP_OPEN:
 		dev->nr_zones_imp_open--;
@@ -450,35 +471,57 @@ static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zo
 	case BLK_ZONE_COND_CLOSED:
 		ret = null_check_zone_resources(dev, zone);
 		if (ret != BLK_STS_OK)
-			return ret;
+			goto unlock;
 		dev->nr_zones_closed--;
 		break;
 	case BLK_ZONE_COND_FULL:
 	default:
-		return BLK_STS_IOERR;
+		ret = BLK_STS_IOERR;
+		goto unlock;
 	}
 
 	zone->cond = BLK_ZONE_COND_EXP_OPEN;
 	dev->nr_zones_exp_open++;
 
-	return BLK_STS_OK;
+unlock:
+	null_unlock_zone_res(dev, flags);
+
+	return ret;
 }
 
-static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *zone)
+static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *zone)
 {
+	unsigned long flags;
 	blk_status_t ret;
 
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
+	null_lock_zone_res(dev, flags);
+	ret = __null_close_zone(dev, zone);
+	null_unlock_zone_res(dev, flags);
+
+	return ret;
+}
+
+static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *zone)
+{
+	blk_status_t ret = BLK_STS_OK;
+	unsigned long flags;
+
+	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+		return BLK_STS_IOERR;
+
+	null_lock_zone_res(dev, flags);
+
 	switch (zone->cond) {
 	case BLK_ZONE_COND_FULL:
 		/* finish operation on full is not an error */
-		return BLK_STS_OK;
+		goto unlock;
 	case BLK_ZONE_COND_EMPTY:
 		ret = null_check_zone_resources(dev, zone);
 		if (ret != BLK_STS_OK)
-			return ret;
+			goto unlock;
 		break;
 	case BLK_ZONE_COND_IMP_OPEN:
 		dev->nr_zones_imp_open--;
@@ -489,27 +532,36 @@ static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *
 	case BLK_ZONE_COND_CLOSED:
 		ret = null_check_zone_resources(dev, zone);
 		if (ret != BLK_STS_OK)
-			return ret;
+			goto unlock;
 		dev->nr_zones_closed--;
 		break;
 	default:
-		return BLK_STS_IOERR;
+		ret = BLK_STS_IOERR;
+		goto unlock;
 	}
 
 	zone->cond = BLK_ZONE_COND_FULL;
 	zone->wp = zone->start + zone->len;
 
-	return BLK_STS_OK;
+unlock:
+	null_unlock_zone_res(dev, flags);
+
+	return ret;
 }
 
 static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *zone)
 {
+	unsigned long flags;
+
 	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
 		return BLK_STS_IOERR;
 
+	null_lock_zone_res(dev, flags);
+
 	switch (zone->cond) {
 	case BLK_ZONE_COND_EMPTY:
 		/* reset operation on empty is not an error */
+		null_unlock_zone_res(dev, flags);
 		return BLK_STS_OK;
 	case BLK_ZONE_COND_IMP_OPEN:
 		dev->nr_zones_imp_open--;
@@ -523,12 +575,15 @@ static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *z
 	case BLK_ZONE_COND_FULL:
 		break;
 	default:
+		null_unlock_zone_res(dev, flags);
 		return BLK_STS_IOERR;
 	}
 
 	zone->cond = BLK_ZONE_COND_EMPTY;
 	zone->wp = zone->start;
 
+	null_unlock_zone_res(dev, flags);
+
 	return BLK_STS_OK;
 }
 
@@ -588,29 +643,28 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
 blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
 				    sector_t sector, sector_t nr_sectors)
 {
-	struct nullb_device *dev = cmd->nq->dev;
-	unsigned int zno = null_zone_no(dev, sector);
+	struct nullb_device *dev;
+	unsigned int zno;
 	blk_status_t sts;
 
 	switch (op) {
 	case REQ_OP_WRITE:
-		sts = null_zone_write(cmd, sector, nr_sectors, false);
-		break;
+		return null_zone_write(cmd, sector, nr_sectors, false);
 	case REQ_OP_ZONE_APPEND:
-		sts = null_zone_write(cmd, sector, nr_sectors, true);
-		break;
+		return null_zone_write(cmd, sector, nr_sectors, true);
 	case REQ_OP_ZONE_RESET:
 	case REQ_OP_ZONE_RESET_ALL:
 	case REQ_OP_ZONE_OPEN:
 	case REQ_OP_ZONE_CLOSE:
 	case REQ_OP_ZONE_FINISH:
-		sts = null_zone_mgmt(cmd, op, sector);
-		break;
+		return null_zone_mgmt(cmd, op, sector);
 	default:
+		dev = cmd->nq->dev;
+		zno = null_zone_no(dev, sector);
+		null_lock_zone(dev, zno);
 		sts = null_process_cmd(cmd, op, sector, nr_sectors);
 		null_unlock_zone(dev, zno);
+		return sts;
 	}
-
-	return sts;
 }
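
The read-side gain comes from no longer funneling all zones through a
single spinlock. The toy pthread program below (purely illustrative, no
null_blk code; thread count and zone count are made up) shows the
per-zone locking pattern: writers targeting different zones never
contend, only writers to the same zone serialize:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_ZONES 4

/* One lock per zone instead of a single lock covering all zones. */
static pthread_mutex_t zone_locks[NR_ZONES];

static void *writer(void *arg)
{
	int zno = *(int *)arg;

	/* Only writers targeting the same zone serialize here. */
	pthread_mutex_lock(&zone_locks[zno]);
	printf("writing zone %d\n", zno);
	usleep(1000);		/* stand-in for the actual IO processing */
	pthread_mutex_unlock(&zone_locks[zno]);
	return NULL;
}

int main(void)
{
	pthread_t t[NR_ZONES];
	int zno[NR_ZONES];

	for (int i = 0; i < NR_ZONES; i++)
		pthread_mutex_init(&zone_locks[i], NULL);

	for (int i = 0; i < NR_ZONES; i++) {
		zno[i] = i;
		pthread_create(&t[i], NULL, writer, &zno[i]);
	}
	for (int i = 0; i < NR_ZONES; i++)
		pthread_join(t[i], NULL);
	return 0;
}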

From patchwork Wed Nov 11 05:16:44 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896231
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 5/9] null_blk: Improve implicit zone close
Date: Wed, 11 Nov 2020 14:16:44 +0900
Message-Id: <20201111051648.635300-6-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

When open zone resource management is enabled, that is, when a null_blk
zoned device is created with zone_max_open different from 0, implicitly
or explicitly opening a zone may require implicitly closing a zone that
is already implicitly open. This operation is done using the function
null_close_first_imp_zone(), which searches for an implicitly open zone
to close, starting from the first sequential zone. This implementation
is simple but may result in the same zone being constantly implicitly
closed and then implicitly reopened on write, namely the lowest-numbered
zone that is being written.

Avoid this by starting the search for an implicitly open zone to close
from the zone following the last zone that was implicitly closed. The
function null_close_first_imp_zone() is renamed
null_close_imp_open_zone().
Signed-off-by: Damien Le Moal
---
 drivers/block/null_blk.h       |  1 +
 drivers/block/null_blk_zoned.c | 22 +++++++++++++++++-----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index 4c101c39c3d1..683b573b7e14 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -45,6 +45,7 @@ struct nullb_device {
 	unsigned int nr_zones_imp_open;
 	unsigned int nr_zones_exp_open;
 	unsigned int nr_zones_closed;
+	unsigned int imp_close_zone_no;
 	struct blk_zone *zones;
 	sector_t zone_size_sects;
 	bool need_zone_res_mgmt;
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 2630aeda757d..905cab12ee3c 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -89,6 +89,7 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		pr_info("zone_max_open limit disabled, limit >= zone count\n");
 	}
 	dev->need_zone_res_mgmt = dev->zone_max_active || dev->zone_max_open;
+	dev->imp_close_zone_no = dev->zone_nr_conv;
 
 	for (i = 0; i < dev->zone_nr_conv; i++) {
 		struct blk_zone *zone = &dev->zones[i];
@@ -280,13 +281,24 @@ static blk_status_t __null_close_zone(struct nullb_device *dev,
 	return BLK_STS_OK;
 }
 
-static void null_close_first_imp_zone(struct nullb_device *dev)
+static void null_close_imp_open_zone(struct nullb_device *dev)
 {
-	unsigned int i;
+	struct blk_zone *zone;
+	unsigned int zno, i;
+
+	zno = dev->imp_close_zone_no;
+	if (zno >= dev->nr_zones)
+		zno = dev->zone_nr_conv;
 
 	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
-		if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) {
-			__null_close_zone(dev, &dev->zones[i]);
+		zone = &dev->zones[zno];
+		zno++;
+		if (zno >= dev->nr_zones)
+			zno = dev->zone_nr_conv;
+
+		if (zone->cond == BLK_ZONE_COND_IMP_OPEN) {
+			__null_close_zone(dev, zone);
+			dev->imp_close_zone_no = zno;
 			return;
 		}
 	}
@@ -314,7 +326,7 @@ static blk_status_t null_check_open(struct nullb_device *dev)
 
 	if (dev->nr_zones_imp_open) {
 		if (null_check_active(dev) == BLK_STS_OK) {
-			null_close_first_imp_zone(dev);
+			null_close_imp_open_zone(dev);
 			return BLK_STS_OK;
 		}
 	}
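
The victim selection is effectively a round-robin scan over the
sequential zones. A stand-alone sketch of that scan (zone states, counts
and names below are made up for illustration only):

#include <stdio.h>

#define NR_ZONES	8
#define NR_CONV		2	/* conventional zones are never closed */

enum cond { EMPTY, IMP_OPEN, CLOSED };

static enum cond zones[NR_ZONES] = {
	EMPTY, EMPTY, CLOSED, IMP_OPEN, EMPTY, IMP_OPEN, EMPTY, EMPTY
};
static unsigned int imp_close_zone_no = NR_CONV;

/* Close one implicitly open zone, resuming after the previous victim. */
static void close_imp_open_zone(void)
{
	unsigned int zno = imp_close_zone_no;

	if (zno >= NR_ZONES)
		zno = NR_CONV;

	for (unsigned int i = NR_CONV; i < NR_ZONES; i++) {
		unsigned int cur = zno;

		if (++zno >= NR_ZONES)
			zno = NR_CONV;
		if (zones[cur] == IMP_OPEN) {
			zones[cur] = CLOSED;
			imp_close_zone_no = zno;
			printf("closed zone %u\n", cur);
			return;
		}
	}
}

int main(void)
{
	close_imp_open_zone();	/* closes zone 3 */
	close_imp_open_zone();	/* next scan starts at zone 4, closes zone 5 */
	return 0;
}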

From patchwork Wed Nov 11 05:16:45 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896229
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 6/9] null_blk: cleanup discard handling
Date: Wed, 11 Nov 2020 14:16:45 +0900
Message-Id: <20201111051648.635300-7-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

null_handle_discard() is called from both null_handle_rq() and
null_handle_bio(). As these functions are only passed a nullb_cmd
structure, this forces pointer dereferences to identify the discard
operation code and to access the sector range to be discarded. Simplify
all this by changing the interface of the functions
null_handle_discard() and null_handle_memory_backed() to pass along the
operation code, the operation start sector, and the number of sectors.
With this change, null_handle_discard() can be called directly from
null_handle_memory_backed().

Also add a message warning that the discard configuration attribute has
no effect when memory backing is disabled.

No functional change is introduced by this patch.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 drivers/block/null_blk_main.c | 43 ++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 21 deletions(-)

diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index b77a506a4ae4..06b909fd230b 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1076,13 +1076,16 @@ static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
 	kunmap_atomic(dst);
 }
 
-static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n)
+static void null_handle_discard(struct nullb_device *dev, sector_t sector,
+				sector_t nr_sectors)
 {
+	struct nullb *nullb = dev->nullb;
+	size_t n = nr_sectors << SECTOR_SHIFT;
 	size_t temp;
 
 	spin_lock_irq(&nullb->lock);
 	while (n > 0) {
-		temp = min_t(size_t, n, nullb->dev->blocksize);
+		temp = min_t(size_t, n, dev->blocksize);
 		null_free_sector(nullb, sector, false);
 		if (null_cache_active(nullb))
 			null_free_sector(nullb, sector, true);
@@ -1149,17 +1152,10 @@ static int null_handle_rq(struct nullb_cmd *cmd)
 	struct nullb *nullb = cmd->nq->dev->nullb;
 	int err;
 	unsigned int len;
-	sector_t sector;
+	sector_t sector = blk_rq_pos(rq);
 	struct req_iterator iter;
 	struct bio_vec bvec;
 
-	sector = blk_rq_pos(rq);
-
-	if (req_op(rq) == REQ_OP_DISCARD) {
-		null_handle_discard(nullb, sector, blk_rq_bytes(rq));
-		return 0;
-	}
-
 	spin_lock_irq(&nullb->lock);
 	rq_for_each_segment(bvec, rq, iter) {
 		len = bvec.bv_len;
@@ -1183,18 +1179,10 @@ static int null_handle_bio(struct nullb_cmd *cmd)
 	struct nullb *nullb = cmd->nq->dev->nullb;
 	int err;
 	unsigned int len;
-	sector_t sector;
+	sector_t sector = bio->bi_iter.bi_sector;
 	struct bio_vec bvec;
 	struct bvec_iter iter;
 
-	sector = bio->bi_iter.bi_sector;
-
-	if (bio_op(bio) == REQ_OP_DISCARD) {
-		null_handle_discard(nullb, sector,
-			bio_sectors(bio) << SECTOR_SHIFT);
-		return 0;
-	}
-
 	spin_lock_irq(&nullb->lock);
 	bio_for_each_segment(bvec, bio, iter) {
 		len = bvec.bv_len;
@@ -1263,11 +1251,18 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
 }
 
 static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
-						     enum req_opf op)
+						     enum req_opf op, sector_t sector,
+						     sector_t nr_sectors)
+
 {
 	struct nullb_device *dev = cmd->nq->dev;
 	int err;
 
+	if (op == REQ_OP_DISCARD) {
+		null_handle_discard(dev, sector, nr_sectors);
+		return 0;
+	}
+
 	if (dev->queue_mode == NULL_Q_BIO)
 		err = null_handle_bio(cmd);
 	else
@@ -1343,7 +1338,7 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd,
 	}
 
 	if (dev->memory_backed)
-		return null_handle_memory_backed(cmd, op);
+		return null_handle_memory_backed(cmd, op, sector, nr_sectors);
 
 	return BLK_STS_OK;
 }
@@ -1589,6 +1584,12 @@ static void null_config_discard(struct nullb *nullb)
 	if (nullb->dev->discard == false)
 		return;
 
+	if (!nullb->dev->memory_backed) {
+		nullb->dev->discard = false;
+		pr_info("discard option is ignored without memory backing\n");
+		return;
+	}
+
 	if (nullb->dev->zoned) {
 		nullb->dev->discard = false;
 		pr_info("discard option is ignored in zoned mode\n");
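
For reference, the discard path simply walks the requested range in
blocksize-sized chunks and frees each backing block. A stand-alone
sketch of that walk with example values (not driver code; printf stands
in for freeing the backing page):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t sector_t;
#define SECTOR_SHIFT 9

static void handle_discard(sector_t sector, sector_t nr_sectors,
			   unsigned int blocksize)
{
	size_t n = (size_t)nr_sectors << SECTOR_SHIFT;

	while (n > 0) {
		size_t temp = n < blocksize ? n : blocksize;

		printf("free sector %llu (%zu bytes)\n",
		       (unsigned long long)sector, temp);
		sector += temp >> SECTOR_SHIFT;
		n -= temp;
	}
}

int main(void)
{
	handle_discard(0, 16, 4096);	/* 16 sectors = two 4K blocks */
	return 0;
}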

From patchwork Wed Nov 11 05:16:46 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896219
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 7/9] null_blk: discard zones on reset
Date: Wed, 11 Nov 2020 14:16:46 +0900
Message-Id: <20201111051648.635300-8-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

When memory backing is enabled, use null_handle_discard() to free the
backing memory used by a zone when the zone is being reset.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk.h       | 2 ++
 drivers/block/null_blk_main.c  | 4 ++--
 drivers/block/null_blk_zoned.c | 3 +++
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index 683b573b7e14..76bd190fa185 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -95,6 +95,8 @@ struct nullb {
 	char disk_name[DISK_NAME_LEN];
 };
 
+void null_handle_discard(struct nullb_device *dev, sector_t sector,
+			 sector_t nr_sectors);
 blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_opf op,
 			      sector_t sector, unsigned int nr_sectors);
 
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index 06b909fd230b..fa0bc65bbd1e 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -1076,8 +1076,8 @@ static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
 	kunmap_atomic(dst);
 }
 
-static void null_handle_discard(struct nullb_device *dev, sector_t sector,
-				sector_t nr_sectors)
+void null_handle_discard(struct nullb_device *dev, sector_t sector,
+			 sector_t nr_sectors)
 {
 	struct nullb *nullb = dev->nullb;
 	size_t n = nr_sectors << SECTOR_SHIFT;
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 905cab12ee3c..87c9b6ebdccb 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -596,6 +596,9 @@ static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *z
 
 	null_unlock_zone_res(dev, flags);
 
+	if (dev->memory_backed)
+		null_handle_discard(dev, zone->start, zone->len);
+
 	return BLK_STS_OK;
 }

From patchwork Wed Nov 11 05:16:47 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896233
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 8/9] null_blk: Allow controlling max_hw_sectors limit
Date: Wed, 11 Nov 2020 14:16:47 +0900
Message-Id: <20201111051648.635300-9-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>

Add the module option and configfs attribute max_sectors to allow
configuring the maximum size of a command issued to a null_blk device.
This allows exercising the block layer BIO splitting with limits other
than the default BLK_SAFE_MAX_SECTORS. This is also useful for testing
the zone append write path of file systems, as the max_hw_sectors limit
value is also used for the max_zone_append_sectors limit.
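
A small worked example of why this knob is useful for exercising BIO
splitting: the number of requests a large IO is split into follows
directly from the configured limit (the values below are arbitrary, for
illustration only):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int io_sectors = 2048;	/* a 1 MB write in 512B sectors */
	unsigned int max_sectors = 64;	/* e.g. set via the new attribute */

	printf("split into %u requests of at most %u sectors\n",
	       DIV_ROUND_UP(io_sectors, max_sectors), max_sectors);
	return 0;
}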
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk.h      |  1 +
 drivers/block/null_blk_main.c | 25 +++++++++++++++++++++----
 2 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index 76bd190fa185..6e5197987093 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -64,6 +64,7 @@ struct nullb_device {
 	unsigned int home_node; /* home node for the device */
 	unsigned int queue_mode; /* block interface */
 	unsigned int blocksize; /* block size */
+	unsigned int max_sectors; /* Max sectors per command */
 	unsigned int irqmode; /* IRQ completion handler */
 	unsigned int hw_queue_depth; /* queue depth */
 	unsigned int index; /* index of the disk, only valid with a disk */
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index fa0bc65bbd1e..5f7fbcd56489 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -152,6 +152,10 @@ static int g_bs = 512;
 module_param_named(bs, g_bs, int, 0444);
 MODULE_PARM_DESC(bs, "Block size (in bytes)");
 
+static int g_max_sectors;
+module_param_named(max_sectors, g_max_sectors, int, 0444);
+MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
+
 static unsigned int nr_devices = 1;
 module_param(nr_devices, uint, 0444);
 MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -346,6 +350,7 @@ NULLB_DEVICE_ATTR(submit_queues, uint, nullb_apply_submit_queues);
 NULLB_DEVICE_ATTR(home_node, uint, NULL);
 NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
 NULLB_DEVICE_ATTR(blocksize, uint, NULL);
+NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
 NULLB_DEVICE_ATTR(irqmode, uint, NULL);
 NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
 NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -463,6 +468,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_home_node,
 	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_blocksize,
+	&nullb_device_attr_max_sectors,
 	&nullb_device_attr_irqmode,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
@@ -533,7 +539,7 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
 	return snprintf(page, PAGE_SIZE,
-			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active\n");
+			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors\n");
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
@@ -588,6 +594,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->home_node = g_home_node;
 	dev->queue_mode = g_queue_mode;
 	dev->blocksize = g_bs;
+	dev->max_sectors = g_max_sectors;
 	dev->irqmode = g_irqmode;
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
@@ -1867,9 +1874,13 @@ static int null_add_dev(struct nullb_device *dev)
 	blk_queue_logical_block_size(nullb->q, dev->blocksize);
 	blk_queue_physical_block_size(nullb->q, dev->blocksize);
-	blk_queue_max_hw_sectors(nullb->q,
-				 round_down(queue_max_hw_sectors(nullb->q),
-					    dev->blocksize >> SECTOR_SHIFT));
+	if (!dev->max_sectors)
+		dev->max_sectors = queue_max_hw_sectors(nullb->q);
+	dev->max_sectors = min_t(unsigned int, dev->max_sectors,
+				 BLK_DEF_MAX_SECTORS);
+	dev->max_sectors = round_down(dev->max_sectors,
+				      dev->blocksize >> SECTOR_SHIFT);
+	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
 
 	null_config_discard(nullb);
 
@@ -1913,6 +1924,12 @@ static int __init null_init(void)
 		g_bs = PAGE_SIZE;
 	}
 
+	if (g_max_sectors > BLK_DEF_MAX_SECTORS) {
+		pr_warn("invalid max sectors\n");
+		pr_warn("defaults max sectors to %u\n", BLK_DEF_MAX_SECTORS);
+		g_max_sectors = BLK_DEF_MAX_SECTORS;
+	}
+
 	if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
 		pr_err("invalid home_node value\n");
 		g_home_node = NUMA_NO_NODE;
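The same limit can also be set per device through the new configfs attribute. A minimal sketch of that flow, assuming configfs is mounted at /sys/kernel/config and using an illustrative device name and limit:

/* Hypothetical configfs setup: per-device max_sectors instead of the
 * module-wide parameter. Error handling is kept minimal.
 */
#include <stdio.h>
#include <sys/stat.h>

static int write_attr(const char *dir, const char *attr, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	const char *dir = "/sys/kernel/config/nullb/nullb1";

	mkdir(dir, 0755);			/* configfs: creates the device item */
	write_attr(dir, "max_sectors", "64");	/* attribute added by this patch */
	write_attr(dir, "memory_backed", "1");
	write_attr(dir, "power", "1");		/* instantiate the disk */
	return 0;
}

Leaving max_sectors at 0 (the default) keeps the previous behavior: the queue's existing max_hw_sectors, capped at BLK_DEF_MAX_SECTORS and rounded down to the block size.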
From patchwork Wed Nov 11 05:16:48 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11896235
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v2 9/9] null_blk: Move driver into its own directory
Date: Wed, 11 Nov 2020 14:16:48 +0900
Message-Id: <20201111051648.635300-10-damien.lemoal@wdc.com>
In-Reply-To: <20201111051648.635300-1-damien.lemoal@wdc.com>
References: <20201111051648.635300-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Move null_blk driver code into the new sub-directory
drivers/block/null_blk.

Suggested-by: Bart Van Assche
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/block/Kconfig                                |  8 +-------
 drivers/block/Makefile                               |  7 +------
 drivers/block/null_blk/Kconfig                       | 12 ++++++++++++
 drivers/block/null_blk/Makefile                      | 11 +++++++++++
 drivers/block/{null_blk_main.c => null_blk/main.c}   |  0
 drivers/block/{ => null_blk}/null_blk.h              |  0
 drivers/block/{null_blk_trace.c => null_blk/trace.c} |  2 +-
 drivers/block/{null_blk_trace.h => null_blk/trace.h} |  2 +-
 drivers/block/{null_blk_zoned.c => null_blk/zoned.c} |  2 +-
 9 files changed, 28 insertions(+), 16 deletions(-)
 create mode 100644 drivers/block/null_blk/Kconfig
 create mode 100644 drivers/block/null_blk/Makefile
 rename drivers/block/{null_blk_main.c => null_blk/main.c} (100%)
 rename drivers/block/{ => null_blk}/null_blk.h (100%)
 rename drivers/block/{null_blk_trace.c => null_blk/trace.c} (93%)
 rename drivers/block/{null_blk_trace.h => null_blk/trace.h} (97%)
 rename drivers/block/{null_blk_zoned.c => null_blk/zoned.c} (99%)

diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index ecceaaa1a66f..262326973ee0 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -16,13 +16,7 @@ menuconfig BLK_DEV
 
 if BLK_DEV
 
-config BLK_DEV_NULL_BLK
-	tristate "Null test block driver"
-	select CONFIGFS_FS
-
-config BLK_DEV_NULL_BLK_FAULT_INJECTION
-	bool "Support fault injection for Null test block driver"
-	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
+source "drivers/block/null_blk/Kconfig"
 
 config BLK_DEV_FD
 	tristate "Normal floppy disk support"
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index e1f63117ee94..a3170859e01d 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -41,12 +41,7 @@ obj-$(CONFIG_BLK_DEV_RSXX)	+= rsxx/
 obj-$(CONFIG_ZRAM) += zram/
 obj-$(CONFIG_BLK_DEV_RNBD)	+= rnbd/
 
-obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
-null_blk-objs	:= null_blk_main.o
-ifeq ($(CONFIG_BLK_DEV_ZONED), y)
-null_blk-$(CONFIG_TRACING) += null_blk_trace.o
-endif
-null_blk-$(CONFIG_BLK_DEV_ZONED) += null_blk_zoned.o
+obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk/
 
 skd-y		:= skd_main.o
 swim_mod-y	:= swim.o swim_asm.o
diff --git a/drivers/block/null_blk/Kconfig b/drivers/block/null_blk/Kconfig
new file mode 100644
index 000000000000..6bf1f8ca20a2
--- /dev/null
+++ b/drivers/block/null_blk/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Null block device driver configuration
+#
+
+config BLK_DEV_NULL_BLK
+	tristate "Null test block driver"
+	select CONFIGFS_FS
+
+config BLK_DEV_NULL_BLK_FAULT_INJECTION
+	bool "Support fault injection for Null test block driver"
+	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
diff --git a/drivers/block/null_blk/Makefile b/drivers/block/null_blk/Makefile
new file mode 100644
index 000000000000..84c36e512ab8
--- /dev/null
+++ b/drivers/block/null_blk/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+
+# needed for trace events
+ccflags-y			+= -I$(src)
+
+obj-$(CONFIG_BLK_DEV_NULL_BLK)	+= null_blk.o
+null_blk-objs			:= main.o
+ifeq ($(CONFIG_BLK_DEV_ZONED), y)
+null_blk-$(CONFIG_TRACING) += trace.o
+endif
+null_blk-$(CONFIG_BLK_DEV_ZONED) += zoned.o
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk/main.c
similarity index 100%
rename from drivers/block/null_blk_main.c
rename to drivers/block/null_blk/main.c
diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk/null_blk.h
similarity index 100%
rename from drivers/block/null_blk.h
rename to drivers/block/null_blk/null_blk.h
diff --git a/drivers/block/null_blk_trace.c b/drivers/block/null_blk/trace.c
similarity index 93%
rename from drivers/block/null_blk_trace.c
rename to drivers/block/null_blk/trace.c
index f246e7bff698..3711cba16071 100644
--- a/drivers/block/null_blk_trace.c
+++ b/drivers/block/null_blk/trace.c
@@ -4,7 +4,7 @@
  *
  * Copyright (C) 2020 Western Digital Corporation or its affiliates.
  */
-#include "null_blk_trace.h"
+#include "trace.h"
 
 /*
  * Helper to use for all null_blk traces to extract disk name.
diff --git a/drivers/block/null_blk_trace.h b/drivers/block/null_blk/trace.h
similarity index 97%
rename from drivers/block/null_blk_trace.h
rename to drivers/block/null_blk/trace.h
index 4f83032eb544..ce3b430e88c5 100644
--- a/drivers/block/null_blk_trace.h
+++ b/drivers/block/null_blk/trace.h
@@ -73,7 +73,7 @@ TRACE_EVENT(nullb_report_zones,
 #undef TRACE_INCLUDE_PATH
 #define TRACE_INCLUDE_PATH .
 #undef TRACE_INCLUDE_FILE
-#define TRACE_INCLUDE_FILE null_blk_trace
+#define TRACE_INCLUDE_FILE trace
 
 /* This part must be outside protection */
 #include <trace/define_trace.h>
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk/zoned.c
similarity index 99%
rename from drivers/block/null_blk_zoned.c
rename to drivers/block/null_blk/zoned.c
index 87c9b6ebdccb..2e25a0a1c40d 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -4,7 +4,7 @@
 #include "null_blk.h"
 
 #define CREATE_TRACE_POINTS
-#include "null_blk_trace.h"
+#include "trace.h"
 
 #define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
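For context on the "# needed for trace events" line in the new Makefile: trace.h keeps the usual self-including trace header layout, where TRACE_INCLUDE_PATH is "." and TRACE_INCLUDE_FILE is the header's own basename, so <trace/define_trace.h> re-includes "./trace.h" from the translation unit that defines CREATE_TRACE_POINTS; ccflags-y += -I$(src) puts the driver directory on the include path so that lookup resolves. A condensed sketch of such a header (illustrative only, not the full null_blk trace.h):

/* Condensed sketch of a driver-local trace header footer, assuming the
 * driver's Makefile adds -I$(src) as above.
 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM nullb

#if !defined(_TRACE_NULLB_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_NULLB_H

#include <linux/tracepoint.h>

/* TRACE_EVENT() definitions would go here. */

#endif /* _TRACE_NULLB_H */

/* Make define_trace.h re-include this file as "./trace.h"; the "." path
 * only resolves because the driver directory is on the include path.
 */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace

/* This part must be outside protection */
#include <trace/define_trace.h>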