From patchwork Fri Nov 20 01:55:11 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11919351
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 1/9] null_blk: Fix zone size initialization
Date: Fri, 20 Nov 2020 10:55:11 +0900
Message-Id: <20201120015519.276820-2-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

A null_blk device with zoned mode enabled is currently initialized with
a number of zones equal to the device capacity divided by the zone size,
without considering whether the device capacity is a multiple of the
zone size. If the zone size is not a divisor of the capacity, the zones
end up not covering the entire capacity, potentially resulting in
out-of-bounds accesses to the zone array.

Fix this by adding one last, smaller zone with a size equal to the
remainder of the disk capacity divided by the zone size, if the capacity
is not a multiple of the zone size. For such a smaller last zone, the
zone capacity is also checked so that it does not exceed the smaller
zone size.

Reported-by: Naohiro Aota
Fixes: ca4b2a011948 ("null_blk: add zone support")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk_zoned.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index beb34b4f76b0..1d0370d91fe7 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -6,8 +6,7 @@
 #define CREATE_TRACE_POINTS
 #include "null_blk_trace.h"
 
-/* zone_size in MBs to sectors. */
-#define ZONE_SIZE_SHIFT		11
+#define MB_TO_SECTS(mb)	(((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
 
 static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
 {
@@ -16,7 +15,7 @@ static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
 
 int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 {
-	sector_t dev_size = (sector_t)dev->size * 1024 * 1024;
+	sector_t dev_capacity_sects, zone_capacity_sects;
 	sector_t sector = 0;
 	unsigned int i;
 
@@ -38,9 +37,13 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		return -EINVAL;
 	}
 
-	dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT;
-	dev->nr_zones = dev_size >>
-		(SECTOR_SHIFT + ilog2(dev->zone_size_sects));
+	zone_capacity_sects = MB_TO_SECTS(dev->zone_capacity);
+	dev_capacity_sects = MB_TO_SECTS(dev->size);
+	dev->zone_size_sects = MB_TO_SECTS(dev->zone_size);
+	dev->nr_zones = dev_capacity_sects >> ilog2(dev->zone_size_sects);
+	if (dev_capacity_sects & (dev->zone_size_sects - 1))
+		dev->nr_zones++;
+
 	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
 				    GFP_KERNEL | __GFP_ZERO);
 	if (!dev->zones)
@@ -101,8 +104,12 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		struct blk_zone *zone = &dev->zones[i];
 
 		zone->start = zone->wp = sector;
-		zone->len = dev->zone_size_sects;
-		zone->capacity = dev->zone_capacity << ZONE_SIZE_SHIFT;
+		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
+			zone->len = dev_capacity_sects - zone->start;
+		else
+			zone->len = dev->zone_size_sects;
+		zone->capacity =
+			min_t(sector_t, zone->len, zone_capacity_sects);
 		zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ;
 		zone->cond = BLK_ZONE_COND_EMPTY;
From patchwork Fri Nov 20 01:55:12 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11919355
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 2/9] null_blk: Fail zone append to conventional zones
Date: Fri, 20 Nov 2020 10:55:12 +0900
Message-Id: <20201120015519.276820-3-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Conventional zones do not have a write pointer and so cannot accept zone
append writes. Make sure to fail any zone append write command issued to
a conventional zone.
Reported-by: Naohiro Aota
Fixes: e0489ed5daeb ("null_blk: Support REQ_OP_ZONE_APPEND")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk_zoned.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 1d0370d91fe7..172f720b8d63 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -339,8 +339,11 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
 
 	trace_nullb_zone_op(cmd, zno, zone->cond);
 
-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
+	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
+		if (append)
+			return BLK_STS_IOERR;
 		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
+	}
 
 	null_lock_zone(dev, zno);
From patchwork Fri Nov 20 01:55:13 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11919353
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 3/9] block: Align max_hw_sectors to logical blocksize
Date: Fri, 20 Nov 2020 10:55:13 +0900
Message-Id: <20201120015519.276820-4-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Block device drivers do not have to call blk_queue_max_hw_sectors() to
set a limit on request size if the default limit BLK_SAFE_MAX_SECTORS
is acceptable. However, this default limit (255 sectors) may not be a
multiple of the device logical block size, in which case it cannot be
used as is as the maximum request size. This is the case for the
null_blk device driver.

Modify blk_queue_max_hw_sectors() to make sure that the request size
limits specified by the max_hw_sectors and max_sectors queue limits are
always aligned to the device logical block size. Additionally, to avoid
introducing a dependence on the execution order of this function with
blk_queue_logical_block_size(), also modify
blk_queue_logical_block_size() to perform the same alignment when the
logical block size is set after max_hw_sectors.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 block/blk-settings.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 9741d1d83e98..dde5c2e9a728 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -157,10 +157,16 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 			__func__, max_hw_sectors);
 	}
 
+	max_hw_sectors = round_down(max_hw_sectors,
+				    limits->logical_block_size >> SECTOR_SHIFT);
 	limits->max_hw_sectors = max_hw_sectors;
+
 	max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
 	max_sectors = min_t(unsigned int, max_sectors, BLK_DEF_MAX_SECTORS);
+	max_sectors = round_down(max_sectors,
+				 limits->logical_block_size >> SECTOR_SHIFT);
 	limits->max_sectors = max_sectors;
+
 	q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9);
 }
 EXPORT_SYMBOL(blk_queue_max_hw_sectors);
@@ -321,13 +327,20 @@ EXPORT_SYMBOL(blk_queue_max_segment_size);
  **/
 void blk_queue_logical_block_size(struct request_queue *q, unsigned int size)
 {
-	q->limits.logical_block_size = size;
+	struct queue_limits *limits = &q->limits;
 
-	if (q->limits.physical_block_size < size)
-		q->limits.physical_block_size = size;
+	limits->logical_block_size = size;
 
-	if (q->limits.io_min < q->limits.physical_block_size)
-		q->limits.io_min = q->limits.physical_block_size;
+	if (limits->physical_block_size < size)
+		limits->physical_block_size = size;
+
+	if (limits->io_min < limits->physical_block_size)
+		limits->io_min = limits->physical_block_size;
+
+	limits->max_hw_sectors =
+		round_down(limits->max_hw_sectors, size >> SECTOR_SHIFT);
+	limits->max_sectors =
+		round_down(limits->max_sectors, size >> SECTOR_SHIFT);
 }
 EXPORT_SYMBOL(blk_queue_logical_block_size);
From patchwork Fri Nov 20 01:55:14 2020
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11919359
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 4/9] null_blk: improve zone locking
Date: Fri, 20 Nov 2020 10:55:14 +0900
Message-Id: <20201120015519.276820-5-damien.lemoal@wdc.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

With memory backing disabled, using a single spinlock for protecting
zone information and zone resource management
prevents the parallel execution, on multiple queues, of IO requests to
different zones. Furthermore, regardless of the use of memory backing,
if a null_blk device is created without limits on the number of open and
active zones, accounting for zone resource management is not necessary.

From these observations, zone locking is changed as follows to improve
performance:

1) The zone_lock spinlock is renamed zone_res_lock and is used only if
   zone resource management is necessary, that is, if either
   zone_max_open or zone_max_active is not 0. This is indicated using
   the new boolean need_zone_res_mgmt in the nullb_device structure.
   null_zone_write() is modified to reduce the amount of code executed
   with the zone_res_lock spinlock held.

2) With memory backing disabled, per-zone locking is changed to a
   spinlock per zone.

3) Introduce the structure nullb_zone to replace the use of
   struct blk_zone for zone information. This new structure includes a
   union of a spinlock and a mutex for zone locking. The spinlock is
   used when memory backing is disabled and the mutex is used with
   memory backing.

With these changes, fio performance with zonemode=zbd for 4K random
read and random write on a dual-socket (24 cores per socket) machine
using the none scheduler is as follows:

before patch:
	write (psync x 96 jobs)      =  465 KIOPS
	read (libaio@qd=8 x 96 jobs) = 1361 KIOPS
after patch:
	write (psync x 96 jobs)      =  456 KIOPS
	read (libaio@qd=8 x 96 jobs) = 4096 KIOPS

Write performance remains mostly unchanged but read performance is
three times higher. Performance when using the mq-deadline scheduler is
not changed by this patch as mq-deadline becomes the bottleneck for a
multi-queue device.
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
---
 drivers/block/null_blk.h       |  28 +++-
 drivers/block/null_blk_zoned.c | 280 +++++++++++++++++++--------------
 2 files changed, 188 insertions(+), 120 deletions(-)

diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index c24d9b5ad81a..14546ead1d66 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -12,6 +12,8 @@
 #include
 #include
 #include
+#include
+#include
 
 struct nullb_cmd {
 	struct request *rq;
@@ -32,6 +34,26 @@ struct nullb_queue {
 	struct nullb_cmd *cmds;
 };
 
+struct nullb_zone {
+	/*
+	 * Zone lock to prevent concurrent modification of a zone write
+	 * pointer position and condition: with memory backing, a write
+	 * command execution may sleep on memory allocation. For this case,
+	 * use mutex as the zone lock. Otherwise, use the spinlock for
+	 * locking the zone.
+	 */
+	union {
+		spinlock_t spinlock;
+		struct mutex mutex;
+	};
+	enum blk_zone_type type;
+	enum blk_zone_cond cond;
+	sector_t start;
+	sector_t wp;
+	unsigned int len;
+	unsigned int capacity;
+};
+
 struct nullb_device {
 	struct nullb *nullb;
 	struct config_item item;
@@ -45,10 +67,10 @@ struct nullb_device {
 	unsigned int nr_zones_imp_open;
 	unsigned int nr_zones_exp_open;
 	unsigned int nr_zones_closed;
-	struct blk_zone *zones;
+	struct nullb_zone *zones;
 	sector_t zone_size_sects;
-	spinlock_t zone_lock;
-	unsigned long *zone_locks;
+	bool need_zone_res_mgmt;
+	spinlock_t zone_res_lock;
 
 	unsigned long size; /* device size in MB */
 	unsigned long completion_nsec; /* time in ns to complete a request */
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
index 172f720b8d63..4d5c0b938618 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk_zoned.c
@@ -13,9 +13,49 @@ static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
 	return sect >> ilog2(dev->zone_size_sects);
 }
 
+static inline void null_lock_zone_res(struct nullb_device *dev)
+{
+	if (dev->need_zone_res_mgmt)
+		spin_lock_irq(&dev->zone_res_lock);
+}
+
+static inline void null_unlock_zone_res(struct nullb_device *dev)
+{
+	if (dev->need_zone_res_mgmt)
+		spin_unlock_irq(&dev->zone_res_lock);
+}
+
+static inline void null_init_zone_lock(struct nullb_device *dev,
+				       struct nullb_zone *zone)
+{
+	if (!dev->memory_backed)
+		spin_lock_init(&zone->spinlock);
+	else
+		mutex_init(&zone->mutex);
+}
+
+static inline void null_lock_zone(struct nullb_device *dev,
+				  struct nullb_zone *zone)
+{
+	if (!dev->memory_backed)
+		spin_lock_irq(&zone->spinlock);
+	else
+		mutex_lock(&zone->mutex);
+}
+
+static inline void null_unlock_zone(struct nullb_device *dev,
+				    struct nullb_zone *zone)
+{
+	if (!dev->memory_backed)
+		spin_unlock_irq(&zone->spinlock);
+	else
+		mutex_unlock(&zone->mutex);
+}
+
 int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 {
 	sector_t dev_capacity_sects, zone_capacity_sects;
+	struct nullb_zone *zone;
 	sector_t sector = 0;
 	unsigned int i;
 
@@ -44,26 +84,12 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 	if (dev_capacity_sects & (dev->zone_size_sects - 1))
 		dev->nr_zones++;
 
-	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
-				    GFP_KERNEL | __GFP_ZERO);
+	dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct nullb_zone),
+				    GFP_KERNEL | __GFP_ZERO);
 	if (!dev->zones)
 		return -ENOMEM;
 
-	/*
-	 * With memory backing, the zone_lock spinlock needs to be temporarily
-	 * released to avoid scheduling in atomic context. To guarantee zone
-	 * information protection, use a bitmap to lock zones with
-	 * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
-	 * implies that the queue is marked with BLK_MQ_F_BLOCKING.
-	 */
-	spin_lock_init(&dev->zone_lock);
-	if (dev->memory_backed) {
-		dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
-		if (!dev->zone_locks) {
-			kvfree(dev->zones);
-			return -ENOMEM;
-		}
-	}
+	spin_lock_init(&dev->zone_res_lock);
 
 	if (dev->zone_nr_conv >= dev->nr_zones) {
 		dev->zone_nr_conv = dev->nr_zones - 1;
@@ -86,10 +112,12 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 		dev->zone_max_open = 0;
 		pr_info("zone_max_open limit disabled, limit >= zone count\n");
 	}
+	dev->need_zone_res_mgmt = dev->zone_max_active || dev->zone_max_open;
 
 	for (i = 0; i < dev->zone_nr_conv; i++) {
-		struct blk_zone *zone = &dev->zones[i];
+		zone = &dev->zones[i];
 
+		null_init_zone_lock(dev, zone);
 		zone->start = sector;
 		zone->len = dev->zone_size_sects;
 		zone->capacity = zone->len;
@@ -101,8 +129,9 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
 	}
 
 	for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
-		struct blk_zone *zone = &dev->zones[i];
+		zone = &dev->zones[i];
 
+		null_init_zone_lock(dev, zone);
 		zone->start = zone->wp = sector;
 		if (zone->start + dev->zone_size_sects > dev_capacity_sects)
 			zone->len = dev_capacity_sects - zone->start;
@@ -147,32 +176,17 @@ int null_register_zoned_dev(struct nullb *nullb)
 
 void null_free_zoned_dev(struct nullb_device *dev)
 {
-	bitmap_free(dev->zone_locks);
 	kvfree(dev->zones);
 }
 
-static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
-{
-	if (dev->memory_backed)
-		wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
-	spin_lock_irq(&dev->zone_lock);
-}
-
-static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
-{
-	spin_unlock_irq(&dev->zone_lock);
-
-	if (dev->memory_backed)
-		clear_and_wake_up_bit(zno, dev->zone_locks);
-}
-
 int null_report_zones(struct gendisk *disk, sector_t sector,
 		unsigned int nr_zones, report_zones_cb cb, void *data)
 {
 	struct nullb *nullb = disk->private_data;
 	struct nullb_device *dev = nullb->dev;
-	unsigned int first_zone, i, zno;
-	struct blk_zone zone;
+	unsigned int first_zone, i;
+	struct nullb_zone *zone;
+	struct blk_zone blkz;
 	int error;
 
 	first_zone = null_zone_no(dev, sector);
@@ -182,19 +196,25 @@ int null_report_zones(struct gendisk *disk, sector_t sector,
 	nr_zones = min(nr_zones, dev->nr_zones - first_zone);
 	trace_nullb_report_zones(nullb, nr_zones);
 
-	zno = first_zone;
-	for (i = 0; i < nr_zones; i++, zno++) {
+	memset(&blkz, 0, sizeof(struct blk_zone));
+	zone = &dev->zones[first_zone];
+	for (i = 0; i < nr_zones; i++, zone++) {
 		/*
 		 * Stacked DM target drivers will remap the zone information by
 		 * modifying the zone information passed to the report callback.
 		 * So use a local copy to avoid corruption of the device zone
 		 * array.
 		 */
-		null_lock_zone(dev, zno);
-		memcpy(&zone, &dev->zones[zno], sizeof(struct blk_zone));
-		null_unlock_zone(dev, zno);
-
-		error = cb(&zone, i, data);
+		null_lock_zone(dev, zone);
+		blkz.start = zone->start;
+		blkz.len = zone->len;
+		blkz.wp = zone->wp;
+		blkz.type = zone->type;
+		blkz.cond = zone->cond;
+		blkz.capacity = zone->capacity;
+		null_unlock_zone(dev, zone);
+
+		error = cb(&blkz, i, data);
 		if (error)
 			return error;
 	}
@@ -210,7 +230,7 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 			sector_t sector, unsigned int len)
 {
 	struct nullb_device *dev = nullb->dev;
-	struct blk_zone *zone = &dev->zones[null_zone_no(dev, sector)];
+	struct nullb_zone *zone = &dev->zones[null_zone_no(dev, sector)];
 	unsigned int nr_sectors = len >> SECTOR_SHIFT;
 
 	/* Read must be below the write pointer position */
@@ -224,11 +244,9 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
 	return (zone->wp - sector) << SECTOR_SHIFT;
 }
 
-static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *zone)
+static blk_status_t __null_close_zone(struct nullb_device *dev,
+				      struct nullb_zone *zone)
 {
-	if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
-		return BLK_STS_IOERR;
-
 	switch (zone->cond) {
 	case BLK_ZONE_COND_CLOSED:
 		/* close operation on closed is not an error */
operation on closed is not an error */ @@ -261,7 +279,7 @@ static void null_close_first_imp_zone(struct nullb_device *dev) for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) { if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) { - null_close_zone(dev, &dev->zones[i]); + __null_close_zone(dev, &dev->zones[i]); return; } } @@ -310,7 +328,8 @@ static blk_status_t null_check_open(struct nullb_device *dev) * it is not certain that closing an implicit open zone will allow a new zone * to be opened, since we might already be at the active limit capacity. */ -static blk_status_t null_check_zone_resources(struct nullb_device *dev, struct blk_zone *zone) +static blk_status_t null_check_zone_resources(struct nullb_device *dev, + struct nullb_zone *zone) { blk_status_t ret; @@ -334,7 +353,7 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector, { struct nullb_device *dev = cmd->nq->dev; unsigned int zno = null_zone_no(dev, sector); - struct blk_zone *zone = &dev->zones[zno]; + struct nullb_zone *zone = &dev->zones[zno]; blk_status_t ret; trace_nullb_zone_op(cmd, zno, zone->cond); @@ -345,26 +364,12 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector, return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors); } - null_lock_zone(dev, zno); + null_lock_zone(dev, zone); - switch (zone->cond) { - case BLK_ZONE_COND_FULL: + if (zone->cond == BLK_ZONE_COND_FULL) { /* Cannot write to a full zone */ ret = BLK_STS_IOERR; goto unlock; - case BLK_ZONE_COND_EMPTY: - case BLK_ZONE_COND_CLOSED: - ret = null_check_zone_resources(dev, zone); - if (ret != BLK_STS_OK) - goto unlock; - break; - case BLK_ZONE_COND_IMP_OPEN: - case BLK_ZONE_COND_EXP_OPEN: - break; - default: - /* Invalid zone condition */ - ret = BLK_STS_IOERR; - goto unlock; } /* @@ -389,60 +394,69 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector, goto unlock; } - if (zone->cond == BLK_ZONE_COND_CLOSED) { - dev->nr_zones_closed--; - 
dev->nr_zones_imp_open++; - } else if (zone->cond == BLK_ZONE_COND_EMPTY) { - dev->nr_zones_imp_open++; + if (zone->cond == BLK_ZONE_COND_CLOSED || + zone->cond == BLK_ZONE_COND_EMPTY) { + null_lock_zone_res(dev); + + ret = null_check_zone_resources(dev, zone); + if (ret != BLK_STS_OK) { + null_unlock_zone_res(dev); + goto unlock; + } + if (zone->cond == BLK_ZONE_COND_CLOSED) { + dev->nr_zones_closed--; + dev->nr_zones_imp_open++; + } else if (zone->cond == BLK_ZONE_COND_EMPTY) { + dev->nr_zones_imp_open++; + } + + if (zone->cond != BLK_ZONE_COND_EXP_OPEN) + zone->cond = BLK_ZONE_COND_IMP_OPEN; + + null_unlock_zone_res(dev); } - if (zone->cond != BLK_ZONE_COND_EXP_OPEN) - zone->cond = BLK_ZONE_COND_IMP_OPEN; - /* - * Memory backing allocation may sleep: release the zone_lock spinlock - * to avoid scheduling in atomic context. Zone operation atomicity is - * still guaranteed through the zone_locks bitmap. - */ - if (dev->memory_backed) - spin_unlock_irq(&dev->zone_lock); ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors); - if (dev->memory_backed) - spin_lock_irq(&dev->zone_lock); - if (ret != BLK_STS_OK) goto unlock; zone->wp += nr_sectors; if (zone->wp == zone->start + zone->capacity) { + null_lock_zone_res(dev); if (zone->cond == BLK_ZONE_COND_EXP_OPEN) dev->nr_zones_exp_open--; else if (zone->cond == BLK_ZONE_COND_IMP_OPEN) dev->nr_zones_imp_open--; zone->cond = BLK_ZONE_COND_FULL; + null_unlock_zone_res(dev); } + ret = BLK_STS_OK; unlock: - null_unlock_zone(dev, zno); + null_unlock_zone(dev, zone); return ret; } -static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zone) +static blk_status_t null_open_zone(struct nullb_device *dev, + struct nullb_zone *zone) { - blk_status_t ret; + blk_status_t ret = BLK_STS_OK; if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) return BLK_STS_IOERR; + null_lock_zone_res(dev); + switch (zone->cond) { case BLK_ZONE_COND_EXP_OPEN: /* open operation on exp open is not an error */ - return BLK_STS_OK; 
+ goto unlock; case BLK_ZONE_COND_EMPTY: ret = null_check_zone_resources(dev, zone); if (ret != BLK_STS_OK) - return ret; + goto unlock; break; case BLK_ZONE_COND_IMP_OPEN: dev->nr_zones_imp_open--; @@ -450,35 +464,57 @@ static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zo case BLK_ZONE_COND_CLOSED: ret = null_check_zone_resources(dev, zone); if (ret != BLK_STS_OK) - return ret; + goto unlock; dev->nr_zones_closed--; break; case BLK_ZONE_COND_FULL: default: - return BLK_STS_IOERR; + ret = BLK_STS_IOERR; + goto unlock; } zone->cond = BLK_ZONE_COND_EXP_OPEN; dev->nr_zones_exp_open++; - return BLK_STS_OK; +unlock: + null_unlock_zone_res(dev); + + return ret; } -static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *zone) +static blk_status_t null_close_zone(struct nullb_device *dev, + struct nullb_zone *zone) { blk_status_t ret; if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) return BLK_STS_IOERR; + null_lock_zone_res(dev); + ret = __null_close_zone(dev, zone); + null_unlock_zone_res(dev); + + return ret; +} + +static blk_status_t null_finish_zone(struct nullb_device *dev, + struct nullb_zone *zone) +{ + blk_status_t ret = BLK_STS_OK; + + if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) + return BLK_STS_IOERR; + + null_lock_zone_res(dev); + switch (zone->cond) { case BLK_ZONE_COND_FULL: /* finish operation on full is not an error */ - return BLK_STS_OK; + goto unlock; case BLK_ZONE_COND_EMPTY: ret = null_check_zone_resources(dev, zone); if (ret != BLK_STS_OK) - return ret; + goto unlock; break; case BLK_ZONE_COND_IMP_OPEN: dev->nr_zones_imp_open--; @@ -489,27 +525,35 @@ static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone * case BLK_ZONE_COND_CLOSED: ret = null_check_zone_resources(dev, zone); if (ret != BLK_STS_OK) - return ret; + goto unlock; dev->nr_zones_closed--; break; default: - return BLK_STS_IOERR; + ret = BLK_STS_IOERR; + goto unlock; } zone->cond = BLK_ZONE_COND_FULL; zone->wp = 
zone->start + zone->len; - return BLK_STS_OK; +unlock: + null_unlock_zone_res(dev); + + return ret; } -static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *zone) +static blk_status_t null_reset_zone(struct nullb_device *dev, + struct nullb_zone *zone) { if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) return BLK_STS_IOERR; + null_lock_zone_res(dev); + switch (zone->cond) { case BLK_ZONE_COND_EMPTY: /* reset operation on empty is not an error */ + null_unlock_zone_res(dev); return BLK_STS_OK; case BLK_ZONE_COND_IMP_OPEN: dev->nr_zones_imp_open--; @@ -523,12 +567,15 @@ static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *z case BLK_ZONE_COND_FULL: break; default: + null_unlock_zone_res(dev); return BLK_STS_IOERR; } zone->cond = BLK_ZONE_COND_EMPTY; zone->wp = zone->start; + null_unlock_zone_res(dev); + return BLK_STS_OK; } @@ -537,19 +584,19 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op, { struct nullb_device *dev = cmd->nq->dev; unsigned int zone_no; - struct blk_zone *zone; + struct nullb_zone *zone; blk_status_t ret; size_t i; if (op == REQ_OP_ZONE_RESET_ALL) { for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) { - null_lock_zone(dev, i); zone = &dev->zones[i]; + null_lock_zone(dev, zone); if (zone->cond != BLK_ZONE_COND_EMPTY) { null_reset_zone(dev, zone); trace_nullb_zone_op(cmd, i, zone->cond); } - null_unlock_zone(dev, i); + null_unlock_zone(dev, zone); } return BLK_STS_OK; } @@ -557,7 +604,7 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op, zone_no = null_zone_no(dev, sector); zone = &dev->zones[zone_no]; - null_lock_zone(dev, zone_no); + null_lock_zone(dev, zone); switch (op) { case REQ_OP_ZONE_RESET: @@ -580,7 +627,7 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op, if (ret == BLK_STS_OK) trace_nullb_zone_op(cmd, zone_no, zone->cond); - null_unlock_zone(dev, zone_no); + null_unlock_zone(dev, zone); return ret; } @@ -588,29 
+635,28 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op, blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op, sector_t sector, sector_t nr_sectors) { - struct nullb_device *dev = cmd->nq->dev; - unsigned int zno = null_zone_no(dev, sector); + struct nullb_device *dev; + struct nullb_zone *zone; blk_status_t sts; switch (op) { case REQ_OP_WRITE: - sts = null_zone_write(cmd, sector, nr_sectors, false); - break; + return null_zone_write(cmd, sector, nr_sectors, false); case REQ_OP_ZONE_APPEND: - sts = null_zone_write(cmd, sector, nr_sectors, true); - break; + return null_zone_write(cmd, sector, nr_sectors, true); case REQ_OP_ZONE_RESET: case REQ_OP_ZONE_RESET_ALL: case REQ_OP_ZONE_OPEN: case REQ_OP_ZONE_CLOSE: case REQ_OP_ZONE_FINISH: - sts = null_zone_mgmt(cmd, op, sector); - break; + return null_zone_mgmt(cmd, op, sector); default: - null_lock_zone(dev, zno); + dev = cmd->nq->dev; + zone = &dev->zones[null_zone_no(dev, sector)]; + + null_lock_zone(dev, zone); sts = null_process_cmd(cmd, op, sector, nr_sectors); - null_unlock_zone(dev, zno); + null_unlock_zone(dev, zone); + return sts; } - - return sts; }

From patchwork Fri Nov 20 01:55:15 2020
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 5/9] null_blk: Improve implicit zone close
Date: Fri, 20 Nov 2020 10:55:15 +0900
Message-Id: <20201120015519.276820-6-damien.lemoal@wdc.com>
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

When open zone resource management is enabled, that is, when a null_blk zoned device is created with zone_max_open different than 0, implicitly or explicitly opening a zone may require implicitly closing a zone that is already implicitly open. This operation is done using the function null_close_first_imp_zone(), which searches for an implicitly open zone to close, starting from the first sequential zone. This implementation is simple but may result in the same zone being constantly implicitly closed and then implicitly reopened on write, namely, the lowest numbered zone that is being written. Avoid this by starting the search for an implicitly open zone from the zone following the last zone that was implicitly closed.
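The round-robin scan described above can be sketched in isolation. The code below is a hypothetical user-space model, not the driver code: `struct dev_state`, `close_imp_open_zone()` and the fixed-size zone array are invented stand-ins for `struct nullb_device`, `null_close_imp_open_zone()` and `dev->zones`, but the cursor logic (resume at `imp_close_zone_no`, wrap back to the first sequential zone past the conventional zones) follows the patch.

```c
#include <assert.h>

enum zone_cond { COND_EMPTY, COND_IMP_OPEN, COND_EXP_OPEN, COND_CLOSED, COND_FULL };

struct dev_state {
	unsigned int zone_nr_conv;      /* conventional zones are never implicitly open */
	unsigned int nr_zones;
	unsigned int imp_close_zone_no; /* where the previous search left off */
	enum zone_cond cond[16];
};

/* Close one implicitly open zone, resuming the scan after the zone that
 * was closed last time so the same low-numbered zone is not picked again.
 * Returns the zone number closed, or -1 if nothing was implicitly open. */
static int close_imp_open_zone(struct dev_state *dev)
{
	unsigned int zno = dev->imp_close_zone_no;

	if (zno >= dev->nr_zones)
		zno = dev->zone_nr_conv;

	/* Visit every sequential zone at most once, starting at the cursor. */
	for (unsigned int i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
		unsigned int cur = zno;

		zno++;
		if (zno >= dev->nr_zones)
			zno = dev->zone_nr_conv; /* wrap past the conventional zones */

		if (dev->cond[cur] == COND_IMP_OPEN) {
			dev->cond[cur] = COND_CLOSED;
			dev->imp_close_zone_no = zno; /* next search starts after us */
			return (int)cur;
		}
	}
	return -1;
}
```

Starting each search at the saved cursor means a freshly reopened low-numbered zone is only reconsidered after every other implicitly open zone has had its turn.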
The function null_close_first_imp_zone() is renamed null_close_imp_open_zone(). Signed-off-by: Damien Le Moal Reviewed-by: Johannes Thumshirn --- drivers/block/null_blk.h | 1 + drivers/block/null_blk_zoned.c | 22 +++++++++++++++++----- 2 files changed, 18 insertions(+), 5 deletions(-) diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h index 14546ead1d66..29a8817fadfc 100644 --- a/drivers/block/null_blk.h +++ b/drivers/block/null_blk.h @@ -67,6 +67,7 @@ struct nullb_device { unsigned int nr_zones_imp_open; unsigned int nr_zones_exp_open; unsigned int nr_zones_closed; + unsigned int imp_close_zone_no; struct nullb_zone *zones; sector_t zone_size_sects; bool need_zone_res_mgmt; diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c index 4d5c0b938618..4dad8748a61d 100644 --- a/drivers/block/null_blk_zoned.c +++ b/drivers/block/null_blk_zoned.c @@ -113,6 +113,7 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q) pr_info("zone_max_open limit disabled, limit >= zone count\n"); } dev->need_zone_res_mgmt = dev->zone_max_active || dev->zone_max_open; + dev->imp_close_zone_no = dev->zone_nr_conv; for (i = 0; i < dev->zone_nr_conv; i++) { zone = &dev->zones[i]; @@ -273,13 +274,24 @@ static blk_status_t __null_close_zone(struct nullb_device *dev, return BLK_STS_OK; } -static void null_close_first_imp_zone(struct nullb_device *dev) +static void null_close_imp_open_zone(struct nullb_device *dev) { - unsigned int i; + struct nullb_zone *zone; + unsigned int zno, i; + + zno = dev->imp_close_zone_no; + if (zno >= dev->nr_zones) + zno = dev->zone_nr_conv; for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) { - if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) { - __null_close_zone(dev, &dev->zones[i]); + zone = &dev->zones[zno]; + zno++; + if (zno >= dev->nr_zones) + zno = dev->zone_nr_conv; + + if (zone->cond == BLK_ZONE_COND_IMP_OPEN) { + __null_close_zone(dev, zone); + dev->imp_close_zone_no = zno; return; } } @@ 
-307,7 +319,7 @@ static blk_status_t null_check_open(struct nullb_device *dev) if (dev->nr_zones_imp_open) { if (null_check_active(dev) == BLK_STS_OK) { - null_close_first_imp_zone(dev); + null_close_imp_open_zone(dev); return BLK_STS_OK; } }

From patchwork Fri Nov 20 01:55:16 2020
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 6/9] null_blk: cleanup discard handling
Date: Fri, 20 Nov 2020 10:55:16 +0900
Message-Id: <20201120015519.276820-7-damien.lemoal@wdc.com>
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

null_handle_discard() is called from both null_handle_rq() and null_handle_bio(). As these functions are only passed a nullb_cmd structure, this forces pointer dereferences to identify the discard operation code and to access the sector range to be discarded. Simplify all this by changing the interface of the functions null_handle_discard() and null_handle_memory_backed() to pass along the operation code, operation start sector and number of sectors. With this change, null_handle_discard() can be called directly from null_handle_memory_backed(). Also add a message warning that the discard configuration attribute has no effect when memory backing is disabled. No functional change is introduced by this patch.
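As a rough model of why passing (op, sector, nr_sectors) down explicitly helps: the discard path no longer needs to look inside the request or bio, and the memory-backed handler can dispatch it directly. The snippet below is an illustrative stand-in, not the null_blk code; `struct model_dev` and its byte-per-sector backing array are invented for the example, and discarding a chunk at a time mirrors how null_handle_discard() frees backing storage blocksize bytes per iteration.

```c
#include <assert.h>

enum req_op { OP_READ, OP_WRITE, OP_DISCARD };

/* Toy memory-backed device: one flag per sector, nonzero meaning
 * "backing storage is allocated for this sector". */
struct model_dev {
	unsigned char backed[64];
	unsigned long blocksize_sectors;
};

/* Free the backing store for a sector range, one block-sized chunk
 * per loop iteration. */
static int model_handle_discard(struct model_dev *dev, unsigned long sector,
				unsigned long nr_sectors)
{
	while (nr_sectors > 0) {
		unsigned long n = nr_sectors < dev->blocksize_sectors ?
				  nr_sectors : dev->blocksize_sectors;

		for (unsigned long s = sector; s < sector + n; s++)
			dev->backed[s] = 0;
		sector += n;
		nr_sectors -= n;
	}
	return 0;
}

/* With the operation code and sector range passed down, the discard
 * case is handled here instead of inside the bio/rq walkers. */
static int model_handle_memory_backed(struct model_dev *dev, enum req_op op,
				      unsigned long sector, unsigned long nr_sectors)
{
	if (op == OP_DISCARD)
		return model_handle_discard(dev, sector, nr_sectors);

	for (unsigned long s = sector; s < sector + nr_sectors; s++)
		if (op == OP_WRITE)
			dev->backed[s] = 1; /* pretend a page was allocated */
	return 0;
}
```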
Signed-off-by: Damien Le Moal Reviewed-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn --- drivers/block/null_blk_main.c | 43 ++++++++++++++++++----------------- 1 file changed, 22 insertions(+), 21 deletions(-) diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 4685ea401d5b..a223bee24e76 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -1076,13 +1076,16 @@ static void nullb_fill_pattern(struct nullb *nullb, struct page *page, kunmap_atomic(dst); } -static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n) +static blk_status_t null_handle_discard(struct nullb_device *dev, + sector_t sector, sector_t nr_sectors) { + struct nullb *nullb = dev->nullb; + size_t n = nr_sectors << SECTOR_SHIFT; size_t temp; spin_lock_irq(&nullb->lock); while (n > 0) { - temp = min_t(size_t, n, nullb->dev->blocksize); + temp = min_t(size_t, n, dev->blocksize); null_free_sector(nullb, sector, false); if (null_cache_active(nullb)) null_free_sector(nullb, sector, true); @@ -1090,6 +1093,8 @@ static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n) n -= temp; } spin_unlock_irq(&nullb->lock); + + return BLK_STS_OK; } static int null_handle_flush(struct nullb *nullb) @@ -1149,17 +1154,10 @@ static int null_handle_rq(struct nullb_cmd *cmd) struct nullb *nullb = cmd->nq->dev->nullb; int err; unsigned int len; - sector_t sector; + sector_t sector = blk_rq_pos(rq); struct req_iterator iter; struct bio_vec bvec; - sector = blk_rq_pos(rq); - - if (req_op(rq) == REQ_OP_DISCARD) { - null_handle_discard(nullb, sector, blk_rq_bytes(rq)); - return 0; - } - spin_lock_irq(&nullb->lock); rq_for_each_segment(bvec, rq, iter) { len = bvec.bv_len; @@ -1183,18 +1181,10 @@ static int null_handle_bio(struct nullb_cmd *cmd) struct nullb *nullb = cmd->nq->dev->nullb; int err; unsigned int len; - sector_t sector; + sector_t sector = bio->bi_iter.bi_sector; struct bio_vec bvec; struct bvec_iter iter; - 
sector = bio->bi_iter.bi_sector; - - if (bio_op(bio) == REQ_OP_DISCARD) { - null_handle_discard(nullb, sector, - bio_sectors(bio) << SECTOR_SHIFT); - return 0; - } - spin_lock_irq(&nullb->lock); bio_for_each_segment(bvec, bio, iter) { len = bvec.bv_len; @@ -1263,11 +1253,16 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd, } static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd, - enum req_opf op) + enum req_opf op, + sector_t sector, + sector_t nr_sectors) { struct nullb_device *dev = cmd->nq->dev; int err; + if (op == REQ_OP_DISCARD) + return null_handle_discard(dev, sector, nr_sectors); + if (dev->queue_mode == NULL_Q_BIO) err = null_handle_bio(cmd); else @@ -1343,7 +1338,7 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd, } if (dev->memory_backed) - return null_handle_memory_backed(cmd, op); + return null_handle_memory_backed(cmd, op, sector, nr_sectors); return BLK_STS_OK; } @@ -1589,6 +1584,12 @@ static void null_config_discard(struct nullb *nullb) if (nullb->dev->discard == false) return; + if (!nullb->dev->memory_backed) { + nullb->dev->discard = false; + pr_info("discard option is ignored without memory backing\n"); + return; + } + if (nullb->dev->zoned) { nullb->dev->discard = false; pr_info("discard option is ignored in zoned mode\n");

From patchwork Fri Nov 20 01:55:17 2020
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 7/9] null_blk: discard zones on reset
Date: Fri, 20 Nov 2020 10:55:17 +0900
Message-Id: <20201120015519.276820-8-damien.lemoal@wdc.com>
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

When memory backing is enabled, use null_handle_discard() to free the backing memory used by a zone when the zone is being reset.
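A minimal model of the reset-plus-discard behavior, assuming the semantics described above: reset rewinds the write pointer and returns the zone to the empty condition, and, when the device is memory backed, the zone's whole sector range is then discarded so its backing storage is released. `struct zdev` and `struct model_zone` are invented for this sketch and are not the driver's structures.

```c
#include <assert.h>

enum zcond { ZCOND_EMPTY, ZCOND_IMP_OPEN, ZCOND_FULL };

struct model_zone {
	unsigned long start;
	unsigned long len;
	unsigned long wp;   /* write pointer, in sectors */
	enum zcond cond;
};

struct zdev {
	int memory_backed;
	unsigned char backed[32]; /* one allocation flag per sector */
};

/* Drop the backing flags for a sector range. */
static void model_discard(struct zdev *dev, unsigned long sector,
			  unsigned long nr_sectors)
{
	for (unsigned long s = sector; s < sector + nr_sectors; s++)
		dev->backed[s] = 0;
}

/* Reset rewinds the write pointer to the zone start; when the device is
 * memory backed, the whole zone range is then discarded as well. */
static void model_reset_zone(struct zdev *dev, struct model_zone *zone)
{
	zone->cond = ZCOND_EMPTY;
	zone->wp = zone->start;
	if (dev->memory_backed)
		model_discard(dev, zone->start, zone->len);
}
```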
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
--- drivers/block/null_blk.h | 2 ++ drivers/block/null_blk_main.c | 4 ++-- drivers/block/null_blk_zoned.c | 3 +++ 3 files changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h index 29a8817fadfc..63000aeeb2f3 100644 --- a/drivers/block/null_blk.h +++ b/drivers/block/null_blk.h @@ -116,6 +116,8 @@ struct nullb { char disk_name[DISK_NAME_LEN]; }; +blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector, + sector_t nr_sectors); blk_status_t null_process_cmd(struct nullb_cmd *cmd, enum req_opf op, sector_t sector, unsigned int nr_sectors); diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index a223bee24e76..b758b9366630 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -1076,8 +1076,8 @@ static void nullb_fill_pattern(struct nullb *nullb, struct page *page, kunmap_atomic(dst); } -static blk_status_t null_handle_discard(struct nullb_device *dev, - sector_t sector, sector_t nr_sectors) +blk_status_t null_handle_discard(struct nullb_device *dev, + sector_t sector, sector_t nr_sectors) { struct nullb *nullb = dev->nullb; size_t n = nr_sectors << SECTOR_SHIFT; diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c index 4dad8748a61d..65464f7559e0 100644 --- a/drivers/block/null_blk_zoned.c +++ b/drivers/block/null_blk_zoned.c @@ -588,6 +588,9 @@ static blk_status_t null_reset_zone(struct nullb_device *dev, null_unlock_zone_res(dev); + if (dev->memory_backed) + return null_handle_discard(dev, zone->start, zone->len); + return BLK_STS_OK; }

From patchwork Fri Nov 20 01:55:18 2020
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 8/9] null_blk: Allow controlling max_hw_sectors limit
Date: Fri, 20 Nov 2020 10:55:18 +0900
Message-Id: <20201120015519.276820-9-damien.lemoal@wdc.com>
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Add the module option and configfs attribute max_sectors to allow configuring the maximum size of a command issued to a null_blk device.
This allows exercising the block layer BIO splitting with limits other
than the default BLK_SAFE_MAX_SECTORS. This is also useful for testing
the zone append write path of file systems, as the max_hw_sectors limit
value is also used for the max_zone_append_sectors limit.

Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 drivers/block/null_blk.h      |  1 +
 drivers/block/null_blk_main.c | 20 +++++++++++++++++++-
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
index 63000aeeb2f3..83504f3cc9d6 100644
--- a/drivers/block/null_blk.h
+++ b/drivers/block/null_blk.h
@@ -85,6 +85,7 @@ struct nullb_device {
 	unsigned int home_node; /* home node for the device */
 	unsigned int queue_mode; /* block interface */
 	unsigned int blocksize; /* block size */
+	unsigned int max_sectors; /* Max sectors per command */
 	unsigned int irqmode; /* IRQ completion handler */
 	unsigned int hw_queue_depth; /* queue depth */
 	unsigned int index; /* index of the disk, only valid with a disk */
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
index b758b9366630..5357c3a4a36f 100644
--- a/drivers/block/null_blk_main.c
+++ b/drivers/block/null_blk_main.c
@@ -152,6 +152,10 @@ static int g_bs = 512;
 module_param_named(bs, g_bs, int, 0444);
 MODULE_PARM_DESC(bs, "Block size (in bytes)");
 
+static int g_max_sectors;
+module_param_named(max_sectors, g_max_sectors, int, 0444);
+MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
+
 static unsigned int nr_devices = 1;
 module_param(nr_devices, uint, 0444);
 MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -346,6 +350,7 @@ NULLB_DEVICE_ATTR(submit_queues, uint, nullb_apply_submit_queues);
 NULLB_DEVICE_ATTR(home_node, uint, NULL);
 NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
 NULLB_DEVICE_ATTR(blocksize, uint, NULL);
+NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
 NULLB_DEVICE_ATTR(irqmode, uint, NULL);
 NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
 NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -463,6 +468,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_home_node,
 	&nullb_device_attr_queue_mode,
 	&nullb_device_attr_blocksize,
+	&nullb_device_attr_max_sectors,
 	&nullb_device_attr_irqmode,
 	&nullb_device_attr_hw_queue_depth,
 	&nullb_device_attr_index,
@@ -533,7 +539,7 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
 	return snprintf(page, PAGE_SIZE,
-			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active\n");
+			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors\n");
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
@@ -588,6 +594,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->home_node = g_home_node;
 	dev->queue_mode = g_queue_mode;
 	dev->blocksize = g_bs;
+	dev->max_sectors = g_max_sectors;
 	dev->irqmode = g_irqmode;
 	dev->hw_queue_depth = g_hw_queue_depth;
 	dev->blocking = g_blocking;
@@ -1867,6 +1874,11 @@ static int null_add_dev(struct nullb_device *dev)
 	blk_queue_logical_block_size(nullb->q, dev->blocksize);
 	blk_queue_physical_block_size(nullb->q, dev->blocksize);
 
+	if (!dev->max_sectors)
+		dev->max_sectors = queue_max_hw_sectors(nullb->q);
+	dev->max_sectors = min_t(unsigned int, dev->max_sectors,
+				 BLK_DEF_MAX_SECTORS);
+	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
 
 	null_config_discard(nullb);
@@ -1910,6 +1922,12 @@ static int __init null_init(void)
 		g_bs = PAGE_SIZE;
 	}
 
+	if (g_max_sectors > BLK_DEF_MAX_SECTORS) {
+		pr_warn("invalid max sectors\n");
+		pr_warn("defaults max sectors to %u\n", BLK_DEF_MAX_SECTORS);
+		g_max_sectors = BLK_DEF_MAX_SECTORS;
+	}
+
 	if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
 		pr_err("invalid home_node value\n");
 		g_home_node = NUMA_NO_NODE;

From patchwork Fri Nov 20 01:55:19 2020
From: Damien Le Moal
To: linux-block@vger.kernel.org, Jens Axboe
Subject: [PATCH v4 9/9] null_blk: Move driver into its own directory
Date: Fri, 20 Nov 2020 10:55:19 +0900
Message-Id: <20201120015519.276820-10-damien.lemoal@wdc.com>
In-Reply-To: <20201120015519.276820-1-damien.lemoal@wdc.com>
References: <20201120015519.276820-1-damien.lemoal@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Move the null_blk driver code into the new sub-directory
drivers/block/null_blk.

Suggested-by: Bart Van Assche
Signed-off-by: Damien Le Moal
Reviewed-by: Johannes Thumshirn
---
 drivers/block/Kconfig                                |  8 +-------
 drivers/block/Makefile                               |  7 +------
 drivers/block/null_blk/Kconfig                       | 12 ++++++++++++
 drivers/block/null_blk/Makefile                      | 11 +++++++++++
 drivers/block/{null_blk_main.c => null_blk/main.c}   |  0
 drivers/block/{ => null_blk}/null_blk.h              |  0
 drivers/block/{null_blk_trace.c => null_blk/trace.c} |  2 +-
 drivers/block/{null_blk_trace.h => null_blk/trace.h} |  2 +-
 drivers/block/{null_blk_zoned.c => null_blk/zoned.c} |  2 +-
 9 files changed, 28 insertions(+), 16 deletions(-)
 create mode 100644 drivers/block/null_blk/Kconfig
 create mode 100644 drivers/block/null_blk/Makefile
 rename drivers/block/{null_blk_main.c => null_blk/main.c} (100%)
 rename drivers/block/{ => null_blk}/null_blk.h (100%)
 rename drivers/block/{null_blk_trace.c => null_blk/trace.c} (93%)
 rename drivers/block/{null_blk_trace.h => null_blk/trace.h} (97%)
 rename drivers/block/{null_blk_zoned.c => null_blk/zoned.c} (99%)

diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index ecceaaa1a66f..262326973ee0 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -16,13 +16,7 @@ menuconfig BLK_DEV
 
 if BLK_DEV
 
-config BLK_DEV_NULL_BLK
-	tristate "Null test block driver"
-	select CONFIGFS_FS
-
-config BLK_DEV_NULL_BLK_FAULT_INJECTION
-	bool "Support fault injection for Null test block driver"
-	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
+source "drivers/block/null_blk/Kconfig"
 
 config BLK_DEV_FD
 	tristate "Normal floppy disk support"
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index e1f63117ee94..a3170859e01d 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -41,12 +41,7 @@ obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
 obj-$(CONFIG_ZRAM) += zram/
 obj-$(CONFIG_BLK_DEV_RNBD) += rnbd/
 
-obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk.o
-null_blk-objs := null_blk_main.o
-ifeq ($(CONFIG_BLK_DEV_ZONED), y)
-null_blk-$(CONFIG_TRACING) += null_blk_trace.o
-endif
-null_blk-$(CONFIG_BLK_DEV_ZONED) += null_blk_zoned.o
+obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk/
 
 skd-y := skd_main.o
 swim_mod-y := swim.o swim_asm.o
diff --git a/drivers/block/null_blk/Kconfig b/drivers/block/null_blk/Kconfig
new file mode 100644
index 000000000000..6bf1f8ca20a2
--- /dev/null
+++ b/drivers/block/null_blk/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Null block device driver configuration
+#
+
+config BLK_DEV_NULL_BLK
+	tristate "Null test block driver"
+	select CONFIGFS_FS
+
+config BLK_DEV_NULL_BLK_FAULT_INJECTION
+	bool "Support fault injection for Null test block driver"
+	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
diff --git a/drivers/block/null_blk/Makefile b/drivers/block/null_blk/Makefile
new file mode 100644
index 000000000000..84c36e512ab8
--- /dev/null
+++ b/drivers/block/null_blk/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+
+# needed for trace events
+ccflags-y += -I$(src)
+
+obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk.o
+null_blk-objs := main.o
+ifeq ($(CONFIG_BLK_DEV_ZONED), y)
+null_blk-$(CONFIG_TRACING) += trace.o
+endif
+null_blk-$(CONFIG_BLK_DEV_ZONED) += zoned.o
diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk/main.c
similarity index 100%
rename from drivers/block/null_blk_main.c
rename to drivers/block/null_blk/main.c
diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk/null_blk.h
similarity index 100%
rename from drivers/block/null_blk.h
rename to drivers/block/null_blk/null_blk.h
diff --git a/drivers/block/null_blk_trace.c b/drivers/block/null_blk/trace.c
similarity index 93%
rename from drivers/block/null_blk_trace.c
rename to drivers/block/null_blk/trace.c
index f246e7bff698..3711cba16071 100644
--- a/drivers/block/null_blk_trace.c
+++ b/drivers/block/null_blk/trace.c
@@ -4,7 +4,7 @@
  *
  * Copyright (C) 2020 Western Digital Corporation or its affiliates.
  */
-#include "null_blk_trace.h"
+#include "trace.h"
 
 /*
  * Helper to use for all null_blk traces to extract disk name.
diff --git a/drivers/block/null_blk_trace.h b/drivers/block/null_blk/trace.h
similarity index 97%
rename from drivers/block/null_blk_trace.h
rename to drivers/block/null_blk/trace.h
index 4f83032eb544..ce3b430e88c5 100644
--- a/drivers/block/null_blk_trace.h
+++ b/drivers/block/null_blk/trace.h
@@ -73,7 +73,7 @@ TRACE_EVENT(nullb_report_zones,
 #undef TRACE_INCLUDE_PATH
 #define TRACE_INCLUDE_PATH .
 #undef TRACE_INCLUDE_FILE
-#define TRACE_INCLUDE_FILE null_blk_trace
+#define TRACE_INCLUDE_FILE trace
 
 /* This part must be outside protection */
 #include
diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk/zoned.c
similarity index 99%
rename from drivers/block/null_blk_zoned.c
rename to drivers/block/null_blk/zoned.c
index 65464f7559e0..148b871f263b 100644
--- a/drivers/block/null_blk_zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -4,7 +4,7 @@
 #include "null_blk.h"
 
 #define CREATE_TRACE_POINTS
-#include "null_blk_trace.h"
+#include "trace.h"
 
 #define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)