From patchwork Thu Jun 27 09:29:42 2019
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11019205
From: Damien Le Moal
To: linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-block@vger.kernel.org, Jens Axboe
Cc: Christoph Hellwig, Bart Van Assche
Subject: [PATCH V5 1/3] block: Allow mapping of vmalloc-ed buffers
Date: Thu, 27 Jun 2019 18:29:42 +0900
Message-Id: <20190627092944.20957-2-damien.lemoal@wdc.com>
In-Reply-To: <20190627092944.20957-1-damien.lemoal@wdc.com>
References: <20190627092944.20957-1-damien.lemoal@wdc.com>

To allow the SCSI subsystem scsi_execute_req() function to issue
requests using large buffers that are better allocated with vmalloc()
rather than kmalloc(), modify bio_map_kern() to allow passing a buffer
allocated with vmalloc().

To do so, detect vmalloc-ed buffers using is_vmalloc_addr(). For
vmalloc-ed buffers, flush the buffer using flush_kernel_vmap_range(),
use vmalloc_to_page() instead of virt_to_page() to obtain the pages of
the buffer, and invalidate the buffer addresses with
invalidate_kernel_vmap_range() on completion of read BIOs. This last
point is executed using the function bio_invalidate_vmalloc_pages()
which is defined only if the architecture defines
ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE, that is, if the architecture
actually needs the invalidation done.

Fixes: 515ce6061312 ("scsi: sd_zbc: Fix sd_zbc_report_zones() buffer allocation")
Fixes: e76239a3748c ("block: add a report_zones method")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Christoph Hellwig
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Ming Lei
---
 block/bio.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index ce797d73bb43..bbba5f08b2ef 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include "blk.h"
@@ -1479,8 +1480,22 @@ void bio_unmap_user(struct bio *bio)
 	bio_put(bio);
 }
 
+static void bio_invalidate_vmalloc_pages(struct bio *bio)
+{
+#ifdef ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
+	if (bio->bi_private && !op_is_write(bio_op(bio))) {
+		unsigned long i, len = 0;
+
+		for (i = 0; i < bio->bi_vcnt; i++)
+			len += bio->bi_io_vec[i].bv_len;
+		invalidate_kernel_vmap_range(bio->bi_private, len);
+	}
+#endif
+}
+
 static void bio_map_kern_endio(struct bio *bio)
 {
+	bio_invalidate_vmalloc_pages(bio);
 	bio_put(bio);
 }
 
@@ -1501,6 +1516,8 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
 	unsigned long end = (kaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned long start = kaddr >> PAGE_SHIFT;
 	const int nr_pages = end - start;
+	bool is_vmalloc = is_vmalloc_addr(data);
+	struct page *page;
 	int offset, i;
 	struct bio *bio;
 
@@ -1508,6 +1525,11 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
 
+	if (is_vmalloc) {
+		flush_kernel_vmap_range(data, len);
+		bio->bi_private = data;
+	}
+
 	offset = offset_in_page(kaddr);
 	for (i = 0; i < nr_pages; i++) {
 		unsigned int bytes = PAGE_SIZE - offset;
@@ -1518,7 +1540,11 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
 		if (bytes > len)
 			bytes = len;
 
-		if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
+		if (!is_vmalloc)
+			page = virt_to_page(data);
+		else
+			page = vmalloc_to_page(data);
+		if (bio_add_pc_page(q, bio, page, bytes,
 				    offset) < bytes) {
 			/* we don't support partial mappings */
 			bio_put(bio);
@@ -1531,6 +1557,7 @@ struct bio *bio_map_kern(struct request_queue *q, void *data, unsigned int len,
 	}
 
 	bio->bi_end_io = bio_map_kern_endio;
+
 	return bio;
 }
 EXPORT_SYMBOL(bio_map_kern);
From patchwork Thu Jun 27 09:29:43 2019
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11019209
From: Damien Le Moal
To: linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-block@vger.kernel.org, Jens Axboe
Cc: Christoph Hellwig, Bart Van Assche
Petersen" , linux-block@vger.kernel.org, Jens Axboe Cc: Christoph Hellwig , Bart Van Assche Subject: [PATCH V5 2/3] sd_zbc: Fix report zones buffer allocation Date: Thu, 27 Jun 2019 18:29:43 +0900 Message-Id: <20190627092944.20957-3-damien.lemoal@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190627092944.20957-1-damien.lemoal@wdc.com> References: <20190627092944.20957-1-damien.lemoal@wdc.com> MIME-Version: 1.0 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP During disk scan and revalidation done with sd_revalidate(), the zones of a zoned disk are checked using the helper function blk_revalidate_disk_zones() if a configuration change is detected (change in the number of zones or zone size). The function blk_revalidate_disk_zones() issues report_zones calls that are very large, that is, to obtain zone information for all zones of the disk with a single command. The size of the report zones command buffer necessary for such large request generally is lower than the disk max_hw_sectors and KMALLOC_MAX_SIZE (4MB) and succeeds on boot (no memory fragmentation), but often fail at run time (e.g. hot-plug event). This causes the disk revalidation to fail and the disk capacity to be changed to 0. This problem can be avoided by using vmalloc() instead of kmalloc() for the buffer allocation. To limit the amount of memory to be allocated, this patch also introduces the arbitrary SD_ZBC_REPORT_MAX_ZONES maximum number of zones to report with a single report zones command. This limit may be lowered further to satisfy the disk max_hw_sectors limit. Finally, to ensure that the vmalloc-ed buffer can always be mapped in a request, the buffer size is further limited to at most queue_max_segments() pages, allowing successful mapping of the buffer even in the worst case scenario where none of the buffer pages are contiguous. Fixes: 515ce6061312 ("scsi: sd_zbc: Fix sd_zbc_report_zones() buffer allocation") Fixes: e76239a3748c ("block: add a report_zones method") Cc: stable@vger.kernel.org Signed-off-by: Damien Le Moal --- drivers/scsi/sd_zbc.c | 83 ++++++++++++++++++++++++++++++++----------- 1 file changed, 62 insertions(+), 21 deletions(-) diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c index 7334024b64f1..ecd967fb39c1 100644 --- a/drivers/scsi/sd_zbc.c +++ b/drivers/scsi/sd_zbc.c @@ -9,6 +9,7 @@ */ #include +#include #include @@ -50,7 +51,7 @@ static void sd_zbc_parse_report(struct scsi_disk *sdkp, u8 *buf, /** * sd_zbc_do_report_zones - Issue a REPORT ZONES scsi command. * @sdkp: The target disk - * @buf: Buffer to use for the reply + * @buf: vmalloc-ed buffer to use for the reply * @buflen: the buffer size * @lba: Start LBA of the report * @partial: Do partial report @@ -79,6 +80,7 @@ static int sd_zbc_do_report_zones(struct scsi_disk *sdkp, unsigned char *buf, put_unaligned_be32(buflen, &cmd[10]); if (partial) cmd[14] = ZBC_REPORT_ZONE_PARTIAL; + memset(buf, 0, buflen); result = scsi_execute_req(sdp, cmd, DMA_FROM_DEVICE, @@ -103,6 +105,48 @@ static int sd_zbc_do_report_zones(struct scsi_disk *sdkp, unsigned char *buf, return 0; } +/* + * Maximum number of zones to get with one report zones command. + */ +#define SD_ZBC_REPORT_MAX_ZONES 8192U + +/** + * Allocate a buffer for report zones reply. 
+ * @disk: The target disk + * @nr_zones: Maximum number of zones to report + * @buflen: Size of the buffer allocated + * @gfp_mask: Memory allocation mask + * + */ +static void *sd_zbc_alloc_report_buffer(struct request_queue *q, + unsigned int nr_zones, size_t *buflen, + gfp_t gfp_mask) +{ + size_t bufsize; + void *buf; + + /* + * Report zone buffer size should be at most 64B times the number of + * zones requested plus the 64B reply header, but should be at least + * SECTOR_SIZE for ATA devices. + * Make sure that this size does not exceed the hardware capabilities. + * Furthermore, since the report zone command cannot be split, make + * sure that the allocated buffer can always be mapped by limiting the + * number of pages allocated to the HBA max segments limit. + */ + nr_zones = min(nr_zones, SD_ZBC_REPORT_MAX_ZONES); + bufsize = roundup((nr_zones + 1) * 64, 512); + bufsize = min_t(size_t, bufsize, + queue_max_hw_sectors(q) << SECTOR_SHIFT); + bufsize = min_t(size_t, bufsize, queue_max_segments(q) << PAGE_SHIFT); + + buf = __vmalloc(bufsize, gfp_mask, PAGE_KERNEL); + if (buf) + *buflen = bufsize; + + return buf; +} + /** * sd_zbc_report_zones - Disk report zones operation. * @disk: The target disk @@ -118,9 +162,9 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, gfp_t gfp_mask) { struct scsi_disk *sdkp = scsi_disk(disk); - unsigned int i, buflen, nrz = *nr_zones; + unsigned int i, nrz = *nr_zones; unsigned char *buf; - size_t offset = 0; + size_t buflen = 0, offset = 0; int ret = 0; if (!sd_is_zoned(sdkp)) @@ -132,16 +176,14 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, * without exceeding the device maximum command size. For ATA disks, * buffers must be aligned to 512B. */ - buflen = min(queue_max_hw_sectors(disk->queue) << 9, - roundup((nrz + 1) * 64, 512)); - buf = kmalloc(buflen, gfp_mask); + buf = sd_zbc_alloc_report_buffer(disk->queue, nrz, &buflen, gfp_mask); if (!buf) return -ENOMEM; ret = sd_zbc_do_report_zones(sdkp, buf, buflen, sectors_to_logical(sdkp->device, sector), true); if (ret) - goto out_free_buf; + goto out; nrz = min(nrz, get_unaligned_be32(&buf[0]) / 64); for (i = 0; i < nrz; i++) { @@ -152,8 +194,8 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector, *nr_zones = nrz; -out_free_buf: - kfree(buf); +out: + kvfree(buf); return ret; } @@ -287,8 +329,6 @@ static int sd_zbc_check_zoned_characteristics(struct scsi_disk *sdkp, return 0; } -#define SD_ZBC_BUF_SIZE 131072U - /** * sd_zbc_check_zones - Check the device capacity and zone sizes * @sdkp: Target disk @@ -304,22 +344,23 @@ static int sd_zbc_check_zoned_characteristics(struct scsi_disk *sdkp, */ static int sd_zbc_check_zones(struct scsi_disk *sdkp, u32 *zblocks) { + size_t bufsize, buflen; u64 zone_blocks = 0; sector_t max_lba, block = 0; unsigned char *buf; unsigned char *rec; - unsigned int buf_len; - unsigned int list_length; int ret; u8 same; /* Get a buffer */ - buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL); + buf = sd_zbc_alloc_report_buffer(sdkp->disk->queue, + SD_ZBC_REPORT_MAX_ZONES, + &bufsize, GFP_NOIO); if (!buf) return -ENOMEM; /* Do a report zone to get max_lba and the same field */ - ret = sd_zbc_do_report_zones(sdkp, buf, SD_ZBC_BUF_SIZE, 0, false); + ret = sd_zbc_do_report_zones(sdkp, buf, bufsize, 0, false); if (ret) goto out_free; @@ -355,12 +396,12 @@ static int sd_zbc_check_zones(struct scsi_disk *sdkp, u32 *zblocks) do { /* Parse REPORT ZONES header */ - list_length = get_unaligned_be32(&buf[0]) + 64; + buflen = min_t(size_t, 
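
For illustration only, the buffer sizing done by sd_zbc_alloc_report_buffer()
reduces to the arithmetic in the small user-space sketch below; the queue limits
used here (max_hw_sectors = 512 sectors, max_segments = 32, 4 KiB pages) are
hypothetical values, not taken from any particular adapter:

	#include <stdio.h>
	#include <stddef.h>

	#define SECTOR_SHIFT	9
	#define PAGE_SHIFT	12

	static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }
	static size_t roundup_sz(size_t x, size_t y) { return ((x + y - 1) / y) * y; }

	int main(void)
	{
		size_t nr_zones = 8192;		/* SD_ZBC_REPORT_MAX_ZONES */
		size_t max_hw_sectors = 512;	/* hypothetical queue limit */
		size_t max_segments = 32;	/* hypothetical queue limit */
		size_t bufsize;

		/* 64 B per zone descriptor plus the 64 B header, 512 B aligned */
		bufsize = roundup_sz((nr_zones + 1) * 64, 512);			/* 524800 */
		bufsize = min_sz(bufsize, max_hw_sectors << SECTOR_SHIFT);	/* 262144 */
		bufsize = min_sz(bufsize, max_segments << PAGE_SHIFT);		/* 131072 */

		printf("report buffer size: %zu bytes\n", bufsize);
		return 0;
	}

With these hypothetical limits, the roughly 513 KiB worst-case buffer is capped
first by max_hw_sectors (256 KiB) and then by the segment count (128 KiB), which
is what guarantees the buffer can be mapped even if none of its pages are
contiguous.
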
From patchwork Thu Jun 27 09:29:44 2019
X-Patchwork-Submitter: Damien Le Moal
X-Patchwork-Id: 11019213
From: Damien Le Moal
To: linux-scsi@vger.kernel.org, "Martin K. Petersen", linux-block@vger.kernel.org, Jens Axboe
Cc: Christoph Hellwig, Bart Van Assche
Subject: [PATCH V5 3/3] block: Limit zone array allocation size
Date: Thu, 27 Jun 2019 18:29:44 +0900
Message-Id: <20190627092944.20957-4-damien.lemoal@wdc.com>
In-Reply-To: <20190627092944.20957-1-damien.lemoal@wdc.com>
References: <20190627092944.20957-1-damien.lemoal@wdc.com>

Limit the size of the struct blk_zone array used in
blk_revalidate_disk_zones() to avoid memory allocation failures leading
to disk revalidation failure. Further reduce the likelihood of these
failures by using kvmalloc() instead of directly allocating contiguous
pages.

Fixes: 515ce6061312 ("scsi: sd_zbc: Fix sd_zbc_report_zones() buffer allocation")
Fixes: e76239a3748c ("block: add a report_zones method")
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal
Reviewed-by: Bart Van Assche
Reviewed-by: Chaitanya Kulkarni
---
 block/blk-zoned.c      | 29 +++++++++++++----------------
 include/linux/blkdev.h |  5 +++++
 2 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index ae7e91bd0618..26f878b9b5f5 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -373,22 +373,20 @@ static inline unsigned long *blk_alloc_zone_bitmap(int node,
  * Allocate an array of struct blk_zone to get nr_zones zone information.
  * The allocated array may be smaller than nr_zones.
  */
-static struct blk_zone *blk_alloc_zones(int node, unsigned int *nr_zones)
+static struct blk_zone *blk_alloc_zones(unsigned int *nr_zones)
 {
-	size_t size = *nr_zones * sizeof(struct blk_zone);
-	struct page *page;
-	int order;
-
-	for (order = get_order(size); order >= 0; order--) {
-		page = alloc_pages_node(node, GFP_NOIO | __GFP_ZERO, order);
-		if (page) {
-			*nr_zones = min_t(unsigned int, *nr_zones,
-				(PAGE_SIZE << order) / sizeof(struct blk_zone));
-			return page_address(page);
-		}
+	struct blk_zone *zones;
+	size_t nrz = min(*nr_zones, BLK_ZONED_REPORT_MAX_ZONES);
+
+	zones = kvcalloc(nrz, sizeof(struct blk_zone), GFP_NOIO);
+	if (!zones) {
+		*nr_zones = 0;
+		return NULL;
 	}
-	return NULL;
+
+	*nr_zones = nrz;
+
+	return zones;
 }
 
 void blk_queue_free_zone_bitmaps(struct request_queue *q)
@@ -443,7 +441,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
 
 	/* Get zone information and initialize seq_zones_bitmap */
 	rep_nr_zones = nr_zones;
-	zones = blk_alloc_zones(q->node, &rep_nr_zones);
+	zones = blk_alloc_zones(&rep_nr_zones);
 	if (!zones)
 		goto out;
 
@@ -480,8 +478,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
 	blk_mq_unfreeze_queue(q);
 
 out:
-	free_pages((unsigned long)zones,
-		   get_order(rep_nr_zones * sizeof(struct blk_zone)));
+	kvfree(zones);
 	kfree(seq_zones_wlock);
 	kfree(seq_zones_bitmap);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 592669bcc536..f7faac856017 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -344,6 +344,11 @@ struct queue_limits {
 
 #ifdef CONFIG_BLK_DEV_ZONED
 
+/*
+ * Maximum number of zones to report with a single report zones command.
+ */
+#define BLK_ZONED_REPORT_MAX_ZONES	8192U
+
 extern unsigned int blkdev_nr_zones(struct block_device *bdev);
 extern int blkdev_report_zones(struct block_device *bdev, sector_t sector,
 			       struct blk_zone *zones,