From patchwork Fri Nov 10 01:01:36 2023
X-Patchwork-Submitter: Sarthak Kukreti
X-Patchwork-Id: 13451959
From: Sarthak Kukreti
To: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Jens Axboe, Mike Snitzer, "Darrick J. Wong", Christoph Hellwig,
    Dave Chinner, Brian Foster, Sarthak Kukreti
Subject: [PATCH v9 1/3] block: Introduce provisioning primitives
Date: Thu, 9 Nov 2023 17:01:36 -0800
Message-ID: <20231110010139.3901150-2-sarthakkukreti@chromium.org>
In-Reply-To: <20231110010139.3901150-1-sarthakkukreti@chromium.org>
References: <20231110010139.3901150-1-sarthakkukreti@chromium.org>

Introduce the block request REQ_OP_PROVISION. The intent of this request
is to ask the underlying storage to preallocate disk space for the given
block range. Block devices that support this capability will export a
provision limit within their request queues.

This patch also adds the capability to call fallocate() in mode 0 on
block devices, which sends REQ_OP_PROVISION to the block device for the
specified range.

Signed-off-by: Sarthak Kukreti
Signed-off-by: Mike Snitzer
---
 block/blk-core.c          |  5 ++++
 block/blk-lib.c           | 51 +++++++++++++++++++++++++++++++++++++++
 block/blk-merge.c         | 18 ++++++++++++++
 block/blk-settings.c      | 19 +++++++++++++++
 block/blk-sysfs.c         |  9 +++++++
 block/bounce.c            |  1 +
 block/fops.c              |  5 ++++
 include/linux/bio.h       |  6 +++--
 include/linux/blk_types.h |  5 +++-
 include/linux/blkdev.h    | 16 ++++++++++++
 10 files changed, 132 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 9d51e9894ece..e1615ffa71bc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -123,6 +123,7 @@ static const char *const blk_op_name[] = {
 	REQ_OP_NAME(WRITE_ZEROES),
 	REQ_OP_NAME(DRV_IN),
 	REQ_OP_NAME(DRV_OUT),
+	REQ_OP_NAME(PROVISION)
 };
 #undef REQ_OP_NAME
@@ -792,6 +793,10 @@ void submit_bio_noacct(struct bio *bio)
 		if (!q->limits.max_write_zeroes_sectors)
 			goto not_supported;
 		break;
+	case REQ_OP_PROVISION:
+		if (!q->limits.max_provision_sectors)
+			goto not_supported;
+		break;
 	default:
 		break;
 	}
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e59c3069e835..b1f720e198cd 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -343,3 +343,54 @@ int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 	return ret;
 }
 EXPORT_SYMBOL(blkdev_issue_secure_erase);
+
+/**
+ * blkdev_issue_provision - provision a block range
+ * @bdev:	blockdev to write
+ * @sector:	start sector
+ * @nr_sects:	number of sectors to provision
+ * @gfp_mask:	memory allocation flags (for bio_alloc)
+ *
+ * Description:
+ *  Issues a provision request to the block device for the range of sectors.
+ *  For thinly provisioned block devices, this acts as a signal for the
+ *  underlying storage pool to allocate space for this block range.
+ */
+int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp)
+{
+	sector_t bs_mask = (bdev_logical_block_size(bdev) >> 9) - 1;
+	unsigned int max_sectors = bdev_max_provision_sectors(bdev);
+	struct bio *bio = NULL;
+	struct blk_plug plug;
+	int ret = 0;
+
+	if (max_sectors == 0)
+		return -EOPNOTSUPP;
+	if ((sector | nr_sects) & bs_mask)
+		return -EINVAL;
+	if (bdev_read_only(bdev))
+		return -EPERM;
+
+	blk_start_plug(&plug);
+	for (;;) {
+		unsigned int req_sects = min_t(sector_t, nr_sects, max_sectors);
+
+		bio = blk_next_bio(bio, bdev, 0, REQ_OP_PROVISION, gfp);
+		bio->bi_iter.bi_sector = sector;
+		bio->bi_iter.bi_size = req_sects << SECTOR_SHIFT;
+
+		sector += req_sects;
+		nr_sects -= req_sects;
+		if (!nr_sects) {
+			ret = submit_bio_wait(bio);
+			bio_put(bio);
+			break;
+		}
+		cond_resched();
+	}
+	blk_finish_plug(&plug);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(blkdev_issue_provision);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 65e75efa9bd3..83e516d2121f 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -158,6 +158,21 @@ static struct bio *bio_split_write_zeroes(struct bio *bio,
 	return bio_split(bio, lim->max_write_zeroes_sectors, GFP_NOIO, bs);
 }
+static struct bio *bio_split_provision(struct bio *bio,
+				       const struct queue_limits *lim,
+				       unsigned int *nsegs, struct bio_set *bs)
+{
+	*nsegs = 0;
+
+	if (!lim->max_provision_sectors)
+		return NULL;
+
+	if (bio_sectors(bio) <= lim->max_provision_sectors)
+		return NULL;
+
+	return bio_split(bio, lim->max_provision_sectors, GFP_NOIO, bs);
+}
+
 /*
  * Return the maximum number of sectors from the start of a bio that may be
  * submitted as a single request to a block device. If enough sectors remain,
@@ -366,6 +381,9 @@ struct bio *__bio_split_to_limits(struct bio *bio,
 	case REQ_OP_WRITE_ZEROES:
 		split = bio_split_write_zeroes(bio, lim, nr_segs, bs);
 		break;
+	case REQ_OP_PROVISION:
+		split = bio_split_provision(bio, lim, nr_segs, bs);
+		break;
 	default:
 		split = bio_split_rw(bio, lim, nr_segs, bs,
 				get_max_io_size(bio, lim) << SECTOR_SHIFT);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 0046b447268f..c81820406f2f 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -59,6 +59,7 @@ void blk_set_default_limits(struct queue_limits *lim)
 	lim->zoned = BLK_ZONED_NONE;
 	lim->zone_write_granularity = 0;
 	lim->dma_alignment = 511;
+	lim->max_provision_sectors = 0;
 }
 /**
@@ -82,6 +83,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
 	lim->max_dev_sectors = UINT_MAX;
 	lim->max_write_zeroes_sectors = UINT_MAX;
 	lim->max_zone_append_sectors = UINT_MAX;
+	lim->max_provision_sectors = UINT_MAX;
 }
 EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -208,6 +210,20 @@ void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 }
 EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);
+/**
+ * blk_queue_max_provision_sectors - set max sectors for a single provision
+ *
+ * @q:  the request queue for the device
+ * @max_provision_sectors: maximum number of sectors to provision per command
+ **/
+
+void blk_queue_max_provision_sectors(struct request_queue *q,
+		unsigned int max_provision_sectors)
+{
+	q->limits.max_provision_sectors = max_provision_sectors;
+}
+EXPORT_SYMBOL(blk_queue_max_provision_sectors);
+
 /**
  * blk_queue_max_zone_append_sectors - set max sectors for a single zone append
  * @q:  the request queue for the device
@@ -578,6 +594,9 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->max_segment_size = min_not_zero(t->max_segment_size,
					   b->max_segment_size);
+	t->max_provision_sectors = min(t->max_provision_sectors,
+				       b->max_provision_sectors);
+
 	t->misaligned |= b->misaligned;

 	alignment = queue_limit_alignment_offset(b, start);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 63e481262336..9a78c36f3199 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -199,6 +199,13 @@ static ssize_t queue_discard_zeroes_data_show(struct request_queue *q, char *page)
 	return queue_var_show(0, page);
 }
+static ssize_t queue_provision_max_show(struct request_queue *q,
+		char *page)
+{
+	return sprintf(page, "%llu\n",
+		(unsigned long long)q->limits.max_provision_sectors << 9);
+}
+
 static ssize_t queue_write_same_max_show(struct request_queue *q, char *page)
 {
 	return queue_var_show(0, page);
@@ -507,6 +514,7 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes");
 QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes");
 QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
+QUEUE_RO_ENTRY(queue_provision_max, "provision_max_bytes");
 QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes");
 QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes");
 QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes");
@@ -633,6 +641,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_discard_max_entry.attr,
 	&queue_discard_max_hw_entry.attr,
 	&queue_discard_zeroes_data_entry.attr,
+	&queue_provision_max_entry.attr,
 	&queue_write_same_max_entry.attr,
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
diff --git a/block/bounce.c b/block/bounce.c
index 7cfcb242f9a1..ab9d8723ae64 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -176,6 +176,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_PROVISION:
 		break;
 	default:
 		bio_for_each_segment(bv, bio_src, iter)
diff --git a/block/fops.c b/block/fops.c
index 0abaac705daf..193caf7ab500 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -789,6 +789,11 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	 * de-allocate mode calls to fallocate().
 	 */
 	switch (mode) {
+	case 0:
+	case FALLOC_FL_KEEP_SIZE:
+		error = blkdev_issue_provision(bdev, start >> SECTOR_SHIFT,
+					       len >> SECTOR_SHIFT, GFP_KERNEL);
+		break;
 	case FALLOC_FL_ZERO_RANGE:
 	case FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE:
 		error = truncate_bdev_range(bdev, file_to_blk_mode(file), start, end);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 41d417ee1349..aa2119f7cd5a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -57,7 +57,8 @@ static inline bool bio_has_data(struct bio *bio)
 	    bio->bi_iter.bi_size &&
 	    bio_op(bio) != REQ_OP_DISCARD &&
 	    bio_op(bio) != REQ_OP_SECURE_ERASE &&
-	    bio_op(bio) != REQ_OP_WRITE_ZEROES)
+	    bio_op(bio) != REQ_OP_WRITE_ZEROES &&
+	    bio_op(bio) != REQ_OP_PROVISION)
 		return true;

 	return false;
@@ -67,7 +68,8 @@ static inline bool bio_no_advance_iter(const struct bio *bio)
 {
 	return bio_op(bio) == REQ_OP_DISCARD ||
 	       bio_op(bio) == REQ_OP_SECURE_ERASE ||
-	       bio_op(bio) == REQ_OP_WRITE_ZEROES;
+	       bio_op(bio) == REQ_OP_WRITE_ZEROES ||
+	       bio_op(bio) == REQ_OP_PROVISION;
 }

 static inline void *bio_data(struct bio *bio)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d5c5e59ddbd2..e55828ddfafe 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -397,7 +397,10 @@ enum req_op {
 	REQ_OP_DRV_IN	= (__force blk_opf_t)34,
 	REQ_OP_DRV_OUT	= (__force blk_opf_t)35,

-	REQ_OP_LAST	= (__force blk_opf_t)36,
+	/* request device to provision block */
+	REQ_OP_PROVISION = (__force blk_opf_t)37,
+
+	REQ_OP_LAST	= (__force blk_opf_t)38,
 };

 enum req_flag_bits {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 51fa7ffdee83..c40f2b590e5d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -308,6 +308,7 @@ struct queue_limits {
 	unsigned int	discard_granularity;
 	unsigned int	discard_alignment;
 	unsigned int	zone_write_granularity;
+	unsigned int	max_provision_sectors;

 	unsigned short	max_segments;
 	unsigned short	max_integrity_segments;
@@ -900,6 +901,8 @@ extern void blk_queue_max_discard_sectors(struct request_queue *q,
 		unsigned int max_discard_sectors);
 extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
 		unsigned int max_write_same_sectors);
+extern void blk_queue_max_provision_sectors(struct request_queue *q,
+		unsigned int max_provision_sectors);
 extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);
 extern void blk_queue_max_zone_append_sectors(struct request_queue *q,
 		unsigned int max_zone_append_sectors);
@@ -1038,6 +1041,9 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 int blkdev_issue_secure_erase(struct block_device *bdev, sector_t sector,
 		sector_t nr_sects, gfp_t gfp);

+extern int blkdev_issue_provision(struct block_device *bdev, sector_t sector,
+		sector_t nr_sects, gfp_t gfp_mask);
+
 #define BLKDEV_ZERO_NOUNMAP	(1 << 0)  /* do not free blocks */
 #define BLKDEV_ZERO_NOFALLBACK	(1 << 1)  /* don't write explicit zeroes */
@@ -1117,6 +1123,11 @@ static inline unsigned short queue_max_discard_segments(const struct request_queue *q)
 	return q->limits.max_discard_segments;
 }
+static inline unsigned short queue_max_provision_sectors(const struct request_queue *q)
+{
+	return q->limits.max_provision_sectors;
+}
+
 static inline unsigned int queue_max_segment_size(const struct request_queue *q)
 {
 	return q->limits.max_segment_size;
@@ -1259,6 +1270,11 @@ static inline bool bdev_nowait(struct block_device *bdev)
 	return test_bit(QUEUE_FLAG_NOWAIT, &bdev_get_queue(bdev)->queue_flags);
 }
+static inline unsigned int bdev_max_provision_sectors(struct block_device *bdev)
+{
+	return bdev_get_queue(bdev)->limits.max_provision_sectors;
+}
+
 static inline enum blk_zoned_model bdev_zoned_model(struct block_device *bdev)
 {
 	return blk_queue_zoned_model(bdev_get_queue(bdev));
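For context, a minimal sketch of how a block driver might opt into the new
limit; only blk_queue_max_provision_sectors() comes from this patch, and the
my_drv_* names are hypothetical. Without a non-zero limit, submit_bio_noacct()
completes REQ_OP_PROVISION bios with BLK_STS_NOTSUPP.

#include <linux/blkdev.h>

/* Hypothetical driver queue setup: advertise provisioning support. */
static void my_drv_setup_queue(struct request_queue *q,
			       unsigned int hw_max_sectors)
{
	/* Largest number of sectors one provision command may cover. */
	blk_queue_max_provision_sectors(q, hw_max_sectors);

	/* The limit then appears in sysfs as queue/provision_max_bytes. */
}
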
Wong" , Christoph Hellwig , Dave Chinner , Brian Foster , Sarthak Kukreti Subject: [PATCH v9 2/3] dm: Add block provisioning support Date: Thu, 9 Nov 2023 17:01:37 -0800 Message-ID: <20231110010139.3901150-3-sarthakkukreti@chromium.org> X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog In-Reply-To: <20231110010139.3901150-1-sarthakkukreti@chromium.org> References: <20231110010139.3901150-1-sarthakkukreti@chromium.org> Precedence: bulk X-Mailing-List: linux-fsdevel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add block provisioning support for device-mapper targets. dm-crypt and dm-linear will, by default, passthrough REQ_OP_PROVISION requests to the underlying device, if supported. Signed-off-by: Sarthak Kukreti Signed-off-by: Mike Snitzer --- drivers/md/dm-crypt.c | 4 +++- drivers/md/dm-linear.c | 1 + drivers/md/dm-table.c | 23 +++++++++++++++++++++++ drivers/md/dm.c | 7 +++++++ include/linux/device-mapper.h | 17 +++++++++++++++++ 5 files changed, 51 insertions(+), 1 deletion(-) diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index 6de107aff331..1d18926ae801 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -3365,6 +3365,8 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) cc->tag_pool_max_sectors <<= cc->sector_shift; } + ti->num_provision_bios = 1; + ret = -ENOMEM; cc->io_queue = alloc_workqueue("kcryptd_io/%s", WQ_MEM_RECLAIM, 1, devname); if (!cc->io_queue) { @@ -3419,7 +3421,7 @@ static int crypt_map(struct dm_target *ti, struct bio *bio) * - for REQ_OP_DISCARD caller must use flush if IO ordering matters */ if (unlikely(bio->bi_opf & REQ_PREFLUSH || - bio_op(bio) == REQ_OP_DISCARD)) { + bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_PROVISION)) { bio_set_dev(bio, cc->dev->bdev); if (bio_sectors(bio)) bio->bi_iter.bi_sector = cc->start + diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c index 2d3e186ca87e..8d2dc9dfe93e 100644 --- a/drivers/md/dm-linear.c +++ b/drivers/md/dm-linear.c @@ -62,6 +62,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv) ti->num_discard_bios = 1; ti->num_secure_erase_bios = 1; ti->num_write_zeroes_bios = 1; + ti->num_provision_bios = 1; ti->private = lc; return 0; diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 198d38b53322..f29100fc1a60 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -1877,6 +1877,26 @@ static bool dm_table_supports_write_zeroes(struct dm_table *t) return true; } +static int device_provision_capable(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data) +{ + return bdev_max_provision_sectors(dev->bdev); +} + +static bool dm_table_supports_provision(struct dm_table *t) +{ + for (unsigned int i = 0; i < t->num_targets; i++) { + struct dm_target *ti = dm_table_get_target(t, i); + + if (ti->provision_supported || + (ti->type->iterate_devices && + ti->type->iterate_devices(ti, device_provision_capable, NULL))) + return true; + } + + return false; +} + static int device_not_nowait_capable(struct dm_target *ti, struct dm_dev *dev, sector_t start, sector_t len, void *data) { @@ -2010,6 +2030,9 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, if (!dm_table_supports_write_zeroes(t)) q->limits.max_write_zeroes_sectors = 0; + if (!dm_table_supports_provision(t)) + q->limits.max_provision_sectors = 0; + dm_table_verify_integrity(t); /* diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 23c32cd1f1d8..2e207fa0b0f4 100644 --- 
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1609,6 +1609,7 @@ static bool is_abnormal_io(struct bio *bio)
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_PROVISION:
 		return true;
 	default:
 		break;
@@ -1645,6 +1646,12 @@ static blk_status_t __process_abnormal_io(struct clone_info *ci,
 		if (ti->max_write_zeroes_granularity)
 			max_granularity = max_sectors;
 		break;
+	case REQ_OP_PROVISION:
+		num_bios = ti->num_provision_bios;
+		max_sectors = limits->max_provision_sectors;
+		if (ti->max_provision_granularity)
+			max_granularity = max_sectors;
+		break;
 	default:
 		break;
 	}
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 772ab4d74d94..c1d674d32444 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -333,6 +333,12 @@ struct dm_target {
 	 */
 	unsigned int num_write_zeroes_bios;

+	/*
+	 * The number of PROVISION bios that will be submitted to the target.
+	 * The bio number can be accessed with dm_bio_get_target_bio_nr.
+	 */
+	unsigned int num_provision_bios;
+
 	/*
 	 * The minimum number of extra bytes allocated in each io for the
 	 * target to use.
@@ -357,6 +363,11 @@ struct dm_target {
 	 */
 	bool discards_supported:1;

+	/* Set if this target needs to receive provision requests regardless of
+	 * whether or not its underlying devices have support.
+	 */
+	bool provision_supported:1;
+
 	/*
 	 * Set if this target requires that discards be split on
 	 * 'max_discard_sectors' boundaries.
@@ -375,6 +386,12 @@ struct dm_target {
 	 */
 	bool max_write_zeroes_granularity:1;

+	/*
+	 * Set if this target requires that provisions be split on
+	 * 'max_provision_sectors' boundaries.
+	 */
+	bool max_provision_granularity:1;
+
 	/*
 	 * Set if we need to limit the number of in-flight bios when swapping.
 	 */
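For context, a hypothetical device-mapper target constructor fragment showing
the opt-in fields this patch adds to struct dm_target; the example_ctr name
and argument handling are placeholders, and the values mirror what dm-linear
and the dm core expect.

#include <linux/device-mapper.h>

static int example_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	/* Forward one REQ_OP_PROVISION bio per target, like dm-linear. */
	ti->num_provision_bios = 1;

	/*
	 * Set provision_supported if the target handles REQ_OP_PROVISION
	 * itself even when its underlying devices do not, so that
	 * dm_table_supports_provision() keeps max_provision_sectors non-zero.
	 */
	ti->provision_supported = true;

	/* Ask dm core to split provision bios on max_provision_sectors. */
	ti->max_provision_granularity = true;

	return 0;
}
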
From patchwork Fri Nov 10 01:01:38 2023
X-Patchwork-Submitter: Sarthak Kukreti
X-Patchwork-Id: 13451961
From: Sarthak Kukreti
To: dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Jens Axboe, Mike Snitzer, "Darrick J. Wong", Christoph Hellwig,
    Dave Chinner, Brian Foster, Sarthak Kukreti
Subject: [PATCH v9 3/3] loop: Add support for provision requests
Date: Thu, 9 Nov 2023 17:01:38 -0800
Message-ID: <20231110010139.3901150-4-sarthakkukreti@chromium.org>
In-Reply-To: <20231110010139.3901150-1-sarthakkukreti@chromium.org>
References: <20231110010139.3901150-1-sarthakkukreti@chromium.org>

Add support for provision requests to loopback devices. Loop devices
configure provision support based on whether the underlying block
device/file can support the provision request and, upon receiving a
provision bio, map it to the backing device/storage. For loop devices
over files, a REQ_OP_PROVISION request translates to a fallocate() mode 0
call on the backing file.

Caveat: for filesystems with copy-on-write semantics, REQ_OP_PROVISION
only guarantees that the next write to the provisioned range will not
fail with ENOSPC.

Signed-off-by: Sarthak Kukreti
Signed-off-by: Mike Snitzer
---
 drivers/block/loop.c | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 9f2d412fc560..c84d4acdb18c 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -311,16 +311,20 @@ static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos,
 {
 	/*
 	 * We use fallocate to manipulate the space mappings used by the image
-	 * a.k.a. discard/zerorange.
+	 * a.k.a. discard/provision/zerorange.
 	 */
 	struct file *file = lo->lo_backing_file;
 	int ret;

-	mode |= FALLOC_FL_KEEP_SIZE;
+	if (mode & (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_ZERO_RANGE) &&
+	    !bdev_max_discard_sectors(lo->lo_device))
+		return -EOPNOTSUPP;

-	if (!bdev_max_discard_sectors(lo->lo_device))
+	if (mode == 0 && !bdev_max_provision_sectors(lo->lo_device))
 		return -EOPNOTSUPP;

+	mode |= FALLOC_FL_KEEP_SIZE;
+
 	ret = file->f_op->fallocate(file, mode, pos, blk_rq_bytes(rq));
 	if (unlikely(ret && ret != -EINVAL && ret != -EOPNOTSUPP))
 		return -EIO;
@@ -488,6 +492,13 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
 				FALLOC_FL_PUNCH_HOLE);
 	case REQ_OP_DISCARD:
 		return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
+	case REQ_OP_PROVISION:
+		/*
+		 * fallocate() guarantees that the next writes to the
+		 * provisioned range will succeed without ENOSPC but does not
+		 * guarantee that every write to this range will succeed.
+		 */
+		return lo_fallocate(lo, rq, pos, 0);
 	case REQ_OP_WRITE:
 		if (cmd->use_aio)
 			return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
@@ -754,6 +765,25 @@ static void loop_sysfs_exit(struct loop_device *lo)
 						&loop_attribute_group);
 }
+static void loop_config_provision(struct loop_device *lo)
+{
+	struct file *file = lo->lo_backing_file;
+	struct inode *inode = file->f_mapping->host;
+
+	/*
+	 * If the backing device is a block device, mirror its provisioning
+	 * capability.
+	 */
+	if (S_ISBLK(inode->i_mode)) {
+		blk_queue_max_provision_sectors(lo->lo_queue,
+			bdev_max_provision_sectors(I_BDEV(inode)));
+	} else if (file->f_op->fallocate) {
+		blk_queue_max_provision_sectors(lo->lo_queue, UINT_MAX >> 9);
+	} else {
+		blk_queue_max_provision_sectors(lo->lo_queue, 0);
+	}
+}
+
 static void loop_config_discard(struct loop_device *lo)
 {
 	struct file *file = lo->lo_backing_file;
@@ -1092,6 +1122,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	blk_queue_io_min(lo->lo_queue, bsize);

 	loop_config_discard(lo);
+	loop_config_provision(lo);
 	loop_update_rotational(lo);
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
@@ -1304,6 +1335,7 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
 	}

 	loop_config_discard(lo);
+	loop_config_provision(lo);

 	/* update dio if lo_offset or transfer is changed */
 	__loop_update_dio(lo, lo->use_dio);
@@ -1857,6 +1889,7 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 	case REQ_OP_FLUSH:
 	case REQ_OP_DISCARD:
 	case REQ_OP_WRITE_ZEROES:
+	case REQ_OP_PROVISION:
 		cmd->use_aio = false;
 		break;
 	default:
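
To close, an illustrative userspace sketch tying the series together. It
assumes an already-configured, file-backed loop device; /dev/loop0 and the
4 MiB length are placeholders. It reads the limit this series exports via
sysfs, then provisions a range; with this patch the request is forwarded as
a mode-0 fallocate() on the backing file.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    unsigned long long max_bytes = 0;
    FILE *f = fopen("/sys/block/loop0/queue/provision_max_bytes", "r");

    if (f) {
        if (fscanf(f, "%llu", &max_bytes) != 1)
            max_bytes = 0;
        fclose(f);
    }
    printf("provision_max_bytes: %llu\n", max_bytes);

    int fd = open("/dev/loop0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Provision the first 4 MiB; for a file-backed loop device this is
     * forwarded as fallocate(backing_file, FALLOC_FL_KEEP_SIZE, ...). */
    if (fallocate(fd, 0, 0, 4ULL << 20) < 0)
        fprintf(stderr, "fallocate: %s\n", strerror(errno));

    close(fd);
    return 0;
}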