| Message ID | 20201206061537.3870-1-tom.ty89@gmail.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [RFC] block: avoid the unnecessary blk_bio_discard_split() |
Hello!

On 06.12.2020 9:15, Tom Yan wrote:
> It doesn't seem necessary to have the redundant layer of splitting.
> The request size will even be more consistent / aligned to the cap.
>
> Signed-off-by: Tom Yan <tom.ty89@gmail.com>
> ---
>  block/blk-lib.c   | 5 ++++-
>  block/blk-merge.c | 2 +-
>  block/blk.h       | 8 ++++++--
>  3 files changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/block/blk-lib.c b/block/blk-lib.c
> index e90614fd8d6a..f606184a9050 100644
> --- a/block/blk-lib.c
> +++ b/block/blk-lib.c
> @@ -85,9 +85,12 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
>  		 * is split in device drive, the split ones are very probably
>  		 * to be aligned to discard_granularity of the device's queue.
>  		 */
> -		if (granularity_aligned_lba == sector_mapped)
> +		if (granularity_aligned_lba == sector_mapped) {
>  			req_sects = min_t(sector_t, nr_sects,
>  					  bio_aligned_discard_max_sectors(q));
> +			if (!req_sects)
> +				return -EOPNOTSUPP;
> +		}
>  		else

   Needs to be } else { according to the CodingStyle doc...

>  			req_sects = min_t(sector_t, nr_sects,
>  					  granularity_aligned_lba - sector_mapped);
[...]

MBR, Sergei
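For reference, the brace placement Sergei is pointing at comes from Documentation/process/coding-style.rst: when one branch of an if/else needs braces, both branches get them, and the else sits on the same line as the closing brace. Applying that to the quoted hunk, the fixed-up version would presumably read:

	if (granularity_aligned_lba == sector_mapped) {
		req_sects = min_t(sector_t, nr_sects,
				  bio_aligned_discard_max_sectors(q));
		if (!req_sects)
			return -EOPNOTSUPP;
	} else {
		req_sects = min_t(sector_t, nr_sects,
				  granularity_aligned_lba - sector_mapped);
	}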
diff --git a/block/blk-lib.c b/block/blk-lib.c
index e90614fd8d6a..f606184a9050 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -85,9 +85,12 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
 		 * is split in device drive, the split ones are very probably
 		 * to be aligned to discard_granularity of the device's queue.
 		 */
-		if (granularity_aligned_lba == sector_mapped)
+		if (granularity_aligned_lba == sector_mapped) {
 			req_sects = min_t(sector_t, nr_sects,
 					  bio_aligned_discard_max_sectors(q));
+			if (!req_sects)
+				return -EOPNOTSUPP;
+		}
 		else
 			req_sects = min_t(sector_t, nr_sects,
 					  granularity_aligned_lba - sector_mapped);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index bcf5e4580603..2439216585d9 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -59,6 +59,7 @@ static inline bool req_gap_front_merge(struct request *req, struct bio *bio)
 	return bio_will_gap(req->q, NULL, bio, req->bio);
 }
 
+/* deprecated */
 static struct bio *blk_bio_discard_split(struct request_queue *q,
 					 struct bio *bio,
 					 struct bio_set *bs,
@@ -303,7 +304,6 @@ void __blk_queue_split(struct bio **bio, unsigned int *nr_segs)
 	switch (bio_op(*bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
-		split = blk_bio_discard_split(q, *bio, &q->bio_split, nr_segs);
 		break;
 	case REQ_OP_WRITE_ZEROES:
 		split = blk_bio_write_zeroes_split(q, *bio, &q->bio_split,
diff --git a/block/blk.h b/block/blk.h
index dfab98465db9..e7e31a8c4930 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -281,8 +281,12 @@ static inline unsigned int bio_allowed_max_sectors(struct request_queue *q)
 static inline unsigned int bio_aligned_discard_max_sectors(
 					struct request_queue *q)
 {
-	return round_down(UINT_MAX, q->limits.discard_granularity) >>
-			SECTOR_SHIFT;
+	unsigned int discard_max_sectors, granularity;
+	discard_max_sectors = min(q->limits.max_discard_sectors,
+			bio_allowed_max_sectors(q));
+	/* Zero-sector (unknown) and one-sector granularities are the same. */
+	granularity = max(q->limits.discard_granularity >> SECTOR_SHIFT, 1U);
+	return round_down(discard_max_sectors, granularity);
 }
 
 /*
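To make the reworked cap concrete, here is a stand-alone user-space sketch of the same arithmetic as the patched bio_aligned_discard_max_sectors(). It is not kernel code: struct example_limits, round_down_to() and the numbers in main() are made-up illustrations of the two queue limits the helper reads.

#include <stdio.h>

#define SECTOR_SHIFT 9

/* Simplified stand-ins for the two queue limits the helper looks at. */
struct example_limits {
	unsigned int max_discard_sectors;	/* in sectors */
	unsigned int discard_granularity;	/* in bytes */
};

/* Round x down to a multiple of y (y does not have to be a power of two). */
static unsigned int round_down_to(unsigned int x, unsigned int y)
{
	return x - (x % y);
}

/* Same arithmetic as the patched bio_aligned_discard_max_sectors(). */
static unsigned int aligned_discard_max_sectors(const struct example_limits *l)
{
	unsigned int allowed = 0xffffffffU >> SECTOR_SHIFT;	/* bio_allowed_max_sectors() */
	unsigned int cap = l->max_discard_sectors < allowed ?
			   l->max_discard_sectors : allowed;
	unsigned int granularity = l->discard_granularity >> SECTOR_SHIFT;

	if (granularity < 1)	/* zero (unknown) granularity behaves like one sector */
		granularity = 1;
	return round_down_to(cap, granularity);
}

int main(void)
{
	/* Made-up device: ~2 GiB discard limit, 1 MiB discard granularity. */
	struct example_limits l = {
		.max_discard_sectors = 4194303,
		.discard_granularity = 1U << 20,
	};

	/* 4194303 rounded down to a multiple of 2048 sectors -> 4192256. */
	printf("aligned discard cap: %u sectors\n", aligned_discard_max_sectors(&l));
	return 0;
}

The point of the change is that the cap is now bounded by max_discard_sectors (and bio_allowed_max_sectors()) and rounded down to the granularity, so the bios built in __blkdev_issue_discard() already respect the device limit instead of relying on blk_bio_discard_split() to trim them later; the new !req_sects check in blk-lib.c catches the degenerate case where that cap rounds down to zero.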
It doesn't seem necessary to have the redundant layer of splitting.
The request size will even be more consistent / aligned to the cap.

Signed-off-by: Tom Yan <tom.ty89@gmail.com>
---
 block/blk-lib.c   | 5 ++++-
 block/blk-merge.c | 2 +-
 block/blk.h       | 8 ++++++--
 3 files changed, 11 insertions(+), 4 deletions(-)
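As a rough illustration of the claim about request sizes, the following user-space sketch mimics the splitting loop in __blkdev_issue_discard() with made-up numbers and the partition offset ignored: once the per-iteration cap already comes from a granularity-aligned bio_aligned_discard_max_sectors(), every bio the loop queues has a legal, aligned size, which is why the extra blk_bio_discard_split() pass looks redundant.

#include <stdio.h>

typedef unsigned long long sector_t;

static sector_t min_sect(sector_t a, sector_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	sector_t sector = 10;		/* start LBA, deliberately misaligned */
	sector_t nr_sects = 70000;	/* total discard length in sectors */
	sector_t granularity = 2048;	/* 1 MiB discard granularity, in sectors */
	sector_t aligned_max = 30720;	/* stand-in for bio_aligned_discard_max_sectors() */

	while (nr_sects) {
		/* Next granularity-aligned LBA, like granularity_aligned_lba in blk-lib.c. */
		sector_t aligned_lba = ((sector + granularity - 1) / granularity) * granularity;
		sector_t req_sects;

		if (aligned_lba == sector)
			req_sects = min_sect(nr_sects, aligned_max);
		else
			req_sects = min_sect(nr_sects, aligned_lba - sector);

		printf("discard %llu sectors at LBA %llu\n", req_sects, sector);
		sector += req_sects;
		nr_sects -= req_sects;
	}
	return 0;
}

With these example numbers the loop emits one short bio up to the first granularity boundary (2038 sectors), then full cap-sized bios (30720 sectors each), then the remainder, i.e. the sizes are consistent and aligned to the cap as the commit message says.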