
[PATCHv4 10/10] blk-integrity: improved sg segment mapping

Message ID 20240911201240.3982856-11-kbusch@meta.com (mailing list archive)
State Superseded
Series block integrity merging and counting

Commit Message

Keith Busch Sept. 11, 2024, 8:12 p.m. UTC
From: Keith Busch <kbusch@kernel.org>

Make the integrity mapping more like data mapping, blk_rq_map_sg. Use
the request to validate the segment count, and update the callers so
they don't have to.

Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 block/blk-integrity.c         | 15 +++++++++++----
 drivers/nvme/host/rdma.c      |  4 ++--
 drivers/scsi/scsi_lib.c       | 11 +++--------
 include/linux/blk-integrity.h |  6 ++----
 4 files changed, 18 insertions(+), 18 deletions(-)
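
For a quick sense of the interface change, the caller-side pattern (taken from the scsi_lib.c hunk below) goes from passing the queue and bio and validating the segment count in the caller to a single request-based call. This is an illustrative sketch, not an extra part of the patch:

	/* before: caller supplies queue + bio and checks the count itself */
	count = blk_rq_map_integrity_sg(rq->q, rq->bio, prot_sdb->table.sgl);
	BUG_ON(count > blk_rq_nr_integrity_segments(rq));
	BUG_ON(count > queue_max_integrity_segments(rq->q));

	/* after: the request carries queue, bio and counts; checks move inside */
	count = blk_rq_map_integrity_sg(rq, prot_sdb->table.sgl);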

Comments

Keith Busch Sept. 11, 2024, 11:23 p.m. UTC | #1
On Wed, Sep 11, 2024 at 01:12:40PM -0700, Keith Busch wrote:
> @@ -102,6 +103,12 @@ int blk_rq_map_integrity_sg(struct request_queue *q, struct bio *bio,
> +	 */
> +	BUG_ON(segments > blk_rq_nr_phys_segments(rq));

Doh, this was mixed up with the copy from blk_rq_map_sg. It should say:

	BUG_ON(segments > blk_rq_nr_integrity_segments(rq));

Question though: blk_rq_map_sg uses WARN and scsi used BUG for this
check. But if the condition is true, a buffer overrun occurred. So BUG,
right?
Christoph Hellwig Sept. 12, 2024, 7:46 a.m. UTC | #2
On Wed, Sep 11, 2024 at 05:23:26PM -0600, Keith Busch wrote:
> On Wed, Sep 11, 2024 at 01:12:40PM -0700, Keith Busch wrote:
> > @@ -102,6 +103,12 @@ int blk_rq_map_integrity_sg(struct request_queue *q, struct bio *bio,
> > +	 */
> > +	BUG_ON(segments > blk_rq_nr_phys_segments(rq));
> 
> Doh, this was mixed up with the copy from blk_rq_map_sg. It should say:
> 
> 	BUG_ON(segments > blk_rq_nr_integrity_segments(rq));
> 
> Question though: blk_rq_map_sg uses WARN and scsi used BUG for this
> check. But if the condition is true, a buffer overrun occurred. So BUG,
> right?

That would be my preference, unless we manage to add an error return
condition.  Note that Linus seems to be on his weird anti-BUG crusade
again, though.
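
As a rough sketch only (nothing in this series does this), an error-returning variant of the final checks could replace the BUG_ONs with something like:

	/* hypothetical: let callers handle the overrun instead of crashing;
	 * -EINVAL is an assumed choice of error code */
	if (WARN_ON_ONCE(segments > blk_rq_nr_integrity_segments(rq)) ||
	    WARN_ON_ONCE(segments > queue_max_integrity_segments(q)))
		return -EINVAL;
	return segments;

Callers such as scsi_alloc_sgtables() would then have to check for a negative return before using the count.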
Christoph Hellwig Sept. 12, 2024, 7:47 a.m. UTC | #3
Looks good modulo the BUG thing:

Reviewed-by: Christoph Hellwig <hch@lst.de>
Martin K. Petersen Sept. 13, 2024, 2 a.m. UTC | #4
Keith,

> Make the integrity mapping more like data mapping, blk_rq_map_sg. Use
> the request to validate the segment count, and update the callers so
> they don't have to.

Looks OK except for the phys vs. integrity snafu.

It has been a constant source of problems that we haven't been able to
have a common mapping function that works for both data and metadata.
blk_rq_map_sg() and blk_rq_map_integrity_sg() always seem to get out of
sync in peculiar ways.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
kernel test robot Sept. 13, 2024, 3:45 a.m. UTC | #5
Hi Keith,

kernel test robot noticed the following build warnings:

[auto build test WARNING on axboe-block/for-next]
[also build test WARNING on next-20240912]
[cannot apply to mkp-scsi/for-next jejb-scsi/for-next linus/master v6.11-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Keith-Busch/blk-mq-unconditional-nr_integrity_segments/20240912-041504
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
patch link:    https://lore.kernel.org/r/20240911201240.3982856-11-kbusch%40meta.com
patch subject: [PATCHv4 10/10] blk-integrity: improved sg segment mapping
config: openrisc-randconfig-r072-20240913 (https://download.01.org/0day-ci/archive/20240913/202409131138.fuzBKPCG-lkp@intel.com/config)
compiler: or1k-linux-gcc (GCC) 14.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240913/202409131138.fuzBKPCG-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409131138.fuzBKPCG-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> block/blk-integrity.c:69: warning: Function parameter or struct member 'rq' not described in 'blk_rq_map_integrity_sg'
>> block/blk-integrity.c:69: warning: Excess function parameter 'q' description in 'blk_rq_map_integrity_sg'
>> block/blk-integrity.c:69: warning: Excess function parameter 'bio' description in 'blk_rq_map_integrity_sg'


vim +69 block/blk-integrity.c

7ba1ba12eeef0a Martin K. Petersen 2008-06-30   56  
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   57  /**
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   58   * blk_rq_map_integrity_sg - Map integrity metadata into a scatterlist
13f05c8d8e98bb Martin K. Petersen 2010-09-10   59   * @q:		request queue
13f05c8d8e98bb Martin K. Petersen 2010-09-10   60   * @bio:	bio with integrity metadata attached
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   61   * @sglist:	target scatterlist
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   62   *
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   63   * Description: Map the integrity vectors in request into a
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   64   * scatterlist.  The scatterlist must be big enough to hold all
19f67fc3c069b6 Keith Busch        2024-09-11   65   * elements.  I.e. sized using blk_rq_count_integrity_sg() or
19f67fc3c069b6 Keith Busch        2024-09-11   66   * rq->nr_integrity_segments.
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   67   */
19f67fc3c069b6 Keith Busch        2024-09-11   68  int blk_rq_map_integrity_sg(struct request *rq, struct scatterlist *sglist)
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  @69  {
d57a5f7c6605f1 Kent Overstreet    2013-11-23   70  	struct bio_vec iv, ivprv = { NULL };
19f67fc3c069b6 Keith Busch        2024-09-11   71  	struct request_queue *q = rq->q;
13f05c8d8e98bb Martin K. Petersen 2010-09-10   72  	struct scatterlist *sg = NULL;
19f67fc3c069b6 Keith Busch        2024-09-11   73  	struct bio *bio = rq->bio;
13f05c8d8e98bb Martin K. Petersen 2010-09-10   74  	unsigned int segments = 0;
d57a5f7c6605f1 Kent Overstreet    2013-11-23   75  	struct bvec_iter iter;
d57a5f7c6605f1 Kent Overstreet    2013-11-23   76  	int prev = 0;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   77  
d57a5f7c6605f1 Kent Overstreet    2013-11-23   78  	bio_for_each_integrity_vec(iv, bio, iter) {
d57a5f7c6605f1 Kent Overstreet    2013-11-23   79  		if (prev) {
3dccdae54fe836 Christoph Hellwig  2018-09-24   80  			if (!biovec_phys_mergeable(q, &ivprv, &iv))
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   81  				goto new_segment;
d57a5f7c6605f1 Kent Overstreet    2013-11-23   82  			if (sg->length + iv.bv_len > queue_max_segment_size(q))
13f05c8d8e98bb Martin K. Petersen 2010-09-10   83  				goto new_segment;
13f05c8d8e98bb Martin K. Petersen 2010-09-10   84  
d57a5f7c6605f1 Kent Overstreet    2013-11-23   85  			sg->length += iv.bv_len;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   86  		} else {
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   87  new_segment:
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   88  			if (!sg)
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   89  				sg = sglist;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   90  			else {
c8164d8931fdee Paolo Bonzini      2013-03-20   91  				sg_unmark_end(sg);
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   92  				sg = sg_next(sg);
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   93  			}
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   94  
d57a5f7c6605f1 Kent Overstreet    2013-11-23   95  			sg_set_page(sg, iv.bv_page, iv.bv_len, iv.bv_offset);
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   96  			segments++;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   97  		}
7ba1ba12eeef0a Martin K. Petersen 2008-06-30   98  
d57a5f7c6605f1 Kent Overstreet    2013-11-23   99  		prev = 1;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  100  		ivprv = iv;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  101  	}
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  102  
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  103  	if (sg)
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  104  		sg_mark_end(sg);
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  105  
19f67fc3c069b6 Keith Busch        2024-09-11  106  	/*
19f67fc3c069b6 Keith Busch        2024-09-11  107  	 * Something must have been wrong if the figured number of segment
19f67fc3c069b6 Keith Busch        2024-09-11  108  	 * is bigger than number of req's physical integrity segments
19f67fc3c069b6 Keith Busch        2024-09-11  109  	 */
19f67fc3c069b6 Keith Busch        2024-09-11  110  	BUG_ON(segments > blk_rq_nr_phys_segments(rq));
19f67fc3c069b6 Keith Busch        2024-09-11  111  	BUG_ON(segments > queue_max_integrity_segments(q));
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  112  	return segments;
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  113  }
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  114  EXPORT_SYMBOL(blk_rq_map_integrity_sg);
7ba1ba12eeef0a Martin K. Petersen 2008-06-30  115
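
The warnings above come from the kernel-doc block still documenting the old @q and @bio parameters after the signature change to take a struct request. A likely follow-up fix (a sketch, not part of this patch) would update the header along these lines:

	/**
	 * blk_rq_map_integrity_sg - Map integrity metadata into a scatterlist
	 * @rq:		request with integrity metadata attached
	 * @sglist:	target scatterlist
	 *
	 * Description: Map the integrity vectors in request into a
	 * scatterlist.  The scatterlist must be big enough to hold all
	 * elements.  I.e. sized using blk_rq_count_integrity_sg() or
	 * rq->nr_integrity_segments.
	 */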

Patch

diff --git a/block/blk-integrity.c b/block/blk-integrity.c
index 1d82b18e06f8e..549480aa2a069 100644
--- a/block/blk-integrity.c
+++ b/block/blk-integrity.c
@@ -62,19 +62,20 @@  int blk_rq_count_integrity_sg(struct request_queue *q, struct bio *bio)
  *
  * Description: Map the integrity vectors in request into a
  * scatterlist.  The scatterlist must be big enough to hold all
- * elements.  I.e. sized using blk_rq_count_integrity_sg().
+ * elements.  I.e. sized using blk_rq_count_integrity_sg() or
+ * rq->nr_integrity_segments.
  */
-int blk_rq_map_integrity_sg(struct request_queue *q, struct bio *bio,
-			    struct scatterlist *sglist)
+int blk_rq_map_integrity_sg(struct request *rq, struct scatterlist *sglist)
 {
 	struct bio_vec iv, ivprv = { NULL };
+	struct request_queue *q = rq->q;
 	struct scatterlist *sg = NULL;
+	struct bio *bio = rq->bio;
 	unsigned int segments = 0;
 	struct bvec_iter iter;
 	int prev = 0;
 
 	bio_for_each_integrity_vec(iv, bio, iter) {
-
 		if (prev) {
 			if (!biovec_phys_mergeable(q, &ivprv, &iv))
 				goto new_segment;
@@ -102,6 +103,12 @@  int blk_rq_map_integrity_sg(struct request_queue *q, struct bio *bio,
 	if (sg)
 		sg_mark_end(sg);
 
+	/*
+	 * Something must have been wrong if the figured number of segment
+	 * is bigger than number of req's physical integrity segments
+	 */
+	BUG_ON(segments > blk_rq_nr_phys_segments(rq));
+	BUG_ON(segments > queue_max_integrity_segments(q));
 	return segments;
 }
 EXPORT_SYMBOL(blk_rq_map_integrity_sg);
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 537844ee906b3..0d6d8431208a5 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1504,8 +1504,8 @@  static int nvme_rdma_dma_map_req(struct ib_device *ibdev, struct request *rq,
 			goto out_unmap_sg;
 		}
 
-		req->metadata_sgl->nents = blk_rq_map_integrity_sg(rq->q,
-				rq->bio, req->metadata_sgl->sg_table.sgl);
+		req->metadata_sgl->nents = blk_rq_map_integrity_sg(rq,
+				req->metadata_sgl->sg_table.sgl);
 		*pi_count = ib_dma_map_sg(ibdev,
 					  req->metadata_sgl->sg_table.sgl,
 					  req->metadata_sgl->nents,
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index fa59b54a8f4c6..16e97925606b6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1163,7 +1163,6 @@  blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd)
 
 	if (blk_integrity_rq(rq)) {
 		struct scsi_data_buffer *prot_sdb = cmd->prot_sdb;
-		int ivecs;
 
 		if (WARN_ON_ONCE(!prot_sdb)) {
 			/*
@@ -1175,19 +1174,15 @@  blk_status_t scsi_alloc_sgtables(struct scsi_cmnd *cmd)
 			goto out_free_sgtables;
 		}
 
-		ivecs = blk_rq_nr_integrity_segments(rq);
-		if (sg_alloc_table_chained(&prot_sdb->table, ivecs,
+		if (sg_alloc_table_chained(&prot_sdb->table,
+				blk_rq_nr_integrity_segments(rq),
 				prot_sdb->table.sgl,
 				SCSI_INLINE_PROT_SG_CNT)) {
 			ret = BLK_STS_RESOURCE;
 			goto out_free_sgtables;
 		}
 
-		count = blk_rq_map_integrity_sg(rq->q, rq->bio,
-						prot_sdb->table.sgl);
-		BUG_ON(count > ivecs);
-		BUG_ON(count > queue_max_integrity_segments(rq->q));
-
+		count = blk_rq_map_integrity_sg(rq, prot_sdb->table.sgl);
 		cmd->prot_sdb = prot_sdb;
 		cmd->prot_sdb->table.nents = count;
 	}
diff --git a/include/linux/blk-integrity.h b/include/linux/blk-integrity.h
index 9c7029aa9c22a..6a62885a6beab 100644
--- a/include/linux/blk-integrity.h
+++ b/include/linux/blk-integrity.h
@@ -25,8 +25,7 @@  static inline bool queue_limits_stack_integrity_bdev(struct queue_limits *t,
 }
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
-int blk_rq_map_integrity_sg(struct request_queue *, struct bio *,
-				   struct scatterlist *);
+int blk_rq_map_integrity_sg(struct request *, struct scatterlist *);
 int blk_rq_count_integrity_sg(struct request_queue *, struct bio *);
 int blk_rq_integrity_map_user(struct request *rq, void __user *ubuf,
 			      ssize_t bytes, u32 seed);
@@ -98,8 +97,7 @@  static inline int blk_rq_count_integrity_sg(struct request_queue *q,
 {
 	return 0;
 }
-static inline int blk_rq_map_integrity_sg(struct request_queue *q,
-					  struct bio *b,
+static inline int blk_rq_map_integrity_sg(struct request *q,
 					  struct scatterlist *s)
 {
 	return 0;