Message ID: 20190428073932.9898-3-ming.lei@redhat.com (mailing list archive)
State: New, archived
Series: scsi: core: avoid big pre-allocation for sg list
On Sun, 2019-04-28 at 15:39 +0800, Ming Lei wrote:
> Now scsi_mq_setup_tags() pre-allocates a big buffer for protection
> sg entries, and the buffer size is scsi_mq_sgl_size().
>
> This isn't correct: scsi_mq_sgl_size() is meant to size the
> pre-allocated sg entries for IO data, and the protection data buffer
> is much smaller. For example, one 512-byte sector needs 8 bytes of
> protection data, and the max sector count for one request is 2560
> (BLK_DEF_MAX_SECTORS), so the max protection data size is just 20KB.
>
> The usual case is that one bio builds one single bip segment. Because
> of bio splitting, bio merging is seldom done for big IO, and it is
> only done for small bios. The protection data segment count is
> usually the same as the bio count in the request, so the number won't
> be very big, and allocating from slab is fast enough.
>
> Reduce the pre-allocation to one sg entry for protection data, and
> switch to runtime allocation when the protection data segment count
> is bigger than 1. Then we can save a huge pre-allocation, for
> example, 500

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index c37263c123eb..2eaba41655de 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -39,6 +39,12 @@
 #include "scsi_priv.h"
 #include "scsi_logging.h"
 
+/*
+ * Size of integrity metadata is usually small, 1 inline sg should
+ * cover normal cases.
+ */
+#define SCSI_INLINE_PROT_SG_CNT 1
+
 static struct kmem_cache *scsi_sdb_cache;
 static struct kmem_cache *scsi_sense_cache;
 static struct kmem_cache *scsi_sense_isadma_cache;
@@ -543,7 +549,8 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
 	if (cmd->sdb.table.nents)
 		sg_free_table_chained(&cmd->sdb.table, SG_CHUNK_SIZE);
 	if (scsi_prot_sg_count(cmd))
-		sg_free_table_chained(&cmd->prot_sdb->table, SG_CHUNK_SIZE);
+		sg_free_table_chained(&cmd->prot_sdb->table,
+				SCSI_INLINE_PROT_SG_CNT);
 }
 
 static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
@@ -1032,7 +1039,7 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
 
 		if (sg_alloc_table_chained(&prot_sdb->table, ivecs,
 				prot_sdb->table.sgl,
-				SG_CHUNK_SIZE)) {
+				SCSI_INLINE_PROT_SG_CNT)) {
 			ret = BLK_STS_RESOURCE;
 			goto out_free_sgtables;
 		}
@@ -1820,7 +1827,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	sgl_size = scsi_mq_sgl_size(shost);
 	cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size + sgl_size;
 	if (scsi_host_get_prot(shost))
-		cmd_size += sizeof(struct scsi_data_buffer) + sgl_size;
+		cmd_size += sizeof(struct scsi_data_buffer) +
+			sizeof(struct scatterlist) * SCSI_INLINE_PROT_SG_CNT;
 
 	memset(&shost->tag_set, 0, sizeof(shost->tag_set));
 	shost->tag_set.ops = &scsi_mq_ops;