From patchwork Fri Apr 26 00:53:44 2019
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10917887
From: Ming Lei
To: James Bottomley, linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: linux-block@vger.kernel.org, Ming Lei, Christoph Hellwig,
    Bart Van Assche, "Ewan D. Milne", Hannes Reinecke
Subject: [PATCH V3 1/3] lib/sg_pool.c: improve APIs for allocating sg pool
Date: Fri, 26 Apr 2019 08:53:44 +0800
Message-Id: <20190426005346.27962-2-ming.lei@redhat.com>
In-Reply-To: <20190426005346.27962-1-ming.lei@redhat.com>
References: <20190426005346.27962-1-ming.lei@redhat.com>

sg_alloc_table_chained() currently allows the user to provide one
preallocated SGL and returns early when the requested number of entries
fits into it. This is handy for inlining a small SGL into a small IO
request. However, the scatterlist code requires the first preallocated
SGL to hold exactly SG_CHUNK_SIZE (128) entries, which is neither
flexible nor useful: preallocating an SGL of that size costs too much
memory (4KB) per IO request, especially since the block layer always
preallocates the IO request structure. It is friendlier to preallocate
one small inline SGL just for small IO.

Introduce __sg_alloc_table_chained() and __sg_free_table_chained(),
which take one extra parameter specifying the size of the preallocated
SGL, so the 'first_chunk' SGL may contain any number of entries.

Both __sg_free_table() and __sg_alloc_table() assume that every SGL has
the same size except for the last one; change both to accept a
different size for the first preallocated SGL.
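For illustration, here is a minimal sketch (not part of the patch) of
how a driver could use the new pair with a small inline SGL embedded in
its per-request structure; struct my_rq, my_rq_map(), my_rq_unmap() and
MY_INLINE_SG_CNT are hypothetical names:

    #include <linux/scatterlist.h>

    #define MY_INLINE_SG_CNT	2

    struct my_rq {
    	struct sg_table table;
    	struct scatterlist inline_sg[MY_INLINE_SG_CNT];
    };

    static int my_rq_map(struct my_rq *rq, int nents)
    {
    	/* The early-return path serves nents <= 2 from inline_sg. */
    	rq->table.sgl = rq->inline_sg;
    	return __sg_alloc_table_chained(&rq->table, nents,
    					rq->inline_sg, MY_INLINE_SG_CNT);
    }

    static void my_rq_unmap(struct my_rq *rq)
    {
    	/* Must pass the same inline size used at allocation time. */
    	__sg_free_table_chained(&rq->table, MY_INLINE_SG_CNT);
    }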
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Suggested-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
 include/linux/scatterlist.h | 27 ++++++++++++++++++++-----
 lib/scatterlist.c           | 36 +++++++++++++++++++++------------
 lib/sg_pool.c               | 49 ++++++++++++++++++++++++++++++---------------
 3 files changed, 78 insertions(+), 34 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index b4be960c7e5d..045d7aa81f2c 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -266,10 +266,11 @@ int sg_split(struct scatterlist *in, const int in_mapped_nents,
 typedef struct scatterlist *(sg_alloc_fn)(unsigned int, gfp_t);
 typedef void (sg_free_fn)(struct scatterlist *, unsigned int);
 
-void __sg_free_table(struct sg_table *, unsigned int, bool, sg_free_fn *);
+void __sg_free_table(struct sg_table *, unsigned int, unsigned int,
+		     sg_free_fn *);
 void sg_free_table(struct sg_table *);
 int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int,
-		     struct scatterlist *, gfp_t, sg_alloc_fn *);
+		     struct scatterlist *, unsigned int, gfp_t, sg_alloc_fn *);
 int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
 int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
 				unsigned int n_pages, unsigned int offset,
@@ -331,9 +332,25 @@ size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
 #endif
 
 #ifdef CONFIG_SG_POOL
-void sg_free_table_chained(struct sg_table *table, bool first_chunk);
-int sg_alloc_table_chained(struct sg_table *table, int nents,
-			   struct scatterlist *first_chunk);
+void __sg_free_table_chained(struct sg_table *table,
+			     unsigned nents_first_chunk);
+int __sg_alloc_table_chained(struct sg_table *table, int nents,
+			     struct scatterlist *first_chunk,
+			     unsigned nents_first_chunk);
+
+static inline void sg_free_table_chained(struct sg_table *table,
+					 bool first_chunk)
+{
+	__sg_free_table_chained(table, first_chunk ? SG_CHUNK_SIZE : 0);
+}
+
+static inline int sg_alloc_table_chained(struct sg_table *table,
+					 int nents,
+					 struct scatterlist *first_chunk)
+{
+	return __sg_alloc_table_chained(table, nents, first_chunk,
+					first_chunk ? SG_CHUNK_SIZE : 0);
+}
 #endif
 
 /*
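The static inline wrappers above keep existing callers
source-compatible: given this header, the legacy call

    sg_alloc_table_chained(table, nents, first_chunk);

is by construction equivalent to

    __sg_alloc_table_chained(table, nents, first_chunk,
			     first_chunk ? SG_CHUNK_SIZE : 0);

so unconverted users keep the old fixed SG_CHUNK_SIZE behaviour.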
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 739dc9fe2c55..77ec8eec3fd0 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -181,7 +181,8 @@ static void sg_kfree(struct scatterlist *sg, unsigned int nents)
  * __sg_free_table - Free a previously mapped sg table
  * @table: The sg table header to use
  * @max_ents: The maximum number of entries per single scatterlist
- * @skip_first_chunk: don't free the (preallocated) first scatterlist chunk
+ * @nents_first_chunk: Number of entries in the (preallocated) first
+ *	scatterlist chunk, 0 means no such preallocated first chunk
  * @free_fn: Free function
  *
  * Description:
@@ -191,9 +192,10 @@ static void sg_kfree(struct scatterlist *sg, unsigned int nents)
  *
  **/
 void __sg_free_table(struct sg_table *table, unsigned int max_ents,
-		     bool skip_first_chunk, sg_free_fn *free_fn)
+		     unsigned int nents_first_chunk, sg_free_fn *free_fn)
 {
 	struct scatterlist *sgl, *next;
+	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
 
 	if (unlikely(!table->sgl))
 		return;
@@ -209,9 +211,9 @@ void __sg_free_table(struct sg_table *table, unsigned int max_ents,
 		 * sg_size is then one less than alloc size, since the last
 		 * element is the chain pointer.
 		 */
-		if (alloc_size > max_ents) {
-			next = sg_chain_ptr(&sgl[max_ents - 1]);
-			alloc_size = max_ents;
+		if (alloc_size > curr_max_ents) {
+			next = sg_chain_ptr(&sgl[curr_max_ents - 1]);
+			alloc_size = curr_max_ents;
 			sg_size = alloc_size - 1;
 		} else {
 			sg_size = alloc_size;
@@ -219,11 +221,12 @@ void __sg_free_table(struct sg_table *table, unsigned int max_ents,
 		}
 
 		table->orig_nents -= sg_size;
-		if (skip_first_chunk)
-			skip_first_chunk = false;
+		if (nents_first_chunk)
+			nents_first_chunk = 0;
 		else
 			free_fn(sgl, alloc_size);
 		sgl = next;
+		curr_max_ents = max_ents;
 	}
 
 	table->sgl = NULL;
@@ -246,6 +249,8 @@ EXPORT_SYMBOL(sg_free_table);
  * @table: The sg table header to use
  * @nents: Number of entries in sg list
  * @max_ents: The maximum number of entries the allocator returns per call
+ * @nents_first_chunk: Number of entries in the (preallocated) first
+ *	scatterlist chunk, 0 means no such preallocated chunk provided by user
  * @gfp_mask: GFP allocation mask
  * @alloc_fn: Allocator to use
  *
@@ -262,10 +267,13 @@ EXPORT_SYMBOL(sg_free_table);
  **/
 int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		     unsigned int max_ents, struct scatterlist *first_chunk,
-		     gfp_t gfp_mask, sg_alloc_fn *alloc_fn)
+		     unsigned int nents_first_chunk, gfp_t gfp_mask,
+		     sg_alloc_fn *alloc_fn)
 {
 	struct scatterlist *sg, *prv;
 	unsigned int left;
+	unsigned curr_max_ents = nents_first_chunk ?: max_ents;
+	unsigned prv_max_ents;
 
 	memset(table, 0, sizeof(*table));
 
@@ -281,8 +289,8 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 	do {
 		unsigned int sg_size, alloc_size = left;
 
-		if (alloc_size > max_ents) {
-			alloc_size = max_ents;
+		if (alloc_size > curr_max_ents) {
+			alloc_size = curr_max_ents;
 			sg_size = alloc_size - 1;
 		} else
 			sg_size = alloc_size;
@@ -316,7 +324,7 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		 * If this is not the first mapping, chain previous part.
 		 */
 		if (prv)
-			sg_chain(prv, max_ents, sg);
+			sg_chain(prv, prv_max_ents, sg);
 		else
 			table->sgl = sg;
 
@@ -327,6 +335,8 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 			sg_mark_end(&sg[sg_size - 1]);
 
 		prv = sg;
+		prv_max_ents = curr_max_ents;
+		curr_max_ents = max_ents;
 	} while (left);
 
 	return 0;
@@ -349,9 +359,9 @@ int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
 	int ret;
 
 	ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
-			       NULL, gfp_mask, sg_kmalloc);
+			       NULL, 0, gfp_mask, sg_kmalloc);
 	if (unlikely(ret))
-		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, false, sg_kfree);
+		__sg_free_table(table, SG_MAX_SINGLE_ALLOC, 0, sg_kfree);
 
 	return ret;
 }
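The loops above hinge on the chained-SGL layout: every chunk except the
last gives up its final entry as a chain pointer, and only the first
chunk may have a different size. A minimal sketch of that chunk walk,
mirroring the free path above (walk_chunks is a hypothetical name):

    /*
     * Sketch of the chunk walk underlying __sg_free_table(): every
     * chunk except the last ends in a chain pointer rather than a
     * data entry, and only the first chunk may differ in size.
     */
    static void walk_chunks(struct sg_table *table, unsigned int max_ents,
    			    unsigned int nents_first_chunk)
    {
    	struct scatterlist *sgl = table->sgl;
    	unsigned int left = table->orig_nents;
    	unsigned int curr_max_ents = nents_first_chunk ?: max_ents;

    	while (left) {
    		/* Data entries in this chunk; -1 leaves room to chain. */
    		unsigned int sg_size = (left > curr_max_ents) ?
    				curr_max_ents - 1 : left;

    		left -= sg_size;
    		if (left)
    			sgl = sg_chain_ptr(&sgl[curr_max_ents - 1]);
    		curr_max_ents = max_ents;
    	}
    }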
diff --git a/lib/sg_pool.c b/lib/sg_pool.c
index d1c1e6388eaa..8026e210a25a 100644
--- a/lib/sg_pool.c
+++ b/lib/sg_pool.c
@@ -67,56 +67,73 @@ static struct scatterlist *sg_pool_alloc(unsigned int nents, gfp_t gfp_mask)
 }
 
 /**
- * sg_free_table_chained - Free a previously mapped sg table
+ * __sg_free_table_chained - Free a previously mapped sg table
  * @table: The sg table header to use
- * @first_chunk: was first_chunk not NULL in sg_alloc_table_chained?
+ * @nents_first_chunk: size of the first_chunk SGL passed to
+ *	__sg_alloc_table_chained
  *
  * Description:
  *    Free an sg table previously allocated and setup with
- *    sg_alloc_table_chained().
+ *    __sg_alloc_table_chained().
+ *
+ * @nents_first_chunk has to be the same as the value passed to
+ * __sg_alloc_table_chained().
  *
  **/
-void sg_free_table_chained(struct sg_table *table, bool first_chunk)
+void __sg_free_table_chained(struct sg_table *table,
+			     unsigned nents_first_chunk)
 {
-	if (first_chunk && table->orig_nents <= SG_CHUNK_SIZE)
+	if (table->orig_nents <= nents_first_chunk)
 		return;
-	__sg_free_table(table, SG_CHUNK_SIZE, first_chunk, sg_pool_free);
+
+	if (nents_first_chunk == 1)
+		nents_first_chunk = 0;
+
+	__sg_free_table(table, SG_CHUNK_SIZE, nents_first_chunk, sg_pool_free);
 }
-EXPORT_SYMBOL_GPL(sg_free_table_chained);
+EXPORT_SYMBOL_GPL(__sg_free_table_chained);
 
 /**
- * sg_alloc_table_chained - Allocate and chain SGLs in an sg table
+ * __sg_alloc_table_chained - Allocate and chain SGLs in an sg table
  * @table: The sg table header to use
  * @nents: Number of entries in sg list
  * @first_chunk: first SGL
+ * @nents_first_chunk: number of entries in @first_chunk
 *
 * Description:
 *    Allocate and chain SGLs in an sg table. If @nents@ is larger than
-*    SG_CHUNK_SIZE a chained sg table will be setup.
+*    @nents_first_chunk a chained sg table will be setup.
 *
 **/
-int sg_alloc_table_chained(struct sg_table *table, int nents,
-		struct scatterlist *first_chunk)
+int __sg_alloc_table_chained(struct sg_table *table, int nents,
+		struct scatterlist *first_chunk, unsigned nents_first_chunk)
 {
 	int ret;
 
 	BUG_ON(!nents);
 
-	if (first_chunk) {
-		if (nents <= SG_CHUNK_SIZE) {
+	if (first_chunk && nents_first_chunk) {
+		if (nents <= nents_first_chunk) {
 			table->nents = table->orig_nents = nents;
 			sg_init_table(table->sgl, nents);
 			return 0;
 		}
 	}
 
+	/* The caller expects the 1st SGL to hold real entries */
+	if (nents_first_chunk == 1) {
+		first_chunk = NULL;
+		nents_first_chunk = 0;
+	}
+
 	ret = __sg_alloc_table(table, nents, SG_CHUNK_SIZE,
-			       first_chunk, GFP_ATOMIC, sg_pool_alloc);
+			       first_chunk, nents_first_chunk,
+			       GFP_ATOMIC, sg_pool_alloc);
 	if (unlikely(ret))
-		sg_free_table_chained(table, (bool)first_chunk);
+		__sg_free_table_chained(table, nents_first_chunk);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(sg_alloc_table_chained);
+EXPORT_SYMBOL_GPL(__sg_alloc_table_chained);
 
 static __init int sg_pool_init(void)
 {
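The nents_first_chunk == 1 special case above is worth spelling out: a
one-entry first chunk cannot be chained from, because chaining would
consume its only slot as the chain pointer, so it can only serve the
single-entry fast path. A hypothetical caller (map_with_one_inline_sg
is a made-up name) behaves like this:

    /* Sketch of how a size-1 first chunk behaves, illustration only. */
    static int map_with_one_inline_sg(struct sg_table *table,
    				      struct scatterlist *inline_sg,
    				      int nents)
    {
    	table->sgl = inline_sg;
    	/*
    	 * nents == 1: served from inline_sg via the early return.
    	 * nents >= 2: inline_sg is dropped inside the helper (a 1-entry
    	 * chunk has no room for both data and a chain pointer), and the
    	 * whole list comes from the sg pools.
    	 */
    	return __sg_alloc_table_chained(table, nents, inline_sg, 1);
    }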
From patchwork Fri Apr 26 00:53:45 2019
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10917891
From: Ming Lei
To: James Bottomley, linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: linux-block@vger.kernel.org, Ming Lei, Christoph Hellwig,
    Bart Van Assche, "Ewan D. Milne", Hannes Reinecke
Subject: [PATCH V3 2/3] scsi: core: avoid to pre-allocate big chunk for protection meta data
Date: Fri, 26 Apr 2019 08:53:45 +0800
Message-Id: <20190426005346.27962-3-ming.lei@redhat.com>
In-Reply-To: <20190426005346.27962-1-ming.lei@redhat.com>
References: <20190426005346.27962-1-ming.lei@redhat.com>
scsi_mq_setup_tags() currently preallocates a big buffer for protection
sg entries, sized by scsi_mq_sgl_size(). That is not right:
scsi_mq_sgl_size() sizes the sg entries for IO data, while the
protection data buffer is much smaller. For example, one 512-byte
sector needs 8 bytes of protection data, and the maximum sector count
per request is 2560 (BLK_DEF_MAX_SECTORS), so the maximum protection
data size is just 20k.

The usual case is that one bio builds a single bip segment. Owing to
bio splitting, bio merging is seldom done for big IO and only happens
for small bios, so the number of protection data segments is usually
the same as the bio count in the request. That number won't be very
big, and allocating from the slab is fast enough.

So preallocate only one sg entry for protection data and switch to
runtime allocation whenever the number of protection data segments is
bigger than one. This saves a huge preallocation; for example, 500+MB
is saved on a single lpfc HBA.
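A back-of-the-envelope check of the 20k figure above (constants
restated from the text as illustrative defines, not real kernel
macros):

    /* Check of the 20k figure: 8 bytes of PI per 512-byte sector. */
    #define PI_BYTES_PER_512B_SECTOR	8
    #define MAX_SECTORS_PER_REQ		2560	/* BLK_DEF_MAX_SECTORS */

    /* 2560 * 8 = 20480 bytes, i.e. 20k of protection data at most. */
    #define MAX_PROT_BYTES \
    	(MAX_SECTORS_PER_REQ * PI_BYTES_PER_512B_SECTOR)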
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Signed-off-by: Ming Lei
Reviewed-by: Christoph Hellwig
---
 drivers/scsi/scsi_lib.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 07dfc17d4824..989539de78c6 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -39,6 +39,12 @@
 #include "scsi_priv.h"
 #include "scsi_logging.h"
 
+/*
+ * Size of integrity metadata is usually small, 1 inline sg should
+ * cover normal cases.
+ */
+#define SCSI_INLINE_PROT_SG_CNT	1
+
 static struct kmem_cache *scsi_sdb_cache;
 static struct kmem_cache *scsi_sense_cache;
 static struct kmem_cache *scsi_sense_isadma_cache;
@@ -558,7 +564,8 @@ static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
 	if (cmd->sdb.table.nents)
 		sg_free_table_chained(&cmd->sdb.table, true);
 	if (scsi_prot_sg_count(cmd))
-		sg_free_table_chained(&cmd->prot_sdb->table, true);
+		__sg_free_table_chained(&cmd->prot_sdb->table,
+				SCSI_INLINE_PROT_SG_CNT);
 }
 
 static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
@@ -1045,8 +1052,9 @@ blk_status_t scsi_init_io(struct scsi_cmnd *cmd)
 
 		ivecs = blk_rq_count_integrity_sg(rq->q, rq->bio);
 
-		if (sg_alloc_table_chained(&prot_sdb->table, ivecs,
-				prot_sdb->table.sgl)) {
+		if (__sg_alloc_table_chained(&prot_sdb->table, ivecs,
+				prot_sdb->table.sgl,
+				SCSI_INLINE_PROT_SG_CNT)) {
 			ret = BLK_STS_RESOURCE;
 			goto out_free_sgtables;
 		}
@@ -1846,7 +1854,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	sgl_size = scsi_mq_sgl_size(shost);
 	cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size + sgl_size;
 	if (scsi_host_get_prot(shost))
-		cmd_size += sizeof(struct scsi_data_buffer) + sgl_size;
+		cmd_size += sizeof(struct scsi_data_buffer) +
+			sizeof(struct scatterlist) * SCSI_INLINE_PROT_SG_CNT;
 
 	memset(&shost->tag_set, 0, sizeof(shost->tag_set));
 	shost->tag_set.ops = &scsi_mq_ops;

From patchwork Fri Apr 26 00:53:46 2019
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10917895
From: Ming Lei
To: James Bottomley, linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: linux-block@vger.kernel.org, Ming Lei, Christoph Hellwig,
    Bart Van Assche, "Ewan D. Milne", Hannes Reinecke
Subject: [PATCH V3 3/3] scsi: core: avoid to pre-allocate big chunk for sg list
Date: Fri, 26 Apr 2019 08:53:46 +0800
Message-Id: <20190426005346.27962-4-ming.lei@redhat.com>
In-Reply-To: <20190426005346.27962-1-ming.lei@redhat.com>
References: <20190426005346.27962-1-ming.lei@redhat.com>
scsi_mq_setup_tags() currently preallocates a big buffer for the IO sg
list, sized by scsi_mq_sgl_size(), which depends on the smaller of
shost->sg_tablesize and SG_CHUNK_SIZE. Modern HBA DMA engines can often
deal with a very large number of segments, so scsi_mq_sgl_size() is
often big. If the maximum sg count of SG_CHUNK_SIZE is taken,
scsi_mq_sgl_size() is 4KB. Then if an HBA has many queues and each hw
queue's depth is high, the sg list preallocation can consume a huge
amount of memory. With lpfc, for example, nr_hw_queues can be 70 and
each queue's depth 3781, so the preallocation for the data sg list is
70*3781*2k = 517MB for a single HBA. There is a Red Hat internal report
that scsi_debug based tests can no longer be run, now that the legacy
IO path has been removed, because the preallocation is too big.

So switch to runtime allocation for the sg list and preallocate only 2
inline sg entries. This approach has been used by NVMe PCI for a while,
so it should be fine for SCSI too. Runtime sg entry allocation is also
well proven: it always ran in the original legacy IO path. I see no
performance effect in my big-BS test on scsi_debug.
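A quick check of the arithmetic above (the 2k per-command figure
presumably corresponds to lpfc's default sg_tablesize times a 32-byte
struct scatterlist; the defines below are illustrative, not real kernel
constants):

    /* Check of the 517MB figure quoted above. */
    #define NR_HW_QUEUES	70
    #define QUEUE_DEPTH		3781
    #define SGL_PREALLOC_BYTES	2048	/* per command, before this patch */

    /*
     * 70 * 3781 * 2048 = 542,044,160 bytes ~= 517MB for one HBA; with
     * 2 inline entries (2 * 32 bytes) it drops to roughly 16MB.
     */
    #define TOTAL_PREALLOC \
    	((u64)NR_HW_QUEUES * QUEUE_DEPTH * SGL_PREALLOC_BYTES)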
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Signed-off-by: Ming Lei
Reviewed-by: Christoph Hellwig
---
 drivers/scsi/scsi_lib.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 989539de78c6..b701fc65da76 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -45,6 +45,8 @@
  */
 #define SCSI_INLINE_PROT_SG_CNT	1
 
+#define SCSI_INLINE_SG_CNT	2
+
 static struct kmem_cache *scsi_sdb_cache;
 static struct kmem_cache *scsi_sense_cache;
 static struct kmem_cache *scsi_sense_isadma_cache;
@@ -562,7 +564,8 @@ static void scsi_uninit_cmd(struct scsi_cmnd *cmd)
 static void scsi_mq_free_sgtables(struct scsi_cmnd *cmd)
 {
 	if (cmd->sdb.table.nents)
-		sg_free_table_chained(&cmd->sdb.table, true);
+		__sg_free_table_chained(&cmd->sdb.table,
+				SCSI_INLINE_SG_CNT);
 	if (scsi_prot_sg_count(cmd))
 		__sg_free_table_chained(&cmd->prot_sdb->table,
 				SCSI_INLINE_PROT_SG_CNT);
@@ -998,8 +1001,10 @@ static blk_status_t scsi_init_sgtable(struct request *req,
 	/*
 	 * If sg table allocation fails, requeue request later.
 	 */
-	if (unlikely(sg_alloc_table_chained(&sdb->table,
-			blk_rq_nr_phys_segments(req), sdb->table.sgl)))
+	if (unlikely(__sg_alloc_table_chained(&sdb->table,
+			blk_rq_nr_phys_segments(req),
+			sdb->table.sgl,
+			SCSI_INLINE_SG_CNT)))
 		return BLK_STS_RESOURCE;
 
 	/*
@@ -1565,9 +1570,9 @@ static int scsi_dispatch_cmd(struct scsi_cmnd *cmd)
 }
 
 /* Size in bytes of the sg-list stored in the scsi-mq command-private data. */
-static unsigned int scsi_mq_sgl_size(struct Scsi_Host *shost)
+static unsigned int scsi_mq_inline_sgl_size(struct Scsi_Host *shost)
 {
-	return min_t(unsigned int, shost->sg_tablesize, SG_CHUNK_SIZE) *
+	return min_t(unsigned int, shost->sg_tablesize, SCSI_INLINE_SG_CNT) *
 		sizeof(struct scatterlist);
 }
 
@@ -1757,7 +1762,7 @@ static int scsi_mq_init_request(struct blk_mq_tag_set *set, struct request *rq,
 	if (scsi_host_get_prot(shost)) {
 		sg = (void *)cmd + sizeof(struct scsi_cmnd) +
 			shost->hostt->cmd_size;
-		cmd->prot_sdb = (void *)sg + scsi_mq_sgl_size(shost);
+		cmd->prot_sdb = (void *)sg + scsi_mq_inline_sgl_size(shost);
 	}
 
 	return 0;
@@ -1851,7 +1856,7 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 {
 	unsigned int cmd_size, sgl_size;
 
-	sgl_size = scsi_mq_sgl_size(shost);
+	sgl_size = scsi_mq_inline_sgl_size(shost);
 	cmd_size = sizeof(struct scsi_cmnd) + shost->hostt->cmd_size + sgl_size;
 	if (scsi_host_get_prot(shost))
 		cmd_size += sizeof(struct scsi_data_buffer) +