From patchwork Fri Feb 28 23:29:11 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11413479
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, James Bottomley,
    linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: Ming Lei, Omar Sandoval, Sathya Prakash, Chaitra P B,
    Suganath Prabu Subramani, Kashyap Desai, Sumit Saxena,
    Shivasharan S, "Ewan D. Milne", Hannes Reinecke, Bart Van Assche
Subject: [PATCH V2 01/10] sbitmap: maintain allocation round_robin in sbitmap
Date: Sat, 29 Feb 2020 07:29:11 +0800
Message-Id: <20200228232920.20960-2-ming.lei@redhat.com>
In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com>
References: <20200228232920.20960-1-ming.lei@redhat.com>

Currently the allocation round_robin flag is maintained by sbitmap_queue,
but bit allocation really belongs to sbitmap. A following patch will also
move alloc_hint into sbitmap, for users with high depth. So move
round_robin into sbitmap.

Cc: Omar Sandoval
Cc: Sathya Prakash
Cc: Chaitra P B
Cc: Suganath Prabu Subramani
Cc: Kashyap Desai
Cc: Sumit Saxena
Cc: Shivasharan S
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Cc: Bart Van Assche
Signed-off-by: Ming Lei
---
 block/blk-mq.c            |  2 +-
 block/kyber-iosched.c     |  3 ++-
 drivers/dma/idxd/device.c |  2 +-
 drivers/dma/idxd/submit.c |  2 +-
 include/linux/sbitmap.h   | 20 ++++++++++----------
 lib/sbitmap.c             | 28 ++++++++++++++--------------
 6 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a12b1763508d..116626b76b08 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2397,7 +2397,7 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
 		goto free_cpumask;
 
 	if (sbitmap_init_node(&hctx->ctx_map, nr_cpu_ids, ilog2(8),
-				gfp, node))
+				gfp, node, false))
 		goto free_ctxs;
 
 	hctx->nr_ctx = 0;
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 34dcea0ef637..21587cca1cb3 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -479,7 +479,8 @@ static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
 
 	for (i = 0; i < KYBER_NUM_DOMAINS; i++) {
 		if (sbitmap_init_node(&khd->kcq_map[i], hctx->nr_ctx,
-				      ilog2(8), GFP_KERNEL, hctx->numa_node)) {
+				      ilog2(8), GFP_KERNEL, hctx->numa_node,
+				      false)) {
 			while (--i >= 0)
 				sbitmap_free(&khd->kcq_map[i]);
 			goto err_kcqs;
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index ada69e722f84..5d59c868b2da 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -181,7 +181,7 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
 		goto fail_alloc_descs;
 
 	rc = sbitmap_init_node(&wq->sbmap, num_descs, -1, GFP_KERNEL,
-			       dev_to_node(dev));
+			       dev_to_node(dev), false);
 	if (rc < 0)
 		goto fail_sbitmap_init;
diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
index 45a0c5869a0a..842af85c7d43 100644
--- a/drivers/dma/idxd/submit.c
+++ b/drivers/dma/idxd/submit.c
@@ -45,7 +45,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
 		percpu_up_read(&wq->submit_lock);
 	}
 
-	idx = sbitmap_get(&wq->sbmap, 0, false);
+	idx = sbitmap_get(&wq->sbmap, 0);
 	if (idx < 0) {
 		atomic_dec(&wq->dq_count);
 		return ERR_PTR(-EAGAIN);
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index e40d019c3d9d..559c37e27d30 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -61,6 +61,11 @@ struct sbitmap {
 	 */
 	unsigned int map_nr;
 
+	/**
+	 * @round_robin: Allocate bits in strict round-robin order.
+	 */
+	bool round_robin;
+
 	/**
 	 * @map: Allocated bitmap.
 	 */
@@ -129,11 +134,6 @@ struct sbitmap_queue {
 	 */
 	atomic_t ws_active;
 
-	/**
-	 * @round_robin: Allocate bits in strict round-robin order.
-	 */
-	bool round_robin;
-
 	/**
 	 * @min_shallow_depth: The minimum shallow depth which may be passed to
 	 * sbitmap_queue_get_shallow() or __sbitmap_queue_get_shallow().
@@ -149,11 +149,14 @@ struct sbitmap_queue {
  * given, a good default is chosen.
  * @flags: Allocation flags.
  * @node: Memory node to allocate on.
+ * @round_robin: If true, be stricter about allocation order; always allocate
+ *               starting from the last allocated bit. This is less efficient
+ *               than the default behavior (false).
  *
  * Return: Zero on success or negative errno on failure.
  */
 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
-		      gfp_t flags, int node);
+		      gfp_t flags, int node, bool round_robin);
 
 /**
  * sbitmap_free() - Free memory used by a &struct sbitmap.
@@ -179,15 +182,12 @@ void sbitmap_resize(struct sbitmap *sb, unsigned int depth);
 /**
  * sbitmap_get() - Try to allocate a free bit from a &struct sbitmap.
  * @sb: Bitmap to allocate from.
  * @alloc_hint: Hint for where to start searching for a free bit.
- * @round_robin: If true, be stricter about allocation order; always allocate
- *               starting from the last allocated bit. This is less efficient
- *               than the default behavior (false).
  *
  * This operation provides acquire barrier semantics if it succeeds.
  *
 * Return: Non-negative allocated bit number if successful, -1 otherwise.
  */
-int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint, bool round_robin);
+int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint);
 
 /**
  * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index af88d1346dd7..86018a6fccae 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -42,7 +42,7 @@ static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
 }
 
 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
-		      gfp_t flags, int node)
+		      gfp_t flags, int node, bool round_robin)
 {
 	unsigned int bits_per_word;
 	unsigned int i;
@@ -67,6 +67,7 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 	sb->shift = shift;
 	sb->depth = depth;
 	sb->map_nr = DIV_ROUND_UP(sb->depth, bits_per_word);
+	sb->round_robin = round_robin;
 
 	if (depth == 0) {
 		sb->map = NULL;
@@ -137,14 +138,14 @@ static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
 }
 
 static int sbitmap_find_bit_in_index(struct sbitmap *sb, int index,
-				     unsigned int alloc_hint, bool round_robin)
+				     unsigned int alloc_hint)
 {
 	int nr;
 
 	do {
 		nr = __sbitmap_get_word(&sb->map[index].word,
 					sb->map[index].depth, alloc_hint,
-					!round_robin);
+					!sb->round_robin);
 		if (nr != -1)
 			break;
 		if (!sbitmap_deferred_clear(sb, index))
@@ -154,7 +155,7 @@ static int sbitmap_find_bit_in_index(struct sbitmap *sb, int index,
 	return nr;
 }
 
-int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint, bool round_robin)
+int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint)
 {
 	unsigned int i, index;
 	int nr = -1;
@@ -166,14 +167,13 @@ int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint, bool round_robin)
 	 * alloc_hint to find the right word index. No point in looping
 	 * twice in find_next_zero_bit() for that case.
 	 */
-	if (round_robin)
+	if (sb->round_robin)
 		alloc_hint = SB_NR_TO_BIT(sb, alloc_hint);
 	else
 		alloc_hint = 0;
 
 	for (i = 0; i < sb->map_nr; i++) {
-		nr = sbitmap_find_bit_in_index(sb, index, alloc_hint,
-					       round_robin);
+		nr = sbitmap_find_bit_in_index(sb, index, alloc_hint);
 		if (nr != -1) {
 			nr += index << sb->shift;
 			break;
@@ -355,7 +355,8 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 	int ret;
 	int i;
 
-	ret = sbitmap_init_node(&sbq->sb, depth, shift, flags, node);
+	ret = sbitmap_init_node(&sbq->sb, depth, shift, flags, node,
+				round_robin);
 	if (ret)
 		return ret;
 
@@ -387,7 +388,6 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 		atomic_set(&sbq->ws[i].wait_cnt, sbq->wake_batch);
 	}
 
-	sbq->round_robin = round_robin;
 	return 0;
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_init_node);
@@ -429,12 +429,12 @@ int __sbitmap_queue_get(struct sbitmap_queue *sbq)
 		hint = depth ? prandom_u32() % depth : 0;
 		this_cpu_write(*sbq->alloc_hint, hint);
 	}
-	nr = sbitmap_get(&sbq->sb, hint, sbq->round_robin);
+	nr = sbitmap_get(&sbq->sb, hint);
 
 	if (nr == -1) {
 		/* If the map is full, a hint won't do us much good. */
 		this_cpu_write(*sbq->alloc_hint, 0);
-	} else if (nr == hint || unlikely(sbq->round_robin)) {
+	} else if (nr == hint || unlikely(sbq->sb.round_robin)) {
 		/* Only update the hint if we used it. */
 		hint = nr + 1;
 		if (hint >= depth - 1)
@@ -465,7 +465,7 @@ int __sbitmap_queue_get_shallow(struct sbitmap_queue *sbq,
 	if (nr == -1) {
 		/* If the map is full, a hint won't do us much good. */
 		this_cpu_write(*sbq->alloc_hint, 0);
-	} else if (nr == hint || unlikely(sbq->round_robin)) {
+	} else if (nr == hint || unlikely(sbq->sb.round_robin)) {
 		/* Only update the hint if we used it. */
 		hint = nr + 1;
 		if (hint >= depth - 1)
@@ -581,7 +581,7 @@ void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
 	smp_mb__after_atomic();
 	sbitmap_queue_wake_up(sbq);
 
-	if (likely(!sbq->round_robin && nr < sbq->sb.depth))
+	if (likely(!sbq->sb.round_robin && nr < sbq->sb.depth))
 		*per_cpu_ptr(sbq->alloc_hint, cpu) = nr;
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_clear);
@@ -638,7 +638,7 @@ void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m)
 	}
 	seq_puts(m, "}\n");
 
-	seq_printf(m, "round_robin=%d\n", sbq->round_robin);
+	seq_printf(m, "round_robin=%d\n", sbq->sb.round_robin);
 	seq_printf(m, "min_shallow_depth=%u\n", sbq->min_shallow_depth);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_show);
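After this patch the round-robin policy is fixed at sbitmap init time instead
of being passed into every sbitmap_get() call. A minimal usage sketch of the
new signatures (the map name, depth, and init function here are illustrative,
not taken from the patch):

#include <linux/sbitmap.h>

/* Illustrative: a driver-private map of 128 bits with strict
 * round-robin allocation selected once, at init time. */
static struct sbitmap example_map;

static int example_init(void)
{
        /* shift < 0 lets sbitmap pick a sane default bits-per-word. */
        return sbitmap_init_node(&example_map, 128, -1, GFP_KERNEL,
                                 NUMA_NO_NODE, true);
}

static int example_alloc(void)
{
        /* No round_robin argument any more; the policy lives in the map. */
        return sbitmap_get(&example_map, 0);
}
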
Petersen" Cc: Ming Lei , Omar Sandoval , Sathya Prakash , Chaitra P B , Suganath Prabu Subramani , Kashyap Desai , Sumit Saxena , Shivasharan S , "Ewan D . Milne" , Hannes Reinecke , Bart Van Assche Subject: [PATCH V2 02/10] sbitmap: add helpers for updating allocation hint Date: Sat, 29 Feb 2020 07:29:12 +0800 Message-Id: <20200228232920.20960-3-ming.lei@redhat.com> In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com> References: <20200228232920.20960-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add helpers for updating allocation hint, so that we can avoid to duplicate code. Prepare for moving allocation hint into sbitmap. Cc: Omar Sandoval Cc: Sathya Prakash Cc: Chaitra P B Cc: Suganath Prabu Subramani Cc: Kashyap Desai Cc: Sumit Saxena Cc: Shivasharan S Cc: Ewan D. Milne Cc: Hannes Reinecke Cc: Bart Van Assche Signed-off-by: Ming Lei --- lib/sbitmap.c | 93 ++++++++++++++++++++++++++++++--------------------- 1 file changed, 54 insertions(+), 39 deletions(-) diff --git a/lib/sbitmap.c b/lib/sbitmap.c index 86018a6fccae..683343e02c3b 100644 --- a/lib/sbitmap.c +++ b/lib/sbitmap.c @@ -9,6 +9,55 @@ #include #include +static int init_alloc_hint(struct sbitmap_queue *sbq, gfp_t flags) +{ + unsigned depth = sbq->sb.depth; + + sbq->alloc_hint = alloc_percpu_gfp(unsigned int, flags); + if (!sbq->alloc_hint) + return -ENOMEM; + + if (depth && !sbq->sb.round_robin) { + int i; + + for_each_possible_cpu(i) + *per_cpu_ptr(sbq->alloc_hint, i) = prandom_u32() % depth; + } + + return 0; +} + +static inline unsigned update_alloc_hint_before_get(struct sbitmap_queue *sbq, + unsigned int depth) +{ + unsigned hint; + + hint = this_cpu_read(*sbq->alloc_hint); + if (unlikely(hint >= depth)) { + hint = depth ? prandom_u32() % depth : 0; + this_cpu_write(*sbq->alloc_hint, hint); + } + + return hint; +} + +static inline void update_alloc_hint_after_get(struct sbitmap_queue *sbq, + unsigned int depth, + unsigned int hint, + unsigned int nr) +{ + if (nr == -1) { + /* If the map is full, a hint won't do us much good. */ + this_cpu_write(*sbq->alloc_hint, 0); + } else if (nr == hint || unlikely(sbq->sb.round_robin)) { + /* Only update the hint if we used it. */ + hint = nr + 1; + if (hint >= depth - 1) + hint = 0; + this_cpu_write(*sbq->alloc_hint, hint); + } +} + /* * See if we have deferred clears that we can batch move */ @@ -360,17 +409,11 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth, if (ret) return ret; - sbq->alloc_hint = alloc_percpu_gfp(unsigned int, flags); - if (!sbq->alloc_hint) { + if (init_alloc_hint(sbq, flags) != 0) { sbitmap_free(&sbq->sb); return -ENOMEM; } - if (depth && !round_robin) { - for_each_possible_cpu(i) - *per_cpu_ptr(sbq->alloc_hint, i) = prandom_u32() % depth; - } - sbq->min_shallow_depth = UINT_MAX; sbq->wake_batch = sbq_calc_wake_batch(sbq, depth); atomic_set(&sbq->wake_index, 0); @@ -423,24 +466,10 @@ int __sbitmap_queue_get(struct sbitmap_queue *sbq) unsigned int hint, depth; int nr; - hint = this_cpu_read(*sbq->alloc_hint); depth = READ_ONCE(sbq->sb.depth); - if (unlikely(hint >= depth)) { - hint = depth ? prandom_u32() % depth : 0; - this_cpu_write(*sbq->alloc_hint, hint); - } + hint = update_alloc_hint_before_get(sbq, depth); nr = sbitmap_get(&sbq->sb, hint); - - if (nr == -1) { - /* If the map is full, a hint won't do us much good. 
*/ - this_cpu_write(*sbq->alloc_hint, 0); - } else if (nr == hint || unlikely(sbq->sb.round_robin)) { - /* Only update the hint if we used it. */ - hint = nr + 1; - if (hint >= depth - 1) - hint = 0; - this_cpu_write(*sbq->alloc_hint, hint); - } + update_alloc_hint_after_get(sbq, depth, hint, nr); return nr; } @@ -454,24 +483,10 @@ int __sbitmap_queue_get_shallow(struct sbitmap_queue *sbq, WARN_ON_ONCE(shallow_depth < sbq->min_shallow_depth); - hint = this_cpu_read(*sbq->alloc_hint); depth = READ_ONCE(sbq->sb.depth); - if (unlikely(hint >= depth)) { - hint = depth ? prandom_u32() % depth : 0; - this_cpu_write(*sbq->alloc_hint, hint); - } + hint = update_alloc_hint_before_get(sbq, depth); nr = sbitmap_get_shallow(&sbq->sb, hint, shallow_depth); - - if (nr == -1) { - /* If the map is full, a hint won't do us much good. */ - this_cpu_write(*sbq->alloc_hint, 0); - } else if (nr == hint || unlikely(sbq->sb.round_robin)) { - /* Only update the hint if we used it. */ - hint = nr + 1; - if (hint >= depth - 1) - hint = 0; - this_cpu_write(*sbq->alloc_hint, hint); - } + update_alloc_hint_after_get(sbq, depth, hint, nr); return nr; } From patchwork Fri Feb 28 23:29:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 11413485 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 815D613A4 for ; Fri, 28 Feb 2020 23:29:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 60D442469C for ; Fri, 28 Feb 2020 23:29:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="dwx80B2D" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726720AbgB1X3z (ORCPT ); Fri, 28 Feb 2020 18:29:55 -0500 Received: from us-smtp-2.mimecast.com ([205.139.110.61]:34333 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726674AbgB1X3y (ORCPT ); Fri, 28 Feb 2020 18:29:54 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1582932594; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=utWzRwD1J+TLeCfKl8qxMp4PjbRU/QJ4O369KCrzxC0=; b=dwx80B2Dsn1dVm8oN4mxWll8tWF0v8yJ+qLpdNlSdkQgmpJHsw7hYHUNKuuQdvueiDix4/ Ez8gAeQzw3jGJWcqpwEAGlbH2w+39NtZV4PzzvZEvKuJlBbn8tkwV+Fm+5AQ4RZIQ8mvxY I9562MenFAibdghhYmU9B8hNQFZhHD4= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-276-g4ypaRY2NqK1WOuvncHscA-1; Fri, 28 Feb 2020 18:29:49 -0500 X-MC-Unique: g4ypaRY2NqK1WOuvncHscA-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C7FD813E4; Fri, 28 Feb 2020 23:29:47 +0000 (UTC) Received: from localhost (ovpn-8-18.pek2.redhat.com [10.72.8.18]) by smtp.corp.redhat.com (Postfix) with ESMTP id 799AE8C096; Fri, 28 Feb 2020 23:29:44 +0000 (UTC) From: Ming Lei To: Jens Axboe , linux-block@vger.kernel.org, James Bottomley , linux-scsi@vger.kernel.org, "Martin K . 
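The two helpers pin the percpu hint protocol down in one place: read the hint
before the search (re-randomizing it if it has fallen out of range after a
resize), and after the search either advance the hint past the allocated bit,
when the hint was actually used or in round-robin mode, or reset it to 0 on
failure. A condensed sketch of the resulting allocation path; it simply
mirrors __sbitmap_queue_get() from the diff above (the helpers themselves
remain static to lib/sbitmap.c):

/* Sketch of the pattern the helpers capture, not a new API. */
static int example_get(struct sbitmap_queue *sbq)
{
        unsigned int hint, depth;
        int nr;

        depth = READ_ONCE(sbq->sb.depth);
        hint = update_alloc_hint_before_get(sbq, depth);  /* read/repair hint */
        nr = sbitmap_get(&sbq->sb, hint);                 /* search near hint */
        update_alloc_hint_after_get(sbq, depth, hint, nr);/* write hint back */
        return nr;
}
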
Petersen" Cc: Ming Lei , Omar Sandoval , Sathya Prakash , Chaitra P B , Suganath Prabu Subramani , Kashyap Desai , Sumit Saxena , Shivasharan S , "Ewan D . Milne" , Hannes Reinecke , Bart Van Assche Subject: [PATCH V2 03/10] sbitmap: remove sbitmap_clear_bit_unlock Date: Sat, 29 Feb 2020 07:29:13 +0800 Message-Id: <20200228232920.20960-4-ming.lei@redhat.com> In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com> References: <20200228232920.20960-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org No one uses this helper any more, so kill it. Cc: Omar Sandoval Cc: Sathya Prakash Cc: Chaitra P B Cc: Suganath Prabu Subramani Cc: Kashyap Desai Cc: Sumit Saxena Cc: Shivasharan S Cc: Ewan D. Milne Cc: Hannes Reinecke Cc: Bart Van Assche Signed-off-by: Ming Lei --- include/linux/sbitmap.h | 6 ------ 1 file changed, 6 deletions(-) diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h index 559c37e27d30..68097b052ec3 100644 --- a/include/linux/sbitmap.h +++ b/include/linux/sbitmap.h @@ -320,12 +320,6 @@ static inline void sbitmap_deferred_clear_bit(struct sbitmap *sb, unsigned int b set_bit(SB_NR_TO_BIT(sb, bitnr), addr); } -static inline void sbitmap_clear_bit_unlock(struct sbitmap *sb, - unsigned int bitnr) -{ - clear_bit_unlock(SB_NR_TO_BIT(sb, bitnr), __sbitmap_word(sb, bitnr)); -} - static inline int sbitmap_test_bit(struct sbitmap *sb, unsigned int bitnr) { return test_bit(SB_NR_TO_BIT(sb, bitnr), __sbitmap_word(sb, bitnr)); From patchwork Fri Feb 28 23:29:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 11413489 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C82BC14B7 for ; Fri, 28 Feb 2020 23:29:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 948FD246A8 for ; Fri, 28 Feb 2020 23:29:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="ShMK6WYv" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726752AbgB1X37 (ORCPT ); Fri, 28 Feb 2020 18:29:59 -0500 Received: from us-smtp-1.mimecast.com ([205.139.110.61]:35272 "EHLO us-smtp-delivery-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1725957AbgB1X37 (ORCPT ); Fri, 28 Feb 2020 18:29:59 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1582932597; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=HkzQ9ZaOjR48dtlJ3KQXI2dZp2DSkC8Scra19N5z5qs=; b=ShMK6WYvKtOVpxF5Wq5Bs6uQzJgi1AcuxtLiYMa7E75A7gzVtV21kRWK1vHt6/H1mRhqNI GHYvOikKxAFEJUOEk1Z5w/0RA5kHFeI6+E/SFIew+ijEtomCcLX1PJQ70scuM3psbJxqcn yX16RO8mE2tR9a1LxHFYGeX3cak2M6E= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-242-o63HwD7YPE-nzxmA5ZAzgQ-1; Fri, 28 Feb 2020 18:29:53 -0500 X-MC-Unique: o63HwD7YPE-nzxmA5ZAzgQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher 
From patchwork Fri Feb 28 23:29:14 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11413489
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, James Bottomley,
    linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: Ming Lei, Omar Sandoval, Sathya Prakash, Chaitra P B,
    Suganath Prabu Subramani, Kashyap Desai, Sumit Saxena,
    Shivasharan S, "Ewan D. Milne", Hannes Reinecke, Bart Van Assche
Subject: [PATCH V2 04/10] sbitmap: move allocation hint into sbitmap
Date: Sat, 29 Feb 2020 07:29:14 +0800
Message-Id: <20200228232920.20960-5-ming.lei@redhat.com>
In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com>
References: <20200228232920.20960-1-ming.lei@redhat.com>

The allocation hint really belongs in sbitmap. Also, when an sbitmap's
depth is high and there is no need for multiple wakeup queues, users can
benefit from the percpu allocation hint too. So move the allocation hint
into sbitmap.

Cc: Omar Sandoval
Cc: Sathya Prakash
Cc: Chaitra P B
Cc: Suganath Prabu Subramani
Cc: Kashyap Desai
Cc: Sumit Saxena
Cc: Shivasharan S
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Cc: Bart Van Assche
Signed-off-by: Ming Lei
---
 block/blk-mq.c            |   2 +-
 block/kyber-iosched.c     |   2 +-
 drivers/dma/idxd/device.c |   2 +-
 drivers/dma/idxd/submit.c |   2 +-
 include/linux/sbitmap.h   |  41 +++++++++-----
 lib/sbitmap.c             | 115 +++++++++++++++++++++++---------------
 6 files changed, 99 insertions(+), 65 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 116626b76b08..76a5e919c336 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2397,7 +2397,7 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
 		goto free_cpumask;
 
 	if (sbitmap_init_node(&hctx->ctx_map, nr_cpu_ids, ilog2(8),
-				gfp, node, false))
+				gfp, node, false, false))
 		goto free_ctxs;
 
 	hctx->nr_ctx = 0;
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index 21587cca1cb3..8ecb96bb4fd5 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -480,7 +480,7 @@ static int kyber_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int hctx_idx)
 	for (i = 0; i < KYBER_NUM_DOMAINS; i++) {
 		if (sbitmap_init_node(&khd->kcq_map[i], hctx->nr_ctx,
 				      ilog2(8), GFP_KERNEL, hctx->numa_node,
-				      false)) {
+				      false, false)) {
 			while (--i >= 0)
 				sbitmap_free(&khd->kcq_map[i]);
 			goto err_kcqs;
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 5d59c868b2da..ed9f877a32de 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -181,7 +181,7 @@ int idxd_wq_alloc_resources(struct idxd_wq *wq)
 		goto fail_alloc_descs;
 
 	rc = sbitmap_init_node(&wq->sbmap, num_descs, -1, GFP_KERNEL,
-			       dev_to_node(dev), false);
+			       dev_to_node(dev), false, false);
 	if (rc < 0)
 		goto fail_sbitmap_init;
diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
index 842af85c7d43..17f7607ebfa1 100644
--- a/drivers/dma/idxd/submit.c
+++ b/drivers/dma/idxd/submit.c
@@ -45,7 +45,7 @@ struct idxd_desc *idxd_alloc_desc(struct idxd_wq *wq, enum idxd_op_type optype)
 		percpu_up_read(&wq->submit_lock);
 	}
 
-	idx = sbitmap_get(&wq->sbmap, 0);
+	idx = sbitmap_get(&wq->sbmap);
 	if (idx < 0) {
 		atomic_dec(&wq->dq_count);
 		return ERR_PTR(-EAGAIN);
diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 68097b052ec3..103b41c03311 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -70,6 +70,14 @@ struct sbitmap {
 	 * @map: Allocated bitmap.
 	 */
 	struct sbitmap_word *map;
+
+	/*
+	 * @alloc_hint: Cache of last successfully allocated or freed bit.
+	 *
+	 * This is per-cpu, which allows multiple users to stick to different
+	 * cachelines until the map is exhausted.
+	 */
+	unsigned int __percpu *alloc_hint;
 };
 
 #define SBQ_WAIT_QUEUES 8
@@ -105,14 +113,6 @@ struct sbitmap_queue {
 	 */
 	struct sbitmap sb;
 
-	/*
-	 * @alloc_hint: Cache of last successfully allocated or freed bit.
-	 *
-	 * This is per-cpu, which allows multiple users to stick to different
-	 * cachelines until the map is exhausted.
-	 */
-	unsigned int __percpu *alloc_hint;
-
 	/**
 	 * @wake_batch: Number of bits which must be freed before we wake up any
 	 * waiters.
@@ -152,11 +152,13 @@ struct sbitmap_queue {
  * @round_robin: If true, be stricter about allocation order; always allocate
 *               starting from the last allocated bit. This is less efficient
 *               than the default behavior (false).
+ * @alloc_hint: If true, apply percpu hint for where to start searching for
+ *              a free bit.
  *
  * Return: Zero on success or negative errno on failure.
  */
 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
-		      gfp_t flags, int node, bool round_robin);
+		      gfp_t flags, int node, bool round_robin, bool alloc_hint);
 
 /**
  * sbitmap_free() - Free memory used by a &struct sbitmap.
@@ -164,6 +166,7 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
  */
 static inline void sbitmap_free(struct sbitmap *sb)
 {
+	free_percpu(sb->alloc_hint);
 	kfree(sb->map);
 	sb->map = NULL;
 }
@@ -181,19 +184,17 @@ void sbitmap_resize(struct sbitmap *sb, unsigned int depth);
 /**
  * sbitmap_get() - Try to allocate a free bit from a &struct sbitmap.
  * @sb: Bitmap to allocate from.
- * @alloc_hint: Hint for where to start searching for a free bit.
  *
  * This operation provides acquire barrier semantics if it succeeds.
  *
  * Return: Non-negative allocated bit number if successful, -1 otherwise.
  */
-int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint);
+int sbitmap_get(struct sbitmap *sb);
 
 /**
  * sbitmap_get_shallow() - Try to allocate a free bit from a &struct sbitmap,
  * limiting the depth used from each word.
  * @sb: Bitmap to allocate from.
- * @alloc_hint: Hint for where to start searching for a free bit.
  * @shallow_depth: The maximum number of bits to allocate from a single word.
  *
  * This rather specific operation allows for having multiple users with
@@ -205,8 +206,7 @@ int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint);
  *
  * Return: Non-negative allocated bit number if successful, -1 otherwise.
  */
-int sbitmap_get_shallow(struct sbitmap *sb, unsigned int alloc_hint,
-			unsigned long shallow_depth);
+int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth);
 
 /**
  * sbitmap_any_bit_set() - Check for a set bit in a &struct sbitmap.
@@ -320,6 +320,18 @@ static inline void sbitmap_deferred_clear_bit(struct sbitmap *sb, unsigned int b
 	set_bit(SB_NR_TO_BIT(sb, bitnr), addr);
 }
 
+/*
+ * Pair of sbitmap_get, and this one applies both cleared bit and
+ * allocation hint.
+ */
+static inline void sbitmap_put(struct sbitmap *sb, unsigned int bitnr)
+{
+	sbitmap_deferred_clear_bit(sb, bitnr);
+
+	if (likely(sb->alloc_hint && !sb->round_robin && bitnr < sb->depth))
+		*this_cpu_ptr(sb->alloc_hint) = bitnr;
+}
+
 static inline int sbitmap_test_bit(struct sbitmap *sb, unsigned int bitnr)
 {
 	return test_bit(SB_NR_TO_BIT(sb, bitnr), __sbitmap_word(sb, bitnr));
@@ -368,7 +380,6 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 static inline void sbitmap_queue_free(struct sbitmap_queue *sbq)
 {
 	kfree(sbq->ws);
-	free_percpu(sbq->alloc_hint);
 	sbitmap_free(&sbq->sb);
 }
 
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 683343e02c3b..ca1a446574aa 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -9,52 +9,51 @@
 #include <linux/sbitmap.h>
 #include <linux/seq_file.h>
 
-static int init_alloc_hint(struct sbitmap_queue *sbq, gfp_t flags)
+static int init_alloc_hint(struct sbitmap *sb, gfp_t flags)
 {
-	unsigned depth = sbq->sb.depth;
+	unsigned depth = sb->depth;
 
-	sbq->alloc_hint = alloc_percpu_gfp(unsigned int, flags);
-	if (!sbq->alloc_hint)
+	sb->alloc_hint = alloc_percpu_gfp(unsigned int, flags);
+	if (!sb->alloc_hint)
 		return -ENOMEM;
 
-	if (depth && !sbq->sb.round_robin) {
+	if (depth && !sb->round_robin) {
 		int i;
 
 		for_each_possible_cpu(i)
-			*per_cpu_ptr(sbq->alloc_hint, i) = prandom_u32() % depth;
+			*per_cpu_ptr(sb->alloc_hint, i) = prandom_u32() % depth;
 	}
-
 	return 0;
 }
 
-static inline unsigned update_alloc_hint_before_get(struct sbitmap_queue *sbq,
+static inline unsigned update_alloc_hint_before_get(struct sbitmap *sb,
 						    unsigned int depth)
 {
 	unsigned hint;
 
-	hint = this_cpu_read(*sbq->alloc_hint);
+	hint = this_cpu_read(*sb->alloc_hint);
 	if (unlikely(hint >= depth)) {
 		hint = depth ? prandom_u32() % depth : 0;
-		this_cpu_write(*sbq->alloc_hint, hint);
+		this_cpu_write(*sb->alloc_hint, hint);
 	}
 
 	return hint;
 }
 
-static inline void update_alloc_hint_after_get(struct sbitmap_queue *sbq,
+static inline void update_alloc_hint_after_get(struct sbitmap *sb,
 					       unsigned int depth,
 					       unsigned int hint,
 					       unsigned int nr)
 {
 	if (nr == -1) {
 		/* If the map is full, a hint won't do us much good. */
-		this_cpu_write(*sbq->alloc_hint, 0);
-	} else if (nr == hint || unlikely(sbq->sb.round_robin)) {
+		this_cpu_write(*sb->alloc_hint, 0);
+	} else if (nr == hint || unlikely(sb->round_robin)) {
 		/* Only update the hint if we used it. */
 		hint = nr + 1;
 		if (hint >= depth - 1)
 			hint = 0;
-		this_cpu_write(*sbq->alloc_hint, hint);
+		this_cpu_write(*sb->alloc_hint, hint);
 	}
 }
 
@@ -91,7 +90,8 @@ static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)
 }
 
 int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
-		      gfp_t flags, int node, bool round_robin)
+		      gfp_t flags, int node, bool round_robin,
+		      bool alloc_hint)
 {
 	unsigned int bits_per_word;
 	unsigned int i;
@@ -123,9 +123,18 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 		return 0;
 	}
 
+	if (alloc_hint) {
+		if (init_alloc_hint(sb, flags))
+			return -ENOMEM;
+	} else {
+		sb->alloc_hint = NULL;
+	}
+
 	sb->map = kcalloc_node(sb->map_nr, sizeof(*sb->map), flags, node);
-	if (!sb->map)
+	if (!sb->map) {
+		free_percpu(sb->alloc_hint);
 		return -ENOMEM;
+	}
 
 	for (i = 0; i < sb->map_nr; i++) {
 		sb->map[i].depth = min(depth, bits_per_word);
@@ -204,7 +213,7 @@ static int sbitmap_find_bit_in_index(struct sbitmap *sb, int index,
 	return nr;
 }
 
-int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint)
+static int __sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint)
 {
 	unsigned int i, index;
 	int nr = -1;
@@ -236,10 +245,29 @@ int sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint)
 
 	return nr;
 }
+
+int sbitmap_get(struct sbitmap *sb)
+{
+	int nr;
+
+	if (likely(sb->alloc_hint)) {
+		unsigned int hint, depth;
+
+		depth = READ_ONCE(sb->depth);
+		hint = update_alloc_hint_before_get(sb, depth);
+		nr = __sbitmap_get(sb, hint);
+		update_alloc_hint_after_get(sb, depth, hint, nr);
+	} else {
+		nr = __sbitmap_get(sb, 0);
+	}
+
+	return nr;
+}
 EXPORT_SYMBOL_GPL(sbitmap_get);
 
-int sbitmap_get_shallow(struct sbitmap *sb, unsigned int alloc_hint,
-			unsigned long shallow_depth)
+static int __sbitmap_get_shallow(struct sbitmap *sb,
+				 unsigned int alloc_hint,
+				 unsigned long shallow_depth)
 {
 	unsigned int i, index;
 	int nr = -1;
@@ -271,6 +299,23 @@ int sbitmap_get_shallow(struct sbitmap *sb, unsigned int alloc_hint,
 
 	return nr;
 }
+
+int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
+{
+	int nr;
+
+	if (likely(sb->alloc_hint)) {
+		unsigned int hint, depth;
+
+		depth = READ_ONCE(sb->depth);
+		hint = update_alloc_hint_before_get(sb, depth);
+		nr = __sbitmap_get_shallow(sb, hint, shallow_depth);
+		update_alloc_hint_after_get(sb, depth, hint, nr);
+	} else {
+		nr = __sbitmap_get_shallow(sb, 0, shallow_depth);
+	}
+	return nr;
+}
 EXPORT_SYMBOL_GPL(sbitmap_get_shallow);
 
 bool sbitmap_any_bit_set(const struct sbitmap *sb)
@@ -405,15 +450,10 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 	int i;
 
 	ret = sbitmap_init_node(&sbq->sb, depth, shift, flags, node,
-				round_robin);
+				round_robin, true);
 	if (ret)
 		return ret;
 
-	if (init_alloc_hint(sbq, flags) != 0) {
-		sbitmap_free(&sbq->sb);
-		return -ENOMEM;
-	}
-
 	sbq->min_shallow_depth = UINT_MAX;
 	sbq->wake_batch = sbq_calc_wake_batch(sbq, depth);
 	atomic_set(&sbq->wake_index, 0);
@@ -421,7 +461,6 @@ int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
 
 	sbq->ws = kzalloc_node(SBQ_WAIT_QUEUES * sizeof(*sbq->ws), flags, node);
 	if (!sbq->ws) {
-		free_percpu(sbq->alloc_hint);
 		sbitmap_free(&sbq->sb);
 		return -ENOMEM;
 	}
@@ -463,32 +502,16 @@ EXPORT_SYMBOL_GPL(sbitmap_queue_resize);
 
 int __sbitmap_queue_get(struct sbitmap_queue *sbq)
 {
-	unsigned int hint, depth;
-	int nr;
-
-	depth = READ_ONCE(sbq->sb.depth);
-	hint = update_alloc_hint_before_get(sbq, depth);
-	nr = sbitmap_get(&sbq->sb, hint);
-	update_alloc_hint_after_get(sbq, depth, hint, nr);
-
-	return nr;
+	return sbitmap_get(&sbq->sb);
 }
 EXPORT_SYMBOL_GPL(__sbitmap_queue_get);
 
 int __sbitmap_queue_get_shallow(struct sbitmap_queue *sbq,
 				unsigned int shallow_depth)
 {
-	unsigned int hint, depth;
-	int nr;
-
 	WARN_ON_ONCE(shallow_depth < sbq->min_shallow_depth);
 
-	depth = READ_ONCE(sbq->sb.depth);
-	hint = update_alloc_hint_before_get(sbq, depth);
-	nr = sbitmap_get_shallow(&sbq->sb, hint, shallow_depth);
-	update_alloc_hint_after_get(sbq, depth, hint, nr);
-
-	return nr;
+	return sbitmap_get_shallow(&sbq->sb, shallow_depth);
 }
 EXPORT_SYMBOL_GPL(__sbitmap_queue_get_shallow);
 
@@ -597,7 +620,7 @@ void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
 	sbitmap_queue_wake_up(sbq);
 
 	if (likely(!sbq->sb.round_robin && nr < sbq->sb.depth))
-		*per_cpu_ptr(sbq->alloc_hint, cpu) = nr;
+		*per_cpu_ptr(sbq->sb.alloc_hint, cpu) = nr;
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_clear);
 
@@ -635,7 +658,7 @@ void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m)
 		if (!first)
 			seq_puts(m, ", ");
 		first = false;
-		seq_printf(m, "%u", *per_cpu_ptr(sbq->alloc_hint, i));
+		seq_printf(m, "%u", *per_cpu_ptr(sbq->sb.alloc_hint, i));
 	}
 	seq_puts(m, "}\n");
 
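With the hint inside sbitmap, a bare sbitmap user now gets the percpu-hint
benefit without needing a full sbitmap_queue, and the new sbitmap_put() both
clears the bit (via the deferred-clear path) and feeds it back as this CPU's
next hint. An illustrative alloc/free pair under the post-patch API (the map
name and helper names are invented for the example):

static struct sbitmap desc_map;

static int example_setup(int node)
{
        /* round_robin=false, alloc_hint=true: relaxed order, percpu hints. */
        return sbitmap_init_node(&desc_map, 1024, -1, GFP_KERNEL, node,
                                 false, true);
}

static int example_alloc_desc(void)
{
        return sbitmap_get(&desc_map);  /* hint is handled internally now */
}

static void example_free_desc(int nr)
{
        /* Deferred-clears the bit and caches it as the next local hint. */
        sbitmap_put(&desc_map, nr);
}
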
From patchwork Fri Feb 28 23:29:15 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11413493
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, James Bottomley,
    linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: Ming Lei, Omar Sandoval, Sathya Prakash, Chaitra P B,
    Suganath Prabu Subramani, Kashyap Desai, Sumit Saxena,
    Shivasharan S, "Ewan D. Milne", Hannes Reinecke, Bart Van Assche
Subject: [PATCH V2 05/10] sbitmap: export sbitmap_weight
Date: Sat, 29 Feb 2020 07:29:15 +0800
Message-Id: <20200228232920.20960-6-ming.lei@redhat.com>
In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com>
References: <20200228232920.20960-1-ming.lei@redhat.com>

SCSI's .device_busy will be converted to sbitmap, which needs
sbitmap_weight, so export the helper.

Cc: Omar Sandoval
Cc: Sathya Prakash
Cc: Chaitra P B
Cc: Suganath Prabu Subramani
Cc: Kashyap Desai
Cc: Sumit Saxena
Cc: Shivasharan S
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Cc: Bart Van Assche
Signed-off-by: Ming Lei
---
 include/linux/sbitmap.h |  9 +++++++++
 lib/sbitmap.c           | 11 ++++++-----
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 103b41c03311..34343ce3ef6c 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -346,6 +346,15 @@ static inline int sbitmap_test_bit(struct sbitmap *sb, unsigned int bitnr)
  */
 void sbitmap_show(struct sbitmap *sb, struct seq_file *m);
 
+
+/**
+ * sbitmap_weight() - Return how many real bits set in a &struct sbitmap.
+ * @sb: Bitmap to check.
+ *
+ * Return: How many real bits set
+ */
+unsigned int sbitmap_weight(const struct sbitmap *sb);
+
 /**
 * sbitmap_bitmap_show() - Write a hex dump of a &struct sbitmap to a &struct
  * seq_file.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index ca1a446574aa..254475865b3d 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -345,20 +345,21 @@ static unsigned int __sbitmap_weight(const struct sbitmap *sb, bool set)
 	return weight;
 }
 
-static unsigned int sbitmap_weight(const struct sbitmap *sb)
+static unsigned int sbitmap_cleared(const struct sbitmap *sb)
 {
-	return __sbitmap_weight(sb, true);
+	return __sbitmap_weight(sb, false);
 }
 
-static unsigned int sbitmap_cleared(const struct sbitmap *sb)
+unsigned int sbitmap_weight(const struct sbitmap *sb)
 {
-	return __sbitmap_weight(sb, false);
+	return __sbitmap_weight(sb, true) - sbitmap_cleared(sb);
 }
+EXPORT_SYMBOL_GPL(sbitmap_weight);
 
 void sbitmap_show(struct sbitmap *sb, struct seq_file *m)
 {
 	seq_printf(m, "depth=%u\n", sb->depth);
-	seq_printf(m, "busy=%u\n", sbitmap_weight(sb) - sbitmap_cleared(sb));
+	seq_printf(m, "busy=%u\n", sbitmap_weight(sb));
 	seq_printf(m, "cleared=%u\n", sbitmap_cleared(sb));
 	seq_printf(m, "bits_per_word=%u\n", 1U << sb->shift);
 	seq_printf(m, "map_nr=%u\n", sb->map_nr);
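Note that the exported sbitmap_weight() now returns set bits minus
deferred-cleared bits, i.e. the number of bits that are really busy, which is
exactly the quantity a sdev->device_busy replacement needs. A hypothetical
caller (function name invented for the example):

/* Hypothetical helper: how many descriptors are currently in flight. */
static unsigned int example_busy_count(const struct sbitmap *sb)
{
        return sbitmap_weight(sb);      /* set minus deferred-cleared bits */
}
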
From patchwork Fri Feb 28 23:29:16 2020
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 11413499
From: Ming Lei
To: Jens Axboe, linux-block@vger.kernel.org, James Bottomley,
    linux-scsi@vger.kernel.org, "Martin K. Petersen"
Cc: Ming Lei, Omar Sandoval, Sathya Prakash, Chaitra P B,
    Suganath Prabu Subramani, Kashyap Desai, Sumit Saxena,
    Shivasharan S, "Ewan D. Milne", Hannes Reinecke, Bart Van Assche
Subject: [PATCH V2 06/10] sbitmap: add helper of sbitmap_calculate_shift
Date: Sat, 29 Feb 2020 07:29:16 +0800
Message-Id: <20200228232920.20960-7-ming.lei@redhat.com>
In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com>
References: <20200228232920.20960-1-ming.lei@redhat.com>

Move the code for calculating the default shift into a public helper,
which SCSI can use to calculate its shift.

Cc: Omar Sandoval
Cc: Sathya Prakash
Cc: Chaitra P B
Cc: Suganath Prabu Subramani
Cc: Kashyap Desai
Cc: Sumit Saxena
Cc: Shivasharan S
Cc: Ewan D. Milne
Cc: Hannes Reinecke
Cc: Bart Van Assche
Signed-off-by: Ming Lei
---
 include/linux/sbitmap.h | 18 ++++++++++++++++++
 lib/sbitmap.c           | 16 +++-------------
 2 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 34343ce3ef6c..2a40364d6d00 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -337,6 +337,24 @@ static inline int sbitmap_test_bit(struct sbitmap *sb, unsigned int bitnr)
 	return test_bit(SB_NR_TO_BIT(sb, bitnr), __sbitmap_word(sb, bitnr));
 }
 
+static inline int sbitmap_calculate_shift(unsigned int depth)
+{
+	int shift = ilog2(BITS_PER_LONG);
+
+	/*
+	 * If the bitmap is small, shrink the number of bits per word so
+	 * we spread over a few cachelines, at least. If less than 4
+	 * bits, just forget about it, it's not going to work optimally
+	 * anyway.
+	 */
+	if (depth >= 4) {
+		while ((4U << shift) > depth)
+			shift--;
+	}
+
+	return shift;
+}
+
 /**
  * sbitmap_show() - Dump &struct sbitmap information to a &struct seq_file.
  * @sb: Bitmap to show.
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 254475865b3d..bb88a3643d64 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -96,19 +96,9 @@ int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
 	unsigned int bits_per_word;
 	unsigned int i;
 
-	if (shift < 0) {
-		shift = ilog2(BITS_PER_LONG);
-		/*
-		 * If the bitmap is small, shrink the number of bits per word so
-		 * we spread over a few cachelines, at least. If less than 4
-		 * bits, just forget about it, it's not going to work optimally
-		 * anyway.
-		 */
-		if (depth >= 4) {
-			while ((4U << shift) > depth)
-				shift--;
-		}
-	}
+	if (shift < 0)
+		shift = sbitmap_calculate_shift(depth);
+
 	bits_per_word = 1U << shift;
 	if (bits_per_word > BITS_PER_LONG)
 		return -EINVAL;
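The helper trades bits-per-word against cacheline spread: it keeps shrinking
the word size until the map spans at least four words. For instance, on a
64-bit machine ilog2(BITS_PER_LONG) is 6, so for depth == 64 the loop steps
the shift 6 -> 5 -> 4 (since 4 << 4 == 64 is no longer greater than 64),
giving 16 bits per word across 4 words. A sketch of a caller that precomputes
the shift the way SCSI-side code could (the function name is illustrative):

/* Illustrative: pick the shift to pass to sbitmap_init_node() later. */
static int example_pick_shift(unsigned int queue_depth)
{
        /* e.g. queue_depth == 64 on 64-bit yields shift == 4:
         * 16 bits per word, so the map spreads over 4 words. */
        return sbitmap_calculate_shift(queue_depth);
}
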
Milne" , Hannes Reinecke , Bart Van Assche Subject: [PATCH V2 07/10] blk-mq: return budget token from .get_budget callback Date: Sat, 29 Feb 2020 07:29:17 +0800 Message-Id: <20200228232920.20960-8-ming.lei@redhat.com> In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com> References: <20200228232920.20960-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org SCSI uses one global atomic variable to track queue depth for each LUN/request queue. This way doesn't scale well when there is lots of CPU cores and the disk is very fast. It has been observed that IOPS is affected a lot by tracking queue depth via sdev->device_busy in IO path. Return budget token from .get_budget callback, and the budget token can be passed to driver, so that we can replace the atomic variable with sbitmap_queue, then the scale issue can be fixed. Cc: Omar Sandoval Cc: Sathya Prakash Cc: Chaitra P B Cc: Suganath Prabu Subramani Cc: Kashyap Desai Cc: Sumit Saxena Cc: Shivasharan S Cc: Ewan D. Milne Cc: Hannes Reinecke Cc: Bart Van Assche Signed-off-by: Ming Lei --- block/blk-mq-sched.c | 20 ++++++++++++-------- block/blk-mq.c | 18 ++++++++++++------ block/blk-mq.h | 11 ++++++----- drivers/scsi/scsi_lib.c | 10 +++++----- include/linux/blk-mq.h | 4 ++-- 5 files changed, 37 insertions(+), 26 deletions(-) diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c index ca22afd47b3d..38dee68ba04a 100644 --- a/block/blk-mq-sched.c +++ b/block/blk-mq-sched.c @@ -90,6 +90,7 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx) struct request_queue *q = hctx->queue; struct elevator_queue *e = q->elevator; LIST_HEAD(rq_list); + int budget_token; do { struct request *rq; @@ -97,12 +98,13 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx) if (e->type->ops.has_work && !e->type->ops.has_work(hctx)) break; - if (!blk_mq_get_dispatch_budget(hctx)) + budget_token = blk_mq_get_dispatch_budget(hctx); + if (budget_token < 0) break; rq = e->type->ops.dispatch_request(hctx); if (!rq) { - blk_mq_put_dispatch_budget(hctx); + blk_mq_put_dispatch_budget(hctx, budget_token); break; } @@ -112,7 +114,7 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx) * in blk_mq_dispatch_rq_list(). 
*/ list_add(&rq->queuelist, &rq_list); - } while (blk_mq_dispatch_rq_list(q, &rq_list, true)); + } while (blk_mq_dispatch_rq_list(q, &rq_list, budget_token)); } static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx, @@ -136,6 +138,7 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx) struct request_queue *q = hctx->queue; LIST_HEAD(rq_list); struct blk_mq_ctx *ctx = READ_ONCE(hctx->dispatch_from); + int budget_token; do { struct request *rq; @@ -143,12 +146,13 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx) if (!sbitmap_any_bit_set(&hctx->ctx_map)) break; - if (!blk_mq_get_dispatch_budget(hctx)) + budget_token = blk_mq_get_dispatch_budget(hctx); + if (budget_token < 0) break; rq = blk_mq_dequeue_from_ctx(hctx, ctx); if (!rq) { - blk_mq_put_dispatch_budget(hctx); + blk_mq_put_dispatch_budget(hctx, budget_token); break; } @@ -162,7 +166,7 @@ static void blk_mq_do_dispatch_ctx(struct blk_mq_hw_ctx *hctx) /* round robin for fair dispatch */ ctx = blk_mq_next_ctx(hctx, rq->mq_ctx); - } while (blk_mq_dispatch_rq_list(q, &rq_list, true)); + } while (blk_mq_dispatch_rq_list(q, &rq_list, budget_token)); WRITE_ONCE(hctx->dispatch_from, ctx); } @@ -206,7 +210,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx) */ if (!list_empty(&rq_list)) { blk_mq_sched_mark_restart_hctx(hctx); - if (blk_mq_dispatch_rq_list(q, &rq_list, false)) { + if (blk_mq_dispatch_rq_list(q, &rq_list, -1)) { if (has_sched_dispatch) blk_mq_do_dispatch_sched(hctx); else @@ -219,7 +223,7 @@ void blk_mq_sched_dispatch_requests(struct blk_mq_hw_ctx *hctx) blk_mq_do_dispatch_ctx(hctx); } else { blk_mq_flush_busy_ctxs(hctx, &rq_list); - blk_mq_dispatch_rq_list(q, &rq_list, false); + blk_mq_dispatch_rq_list(q, &rq_list, -1); } } diff --git a/block/blk-mq.c b/block/blk-mq.c index 76a5e919c336..43ae2b973d99 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1182,13 +1182,14 @@ static void blk_mq_update_dispatch_busy(struct blk_mq_hw_ctx *hctx, bool busy) * Returns true if we did some work AND can potentially do more. */ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, - bool got_budget) + int budget_token) { struct blk_mq_hw_ctx *hctx; struct request *rq, *nxt; bool no_tag = false; int errors, queued; blk_status_t ret = BLK_STS_OK; + bool got_budget = budget_token >= 0; if (list_empty(list)) return false; @@ -1205,8 +1206,11 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, rq = list_first_entry(list, struct request, queuelist); hctx = rq->mq_hctx; - if (!got_budget && !blk_mq_get_dispatch_budget(hctx)) - break; + if (!got_budget) { + budget_token = blk_mq_get_dispatch_budget(hctx); + if (budget_token < 0) + break; + } if (!blk_mq_get_driver_tag(rq)) { /* @@ -1217,7 +1221,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, * we'll re-run it below. */ if (!blk_mq_mark_tag_wait(hctx, rq)) { - blk_mq_put_dispatch_budget(hctx); + blk_mq_put_dispatch_budget(hctx, budget_token); /* * For non-shared tags, the RESTART check * will suffice. @@ -1819,6 +1823,7 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, { struct request_queue *q = rq->q; bool run_queue = true; + int budget_token; /* * RCU or SRCU read lock is needed before checking quiesced flag. 
@@ -1836,11 +1841,12 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, if (q->elevator && !bypass_insert) goto insert; - if (!blk_mq_get_dispatch_budget(hctx)) + budget_token = blk_mq_get_dispatch_budget(hctx); + if (budget_token < 0) goto insert; if (!blk_mq_get_driver_tag(rq)) { - blk_mq_put_dispatch_budget(hctx); + blk_mq_put_dispatch_budget(hctx, budget_token); goto insert; } diff --git a/block/blk-mq.h b/block/blk-mq.h index eaaca8fc1c28..1c82d89791c0 100644 --- a/block/blk-mq.h +++ b/block/blk-mq.h @@ -40,7 +40,7 @@ struct blk_mq_ctx { void blk_mq_exit_queue(struct request_queue *q); int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr); void blk_mq_wake_waiters(struct request_queue *q); -bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool); +bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, int); void blk_mq_add_to_requeue_list(struct request *rq, bool at_head, bool kick_requeue_list); void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list); @@ -179,21 +179,22 @@ unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part); void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part, unsigned int inflight[2]); -static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx) +static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx, + int budget_token) { struct request_queue *q = hctx->queue; if (q->mq_ops->put_budget) - q->mq_ops->put_budget(hctx); + q->mq_ops->put_budget(hctx, budget_token); } -static inline bool blk_mq_get_dispatch_budget(struct blk_mq_hw_ctx *hctx) +static inline int blk_mq_get_dispatch_budget(struct blk_mq_hw_ctx *hctx) { struct request_queue *q = hctx->queue; if (q->mq_ops->get_budget) return q->mq_ops->get_budget(hctx); - return true; + return 0; } static inline void __blk_mq_put_driver_tag(struct blk_mq_hw_ctx *hctx, diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index 610ee41fa54c..c8eb555b3ce8 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -1619,7 +1619,7 @@ static void scsi_mq_done(struct scsi_cmnd *cmd) clear_bit(SCMD_STATE_COMPLETE, &cmd->state); } -static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx) +static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx, int budget_token) { struct request_queue *q = hctx->queue; struct scsi_device *sdev = q->queuedata; @@ -1627,17 +1627,17 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx) atomic_dec(&sdev->device_busy); } -static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx) +static int scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx) { struct request_queue *q = hctx->queue; struct scsi_device *sdev = q->queuedata; if (scsi_dev_queue_ready(q, sdev)) - return true; + return 0; if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev)) blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY); - return false; + return -1; } static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, @@ -1701,7 +1701,7 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, if (scsi_target(sdev)->can_queue > 0) atomic_dec(&scsi_target(sdev)->target_busy); out_put_budget: - scsi_mq_put_budget(hctx); + scsi_mq_put_budget(hctx, 0); switch (ret) { case BLK_STS_OK: break; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 11cfd6470b1a..6fd8d7cfe158 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -265,8 +265,8 @@ struct blk_mq_queue_data { typedef 
blk_status_t (queue_rq_fn)(struct blk_mq_hw_ctx *, const struct blk_mq_queue_data *); typedef void (commit_rqs_fn)(struct blk_mq_hw_ctx *); -typedef bool (get_budget_fn)(struct blk_mq_hw_ctx *); -typedef void (put_budget_fn)(struct blk_mq_hw_ctx *); +typedef int (get_budget_fn)(struct blk_mq_hw_ctx *); +typedef void (put_budget_fn)(struct blk_mq_hw_ctx *, int); typedef enum blk_eh_timer_return (timeout_fn)(struct request *, bool); typedef int (init_hctx_fn)(struct blk_mq_hw_ctx *, void *, unsigned int); typedef void (exit_hctx_fn)(struct blk_mq_hw_ctx *, unsigned int); From patchwork Fri Feb 28 23:29:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 11413505 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 11008159A for ; Fri, 28 Feb 2020 23:30:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E4CC1246AC for ; Fri, 28 Feb 2020 23:30:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="DBp8HRxr" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726658AbgB1XaV (ORCPT ); Fri, 28 Feb 2020 18:30:21 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:31499 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726277AbgB1XaV (ORCPT ); Fri, 28 Feb 2020 18:30:21 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1582932620; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=B0NLt7jSWmmwehEELmRWoiWSF3chC1MReX7sLahxdAA=; b=DBp8HRxrscgkHgW1/9iv4blEX4GLCeNeGIuzpstCz400smLNU+df9npVbafbLgkJIzDR0P rvBiM4AyMtRWyFUsOd2rW0720b/oh2IebmhF12ml349PHOK08yIihiYc+f1xLsnMOczCly b465Pe08MRyrs34i/s2GuPFZGDQPv3Q= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-444-hHycpg4WMSmnnRMHCBY9Uw-1; Fri, 28 Feb 2020 18:30:16 -0500 X-MC-Unique: hHycpg4WMSmnnRMHCBY9Uw-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 374BC13F5; Fri, 28 Feb 2020 23:30:14 +0000 (UTC) Received: from localhost (ovpn-8-18.pek2.redhat.com [10.72.8.18]) by smtp.corp.redhat.com (Postfix) with ESMTP id 040538B77B; Fri, 28 Feb 2020 23:30:10 +0000 (UTC) From: Ming Lei To: Jens Axboe , linux-block@vger.kernel.org, James Bottomley , linux-scsi@vger.kernel.org, "Martin K . Petersen" Cc: Ming Lei , Omar Sandoval , Sathya Prakash , Chaitra P B , Suganath Prabu Subramani , Kashyap Desai , Sumit Saxena , Shivasharan S , "Ewan D . 
Milne" , Hannes Reinecke , Bart Van Assche Subject: [PATCH V2 08/10] blk-mq: pass budget token to dirver via blk_mq_queue_data Date: Sat, 29 Feb 2020 07:29:18 +0800 Message-Id: <20200228232920.20960-9-ming.lei@redhat.com> In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com> References: <20200228232920.20960-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Pass the budget token to driver via blk_mq_queue_dada before calling .queue_rq(), and prepare for tracing SCSI's device_busy via sbitmap_queue. Cc: Omar Sandoval Cc: Sathya Prakash Cc: Chaitra P B Cc: Suganath Prabu Subramani Cc: Kashyap Desai Cc: Sumit Saxena Cc: Shivasharan S Cc: Ewan D. Milne Cc: Hannes Reinecke Cc: Bart Van Assche Signed-off-by: Ming Lei --- block/blk-mq.c | 25 +++++++++++++------------ include/linux/blk-mq.h | 1 + 2 files changed, 14 insertions(+), 12 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index 43ae2b973d99..013272cad500 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -1235,6 +1235,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, list_del_init(&rq->queuelist); bd.rq = rq; + bd.budget_token = budget_token; /* * Flag last if we have no more requests, or if we have more @@ -1778,14 +1779,11 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio, } static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx, - struct request *rq, - blk_qc_t *cookie, bool last) + struct blk_mq_queue_data *bd, + blk_qc_t *cookie) { + struct request *rq = bd->rq; struct request_queue *q = rq->q; - struct blk_mq_queue_data bd = { - .rq = rq, - .last = last, - }; blk_qc_t new_cookie; blk_status_t ret; @@ -1796,7 +1794,7 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx, * Any other error (busy), just add it to our list as we * previously would have done. */ - ret = q->mq_ops->queue_rq(hctx, &bd); + ret = q->mq_ops->queue_rq(hctx, bd); switch (ret) { case BLK_STS_OK: blk_mq_update_dispatch_busy(hctx, false); @@ -1823,7 +1821,10 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, { struct request_queue *q = rq->q; bool run_queue = true; - int budget_token; + struct blk_mq_queue_data bd = { + .rq = rq, + .last = last, + }; /* * RCU or SRCU read lock is needed before checking quiesced flag. 
@@ -1841,16 +1842,16 @@ static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx, if (q->elevator && !bypass_insert) goto insert; - budget_token = blk_mq_get_dispatch_budget(hctx); - if (budget_token < 0) + bd.budget_token = blk_mq_get_dispatch_budget(hctx); + if (bd.budget_token < 0) goto insert; if (!blk_mq_get_driver_tag(rq)) { - blk_mq_put_dispatch_budget(hctx, budget_token); + blk_mq_put_dispatch_budget(hctx, bd.budget_token); goto insert; } - return __blk_mq_issue_directly(hctx, rq, cookie, last); + return __blk_mq_issue_directly(hctx, &bd, cookie); insert: if (bypass_insert) return BLK_STS_RESOURCE; diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 6fd8d7cfe158..42b59b97bfd8 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -259,6 +259,7 @@ struct blk_mq_tag_set { */ struct blk_mq_queue_data { struct request *rq; + int budget_token; bool last; }; From patchwork Fri Feb 28 23:29:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 11413509 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 31DC713A4 for ; Fri, 28 Feb 2020 23:30:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 11DCB246AC for ; Fri, 28 Feb 2020 23:30:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Q1paOzq2" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726682AbgB1XaZ (ORCPT ); Fri, 28 Feb 2020 18:30:25 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:28359 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726561AbgB1XaZ (ORCPT ); Fri, 28 Feb 2020 18:30:25 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1582932624; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=e9vcqksjgrzibgB/ZK/iccYxqnzedXPb7WyOQzHZ7cA=; b=Q1paOzq27ddsmy7Tofa5EkHxIvnEtNKTiWuRMRLohn1Dyc66zlTnFCl5GBVPkf+DsvXkI9 NZPn5ryicyrl4Of8ssYSaUs+ueqw+U6x9IW/D4CdT3BHPVl5km7tNMlCE5IwB35AudmvzM kFzfBjsb8BSxmFIssxXVAzN8Oy6Gaqc= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-390-oPu8jtkkN_WK2Ta1KiVlHw-1; Fri, 28 Feb 2020 18:30:22 -0500 X-MC-Unique: oPu8jtkkN_WK2Ta1KiVlHw-1 Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 16CA413F5; Fri, 28 Feb 2020 23:30:20 +0000 (UTC) Received: from localhost (ovpn-8-18.pek2.redhat.com [10.72.8.18]) by smtp.corp.redhat.com (Postfix) with ESMTP id D0A59101D491; Fri, 28 Feb 2020 23:30:16 +0000 (UTC) From: Ming Lei To: Jens Axboe , linux-block@vger.kernel.org, James Bottomley , linux-scsi@vger.kernel.org, "Martin K . Petersen" Cc: Ming Lei , Omar Sandoval , Sathya Prakash , Chaitra P B , Suganath Prabu Subramani , Kashyap Desai , Sumit Saxena , Shivasharan S , "Ewan D . 
Milne" , Hannes Reinecke , Bart Van Assche Subject: [PATCH V2 09/10] scsi: add scsi_device_busy() to read sdev->device_busy Date: Sat, 29 Feb 2020 07:29:19 +0800 Message-Id: <20200228232920.20960-10-ming.lei@redhat.com> In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com> References: <20200228232920.20960-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add scsi_device_busy() for drivers, so that we can prepare for tracking device queue depth via sbitmap_queue. Cc: Omar Sandoval Cc: Sathya Prakash Cc: Chaitra P B Cc: Suganath Prabu Subramani Cc: Kashyap Desai Cc: Sumit Saxena Cc: Shivasharan S Cc: Ewan D. Milne Cc: Hannes Reinecke Cc: Bart Van Assche Signed-off-by: Ming Lei --- drivers/message/fusion/mptsas.c | 2 +- drivers/scsi/mpt3sas/mpt3sas_scsih.c | 2 +- drivers/scsi/scsi_lib.c | 4 ++-- drivers/scsi/scsi_sysfs.c | 2 +- drivers/scsi/sg.c | 2 +- include/scsi/scsi_device.h | 5 +++++ 6 files changed, 11 insertions(+), 6 deletions(-) diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c index 6a79cd0ebe2b..71d96526b24d 100644 --- a/drivers/message/fusion/mptsas.c +++ b/drivers/message/fusion/mptsas.c @@ -3756,7 +3756,7 @@ mptsas_send_link_status_event(struct fw_event_work *fw_event) printk(MYIOC_s_DEBUG_FMT "SDEV OUTSTANDING CMDS" "%d\n", ioc->name, - atomic_read(&sdev->device_busy))); + scsi_device_busy(sdev))); } } diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c index c597d544eb39..6685ec041c38 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c +++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c @@ -3060,7 +3060,7 @@ scsih_dev_reset(struct scsi_cmnd *scmd) MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET, 0, 0, tr_timeout, tr_method); /* Check for busy commands after reset */ - if (r == SUCCESS && atomic_read(&scmd->device->device_busy)) + if (r == SUCCESS && scsi_device_busy(scmd->device)) r = FAILED; out: sdev_printk(KERN_INFO, scmd->device, "device reset: %s scmd(0x%p)\n", diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index c8eb555b3ce8..a3474b418602 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -410,7 +410,7 @@ static void scsi_single_lun_run(struct scsi_device *current_sdev) static inline bool scsi_device_is_busy(struct scsi_device *sdev) { - if (atomic_read(&sdev->device_busy) >= sdev->queue_depth) + if (scsi_device_busy(sdev) >= sdev->queue_depth) return true; if (atomic_read(&sdev->device_blocked) > 0) return true; @@ -1635,7 +1635,7 @@ static int scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx) if (scsi_dev_queue_ready(q, sdev)) return 0; - if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev)) + if (scsi_device_busy(sdev) == 0 && !scsi_device_blocked(sdev)) blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY); return -1; } diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c index 677b5c5403d2..d6cb5a0a03f2 100644 --- a/drivers/scsi/scsi_sysfs.c +++ b/drivers/scsi/scsi_sysfs.c @@ -659,7 +659,7 @@ sdev_show_device_busy(struct device *dev, struct device_attribute *attr, char *buf) { struct scsi_device *sdev = to_scsi_device(dev); - return snprintf(buf, 20, "%d\n", atomic_read(&sdev->device_busy)); + return snprintf(buf, 20, "%d\n", scsi_device_busy(sdev)); } static DEVICE_ATTR(device_busy, S_IRUGO, sdev_show_device_busy, NULL); diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c index 4e6af592f018..943a18256881 
100644 --- a/drivers/scsi/sg.c +++ b/drivers/scsi/sg.c @@ -2507,7 +2507,7 @@ static int sg_proc_seq_show_dev(struct seq_file *s, void *v) scsidp->id, scsidp->lun, (int) scsidp->type, 1, (int) scsidp->queue_depth, - (int) atomic_read(&scsidp->device_busy), + (int) scsi_device_busy(scsidp), (int) scsi_device_online(scsidp)); } read_unlock_irqrestore(&sg_index_lock, iflags); diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h index f8312a3e5b42..09caf6f2f528 100644 --- a/include/scsi/scsi_device.h +++ b/include/scsi/scsi_device.h @@ -584,6 +584,11 @@ static inline int scsi_device_supports_vpd(struct scsi_device *sdev) return 0; } +static inline int scsi_device_busy(struct scsi_device *sdev) +{ + return atomic_read(&sdev->device_busy); +} + #define MODULE_ALIAS_SCSI_DEVICE(type) \ MODULE_ALIAS("scsi:t-" __stringify(type) "*") #define SCSI_DEVICE_MODALIAS_FMT "scsi:t-0x%02x" From patchwork Fri Feb 28 23:29:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ming Lei X-Patchwork-Id: 11413513 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C97E0159A for ; Fri, 28 Feb 2020 23:30:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9F7562469C for ; Fri, 28 Feb 2020 23:30:30 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="crGKxoSI" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726766AbgB1Xaa (ORCPT ); Fri, 28 Feb 2020 18:30:30 -0500 Received: from us-smtp-delivery-1.mimecast.com ([205.139.110.120]:23740 "EHLO us-smtp-1.mimecast.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726627AbgB1Xaa (ORCPT ); Fri, 28 Feb 2020 18:30:30 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1582932628; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=YNirpvSklIIdah2sC5h7VgXF/Vtwy4NgWlnpUX3i0go=; b=crGKxoSIIWCvQqQLR+bOEGAv/wIyjkvranJleNkVGPlfzw3sMx0xD9O//8p0Jy4R6AbSRo 4fQsVLWW2HMRbLyAG8hQtLgLn0XraYsM2n4fBNSgPbJfQxKHatxQ09jv+HiMiavtA5j81B xEXnttj7HFR1RjHDEFySQqfnhQ39DSg= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-224-YOarZEngPCSuglUyrcok7g-1; Fri, 28 Feb 2020 18:30:25 -0500 X-MC-Unique: YOarZEngPCSuglUyrcok7g-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7E7188017DF; Fri, 28 Feb 2020 23:30:23 +0000 (UTC) Received: from localhost (ovpn-8-18.pek2.redhat.com [10.72.8.18]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9137C60BF1; Fri, 28 Feb 2020 23:30:22 +0000 (UTC) From: Ming Lei To: Jens Axboe , linux-block@vger.kernel.org, James Bottomley , linux-scsi@vger.kernel.org, "Martin K . Petersen" Cc: Ming Lei , Omar Sandoval , Sathya Prakash , Chaitra P B , Suganath Prabu Subramani , Kashyap Desai , Sumit Saxena , Shivasharan S , "Ewan D . 
Milne" , Hannes Reinecke , Bart Van Assche Subject: [PATCH V2 10/10] scsi: replace sdev->device_busy with sbitmap Date: Sat, 29 Feb 2020 07:29:20 +0800 Message-Id: <20200228232920.20960-11-ming.lei@redhat.com> In-Reply-To: <20200228232920.20960-1-ming.lei@redhat.com> References: <20200228232920.20960-1-ming.lei@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org scsi uses one global atomic variable to track queue depth for each LUN/request queue. This way doesn't scale well when there is lots of CPU cores and the disk is very fast. It has been observed that IOPS is affected a lot by tracking queue depth via sdev->device_busy in IO path. So replace the atomic variable sdev->device_busy with sbitmap for tracking scsi device queue depth. Test shows that IOPS difference is just ~1% compared with bypassing device queue depth on scsi_debug by applying patches [1]. Meantime IOPS is improved > 20% compared with linus tree. Follows test steps: 1) test machine(32 logical CPU cores) Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 NUMA node(s): 2 Model name: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz 2) setup scsi_debug: modprobe scsi_debug virtual_gb=128 max_luns=1 submit_queues=32 delay=0 max_queue=256 3) fio script: fio --rw=randread --size=128G --direct=1 --ioengine=libaio --iodepth=2048 \ --numjobs=32 --bs=4k --group_reporting=1 --group_reporting=1 --runtime=60 \ --loops=10000 --name=job1 --filename=/dev/sdN [1] https://lore.kernel.org/linux-block/20200119071432.18558-6-ming.lei@redhat.com/ Cc: Omar Sandoval Cc: Sathya Prakash Cc: Chaitra P B Cc: Suganath Prabu Subramani Cc: Kashyap Desai Cc: Sumit Saxena Cc: Shivasharan S Cc: Ewan D. Milne Cc: Hannes Reinecke Cc: Bart Van Assche Signed-off-by: Ming Lei --- drivers/scsi/scsi.c | 2 ++ drivers/scsi/scsi_lib.c | 37 ++++++++++++++++++------------------- drivers/scsi/scsi_priv.h | 1 + drivers/scsi/scsi_scan.c | 21 +++++++++++++++++++-- drivers/scsi/scsi_sysfs.c | 2 ++ include/scsi/scsi_cmnd.h | 2 ++ include/scsi/scsi_device.h | 5 +++-- 7 files changed, 47 insertions(+), 23 deletions(-) diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c index 4b9fdfab77d9..62771a92db9b 100644 --- a/drivers/scsi/scsi.c +++ b/drivers/scsi/scsi.c @@ -245,6 +245,8 @@ int scsi_change_queue_depth(struct scsi_device *sdev, int depth) if (sdev->request_queue) blk_set_queue_depth(sdev->request_queue, depth); + sbitmap_resize(&sdev->budget_map, sdev->queue_depth); + return sdev->queue_depth; } EXPORT_SYMBOL(scsi_change_queue_depth); diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c index a3474b418602..e7fbf3a9a6aa 100644 --- a/drivers/scsi/scsi_lib.c +++ b/drivers/scsi/scsi_lib.c @@ -354,7 +354,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd) if (starget->can_queue > 0) atomic_dec(&starget->target_busy); - atomic_dec(&sdev->device_busy); + sbitmap_put(&sdev->budget_map, cmd->budget_token); } static void scsi_kick_queue(struct request_queue *q) @@ -1274,19 +1274,17 @@ scsi_prep_state_check(struct scsi_device *sdev, struct request *req) } /* - * scsi_dev_queue_ready: if we can send requests to sdev, return 1 else - * return 0. - * - * Called with the queue_lock held. + * scsi_dev_queue_ready: if we can send requests to sdev, assign one token + * and return the token else return -1. 
*/ static inline int scsi_dev_queue_ready(struct request_queue *q, struct scsi_device *sdev) { - unsigned int busy; + int token; - busy = atomic_inc_return(&sdev->device_busy) - 1; + token = sbitmap_get(&sdev->budget_map); if (atomic_read(&sdev->device_blocked)) { - if (busy) + if (token >= 0) goto out_dec; /* @@ -1298,13 +1296,11 @@ static inline int scsi_dev_queue_ready(struct request_queue *q, "unblocking device at zero depth\n")); } - if (busy >= sdev->queue_depth) - goto out_dec; - - return 1; + return token; out_dec: - atomic_dec(&sdev->device_busy); - return 0; + if (token >= 0) + sbitmap_put(&sdev->budget_map, token); + return -1; } /* @@ -1624,16 +1620,17 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx, int budget_token) struct request_queue *q = hctx->queue; struct scsi_device *sdev = q->queuedata; - atomic_dec(&sdev->device_busy); + sbitmap_put(&sdev->budget_map, budget_token); } static int scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx) { struct request_queue *q = hctx->queue; struct scsi_device *sdev = q->queuedata; + int token = scsi_dev_queue_ready(q, sdev); - if (scsi_dev_queue_ready(q, sdev)) - return 0; + if (token >= 0) + return token; if (scsi_device_busy(sdev) == 0 && !scsi_device_blocked(sdev)) blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY); @@ -1677,6 +1674,8 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, blk_mq_start_request(req); } + cmd->budget_token = bd->budget_token; + cmd->flags &= SCMD_PRESERVED_FLAGS; if (sdev->simple_tags) cmd->flags |= SCMD_TAGGED; @@ -1701,12 +1700,12 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx, if (scsi_target(sdev)->can_queue > 0) atomic_dec(&scsi_target(sdev)->target_busy); out_put_budget: - scsi_mq_put_budget(hctx, 0); + scsi_mq_put_budget(hctx, bd->budget_token); switch (ret) { case BLK_STS_OK: break; case BLK_STS_RESOURCE: - if (atomic_read(&sdev->device_busy) || + if (sbitmap_any_bit_set(&sdev->budget_map) || scsi_device_blocked(sdev)) ret = BLK_STS_DEV_RESOURCE; break; diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h index 25b0aaaf5ae8..f6856a946542 100644 --- a/drivers/scsi/scsi_priv.h +++ b/drivers/scsi/scsi_priv.h @@ -5,6 +5,7 @@ #include <linux/device.h> #include <linux/async.h> #include <scsi/scsi_device.h> +#include <linux/sbitmap.h> struct request_queue; struct request; diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c index 058079f915f1..27b64c82d26b 100644 --- a/drivers/scsi/scsi_scan.c +++ b/drivers/scsi/scsi_scan.c @@ -215,6 +215,7 @@ static void scsi_unlock_floptical(struct scsi_device *sdev, static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget, u64 lun, void *hostdata) { + unsigned int depth; struct scsi_device *sdev; int display_failure_msg = 1, ret; struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); @@ -277,8 +278,24 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget, WARN_ON_ONCE(!blk_get_queue(sdev->request_queue)); sdev->request_queue->queuedata = sdev; - scsi_change_queue_depth(sdev, sdev->host->cmd_per_lun ? - sdev->host->cmd_per_lun : 1); + depth = sdev->host->cmd_per_lun ?: 1; + + /* + * Use .can_queue as the budget map's depth because we have to + * support adjusting the queue depth from sysfs. Meanwhile, use the + * default device queue depth to figure out the sbitmap shift, + * since we use this queue depth most of the time.
+ */ + if (sbitmap_init_node(&sdev->budget_map, sdev->host->can_queue, + sbitmap_calculate_shift(depth), + GFP_KERNEL, sdev->request_queue->node, + false, true)) { + put_device(&starget->dev); + kfree(sdev); + return NULL; + } + + scsi_change_queue_depth(sdev, depth); scsi_sysfs_device_initialize(sdev); @@ -980,6 +997,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result, scsi_attach_vpd(sdev); sdev->max_queue_depth = sdev->queue_depth; + WARN_ON_ONCE(sdev->max_queue_depth > sdev->budget_map.depth); sdev->sdev_bflags = *bflags; /* diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c index d6cb5a0a03f2..835566b805fb 100644 --- a/drivers/scsi/scsi_sysfs.c +++ b/drivers/scsi/scsi_sysfs.c @@ -466,6 +466,8 @@ static void scsi_device_dev_release_usercontext(struct work_struct *work) /* NULL queue means the device can't be used */ sdev->request_queue = NULL; + sbitmap_free(&sdev->budget_map); + mutex_lock(&sdev->inquiry_mutex); vpd_pg0 = rcu_replace_pointer(sdev->vpd_pg0, vpd_pg0, lockdep_is_held(&sdev->inquiry_mutex)); diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h index a2849bb9cd19..e6f750f43889 100644 --- a/include/scsi/scsi_cmnd.h +++ b/include/scsi/scsi_cmnd.h @@ -76,6 +76,8 @@ struct scsi_cmnd { int eh_eflags; /* Used by error handlr */ + int budget_token; + /* * This is set to jiffies as it was when the command was first * allocated. It is used to time how long the command has diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h index 09caf6f2f528..cdc1f28ec57f 100644 --- a/include/scsi/scsi_device.h +++ b/include/scsi/scsi_device.h @@ -8,6 +8,7 @@ #include <linux/blkdev.h> #include <scsi/scsi.h> #include <linux/atomic.h> +#include <linux/sbitmap.h> struct device; struct request_queue; @@ -106,7 +107,7 @@ struct scsi_device { struct list_head siblings; /* list of all devices on this host */ struct list_head same_target_siblings; /* just the devices sharing same target id */ - atomic_t device_busy; /* commands actually active on LLDD */ + struct sbitmap budget_map; atomic_t device_blocked; /* Device returned QUEUE_FULL. */ spinlock_t list_lock; @@ -586,7 +587,7 @@ static inline int scsi_device_supports_vpd(struct scsi_device *sdev) static inline int scsi_device_busy(struct scsi_device *sdev) { - return atomic_read(&sdev->device_busy); + return sbitmap_weight(&sdev->budget_map); } #define MODULE_ALIAS_SCSI_DEVICE(type) \