From patchwork Sun Sep 18 07:37:14 2016
X-Patchwork-Submitter: Alexander Gordeev
X-Patchwork-Id: 9337617
From: Alexander Gordeev
To: linux-kernel@vger.kernel.org
Cc: Alexander Gordeev, linux-block@vger.kernel.org
Subject: [PATCH 04/14] blk-mq: Do not limit number of queues to 'nr_cpu_ids' in allocations
Date: Sun, 18 Sep 2016 09:37:14 +0200
Message-Id: <9bb584d504bc2f8ef9d66822e68f082ee9a74ded.1474183901.git.agordeev@redhat.com>

Currently the maximum number of hardware queues in use is limited to the
number of CPUs in the system. However, using 'nr_cpu_ids' as the limit for
(de-)allocations of data structures, instead of the counters those data
structures already carry, (a) hurts readability and (b) leaves memory unused
whenever the number of hardware queues is smaller than the number of CPUs.
CC: linux-block@vger.kernel.org
Signed-off-by: Alexander Gordeev
---
 block/blk-mq.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 276ec7b..2c77b68 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2054,8 +2054,8 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	if (!q->queue_ctx)
 		goto err_exit;
 
-	q->queue_hw_ctx = kzalloc_node(nr_cpu_ids * sizeof(*(q->queue_hw_ctx)),
-			GFP_KERNEL, set->numa_node);
+	q->queue_hw_ctx = kzalloc_node(set->nr_hw_queues *
+			sizeof(*(q->queue_hw_ctx)), GFP_KERNEL, set->numa_node);
 	if (!q->queue_hw_ctx)
 		goto err_percpu;
 
@@ -2319,7 +2319,7 @@ int blk_mq_alloc_tag_set(struct blk_mq_tag_set *set)
 	if (set->nr_hw_queues > nr_cpu_ids)
 		set->nr_hw_queues = nr_cpu_ids;
 
-	set->tags = kzalloc_node(nr_cpu_ids * sizeof(struct blk_mq_tags *),
+	set->tags = kzalloc_node(set->nr_hw_queues * sizeof(*set->tags),
 			GFP_KERNEL, set->numa_node);
 	if (!set->tags)
 		return -ENOMEM;
@@ -2360,7 +2360,7 @@ void blk_mq_free_tag_set(struct blk_mq_tag_set *set)
 {
 	int i;
 
-	for (i = 0; i < nr_cpu_ids; i++) {
+	for (i = 0; i < set->nr_hw_queues; i++) {
 		if (set->tags[i])
 			blk_mq_free_rq_map(set, set->tags[i], i);
 	}
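
For reference, a minimal stand-alone user-space sketch (not part of the patch)
of the idea behind the change: size the per-queue pointer arrays by the queue
count actually in use rather than by the 'nr_cpu_ids' upper bound. The
constants below (64 possible CPU ids, 4 hardware queues) are illustrative
assumptions only, not values from any particular system.

	/*
	 * Illustration only; constants are made up for the example.
	 * Mirrors the sizing change: allocate by the number of hardware
	 * queues in use instead of the nr_cpu_ids upper bound.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define NR_CPU_IDS	64	/* assumed possible CPU ids */
	#define NR_HW_QUEUES	4	/* assumed queues actually requested */

	int main(void)
	{
		size_t old_size = NR_CPU_IDS * sizeof(void *);	 /* before the patch */
		size_t new_size = NR_HW_QUEUES * sizeof(void *); /* after the patch */
		void **tags;

		/* user-space analogue of the zeroed kzalloc_node() allocation */
		tags = calloc(NR_HW_QUEUES, sizeof(*tags));
		if (!tags)
			return 1;

		printf("pointer array: %zu -> %zu bytes (%zu previously unused)\n",
		       old_size, new_size, old_size - new_size);

		free(tags);
		return 0;
	}

With the patch, blk_mq_free_tag_set() walks only set->nr_hw_queues entries as
well, so allocation and teardown use the same bound.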