From patchwork Thu Jan 14 18:07:20 2016
From: Ming Lin <mlin@kernel.org>
To: linux-block@vger.kernel.org, Jens Axboe
Cc: Sagi Grimberg, Bart Van Assche, Christoph Hellwig
Subject: [PATCH] blk-mq: check if all HW queues are mapped to cpu
Date: Thu, 14 Jan 2016 10:07:20 -0800
Message-Id: <1452794840-29767-1-git-send-email-mlin@kernel.org>
X-Patchwork-Id: 8034861

Suppose that a system has 8 logical CPUs (4 cores with hyperthreading) and
that a block driver provides 5 hardware queues. With the current algorithm
this leads to the following assignment of logical CPUs to hardware queues:

  HWQ 0: 0 1
  HWQ 1: 2 3
  HWQ 2: 4 5
  HWQ 3: 6 7
  HWQ 4: (none)

One way to fix this would be to change the algorithm so that the assignment
becomes, for example:

  HWQ 0: 0 1
  HWQ 1: 2 3
  HWQ 2: 4 5
  HWQ 3: 6
  HWQ 4: 7

This patch takes a simpler approach: it only checks whether all hardware
queues were mapped to a CPU, and makes blk_mq_init_queue() fail if not.
The check also fails when the number of hardware queues exceeds the number
of CPUs, since in that case some queues necessarily remain unmapped.
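To make the arithmetic above concrete, here is a small user-space model
(illustrative only, not part of the patch). cpu_to_queue_index() below is
reconstructed so that it reproduces the first table; balanced_queue_index()
is a hypothetical variant that reproduces the second:

#include <stdio.h>

/*
 * Model of the current mapping: each hardware queue takes
 * ceil(nr_cpus / nr_queues) consecutive CPUs, so with 8 CPUs and
 * 5 queues the CPUs run out after queue 3 and HWQ 4 stays empty.
 */
static unsigned int cpu_to_queue_index(unsigned int nr_cpus,
				       unsigned int nr_queues,
				       unsigned int cpu)
{
	return cpu / ((nr_cpus + nr_queues - 1) / nr_queues);
}

/*
 * Hypothetical balanced variant: the first (nr_cpus % nr_queues)
 * queues get one extra CPU, the rest get floor(nr_cpus / nr_queues),
 * so every queue is used whenever nr_cpus >= nr_queues.
 */
static unsigned int balanced_queue_index(unsigned int nr_cpus,
					 unsigned int nr_queues,
					 unsigned int cpu)
{
	unsigned int big = nr_cpus % nr_queues;	  /* queues with one extra CPU */
	unsigned int small = nr_cpus / nr_queues; /* base CPUs per queue */
	unsigned int boundary = big * (small + 1);

	if (cpu < boundary)
		return cpu / (small + 1);
	return big + (cpu - boundary) / small;
}

int main(void)
{
	unsigned int cpu, nr_cpus = 8, nr_queues = 5;

	for (cpu = 0; cpu < nr_cpus; cpu++)
		printf("CPU %u: current HWQ %u, balanced HWQ %u\n", cpu,
		       cpu_to_queue_index(nr_cpus, nr_queues, cpu),
		       balanced_queue_index(nr_cpus, nr_queues, cpu));
	return 0;
}

Running it prints "current HWQ" values 0 0 1 1 2 2 3 3 (queue 4 never
appears) and "balanced HWQ" values 0 0 1 1 2 2 3 4, matching the two
tables above.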
Signed-off-by: Ming Lin <mlin@kernel.org>
---
 block/blk-mq-cpumap.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 8764c24..5a88ac4 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -36,10 +36,17 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,
 {
 	unsigned int i, nr_cpus, nr_uniq_cpus, queue, first_sibling;
 	cpumask_var_t cpus;
+	int *queue_map;
 
-	if (!alloc_cpumask_var(&cpus, GFP_ATOMIC))
+	queue_map = kzalloc(sizeof(int) * nr_queues, GFP_KERNEL);
+	if (!queue_map)
 		return 1;
 
+	if (!alloc_cpumask_var(&cpus, GFP_ATOMIC)) {
+		kfree(queue_map);
+		return 1;
+	}
+
 	cpumask_clear(cpus);
 	nr_cpus = nr_uniq_cpus = 0;
 	for_each_cpu(i, online_mask) {
@@ -54,7 +61,7 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,
 	for_each_possible_cpu(i) {
 		if (!cpumask_test_cpu(i, online_mask)) {
 			map[i] = 0;
-			continue;
+			goto mapped;
 		}
 
 		/*
@@ -65,7 +72,7 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,
 		if (nr_queues >= nr_cpus || nr_cpus == nr_uniq_cpus) {
 			map[i] = cpu_to_queue_index(nr_cpus, nr_queues, queue);
 			queue++;
-			continue;
+			goto mapped;
 		}
 
 		/*
@@ -80,10 +87,22 @@ int blk_mq_update_queue_map(unsigned int *map, unsigned int nr_queues,
 			queue++;
 		} else
 			map[i] = map[first_sibling];
+
+mapped:
+		/* i is cpu, map[i] is queue index */
+		queue_map[map[i]] = 1;
 	}
 
 	free_cpumask_var(cpus);
-	return 0;
+
+	/* check if all queues are mapped to cpu */
+	for (i = 0; i < nr_queues; i++)
+		if (!queue_map[i])
+			break;
+
+	kfree(queue_map);
+
+	return i != nr_queues;
 }
 
 unsigned int *blk_mq_make_queue_map(struct blk_mq_tag_set *set)
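For reference, the coverage check added above boils down to this
stand-alone user-space sketch (the helper name all_queues_covered() is
invented for illustration; in the patch the logic is inlined in
blk_mq_update_queue_map() with the same 0-on-success convention):

#include <stdlib.h>

/*
 * Return 0 if every queue index in [0, nr_queues) appears at least
 * once in map[], 1 otherwise (or on allocation failure).
 */
static int all_queues_covered(const unsigned int *map, unsigned int nr_cpus,
			      unsigned int nr_queues)
{
	unsigned int i;
	int *seen = calloc(nr_queues, sizeof(int));

	if (!seen)
		return 1;
	for (i = 0; i < nr_cpus; i++)
		seen[map[i]] = 1;	/* map[i] is the queue index of CPU i */
	for (i = 0; i < nr_queues; i++)
		if (!seen[i])		/* found a queue no CPU maps to */
			break;
	free(seen);
	return i != nr_queues;
}

int main(void)
{
	/* The broken 8-CPU/5-queue mapping from the commit message. */
	unsigned int map[8] = { 0, 0, 1, 1, 2, 2, 3, 3 };

	return all_queues_covered(map, 8, 5);	/* exits 1: HWQ 4 unmapped */
}

With 8 CPUs and 5 queues under the current mapping, map[] contains only
queue indexes 0..3, so the check returns 1 and queue initialization fails
instead of silently leaving HWQ 4 unused.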