From patchwork Tue Aug 8 10:42:27 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 13346264
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, linux-nvme@lists.infradead.org,
 "Martin K. Petersen", linux-scsi@vger.kernel.org
Cc: linux-block@vger.kernel.org, Wen Xiong, Keith Busch, Ming Lei
Subject: [PATCH V3 02/14] nvme-pci: use blk_mq_max_nr_hw_queues() to calculate io queues
Date: Tue, 8 Aug 2023 18:42:27 +0800
Message-Id: <20230808104239.146085-3-ming.lei@redhat.com>
In-Reply-To: <20230808104239.146085-1-ming.lei@redhat.com>
References: <20230808104239.146085-1-ming.lei@redhat.com>
X-Mailing-List: linux-scsi@vger.kernel.org

Take blk-mq's knowledge into account when calculating the number of IO
queues, and fix the wrong queue mapping in the kdump kernel case.

On arm and ppc64, 'maxcpus=1' is passed on the kdump kernel command line,
see Documentation/admin-guide/kdump/kdump.rst. Because 'maxcpus=1' only
limits how many CPU cores are brought up during boot, num_possible_cpus()
still returns all CPUs. blk-mq therefore sees a single queue in the kdump
kernel while, from the driver's viewpoint, there are still multiple
queues. This inconsistency causes the driver to apply the wrong queue
mapping for handling IO, and IO timeouts are triggered.
Meanwhile, a single queue uses far fewer resources and reduces the risk
of kernel failure.

Reported-by: Wen Xiong
Signed-off-by: Ming Lei
---
 drivers/nvme/host/pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index baf69af7ea78..a1227ae7eb39 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2251,7 +2251,7 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
 	 */
 	if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
 		return 1;
-	return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
+	return blk_mq_max_nr_hw_queues() + dev->nr_write_queues + dev->nr_poll_queues;
 }
 
 static int nvme_setup_io_queues(struct nvme_dev *dev)
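
For readers looking at this patch in isolation: blk_mq_max_nr_hw_queues()
is the block-layer helper added by patch 01 of this series, which is not
quoted here. A minimal sketch of such a helper, assuming it simply mirrors
blk-mq's own behaviour of limiting a kdump kernel to one hw queue (the
actual implementation is in patch 01), could look like:

	#include <linux/blk-mq.h>
	#include <linux/cpumask.h>
	#include <linux/crash_dump.h>

	/*
	 * Sketch only: report how many hw queues blk-mq will actually map,
	 * assuming it mirrors blk_mq_alloc_tag_set(), which forces a single
	 * hw queue in a kdump kernel.
	 */
	unsigned int blk_mq_max_nr_hw_queues(void)
	{
		if (is_kdump_kernel())
			return 1;
		return num_possible_cpus();
	}

With a helper along these lines, nvme_max_io_queues() asks the block layer
how many hw queues it is actually willing to map instead of assuming
num_possible_cpus(), so the driver no longer sets up more IO queues than
blk-mq will use in a kdump kernel.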