From patchwork Thu Jul 15 12:08:42 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12379905
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, Greg Kroah-Hartman, Bjorn Helgaas,
    linux-pci@vger.kernel.org
Cc: Thomas Gleixner, Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
    Hannes Reinecke, Keith Busch, Ming Lei
Subject: [PATCH V4 1/3] driver core: mark device as irq affinity managed if
 any irq is managed
Date: Thu, 15 Jul 2021 20:08:42 +0800
Message-Id: <20210715120844.636968-2-ming.lei@redhat.com>
In-Reply-To: <20210715120844.636968-1-ming.lei@redhat.com>
References: <20210715120844.636968-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

Drivers may allocate irq vectors with managed affinity, and blk-mq needs
this information because a managed irq is shut down when all CPUs in its
affinity mask go offline.
The information about managed irq use is produced by drivers (the PCI
subsystem, platform devices, ...) and consumed by blk-mq, so different
subsystems are involved in this flow.

Address this by adding an .irq_affinity_managed field to 'struct device'.

Suggested-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
 drivers/base/platform.c | 7 +++++++
 drivers/pci/msi.c       | 3 +++
 include/linux/device.h  | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/base/platform.c b/drivers/base/platform.c
index 8640578f45e9..d28cb91d5cf9 100644
--- a/drivers/base/platform.c
+++ b/drivers/base/platform.c
@@ -388,6 +388,13 @@ int devm_platform_get_irqs_affinity(struct platform_device *dev,
 				ptr->irq[i], ret);
 			goto err_free_desc;
 		}
+
+		/*
+		 * mark the device as irq affinity managed if any irq affinity
+		 * descriptor is managed
+		 */
+		if (desc[i].is_managed)
+			dev->dev.irq_affinity_managed = true;
 	}
 
 	devres_add(&dev->dev, ptr);
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 3d6db20d1b2b..7ddec90b711d 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1197,6 +1197,7 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
 	if (flags & PCI_IRQ_AFFINITY) {
 		if (!affd)
 			affd = &msi_default_affd;
+		dev->dev.irq_affinity_managed = true;
 	} else {
 		if (WARN_ON(affd))
 			affd = NULL;
@@ -1215,6 +1216,8 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
 			return nvecs;
 	}
 
+	dev->dev.irq_affinity_managed = false;
+
 	/* use legacy IRQ if allowed */
 	if (flags & PCI_IRQ_LEGACY) {
 		if (min_vecs == 1 && dev->irq) {
diff --git a/include/linux/device.h b/include/linux/device.h
index 59940f1744c1..9ec6e671279e 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -569,6 +569,7 @@ struct device {
 #ifdef CONFIG_DMA_OPS_BYPASS
 	bool			dma_ops_bypass : 1;
 #endif
+	bool			irq_affinity_managed : 1;
 };
 
 /**
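To illustrate how the flag ends up set (a sketch only, not part of the
patch; the foo_* names and vector counts are made up), a PCI driver that
allocates managed vectors would look roughly like this:

	#include <linux/pci.h>
	#include <linux/interrupt.h>

	/* hypothetical probe-path helper of an imaginary driver */
	static int foo_setup_irqs(struct pci_dev *pdev, unsigned int nr_queues)
	{
		struct irq_affinity affd = { .pre_vectors = 1 };	/* one admin vector */
		int nvecs;

		/* PCI_IRQ_AFFINITY requests managed affinity spreading */
		nvecs = pci_alloc_irq_vectors_affinity(pdev, 2, nr_queues + 1,
				PCI_IRQ_MSIX | PCI_IRQ_AFFINITY, &affd);
		if (nvecs < 0)
			return nvecs;

		/* with this patch, pdev->dev.irq_affinity_managed is now true */
		return nvecs;
	}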
From patchwork Thu Jul 15 12:08:43 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12379907
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, Greg Kroah-Hartman, Bjorn Helgaas,
    linux-pci@vger.kernel.org
Cc: Thomas Gleixner, Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
    Hannes Reinecke, Keith Busch, Ming Lei
Subject: [PATCH V4 2/3] blk-mq: mark if one queue map uses managed irq
Date: Thu, 15 Jul 2021 20:08:43 +0800
Message-Id: <20210715120844.636968-3-ming.lei@redhat.com>
In-Reply-To: <20210715120844.636968-1-ming.lei@redhat.com>
References: <20210715120844.636968-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

Retrieve this information via the 'irq_affinity_managed' field of
'struct device' in the queue map helpers.

Signed-off-by: Ming Lei
---
 block/blk-mq-pci.c     | 1 +
 block/blk-mq-rdma.c    | 3 +++
 block/blk-mq-virtio.c  | 1 +
 include/linux/blk-mq.h | 3 ++-
 4 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index b595a94c4d16..aa0bdb80d0ce 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -37,6 +37,7 @@ int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev,
 		for_each_cpu(cpu, mask)
 			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
+	qmap->use_managed_irq = pdev->dev.irq_affinity_managed;
 
 	return 0;
 
diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c
index 14f968e58b8f..7b10d8bd2a37 100644
--- a/block/blk-mq-rdma.c
+++ b/block/blk-mq-rdma.c
@@ -36,6 +36,9 @@ int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
 			map->mq_map[cpu] = map->queue_offset + queue;
 	}
 
+	/* So far RDMA doesn't use managed irq */
+	map->use_managed_irq = false;
+
 	return 0;
 
 fallback:
diff --git a/block/blk-mq-virtio.c b/block/blk-mq-virtio.c
index 7b8a42c35102..b57a0aa6d900 100644
--- a/block/blk-mq-virtio.c
+++ b/block/blk-mq-virtio.c
@@ -38,6 +38,7 @@ int blk_mq_virtio_map_queues(struct blk_mq_queue_map *qmap,
 		for_each_cpu(cpu, mask)
 			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
+	qmap->use_managed_irq = vdev->dev.irq_affinity_managed;
 
 	return 0;
 fallback:
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 1d18447ebebc..d54a795ec971 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -192,7 +192,8 @@ struct blk_mq_hw_ctx {
 struct blk_mq_queue_map {
 	unsigned int *mq_map;
 	unsigned int nr_queues;
-	unsigned int queue_offset;
+	unsigned int queue_offset:31;
+	unsigned int use_managed_irq:1;
 };
 
 /**
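For illustration (a sketch, not from this series; foo_* is a made-up
driver), a PCI storage driver propagates the flag simply by using the
existing helper in its .map_queues callback:

	#include <linux/blk-mq.h>
	#include <linux/blk-mq-pci.h>

	struct foo_ctrl {
		struct pci_dev *pdev;
	};

	/* hypothetical .map_queues implementation */
	static int foo_map_queues(struct blk_mq_tag_set *set)
	{
		struct foo_ctrl *ctrl = set->driver_data;

		/*
		 * blk_mq_pci_map_queues() now copies
		 * pdev->dev.irq_affinity_managed into qmap->use_managed_irq,
		 * so blk-mq learns whether this map is backed by managed irqs.
		 */
		return blk_mq_pci_map_queues(&set->map[HCTX_TYPE_DEFAULT],
					     ctrl->pdev, 0);
	}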
From patchwork Thu Jul 15 12:08:44 2021
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 12379909
From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org, Greg Kroah-Hartman, Bjorn Helgaas,
    linux-pci@vger.kernel.org
Cc: Thomas Gleixner, Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
    Hannes Reinecke, Keith Busch, Ming Lei
Subject: [PATCH V4 3/3] blk-mq: don't deactivate hctx if managed irq isn't
 used
Date: Thu, 15 Jul 2021 20:08:44 +0800
Message-Id: <20210715120844.636968-4-ming.lei@redhat.com>
In-Reply-To: <20210715120844.636968-1-ming.lei@redhat.com>
References: <20210715120844.636968-1-ming.lei@redhat.com>
X-Mailing-List: linux-block@vger.kernel.org

blk-mq deactivates a hctx when the last CPU in hctx->cpumask goes
offline, by draining all requests originating from that hctx and moving
new allocations to other active hctxs. This avoids losing in-flight IO
when managed irqs are used, because a managed irq is shut down once the
last CPU in its affinity mask goes offline.

However, many drivers (nvme fc, rdma, tcp, loop, ...) don't use managed
irqs, so they have no need to deactivate a hctx when its last CPU goes
offline.
Also, some of them are the only users of blk_mq_alloc_request_hctx(),
which is used for connecting io queues, and their requirement is that
the connect request must be submitted successfully via one specified
hctx even though all CPUs in that hctx->cpumask have gone offline.

Address this requirement for nvme fc/rdma/loop by allowing requests to
be allocated from a hctx when all CPUs in that hctx are offline, since
these drivers don't use managed irqs. Finally, don't deactivate a hctx
when it doesn't use a managed irq.

Signed-off-by: Ming Lei
---
 block/blk-mq.c | 27 +++++++++++++++++----------
 block/blk-mq.h |  8 ++++++++
 2 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2c4ac51e54eb..591ab07c64d8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -427,6 +427,15 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 }
 EXPORT_SYMBOL(blk_mq_alloc_request);
 
+static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
+{
+	int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
+
+	if (cpu >= nr_cpu_ids)
+		cpu = cpumask_first(hctx->cpumask);
+	return cpu;
+}
+
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	unsigned int op, blk_mq_req_flags_t flags, unsigned int hctx_idx)
 {
@@ -468,7 +477,10 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	data.hctx = q->queue_hw_ctx[hctx_idx];
 	if (!blk_mq_hw_queue_mapped(data.hctx))
 		goto out_queue_exit;
-	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
+
+	WARN_ON_ONCE(blk_mq_hctx_use_managed_irq(data.hctx));
+
+	cpu = blk_mq_first_mapped_cpu(data.hctx);
 	data.ctx = __blk_mq_get_ctx(q, cpu);
 
 	if (!q->elevator)
@@ -1501,15 +1513,6 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 	hctx_unlock(hctx, srcu_idx);
 }
 
-static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
-{
-	int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
-
-	if (cpu >= nr_cpu_ids)
-		cpu = cpumask_first(hctx->cpumask);
-	return cpu;
-}
-
 /*
  * It'd be great if the workqueue API had a way to pass
  * in a mask and had some smarts for more clever placement.
@@ -2556,6 +2559,10 @@ static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
 	struct blk_mq_hw_ctx *hctx = hlist_entry_safe(node,
 			struct blk_mq_hw_ctx, cpuhp_online);
 
+	/* hctx need not be deactivated if managed irq isn't used */
+	if (!blk_mq_hctx_use_managed_irq(hctx))
+		return 0;
+
 	if (!cpumask_test_cpu(cpu, hctx->cpumask) ||
 	    !blk_mq_last_cpu_in_hctx(cpu, hctx))
 		return 0;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index d08779f77a26..7333b659d8f5 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -119,6 +119,14 @@ static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
 	return ctx->hctxs[type];
 }
 
+static inline bool blk_mq_hctx_use_managed_irq(struct blk_mq_hw_ctx *hctx)
+{
+	if (hctx->type == HCTX_TYPE_POLL)
+		return false;
+
+	return hctx->queue->tag_set->map[hctx->type].use_managed_irq;
+}
+
 /*
  * sysfs helpers
 */
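For illustration (a sketch only; the foo_* wrapper and the qid-to-hctx
convention are made up, loosely modeled on the nvme transports), the
connect path this enables looks roughly like:

	#include <linux/blk-mq.h>
	#include <linux/err.h>

	/* hypothetical connect helper for a transport without managed irqs */
	static int foo_connect_io_queue(struct request_queue *q, unsigned int qid)
	{
		struct request *rq;

		/*
		 * Allocate the connect request on the hctx backing io queue
		 * 'qid'. After this patch the allocation succeeds even if all
		 * CPUs in that hctx's cpumask are offline, which is safe here
		 * because the transport doesn't use managed irqs.
		 */
		rq = blk_mq_alloc_request_hctx(q, REQ_OP_DRV_OUT,
				BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED,
				qid - 1);
		if (IS_ERR(rq))
			return PTR_ERR(rq);

		/* ... set up the connect command, execute it and wait ... */
		blk_mq_free_request(rq);
		return 0;
	}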