
[11/13] nvme: switch to use pci_alloc_irq_vectors

Message ID c8150106-764e-f8e9-4c1c-27e60ad96e83@grimberg.me (mailing list archive)
State New, archived

Commit Message

Sagi Grimberg Sept. 23, 2016, 10:21 p.m. UTC
On 14/09/16 07:18, Christoph Hellwig wrote:
> Use the new helper to automatically select the right interrupt type, as
> well as to use the automatic interrupt affinity assignment.

The patch title and the change description are a little short IMO to
describe what is going on here (they need to cover the blk-mq side too).

I also think it would be better to split this into 2 patches, but it's
really not a must...

> +static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
> +{
> +	struct nvme_dev *dev = set->driver_data;
> +
> +	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev));
> +}
> +

Question: is using pci_alloc_irq_vectors() required for
supplying blk-mq with the device affinity mask?

If I do this completely untested change [1], what will happen?

[1]: see the "Patch" section below.

Comments

Christoph Hellwig Sept. 26, 2016, 3:09 p.m. UTC | #1
On Fri, Sep 23, 2016 at 03:21:14PM -0700, Sagi Grimberg wrote:
> Question: is using pci_alloc_irq_vectors() required for
> supplying blk-mq with the device affinity mask?

No, but it's very useful.  We'll need equivalents for other buses
that provide multiple vectors and vector spreading.
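
For reference, the PCI-side mechanism under discussion looks roughly like
the sketch below: pci_alloc_irq_vectors() called with PCI_IRQ_AFFINITY
spreads the vectors over the CPUs, and blk_mq_pci_map_queues() later reads
that spreading back via pci_irq_get_affinity() to build the blk-mq queue
map.  The function name and call sites here are illustrative only, not the
actual nvme code:

#include <linux/blk-mq.h>
#include <linux/blk-mq-pci.h>
#include <linux/pci.h>

/*
 * Illustrative sketch: allocate vectors with affinity spreading, then
 * hand the resulting per-vector masks to blk-mq.
 */
static int example_setup_vectors(struct pci_dev *pdev,
		struct blk_mq_tag_set *set, unsigned int want)
{
	int nr_vecs;

	/* Picks MSI-X/MSI/legacy and spreads the vectors over CPUs. */
	nr_vecs = pci_alloc_irq_vectors(pdev, 1, want,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
	if (nr_vecs < 0)
		return nr_vecs;

	/*
	 * Walks pci_irq_get_affinity() for each vector and fills in
	 * set->mq_map, so the queue <-> CPU assignment follows the
	 * interrupt affinity.  Normally called from .map_queues.
	 */
	return blk_mq_pci_map_queues(set, pdev);
}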

> If I do this completely untested change [1], what will happen?

Everything will be crashing and burning because you call to_pci_dev on
something that's not a PCI dev?
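
For context, to_pci_dev() is plain pointer arithmetic, so it silently
turns a non-PCI struct device into a bogus struct pci_dev pointer, which
blk_mq_pci_map_queues() then dereferences when it queries the vector
affinity:

/* include/linux/pci.h: no type check, just a container_of() */
#define to_pci_dev(n) container_of(n, struct pci_dev, dev)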

For the next merge window I plan to wire up the affinity information
for the RDMA code, and I will add a counterpart to blk_mq_pci_map_queues
that spreads the queues over the completion vectors.
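
A rough sketch of what such a counterpart could look like is below.  The
helper name and the ib_get_vector_affinity()-style per-vector cpumask
query are assumptions made for illustration, not code that exists at this
point:

#include <linux/blk-mq.h>
#include <rdma/ib_verbs.h>

/*
 * Hypothetical RDMA counterpart to blk_mq_pci_map_queues(): spread the
 * hardware queues over the device's completion vectors.  Assumes the
 * RDMA core can report the cpumask a completion vector is bound to;
 * fall back to the default mapping when it cannot.
 */
static int example_blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
		struct ib_device *ibdev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		mask = ib_get_vector_affinity(ibdev, first_vec + queue);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}
	return 0;

fallback:
	/* No affinity information: use the generic CPU -> queue mapping. */
	return blk_mq_map_queues(set);
}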

Patch

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 8d2875b4c56d..76693d406efe 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1518,6 +1518,14 @@  static void nvme_rdma_complete_rq(struct request *rq)
         blk_mq_end_request(rq, error);
  }

+static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
+{
+       struct nvme_rdma_ctrl *ctrl = set->driver_data;
+       struct device *dev = ctrl->device->dev.dma_device;
+
+       return blk_mq_pci_map_queues(set, to_pci_dev(dev));
+}
+
  static struct blk_mq_ops nvme_rdma_mq_ops = {
         .queue_rq       = nvme_rdma_queue_rq,
         .complete       = nvme_rdma_complete_rq,
@@ -1528,6 +1536,7 @@  static struct blk_mq_ops nvme_rdma_mq_ops = {
         .init_hctx      = nvme_rdma_init_hctx,
         .poll           = nvme_rdma_poll,
         .timeout        = nvme_rdma_timeout,
+       .map_queues     = nvme_rdma_map_queues,
  };

  static struct blk_mq_ops nvme_rdma_admin_mq_ops = {