| Message ID | 20230613153415.350528-7-apatel@ventanamicro.com (mailing list archive) |
| --- | --- |
| State | Superseded |
| Headers | show |
| Series | Linux RISC-V AIA Support |
Context | Check | Description |
---|---|---|
conchuod/tree_selection | fail | Failed to apply to next/pending-fixes, riscv/for-next or riscv/master |
On Tue, Jun 13, 2023 at 09:04:11PM +0530, Anup Patel wrote:
> We have a separate RISC-V IMSIC MSI address for each CPU so changing
> MSI (or IRQ) affinity results in re-programming of MSI address in
> the PCIe (or platform) device.
>
> Currently, the iommu_dma_prepare_msi() is called only once at the
> time of IRQ allocation so IOMMU DMA domain will only have mapping
> for one MSI page. This means iommu_dma_compose_msi_msg() called
> by imsic_irq_compose_msi_msg() will always use the same MSI page
> irrespective to target CPU MSI address. In other words, changing
> MSI (or IRQ) affinity for device using IOMMU DMA domain will not
> work.

You didn't answer my question from last time - there seems to be no
iommu driver here so why are you messing with iommu_dma_prepare_msi()?

This path is only for platforms that have IOMMU drivers that translate
the MSI window. You should add this code to link the interrupt
controller to the iommu driver when you introduce the iommu driver,
not in this series?

And, as I said before, I'd like to NOT see new users of
iommu_dma_prepare_msi() since it is a very problematic API.

This hacking of it here is not making it better :(

Jason
On Wed, Jun 14, 2023 at 8:16 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Tue, Jun 13, 2023 at 09:04:11PM +0530, Anup Patel wrote:
> > We have a separate RISC-V IMSIC MSI address for each CPU so changing
> > MSI (or IRQ) affinity results in re-programming of MSI address in
> > the PCIe (or platform) device.
> >
> > Currently, the iommu_dma_prepare_msi() is called only once at the
> > time of IRQ allocation so IOMMU DMA domain will only have mapping
> > for one MSI page. This means iommu_dma_compose_msi_msg() called
> > by imsic_irq_compose_msi_msg() will always use the same MSI page
> > irrespective to target CPU MSI address. In other words, changing
> > MSI (or IRQ) affinity for device using IOMMU DMA domain will not
> > work.
>
> You didn't answer my question from last time - there seems to be no
> iommu driver here so why are you messing with iommu_dma_prepare_msi()?
>
> This path is only for platforms that have IOMMU drivers that translate
> the MSI window. You should add this code to link the interrupt
> controller to the iommu driver when you introduce the iommu driver,
> not in this series?
>
> And, as I said before, I'd like to NOT see new users of
> iommu_dma_prepare_msi() since it is a very problematic API.
>
> This hacking of it here is not making it better :(

I misunderstood your previous comments.

We can certainly deal with this later when the IOMMU
driver is available for RISC-V. I will drop this patch in the
next revision.

Regards,
Anup
On Wed, Jun 14, 2023 at 09:47:53PM +0530, Anup Patel wrote:
> On Wed, Jun 14, 2023 at 8:16 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> >
> > On Tue, Jun 13, 2023 at 09:04:11PM +0530, Anup Patel wrote:
> > > We have a separate RISC-V IMSIC MSI address for each CPU so changing
> > > MSI (or IRQ) affinity results in re-programming of MSI address in
> > > the PCIe (or platform) device.
> > >
> > > Currently, the iommu_dma_prepare_msi() is called only once at the
> > > time of IRQ allocation so IOMMU DMA domain will only have mapping
> > > for one MSI page. This means iommu_dma_compose_msi_msg() called
> > > by imsic_irq_compose_msi_msg() will always use the same MSI page
> > > irrespective to target CPU MSI address. In other words, changing
> > > MSI (or IRQ) affinity for device using IOMMU DMA domain will not
> > > work.
> >
> > You didn't answer my question from last time - there seems to be no
> > iommu driver here so why are you messing with iommu_dma_prepare_msi()?
> >
> > This path is only for platforms that have IOMMU drivers that translate
> > the MSI window. You should add this code to link the interrupt
> > controller to the iommu driver when you introduce the iommu driver,
> > not in this series?
> >
> > And, as I said before, I'd like to NOT see new users of
> > iommu_dma_prepare_msi() since it is a very problematic API.
> >
> > This hacking of it here is not making it better :(
>
> I misunderstood your previous comments.
>
> We can certainly deal with this later when the IOMMU
> driver is available for RISC-V. I will drop this patch in the
> next revision.

Not only just this patch but the calls to iommu_dma_prepare_msi() and
related APIs in the prior patch too. Assume the MSI window is directly
visible to DMA without translation.

When you come with an iommu driver we can discuss how best to proceed.

Thanks,
Jason
On Wed, Jun 14, 2023 at 10:20 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Wed, Jun 14, 2023 at 09:47:53PM +0530, Anup Patel wrote:
> > On Wed, Jun 14, 2023 at 8:16 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> > >
> > > On Tue, Jun 13, 2023 at 09:04:11PM +0530, Anup Patel wrote:
> > > > We have a separate RISC-V IMSIC MSI address for each CPU so changing
> > > > MSI (or IRQ) affinity results in re-programming of MSI address in
> > > > the PCIe (or platform) device.
> > > >
> > > > Currently, the iommu_dma_prepare_msi() is called only once at the
> > > > time of IRQ allocation so IOMMU DMA domain will only have mapping
> > > > for one MSI page. This means iommu_dma_compose_msi_msg() called
> > > > by imsic_irq_compose_msi_msg() will always use the same MSI page
> > > > irrespective to target CPU MSI address. In other words, changing
> > > > MSI (or IRQ) affinity for device using IOMMU DMA domain will not
> > > > work.
> > >
> > > You didn't answer my question from last time - there seems to be no
> > > iommu driver here so why are you messing with iommu_dma_prepare_msi()?
> > >
> > > This path is only for platforms that have IOMMU drivers that translate
> > > the MSI window. You should add this code to link the interrupt
> > > controller to the iommu driver when you introduce the iommu driver,
> > > not in this series?
> > >
> > > And, as I said before, I'd like to NOT see new users of
> > > iommu_dma_prepare_msi() since it is a very problematic API.
> > >
> > > This hacking of it here is not making it better :(
> >
> > I misunderstood your previous comments.
> >
> > We can certainly deal with this later when the IOMMU
> > driver is available for RISC-V. I will drop this patch in the
> > next revision.
>
> Not only just this patch but the calls to iommu_dma_prepare_msi() and
> related APIs in the prior patch too. Assume the MSI window is directly
> visible to DMA without translation.

Okay, I will remove iommu_dma_xyz() usage from IMSIC driver in the
next revision.

>
> When you come with an iommu driver we can discuss how best to proceed.

Yes, that's better.

Regards,
Anup
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a9f0b0bddbd..df96bcccbe28 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1687,14 +1687,32 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
 	struct device *dev = msi_desc_to_dev(desc);
 	const struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 	const struct iommu_dma_msi_page *msi_page;
+	struct iommu_dma_cookie *cookie;
+	phys_addr_t msi_addr;
 
-	msi_page = msi_desc_get_iommu_cookie(desc);
-
-	if (!domain || !domain->iova_cookie || WARN_ON(!msi_page))
+	if (!domain || !domain->iova_cookie)
 		return;
 
+	cookie = domain->iova_cookie;
+	msi_addr = ((u64)msg->address_hi << 32) | msg->address_lo;
+	msi_addr &= ~(phys_addr_t)(cookie_msi_granule(cookie) - 1);
+
+	msi_page = msi_desc_get_iommu_cookie(desc);
+	if (!msi_page || msi_page->phys != msi_addr) {
+		msi_desc_set_iommu_cookie(desc, NULL);
+		list_for_each_entry(msi_page, &cookie->msi_page_list, list) {
+			if (msi_page->phys == msi_addr) {
+				msi_desc_set_iommu_cookie(desc, msi_page);
+				break;
+			}
+		}
+		msi_page = msi_desc_get_iommu_cookie(desc);
+	}
+	if (WARN_ON(!msi_page))
+		return;
+
 	msg->address_hi = upper_32_bits(msi_page->iova);
-	msg->address_lo &= cookie_msi_granule(domain->iova_cookie) - 1;
+	msg->address_lo &= cookie_msi_granule(cookie) - 1;
 	msg->address_lo += lower_32_bits(msi_page->iova);
 }
diff --git a/drivers/irqchip/irq-riscv-imsic.c b/drivers/irqchip/irq-riscv-imsic.c
index 30247c84a6b0..19dedd036dd4 100644
--- a/drivers/irqchip/irq-riscv-imsic.c
+++ b/drivers/irqchip/irq-riscv-imsic.c
@@ -493,11 +493,18 @@ static int imsic_irq_domain_alloc(struct irq_domain *domain,
 	int i, hwirq, err = 0;
 	unsigned int cpu;
 
-	err = imsic_get_cpu(&imsic->lmask, false, &cpu);
-	if (err)
-		return err;
+	/* Map MSI address of all CPUs */
+	for_each_cpu(cpu, &imsic->lmask) {
+		err = imsic_cpu_page_phys(cpu, 0, &msi_addr);
+		if (err)
+			return err;
+
+		err = iommu_dma_prepare_msi(info->desc, msi_addr);
+		if (err)
+			return err;
+	}
 
-	err = imsic_cpu_page_phys(cpu, 0, &msi_addr);
+	err = imsic_get_cpu(&imsic->lmask, false, &cpu);
 	if (err)
 		return err;
 
@@ -505,10 +512,6 @@ static int imsic_irq_domain_alloc(struct irq_domain *domain,
 	if (hwirq < 0)
 		return hwirq;
 
-	err = iommu_dma_prepare_msi(info->desc, msi_addr);
-	if (err)
-		goto fail;
-
 	for (i = 0; i < nr_irqs; i++) {
 		imsic_id_set_target(hwirq + i, cpu);
 		irq_domain_set_info(domain, virq + i, hwirq + i,
@@ -528,10 +531,6 @@ static int imsic_irq_domain_alloc(struct irq_domain *domain,
 	}
 
 	return 0;
-
-fail:
-	imsic_ids_free(hwirq, get_count_order(nr_irqs));
-	return err;
 }
 
 static void imsic_irq_domain_free(struct irq_domain *domain,
We have a separate RISC-V IMSIC MSI address for each CPU, so changing
MSI (or IRQ) affinity results in re-programming of the MSI address in
the PCIe (or platform) device.

Currently, iommu_dma_prepare_msi() is called only once at the time of
IRQ allocation, so the IOMMU DMA domain will only have a mapping for
one MSI page. This means iommu_dma_compose_msi_msg(), called by
imsic_irq_compose_msi_msg(), will always use the same MSI page
irrespective of the target CPU MSI address. In other words, changing
MSI (or IRQ) affinity for a device using an IOMMU DMA domain will not
work.

To address the above issue, we do the following:
1) Map MSI pages for all CPUs in imsic_irq_domain_alloc()
   using iommu_dma_prepare_msi().
2) Extend iommu_dma_compose_msi_msg() to look up the correct
   msi_page whenever the msi_page stored as iommu cookie
   does not match.

Reported-by: Vincent Chen <vincent.chen@sifive.com>
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 drivers/iommu/dma-iommu.c         | 24 +++++++++++++++++++++---
 drivers/irqchip/irq-riscv-imsic.c | 23 +++++++++++------------
 2 files changed, 32 insertions(+), 15 deletions(-)