Message ID | 20230825-dma_iommu-v12-0-4134455994a7@linux.ibm.com |
---|---|
Series | iommu/dma: s390 DMA API conversion and optimized IOTLB flushing |
On 8/25/23 6:11 AM, Niklas Schnelle wrote:
> Hi All,
>
> This patch series converts s390's PCI support from its platform specific DMA API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer. The conversion itself is done in patches 3-4 with patch 2 providing the final necessary IOMMU driver improvement to handle s390's special IOTLB flush out-of-resource indication in virtualized environments. The conversion itself only touches the s390 IOMMU driver and s390 arch code, moving over remaining functions from the s390 DMA API implementation. No changes to common code are necessary.
>

I also picked up this latest version and ran various tests with ISM, mlx5 and some NVMe drives. FWIW, I have been including versions of this series in my s390 dev environments for a number of months now and have also been building my s390 pci iommufd nested translation series on top of this, so it's seen quite a bit of testing from me at least.

So as far as I'm concerned anyway, this series is ready for -next (after the merge window).

Thanks,
Matt
On 2023-08-25 19:26, Matthew Rosato wrote:
> On 8/25/23 6:11 AM, Niklas Schnelle wrote:
>> Hi All,
>>
>> This patch series converts s390's PCI support from its platform specific DMA API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer. The conversion itself is done in patches 3-4 with patch 2 providing the final necessary IOMMU driver improvement to handle s390's special IOTLB flush out-of-resource indication in virtualized environments. The conversion itself only touches the s390 IOMMU driver and s390 arch code, moving over remaining functions from the s390 DMA API implementation. No changes to common code are necessary.
>>
>
> I also picked up this latest version and ran various tests with ISM, mlx5 and some NVMe drives. FWIW, I have been including versions of this series in my s390 dev environments for a number of months now and have also been building my s390 pci iommufd nested translation series on top of this, so it's seen quite a bit of testing from me at least.
>
> So as far as I'm concerned anyway, this series is ready for -next (after the merge window).

Agreed; I'll trust your reviews for the s390-specific parts, so indeed it looks like this should have all it needs now and is ready for a nice long soak in -next once Joerg opens the tree for 6.7 material.

Cheers,
Robin.
On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>   s390/pci: prepare is_passed_through() for dma-iommu
>   s390/pci: Use dma-iommu layer
>   iommu/s390: Disable deferred flush for ISM devices
>   iommu/dma: Allow a single FQ in addition to per-CPU FQs
>   iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Applied, thanks.
Hi Niklas,

On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>   s390/pci: prepare is_passed_through() for dma-iommu
>   s390/pci: Use dma-iommu layer
>   iommu/s390: Disable deferred flush for ISM devices
>   iommu/dma: Allow a single FQ in addition to per-CPU FQs
>   iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Turned out this series has non-trivial conflicts with Jason's default-domain work, so I had to remove it from the IOMMU tree for now. Can you please rebase it to the latest iommu/core branch and re-send? I will take it into the tree again then.

Thanks,

Joerg
On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> Hi Niklas,
>
> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > Niklas Schnelle (6):
> >   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> >   s390/pci: prepare is_passed_through() for dma-iommu
> >   s390/pci: Use dma-iommu layer
> >   iommu/s390: Disable deferred flush for ISM devices
> >   iommu/dma: Allow a single FQ in addition to per-CPU FQs
> >   iommu/dma: Use a large flush queue and timeout for shadow_on_flush
>
> Turned out this series has non-trivial conflicts with Jason's default-domain work, so I had to remove it from the IOMMU tree for now. Can you please rebase it to the latest iommu/core branch and re-send? I will take it into the tree again then.

Niklas, I think you just 'take yours' to resolve this. All the IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be removed. Let me know if you need anything.

Thanks,
Jason
On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> > Hi Niklas,
> >
> > On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > > Niklas Schnelle (6):
> > >   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> > >   s390/pci: prepare is_passed_through() for dma-iommu
> > >   s390/pci: Use dma-iommu layer
> > >   iommu/s390: Disable deferred flush for ISM devices
> > >   iommu/dma: Allow a single FQ in addition to per-CPU FQs
> > >   iommu/dma: Use a large flush queue and timeout for shadow_on_flush
> >
> > Turned out this series has non-trivial conflicts with Jason's default-domain work, so I had to remove it from the IOMMU tree for now. Can you please rebase it to the latest iommu/core branch and re-send? I will take it into the tree again then.
>
> Niklas, I think you just 'take yours' to resolve this. All the IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be removed. Let me know if you need anything.
>
> Thanks,
> Jason

Hi Joerg, Hi Jason,

I've run into an unfortunate problem, not with the rebase itself but with the iommu/core branch.

Jason is right, I basically need to just remove the platform ops and .default_domain ops. This seems to work fine for an NVMe both in the host and also when using the IOMMU with vfio-pci + KVM. I've already pushed the result of that to my git.kernel.org:
https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu

The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.

# lspci
111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
# dmesg | grep mlx
[ 3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
[ 3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12

This same card works on v6.6-rc3 both with and without my DMA API conversion patch series applied. Looking at mlx5_mdev_init() -> mlx5_cmd_init(), the -ENOMEM seems to come from the following dma_pool_create():

cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);

I'll try to debug this further but wanted to let you know already in case you have some ideas. Either way, as it doesn't seem to be related to the DMA API conversion, I can send that out again regardless if you want; I really don't want to miss another cycle.

Thanks,
Niklas
On 2023-09-27 09:55, Niklas Schnelle wrote:
> On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
>> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
>>> Hi Niklas,
>>>
>>> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
>>>> Niklas Schnelle (6):
>>>>   iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>>>>   s390/pci: prepare is_passed_through() for dma-iommu
>>>>   s390/pci: Use dma-iommu layer
>>>>   iommu/s390: Disable deferred flush for ISM devices
>>>>   iommu/dma: Allow a single FQ in addition to per-CPU FQs
>>>>   iommu/dma: Use a large flush queue and timeout for shadow_on_flush
>>>
>>> Turned out this series has non-trivial conflicts with Jason's default-domain work, so I had to remove it from the IOMMU tree for now. Can you please rebase it to the latest iommu/core branch and re-send? I will take it into the tree again then.
>>
>> Niklas, I think you just 'take yours' to resolve this. All the IOMMU_DOMAIN_PLATFORM related and .default_domain = parts should be removed. Let me know if you need anything.
>>
>> Thanks,
>> Jason
>
> Hi Joerg, Hi Jason,
>
> I've run into an unfortunate problem, not with the rebase itself but with the iommu/core branch.
>
> Jason is right, I basically need to just remove the platform ops and .default_domain ops. This seems to work fine for an NVMe both in the host and also when using the IOMMU with vfio-pci + KVM. I've already pushed the result of that to my git.kernel.org:
> https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu
>
> The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.
>
> # lspci
> 111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
> # dmesg | grep mlx
> [ 3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
> [ 3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12
>
> This same card works on v6.6-rc3 both with and without my DMA API conversion patch series applied. Looking at mlx5_mdev_init() -> mlx5_cmd_init(), the -ENOMEM seems to come from the following dma_pool_create():
>
> cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
>
> I'll try to debug this further but wanted to let you know already in case you have some ideas.

I could imagine that potentially something in the initial default domain conversion somehow interferes with the DMA ops in a way that ends up causing alloc_cmd_page() to fail (maybe calling zpci_dma_init_device() at the wrong point, or too many times?). FWIW I see nothing that would obviously affect dma_pool_create() itself.

Robin.

> Either way, as it doesn't seem to be related to the DMA API conversion, I can send that out again regardless if you want; I really don't want to miss another cycle.
>
> Thanks,
> Niklas
Hi Niklas,

On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.

Have you already tried to bisect the issue in the iommu/core branch? The result might shed some light on the issue.

Regards,

Joerg
On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> Hi Niklas,
>
> On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.
>
> Have you already tried to bisect the issue in the iommu/core branch? The result might shed some light on the issue.
>
> Regards,
>
> Joerg

Hi Joerg,

Working on it; somehow I must have messed up earlier. It now looks like it might in fact be caused by my DMA API conversion rebase and the "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction with Jason's patches that I haven't thought about. So sorry for any wrong blame.

Thanks,
Niklas
On Wed, 2023-09-27 at 13:24 +0200, Niklas Schnelle wrote:
> On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> > Hi Niklas,
> >
> > On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > > The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.
> >
> > Have you already tried to bisect the issue in the iommu/core branch? The result might shed some light on the issue.
> >
> > Regards,
> >
> > Joerg
>
> Hi Joerg,
>
> Working on it; somehow I must have messed up earlier. It now looks like it might in fact be caused by my DMA API conversion rebase and the "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction with Jason's patches that I haven't thought about. So sorry for any wrong blame.
>
> Thanks,
> Niklas

Hi,

I tracked the problem down from mlx5_core's alloc_cmd_page() via dma_alloc_coherent(), ops->alloc, iommu_dma_alloc_remap(), and __iommu_dma_alloc_noncontiguous() to a failed iommu_dma_alloc_iova(). The allocation here is for 4K, so nothing crazy.

On second look I also noticed:

nvme 2007:00:00.0: Using 42-bit DMA addresses

for the NVMe that is working. The problem here seems to be that we set iommu_dma_forcedac = true in s390_iommu_probe_finalize() because we currently have a reserved region over the first 4 GiB anyway and so will always use IOVAs larger than that. That however is too late, since iommu_dma_set_pci_32bit_workaround() is already checked in __iommu_probe_device(), which is called just before ops->probe_finalize(). So I moved setting iommu_dma_forcedac = true to zpci_init_iommu() and that gets rid of the notice for the NVMe, but I still get a failure of iommu_dma_alloc_iova() in __iommu_dma_alloc_noncontiguous(). So I'll keep digging.

Thanks,
Niklas
On Wed, 2023-09-27 at 15:20 +0200, Niklas Schnelle wrote:
> On Wed, 2023-09-27 at 13:24 +0200, Niklas Schnelle wrote:
> > On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> > > Hi Niklas,
> > >
> > > On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > > > The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.
> > >
> > > Have you already tried to bisect the issue in the iommu/core branch? The result might shed some light on the issue.
> > >
> > > Regards,
> > >
> > > Joerg
> >
> > Hi Joerg,
> >
> > Working on it; somehow I must have messed up earlier. It now looks like it might in fact be caused by my DMA API conversion rebase and the "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction with Jason's patches that I haven't thought about. So sorry for any wrong blame.
> >
> > Thanks,
> > Niklas
>
> Hi,
>
> I tracked the problem down from mlx5_core's alloc_cmd_page() via dma_alloc_coherent(), ops->alloc, iommu_dma_alloc_remap(), and __iommu_dma_alloc_noncontiguous() to a failed iommu_dma_alloc_iova(). The allocation here is for 4K, so nothing crazy.
>
> On second look I also noticed:
>
> nvme 2007:00:00.0: Using 42-bit DMA addresses
>
> for the NVMe that is working. The problem here seems to be that we set iommu_dma_forcedac = true in s390_iommu_probe_finalize() because we currently have a reserved region over the first 4 GiB anyway and so will always use IOVAs larger than that. That however is too late, since iommu_dma_set_pci_32bit_workaround() is already checked in __iommu_probe_device(), which is called just before ops->probe_finalize(). So I moved setting iommu_dma_forcedac = true to zpci_init_iommu() and that gets rid of the notice for the NVMe, but I still get a failure of iommu_dma_alloc_iova() in __iommu_dma_alloc_noncontiguous(). So I'll keep digging.
>
> Thanks,
> Niklas

Ok, I think I got it, and this doesn't seem strictly s390x specific; I'd think it should happen with iommu.forcedac=1 everywhere.

The reason iommu_dma_alloc_iova() fails seems to be that mlx5_core does dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) in mlx5_pci_init()->set_dma_caps(), which happens after it already called mlx5_mdev_init()->mlx5_cmd_init()->alloc_cmd_page(). So for the dma_alloc_coherent() in there, dev->coherent_dma_mask is still DMA_BIT_MASK(32), for which we can't find an IOVA because, well, we don't have IOVAs below 4 GiB. Not entirely sure what caused this not to be enforced before.

Thanks,
Niklas
On Wed, 2023-09-27 at 16:31 +0200, Niklas Schnelle wrote:
> On Wed, 2023-09-27 at 15:20 +0200, Niklas Schnelle wrote:
> > On Wed, 2023-09-27 at 13:24 +0200, Niklas Schnelle wrote:
> > > On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> > > > Hi Niklas,
> > > >
> > > > On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > > > > The problem is that something seems to be broken in the iommu/core branch. Regardless of whether I have my DMA API conversion on top or with the base iommu/core branch I can not use ConnectX-4 VFs.
> > > >
> > > > Have you already tried to bisect the issue in the iommu/core branch? The result might shed some light on the issue.
> > > >
> > > > Regards,
> > > >
> > > > Joerg
> > >
> > > Hi Joerg,
> > >
> > > Working on it; somehow I must have messed up earlier. It now looks like it might in fact be caused by my DMA API conversion rebase and the "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction with Jason's patches that I haven't thought about. So sorry for any wrong blame.
> > >
> > > Thanks,
> > > Niklas
> >
> > Hi,
> >
> > I tracked the problem down from mlx5_core's alloc_cmd_page() via dma_alloc_coherent(), ops->alloc, iommu_dma_alloc_remap(), and __iommu_dma_alloc_noncontiguous() to a failed iommu_dma_alloc_iova(). The allocation here is for 4K, so nothing crazy.
> >
> > On second look I also noticed:
> >
> > nvme 2007:00:00.0: Using 42-bit DMA addresses
> >
> > for the NVMe that is working. The problem here seems to be that we set iommu_dma_forcedac = true in s390_iommu_probe_finalize() because we currently have a reserved region over the first 4 GiB anyway and so will always use IOVAs larger than that. That however is too late, since iommu_dma_set_pci_32bit_workaround() is already checked in __iommu_probe_device(), which is called just before ops->probe_finalize(). So I moved setting iommu_dma_forcedac = true to zpci_init_iommu() and that gets rid of the notice for the NVMe, but I still get a failure of iommu_dma_alloc_iova() in __iommu_dma_alloc_noncontiguous(). So I'll keep digging.
> >
> > Thanks,
> > Niklas
>
> Ok, I think I got it, and this doesn't seem strictly s390x specific; I'd think it should happen with iommu.forcedac=1 everywhere.
>
> The reason iommu_dma_alloc_iova() fails seems to be that mlx5_core does dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) in mlx5_pci_init()->set_dma_caps(), which happens after it already called mlx5_mdev_init()->mlx5_cmd_init()->alloc_cmd_page(). So for the dma_alloc_coherent() in there, dev->coherent_dma_mask is still DMA_BIT_MASK(32), for which we can't find an IOVA because, well, we don't have IOVAs below 4 GiB. Not entirely sure what caused this not to be enforced before.
>
> Thanks,
> Niklas

Ok, another update. On trying it out again, this problem actually also occurs when applying this v12 on top of v6.6-rc3. Also, I guess unlike my prior thinking, it probably doesn't occur with iommu.forcedac=1 since that still allows IOVAs below 4 GiB, and we might be the only ones who don't support those. From my point of view this sounds like a mlx5_core issue: they really should call dma_set_mask_and_coherent() before their first call to dma_alloc_coherent(), not after. So I guess I'll send a v13 of this series rebased on iommu/core and with an additional mlx5 patch, and then let's hope we can get that merged in a way that doesn't leave us with broken ConnectX VFs for too long.

Thanks,
Niklas
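The ordering problem described here can be sketched as a minimal, hypothetical probe function; hypothetical_probe() and the 4 KiB size are made up for illustration and this is not the actual mlx5 code. The point is that a coherent allocation made before dma_set_mask_and_coherent() is bounded by the device's default 32-bit coherent mask, which dma-iommu on s390 cannot satisfy once everything below 4 GiB is reserved.

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/pci.h>

/* Illustrative sketch only: mirrors the ordering described above. */
static int hypothetical_probe(struct pci_dev *pdev)
{
	struct device *dev = &pdev->dev;
	dma_addr_t dma;
	void *buf;
	int err;

	/*
	 * Wrong order: dev->coherent_dma_mask is still the 32-bit default
	 * here, so the IOVA allocator must find an address below 4 GiB.
	 * With that whole range reserved (as on s390), this fails.
	 */
	buf = dma_alloc_coherent(dev, 4096, &dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Raising the mask only now cannot help the allocation above. */
	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (err) {
		dma_free_coherent(dev, 4096, buf, dma);
		return err;
	}

	return 0;
}
```

Swapping the two calls, so the mask is widened before the first allocation, is the fix direction suggested in this thread.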
On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:

> Ok, another update. On trying it out again, this problem actually also occurs when applying this v12 on top of v6.6-rc3. Also, I guess unlike my prior thinking, it probably doesn't occur with iommu.forcedac=1 since that still allows IOVAs below 4 GiB, and we might be the only ones who don't support those. From my point of view this sounds like a mlx5_core issue: they really should call dma_set_mask_and_coherent() before their first call to dma_alloc_coherent(), not after. So I guess I'll send a v13 of this series rebased on iommu/core and with an additional mlx5 patch, and then let's hope we can get that merged in a way that doesn't leave us with broken ConnectX VFs for too long.

Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations before setting its dma_set_mask_and_coherent(). Please link to this thread and we can get Leon or Saeed to ack it for Joerg.

(though wondering why s390 is the only case that ever hit this?)

Jason
On 27/09/2023 4:40 pm, Jason Gunthorpe wrote:
> On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:
>
>> Ok, another update. On trying it out again, this problem actually also occurs when applying this v12 on top of v6.6-rc3. Also, I guess unlike my prior thinking, it probably doesn't occur with iommu.forcedac=1 since that still allows IOVAs below 4 GiB, and we might be the only ones who don't support those. From my point of view this sounds like a mlx5_core issue: they really should call dma_set_mask_and_coherent() before their first call to dma_alloc_coherent(), not after. So I guess I'll send a v13 of this series rebased on iommu/core and with an additional mlx5 patch, and then let's hope we can get that merged in a way that doesn't leave us with broken ConnectX VFs for too long.
>
> Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations before setting its dma_set_mask_and_coherent(). Please link to this thread and we can get Leon or Saeed to ack it for Joerg.
>
> (though wondering why s390 is the only case that ever hit this?)

Probably because most systems happen to be able to satisfy the allocation within the default 32-bit mask; the whole bottom 4GB of IOVA space being reserved is pretty atypical.

TBH it makes me wonder the opposite: how did this ever work on s390 before? And I think the answer to that is "by pure chance", since upon inspection the existing s390_pci_dma_ops implementation appears to pay absolutely no attention to the device's DMA masks whatsoever :(

Robin.
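For context, the contract at issue is the one described in the kernel's DMA API how-to: a PCI device starts out with 32-bit streaming and coherent masks, and the driver is expected to widen them before its first mapping or allocation. A minimal sketch of that expected order is below; expected_probe_order() is a hypothetical name, not code from any existing driver.

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/pci.h>

/* Hypothetical probe showing the expected order: negotiate the mask first. */
static int expected_probe_order(struct pci_dev *pdev)
{
	dma_addr_t dma;
	void *buf;
	int err;

	/* Widen the DMA and coherent masks before any DMA allocation. */
	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (err)
		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
	if (err)
		return dev_err_probe(&pdev->dev, err, "no usable DMA mask\n");

	/* Only now is the coherent allocation placed within the chosen mask. */
	buf = dma_alloc_coherent(&pdev->dev, 4096, &dma, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	dma_free_coherent(&pdev->dev, 4096, buf, dma);
	return 0;
}
```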
On 9/27/23 11:40 AM, Jason Gunthorpe wrote:
> On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:
>
>> Ok, another update. On trying it out again, this problem actually also occurs when applying this v12 on top of v6.6-rc3. Also, I guess unlike my prior thinking, it probably doesn't occur with iommu.forcedac=1 since that still allows IOVAs below 4 GiB, and we might be the only ones who don't support those. From my point of view this sounds like a mlx5_core issue: they really should call dma_set_mask_and_coherent() before their first call to dma_alloc_coherent(), not after. So I guess I'll send a v13 of this series rebased on iommu/core and with an additional mlx5 patch, and then let's hope we can get that merged in a way that doesn't leave us with broken ConnectX VFs for too long.
>
> Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations before setting its dma_set_mask_and_coherent(). Please link to this thread and we can get Leon or Saeed to ack it for Joerg.
>

Hi Niklas,

I bisected the start of this issue to the following commit (only noticeable on s390 when you apply this subject series on top):

06cd555f73caec515a14d42ef052221fa2587ff9 ("net/mlx5: split mlx5_cmd_init() to probe and reload routines")

which went in during the merge window. Please include it with your fix and/or report to the mlx5 maintainers.

Looks like the changes in this patch match what you and Jason describe; it splits up mlx5_cmd_init() and moves part of the call earlier. The net result is that we first call mlx5_mdev_init->mlx5_cmd_init->alloc_cmd_page->dma_alloc_coherent and then sometime later call mlx5_pci_init->set_dma_caps->dma_set_mask_and_coherent. Prior to this patch, we would not drive mlx5_cmd_init (and thus that first dma_alloc_coherent) until mlx5_init_one, which happens _after_ mlx5_pci_init->set_dma_caps->dma_set_mask_and_coherent.

Thanks,
Matt
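Putting the bisect result together with the earlier analysis, the probe-time ordering before and after that commit looks roughly as follows; the flow is paraphrased from the descriptions in this thread rather than taken from the mlx5 sources.

```c
/*
 * Before commit 06cd555f73ca ("net/mlx5: split mlx5_cmd_init() to probe
 * and reload routines"):
 *
 *   PCI probe
 *     mlx5_pci_init() -> set_dma_caps() -> dma_set_mask_and_coherent(64-bit)
 *     mlx5_init_one() -> mlx5_cmd_init() -> alloc_cmd_page()
 *                          -> dma_alloc_coherent()   // mask already 64-bit
 *
 * After the commit:
 *
 *   PCI probe
 *     mlx5_mdev_init() -> mlx5_cmd_init() -> alloc_cmd_page()
 *                           -> dma_alloc_coherent()  // mask still 32-bit default
 *     mlx5_pci_init() -> set_dma_caps() -> dma_set_mask_and_coherent(64-bit)
 *
 * On s390, where no IOVAs below 4 GiB are available, the 32-bit-constrained
 * allocation fails and the probe aborts with -ENOMEM.
 */
```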