Message ID | 20190620161240.22738-5-logang@deltatee.com (mailing list archive) |
---|---|
State | New, archived |
Series | Removing struct page from P2PDMA |
On Thu, Jun 20, 2019 at 10:12:16AM -0600, Logan Gunthorpe wrote:
> It is expected the creator of the dma-direct bio will ensure the
> target device can access the DMA address it's creating bios for.
> It's also not possible to bounce a dma-direct bio seeing the block
> layer doesn't have any way to access the underlying data behind
> the DMA address.
>
> Thus, never bounce dma-direct bios.

I wonder how feasible it would be to implement a 'dma vec' copy
from/to?
That is about the only operation you could safely do on P2P BAR
memory.

I wonder if a copy implementation could somehow query the iommu layer
to get a kmap of the memory pointed at by the dma address so we don't
need to carry struct page around?

Jason
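As a purely illustrative sketch of what such an interface could look like (none of this is from the series as posted: the struct dma_vec layout is assumed from an earlier patch and its field names may differ, and the copy helpers are invented names meant only to show the shape of a 'dma vec' copy from/to):

/*
 * Hypothetical sketch only.  struct dma_vec is assumed from an earlier
 * patch in this series (field names may differ); the two helpers are
 * invented names illustrating a 'dma vec' copy to/from a CPU buffer,
 * roughly the only operation that is safe on P2P BAR memory.
 */
struct dma_vec {
	dma_addr_t	dv_addr;
	u32		dv_len;
};

/*
 * Copy between a CPU buffer and the memory behind a dma_vec array that
 * was mapped for 'dev'.  How to get a CPU mapping back from a bare
 * dma_addr_t is exactly the open question in this thread.
 */
int dma_vec_copy_to_buffer(struct device *dev, const struct dma_vec *dvec,
			   int nents, void *buf, size_t len);
int dma_vec_copy_from_buffer(struct device *dev, const struct dma_vec *dvec,
			     int nents, const void *buf, size_t len);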
On 2019-06-20 11:23 a.m., Jason Gunthorpe wrote:
> On Thu, Jun 20, 2019 at 10:12:16AM -0600, Logan Gunthorpe wrote:
>> It is expected the creator of the dma-direct bio will ensure the
>> target device can access the DMA address it's creating bios for.
>> It's also not possible to bounce a dma-direct bio seeing the block
>> layer doesn't have any way to access the underlying data behind
>> the DMA address.
>>
>> Thus, never bounce dma-direct bios.
>
> I wonder how feasible it would be to implement a 'dma vec' copy
> from/to?
> That is about the only operation you could safely do on P2P BAR
> memory.
>
> I wonder if a copy implementation could somehow query the iommu layer
> to get a kmap of the memory pointed at by the dma address so we don't
> need to carry struct page around?

That sounds a bit nasty. First we'd have to determine what the
dma_addr_t points to: with P2P it may be a bus address or it may be an
IOVA address, and telling the two apart would probably have to be based
on whether the IOVA is reserved or not (PCI bus addresses should all be
reserved). Second, if it is an IOVA, then we'd have to get the physical
address back from the IOMMU tables and hope we can then get it back to
a sensible kernel mapping -- and if it points to a PCI bus address we'd
then have to somehow get back to the kernel mapping, which could be
anywhere in the VMALLOC region as we no longer have the linear mapping
that struct page provides.

I think if we need access to the memory, then this is the wrong
approach and we should keep struct page, or try pfn_t, so we can map
the memory in a way that would perform better. In theory, I could
relatively easily do the same thing I did for dma_vec but with a
pfn_t_vec. Though we'd still have the problem of determining the
virtual address from the physical address for memory that isn't
linearly mapped. We'd probably have to introduce some arch-specific
thing to linearly map an IO region or something, which may be possible
on some arches and not on others (the same problems we have with
struct page).

Logan
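To make the lookup Logan is describing concrete, a sketch of the "query the iommu layer" idea is below. iommu_get_domain_for_dev(), iommu_iova_to_phys(), pfn_valid() and kmap() are existing kernel APIs, but the helper itself (including its name) is hypothetical and simply gives up in exactly the hard cases described above: it cannot distinguish a raw PCI bus address from an IOVA, and it cannot map P2P BAR space that has no struct page and no linear mapping behind it.

#include <linux/dma-mapping.h>
#include <linux/highmem.h>
#include <linux/iommu.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/*
 * Hypothetical sketch, not from this series.  Resolve a dma_addr_t back
 * to a CPU mapping by asking the IOMMU layer for the physical address.
 * It punts on the problems discussed above: without an IOMMU domain it
 * simply assumes dma_addr == phys, it has no way to tell a raw PCI bus
 * address from an IOVA, and it fails outright for P2P BAR memory since
 * there is no struct page (and hence no linear mapping) behind it.
 */
static void *dma_addr_to_vaddr(struct device *dev, dma_addr_t dma_addr)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	phys_addr_t phys = dma_addr;

	if (domain)
		phys = iommu_iova_to_phys(domain, dma_addr);

	if (!pfn_valid(PHYS_PFN(phys)))
		return NULL;	/* e.g. a P2P BAR: nothing sensible to map */

	return kmap(pfn_to_page(PHYS_PFN(phys)));	/* caller must kunmap() */
}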
diff --git a/block/bounce.c b/block/bounce.c
index f8ed677a1bf7..17e020a40cca 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -367,6 +367,14 @@ void blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
 	if (!bio_has_data(*bio_orig))
 		return;
 
+	/*
+	 * For DMA direct bios, Upper layers are expected to ensure
+	 * the device in question can access the DMA addresses. So
+	 * it never makes sense to bounce a DMA direct bio.
+	 */
+	if (bio_is_dma_direct(*bio_orig))
+		return;
+
 	/*
 	 * for non-isa bounce case, just check if the bounce pfn is equal
 	 * to or bigger than the highest pfn in the system -- in that case,
It is expected the creator of the dma-direct bio will ensure the
target device can access the DMA address it's creating bios for.
It's also not possible to bounce a dma-direct bio seeing the block
layer doesn't have any way to access the underlying data behind
the DMA address.

Thus, never bounce dma-direct bios.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 block/bounce.c | 8 ++++++++
 1 file changed, 8 insertions(+)