Message ID | 20200505084614.30424-2-m.szyprowski@samsung.com (mailing list archive)
---|---
State | New, archived
Series | DRM: fix struct sg_table nents vs. orig_nents misuse
Hi Christoph,

On 05.05.2020 12:15, Christoph Hellwig wrote:
>> -	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
>> +	for_each_sg_page(st->sgl, &sg_iter, st->orig_nents, 0)
> Would it make sense to also add a for_each_sgtable_page helper that
> hides the use of orig_nents? To be used like:
>
> 	for_each_sgtable_page(st, &sg_iter, 0) {

We would need two helpers: for_each_sgtable_cpu_page() and
for_each_sgtable_dma_page(). I considered them, but then I found that
there are already for_each_sg_page(), for_each_sg_dma_page() and various
special iterators like sg_page_iter, sg_dma_page_iter and
sg_mapping_iter. Too bad that they are almost never used, at least in
the DRM subsystem. I wonder whether it makes sense to apply them, or to
simply provide the two above-mentioned wrappers?

>> +	for_each_sg(sgt->sgl, sg, sgt->orig_nents, count) {
> Same here, e.g.
>
> 	for_each_sgtable_entry(sgt, sg, count) {
>
> ?

Best regards
Hi Christoph,

On 05.05.2020 13:09, Christoph Hellwig wrote:
> On Tue, May 05, 2020 at 12:51:58PM +0200, Marek Szyprowski wrote:
>> On 05.05.2020 12:15, Christoph Hellwig wrote:
>>>> -	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
>>>> +	for_each_sg_page(st->sgl, &sg_iter, st->orig_nents, 0)
>>> Would it make sense to also add a for_each_sgtable_page helper that
>>> hides the use of orig_nents? To be used like:
>>>
>>> 	for_each_sgtable_page(st, &sg_iter, 0) {
>> We would need two helpers:
>>
>> for_each_sgtable_cpu_page() and for_each_sgtable_dma_page().
>>
>> I considered them, but then I found that there are already
>> for_each_sg_page(), for_each_sg_dma_page() and various special
>> iterators like sg_page_iter, sg_dma_page_iter and sg_mapping_iter.
>> Too bad that they are almost never used, at least in the DRM
>> subsystem. I wonder whether it makes sense to apply them, or to
>> simply provide the two above-mentioned wrappers?
> None of the helpers helps with passing the right parameters from the
> sg_table. So in doubt we'd need wrappers for all of the above, or
> none...

I've played a bit with the code and the existing scatterlist iterators -
for_each_sg_page() and for_each_sg_dma_page(). I've found them quite
handy! Their biggest advantage is that they always iterate over the
scatterlist in PAGE_SIZE units, which should make the code much easier
to understand.
Here is an example of their application to the function that started
this thread:

int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
				     dma_addr_t *addrs, int max_entries)
{
	struct sg_dma_page_iter dma_iter;
	struct sg_page_iter page_iter;

	if (addrs)
		for_each_sgtable_dma_page(sgt, &dma_iter, 0)
			*addrs++ = sg_page_iter_dma_address(&dma_iter);

	if (pages)
		for_each_sgtable_page(sgt, &page_iter, 0)
			*pages++ = sg_page_iter_page(&page_iter);

	return 0;
}

After applying those iterators where possible (they can be used only for
reading the scatterlist), we would just need to add two trivial wrappers
to use them with sg_table objects:

#define for_each_sgtable_page(sgt, piter, pgoffset) \
	for_each_sg_page(sgt->sgl, piter, sgt->orig_nents, pgoffset)

#define for_each_sgtable_dma_page(sgt, dma_iter, pgoffset) \
	for_each_sg_dma_page(sgt->sgl, dma_iter, sgt->nents, pgoffset)

Then we would just need one more helper to construct a scatterlist, as
the above two are read-only and don't allow modifying the scatterlist:

#define for_each_sgtable_sg(sgt, sg, i) \
	for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)

With the above 3 helpers we can probably get rid of all instances of
sg_table->{nents,orig_nents} from the DRM code. I will prepare patches
soon.

Best regards
Hi Christoph,

On 08.05.2020 09:16, Christoph Hellwig wrote:
> On Fri, May 08, 2020 at 09:12:13AM +0200, Marek Szyprowski wrote:
>> Then we would just need one more helper to construct a scatterlist, as
>> the above two are read-only and don't allow modifying the scatterlist:
>>
>> #define for_each_sgtable_sg(sgt, sg, i) \
>> 	for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)
>>
>> With the above 3 helpers we can probably get rid of all instances of
>> sg_table->{nents,orig_nents} from the DRM code. I will prepare patches soon.
> Sounds great, thanks!

It turned out that a 4th helper (for_each_sgtable_dma_sg) was needed, as
some drivers make use of units larger than PAGE_SIZE for DMA-mapped
pages.

Best regards
diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 03e01b0..63bd497 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -127,7 +127,7 @@ static void drm_cache_flush_clflush(struct page *pages[],
 	struct sg_page_iter sg_iter;
 
 	mb(); /*CLFLUSH is ordered only by using memory barriers*/
-	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
+	for_each_sg_page(st->sgl, &sg_iter, st->orig_nents, 0)
 		drm_clflush_page(sg_page_iter_page(&sg_iter));
 	mb(); /*Make sure that all cache line entry is flushed*/
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index df31e57..cfcfc0d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -117,8 +117,8 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 		kvfree(shmem->pages);
 	} else {
 		if (shmem->sgt) {
-			dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
-				     shmem->sgt->nents, DMA_BIDIRECTIONAL);
+			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
+					  DMA_BIDIRECTIONAL);
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
@@ -395,8 +395,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_object *obj)
 
 	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
 
-	dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
-		     shmem->sgt->nents, DMA_BIDIRECTIONAL);
+	dma_unmap_sgtable(obj->dev->dev, shmem->sgt, DMA_BIDIRECTIONAL);
 	sg_free_table(shmem->sgt);
 	kfree(shmem->sgt);
 	shmem->sgt = NULL;
@@ -623,12 +622,17 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_object *obj)
 		goto err_put_pages;
 	}
 
 	/* Map the pages for use by the h/w. */
-	dma_map_sg(obj->dev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL);
+	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL);
+	if (ret)
+		goto err_free_sgt;
 
 	shmem->sgt = sgt;
 
 	return sgt;
 
+err_free_sgt:
+	sg_free_table(sgt);
+	kfree(sgt);
 err_put_pages:
 	drm_gem_shmem_put_pages(shmem);
 	return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 282774e..3e7cb02 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -617,6 +617,7 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
 {
 	struct drm_gem_object *obj = attach->dmabuf->priv;
 	struct sg_table *sgt;
+	int ret;
 
 	if (WARN_ON(dir == DMA_NONE))
 		return ERR_PTR(-EINVAL);
@@ -626,11 +627,12 @@ struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
 	else
 		sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
 
-	if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-			      DMA_ATTR_SKIP_CPU_SYNC)) {
+	ret = dma_map_sgtable_attrs(attach->dev, sgt, dir,
+				    DMA_ATTR_SKIP_CPU_SYNC);
+	if (ret) {
 		sg_free_table(sgt);
 		kfree(sgt);
-		sgt = ERR_PTR(-ENOMEM);
+		sgt = ERR_PTR(ret);
 	}
 
 	return sgt;
@@ -652,8 +654,7 @@ void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 	if (!sgt)
 		return;
 
-	dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-			   DMA_ATTR_SKIP_CPU_SYNC);
+	dma_unmap_sgtable_attrs(attach->dev, sgt, dir, DMA_ATTR_SKIP_CPU_SYNC);
 	sg_free_table(sgt);
 	kfree(sgt);
 }
@@ -975,7 +976,7 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
 	 */
 	page_index = 0;
 	dma_index = 0;
-	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
+	for_each_sg(sgt->sgl, sg, sgt->orig_nents, count) {
 		page_len = sg->length;
 		page = sg_page(sg);
 		dma_len = sg_dma_len(sg);
The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of created entries in the DMA address space. However, the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must
be made with the original number of entries passed to dma_map_sg. The
sg_table->nents field in turn holds the result of the dma_map_sg call,
as stated in include/linux/scatterlist.h. A common mistake was to ignore
the result of the dma_map_sg function and never use
sg_table->orig_nents at all.

To avoid such issues, let's use common dma-mapping wrappers operating
directly on the struct sg_table objects and adjust references to nents
and orig_nents respectively.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v3 00/25] DRM: fix struct sg_table
nents vs. orig_nents misuse' thread: https://lkml.org/lkml/2020/5/5/187
---
 drivers/gpu/drm/drm_cache.c            |  2 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c | 14 +++++++++-----
 drivers/gpu/drm/drm_prime.c            | 13 +++++++------
 3 files changed, 17 insertions(+), 12 deletions(-)