| Message ID | 20240801104512.4056860-4-link@vivo.com (mailing list archive) |
|---|---|
| State | New |
| Series | udmbuf bug fix and some improvements |
On 01.08.24 12:45, Huan Yang wrote:
> Currently vmap_udmabuf fills the pages array from each folio, but
> ubuf->folios contains only each folio's head page.
>
> That means we repeatedly mapped the folio head page into the vmalloc
> area.
>
> This patch fixes it by recording each folio's pages correctly, so that
> the pages array contains the right pages before they are mapped into
> the vmalloc area.
>
> Signed-off-by: Huan Yang <link@vivo.com>
> ---
>  drivers/dma-buf/udmabuf.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> index a915714c5dce..7ed532342d7f 100644
> --- a/drivers/dma-buf/udmabuf.c
> +++ b/drivers/dma-buf/udmabuf.c
> @@ -77,18 +77,27 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
>  static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
>  {
>  	struct udmabuf *ubuf = buf->priv;
> -	struct page **pages;
> +	struct page **pages, **tmp;
> +	struct sg_table *sg = ubuf->sg;
> +	struct sg_page_iter piter;
>  	void *vaddr;
> -	pgoff_t pg;
>
>  	dma_resv_assert_held(buf->resv);
>
> +	if (!sg) {
> +		sg = get_sg_table(NULL, buf, 0);
> +		if (IS_ERR(sg))
> +			return PTR_ERR(sg);
> +		ubuf->sg = sg;
> +	}
> +
>  	pages = kvmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
>  	if (!pages)
>  		return -ENOMEM;
> +	tmp = pages;
>
> -	for (pg = 0; pg < ubuf->pagecount; pg++)
> -		pages[pg] = &ubuf->folios[pg]->page;
> +	for_each_sgtable_page(sg, &piter, 0)
> +		*tmp++ = sg_page_iter_page(&piter);

Again, don't abuse the sg table for that!

Regards,
Christian.

>
>  	vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
>  	kvfree(pages);