| Message ID | 20200327162126.29705-1-m.szyprowski@samsung.com (mailing list archive) |
|---|---|
| State | Mainlined |
| Commit | c0f83d164fb8f3a2b7bc379a6c1e27d1123a9eab |
| Series | [v2] drm/prime: fix extracting of the DMA addresses from a scatterlist |
> -----Original Message-----
> From: Marek Szyprowski <m.szyprowski@samsung.com>
> Sent: Friday, March 27, 2020 12:21 PM
> To: dri-devel@lists.freedesktop.org; linux-samsung-soc@vger.kernel.org;
> linux-kernel@vger.kernel.org
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>; stable@vger.kernel.org;
> Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>; Maarten Lankhorst
> <maarten.lankhorst@linux.intel.com>; Maxime Ripard <mripard@kernel.org>;
> Thomas Zimmermann <tzimmermann@suse.de>; David Airlie <airlied@linux.ie>;
> Daniel Vetter <daniel@ffwll.ch>; Alex Deucher <alexander.deucher@amd.com>;
> Shane Francis <bigbeeshane@gmail.com>; Ruhl, Michael J <michael.j.ruhl@intel.com>
> Subject: [PATCH v2] drm/prime: fix extracting of the DMA addresses from a scatterlist
>
> Scatterlist elements contains both pages and DMA addresses, but one
> should not assume 1:1 relation between them. The sg->length is the size
> of the physical memory chunk described by the sg->page, while
> sg_dma_len(sg) is the size of the DMA (IO virtual) chunk described by
> the sg_dma_address(sg).
>
> The proper way of extracting both: pages and DMA addresses of the whole
> buffer described by a scatterlist it to iterate independently over the
> sg->pages/sg->length and sg_dma_address(sg)/sg_dma_len(sg) entries.
>
> Fixes: 42e67b479eab ("drm/prime: use dma length macro when mapping sg")
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
>  drivers/gpu/drm/drm_prime.c | 37 +++++++++++++++++++++++++------------
>  1 file changed, 25 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 1de2cde2277c..282774e469ac 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -962,27 +962,40 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
>  	unsigned count;
>  	struct scatterlist *sg;
>  	struct page *page;
> -	u32 len, index;
> +	u32 page_len, page_index;
>  	dma_addr_t addr;
> +	u32 dma_len, dma_index;
>
> -	index = 0;
> +	/*
> +	 * Scatterlist elements contains both pages and DMA addresses, but
> +	 * one shoud not assume 1:1 relation between them. The sg->length is
> +	 * the size of the physical memory chunk described by the sg->page,
> +	 * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
> +	 * described by the sg_dma_address(sg).
> +	 */

Is there an example of what the scatterlist would look like in this case?

Does each SG entry always have the page and dma info? or could you have
entries that have page information only, and entries that have dma info only?

If the same entry has different size info (page_len = PAGE_SIZE,
dma_len = 4 * PAGE_SIZE?), are we guaranteed that the arrays (page and
addrs) have been sized correctly?

Just trying to get my head wrapped around this.

Thanks,

Mike

> +	page_index = 0;
> +	dma_index = 0;
>  	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> -		len = sg_dma_len(sg);
> +		page_len = sg->length;
>  		page = sg_page(sg);
> +		dma_len = sg_dma_len(sg);
>  		addr = sg_dma_address(sg);
>
> -		while (len > 0) {
> -			if (WARN_ON(index >= max_entries))
> +		while (pages && page_len > 0) {
> +			if (WARN_ON(page_index >= max_entries))
>  				return -1;
> -			if (pages)
> -				pages[index] = page;
> -			if (addrs)
> -				addrs[index] = addr;
> -
> +			pages[page_index] = page;
>  			page++;
> +			page_len -= PAGE_SIZE;
> +			page_index++;
> +		}
> +		while (addrs && dma_len > 0) {
> +			if (WARN_ON(dma_index >= max_entries))
> +				return -1;
> +			addrs[dma_index] = addr;
>  			addr += PAGE_SIZE;
> -			len -= PAGE_SIZE;
> -			index++;
> +			dma_len -= PAGE_SIZE;
> +			dma_index++;
>  		}
>  	}
>  	return 0;
> --
> 2.17.1
On Fri, Mar 27, 2020 at 05:21:26PM +0100, Marek Szyprowski wrote:
> Scatterlist elements contains both pages and DMA addresses, but one
> should not assume 1:1 relation between them.

<snip>

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
On Fri, Mar 27, 2020 at 6:31 PM Ruhl, Michael J <michael.j.ruhl@intel.com> wrote:
> Is there an example of what the scatterlist would look like in this case?
>
> Does each SG entry always have the page and dma info? or could you have
> entries that have page information only, and entries that have dma info only?
>
> If the same entry has different size info (page_len = PAGE_SIZE,
> dma_len = 4 * PAGE_SIZE?), are we guaranteed that the arrays (page and
> addrs) have been sized correctly?
>
> Just trying to get my head wrapped around this.
>
> Thanks,
>
> Mike

My understanding is that page_len and dma_len in this case could have
different values (looking at iommu_dma_map_sg within dma-iommu.c); this
seems to add some padding, calculated using the device's iova domain, to
sg->length, while sg_dma_len is set to the original length.

The scatterlist table can also get reduced down within "__finalise_sg",
possibly causing (if reduced) the dma_len of the last table elements to
be 0 (page_len would not be 0 in this case).

Documentation around looping over & accessing scatterlists in DMA-API.txt
states that sg_dma_address() and sg_dma_len() should be used when
accessing addr and len, rather than sg->address and sg->length.

Maybe it would be worth splitting this out into 2 functions to avoid
potential issues with the above use case?

Regards,

Shane Francis
Hi Michael,

On 2020-03-27 19:31, Ruhl, Michael J wrote:

<snip>

> Is there an example of what the scatterlist would look like in this case?

The DMA framework or IOMMU is allowed to join consecutive chunks while
mapping, if such an operation is supported by the hw. Here is an example:

Let's assume that we have a scatterlist with 4 4KiB pages at the physical
addresses 0x12000000, 0x13011000, 0x13012000 and 0x11011000. The total
size of the buffer is 16KiB. After mapping this scatterlist to a device
behind an IOMMU, it may end up as a contiguous buffer in the DMA (IOVA)
address space at 0xf0010000. The scatterlist will look like this:

sg[0].page = 0x12000000
sg[0].len = 4096
sg[0].dma_addr = 0xf0010000
sg[0].dma_len = 16384
sg[1].page = 0x13011000
sg[1].len = 4096
sg[1].dma_addr = 0
sg[1].dma_len = 0
sg[2].page = 0x13012000
sg[2].len = 4096
sg[2].dma_addr = 0
sg[2].dma_len = 0
sg[3].page = 0x11011000
sg[3].len = 4096
sg[3].dma_addr = 0
sg[3].dma_len = 0

(I've intentionally written the page as a physical address to make it
easier to understand; in real SGs it is stored as a struct page pointer.)

> Does each SG entry always have the page and dma info? or could you have
> entries that have page information only, and entries that have dma info only?

When an SG is not mapped yet, it contains only the ->pages and ->len
entries. I'm not aware of SGs with the DMA information only, but in
theory it might be possible to have such.

> If the same entry has different size info (page_len = PAGE_SIZE,
> dma_len = 4 * PAGE_SIZE?), are we guaranteed that the arrays (page and
> addrs) have been sized correctly?

There are never more DMA-related entries than phys pages. If there is a
1:1 mapping between physical memory and DMA (IOVA) space, then each SG
entry will have len == dma_len, and dma_addr will describe the same chunk
as the page entry. The DMA mapping framework is allowed only to join
entries while mapping to DMA (IOVA).

> Just trying to get my head wrapped around this.

Sure, I hope my explanation helps a bit.

Best regards
On 2020-03-29 10:55 am, Marek Szyprowski wrote:
> Hi Michael,
>
> On 2020-03-27 19:31, Ruhl, Michael J wrote:

<snip>

> There are always no more DMA related entries than the phys pages. If
> there is 1:1 mapping between physical memory and DMA (IOVA) space, then
> each SG entry will have len == dma_len, and dma_addr will be describing
> the same as page entry. DMA mapping framework is allowed only to join
> entries while mapping to DMA (IOVA).

Nit: even in a 1:1 mapping, merging would still be permitted (subject to
dma_parms constraints) during a bounce-buffer copy, or if the caller
simply generates a naive list like so:

sg[0].page = 0x12000000
sg[0].len = 4096
sg[1].page = 0x12001000
sg[1].len = 4096

dma_map_sg() =>

sg[0].dma_addr = 0x12000000
sg[0].dma_len = 8192
sg[1].dma_addr = 0
sg[1].dma_len = 0

I'm not sure that any non-IOMMU DMA API implementations actually take
advantage of this, but they are *allowed* to ;)

Robin.
> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Marek Szyprowski
> Sent: Sunday, March 29, 2020 5:56 AM
> To: Ruhl, Michael J <michael.j.ruhl@intel.com>;
> dri-devel@lists.freedesktop.org; linux-samsung-soc@vger.kernel.org;
> linux-kernel@vger.kernel.org
> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>; David Airlie
> <airlied@linux.ie>; Shane Francis <bigbeeshane@gmail.com>;
> stable@vger.kernel.org; Thomas Zimmermann <tzimmermann@suse.de>;
> Alex Deucher <alexander.deucher@amd.com>
> Subject: Re: [PATCH v2] drm/prime: fix extracting of the DMA addresses from a scatterlist
>
> Hi Michael,

<snip>

> There are always no more DMA related entries than the phys pages. If
> there is 1:1 mapping between physical memory and DMA (IOVA) space, then
> each SG entry will have len == dma_len, and dma_addr will be describing
> the same as page entry. DMA mapping framework is allowed only to join
> entries while mapping to DMA (IOVA).
>
> > Just trying to get my head wrapped around this.
>
> Sure, I hope my explanation helps a bit.

That is a great example!

Thank you very much for the explanation. I was somehow seeing it as the
dma side getting split and extended (rather than consolidated) into more
possible entries.

This clarifies the issue for me.

Thanks!

Mike

> Best regards
> --
> Marek Szyprowski, PhD
> Samsung R&D Institute Poland
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Fri, Mar 27, 2020 at 12:23 PM Marek Szyprowski
<m.szyprowski@samsung.com> wrote:
>
> Scatterlist elements contains both pages and DMA addresses, but one
> should not assume 1:1 relation between them. The sg->length is the size
> of the physical memory chunk described by the sg->page, while
> sg_dma_len(sg) is the size of the DMA (IO virtual) chunk described by
> the sg_dma_address(sg).
>
> The proper way of extracting both: pages and DMA addresses of the whole
> buffer described by a scatterlist it to iterate independently over the
> sg->pages/sg->length and sg_dma_address(sg)/sg_dma_len(sg) entries.
>
> Fixes: 42e67b479eab ("drm/prime: use dma length macro when mapping sg")
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>

Applied.  Thanks and sorry for the breakage.

Alex

<snip>
On Sun, Apr 05, 2020 at 10:47:49AM -0400, Alex Deucher wrote:
> On Fri, Mar 27, 2020 at 12:23 PM Marek Szyprowski
> <m.szyprowski@samsung.com> wrote:

<snip>

> Applied.  Thanks and sorry for the breakage.

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
```diff
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1de2cde2277c..282774e469ac 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -962,27 +962,40 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
 	unsigned count;
 	struct scatterlist *sg;
 	struct page *page;
-	u32 len, index;
+	u32 page_len, page_index;
 	dma_addr_t addr;
+	u32 dma_len, dma_index;
 
-	index = 0;
+	/*
+	 * Scatterlist elements contains both pages and DMA addresses, but
+	 * one shoud not assume 1:1 relation between them. The sg->length is
+	 * the size of the physical memory chunk described by the sg->page,
+	 * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
+	 * described by the sg_dma_address(sg).
+	 */
+	page_index = 0;
+	dma_index = 0;
 	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
-		len = sg_dma_len(sg);
+		page_len = sg->length;
 		page = sg_page(sg);
+		dma_len = sg_dma_len(sg);
 		addr = sg_dma_address(sg);
 
-		while (len > 0) {
-			if (WARN_ON(index >= max_entries))
+		while (pages && page_len > 0) {
+			if (WARN_ON(page_index >= max_entries))
 				return -1;
-			if (pages)
-				pages[index] = page;
-			if (addrs)
-				addrs[index] = addr;
-
+			pages[page_index] = page;
 			page++;
+			page_len -= PAGE_SIZE;
+			page_index++;
+		}
+		while (addrs && dma_len > 0) {
+			if (WARN_ON(dma_index >= max_entries))
+				return -1;
+			addrs[dma_index] = addr;
 			addr += PAGE_SIZE;
-			len -= PAGE_SIZE;
-			index++;
+			dma_len -= PAGE_SIZE;
+			dma_index++;
 		}
 	}
 	return 0;
```