
[RFC] drm/ttm: dma: Fixes for 32-bit and 64-bit ARM

Message ID 1415795945-17575-1-git-send-email-thierry.reding@gmail.com (mailing list archive)
State New, archived

Commit Message

Thierry Reding Nov. 12, 2014, 12:39 p.m. UTC
From: Thierry Reding <treding@nvidia.com>

dma_alloc_coherent() returns a kernel virtual address that is part of
the linear range. Passing such an address to virt_to_page() is illegal
on non-coherent architectures. This causes the kernel to oops on 64-bit
ARM because the struct page * obtained from virt_to_page() points to
unmapped memory.

This commit fixes this by using phys_to_page() since we get a physical
address from dma_alloc_coherent(). Note that this is not a proper fix
because if an IOMMU is set up to translate addresses for the GPU this
address will be an I/O virtual address rather than a physical one. The
proper fix probably involves not getting a pointer to the struct page
in the first place, but that would be a much more intrusive change, if
at all possible.

Until that time, this temporary fix will allow TTM to work on 32-bit
and 64-bit ARM as well, provided that no IOMMU translations are enabled
for the GPU.

Signed-off-by: Thierry Reding <treding@nvidia.com>
---
Arnd, I realize that this isn't a proper fix according to what we discussed on
IRC yesterday, but I can't see a way to remove access to the pages array that
would be as simple as this. I've marked this as RFC in the hope that it will
trigger some discussion that will lead to a proper solution.

 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 4 ++++
 1 file changed, 4 insertions(+)

Comments

Konrad Rzeszutek Wilk Nov. 12, 2014, 2:18 p.m. UTC | #1
On Wed, Nov 12, 2014 at 01:39:05PM +0100, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
> 
> dma_alloc_coherent() returns a kernel virtual address that is part of
> the linear range. Passing such an address to virt_to_page() is illegal
> on non-coherent architectures. This causes the kernel to oops on 64-bit
> ARM because the struct page * obtained from virt_to_page() points to
> unmapped memory.

Oh! That is not good!
> 
> This commit fixes this by using phys_to_page() since we get a physical
> address from dma_alloc_coherent(). Note that this is not a proper fix
> because if an IOMMU is set up to translate addresses for the GPU this
> address will be an I/O virtual address rather than a physical one. The
> proper fix probably involves not getting a pointer to the struct page
> in the first place, but that would be a much more intrusive change, if
> at all possible.

What types of caching are there on ARM? We use the 'struct page'
in set_pages_to_[wc|uc|wb], but all of those are x86-specific.

But I think you could fix this by passing 'struct dma_page' around
instead of 'struct page' (including in the array uses). That should
avoid the touching of 'struct page' and we can treat it as an opaque
type.
> 
> Until that time, this temporary fix will allow TTM to work on 32-bit
> and 64-bit ARM as well, provided that no IOMMU translations are enabled
> for the GPU.

Is there a way to query the 'struct device' to see if the IOMMU translation
is enabled/disabled for said device?

Now your patch looks to get the 'struct page' by doing some form of
translation. Could you explain to me which types of memory have a 'struct page'
and which ones do not?

It is OK if you explain this in nauseating detail :-)

> 
> Signed-off-by: Thierry Reding <treding@nvidia.com>
> ---
> Arnd, I realize that this isn't a proper fix according to what we discussed on
> IRC yesterday, but I can't see a way to remove access to the pages array that
> would be as simple as this. I've marked this as RFC in the hope that it will
> trigger some discussion that will lead to a proper solution.
> 
>  drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> index c96db433f8af..d7993985752c 100644
> --- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> @@ -343,7 +343,11 @@ static struct dma_page *__ttm_dma_alloc_page(struct dma_pool *pool)
>  					   &d_page->dma,
>  					   pool->gfp_flags);
>  	if (d_page->vaddr)
> +#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
> +		d_page->p = phys_to_page(d_page->dma);
> +#else
>  		d_page->p = virt_to_page(d_page->vaddr);
> +#endif
>  	else {
>  		kfree(d_page);
>  		d_page = NULL;
> -- 
> 2.1.3
>
Arnd Bergmann Nov. 12, 2014, 5:03 p.m. UTC | #2
On Wednesday 12 November 2014 09:18:59 Konrad Rzeszutek Wilk wrote:
> On Wed, Nov 12, 2014 at 01:39:05PM +0100, Thierry Reding wrote:
> > From: Thierry Reding <treding@nvidia.com>
> > 
> > dma_alloc_coherent() returns a kernel virtual address that is part of
> > the linear range. Passing such an address to virt_to_page() is illegal
> > on non-coherent architectures. This causes the kernel to oops on 64-bit
> > ARM because the struct page * obtained from virt_to_page() points to
> > unmapped memory.
> 
> Oh! That is not good!
>
I think what Thierry meant is that the returned pointer is /not/ in the
linear range.
 
> > Until that time, this temporary fix will allow TTM to work on 32-bit
> > and 64-bit ARM as well, provided that no IOMMU translations are enabled
> > for the GPU.
> 
> Is there a way to query the 'struct device' to see if the IOMMU translation
> is enabled/disabled for said device?
> 
> > Now your patch looks to get the 'struct page' by doing some form of
> > translation. Could you explain to me which types of memory have a 'struct page'
> > and which ones do not?
> > 
> > It is OK if you explain this in nauseating detail

Basically there are two types of memory that have a struct page:

- directly mapped cacheable memory, i.e. anything that can be accessed
  through a kernel pointer without having to go through ioremap/vmalloc/...

- highmem pages on 32-bit systems.

On noncoherent ARM systems, dma_alloc_coherent will return memory that
was unmapped from the linear range to avoid having both cacheable and
noncacheable mappings for the same page.

	Arnd
Konrad Rzeszutek Wilk Dec. 1, 2014, 4:43 p.m. UTC | #3
On Wed, Nov 12, 2014 at 06:03:49PM +0100, Arnd Bergmann wrote:
> On Wednesday 12 November 2014 09:18:59 Konrad Rzeszutek Wilk wrote:
> > On Wed, Nov 12, 2014 at 01:39:05PM +0100, Thierry Reding wrote:
> > > From: Thierry Reding <treding@nvidia.com>
> > > 
> > > dma_alloc_coherent() returns a kernel virtual address that is part of
> > > the linear range. Passing such an address to virt_to_page() is illegal
> > > on non-coherent architectures. This causes the kernel to oops on 64-bit
> > > ARM because the struct page * obtained from virt_to_page() points to
> > > unmapped memory.
> > 
> > Oh! That is not good!
> >
> I think what Thierry meant is that the returned pointer is /not/ in the
> linear range.
>  
> > > Until that time, this temporary fix will allow TTM to work on 32-bit
> > > and 64-bit ARM as well, provided that no IOMMU translations are enabled
> > > for the GPU.
> > 
> > Is there a way to query the 'struct device' to see if the IOMMU translation
> > is enabled/disabled for said device?

?
> > 
> > Now your patch looks to get the 'struct page' by doing some form of
> > translation. Could you explain to me which types of memory have a 'struct page'
> > and which ones do not?
> > 
> > It is OK if you explain this in nauseating detail
> 
> Basically there are two types of memory that have a struct page:
> 
> - directly mapped cacheable memory, i.e. anything that can be accessed
>   through a kernel pointer without having to go through ioremap/vmalloc/...
> 
> - highmem pages on 32-bit systems.
> 
> On noncoherent ARM systems, dma_alloc_coherent will return memory that
> was unmapped from the linear range to avoid having both cacheable and
> noncacheable mappings for the same page.
> 
> 	Arnd
Alexandre Courbot Dec. 8, 2014, 7:14 a.m. UTC | #4
On 11/12/2014 09:39 PM, Thierry Reding wrote:
> From: Thierry Reding <treding@nvidia.com>
>
> dma_alloc_coherent() returns a kernel virtual address that is part of
> the linear range. Passing such an address to virt_to_page() is illegal
> on non-coherent architectures. This causes the kernel to oops on 64-bit
> ARM because the struct page * obtained from virt_to_page() points to
> unmapped memory.
>
> This commit fixes this by using phys_to_page() since we get a physical
> address from dma_alloc_coherent(). Note that this is not a proper fix
> because if an IOMMU is set up to translate addresses for the GPU this
> address will be an I/O virtual address rather than a physical one. The
> proper fix probably involves not getting a pointer to the struct page
> in the first place, but that would be a much more intrusive change, if
> at all possible.
>
> Until that time, this temporary fix will allow TTM to work on 32-bit
> and 64-bit ARM as well, provided that no IOMMU translations are enabled
> for the GPU.
>
> Signed-off-by: Thierry Reding <treding@nvidia.com>
> ---
> Arnd, I realize that this isn't a proper fix according to what we discussed on
> IRC yesterday, but I can't see a way to remove access to the pages array that
> would be as simple as this. I've marked this as RFC in the hope that it will
> trigger some discussion that will lead to a proper solution.
>
>   drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 4 ++++
>   1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> index c96db433f8af..d7993985752c 100644
> --- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
> @@ -343,7 +343,11 @@ static struct dma_page *__ttm_dma_alloc_page(struct dma_pool *pool)
>   					   &d_page->dma,
>   					   pool->gfp_flags);
>   	if (d_page->vaddr)
> +#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
> +		d_page->p = phys_to_page(d_page->dma);
> +#else
>   		d_page->p = virt_to_page(d_page->vaddr);
> +#endif

Since I am messing with the IOMMU I just happened to hit the issue
you are mentioning. Wouldn't the following work:

-       if (d_page->vaddr)
-#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
-               d_page->p = phys_to_page(d_page->dma);
-#else
-               d_page->p = virt_to_page(d_page->vaddr);
-#endif
-       else {
+       if (d_page->vaddr) {
+               if (is_vmalloc_addr(d_page->vaddr)) {
+                       d_page->p = vmalloc_to_page(d_page->vaddr);
+               } else {
+                       d_page->p = virt_to_page(d_page->vaddr);
+               }
+       } else {

A remapped page will end up in the vmalloc range of the address space, 
and in this case we can use vmalloc_to_page() to get the right page. 
Pages outside of this range are part of the linear mapping and can be 
resolved using virt_to_page().

Jetson seems to be mostly happy with this, although I sometimes get the 
following trace:

[   13.174763] kernel BUG at ../mm/slab.c:2593!
[   13.174767] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
[   13.174790] Modules linked in: nouveau_platform(O+) nouveau(O) 
cfbfillrect cfbimgblt cfbcopyarea ttm
...
[   13.175234] [<c00de238>] (cache_alloc_refill) from [<c00de528>] 
(__kmalloc+0x100/0x13c)
[   13.175247] [<c00de528>] (__kmalloc) from [<c001d564>] 
(arm_iommu_alloc_attrs+0x94/0x3a8)
[   13.175269] [<c001d564>] (arm_iommu_alloc_attrs) from [<bf008f4c>] 
(ttm_dma_populate+0x498/0x76c [ttm])
[   13.175294] [<bf008f4c>] (ttm_dma_populate [ttm]) from [<bf000bb8>] 
(ttm_tt_bind+0x38/0x68 [ttm])
[   13.175315] [<bf000bb8>] (ttm_tt_bind [ttm]) from [<bf00298c>] 
(ttm_bo_handle_move_mem+0x408/0x47c [ttm])
[   13.175337] [<bf00298c>] (ttm_bo_handle_move_mem [ttm]) from 
[<bf003758>] (ttm_bo_validate+0x220/0x22c [ttm])
[   13.175359] [<bf003758>] (ttm_bo_validate [ttm]) from [<bf003984>] 
(ttm_bo_init+0x220/0x338 [ttm])
[   13.175480] [<bf003984>] (ttm_bo_init [ttm]) from [<bf0c70a0>] 
(nouveau_bo_new+0x1c0/0x294 [nouveau])
[   13.175688] [<bf0c70a0>] (nouveau_bo_new [nouveau]) from [<bf0ce88c>] 
(nv84_fence_create+0x1cc/0x240 [nouveau])
[   13.175891] [<bf0ce88c>] (nv84_fence_create [nouveau]) from 
[<bf0cec90>] (nvc0_fence_create+0xc/0x24 [nouveau])
[   13.176094] [<bf0cec90>] (nvc0_fence_create [nouveau]) from 
[<bf0c1480>] (nouveau_accel_init+0xec/0x450 [nouveau])

I suspect this is related to this change, but it might also be the 
side-effect of another bug in my code.
Alexandre Courbot Dec. 8, 2014, 7:36 a.m. UTC | #5
On 12/08/2014 04:14 PM, Alexandre Courbot wrote:
> On 11/12/2014 09:39 PM, Thierry Reding wrote:
>> From: Thierry Reding <treding@nvidia.com>
>>
>> dma_alloc_coherent() returns a kernel virtual address that is part of
>> the linear range. Passing such an address to virt_to_page() is illegal
>> on non-coherent architectures. This causes the kernel to oops on 64-bit
>> ARM because the struct page * obtained from virt_to_page() points to
>> unmapped memory.
>>
>> This commit fixes this by using phys_to_page() since we get a physical
>> address from dma_alloc_coherent(). Note that this is not a proper fix
>> because if an IOMMU is set up to translate addresses for the GPU this
>> address will be an I/O virtual address rather than a physical one. The
>> proper fix probably involves not getting a pointer to the struct page
>> in the first place, but that would be a much more intrusive change, if
>> at all possible.
>>
>> Until that time, this temporary fix will allow TTM to work on 32-bit
>> and 64-bit ARM as well, provided that no IOMMU translations are enabled
>> for the GPU.
>>
>> Signed-off-by: Thierry Reding <treding@nvidia.com>
>> ---
>> Arnd, I realize that this isn't a proper fix according to what we
>> discussed on
>> IRC yesterday, but I can't see a way to remove access to the pages
>> array that
>> would be as simple as this. I've marked this as RFC in the hope that
>> it will
>> trigger some discussion that will lead to a proper solution.
>>
>>   drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> index c96db433f8af..d7993985752c 100644
>> --- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> @@ -343,7 +343,11 @@ static struct dma_page
>> *__ttm_dma_alloc_page(struct dma_pool *pool)
>>                          &d_page->dma,
>>                          pool->gfp_flags);
>>       if (d_page->vaddr)
>> +#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
>> +        d_page->p = phys_to_page(d_page->dma);
>> +#else
>>           d_page->p = virt_to_page(d_page->vaddr);
>> +#endif
>
> Since I am messing with the IOMMU I just happened to have hit the issue
> you are mentioning. Wouldn't the following work:
>
> -       if (d_page->vaddr)
> -#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
> -               d_page->p = phys_to_page(d_page->dma);
> -#else
> -               d_page->p = virt_to_page(d_page->vaddr);
> -#endif
> -       else {
> +       if (d_page->vaddr) {
> +               if (is_vmalloc_addr(d_page->vaddr)) {
> +                       d_page->p = vmalloc_to_page(d_page->vaddr);
> +               } else {
> +                       d_page->p = virt_to_page(d_page->vaddr);
> +               }
> +       } else {
>
> A remapped page will end up in the vmalloc range of the address space,
> and in this case we can use vmalloc_to_page() to get the right page.
> Pages outside of this range are part of the linear mapping and can be
> resolved using virt_to_page().
>
> Jetson seems to be mostly happy with this, although I sometimes get the
> following trace:
>
> [   13.174763] kernel BUG at ../mm/slab.c:2593!
> [   13.174767] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
> [   13.174790] Modules linked in: nouveau_platform(O+) nouveau(O)
> cfbfillrect cfbimgblt cfbcopyarea ttm
> ...
> [   13.175234] [<c00de238>] (cache_alloc_refill) from [<c00de528>]
> (__kmalloc+0x100/0x13c)
> [   13.175247] [<c00de528>] (__kmalloc) from [<c001d564>]
> (arm_iommu_alloc_attrs+0x94/0x3a8)
> [   13.175269] [<c001d564>] (arm_iommu_alloc_attrs) from [<bf008f4c>]
> (ttm_dma_populate+0x498/0x76c [ttm])
> [   13.175294] [<bf008f4c>] (ttm_dma_populate [ttm]) from [<bf000bb8>]
> (ttm_tt_bind+0x38/0x68 [ttm])
> [   13.175315] [<bf000bb8>] (ttm_tt_bind [ttm]) from [<bf00298c>]
> (ttm_bo_handle_move_mem+0x408/0x47c [ttm])
> [   13.175337] [<bf00298c>] (ttm_bo_handle_move_mem [ttm]) from
> [<bf003758>] (ttm_bo_validate+0x220/0x22c [ttm])
> [   13.175359] [<bf003758>] (ttm_bo_validate [ttm]) from [<bf003984>]
> (ttm_bo_init+0x220/0x338 [ttm])
> [   13.175480] [<bf003984>] (ttm_bo_init [ttm]) from [<bf0c70a0>]
> (nouveau_bo_new+0x1c0/0x294 [nouveau])
> [   13.175688] [<bf0c70a0>] (nouveau_bo_new [nouveau]) from [<bf0ce88c>]
> (nv84_fence_create+0x1cc/0x240 [nouveau])
> [   13.175891] [<bf0ce88c>] (nv84_fence_create [nouveau]) from
> [<bf0cec90>] (nvc0_fence_create+0xc/0x24 [nouveau])
> [   13.176094] [<bf0cec90>] (nvc0_fence_create [nouveau]) from
> [<bf0c1480>] (nouveau_accel_init+0xec/0x450 [nouveau])
>
> I suspect this is related to this change, but it might also be the
> side-effect of another bug in my code.

FWIW, after some more testing, I noticed that without an IOMMU
vmalloc_to_page() and phys_to_page() both return the same valid page.
With the IOMMU enabled, vmalloc_to_page() still returns what seem to be
valid pages (phys_to_page() of course doesn't make sense anymore).

So AFAICT the change I proposed seems valid. I am not sure what causes 
the BUG() in the slab allocator.
Alexandre Courbot Dec. 15, 2014, 8:04 a.m. UTC | #6
On Mon, Dec 8, 2014 at 4:14 PM, Alexandre Courbot <acourbot@nvidia.com> wrote:
> On 11/12/2014 09:39 PM, Thierry Reding wrote:
>>
>> From: Thierry Reding <treding@nvidia.com>
>>
>> dma_alloc_coherent() returns a kernel virtual address that is part of
>> the linear range. Passing such an address to virt_to_page() is illegal
>> on non-coherent architectures. This causes the kernel to oops on 64-bit
>> ARM because the struct page * obtained from virt_to_page() points to
>> unmapped memory.
>>
>> This commit fixes this by using phys_to_page() since we get a physical
>> address from dma_alloc_coherent(). Note that this is not a proper fix
>> because if an IOMMU is set up to translate addresses for the GPU this
>> address will be an I/O virtual address rather than a physical one. The
>> proper fix probably involves not getting a pointer to the struct page
>> in the first place, but that would be a much more intrusive change, if
>> at all possible.
>>
>> Until that time, this temporary fix will allow TTM to work on 32-bit
>> and 64-bit ARM as well, provided that no IOMMU translations are enabled
>> for the GPU.
>>
>> Signed-off-by: Thierry Reding <treding@nvidia.com>
>> ---
>> Arnd, I realize that this isn't a proper fix according to what we
>> discussed on
>> IRC yesterday, but I can't see a way to remove access to the pages array
>> that
>> would be as simple as this. I've marked this as RFC in the hope that it
>> will
>> trigger some discussion that will lead to a proper solution.
>>
>>   drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> index c96db433f8af..d7993985752c 100644
>> --- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
>> @@ -343,7 +343,11 @@ static struct dma_page *__ttm_dma_alloc_page(struct
>> dma_pool *pool)
>>                                            &d_page->dma,
>>                                            pool->gfp_flags);
>>         if (d_page->vaddr)
>> +#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
>> +               d_page->p = phys_to_page(d_page->dma);
>> +#else
>>                 d_page->p = virt_to_page(d_page->vaddr);
>> +#endif
>
>
> Since I am messing with the IOMMU I just happened to have hit the issue you
> are mentioning. Wouldn't the following work:
>
> -       if (d_page->vaddr)
> -#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
> -               d_page->p = phys_to_page(d_page->dma);
> -#else
> -               d_page->p = virt_to_page(d_page->vaddr);
> -#endif
> -       else {
> +       if (d_page->vaddr) {
> +               if (is_vmalloc_addr(d_page->vaddr)) {
> +                       d_page->p = vmalloc_to_page(d_page->vaddr);
> +               } else {
> +                       d_page->p = virt_to_page(d_page->vaddr);
> +               }
> +       } else {
>
> A remapped page will end up in the vmalloc range of the address space, and
> in this case we can use vmalloc_to_page() to get the right page. Pages
> outside of this range are part of the linear mapping and can be resolved
> using virt_to_page().

Thierry, have you had a chance to try this? If not, do you want me to
try to push this patch? It seems to solve the issue AFAICT, but needs
more testing.

Patch

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index c96db433f8af..d7993985752c 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -343,7 +343,11 @@  static struct dma_page *__ttm_dma_alloc_page(struct dma_pool *pool)
 					   &d_page->dma,
 					   pool->gfp_flags);
 	if (d_page->vaddr)
+#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
+		d_page->p = phys_to_page(d_page->dma);
+#else
 		d_page->p = virt_to_page(d_page->vaddr);
+#endif
 	else {
 		kfree(d_page);
 		d_page = NULL;