
[v2] dma-mapping: use vmalloc_to_page for vmalloc addresses

Message ID 20210622133414.132754-1-rm.skakun@gmail.com (mailing list archive)
State Superseded
Series [v2] dma-mapping: use vmalloc_to_page for vmalloc addresses

Commit Message

Roman Skakun June 22, 2021, 1:34 p.m. UTC
This commit fixes an incorrect conversion from cpu_addr to
a page address for buffers whose virtual address lies in the
vmalloc range. virt_to_page() cannot translate such addresses
and returns a bogus page pointer.

Detect such cases and obtain the page address with
vmalloc_to_page() instead.

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
---
Hey!
Thanks for the suggestions, Christoph!
I updated the patch according to your advice.
I am surprised, though, that nobody has hit this problem
in the common code before; it looks a bit strange to me.


 kernel/dma/ops_helpers.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
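
For context, a minimal caller-side sketch of how the bug can bite. The
example_export_buffer() helper below is hypothetical; dma_alloc_coherent()
and dma_get_sgtable() are the real DMA API entry points, and the latter
lands in dma_common_get_sgtable() for ops that use the common helper
patched here:

#include <linux/dma-mapping.h>

/*
 * Hypothetical reproduction sketch (not part of the patch): on
 * configurations where dma_alloc_coherent() hands back a vmalloc/vmap'ed
 * remapping of the buffer, the returned cpu_addr is not covered by the
 * linear mapping, so virt_to_page() in dma_common_get_sgtable() yields a
 * bogus struct page for the scatterlist.
 */
static int example_export_buffer(struct device *dev, struct sg_table *sgt,
				 size_t size)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* Broken before this fix whenever cpu_addr is a vmalloc address. */
	return dma_get_sgtable(dev, sgt, cpu_addr, dma_handle, size);
}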

Comments

Konrad Rzeszutek Wilk July 14, 2021, 12:15 a.m. UTC | #1
On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote:
> This commit fixes an incorrect conversion from cpu_addr to
> a page address for buffers whose virtual address lies in the
> vmalloc range. virt_to_page() cannot translate such addresses
> and returns a bogus page pointer.
>
> Detect such cases and obtain the page address with
> vmalloc_to_page() instead.
> 
> Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> ---
> Hey!
> Thanks for the suggestions, Christoph!
> I updated the patch according to your advice.
> I am surprised, though, that nobody has hit this problem
> in the common code before; it looks a bit strange to me.

This looks like it wasn't picked up? Should it go in rc1?
> 
> 
>  kernel/dma/ops_helpers.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> index 910ae69cae77..782728d8a393 100644
> --- a/kernel/dma/ops_helpers.c
> +++ b/kernel/dma/ops_helpers.c
> @@ -5,6 +5,14 @@
>   */
>  #include <linux/dma-map-ops.h>
>  
> +static struct page *cpu_addr_to_page(void *cpu_addr)
> +{
> +	if (is_vmalloc_addr(cpu_addr))
> +		return vmalloc_to_page(cpu_addr);
> +	else
> +		return virt_to_page(cpu_addr);
> +}
> +
>  /*
>   * Create scatter-list for the already allocated DMA buffer.
>   */
> @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
>  		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		 unsigned long attrs)
>  {
> -	struct page *page = virt_to_page(cpu_addr);
> +	struct page *page = cpu_addr_to_page(cpu_addr);
>  	int ret;
>  
>  	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
>  		return -ENXIO;
>  
>  	return remap_pfn_range(vma, vma->vm_start,
> -			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> +			page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
>  			user_count << PAGE_SHIFT, vma->vm_page_prot);
>  #else
>  	return -ENXIO;
> -- 
> 2.25.1
>
Stefano Stabellini July 14, 2021, 1:23 a.m. UTC | #2
On Tue, 22 Jun 2021, Roman Skakun wrote:
> This commit fixes an incorrect conversion from cpu_addr to
> a page address for buffers whose virtual address lies in the
> vmalloc range. virt_to_page() cannot translate such addresses
> and returns a bogus page pointer.
>
> Detect such cases and obtain the page address with
> vmalloc_to_page() instead.
> 
> Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> ---
> Hey!
> Thanks for the suggestions, Christoph!
> I updated the patch according to your advice.
> I am surprised, though, that nobody has hit this problem
> in the common code before; it looks a bit strange to me.
> 
> 
>  kernel/dma/ops_helpers.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> index 910ae69cae77..782728d8a393 100644
> --- a/kernel/dma/ops_helpers.c
> +++ b/kernel/dma/ops_helpers.c
> @@ -5,6 +5,14 @@
>   */
>  #include <linux/dma-map-ops.h>
>  
> +static struct page *cpu_addr_to_page(void *cpu_addr)
> +{
> +	if (is_vmalloc_addr(cpu_addr))
> +		return vmalloc_to_page(cpu_addr);
> +	else
> +		return virt_to_page(cpu_addr);
> +}

We have the same thing in xen_swiotlb_free_coherent. Can we find a way
to call cpu_addr_to_page() from xen_swiotlb_free_coherent? Maybe we can
make cpu_addr_to_page non-static? No problem if it is too much trouble.


>  /*
>   * Create scatter-list for the already allocated DMA buffer.
>   */
> @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
>  		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		 unsigned long attrs)
>  {
> -	struct page *page = virt_to_page(cpu_addr);
> +	struct page *page = cpu_addr_to_page(cpu_addr);
>  	int ret;
>  
>  	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
>  		return -ENXIO;
>  
>  	return remap_pfn_range(vma, vma->vm_start,
> -			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> +			page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
>  			user_count << PAGE_SHIFT, vma->vm_page_prot);
>  #else
>  	return -ENXIO;
> -- 
> 2.25.1
>
Roman Skakun July 15, 2021, 7:31 a.m. UTC | #3
Hi, Stefano!

> We have the same thing in xen_swiotlb_free_coherent. Can we find a way
> to call cpu_addr_to_page() from xen_swiotlb_free_coherent?
> Maybe we can make cpu_addr_to_page non-static?

Yes, right. If we want to reuse this helper instead of the duplicated
code block in xen_swiotlb_free_coherent(), we need to make
cpu_addr_to_page() global and add a declaration for it in
include/linux/dma-map-ops.h. What do you think?
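
A rough sketch of what that could look like (the helper's final name and
the declaration site were still open at this point, and the xen-swiotlb
fragment assumes the local variable there is called vaddr):

/* include/linux/dma-map-ops.h */
struct page *cpu_addr_to_page(void *cpu_addr);

/* kernel/dma/ops_helpers.c: same body as in this patch, minus 'static' */
struct page *cpu_addr_to_page(void *cpu_addr)
{
	if (is_vmalloc_addr(cpu_addr))
		return vmalloc_to_page(cpu_addr);
	else
		return virt_to_page(cpu_addr);
}

/*
 * drivers/xen/swiotlb-xen.c, xen_swiotlb_free_coherent(): replace the
 * open-coded is_vmalloc_addr()/vmalloc_to_page()/virt_to_page() block
 * with a single call to the shared helper.
 */
page = cpu_addr_to_page(vaddr);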

Cheers!

Wed, 14 Jul 2021 at 04:23, Stefano Stabellini <sstabellini@kernel.org>:
>
> On Tue, 22 Jun 2021, Roman Skakun wrote:
> > This commit fixes an incorrect conversion from cpu_addr to
> > a page address for buffers whose virtual address lies in the
> > vmalloc range. virt_to_page() cannot translate such addresses
> > and returns a bogus page pointer.
> >
> > Detect such cases and obtain the page address with
> > vmalloc_to_page() instead.
> >
> > Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> > ---
> > Hey!
> > Thanks for the suggestions, Christoph!
> > I updated the patch according to your advice.
> > I am surprised, though, that nobody has hit this problem
> > in the common code before; it looks a bit strange to me.
> >
> >
> >  kernel/dma/ops_helpers.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> > index 910ae69cae77..782728d8a393 100644
> > --- a/kernel/dma/ops_helpers.c
> > +++ b/kernel/dma/ops_helpers.c
> > @@ -5,6 +5,14 @@
> >   */
> >  #include <linux/dma-map-ops.h>
> >
> > +static struct page *cpu_addr_to_page(void *cpu_addr)
> > +{
> > +     if (is_vmalloc_addr(cpu_addr))
> > +             return vmalloc_to_page(cpu_addr);
> > +     else
> > +             return virt_to_page(cpu_addr);
> > +}
>
> We have the same thing in xen_swiotlb_free_coherent. Can we find a way
> to call cpu_addr_to_page() from xen_swiotlb_free_coherent? Maybe we can
> make cpu_addr_to_page non-static? No problem if it is too much trouble.
>
>
> >  /*
> >   * Create scatter-list for the already allocated DMA buffer.
> >   */
> > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
> >                void *cpu_addr, dma_addr_t dma_addr, size_t size,
> >                unsigned long attrs)
> >  {
> > -     struct page *page = virt_to_page(cpu_addr);
> > +     struct page *page = cpu_addr_to_page(cpu_addr);
> >       int ret;
> >
> >       ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >               return -ENXIO;
> >
> >       return remap_pfn_range(vma, vma->vm_start,
> > -                     page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> > +                     page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
> >                       user_count << PAGE_SHIFT, vma->vm_page_prot);
> >  #else
> >       return -ENXIO;
> > --
> > 2.25.1
> >
Roman Skakun July 15, 2021, 7:39 a.m. UTC | #4
> This looks like it wasn't picked up? Should it go in rc1?

Hi, Konrad!

This looks like an unambiguous bug, and should be in rc1.

Cheers!

Wed, 14 Jul 2021 at 03:15, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>:
>
> On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote:
> > This commit fixes an incorrect conversion from cpu_addr to
> > a page address for buffers whose virtual address lies in the
> > vmalloc range. virt_to_page() cannot translate such addresses
> > and returns a bogus page pointer.
> >
> > Detect such cases and obtain the page address with
> > vmalloc_to_page() instead.
> >
> > Signed-off-by: Roman Skakun <roman_skakun@epam.com>
> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
> > ---
> > Hey!
> > Thanks for the suggestions, Christoph!
> > I updated the patch according to your advice.
> > I am surprised, though, that nobody has hit this problem
> > in the common code before; it looks a bit strange to me.
>
> This looks like it wasn't picked up? Should it go in rc1?
> >
> >
> >  kernel/dma/ops_helpers.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> > index 910ae69cae77..782728d8a393 100644
> > --- a/kernel/dma/ops_helpers.c
> > +++ b/kernel/dma/ops_helpers.c
> > @@ -5,6 +5,14 @@
> >   */
> >  #include <linux/dma-map-ops.h>
> >
> > +static struct page *cpu_addr_to_page(void *cpu_addr)
> > +{
> > +     if (is_vmalloc_addr(cpu_addr))
> > +             return vmalloc_to_page(cpu_addr);
> > +     else
> > +             return virt_to_page(cpu_addr);
> > +}
> > +
> >  /*
> >   * Create scatter-list for the already allocated DMA buffer.
> >   */
> > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
> >                void *cpu_addr, dma_addr_t dma_addr, size_t size,
> >                unsigned long attrs)
> >  {
> > -     struct page *page = virt_to_page(cpu_addr);
> > +     struct page *page = cpu_addr_to_page(cpu_addr);
> >       int ret;
> >
> >       ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >               return -ENXIO;
> >
> >       return remap_pfn_range(vma, vma->vm_start,
> > -                     page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> > +                     page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
> >                       user_count << PAGE_SHIFT, vma->vm_page_prot);
> >  #else
> >       return -ENXIO;
> > --
> > 2.25.1
> >
Boris Ostrovsky July 15, 2021, 4:58 p.m. UTC | #5
On 7/15/21 3:39 AM, Roman Skakun wrote:
>> This looks like it wasn't picked up? Should it go in rc1?
> Hi, Konrad!
>
> This looks like an unambiguous bug, and should be in rc1.


Looks like you didn't copy Christoph, which could be part of the problem. Adding him.


-boris



>
> Cheers!
>
Wed, 14 Jul 2021 at 03:15, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>:
>> On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote:
>>> This commit fixes an incorrect conversion from cpu_addr to
>>> a page address for buffers whose virtual address lies in the
>>> vmalloc range. virt_to_page() cannot translate such addresses
>>> and returns a bogus page pointer.
>>>
>>> Detect such cases and obtain the page address with
>>> vmalloc_to_page() instead.
>>>
>>> Signed-off-by: Roman Skakun <roman_skakun@epam.com>
>>> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
>>> ---
>>> Hey!
>>> Thanks for the suggestions, Christoph!
>>> I updated the patch according to your advice.
>>> I am surprised, though, that nobody has hit this problem
>>> in the common code before; it looks a bit strange to me.
>> This looks like it wasn't picked up? Should it go in rc1?
>>>
>>>  kernel/dma/ops_helpers.c | 12 ++++++++++--
>>>  1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
>>> index 910ae69cae77..782728d8a393 100644
>>> --- a/kernel/dma/ops_helpers.c
>>> +++ b/kernel/dma/ops_helpers.c
>>> @@ -5,6 +5,14 @@
>>>   */
>>>  #include <linux/dma-map-ops.h>
>>>
>>> +static struct page *cpu_addr_to_page(void *cpu_addr)
>>> +{
>>> +     if (is_vmalloc_addr(cpu_addr))
>>> +             return vmalloc_to_page(cpu_addr);
>>> +     else
>>> +             return virt_to_page(cpu_addr);
>>> +}
>>> +
>>>  /*
>>>   * Create scatter-list for the already allocated DMA buffer.
>>>   */
>>> @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
>>>                void *cpu_addr, dma_addr_t dma_addr, size_t size,
>>>                unsigned long attrs)
>>>  {
>>> -     struct page *page = virt_to_page(cpu_addr);
>>> +     struct page *page = cpu_addr_to_page(cpu_addr);
>>>       int ret;
>>>
>>>       ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
>>> @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
>>>               return -ENXIO;
>>>
>>>       return remap_pfn_range(vma, vma->vm_start,
>>> -                     page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
>>> +                     page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
>>>                       user_count << PAGE_SHIFT, vma->vm_page_prot);
>>>  #else
>>>       return -ENXIO;
>>> --
>>> 2.25.1
>>>
>
>
Christoph Hellwig July 15, 2021, 5 p.m. UTC | #6
On Thu, Jul 15, 2021 at 12:58:53PM -0400, Boris Ostrovsky wrote:
> 
> On 7/15/21 3:39 AM, Roman Skakun wrote:
> >> This looks like it wasn't picked up? Should it go in rc1?
> > Hi, Konrad!
> >
> > This looks like an unambiguous bug, and should be in rc1.
> 
> 
> Looks like you didn't copy Christoph which could be part of the problem. Adding him.

Can someone just send a clean patch that I can review and hopefully
apply?

Patch

diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..782728d8a393 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -5,6 +5,14 @@ 
  */
 #include <linux/dma-map-ops.h>
 
+static struct page *cpu_addr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	else
+		return virt_to_page(cpu_addr);
+}
+
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
@@ -12,7 +20,7 @@  int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs)
 {
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = cpu_addr_to_page(cpu_addr);
 	int ret;
 
 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -43,7 +51,7 @@  int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return -ENXIO;
 
 	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff,
 			user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
 	return -ENXIO;