
[01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt addresses

Message ID 20200520234520.22563-1-sstabellini@kernel.org (mailing list archive)
State Superseded
Series fix swiotlb-xen for RPi4

Commit Message

Stefano Stabellini May 20, 2020, 11:45 p.m. UTC
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Don't just assume that virt_to_page works on all virtual addresses.
Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
virt addresses.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Julien Grall May 21, 2020, 8:01 a.m. UTC | #1
Hi,

On 21/05/2020 00:45, Stefano Stabellini wrote:
> From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> 
> Don't just assume that virt_to_page works on all virtual addresses.
> Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
> virt addresses.

Can you provide an example where swiotlb is used with vmalloc()?

> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>   drivers/xen/swiotlb-xen.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index b6d27762c6f8..a42129cba36e 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>   	int order = get_order(size);
>   	phys_addr_t phys;
>   	u64 dma_mask = DMA_BIT_MASK(32);
> +	struct page *pg;
>   
>   	if (hwdev && hwdev->coherent_dma_mask)
>   		dma_mask = hwdev->coherent_dma_mask;
> @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>   	/* Convert the size to actually allocated. */
>   	size = 1UL << (order + XEN_PAGE_SHIFT);
>   
> +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
> +				      virt_to_page(vaddr);

Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it 
something we want to do here as well? Or is there any other condition 
where vmalloc can happen?

>   	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>   		     range_straddles_page_boundary(phys, size)) &&
> -	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> +	    TestClearPageXenRemapped(pg))
>   		xen_destroy_contiguous_region(phys, order);
>   
>   	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
> 

Cheers,
Stefano Stabellini May 22, 2020, 3:54 a.m. UTC | #2
On Thu, 21 May 2020, Julien Grall wrote:
> Hi,
> 
> On 21/05/2020 00:45, Stefano Stabellini wrote:
> > From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > 
> > Don't just assume that virt_to_page works on all virtual addresses.
> > Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
> > virt addresses.
> 
> Can you provide an example where swiotlb is used with vmalloc()?

The issue was reported here, happening on the Raspberry Pi 4:
https://marc.info/?l=xen-devel&m=158862573216800

If you are asking where in the Linux codebase the vmalloc is happening
specifically, I don't know for sure; my information is limited to the
stack trace that you see in the link (I don't have a Raspberry Pi 4 yet
but I shall have one soon.)


> > Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> >   drivers/xen/swiotlb-xen.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index b6d27762c6f8..a42129cba36e 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t
> > size, void *vaddr,
> >   	int order = get_order(size);
> >   	phys_addr_t phys;
> >   	u64 dma_mask = DMA_BIT_MASK(32);
> > +	struct page *pg;
> >     	if (hwdev && hwdev->coherent_dma_mask)
> >   		dma_mask = hwdev->coherent_dma_mask;
> > @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t
> > size, void *vaddr,
> >   	/* Convert the size to actually allocated. */
> >   	size = 1UL << (order + XEN_PAGE_SHIFT);
> >   +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
> > +				      virt_to_page(vaddr);
> 
> Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it
> something we want to do here as well? Or is there any other condition where
> vmalloc can happen?

I can see it in dma_direct_free_pages:

	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
		vunmap(cpu_addr);

I wonder why the common DMA code does that. is_vmalloc_addr should work
regardless of CONFIG_DMA_REMAP. Maybe just for efficiency?
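
If we wanted to mirror that guard here, the new hunk could become
something along these lines (just a sketch, untested):

	/* mirror dma-direct: only expect vmalloc addresses with CONFIG_DMA_REMAP */
	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(vaddr))
		pg = vmalloc_to_page(vaddr);
	else
		pg = virt_to_page(vaddr);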
Julien Grall May 22, 2020, 6:11 p.m. UTC | #3
Hi Stefano,

On 22/05/2020 04:54, Stefano Stabellini wrote:
> On Thu, 21 May 2020, Julien Grall wrote:
>> Hi,
>>
>> On 21/05/2020 00:45, Stefano Stabellini wrote:
>>> From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>> Don't just assume that virt_to_page works on all virtual addresses.
>>> Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
>>> virt addresses.
>>
>> Can you provide an example where swiotlb is used with vmalloc()?
> 
> The issue was reported here, happening on the Raspberry Pi 4:
> https://marc.info/?l=xen-devel&m=158862573216800

Thanks, it would be good if the commit message contained a bit more detail.

> 
> If you are asking where in the Linux codebase the vmalloc is happening
> specifically, I don't know for sure; my information is limited to the
> stack trace that you see in the link (I don't have a Raspberry Pi 4 yet
> but I shall have one soon.)

Looking at the code, there is a comment in xen_swiotlb_alloc_coherent() 
suggesting that xen_alloc_coherent_pages() may return an ioremap'ped 
region on Arm. So it feels to me that commit 
b877ac9815a8fe7e5f6d7fdde3dc34652408840a "xen/swiotlb: remember having 
called xen_create_contiguous_region()" has always been broken on Arm.
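
Roughly, the pattern that commit introduced on the allocation side looks
like this (paraphrasing from memory, not the exact source):

	/* xen_swiotlb_alloc_coherent() */
	ret = xen_alloc_coherent_pages(hwdev, size, dma_handle, flags, attrs);
	...
	if (xen_create_contiguous_region(phys, order, ...) == 0)
		SetPageXenRemapped(virt_to_page(ret));

If ret is an ioremap'ped region on Arm, virt_to_page(ret) is already
bogus on that side too.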

As an aside, your commit message also suggests this is an issue for 
every virtual address used in swiotlb, but only the virt_to_page() call 
in xen_swiotlb_free_coherent() is modified. Is that intended? If so, I 
think you want to update your commit message.

> 
> 
>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>>    drivers/xen/swiotlb-xen.c | 5 ++++-
>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index b6d27762c6f8..a42129cba36e 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t
>>> size, void *vaddr,
>>>    	int order = get_order(size);
>>>    	phys_addr_t phys;
>>>    	u64 dma_mask = DMA_BIT_MASK(32);
>>> +	struct page *pg;
>>>      	if (hwdev && hwdev->coherent_dma_mask)
>>>    		dma_mask = hwdev->coherent_dma_mask;
>>> @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t
>>> size, void *vaddr,
>>>    	/* Convert the size to actually allocated. */
>>>    	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>    +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
>>> +				      virt_to_page(vaddr);
>>
>> Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it
>> something we want to do here as well? Or is there any other condition where
>> vmalloc can happen?
> 
> I can see it in dma_direct_free_pages:
> 
> 	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
> 		vunmap(cpu_addr);
> 
> I wonder why the common DMA code does that. is_vmalloc_addr should work
> regardless of CONFIG_DMA_REMAP. Maybe just for efficiency?
is_vmalloc_addr() doesn't look very expensive (although it is not an 
inline function), so I am not sure of the reason behind it. It might be 
worth asking the author of the config.
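
(For reference, ignoring the !CONFIG_MMU case and without quoting the
exact source, the check essentially boils down to a range comparison
against the vmalloc area:

	unsigned long addr = (unsigned long)x;

	return addr >= VMALLOC_START && addr < VMALLOC_END;

so the cost should be negligible either way.)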

Cheers,
Stefano Stabellini May 22, 2020, 8:36 p.m. UTC | #4
On Fri, 22 May 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/05/2020 04:54, Stefano Stabellini wrote:
> > On Thu, 21 May 2020, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 21/05/2020 00:45, Stefano Stabellini wrote:
> > > > From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > > > 
> > > > Don't just assume that virt_to_page works on all virtual addresses.
> > > > Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
> > > > virt addresses.
> > > 
> > > Can you provide an example where swiotlb is used with vmalloc()?
> > 
> > The issue was reported here, happening on the Raspberry Pi 4:
> > https://marc.info/?l=xen-devel&m=158862573216800
> 
> Thanks, it would be good if the commit message contained a bit more detail.
> 
> > 
> > If you are asking where in the Linux codebase the vmalloc is happening
> > specifically, I don't know for sure; my information is limited to the
> > stack trace that you see in the link (I don't have a Raspberry Pi 4 yet
> > but I shall have one soon.)
> 
> Looking at the code, there is a comment in xen_swiotlb_alloc_coherent()
> suggesting that xen_alloc_coherent_pages() may return an ioremap'ped region on
> Arm. So it feels to me that commit b877ac9815a8fe7e5f6d7fdde3dc34652408840a
> "xen/swiotlb: remember having called xen_create_contiguous_region()" has
> always been broken on Arm.

Yes, I think you are right


> As an aside, your commit message also suggests this is an issue for every
> virtual address used in swiotlb, but only the virt_to_page() call in
> xen_swiotlb_free_coherent() is modified. Is that intended? If so, I think you
> want to update your commit message.

I see, yes I can explain better

Patch

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c6f8..a42129cba36e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -335,6 +335,7 @@  xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
+	struct page *pg;
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -346,9 +347,11 @@  xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
+	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
+				      virt_to_page(vaddr);
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
-	    TestClearPageXenRemapped(virt_to_page(vaddr)))
+	    TestClearPageXenRemapped(pg))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);