
[v2,1/2] xen/swiotlb: add alignment check for dma buffers

Message ID 20240916064748.18071-2-jgross@suse.com
State New
Series xen/swiotlb: fix 2 issues in xen-swiotlb

Commit Message

Juergen Gross Sept. 16, 2024, 6:47 a.m. UTC
When checking whether a memory buffer is consecutive in machine memory,
its alignment needs to be checked, too. Failing to do so might result
in DMA memory not being aligned to its requested size, leading to
error messages like:

  4xxx 0000:2b:00.0: enabling device (0140 -> 0142)
  4xxx 0000:2b:00.0: Ring address not aligned
  4xxx 0000:2b:00.0: Failed to initialise service qat_crypto
  4xxx 0000:2b:00.0: Resetting device qat_dev0
  4xxx: probe of 0000:2b:00.0 failed with error -14

Fixes: 9435cce87950 ("xen/swiotlb: Add support for 64KB page granularity")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- use 1ULL for creating align mask in order to cover ARM32 LPAE
- fix case of XEN_PAGE_SIZE != PAGE_SIZE (Jan Beulich)
---
 drivers/xen/swiotlb-xen.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Jan Beulich Sept. 16, 2024, 6:50 a.m. UTC | #1
On 16.09.2024 08:47, Juergen Gross wrote:
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>  {
>  	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>  	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>  
>  	next_bfn = pfn_to_bfn(xen_pfn);
>  
> +	/* If buffer is physically aligned, ensure DMA alignment. */
> +	if (IS_ALIGNED(p, algn) &&
> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))

And this shift is not at risk of losing bits on Arm LPAE?

Jan
Juergen Gross Sept. 16, 2024, 6:56 a.m. UTC | #2
On 16.09.24 08:50, Jan Beulich wrote:
> On 16.09.2024 08:47, Juergen Gross wrote:
>> --- a/drivers/xen/swiotlb-xen.c
>> +++ b/drivers/xen/swiotlb-xen.c
>> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>>   {
>>   	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>   	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>   
>>   	next_bfn = pfn_to_bfn(xen_pfn);
>>   
>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>> +	if (IS_ALIGNED(p, algn) &&
>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
> 
> And this shift is not at risk of losing bits on Arm LPAE?

For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
smaller than 4G).


Juergen
Jan Beulich Sept. 16, 2024, 6:59 a.m. UTC | #3
On 16.09.2024 08:56, Juergen Gross wrote:
> On 16.09.24 08:50, Jan Beulich wrote:
>> On 16.09.2024 08:47, Juergen Gross wrote:
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>   {
>>>   	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>   	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>   
>>>   	next_bfn = pfn_to_bfn(xen_pfn);
>>>   
>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>> +	if (IS_ALIGNED(p, algn) &&
>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>
>> And this shift is not at risk of losing bits on Arm LPAE?
> 
> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
> smaller than 4G).

Oh, yes - of course. A (hypothetical?) strict no-overflow checking
mode of the kernel may take issue though, so maybe better to right-
shift "algn"?

Jan
Juergen Gross Sept. 16, 2024, 6:59 a.m. UTC | #4
On 16.09.24 08:56, Juergen Gross wrote:
> On 16.09.24 08:50, Jan Beulich wrote:
>> On 16.09.2024 08:47, Juergen Gross wrote:
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -78,9 +78,15 @@ static inline int 
>>> range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>   {
>>>       unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>       unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>> +    phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>       next_bfn = pfn_to_bfn(xen_pfn);
>>> +    /* If buffer is physically aligned, ensure DMA alignment. */
>>> +    if (IS_ALIGNED(p, algn) &&
>>> +        !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>
>> And this shift is not at risk of losing bits on Arm LPAE?
> 
> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
> smaller than 4G).

Wait, that was nonsense.

I'll change the check to:

	!IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn)


Juergen
Jan Beulich Sept. 16, 2024, 7:01 a.m. UTC | #5
On 16.09.2024 08:59, Juergen Gross wrote:
> On 16.09.24 08:56, Juergen Gross wrote:
>> On 16.09.24 08:50, Jan Beulich wrote:
>>> On 16.09.2024 08:47, Juergen Gross wrote:
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -78,9 +78,15 @@ static inline int 
>>>> range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>>   {
>>>>       unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>>       unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>>> +    phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>>       next_bfn = pfn_to_bfn(xen_pfn);
>>>> +    /* If buffer is physically aligned, ensure DMA alignment. */
>>>> +    if (IS_ALIGNED(p, algn) &&
>>>> +        !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>>
>>> And this shift is not at risk of losing bits on Arm LPAE?
>>
>> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
>> smaller than 4G).
> 
> Wait, that was nonsense.

I think it was quite reasonable, as long as also algn (and hence size)
isn't absurdly large.

Jan
Juergen Gross Sept. 16, 2024, 7:02 a.m. UTC | #6
On 16.09.24 08:59, Jan Beulich wrote:
> On 16.09.2024 08:56, Juergen Gross wrote:
>> On 16.09.24 08:50, Jan Beulich wrote:
>>> On 16.09.2024 08:47, Juergen Gross wrote:
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>>    {
>>>>    	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>>    	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>>> +	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>>    
>>>>    	next_bfn = pfn_to_bfn(xen_pfn);
>>>>    
>>>> +	/* If buffer is physically aligned, ensure DMA alignment. */
>>>> +	if (IS_ALIGNED(p, algn) &&
>>>> +	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>>
>>> And this shift is not at risk of losing bits on Arm LPAE?
>>
>> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
>> smaller than 4G).
> 
> Oh, yes - of course. A (hypothetical?) strict no-overflow checking
> mode of the kernel may take issue though, so maybe better to right-
> shift "algn"?

No, this wouldn't work in case algn < XEN_PAGE_SIZE.


Juergen
Juergen Gross Sept. 16, 2024, 7:03 a.m. UTC | #7
On 16.09.24 09:01, Jan Beulich wrote:
> On 16.09.2024 08:59, Juergen Gross wrote:
>> On 16.09.24 08:56, Juergen Gross wrote:
>>> On 16.09.24 08:50, Jan Beulich wrote:
>>>> On 16.09.2024 08:47, Juergen Gross wrote:
>>>>> --- a/drivers/xen/swiotlb-xen.c
>>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>>> @@ -78,9 +78,15 @@ static inline int
>>>>> range_straddles_page_boundary(phys_addr_t p, size_t size)
>>>>>    {
>>>>>        unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
>>>>>        unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
>>>>> +    phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
>>>>>        next_bfn = pfn_to_bfn(xen_pfn);
>>>>> +    /* If buffer is physically aligned, ensure DMA alignment. */
>>>>> +    if (IS_ALIGNED(p, algn) &&
>>>>> +        !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
>>>>
>>>> And this shift is not at risk of losing bits on Arm LPAE?
>>>
>>> For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
>>> smaller than 4G).
>>
>> Wait, that was nonsense.
> 
> I think it was quite reasonable, as long as also algn (and hence size)
> isn't absurdly large.

Better safe than sorry.


Juergen
Stefano Stabellini Sept. 16, 2024, 8:19 p.m. UTC | #8
On Mon, 16 Sep 2024, Juergen Gross wrote:
> On 16.09.24 08:56, Juergen Gross wrote:
> > On 16.09.24 08:50, Jan Beulich wrote:
> > > On 16.09.2024 08:47, Juergen Gross wrote:
> > > > --- a/drivers/xen/swiotlb-xen.c
> > > > +++ b/drivers/xen/swiotlb-xen.c
> > > > @@ -78,9 +78,15 @@ static inline int
> > > > range_straddles_page_boundary(phys_addr_t p, size_t size)
> > > >   {
> > > >       unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
> > > >       unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) +
> > > > size);
> > > > +    phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
> > > >       next_bfn = pfn_to_bfn(xen_pfn);
> > > > +    /* If buffer is physically aligned, ensure DMA alignment. */
> > > > +    if (IS_ALIGNED(p, algn) &&
> > > > +        !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
> > > 
> > > And this shift is not at risk of losing bits on Arm LPAE?
> > 
> > For alignment check this just doesn't matter (assuming XEN_PAGE_SIZE is
> > smaller than 4G).
> 
> Wait, that was nonsense.
> 
> I'll change the check to:
> 
> 	!IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn)

With this change:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Patch

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 35155258a7e2..ddf5b1df632e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -78,9 +78,15 @@  static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 {
 	unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
 	unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
+	phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
 
 	next_bfn = pfn_to_bfn(xen_pfn);
 
+	/* If buffer is physically aligned, ensure DMA alignment. */
+	if (IS_ALIGNED(p, algn) &&
+	    !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
+		return 1;
+
 	for (i = 1; i < nr_pages; i++)
 		if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
 			return 1;