diff mbox series

xen/virtio: Handle cases when page offset > PAGE_SIZE properly

Message ID 20221007132736.2275574-1-olekstysh@gmail.com (mailing list archive)
State New, archived
Headers show
Series xen/virtio: Handle cases when page offset > PAGE_SIZE properly

Commit Message

Oleksandr Tyshchenko Oct. 7, 2022, 1:27 p.m. UTC
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The offset into the page passed to xen_grant_dma_map_page()
can be greater than PAGE_SIZE even if the guest uses the same
page granularity as Xen (4KB).

Before this patch, when such a case happened we ended up providing
grants for the whole region in xen_grant_dma_map_page(), which
was really unnecessary. Moreover, we ended up not releasing all
grants which represented that region in xen_grant_dma_unmap_page().

This patch updates the code to be able to deal with such cases.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
---
Cc: Juergen Gross <jgross@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>

Depends on:
https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/

Should go in only after that series.
---
 drivers/xen/grant-dma-ops.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Comments

Jürgen Groß Oct. 7, 2022, 3:10 p.m. UTC | #1
On 07.10.22 15:27, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The offset into the page passed to xen_grant_dma_map_page()
> can be greater than PAGE_SIZE even if the guest uses the same
> page granularity as Xen (4KB).
> 
> Before this patch, when such a case happened we ended up providing
> grants for the whole region in xen_grant_dma_map_page(), which
> was really unnecessary. Moreover, we ended up not releasing all
> grants which represented that region in xen_grant_dma_unmap_page().
> 
> This patch updates the code to be able to deal with such cases.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
Xenia Ragiadakou Oct. 8, 2022, 11:08 a.m. UTC | #2
On 10/7/22 16:27, Oleksandr Tyshchenko wrote:

Hi Oleksandr

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The offset into the page passed to xen_grant_dma_map_page()
> can be greater than PAGE_SIZE even if the guest uses the same
> page granularity as Xen (4KB).
> 
> Before this patch, when such a case happened we ended up providing
> grants for the whole region in xen_grant_dma_map_page(), which
> was really unnecessary. Moreover, we ended up not releasing all
> grants which represented that region in xen_grant_dma_unmap_page().
> 
> This patch updates the code to be able to deal with such cases.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> ---
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Depends on:
> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/
> 
> Should go in only after that series.
> ---
>   drivers/xen/grant-dma-ops.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index c66f56d24013..1385f0e686fe 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>   					 unsigned long attrs)
>   {
>   	struct xen_grant_dma_data *data;
> -	unsigned int i, n_pages = PFN_UP(offset + size);
> +	unsigned long dma_offset = offset_in_page(offset),
> +			gfn_offset = PFN_DOWN(offset);
> +	unsigned int i, n_pages = PFN_UP(dma_offset + size);

IIUC, the above with a later patch will become:

dma_offset = xen_offset_in_page(offset)
gfn_offset = XEN_PFN_DOWN(offset)
n_pages = XEN_PFN_UP(dma_offset + size)

>   	grant_ref_t grant;
>   	dma_addr_t dma_handle;
>   
> @@ -187,10 +189,10 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>   
>   	for (i = 0; i < n_pages; i++) {
>   		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
> -				xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
> +				xen_page_to_gfn(page) + i + gfn_offset, dir == DMA_TO_DEVICE);

Here, why isn't the pfn calculated before being passed to pfn_to_gfn()?
I mean something like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)

>   	}
>   
> -	dma_handle = grant_to_dma(grant) + offset;
> +	dma_handle = grant_to_dma(grant) + dma_offset;
>   
>   	return dma_handle;
>   }
Oleksandr Tyshchenko Oct. 8, 2022, 12:52 p.m. UTC | #3
On 08.10.22 14:08, Xenia Ragiadakou wrote:

Hello Xenia

>
> On 10/7/22 16:27, Oleksandr Tyshchenko wrote:
>
> Hi Oleksandr
>
>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>
>> The offset into the page passed to xen_grant_dma_map_page()
>> can be greater than PAGE_SIZE even if the guest uses the same
>> page granularity as Xen (4KB).
>>
>> Before this patch, when such a case happened we ended up providing
>> grants for the whole region in xen_grant_dma_map_page(), which
>> was really unnecessary. Moreover, we ended up not releasing all
>> grants which represented that region in xen_grant_dma_unmap_page().
>>
>> This patch updates the code to be able to deal with such cases.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>> ---
>> Cc: Juergen Gross <jgross@suse.com>
>> Cc: Xenia Ragiadakou <burzalodowa@gmail.com>
>>
>> Depends on:
>> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/
>>
>> Should go in only after that series.
>> ---
>>   drivers/xen/grant-dma-ops.c | 8 +++++---
>>   1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>> index c66f56d24013..1385f0e686fe 100644
>> --- a/drivers/xen/grant-dma-ops.c
>> +++ b/drivers/xen/grant-dma-ops.c
>> @@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct 
>> device *dev, struct page *page,
>>                        unsigned long attrs)
>>   {
>>       struct xen_grant_dma_data *data;
>> -    unsigned int i, n_pages = PFN_UP(offset + size);
>> +    unsigned long dma_offset = offset_in_page(offset),
>> +            gfn_offset = PFN_DOWN(offset);
>> +    unsigned int i, n_pages = PFN_UP(dma_offset + size);
>
> IIUC, the above with a later patch will become:
>
> dma_offset = xen_offset_in_page(offset)
> gfn_offset = XEN_PFN_DOWN(offset)
> n_pages = XEN_PFN_UP(dma_offset + size)


If by "later" patch you mean "xen/virtio: Convert
PAGE_SIZE/PAGE_SHIFT/PFN_UP to Xen counterparts", then yes, exactly.


>
>
>>       grant_ref_t grant;
>>       dma_addr_t dma_handle;
>>   @@ -187,10 +189,10 @@ static dma_addr_t 
>> xen_grant_dma_map_page(struct device *dev, struct page *page,
>>         for (i = 0; i < n_pages; i++) {
>>           gnttab_grant_foreign_access_ref(grant + i, 
>> data->backend_domid,
>> -                xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
>> +                xen_page_to_gfn(page) + i + gfn_offset, dir == 
>> DMA_TO_DEVICE);
>
> Here, why isn't the pfn calculated before being passed to pfn_to_gfn()?
> I mean something like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)

The gfn_offset is just a constant value here, which simply says how
many gfns we should skip. But ...

... I think I get your point. If a region which is contiguous in pfn
might be non-contiguous in gfn (which seems to be the case for x86
PV, but I may be mistaken), we should indeed use the open-coded
construction "pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)".
And the gfn_offset should be renamed to pfn_offset then.

Correct?


Thanks.


>
>>       }
>>   -    dma_handle = grant_to_dma(grant) + offset;
>> +    dma_handle = grant_to_dma(grant) + dma_offset;
>>         return dma_handle;
>>   }
>
Xenia Ragiadakou Oct. 8, 2022, 12:59 p.m. UTC | #4
On 10/8/22 15:52, Oleksandr Tyshchenko wrote:
> 
> On 08.10.22 14:08, Xenia Ragiadakou wrote:
> 
> Hello Xenia
> 
>>
>> On 10/7/22 16:27, Oleksandr Tyshchenko wrote:
>>
>> Hi Oleksandr
>>
>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>
>>> The offset into the page passed to xen_grant_dma_map_page()
>>> can be greater than PAGE_SIZE even if the guest uses the same
>>> page granularity as Xen (4KB).
>>>
>>> Before this patch, when such a case happened we ended up providing
>>> grants for the whole region in xen_grant_dma_map_page(), which
>>> was really unnecessary. Moreover, we ended up not releasing all
>>> grants which represented that region in xen_grant_dma_unmap_page().
>>>
>>> This patch updates the code to be able to deal with such cases.
>>>
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>> ---
>>> Cc: Juergen Gross <jgross@suse.com>
>>> Cc: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>
>>> Depends on:
>>> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/
>>>
>>> Should go in only after that series.
>>> ---
>>>    drivers/xen/grant-dma-ops.c | 8 +++++---
>>>    1 file changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>>> index c66f56d24013..1385f0e686fe 100644
>>> --- a/drivers/xen/grant-dma-ops.c
>>> +++ b/drivers/xen/grant-dma-ops.c
>>> @@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct
>>> device *dev, struct page *page,
>>>                         unsigned long attrs)
>>>    {
>>>        struct xen_grant_dma_data *data;
>>> -    unsigned int i, n_pages = PFN_UP(offset + size);
>>> +    unsigned long dma_offset = offset_in_page(offset),
>>> +            gfn_offset = PFN_DOWN(offset);
>>> +    unsigned int i, n_pages = PFN_UP(dma_offset + size);
>>
>> IIUC, the above with a later patch will become:
>>
>> dma_offset = xen_offset_in_page(offset)
>> gfn_offset = XEN_PFN_DOWN(offset)
>> n_pages = XEN_PFN_UP(dma_offset + size)
> 
> 
> If by "later" patch you mean "xen/virtio: Convert
> PAGE_SIZE/PAGE_SHIFT/PFN_UP to Xen counterparts", then yes, exactly.

Ah ok, I see.

>>
>>
>>>        grant_ref_t grant;
>>>        dma_addr_t dma_handle;
>>>    @@ -187,10 +189,10 @@ static dma_addr_t
>>> xen_grant_dma_map_page(struct device *dev, struct page *page,
>>>          for (i = 0; i < n_pages; i++) {
>>>            gnttab_grant_foreign_access_ref(grant + i,
>>> data->backend_domid,
>>> -                xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
>>> +                xen_page_to_gfn(page) + i + gfn_offset, dir ==
>>> DMA_TO_DEVICE);
>>
>> Here, why isn't the pfn calculated before being passed to pfn_to_gfn()?
>> I mean something like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)
> 
> The gfn_offset is just a constant value here, which simply says how
> many gfns we should skip. But ...
> 
> ... I think I get your point. If a region which is contiguous in pfn
> might be non-contiguous in gfn (which seems to be the case for x86
> PV, but I may be mistaken), we should indeed use the open-coded
> construction "pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)".
> And the gfn_offset should be renamed to pfn_offset then.
> 
> Correct?

Yes, that's what I had in mind, unless I'm missing something.

>>
>>>        }
>>>    -    dma_handle = grant_to_dma(grant) + offset;
>>> +    dma_handle = grant_to_dma(grant) + dma_offset;
>>>          return dma_handle;
>>>    }
>>
Oleksandr Tyshchenko Oct. 8, 2022, 1:21 p.m. UTC | #5
On 08.10.22 15:59, Xenia Ragiadakou wrote:

Hello Xenia

>
> On 10/8/22 15:52, Oleksandr Tyshchenko wrote:
>>
>> On 08.10.22 14:08, Xenia Ragiadakou wrote:
>>
>> Hello Xenia
>>
>>>
>>> On 10/7/22 16:27, Oleksandr Tyshchenko wrote:
>>>
>>> Hi Oleksandr
>>>
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>>
>>>> The offset into the page passed to xen_grant_dma_map_page()
>>>> can be greater than PAGE_SIZE even if the guest uses the same
>>>> page granularity as Xen (4KB).
>>>>
>>>> Before this patch, when such a case happened we ended up providing
>>>> grants for the whole region in xen_grant_dma_map_page(), which
>>>> was really unnecessary. Moreover, we ended up not releasing all
>>>> grants which represented that region in xen_grant_dma_unmap_page().
>>>>
>>>> This patch updates the code to be able to deal with such cases.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>>>> ---
>>>> Cc: Juergen Gross <jgross@suse.com>
>>>> Cc: Xenia Ragiadakou <burzalodowa@gmail.com>
>>>>
>>>> Depends on:
>>>> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/
>>>>
>>>> Should go in only after that series.
>>>> ---
>>>>    drivers/xen/grant-dma-ops.c | 8 +++++---
>>>>    1 file changed, 5 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>>>> index c66f56d24013..1385f0e686fe 100644
>>>> --- a/drivers/xen/grant-dma-ops.c
>>>> +++ b/drivers/xen/grant-dma-ops.c
>>>> @@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct
>>>> device *dev, struct page *page,
>>>>                         unsigned long attrs)
>>>>    {
>>>>        struct xen_grant_dma_data *data;
>>>> -    unsigned int i, n_pages = PFN_UP(offset + size);
>>>> +    unsigned long dma_offset = offset_in_page(offset),
>>>> +            gfn_offset = PFN_DOWN(offset);
>>>> +    unsigned int i, n_pages = PFN_UP(dma_offset + size);
>>>
>>> IIUC, the above with a later patch will become:
>>>
>>> dma_offset = xen_offset_in_page(offset)
>>> gfn_offset = XEN_PFN_DOWN(offset)
>>> n_pages = XEN_PFN_UP(dma_offset + size)
>>
>>
>> If by "later" patch you mean "xen/virtio: Convert
>> PAGE_SIZE/PAGE_SHIFT/PFN_UP to Xen counterparts", then yes, exactly.
>
> Ah ok, I see.
>
>>>
>>>
>>>>        grant_ref_t grant;
>>>>        dma_addr_t dma_handle;
>>>>    @@ -187,10 +189,10 @@ static dma_addr_t
>>>> xen_grant_dma_map_page(struct device *dev, struct page *page,
>>>>          for (i = 0; i < n_pages; i++) {
>>>>            gnttab_grant_foreign_access_ref(grant + i,
>>>> data->backend_domid,
>>>> -                xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
>>>> +                xen_page_to_gfn(page) + i + gfn_offset, dir ==
>>>> DMA_TO_DEVICE);
>>>
>>> Here, why isn't the pfn calculated before being passed to pfn_to_gfn()?
>>> I mean something like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)
>>
>> The gfn_offset is just a constant value here, which simply says how
>> many gfns we should skip. But ...
>>
>> ... I think I get your point. If a region which is contiguous in pfn
>> might be non-contiguous in gfn (which seems to be the case for x86
>> PV, but I may be mistaken), we should indeed use the open-coded
>> construction "pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)".
>> And the gfn_offset should be renamed to pfn_offset then.
>>
>> Correct?
>
> Yes, that's what I had in mind, unless I'm missing something.


OK, thanks for confirming. So I will create a V2 then.


>
>>>
>>>>        }
>>>>    -    dma_handle = grant_to_dma(grant) + offset;
>>>> +    dma_handle = grant_to_dma(grant) + dma_offset;
>>>>          return dma_handle;
>>>>    }
>>>
>

Patch

diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index c66f56d24013..1385f0e686fe 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -168,7 +168,9 @@  static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
 					 unsigned long attrs)
 {
 	struct xen_grant_dma_data *data;
-	unsigned int i, n_pages = PFN_UP(offset + size);
+	unsigned long dma_offset = offset_in_page(offset),
+			gfn_offset = PFN_DOWN(offset);
+	unsigned int i, n_pages = PFN_UP(dma_offset + size);
 	grant_ref_t grant;
 	dma_addr_t dma_handle;
 
@@ -187,10 +189,10 @@  static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
 
 	for (i = 0; i < n_pages; i++) {
 		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
-				xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
+				xen_page_to_gfn(page) + i + gfn_offset, dir == DMA_TO_DEVICE);
 	}
 
-	dma_handle = grant_to_dma(grant) + offset;
+	dma_handle = grant_to_dma(grant) + dma_offset;
 
 	return dma_handle;
 }