
[V2,2/2] include: linux: Update gather only if it's not NULL

Message ID 20240801033432.106837-2-amhetre@nvidia.com (mailing list archive)
State New, archived
Series [V2,1/2] iommu: Optimize IOMMU UnMap

Commit Message

Ashish Mhetre Aug. 1, 2024, 3:34 a.m. UTC
Gather can be NULL when unmap is called for freeing an old table while
mapping. If it's NULL then there is no need to add the page for syncing
the TLB.

Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
---
 include/linux/iommu.h | 3 +++
 1 file changed, 3 insertions(+)

Comments

Robin Murphy Aug. 1, 2024, 4:42 p.m. UTC | #1
On 01/08/2024 4:34 am, Ashish Mhetre wrote:
> Gather can be NULL when unmap is called for freeing old table while
> mapping. If it's NULL then there is no need to add page for syncing
> the TLB.

But that's only because __arm_lpae_unmap() is now choosing to 
over-invalidate the table entries for simplicity. I think it would make 
more sense to handle that at the callsite, e.g. "if (gather && 
!iommu_iotlb_gather_queued(gather))".

Also doesn't this mean that bisection is broken as-is since patch #1 on 
its own is going to blow up dereferencing NULL->pgsize when it gets here?

Thanks,
Robin.

> Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
> ---
>   include/linux/iommu.h | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 4d47f2c33311..2a28c1ef8517 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -928,6 +928,9 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
>   					       struct iommu_iotlb_gather *gather,
>   					       unsigned long iova, size_t size)
>   {
> +	if (!gather)
> +		return;
> +
>   	/*
>   	 * If the new page is disjoint from the current range or is mapped at
>   	 * a different granularity, then sync the TLB so that the gather
Ashish Mhetre Aug. 5, 2024, 4:25 a.m. UTC | #2
On 8/1/2024 10:12 PM, Robin Murphy wrote:
>
> On 01/08/2024 4:34 am, Ashish Mhetre wrote:
>> Gather can be NULL when unmap is called for freeing old table while
>> mapping. If it's NULL then there is no need to add page for syncing
>> the TLB.
>
> But that's only because __arm_lpae_unmap() is now choosing to
> over-invalidate the table entries for simplicity. I think it would make
> more sense to handle that at the callsite, e.g. "if (gather &&
> !iommu_iotlb_gather_queued(gather))".
>
> Also doesn't this mean that bisection is broken as-is since patch #1 on
> its own is going to blow up dereferencing NULL->pgsize when it gets here?
>
> Thanks,
> Robin.
>
Yes, gather can be NULL only because of patch #1. I will fix it at the
callsite with a single patch rather than splitting the series.

>> Signed-off-by: Ashish Mhetre <amhetre@nvidia.com>
>> ---
>>   include/linux/iommu.h | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index 4d47f2c33311..2a28c1ef8517 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -928,6 +928,9 @@ static inline void 
>> iommu_iotlb_gather_add_page(struct iommu_domain *domain,
>>                                              struct 
>> iommu_iotlb_gather *gather,
>>                                              unsigned long iova, 
>> size_t size)
>>   {
>> +     if (!gather)
>> +             return;
>> +
>>       /*
>>        * If the new page is disjoint from the current range or is 
>> mapped at
>>        * a different granularity, then sync the TLB so that the gather

Patch

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 4d47f2c33311..2a28c1ef8517 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -928,6 +928,9 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 					       struct iommu_iotlb_gather *gather,
 					       unsigned long iova, size_t size)
 {
+	if (!gather)
+		return;
+
 	/*
 	 * If the new page is disjoint from the current range or is mapped at
 	 * a different granularity, then sync the TLB so that the gather