[2/3] iommu: mediatek: Add support of unmanaged iommu domain

Message ID 20230130102722.133271-3-abailon@baylibre.com (mailing list archive)
State New, archived
Series Add support of unmanaged domain to mediatek IOMMU

Commit Message

Alexandre Bailon Jan. 30, 2023, 10:27 a.m. UTC
Currently, the driver can allocate an unmanaged iommu domain, but this
only works on SoCs that have multiple banks or multiple iova regions.
This updates the driver to also support an unmanaged iommu domain when
MTK_UNMANAGED_DEVICE is set in the iommu ID.

Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
---
 drivers/iommu/mtk_iommu.c | 6 ++++++
 1 file changed, 6 insertions(+)
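
For context, the MTK_M4U_UNMANAGED() macro used below is introduced
elsewhere in this series and is not shown here. A plausible shape,
assuming the flag occupies a spare bit above MediaTek's larb/port ID
encoding (an illustration only, not the series' actual definition):

	/* Hypothetical: mark one fwspec ID as an unmanaged device. */
	#define MTK_UNMANAGED_DEVICE	BIT(31)
	#define MTK_M4U_UNMANAGED(id)	(!!((id) & MTK_UNMANAGED_DEVICE))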

Comments

Alexandre Mergnat Jan. 30, 2023, 12:04 p.m. UTC | #1
On 30/01/2023 11:27, Alexandre Bailon wrote:
> Currently, the driver can allocate an unmanaged iommu domain, but this
> only works on SoCs that have multiple banks or multiple iova regions.
> This updates the driver to also support an unmanaged iommu domain when
> MTK_UNMANAGED_DEVICE is set in the iommu ID.
> 
> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>

Reviewed-by: Alexandre Mergnat <amergnat@baylibre.com>
Robin Murphy Jan. 30, 2023, 12:04 p.m. UTC | #2
On 2023-01-30 10:27, Alexandre Bailon wrote:
> Currently, the driver can allocate an unmanaged iommu domain, but this
> only works on SoCs that have multiple banks or multiple iova regions.

That is for good reason - there is only a single pagetable per bank, so 
if there are multiple devices assigned to a single bank, they cannot 
possibly be attached to different domains at the same time. Hence why 
the banks are modelled as groups.

Thanks,
Robin.

> This updates the driver to also support an unmanaged iommu domain when
> MTK_UNMANAGED_DEVICE is set in the iommu ID.
> 
> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
> ---
>   drivers/iommu/mtk_iommu.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> index 2badd6acfb23d..5884a1ad409a0 100644
> --- a/drivers/iommu/mtk_iommu.c
> +++ b/drivers/iommu/mtk_iommu.c
> @@ -858,9 +858,15 @@ static int mtk_iommu_get_group_id(struct device *dev, const struct mtk_iommu_pla
>   static struct iommu_group *mtk_iommu_device_group(struct device *dev)
>   {
>   	struct mtk_iommu_data *c_data = dev_iommu_priv_get(dev), *data;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>   	struct list_head *hw_list = c_data->hw_list;
>   	struct iommu_group *group;
>   	int groupid;
> +	int i;
> +
> +	for (i = 0; i < fwspec->num_ids; i++)
> +		if (MTK_M4U_UNMANAGED(fwspec->ids[i]))
> +			return iommu_group_alloc();
>   
>   	data = mtk_iommu_get_frst_data(hw_list);
>   	if (!data)
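
For reference, the per-bank/per-region group caching Robin describes is
visible in the unpatched driver. A paraphrase of the remainder of
mtk_iommu_device_group() as it looked around v6.2 (details may differ
between kernel versions):

	groupid = mtk_iommu_get_group_id(dev, data->plat_data);
	if (groupid < 0)
		return ERR_PTR(groupid);

	mutex_lock(&data->mutex);
	group = data->m4u_group[groupid];	/* one group per bank/region */
	if (!group) {
		group = iommu_group_alloc();
		if (!IS_ERR(group))
			data->m4u_group[groupid] = group;
	} else {
		iommu_group_ref_get(group);	/* devices share the bank's group */
	}
	mutex_unlock(&data->mutex);
	return group;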
Alexandre Bailon Jan. 31, 2023, 1:08 p.m. UTC | #3
Hi Robin

On 1/30/23 13:04, Robin Murphy wrote:
> On 2023-01-30 10:27, Alexandre Bailon wrote:
>> Currently, the driver can allocate an unmanaged iommu domain, but this
>> only works on SoCs that have multiple banks or multiple iova regions.
> 
> That is for good reason - there is only a single pagetable per bank, so 
> if there are multiple devices assigned to a single bank, they cannot 
> possibly be attached to different domains at the same time. Hence why 
> the banks are modelled as groups.
I understand.
I am trying to upstream a remoteproc driver, but the remote processor is
behind the iommu.
remoteproc can manage the iommu, but that requires an unmanaged domain.
I tried a couple of things, but they cause code duplication, imply many
hacks, and are not always reliable.
Do you have any suggestions?

Thanks,
Alexandre
> 
> Thanks,
> Robin.
> 
>> This updates the driver to also support an unmanaged iommu domain when
>> MTK_UNMANAGED_DEVICE is set in the iommu ID.
>>
>> Signed-off-by: Alexandre Bailon <abailon@baylibre.com>
>> ---
>>   drivers/iommu/mtk_iommu.c | 6 ++++++
>>   1 file changed, 6 insertions(+)
>>
>> diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
>> index 2badd6acfb23d..5884a1ad409a0 100644
>> --- a/drivers/iommu/mtk_iommu.c
>> +++ b/drivers/iommu/mtk_iommu.c
>> @@ -858,9 +858,15 @@ static int mtk_iommu_get_group_id(struct device 
>> *dev, const struct mtk_iommu_pla
>>   static struct iommu_group *mtk_iommu_device_group(struct device *dev)
>>   {
>>       struct mtk_iommu_data *c_data = dev_iommu_priv_get(dev), *data;
>> +    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>>       struct list_head *hw_list = c_data->hw_list;
>>       struct iommu_group *group;
>>       int groupid;
>> +    int i;
>> +
>> +    for (i = 0; i < fwspec->num_ids; i++)
>> +        if (MTK_M4U_UNMANAGED(fwspec->ids[i]))
>> +            return iommu_group_alloc();
>>       data = mtk_iommu_get_frst_data(hw_list);
>>       if (!data)
Robin Murphy Jan. 31, 2023, 2:15 p.m. UTC | #4
On 31/01/2023 1:08 pm, Alexandre Bailon wrote:
> Hi Robin
> 
> On 1/30/23 13:04, Robin Murphy wrote:
>> On 2023-01-30 10:27, Alexandre Bailon wrote:
>>> Currently, the driver can allocate an unmanaged iommu domain, but this
>>> only works on SoCs that have multiple banks or multiple iova regions.
>>
>> That is for good reason - there is only a single pagetable per bank, 
>> so if there are multiple devices assigned to a single bank, they 
>> cannot possibly be attached to different domains at the same time. 
>> Hence why the banks are modelled as groups.
> I understand.
> I am trying to upstream a remoteproc driver, but the remote processor is
> behind the iommu.
> remoteproc can manage the iommu, but that requires an unmanaged domain.
> I tried a couple of things, but they cause code duplication, imply many
> hacks, and are not always reliable.
> Do you have any suggestions?

If there are other active devices behind the same IOMMU, and the 
remoteproc device cannot be isolated into its own bank using the 
existing IOMMU driver logic, then the remoteproc driver cannot manage 
the IOMMU directly, and must just use the regular DMA API. There's no 
way around it; you can't have two different parts of the kernel both 
thinking they have exclusive control of a single IOMMU address space at 
the same time. Similarly, remoteproc also cannot take explicit control 
of a multi-device group if it's not actually in control of the other 
devices, since their drivers will not be expecting the DMA address space 
to suddenly change underfoot - that's why iommu_attach_device() has the 
check which you presumably ran into.

Thanks,
Robin.
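
For reference, the iommu_attach_device() check Robin mentions, roughly as
it appeared in drivers/iommu/iommu.c at the time (abridged):

int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
{
	struct iommu_group *group;
	int ret;

	group = iommu_group_get(dev);
	if (!group)
		return -ENODEV;

	/*
	 * Lock the group to make sure the device-count doesn't
	 * change while we are attaching.
	 */
	mutex_lock(&group->mutex);
	ret = -EINVAL;
	if (iommu_group_device_count(group) != 1)	/* refuses multi-device groups */
		goto out_unlock;

	ret = __iommu_attach_group(domain, group);

out_unlock:
	mutex_unlock(&group->mutex);
	iommu_group_put(group);
	return ret;
}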
Alexandre Bailon Jan. 31, 2023, 3:31 p.m. UTC | #5
On 1/31/23 15:15, Robin Murphy wrote:
> On 31/01/2023 1:08 pm, Alexandre Bailon wrote:
>> Hi Robin
>>
>> On 1/30/23 13:04, Robin Murphy wrote:
>>> On 2023-01-30 10:27, Alexandre Bailon wrote:
>>>> Currently, the driver can allocate an unmanaged iommu domain, but this
>>>> only works on SoCs that have multiple banks or multiple iova regions.
>>>
>>> That is for good reason - there is only a single pagetable per bank, 
>>> so if there are multiple devices assigned to a single bank, they 
>>> cannot possibly be attached to different domains at the same time. 
>>> Hence why the banks are modelled as groups.
>> I understand.
>> I am trying to upstream a remoteproc driver, but the remote processor is
>> behind the iommu.
>> remoteproc can manage the iommu, but that requires an unmanaged domain.
>> I tried a couple of things, but they cause code duplication, imply many
>> hacks, and are not always reliable.
>> Do you have any suggestions?
> 
> If there are other active devices behind the same IOMMU, and the 
> remoteproc device cannot be isolated into its own bank using the 
> existing IOMMU driver logic, then the remoteproc driver cannot manage 
> the IOMMU directly, and must just use the regular DMA API. There's no 
> way around it; you can't have two different parts of the kernel both 
> thinking they have exclusive control of a single IOMMU address space at 
> the same time. Similarly, remoteproc also cannot take explicit control 
> of a multi-device group if it's not actually in control of the other 
> devices, since their drivers will not be expecting the DMA address space 
> to suddenly change underfoot - that's why iommu_attach_device() has the 
> check which you presumably ran into.
Unfortunately, we can't just use the regular DMA API.
Basically, the firmware uses static addresses (and the remote core is
only supposed to access addresses between 0x60000000 and 0x70000000).
When we use the DMA API, we get a random address that doesn't match what
the firmware expects.
remoteproc uses the iommu API directly to map physical addresses to the
static addresses expected by the firmware when the DMA API can't be used.

Thanks,
Alexandre
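
A minimal sketch of the mapping Alexandre describes, assuming a
remoteproc-style carveout whose device address (da) is fixed by the
firmware image; the helper name is made up, and iommu_map() is the
five-argument form in use at the time:

/*
 * Map the physical pages backing the firmware to the fixed device
 * address the firmware was linked against, e.g. 0x60000000. rsc follows
 * struct fw_rsc_carveout from the firmware's resource table.
 */
static int map_fw_carveout(struct iommu_domain *domain,
			   struct fw_rsc_carveout *rsc, phys_addr_t pa)
{
	return iommu_map(domain, rsc->da, pa, rsc->len,
			 IOMMU_READ | IOMMU_WRITE);
}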
Yong Wu (吴勇) Feb. 14, 2023, 5:48 a.m. UTC | #6
On Tue, 2023-01-31 at 16:31 +0100, Alexandre Bailon wrote:
> 
> On 1/31/23 15:15, Robin Murphy wrote:
> > On 31/01/2023 1:08 pm, Alexandre Bailon wrote:
> > > Hi Robin
> > > 
> > > On 1/30/23 13:04, Robin Murphy wrote:
> > > > On 2023-01-30 10:27, Alexandre Bailon wrote:
> > > > > Currently, the driver can allocate an unmanaged iommu domain, but
> > > > > this only works on SoCs that have multiple banks or multiple iova
> > > > > regions.
> > > > 
> > > > That is for good reason - there is only a single pagetable per
> > > > bank, 
> > > > so if there are multiple devices assigned to a single bank,
> > > > they 
> > > > cannot possibly be attached to different domains at the same
> > > > time. 
> > > > Hence why the banks are modelled as groups.
> > > 
> > > I understand.
> > > I am trying to upstream a remoteproc driver, but the remote
> > > processor is behind the iommu.
> > > remoteproc can manage the iommu, but that requires an unmanaged
> > > domain.
> > > I tried a couple of things, but they cause code duplication, imply
> > > many hacks, and are not always reliable.
> > > Do you have any suggestions?
> > 
> > If there are other active devices behind the same IOMMU, and the 
> > remoteproc device cannot be isolated into its own bank using the 
> > existing IOMMU driver logic, then the remoteproc driver cannot
> > manage 
> > the IOMMU directly, and must just use the regular DMA API. There's
> > no 
> > way around it; you can't have two different parts of the kernel
> > both 
> > thinking they have exclusive control of a single IOMMU address
> > space at 
> > the same time. Similarly, remoteproc also cannot take explicit
> > control 
> > of a multi-device group if it's not actually in control of the
> > other 
> > devices, since their drivers will not be expecting the DMA address
> > space 
> > to suddenly change underfoot - that's why iommu_attach_device() has
> > the 
> > check which you presumably ran into.
> 
> Unfortunately, we can't just use the regular DMA API.
> Basically, the firmware uses static addresses (and the remote core is
> only supposed to access addresses between 0x60000000 and 0x70000000).
> When we use the DMA API, we get a random address that doesn't match
> what the firmware expects.
> remoteproc uses the iommu API directly to map physical addresses to the
> static addresses expected by the firmware when the DMA API can't be used.

If this master can only support this special address range, we could
handle it inside this driver.

Could you try adding these two patches, [3/11] and [4/11]?

[3/11] 
https://patchwork.kernel.org/project/linux-mediatek/patch/20230214031114.926-4-yong.wu@mediatek.com/
[4/11] 
https://patchwork.kernel.org/project/linux-mediatek/patch/20230214031114.926-5-yong.wu@mediatek.com/


and then add the logic for mt8365 (I see the APU is larb0 ports 10/11
in the binding):
--------------------------------

+#define MT8365_REGION_NR	2
+
+static const struct mtk_iommu_iova_region mt8365_multi_rgn[MT8365_REGION_NR] = {
+	{ .iova_base = 0x0,		.size = SZ_4G},		/* 0 ~ 4G. */
+	{ .iova_base = 0x60000000,	.size = SZ_256M},	/* APU */
+};
+

xxxxxxxxxxx

+static const unsigned int mt8365_larb_region_msk[MT8365_REGION_NR][MTK_LARB_NR_MAX] = {
+	[0] = {~(u32)(BIT(10) | BIT(11)), ~0, ~0, ~0, ~0, ~0},
+	[1] = {[0] = BIT(10) | BIT(11)},
+};
+
 static const struct mtk_iommu_plat_data mt8365_data = {
 	.m4u_plat	= M4U_MT8365,
 	.flags		= RESET_AXI | INT_ID_PORT_WIDTH_6,
 	.inv_sel_reg	= REG_MMU_INV_SEL_GEN1,
 	.banks_num	= 1,
 	.banks_enable	= {true},
-	.iova_region	= single_domain,
-	.iova_region_nr	= ARRAY_SIZE(single_domain),
+	.iova_region	= mt8365_multi_rgn,
+	.iova_region_nr	= ARRAY_SIZE(mt8365_multi_rgn),
+	.iova_region_larb_msk = mt8365_larb_region_msk,
 	.larbid_remap	= {{0}, {1}, {2}, {3}, {4}, {5}}, /* Linear mapping. */
 };
--------------------------------

After that, if we call the DMA API with a device whose dtsi has
M4U_PORT_APU_READ/M4U_PORT_APU_WRITE, the iova should be located at
that special address. Sorry, I have no board to test with.

Thanks. 

> 
> Thanks,
> Alexandre
>
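
A short usage sketch of what Yong Wu describes: once the APU ports are
routed to the second region, a plain coherent allocation from the APU's
device should land in that window with no direct IOMMU API use
(apu_dev is a placeholder for the device whose fwspec carries the APU
ports):

	dma_addr_t da;
	void *va;

	va = dma_alloc_coherent(apu_dev, SZ_1M, &da, GFP_KERNEL);
	/* with the masks above, da should fall within 0x60000000..0x6fffffff */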
Alexandre Bailon Sept. 14, 2023, 8:07 a.m. UTC | #7
Hi,

Sorry for the long delay in responding.

On 2/14/23 06:48, Yong Wu (吴勇) wrote:
> On Tue, 2023-01-31 at 16:31 +0100, Alexandre Bailon wrote:
>>
>> On 1/31/23 15:15, Robin Murphy wrote:
>>> On 31/01/2023 1:08 pm, Alexandre Bailon wrote:
>>>> Hi Robin
>>>>
>>>> On 1/30/23 13:04, Robin Murphy wrote:
>>>>> On 2023-01-30 10:27, Alexandre Bailon wrote:
>>>>>> Currently, the driver can allocate an unmanaged iommu domain, but this
>>>>>> only works on SoCs that have multiple banks or multiple iova regions.
>>>>>
>>>>> That is for good reason - there is only a single pagetable per
>>>>> bank,
>>>>> so if there are multiple devices assigned to a single bank,
>>>>> they
>>>>> cannot possibly be attached to different domains at the same
>>>>> time.
>>>>> Hence why the banks are modelled as groups.
>>>>
>>>> I understand.
>>>> I am trying to upstream a remoteproc driver, but the remote
>>>> processor is behind the iommu.
>>>> remoteproc can manage the iommu, but that requires an unmanaged
>>>> domain.
>>>> I tried a couple of things, but they cause code duplication, imply
>>>> many hacks, and are not always reliable.
>>>> Do you have any suggestions?
>>>
>>> If there are other active devices behind the same IOMMU, and the
>>> remoteproc device cannot be isolated into its own bank using the
>>> existing IOMMU driver logic, then the remoteproc driver cannot
>>> manage
>>> the IOMMU directly, and must just use the regular DMA API. There's
>>> no
>>> way around it; you can't have two different parts of the kernel
>>> both
>>> thinking they have exclusive control of a single IOMMU address
>>> space at
>>> the same time. Similarly, remoteproc also cannot take explicit
>>> control
>>> of a multi-device group if it's not actually in control of the
>>> other
>>> devices, since their drivers will not be expecting the DMA address
>>> space
>>> to suddenly change underfoot - that's why iommu_attach_device() has
>>> the
>>> check which you presumably ran into.
>>
>> Unfortunately, we can't just use the regular DMA API.
>> Basically, the firmware uses static addresses (and the remote core is
>> only supposed to access addresses between 0x60000000 and 0x70000000).
>> When we use the DMA API, we get a random address that doesn't match
>> what the firmware expects.
>> remoteproc uses the iommu API directly to map physical addresses to the
>> static addresses expected by the firmware when the DMA API can't be used.
> 
> If this master can only support this special address range, we could
> handle it inside this driver.
> 
> Could you try adding these two patches, [3/11] and [4/11]?
> 
> [3/11]
> https://patchwork.kernel.org/project/linux-mediatek/patch/20230214031114.926-4-yong.wu@mediatek.com/
> [4/11]
> https://patchwork.kernel.org/project/linux-mediatek/patch/20230214031114.926-5-yong.wu@mediatek.com/
> 
> 
> and then add the logic for mt8365 (I see the APU is larb0 ports 10/11
> in the binding):
> --------------------------------
> 
> +#define MT8365_REGION_NR	2
> +
> +static const struct mtk_iommu_iova_region mt8365_multi_rgn[MT8365_REGION_NR] = {
> +	{ .iova_base = 0x0,		.size = SZ_4G},		/* 0 ~ 4G. */
> +	{ .iova_base = 0x60000000,	.size = SZ_256M},	/* APU */
> +};
> +
> 
> xxxxxxxxxxx
> 
> +static const unsigned int mt8365_larb_region_msk[MT8365_REGION_NR][MTK_LARB_NR_MAX] = {
> +	[0] = {~(u32)(BIT(10) | BIT(11)), ~0, ~0, ~0, ~0, ~0},
> +	[1] = {[0] = BIT(10) | BIT(11)},
> +};
> +
>  static const struct mtk_iommu_plat_data mt8365_data = {
>  	.m4u_plat	= M4U_MT8365,
>  	.flags		= RESET_AXI | INT_ID_PORT_WIDTH_6,
>  	.inv_sel_reg	= REG_MMU_INV_SEL_GEN1,
>  	.banks_num	= 1,
>  	.banks_enable	= {true},
> -	.iova_region	= single_domain,
> -	.iova_region_nr	= ARRAY_SIZE(single_domain),
> +	.iova_region	= mt8365_multi_rgn,
> +	.iova_region_nr	= ARRAY_SIZE(mt8365_multi_rgn),
> +	.iova_region_larb_msk = mt8365_larb_region_msk,
>  	.larbid_remap	= {{0}, {1}, {2}, {3}, {4}, {5}}, /* Linear mapping. */
>  };
> --------------------------------
> 
> After that, if we call the DMA API with a device whose dtsi has
> M4U_PORT_APU_READ/M4U_PORT_APU_WRITE, the iova should be located at
> that special address. Sorry, I have no board to test with.
> 

I have not yet tested the patches, but they will only address one part
of the problem.
Using your patches, I could allocate some shared memory using the DMA
API, but the main issue still remains.
The firmware is not relocatable at all, so once the firmware is built,
it is expected to be loaded at a specific address.
The remoteproc framework supports this use case: using the resource
table, the firmware exposes to remoteproc which device address it
expects, and remoteproc manually calls iommu_map to satisfy this
requirement.
Using the DMA API, I could allocate the memory to load the firmware,
but I could not be sure that the DMA address would be the one expected
by the firmware.

Thanks,
Alexandre
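
To make the constraint concrete, here is how a firmware's resource table
might declare such a carveout (field layout per struct fw_rsc_carveout
in include/linux/remoteproc.h; the values are invented for
illustration):

	struct fw_rsc_carveout apu_fw_carveout = {
		.da    = 0x60000000,	  /* fixed device address the image was linked at */
		.pa    = FW_RSC_ADDR_ANY, /* let the kernel pick the physical pages */
		.len   = SZ_1M,
		.flags = 0,
		.name  = "apu-firmware",
	};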
Robin Murphy Sept. 14, 2023, 2:19 p.m. UTC | #8
On 2023-09-14 09:07, Alexandre Bailon wrote:
> Hi,
> 
> Sorry for the long delay in responding.
> 
> On 2/14/23 06:48, Yong Wu (吴勇) wrote:
>> On Tue, 2023-01-31 at 16:31 +0100, Alexandre Bailon wrote:
>>>
>>> On 1/31/23 15:15, Robin Murphy wrote:
>>>> On 31/01/2023 1:08 pm, Alexandre Bailon wrote:
>>>>> Hi Robin
>>>>>
>>>>> On 1/30/23 13:04, Robin Murphy wrote:
>>>>>> On 2023-01-30 10:27, Alexandre Bailon wrote:
>>>>>>> Currently, the driver can allocate an unmanaged iommu domain, but this
>>>>>>> only works on SoCs that have multiple banks or multiple iova regions.
>>>>>>
>>>>>> That is for good reason - there is only a single pagetable per
>>>>>> bank,
>>>>>> so if there are multiple devices assigned to a single bank,
>>>>>> they
>>>>>> cannot possibly be attached to different domains at the same
>>>>>> time.
>>>>>> Hence why the banks are modelled as groups.
>>>>>
>>>>> I understand.
>>>>> I am trying to upstream a remoteproc driver, but the remote
>>>>> processor is behind the iommu.
>>>>> remoteproc can manage the iommu, but that requires an unmanaged
>>>>> domain.
>>>>> I tried a couple of things, but they cause code duplication, imply
>>>>> many hacks, and are not always reliable.
>>>>> Do you have any suggestions?
>>>>
>>>> If there are other active devices behind the same IOMMU, and the
>>>> remoteproc device cannot be isolated into its own bank using the
>>>> existing IOMMU driver logic, then the remoteproc driver cannot
>>>> manage
>>>> the IOMMU directly, and must just use the regular DMA API. There's
>>>> no
>>>> way around it; you can't have two different parts of the kernel
>>>> both
>>>> thinking they have exclusive control of a single IOMMU address
>>>> space at
>>>> the same time. Similarly, remoteproc also cannot take explicit
>>>> control
>>>> of a multi-device group if it's not actually in control of the
>>>> other
>>>> devices, since their drivers will not be expecting the DMA address
>>>> space
>>>> to suddenly change underfoot - that's why iommu_attach_device() has
>>>> the
>>>> check which you presumably ran into.
>>>
>>> Unfortunately, we can't just use the regular DMA API.
>>> Basically, the firmware uses static addresses (and the remote core is
>>> only supposed to access addresses between 0x60000000 and 0x70000000).
>>> When we use the DMA API, we get a random address that doesn't match
>>> what the firmware expects.
>>> remoteproc uses the iommu API directly to map physical addresses to the
>>> static addresses expected by the firmware when the DMA API can't be used.
>>
>> If this master can only support this special address range, we could
>> handle it inside this driver.
>>
>> Could you try adding these two patches, [3/11] and [4/11]?
>>
>> [3/11]
>> https://patchwork.kernel.org/project/linux-mediatek/patch/20230214031114.926-4-yong.wu@mediatek.com/
>> [4/11]
>> https://patchwork.kernel.org/project/linux-mediatek/patch/20230214031114.926-5-yong.wu@mediatek.com/
>>
>>
>> and then add the logic for mt8365 (I see the APU is larb0 ports 10/11
>> in the binding):
>> --------------------------------
>>
>> +#define MT8365_REGION_NR	2
>> +
>> +static const struct mtk_iommu_iova_region mt8365_multi_rgn[MT8365_REGION_NR] = {
>> +	{ .iova_base = 0x0,		.size = SZ_4G},		/* 0 ~ 4G. */
>> +	{ .iova_base = 0x60000000,	.size = SZ_256M},	/* APU */
>> +};
>> +
>>
>> xxxxxxxxxxx
>>
>> +static const unsigned int mt8365_larb_region_msk[MT8365_REGION_NR][MTK_LARB_NR_MAX] = {
>> +	[0] = {~(u32)(BIT(10) | BIT(11)), ~0, ~0, ~0, ~0, ~0},
>> +	[1] = {[0] = BIT(10) | BIT(11)},
>> +};
>> +
>>  static const struct mtk_iommu_plat_data mt8365_data = {
>>  	.m4u_plat	= M4U_MT8365,
>>  	.flags		= RESET_AXI | INT_ID_PORT_WIDTH_6,
>>  	.inv_sel_reg	= REG_MMU_INV_SEL_GEN1,
>>  	.banks_num	= 1,
>>  	.banks_enable	= {true},
>> -	.iova_region	= single_domain,
>> -	.iova_region_nr	= ARRAY_SIZE(single_domain),
>> +	.iova_region	= mt8365_multi_rgn,
>> +	.iova_region_nr	= ARRAY_SIZE(mt8365_multi_rgn),
>> +	.iova_region_larb_msk = mt8365_larb_region_msk,
>>  	.larbid_remap	= {{0}, {1}, {2}, {3}, {4}, {5}}, /* Linear mapping. */
>>  };
>> --------------------------------
>>
>> After that, if we call the DMA API with a device whose dtsi has
>> M4U_PORT_APU_READ/M4U_PORT_APU_WRITE, the iova should be located at
>> that special address. Sorry, I have no board to test with.
>>
> 
> I have not yet tested the patches, but they will only address one part
> of the problem.
> Using your patches, I could allocate some shared memory using the DMA
> API, but the main issue still remains.
> The firmware is not relocatable at all, so once the firmware is built,
> it is expected to be loaded at a specific address.
> The remoteproc framework supports this use case: using the resource
> table, the firmware exposes to remoteproc which device address it
> expects, and remoteproc manually calls iommu_map to satisfy this
> requirement.
> Using the DMA API, I could allocate the memory to load the firmware,
> but I could not be sure that the DMA address would be the one expected
> by the firmware.

I think the solution to this is now to use the new DT binding to require 
an IOMMU identity mapping of the firmware region[1], such that it's all 
taken care of for you.

Thanks,
Robin.

[1] https://git.kernel.org/torvalds/c/af0d81357cc5
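
For illustration, a hedged sketch of that binding (node names,
addresses, and sizes are invented; see the commit in [1] for the
authoritative example). The reserved region carries an iommu-addresses
entry naming the device, which asks the IOMMU layer for a direct
(identity) mapping of the carveout:

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		apu_fw: apu-firmware@60000000 {
			reg = <0x60000000 0x100000>;
			iommu-addresses = <&apu 0x60000000 0x100000>;
		};
	};

	apu: apu {
		memory-region = <&apu_fw>;
	};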

Patch

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 2badd6acfb23d..5884a1ad409a0 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -858,9 +858,15 @@  static int mtk_iommu_get_group_id(struct device *dev, const struct mtk_iommu_pla
 static struct iommu_group *mtk_iommu_device_group(struct device *dev)
 {
 	struct mtk_iommu_data *c_data = dev_iommu_priv_get(dev), *data;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 	struct list_head *hw_list = c_data->hw_list;
 	struct iommu_group *group;
 	int groupid;
+	int i;
+
+	for (i = 0; i < fwspec->num_ids; i++)
+		if (MTK_M4U_UNMANAGED(fwspec->ids[i]))
+			return iommu_group_alloc();
 
 	data = mtk_iommu_get_frst_data(hw_list);
 	if (!data)