Message ID | 20240608-adreno-v1-1-2e470480eee7@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | drm/msm/a6xx: request memory region |
On Sat, Jun 08, 2024 at 11:43:47AM -0400, Kiarash Hajian wrote:
> The driver's memory regions are currently just ioremap()ed, but not
> reserved through a request. That's not a bug, but having the request is
> a little more robust.
>
> Implement the region-request through the corresponding managed
> devres-function.
>
> Signed-off-by: Kiarash Hajian <kiarash8112hajian@gmail.com>
> ---

<< snip >>

Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
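For context on the devres helper the patch switches to: devm_ioremap_resource() reserves the range before mapping it and lets devres undo both when the device is unbound, which is what makes the explicit iounmap() calls removable. The following is a simplified, illustrative sketch of the equivalent open-coded sequence, not the kernel's actual implementation of that helper:

    #include <linux/device.h>
    #include <linux/io.h>
    #include <linux/ioport.h>

    /*
     * Roughly what devm_ioremap_resource(dev, res) buys over a bare
     * ioremap(): the range is reserved first, so a conflicting claim on an
     * overlapping range fails cleanly with -EBUSY, and both the reservation
     * and the mapping are released by devres when the device is unbound.
     */
    static void __iomem *map_region_managed(struct device *dev,
                                            struct resource *res)
    {
            void __iomem *base;

            if (!devm_request_mem_region(dev, res->start, resource_size(res),
                                         dev_name(dev)))
                    return IOMEM_ERR_PTR(-EBUSY);   /* region already claimed */

            base = devm_ioremap(dev, res->start, resource_size(res));
            if (!base)
                    return IOMEM_ERR_PTR(-ENOMEM);

            return base;
    }

The reservation step is also what produces the failure discussed below when the requested range overlaps a region another driver has already claimed.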
On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian <kiarash8112hajian@gmail.com> wrote:
>
> The driver's memory regions are currently just ioremap()ed, but not
> reserved through a request. That's not a bug, but having the request is
> a little more robust.
>
> Implement the region-request through the corresponding managed
> devres-function.

<< snip >>

> @@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct platform_device *pdev,
>                 return ERR_PTR(-EINVAL);
>         }
>
> -       ret = ioremap(res->start, resource_size(res));
> +       ret = devm_ioremap_resource(&pdev->dev, res);

So, this doesn't actually work, failing in __request_region_locked(),
because the gmu region partially overlaps with the gpucc region (which
is busy). I think this is intentional, since gmu is controlling the
gpu clocks, etc. In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
overlapping region. Maybe Akhil knows more about GMU.

BR,
-R

<< snip >>
On Fri, Jun 21, 2024 at 02:09:58PM -0700, Rob Clark wrote:
> On Sat, Jun 8, 2024 at 8:44 AM Kiarash Hajian
> <kiarash8112hajian@gmail.com> wrote:

<< snip >>

> > -       ret = ioremap(res->start, resource_size(res));
> > +       ret = devm_ioremap_resource(&pdev->dev, res);
>
> So, this doesn't actually work, failing in __request_region_locked(),
> because the gmu region partially overlaps with the gpucc region (which
> is busy). I think this is intentional, since gmu is controlling the
> gpu clocks, etc. In particular REG_A6XX_GPU_CC_GX_GDSCR is in this
> overlapping region. Maybe Akhil knows more about GMU.

We don't really need to map gpucc region from driver on behalf of gmu.
Since we don't access any gpucc register from drm-msm driver, we can
update the range size to correct this. But due to backward compatibility
requirement with older dt, can we still enable region locking? I prefer
it if that is possible.

FYI, kgsl accesses gpucc registers to ensure gdsc has collapsed. So
gpucc region has to be mapped by kgsl and that is reflected in the kgsl
device tree.

-Akhil

<< snip >>
On Tue, Jun 25, 2024 at 10:59 AM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>
> On Fri, Jun 21, 2024 at 02:09:58PM -0700, Rob Clark wrote:

<< snip >>

> We don't really need to map gpucc region from driver on behalf of gmu.
> Since we don't access any gpucc register from drm-msm driver, we can
> update the range size to correct this. But due to backward compatibility
> requirement with older dt, can we still enable region locking? I prefer
> it if that is possible.

Actually, when I reduced the region size to not overlap with gpucc,
the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.

So I guess that register is actually part of gpucc?

BR,
-R

> FYI, kgsl accesses gpucc registers to ensure gdsc has collapsed. So
> gpucc region has to be mapped by kgsl and that is reflected in the kgsl
> device tree.
>
> -Akhil

<< snip >>
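The "* 4" in Rob's comparison comes from the driver's register-accessor convention; below is a minimal sketch of that convention, assumed from the gmu_read()/gmu_write() helpers in a6xx_gmu.h rather than copied from them:

    #include <linux/io.h>
    #include <linux/types.h>

    /*
     * Assumed convention: the REG_A6XX_* constants are 32-bit register
     * indices, so the byte offset into the mapped "gmu" region is the index
     * shifted left by 2 (index * 4). A register whose index * 4 exceeds the
     * trimmed region size therefore falls outside the mapping, in this case
     * inside the gpucc range.
     */
    static inline u32 gmu_read_sketch(void __iomem *mmio, u32 reg_index)
    {
            return readl(mmio + (reg_index << 2));
    }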
On Tue, Jun 25, 2024 at 11:03:42AM -0700, Rob Clark wrote:
> On Tue, Jun 25, 2024 at 10:59 AM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:

<< snip >>

> > We don't really need to map gpucc region from driver on behalf of gmu.
> > Since we don't access any gpucc register from drm-msm driver, we can
> > update the range size to correct this. But due to backward compatibility
> > requirement with older dt, can we still enable region locking? I prefer
> > it if that is possible.
>
> Actually, when I reduced the region size to not overlap with gpucc,
> the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.
>
> So I guess that register is actually part of gpucc?

Yes. It has *GPU_CC* in its name. :P

I just saw that we program this register on legacy a6xx targets to
ensure retention is really ON before collapsing gdsc. So we can't
avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!

-Akhil

<< snip >>
On Tue, Jun 25, 2024 at 1:23 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>
> On Tue, Jun 25, 2024 at 11:03:42AM -0700, Rob Clark wrote:

<< snip >>

> > Actually, when I reduced the region size to not overlap with gpucc,
> > the region is smaller than REG_A6XX_GPU_CC_GX_GDSCR * 4.
> >
> > So I guess that register is actually part of gpucc?
>
> Yes. It has *GPU_CC* in its name. :P
>
> I just saw that we program this register on legacy a6xx targets to
> ensure retention is really ON before collapsing gdsc. So we can't
> avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!

I guess we could still use devm_ioremap().. idk if there is a better
way to solve this

BR,
-R

<< snip >>
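A minimal sketch of the fallback Rob mentions, written here as an assumption rather than a posted patch: devm_ioremap() keeps the managed teardown but skips the region request, so an overlap with the busy gpucc range no longer causes a failure. The helper name below is hypothetical:

    #include <linux/io.h>
    #include <linux/ioport.h>
    #include <linux/platform_device.h>

    /*
     * Hypothetical fallback: map the named range with devm_ioremap(), which
     * does not reserve the region, so an overlap with an already-claimed
     * range (such as gpucc) is tolerated, while the mapping is still torn
     * down automatically by devres on driver detach.
     */
    static void __iomem *a6xx_gmu_map_unreserved(struct platform_device *pdev,
                                                 const char *name)
    {
            struct resource *res;
            void __iomem *base;

            res = platform_get_resource_byname(pdev, IORESOURCE_MEM, name);
            if (!res)
                    return IOMEM_ERR_PTR(-EINVAL);

            base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
            if (!base)
                    return IOMEM_ERR_PTR(-ENOMEM);

            return base;
    }

The trade-off is that nothing stops another driver from mapping the same range, which is exactly the robustness the original patch was trying to add.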
<< snip >>

> > > I just saw that we program this register on legacy a6xx targets to
> > > ensure retention is really ON before collapsing gdsc. So we can't
> > > avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!
>
> I guess we could still use devm_ioremap().. idk if there is a better
> way to solve this

Can we do it without breaking backward compatibility with dt?

-Akhil

<< snip >>
On Thu, Jun 27, 2024 at 03:22:18AM GMT, Akhil P Oommen wrote:
> << snip >>
>
> > > I just saw that we program this register on legacy a6xx targets to
> > > ensure retention is really ON before collapsing gdsc. So we can't
> > > avoid mapping gpucc region in legacy a6xx GPUs. That is unfortunate!
> >
> > I guess we could still use devm_ioremap().. idk if there is a better
> > way to solve this
>
> Can we do it without breaking backward compatibility with dt?

I think a proper way would be to use devm_ioremap in the gpucc driver,
then the GPU driver can use devm_platform_ioremap_resource().

I'll take a look at sketching the gpucc patches in one of the next few
days.

<< snip >>
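For reference, a sketch of the direction Dmitry outlines, under the assumption that the gpucc driver stops reserving the overlapping range: the GPU side could then collapse the lookup, request, and mapping into one managed call. The _byname variant is used here because the GMU resources are looked up by name; the function name is hypothetical:

    #include <linux/io.h>
    #include <linux/platform_device.h>

    /*
     * Sketch only: once no other driver has reserved the overlapping range,
     * the lookup + request + map + devres-cleanup sequence becomes a single
     * helper call. It returns an ERR_PTR() on failure, so callers check
     * with IS_ERR() rather than for NULL.
     */
    static void __iomem *a6xx_gmu_get_mmio_sketch(struct platform_device *pdev,
                                                  const char *name)
    {
            return devm_platform_ioremap_resource_byname(pdev, name);
    }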
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 8bea8ef26f77..d26cc6254ef9 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -525,7 +525,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
 	bool pdc_in_aop = false;
 
 	if (IS_ERR(pdcptr))
-		goto err;
+		return;
 
 	if (adreno_is_a650(adreno_gpu) ||
 	    adreno_is_a660_family(adreno_gpu) ||
@@ -541,7 +541,7 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
 	if (!pdc_in_aop) {
 		seqptr = a6xx_gmu_get_mmio(pdev, "gmu_pdc_seq");
 		if (IS_ERR(seqptr))
-			goto err;
+			return;
 	}
 
 	/* Disable SDE clock gating */
@@ -633,12 +633,6 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu)
 	wmb();
 
 	a6xx_rpmh_stop(gmu);
-
-err:
-	if (!IS_ERR_OR_NULL(pdcptr))
-		iounmap(pdcptr);
-	if (!IS_ERR_OR_NULL(seqptr))
-		iounmap(seqptr);
 }
 
 /*
@@ -1503,7 +1497,7 @@ static void __iomem *a6xx_gmu_get_mmio(struct platform_device *pdev,
 		return ERR_PTR(-EINVAL);
 	}
 
-	ret = ioremap(res->start, resource_size(res));
+	ret = devm_ioremap_resource(&pdev->dev, res);
 	if (!ret) {
 		DRM_DEV_ERROR(&pdev->dev, "Unable to map the %s registers\n", name);
 		return ERR_PTR(-EINVAL);
@@ -1613,13 +1607,13 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 	gmu->mmio = a6xx_gmu_get_mmio(pdev, "gmu");
 	if (IS_ERR(gmu->mmio)) {
 		ret = PTR_ERR(gmu->mmio);
-		goto err_mmio;
+		goto err_cleanup;
 	}
 
 	gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
 	if (IS_ERR(gmu->cxpd)) {
 		ret = PTR_ERR(gmu->cxpd);
-		goto err_mmio;
+		goto err_cleanup;
 	}
 
 	if (!device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME)) {
@@ -1635,7 +1629,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 	gmu->gxpd = dev_pm_domain_attach_by_name(gmu->dev, "gx");
 	if (IS_ERR(gmu->gxpd)) {
 		ret = PTR_ERR(gmu->gxpd);
-		goto err_mmio;
+		goto err_cleanup;
 	}
 
 	gmu->initialized = true;
@@ -1645,9 +1639,7 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 detach_cxpd:
 	dev_pm_domain_detach(gmu->cxpd, false);
 
-err_mmio:
-	iounmap(gmu->mmio);
-
+err_cleanup:
 	/* Drop reference taken in of_find_device_by_node */
 	put_device(gmu->dev);
 
@@ -1762,7 +1754,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 		gmu->rscc = a6xx_gmu_get_mmio(pdev, "rscc");
 		if (IS_ERR(gmu->rscc)) {
 			ret = -ENODEV;
-			goto err_mmio;
+			goto err_cleanup;
 		}
 	} else {
 		gmu->rscc = gmu->mmio + 0x23000;
@@ -1774,13 +1766,13 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 
 	if (gmu->hfi_irq < 0 || gmu->gmu_irq < 0) {
 		ret = -ENODEV;
-		goto err_mmio;
+		goto err_cleanup;
 	}
 
 	gmu->cxpd = dev_pm_domain_attach_by_name(gmu->dev, "cx");
 	if (IS_ERR(gmu->cxpd)) {
 		ret = PTR_ERR(gmu->cxpd);
-		goto err_mmio;
+		goto err_cleanup;
 	}
 
 	link = device_link_add(gmu->dev, gmu->cxpd, DL_FLAG_PM_RUNTIME);
@@ -1824,10 +1816,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 detach_cxpd:
 	dev_pm_domain_detach(gmu->cxpd, false);
 
-err_mmio:
-	iounmap(gmu->mmio);
-	if (platform_get_resource_byname(pdev, IORESOURCE_MEM, "rscc"))
-		iounmap(gmu->rscc);
+err_cleanup:
 	free_irq(gmu->gmu_irq, gmu);
 	free_irq(gmu->hfi_irq, gmu);
The driver's memory regions are currently just ioremap()ed, but not
reserved through a request. That's not a bug, but having the request is
a little more robust.

Implement the region-request through the corresponding managed
devres-function.

Signed-off-by: Kiarash Hajian <kiarash8112hajian@gmail.com>
---
Changes in v6:
- Fix compile error
- Link to v5: https://lore.kernel.org/all/20240607-memory-v1-1-8664f52fc2a1@gmail.com

Changes in v5:
- Fix error handling problems.
- Link to v4: https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v4-1-3881a64088e6@gmail.com

Changes in v4:
- Combine v3 commits into a single commit
- Link to v3: https://lore.kernel.org/r/20240512-msm-adreno-memory-region-v3-0-0a728ad45010@gmail.com

Changes in v3:
- Remove redundant devm_iounmap calls, relying on devres for automatic resource cleanup.

Changes in v2:
- Update the subject prefix to "drm/msm/a6xx:", to match the majority of other changes to this file.
---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 33 +++++++++++----------------------
 1 file changed, 11 insertions(+), 22 deletions(-)
---
base-commit: 1b294a1f35616977caddaddf3e9d28e576a1adbc
change-id: 20240608-adreno-98c412bfdc03

Best regards,
--
Kiarash Hajian <kiarash8112hajian@gmail.com>