[02/10] pmdomain: rockchip: Simplify locking with guard()

Message ID 20240823-cleanup-h-guard-pm-domain-v1-2-8320722eaf39@linaro.org (mailing list archive)
State Awaiting Upstream
Delegated to: Geert Uytterhoeven
Series pmdomain: Simplify with cleanup.h

Commit Message

Krzysztof Kozlowski Aug. 23, 2024, 12:51 p.m. UTC
Simplify error handling around locks (less error-handling code) by
using guard().

Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
---
 drivers/pmdomain/rockchip/pm-domains.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
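
For context, guard() is the scope-based lock helper from include/linux/cleanup.h:
it takes the mutex immediately and releases it automatically when the enclosing
scope is left, so every return path is covered without an explicit
mutex_unlock(). A minimal before/after sketch of the pattern follows; the
do_first_step()/do_second_step() helpers are made up for illustration, only the
locking calls mirror the kernel API:

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_mutex);

/* Classic style: every error path must remember to unlock. */
static int example_old(int arg)
{
	int ret;

	mutex_lock(&example_mutex);
	ret = do_first_step(arg);	/* hypothetical helper */
	if (ret)
		goto out_unlock;
	ret = do_second_step(arg);	/* hypothetical helper */
out_unlock:
	mutex_unlock(&example_mutex);
	return ret;
}

/* guard() style: the mutex is dropped automatically on every return. */
static int example_new(int arg)
{
	int ret;

	guard(mutex)(&example_mutex);

	ret = do_first_step(arg);
	if (ret)
		return ret;

	return do_second_step(arg);
}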

Comments

Jonathan Cameron Aug. 27, 2024, 9:59 a.m. UTC | #1
On Fri, 23 Aug 2024 14:51:06 +0200
Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> wrote:

> Simplify error handling around locks (less error-handling code) by
> using guard().
> 
> Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Musing inline.

LGTM
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>


> ---
>  drivers/pmdomain/rockchip/pm-domains.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/drivers/pmdomain/rockchip/pm-domains.c b/drivers/pmdomain/rockchip/pm-domains.c
> index 5679ad336a11..538dde58d924 100644
> --- a/drivers/pmdomain/rockchip/pm-domains.c
> +++ b/drivers/pmdomain/rockchip/pm-domains.c
> @@ -910,7 +910,7 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
>  	 * Prevent any rockchip_pmu_block() from racing with the remainder of
>  	 * setup (clocks, register initialization).
>  	 */
> -	mutex_lock(&dmc_pmu_mutex);
> +	guard(mutex)(&dmc_pmu_mutex);
>  
>  	for_each_available_child_of_node_scoped(np, node) {
>  		error = rockchip_pm_add_one_domain(pmu, node);
> @@ -943,13 +943,10 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
>  	if (!WARN_ON_ONCE(dmc_pmu))
>  		dmc_pmu = pmu;
>  
> -	mutex_unlock(&dmc_pmu_mutex);
> -
>  	return 0;
>  
>  err_out:
>  	rockchip_pm_domain_cleanup(pmu);

I wonder: could you use devm_add_action_or_reset() for this and allow early
returns throughout?

The cleanup action would perhaps need to take the lock again, and I haven't
checked whether there is any issue in dropping and retaking the mutex.
The block logic is non-obvious, so I couldn't quickly figure this out.
 
> -	mutex_unlock(&dmc_pmu_mutex);
>  	return error;
>  }
>  
>
Krzysztof Kozlowski Aug. 27, 2024, 10:30 a.m. UTC | #2
On 27/08/2024 11:59, Jonathan Cameron wrote:
> On Fri, 23 Aug 2024 14:51:06 +0200
> Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> wrote:
> 
>> Simplify error handling around locks (less error-handling code) by
>> using guard().
>>
>> Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
> Musing inline.
> 
> LGTM
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> 
> 
>> ---
>>  drivers/pmdomain/rockchip/pm-domains.c | 5 +----
>>  1 file changed, 1 insertion(+), 4 deletions(-)
>>
>> diff --git a/drivers/pmdomain/rockchip/pm-domains.c b/drivers/pmdomain/rockchip/pm-domains.c
>> index 5679ad336a11..538dde58d924 100644
>> --- a/drivers/pmdomain/rockchip/pm-domains.c
>> +++ b/drivers/pmdomain/rockchip/pm-domains.c
>> @@ -910,7 +910,7 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
>>  	 * Prevent any rockchip_pmu_block() from racing with the remainder of
>>  	 * setup (clocks, register initialization).
>>  	 */
>> -	mutex_lock(&dmc_pmu_mutex);
>> +	guard(mutex)(&dmc_pmu_mutex);
>>  
>>  	for_each_available_child_of_node_scoped(np, node) {
>>  		error = rockchip_pm_add_one_domain(pmu, node);
>> @@ -943,13 +943,10 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
>>  	if (!WARN_ON_ONCE(dmc_pmu))
>>  		dmc_pmu = pmu;
>>  
>> -	mutex_unlock(&dmc_pmu_mutex);
>> -
>>  	return 0;
>>  
>>  err_out:
>>  	rockchip_pm_domain_cleanup(pmu);
> 
> I wonder: could you use devm_add_action_or_reset() for this and allow early
> returns throughout?
> 
> The cleanup action would perhaps need to take the lock again, and I haven't
> checked whether there is any issue in dropping and retaking the mutex.
> The block logic is non-obvious, so I couldn't quickly figure this out.

I will take a look, but as you already pointed out, it is a bit further from
a trivial, functionally-equivalent cleanup, and it may mean messing with the
locking.

Best regards,
Krzysztof
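
For reference, a rough sketch of the devm_add_action_or_reset() direction
floated above -- illustrative only, not the merged code. The wrapper name is
made up, registering the action means the cleanup would also run on unbind
(not only on probe error as today), and the open locking question from the
thread is deliberately left unresolved:

static void rockchip_pm_domain_cleanup_action(void *data)
{
	struct rockchip_pmu *pmu = data;

	/* Whether dmc_pmu_mutex must be (re)taken here is the open question above. */
	rockchip_pm_domain_cleanup(pmu);
}

static int rockchip_pm_domain_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct rockchip_pmu *pmu;
	int error;

	/* ... pmu allocation, clock and register setup as in the driver ... */

	guard(mutex)(&dmc_pmu_mutex);

	error = devm_add_action_or_reset(&pdev->dev,
					 rockchip_pm_domain_cleanup_action, pmu);
	if (error)
		return error;

	for_each_available_child_of_node_scoped(np, node) {
		error = rockchip_pm_add_one_domain(pmu, node);
		if (error)
			return error;	/* cleanup now runs via devm */
	}

	/* ... rest of probe; plain "return 0" on success ... */
	return 0;
}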

Patch

diff --git a/drivers/pmdomain/rockchip/pm-domains.c b/drivers/pmdomain/rockchip/pm-domains.c
index 5679ad336a11..538dde58d924 100644
--- a/drivers/pmdomain/rockchip/pm-domains.c
+++ b/drivers/pmdomain/rockchip/pm-domains.c
@@ -910,7 +910,7 @@  static int rockchip_pm_domain_probe(struct platform_device *pdev)
 	 * Prevent any rockchip_pmu_block() from racing with the remainder of
 	 * setup (clocks, register initialization).
 	 */
-	mutex_lock(&dmc_pmu_mutex);
+	guard(mutex)(&dmc_pmu_mutex);
 
 	for_each_available_child_of_node_scoped(np, node) {
 		error = rockchip_pm_add_one_domain(pmu, node);
@@ -943,13 +943,10 @@  static int rockchip_pm_domain_probe(struct platform_device *pdev)
 	if (!WARN_ON_ONCE(dmc_pmu))
 		dmc_pmu = pmu;
 
-	mutex_unlock(&dmc_pmu_mutex);
-
 	return 0;
 
 err_out:
 	rockchip_pm_domain_cleanup(pmu);
-	mutex_unlock(&dmc_pmu_mutex);
 	return error;
 }