
[V3] arm64: psci: Reduce waiting time for cpu_psci_cpu_kill()

Message ID 433980c7-f246-f741-f00c-fce103a60af7@huawei.com (mailing list archive)
State New, archived

Commit Message

Yunfeng Ye Oct. 18, 2019, 11:24 a.m. UTC
In a case like suspend-to-disk, a large number of CPU cores need to be
shut down. At present, the CPU hotplug operation is serialised, and the
CPU cores can only be shut down one by one. In this process, if PSCI
affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill()
needs to wait for 10ms. If hundreds of CPU cores need to be shut down,
it will take a long time.

Normally, it is no need to wait 10ms in cpu_psci_cpu_kill(). So change
the wait interval from 10 ms to max 1 ms and use usleep_range() instead
of msleep() for more accurate schedule.

In addition, reduce the time interval will increase the messages output,
so remove the "Retry ..." message, instead, put the number of waiting
times to the sucessful message.

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
---
v2 -> v3:
 - update the comment
 - remove the busy-wait logic, modify the loop logic and output message

v1 -> v2:
 - use usleep_range() instead of udelay() after waiting for a while

 arch/arm64/kernel/psci.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
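
For context (not part of the patch), a minimal sketch contrasting the two sleep primitives the change swaps, assuming a kernel context with <linux/delay.h>; the function name is illustrative only:

	#include <linux/delay.h>

	static void sleep_primitives_sketch(void)
	{
		/* msleep() is timer-wheel (jiffy) based: the 10ms request is
		 * rounded up to jiffies, so each retry costs at least one full
		 * 10ms slot even when the CPU is already off. */
		msleep(10);

		/* usleep_range(min_us, max_us) is hrtimer based: this call
		 * sleeps at least 100us and at most 1000us, and the slack lets
		 * the kernel coalesce the wakeup with other pending timers. */
		usleep_range(100, 1000);
	}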

Comments

Mark Rutland Oct. 18, 2019, 11:41 a.m. UTC | #1
On Fri, Oct 18, 2019 at 07:24:14PM +0800, Yunfeng Ye wrote:
> In a case like suspend-to-disk, a large number of CPU cores need to be
> shut down. At present, the CPU hotplug operation is serialised, and the
> CPU cores can only be shut down one by one. In this process, if PSCI
> affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill()
> needs to wait for 10ms. If hundreds of CPU cores need to be shut down,
> it will take a long time.

Do we have an idea of roughly how long a CPU _usually_ takes to
transition state?

i.e. are we _just_ missing the transition the first time we call
AFFINITY_INFO?

> Normally, it is no need to wait 10ms in cpu_psci_cpu_kill(). So change
> the wait interval from 10 ms to max 1 ms and use usleep_range() instead
> of msleep() for more accurate schedule.
> 
> In addition, reduce the time interval will increase the messages output,
> so remove the "Retry ..." message, instead, put the number of waiting
> times to the sucessful message.
> 
> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
> ---
> v2 -> v3:
>  - update the comment
>  - remove the busy-wait logic, modify the loop logic and output message
> 
> v1 -> v2:
>  - use usleep_range() instead of udelay() after waiting for a while
> 
>  arch/arm64/kernel/psci.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
> index c9f72b2665f1..00b8c0825a08 100644
> --- a/arch/arm64/kernel/psci.c
> +++ b/arch/arm64/kernel/psci.c
> @@ -91,15 +91,14 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
>  	 * while it is dying. So, try again a few times.
>  	 */
> 
> -	for (i = 0; i < 10; i++) {
> +	for (i = 0; i < 100; i++) {
>  		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
>  		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
> -			pr_info("CPU%d killed.\n", cpu);
> +			pr_info("CPU%d killed by waiting %d loops.\n", cpu, i);

Could we please make that:

			pr_info("CPU%d killed (polled %d times)\n", cpu, i + 1);



>  			return 0;
>  		}
> 
> -		msleep(10);
> -		pr_info("Retrying again to check for CPU kill\n");
> +		usleep_range(100, 1000);

Hmm, so now we'll wait somewhere between 10ms and 100ms before giving up
on a CPU depending on how long we actually sleep for each iteration of
the loop. That should be called out in the commit message.

That could matter for kdump when you have a large number of CPUs, as in
the worst case for 256 CPUs we've gone from ~2.6s to ~26s. But tbh in
that case I'm not sure I care that much...

In the majority of cases I'd hope AFFINITY_INFO would return OFF after
an iteration or two.
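
Spelled out, the bounds described above (assuming every CPU exhausts the full retry budget):

	per CPU (new loop):  100 retries x 100us .. 100 retries x 1ms  =  10ms .. 100ms
	256 CPUs in series:  256 x 10ms .. 256 x 100ms                 =  ~2.6s .. ~26s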

Thanks,
Mark.
Sudeep Holla Oct. 18, 2019, 11:45 a.m. UTC | #2
On Fri, Oct 18, 2019 at 07:24:14PM +0800, Yunfeng Ye wrote:
> In a case like suspend-to-disk, a large number of CPU cores need to be

Add suspend-to-ram also to list, i.e.
"In case like suspend-to-disk and suspend-to-ram, a large number..."

> shut down. At present, the CPU hotplug operation is serialised, and the
> CPU cores can only be shut down one by one. In this process, if PSCI
> affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill()
> needs to wait for 10ms. If hundreds of CPU cores need to be shut down,
> it will take a long time.
>
> Normally, it is no need to wait 10ms in cpu_psci_cpu_kill(). So change

s/it is/there is/

> the wait interval from 10 ms to max 1 ms and use usleep_range() instead
> of msleep() for more accurate schedule.
>

s/for more accurate schedule/for more accurate timer/

> In addition, reduce the time interval will increase the messages output,

s/reduce/reducing/

> so remove the "Retry ..." message, instead, put the number of waiting
> times to the sucessful message.
> 
> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
> ---
> v2 -> v3:
>  - update the comment
>  - remove the busy-wait logic, modify the loop logic and output message
> 
> v1 -> v2:
>  - use usleep_range() instead of udelay() after waiting for a while
> 
>  arch/arm64/kernel/psci.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
> index c9f72b2665f1..00b8c0825a08 100644
> --- a/arch/arm64/kernel/psci.c
> +++ b/arch/arm64/kernel/psci.c
> @@ -91,15 +91,14 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
>  	 * while it is dying. So, try again a few times.
>  	 */
> 
> -	for (i = 0; i < 10; i++) {
> +	for (i = 0; i < 100; i++) {
>  		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
>  		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
> -			pr_info("CPU%d killed.\n", cpu);
> +			pr_info("CPU%d killed by waiting %d loops.\n", cpu, i);
>  			return 0;
>  		}
> 
> -		msleep(10);
> -		pr_info("Retrying again to check for CPU kill\n");
> +		usleep_range(100, 1000);

Since usleep_range can return anytime between 100us to 1ms, does it make
sense to check for (time_before(jiffies, timeout)) you had in v2 ?
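
A rough sketch of that suggestion, inside the existing cpu_psci_cpu_kill() loop (err and cpu as in the current code; the 100ms overall bound is an assumption for illustration, not necessarily what v2 used):

	unsigned long timeout = jiffies + msecs_to_jiffies(100);
	int i = 0;

	do {
		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
		i++;
		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
			pr_info("CPU%d killed (polled %d times)\n", cpu, i);
			return 0;
		}
		/* Sleep 100us..1ms per retry; the jiffies bound caps the total
		 * wait regardless of how long each usleep_range() actually takes. */
		usleep_range(100, 1000);
	} while (time_before(jiffies, timeout));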

--
Regards,
Sudeep
Yunfeng Ye Oct. 18, 2019, 12:03 p.m. UTC | #3
On 2019/10/18 19:45, Sudeep Holla wrote:
> On Fri, Oct 18, 2019 at 07:24:14PM +0800, Yunfeng Ye wrote:
>> In a case like suspend-to-disk, a large number of CPU cores need to be
> 
> Add suspend-to-ram also to list, i.e.
> "In case like suspend-to-disk and suspend-to-ram, a large number..."
> 
ok, thanks.

>> shut down. At present, the CPU hotplug operation is serialised, and the
>> CPU cores can only be shut down one by one. In this process, if PSCI
>> affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill()
>> needs to wait for 10ms. If hundreds of CPU cores need to be shut down,
>> it will take a long time.
>>
>> Normally, it is no need to wait 10ms in cpu_psci_cpu_kill(). So change
> 
> s/it is/there is/
> 
ok, thanks.

>> the wait interval from 10 ms to max 1 ms and use usleep_range() instead
>> of msleep() for more accurate schedule.
>>
> 
> s/for more accurate schedule/for more accurate timer/
> 
ok, thanks.

>> In addition, reduce the time interval will increase the messages output,
> 
> s/reduce/reducing/
> 
ok, thanks.

>> so remove the "Retry ..." message, instead, put the number of waiting
>> times to the sucessful message.
>>
>> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
>> ---
>> v2 -> v3:
>>  - update the comment
>>  - remove the busy-wait logic, modify the loop logic and output message
>>
>> v1 -> v2:
>>  - use usleep_range() instead of udelay() after waiting for a while
>>
>>  arch/arm64/kernel/psci.c | 7 +++----
>>  1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
>> index c9f72b2665f1..00b8c0825a08 100644
>> --- a/arch/arm64/kernel/psci.c
>> +++ b/arch/arm64/kernel/psci.c
>> @@ -91,15 +91,14 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
>>  	 * while it is dying. So, try again a few times.
>>  	 */
>>
>> -	for (i = 0; i < 10; i++) {
>> +	for (i = 0; i < 100; i++) {
>>  		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
>>  		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
>> -			pr_info("CPU%d killed.\n", cpu);
>> +			pr_info("CPU%d killed by waiting %d loops.\n", cpu, i);
>>  			return 0;
>>  		}
>>
>> -		msleep(10);
>> -		pr_info("Retrying again to check for CPU kill\n");
>> +		usleep_range(100, 1000);
> 
> Since usleep_range can return anytime between 100us to 1ms, does it make
> sense to check for (time_before(jiffies, timeout)) you had in v2 ?
> 
ok, if using time_before(jiffies, timeout), should the output message be changed to print how many jiffies we waited, or should it still print the number of loops?

> --
> Regards,
> Sudeep
> 
> .
>
Yunfeng Ye Oct. 18, 2019, 12:22 p.m. UTC | #4
On 2019/10/18 19:41, Mark Rutland wrote:
> On Fri, Oct 18, 2019 at 07:24:14PM +0800, Yunfeng Ye wrote:
>> In a case like suspend-to-disk, a large number of CPU cores need to be
>> shut down. At present, the CPU hotplug operation is serialised, and the
>> CPU cores can only be shut down one by one. In this process, if PSCI
>> affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill()
>> needs to wait for 10ms. If hundreds of CPU cores need to be shut down,
>> it will take a long time.
> 
> Do we have an idea of roughly how long a CPU _usually_ takes to
> transition state?
> 
> i.e. are we _just_ missing the transition the first time we call
> AFFINITY_INFO?
> 
We have tested it: in most cases it takes less than 1ms, typically 50us-500us. The time includes not only the hardware state transition but also the cache flush done in the BIOS, and that cache flush is the time-consuming part.

>> Normally, it is no need to wait 10ms in cpu_psci_cpu_kill(). So change
>> the wait interval from 10 ms to max 1 ms and use usleep_range() instead
>> of msleep() for more accurate schedule.
>>
>> In addition, reduce the time interval will increase the messages output,
>> so remove the "Retry ..." message, instead, put the number of waiting
>> times to the sucessful message.
>>
>> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
>> ---
>> v2 -> v3:
>>  - update the comment
>>  - remove the busy-wait logic, modify the loop logic and output message
>>
>> v1 -> v2:
>>  - use usleep_range() instead of udelay() after waiting for a while
>>
>>  arch/arm64/kernel/psci.c | 7 +++----
>>  1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
>> index c9f72b2665f1..00b8c0825a08 100644
>> --- a/arch/arm64/kernel/psci.c
>> +++ b/arch/arm64/kernel/psci.c
>> @@ -91,15 +91,14 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
>>  	 * while it is dying. So, try again a few times.
>>  	 */
>>
>> -	for (i = 0; i < 10; i++) {
>> +	for (i = 0; i < 100; i++) {
>>  		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
>>  		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
>> -			pr_info("CPU%d killed.\n", cpu);
>> +			pr_info("CPU%d killed by waiting %d loops.\n", cpu, i);
> 
> Could we please make that:
> 
> 			pr_info("CPU%d killed (polled %d times)\n", cpu, i + 1);
> 
ok, thanks.
> 
> 
>>  			return 0;
>>  		}
>>
>> -		msleep(10);
>> -		pr_info("Retrying again to check for CPU kill\n");
>> +		usleep_range(100, 1000);
> 
> Hmm, so now we'll wait somewhere between 10ms and 100ms before giving up
> on a CPU depending on how long we actually sleep for each iteration of
> the loop. That should be called out in the commit message.
> 
> That could matter for kdump when you have a large number of CPUs, as in
> the worst case for 256 CPUs we've gone from ~2.6s to ~26s. But tbh in
> that case I'm not sure I care that much...
> 
> In the majority of cases I'd hope AFFINITY_INFO would return OFF after
> an iteration or two.
> 
Normally it will not need that much time.

> Thanks,
> Mark.
> 
> .
>

Patch

diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index c9f72b2665f1..00b8c0825a08 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -91,15 +91,14 @@  static int cpu_psci_cpu_kill(unsigned int cpu)
 	 * while it is dying. So, try again a few times.
 	 */

-	for (i = 0; i < 10; i++) {
+	for (i = 0; i < 100; i++) {
 		err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
 		if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
-			pr_info("CPU%d killed.\n", cpu);
+			pr_info("CPU%d killed by waiting %d loops.\n", cpu, i);
 			return 0;
 		}

-		msleep(10);
-		pr_info("Retrying again to check for CPU kill\n");
+		usleep_range(100, 1000);
 	}

 	pr_warn("CPU%d may not have shut down cleanly (AFFINITY_INFO reports %d)\n",