
[v5,0/2] CPU-Idle latency selftest framework

Message ID 20210430082804.38018-1-psampat@linux.ibm.com (mailing list archive)

Message

Pratik R. Sampat April 30, 2021, 8:28 a.m. UTC
Changelog RFC v4 --> PATCH v5:
1. Added a CPU online check prior to parsing the CPU topology to avoid
   parsing topologies for CPUs unavailable for the latency test
2. Added comment describing the selftest in cpuidle.sh

As I have made changes to how cpuidle.sh works, I am dropping
Doug Smythies' "Reviewed-by" from the second patch, while retaining
it for the first patch.

RFC v4: https://lkml.org/lkml/2021/4/12/99
---
A kernel module + userspace driver to estimate the wakeup latency
caused by going into stop states. The motivation behind this program is
to find significant deviations between the advertised latency and
residency values and those actually observed.

The patchset measures latencies for two kinds of events: IPIs and timers.
As this is a software-only mechanism, there will be additional latencies
from the kernel-firmware-hardware interactions. To account for that, the
program also measures a baseline latency on a 100 percent loaded CPU,
and the observed latencies must be viewed relative to that baseline.

To achieve this, we introduce a kernel module and expose its control
knobs through the debugfs interface that the selftests can engage with.

The kernel module provides the following interfaces within
/sys/kernel/debug/latency_test/:

IPI test:
    ipi_cpu_dest = Destination CPU for the IPI
    ipi_cpu_src = Origin of the IPI
    ipi_latency_ns = Measured latency time in ns
Timeout test:
    timeout_cpu_src = CPU on which the timer is to be queued
    timeout_expected_ns = Timer duration
    timeout_diff_ns = Difference between actual and expected timer duration
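For illustration, the IPI knobs above could be driven by hand roughly as
follows. This is only a sketch: the trigger semantics (here assumed to be
a read of ipi_latency_ns after setting the source and destination CPUs)
are an assumption, and cpuidle.sh remains the authoritative driver.

```shell
#!/bin/sh
# Hedged sketch: poke the module's debugfs knobs for one IPI measurement.
# Assumes the test-cpuidle_latency module is loaded and debugfs mounted;
# the read-triggers-measurement behaviour is assumed for illustration.
run_ipi_test() {
    dbg="$1"                        # debugfs directory of the module
    echo 0 > "$dbg/ipi_cpu_src"     # CPU that fires the IPI
    echo 4 > "$dbg/ipi_cpu_dest"    # (ideally idle) CPU that receives it
    cat "$dbg/ipi_latency_ns"       # measured wakeup latency in ns
}

if [ -d /sys/kernel/debug/latency_test ]; then
    run_ipi_test /sys/kernel/debug/latency_test
fi
```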

Sample output on a POWER9 system is as follows:
# --IPI Latency Test---
# Baseline Average IPI latency(ns): 3114
# Observed Average IPI latency(ns) - State0: 3265
# Observed Average IPI latency(ns) - State1: 3507
# Observed Average IPI latency(ns) - State2: 3739
# Observed Average IPI latency(ns) - State3: 3807
# Observed Average IPI latency(ns) - State4: 17070
# Observed Average IPI latency(ns) - State5: 1038174
# Observed Average IPI latency(ns) - State6: 1068784
# 
# --Timeout Latency Test--
# Baseline Average timeout diff(ns): 1420
# Observed Average timeout diff(ns) - State0: 1640
# Observed Average timeout diff(ns) - State1: 1764
# Observed Average timeout diff(ns) - State2: 1715
# Observed Average timeout diff(ns) - State3: 1845
# Observed Average timeout diff(ns) - State4: 16581
# Observed Average timeout diff(ns) - State5: 939977
# Observed Average timeout diff(ns) - State6: 1073024


Things to keep in mind:

1. This kernel module + bash driver does not guarantee idleness on a
   core when the IPI or the timer is armed. It only invokes sleep and
   hopes that the core is idle once the IPI/timer is fired at it.
   Hence this program must be run on a completely idle system for best
   results.

2. Even on a completely idle system, there may be book-keeping or
   jitter tasks that run on the core we want idle. These can create
   outliers in the latency measurement. Thankfully, such outliers
   should be large enough to be easily weeded out.

3. A userspace-only selftest variant was also sent out as an RFC, based
   on suggestions on the previous patchset, to simplify the kernel
   complexity. However, the userspace-only approach had more noise in
   the latency measurement due to userspace-kernel interactions,
   which led to run-to-run variance and a less accurate test.
   Another downside of a userspace program is that it takes orders of
   magnitude longer to complete a full system test compared to the
   kernel framework.
   RFC patch: https://lkml.org/lkml/2020/9/2/356

4. On Intel systems, the timer-based latencies do not exactly measure
   idle wakeup latency. This is because of a hardware optimization
   mechanism that pre-arms a CPU when a timer is set to wake it up.
   That does not make the metric useless for Intel systems; it just
   means it measures IPI/timer response latency rather than idle
   wakeup latency.
   (Source: https://lkml.org/lkml/2020/9/2/610)
   As a solution to this problem, a hardware-based latency analyzer has
   been devised by Artem Bityutskiy from Intel:
   https://youtu.be/Opk92aQyvt0?t=8266
   https://intel.github.io/wult/
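Point 2 above notes that jitter-induced outliers should be large enough
to weed out. As a rough illustration only (this is not what cpuidle.sh
does, and the 3x-median threshold is an arbitrary choice), such a filter
could look like this, assuming one latency sample in ns per line:

```shell
#!/bin/sh
# Hypothetical outlier filter: drop samples larger than 3x the median,
# then average the remainder. "$1" is a file with one sample per line.
filter_avg() {
    samples="$1"
    # median of the sorted samples (upper median for even counts)
    median=$(sort -n "$samples" | awk '{a[NR]=$1} END {print a[int((NR+1)/2)]}')
    # average of all samples at or below 3x the median
    awk -v m="$median" '$1 <= 3*m {sum+=$1; n++} END {if (n) print sum/n}' "$samples"
}
```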

Pratik R. Sampat (2):
  cpuidle: Extract IPI based and timer based wakeup latency from idle
    states
  selftest/cpuidle: Add support for cpuidle latency measurement

 drivers/cpuidle/Makefile                   |   1 +
 drivers/cpuidle/test-cpuidle_latency.c     | 157 ++++++++
 lib/Kconfig.debug                          |  10 +
 tools/testing/selftests/Makefile           |   1 +
 tools/testing/selftests/cpuidle/Makefile   |   6 +
 tools/testing/selftests/cpuidle/cpuidle.sh | 414 +++++++++++++++++++++
 tools/testing/selftests/cpuidle/settings   |   2 +
 7 files changed, 591 insertions(+)
 create mode 100644 drivers/cpuidle/test-cpuidle_latency.c
 create mode 100644 tools/testing/selftests/cpuidle/Makefile
 create mode 100755 tools/testing/selftests/cpuidle/cpuidle.sh
 create mode 100644 tools/testing/selftests/cpuidle/settings

Comments

Pratik R. Sampat May 13, 2021, 9:18 a.m. UTC | #1
Hi @Rafael and @Shuah,

Gentle ping.

Is there any feedback on this patch-set?

Quick summary and history:
1. The patchset introduces a kernel module and a bash selftest driver to
    estimate the wakeup latency caused by entering idle states
2. The patchset seems to have provided useful feedback on the latency of
    idle states on the IBM POWER architecture
3. It also seems to provide desirable results on Intel machines with
    the IPI mechanism (timer tests are optional there, since some Intel
    processors have a pre-wakeup feature and may not reflect actual idle
    latency), as reviewed by Doug Smythies.
    Intel numbers for reference: https://lkml.org/lkml/2021/4/13/785

--
Thanks
Pratik
