From patchwork Wed Aug 1 15:19:49 2018
X-Patchwork-Submitter: Johannes Weiner <hannes@cmpxchg.org>
X-Patchwork-Id: 10552455
From: Johannes Weiner <hannes@cmpxchg.org>
To: Ingo Molnar, Peter Zijlstra, Andrew Morton, Linus Torvalds
Cc: Tejun Heo, Suren Baghdasaryan, Daniel Drake, Vinayak Menon, Christopher Lameter, Mike Galbraith, Shakeel Butt, Peter Enderborg, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 0/9] psi: pressure stall information for CPU, memory, and IO v3
Date: Wed, 1 Aug 2018 11:19:49 -0400
Message-Id: <20180801151958.32590-1-hannes@cmpxchg.org>

[ Resend after butchering the message headers in the first attempt, sorry ]

This is version 3 of the PSI series. It once again incorporates a ton of feedback from Peter; more details at the very bottom of this email.

I've added benchmark results from the v2 thread to the FAQ below. I also re-ran the memcache test that showed a 1% CPU utilization increase with v2; with v3 that difference is no longer detectable.

Overview

PSI reports the overall wallclock time in which the tasks in a system (or cgroup) wait for contended hardware resources. This helps users understand the resource pressure their workloads are under, which allows them to root-cause and fix throughput and latency problems caused by overcommitting, underprovisioning, or suboptimal job placement in a grid, as well as anticipate major disruptions like OOM.

Real-world applications

We're using the data collected by PSI (and its previous incarnation, memdelay) quite extensively at Facebook, with several success stories.

One use case is avoiding OOM hangs/livelocks. These happen because the OOM killer is triggered by reclaim being unable to free pages, but with fast flash devices there is *always* some clean and up-to-date cache to reclaim; the OOM killer never kicks in, even as tasks spend 90% of their time thrashing the cache pages of their own executables. There is no situation where this ever makes sense in practice. We wrote a <100 line POC python script to monitor memory pressure and kill stuff well before such pathological thrashing leads to full system losses that would require forcible hard resets.

We've since extended and deployed this code into other places to guarantee latency and throughput SLAs, since those are usually violated way before the kernel OOM killer would ever kick in. It is available here: https://github.com/facebookincubator/oomd
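The POC itself isn't included in this posting; purely as an illustration of the approach, here is a minimal sketch of such a watchdog. The 60% threshold, 5s poll interval, and "kill the biggest RSS" policy are made up for the example - the real oomd implements configurable and much richer policies:

    #!/usr/bin/env python3
    # Sketch of a pressure-based OOM watchdog. Threshold, interval,
    # and victim selection are illustrative only.
    import os
    import signal
    import time

    FULL_AVG10_THRESHOLD = 60.0  # percent; made-up number

    def full_avg10():
        # Return the 10s "full" average from /proc/pressure/memory.
        with open("/proc/pressure/memory") as f:
            for line in f:
                fields = line.split()
                if fields[0] == "full":
                    kv = dict(pair.split("=") for pair in fields[1:])
                    return float(kv["avg10"])
        return 0.0

    def biggest_task():
        # Pick the task with the largest resident set, per /proc/<pid>/statm.
        best_pid, best_rss = None, -1
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/statm") as f:
                    rss = int(f.read().split()[1])
            except OSError:
                continue  # task exited or is inaccessible
            if rss > best_rss:
                best_pid, best_rss = int(pid), rss
        return best_pid

    while True:
        if full_avg10() > FULL_AVG10_THRESHOLD:
            victim = biggest_task()
            if victim:
                os.kill(victim, signal.SIGKILL)
        time.sleep(5)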
Eventually we probably want to trigger the in-kernel OOM killer based on extreme sustained pressure as well, so that Linux can avoid memory livelocks - which technically aren't deadlocks, but are indistinguishable from them to the user - out of the box. We'd continue using oomd as the first line of defense, to ensure workload health and implement complex kill policies that are beyond the scope of the kernel.

We also use PSI memory pressure for load shedding. Our batch job infrastructure used to rely on heuristics based on various VM stats to anticipate OOM situations, with lackluster success. We switched it to PSI and managed to anticipate and avoid OOM kills and lockups fairly reliably. The reduction of OOM outages in the worker pool raised the pool's aggregate productivity, and we were able to switch that service to smaller machines.

Lastly, we use cgroups to isolate a machine's main workload from maintenance crap like package upgrades, logging, and configuration, as well as to prevent multiple workloads on a machine from stepping on each other's toes. We were not able to configure this properly without the pressure metrics; we would see latency or bandwidth drops, but it was often hard to impossible to root-cause them post-mortem. We now log and graph pressure for the containers in our fleet and can trivially link latency spikes and throughput drops to shortages of specific resources after the fact, and fix the job config/scheduling.

PSI has also received testing, feedback, and feature requests from Android and EndlessOS for the purpose of low-latency OOM killing, to intervene in pressure situations before the UI starts hanging.

How do you use this feature?

A kernel with CONFIG_PSI=y will create a /proc/pressure directory with 3 files: cpu, memory, and io. If using cgroup2, cgroups will also have cpu.pressure, memory.pressure and io.pressure files, which simply aggregate task stalls at the cgroup level instead of system-wide.

The cpu file contains one line:

    some avg10=2.04 avg60=0.75 avg300=0.40 total=157656722

The averages give the percentage of walltime in which one or more tasks are delayed on the runqueue while another task has the CPU. They're recent averages over 10s, 1m, and 5m windows, so you can tell short-term trends from long-term ones, similarly to the load average.

The total= value gives the absolute stall time in microseconds. This allows detecting latency spikes that might be too short to sway the running averages. It also allows custom time averaging in case the 10s/1m/5m windows aren't adequate for the use case (or are too coarse with future hardware).

What to make of this "some" metric? If CPU utilization is at 100% and CPU pressure is 0, the system is perfectly utilized, with one runnable thread per CPU and nobody waiting. At two or more runnable tasks per CPU, the system is 100% overcommitted and the pressure average will indicate as much. From a utilization perspective this is of course a great state: no CPU cycles are being wasted, even if 50% of the threads were to go idle (as most workloads do vary). From the perspective of the individual jobs it's not great, however, and they would do better with more resources. Depending on what your priorities and options are, raised "some" numbers may or may not require action.

The memory file contains two lines:

    some avg10=70.24 avg60=68.52 avg300=69.91 total=3559632828
    full avg10=57.59 avg60=58.06 avg300=60.38 total=3300487258

The some line is the same as for cpu: the time in which at least one task is stalled on the resource. In the case of memory, this includes waiting on swap-in, page cache refaults, and page reclaim.

The full line, however, indicates the time in which *nobody* is using the CPU productively due to pressure: all non-idle tasks are waiting for memory in one form or another. Significant time spent in there is a good trigger for killing things, moving jobs to other machines, or dropping incoming requests, since neither the jobs nor the machine overall are making much headway.
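All pressure files share this line format, which lends itself to trivial parsing. As a sketch of both parsing and the custom time averaging mentioned above - sampling the total= counter twice and computing the stall percentage over the elapsed interval - here is an example; the 30s window and the choice of the memory file are arbitrary:

    import time

    def read_pressure(path):
        # Parse a /proc/pressure/* file into {"some": {...}, "full": {...}}.
        res = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                res[fields[0]] = {k: float(v) for k, v in
                                  (pair.split("=") for pair in fields[1:])}
        return res

    # Custom averaging window: total= is cumulative stall time in
    # microseconds, so the delta over a window gives a percentage.
    WINDOW = 30  # seconds; arbitrary example window
    t0 = read_pressure("/proc/pressure/memory")["some"]["total"]
    time.sleep(WINDOW)
    t1 = read_pressure("/proc/pressure/memory")["some"]["total"]
    stall_pct = (t1 - t0) / (WINDOW * 1e6) * 100
    print(f"some stall over the last {WINDOW}s: {stall_pct:.2f}%")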
The io file is similar to memory. Because the block layer doesn't have a concept of hardware contention right now (how much longer is my IO request taking because of other tasks?), it reports the CPU potential lost on all IO delays, not just the potential lost to competition.

FAQ

Q: How is PSI's CPU component different from the load average?

A: There are several quirks in the load average that make it hard to impossible to tell how overcommitted the CPU really is.

1. The load average is reported as a raw number of active tasks. You need to know how many CPUs there are in the system, how many CPUs the workload is allowed to use, and then think about what the proportion between the load and the number of CPUs means for the tasks trying to run.

PSI reports the percentage of wallclock time in which tasks are waiting for a CPU to run on. It doesn't matter how many CPUs are present or usable. The number directly tells the quality of life of tasks in the system or in a particular cgroup. (See the sketch after this list for a concrete comparison.)

2. The shortest averaging window is 1m, which is extremely coarse, and it's sampled in 5s intervals. A *lot* can happen on a CPU in 5 seconds. This *may* be able to identify persistent long-term trends and very clear and obvious overloads, but it's unusable for latency spikes and more subtle overutilization.

PSI's shortest window is 10s. It also exports the cumulative stall times (in microseconds) of synchronously recorded events.

3. On Linux, the load average for historical reasons includes all TASK_UNINTERRUPTIBLE tasks. This gives a broader sense of how busy the system is, but on the flipside it doesn't distinguish whether tasks are likely to contend over the CPU or over IO - which obviously requires very different interventions from a sysadmin or a job scheduler.

PSI reports independent metrics for CPU and IO. You can tell which resource is making the tasks wait, while in conjunction still seeing how overloaded the system is overall.
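To make point 1 above concrete, here is a purely illustrative sketch of what consuming each interface looks like; the normalization by CPU count is only a rough heuristic:

    import os

    # Load average: a raw count of active tasks. To interpret it you
    # must normalize by the number of CPUs the workload may actually use.
    with open("/proc/loadavg") as f:
        load1 = float(f.read().split()[0])
    print(f"load per CPU: {load1 / os.cpu_count():.2f}")  # >1.0 hints at contention

    # PSI: already a percentage of wallclock time spent waiting for a
    # CPU, regardless of how many CPUs are present or permitted.
    with open("/proc/pressure/cpu") as f:
        fields = f.readline().split()  # "some avg10=... avg60=... ..."
    kv = dict(pair.split("=") for pair in fields[1:])
    print(f"CPU some pressure (10s avg): {kv['avg10']}%")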
Q: What's the cost / performance impact of this feature?

A: PSI's primary cost is in the scheduler, in particular task wakeups and sleeps. I benchmarked this code using Facebook's two most scheduling-sensitive workloads: memcache and webserver. They handle a ton of small requests - lots of wakeups and sleeps with little actual work in between - so they tend to be canaries for scheduler regressions.

In the tests, the boxes were handling live traffic over the course of several hours. Half the machines, the control, ran with CONFIG_PSI=n.

For memcache I used eight machines total. They're 2-socket, 14-core, 56-thread boxes. The test runs for half the test period, flips the test and control kernels on the hardware to rule out HW factors, DC location etc., then runs the other half of the test. For the webservers, I used 32 machines total. They're single-socket, 16-core, 32-thread machines.

During the memcache test, CPU load was nopsi=78.05% psi=78.98% in the first half and nopsi=77.52% psi=78.25% in the second, so PSI added between 0.7 and 0.9 percentage points to the CPU load, a difference of about 1%.

UPDATE: I re-ran this test with the v3 version of this patch set and the CPU utilization was equivalent between test and control.

As far as end-to-end request latency from the client perspective goes, we don't sample those finely enough to capture the requests going to those particular machines during the test, but we know the p50 turnaround time in this workload is 54us, and perf bench sched pipe on those machines shows nopsi=5.232666 us/op and psi=5.587347 us/op, so this doesn't add much here either.

The profile for the pipe benchmark shows:

    0.87%  sched-pipe  [kernel.vmlinux]  [k] psi_group_change
    0.83%  perf.real   [kernel.vmlinux]  [k] psi_group_change
    0.82%  perf.real   [kernel.vmlinux]  [k] psi_task_change
    0.58%  sched-pipe  [kernel.vmlinux]  [k] psi_task_change

The webserver load runs inside 4 nested cgroup levels. The CPU load with both nopsi and psi kernels was indistinguishable at 81%.

For comparison, we had to disable the cgroup cpu controller on the webservers because it added 4 percentage points to the CPU% during this same exact test.

Versions of this accounting code now run on 80% of our fleet. None of our workloads have reported regressions during the rollout.

These patches are against v4.17. They're also maintained against upstream here:

    http://git.cmpxchg.org/cgit.cgi/linux-psi.git

 Documentation/accounting/psi.txt                |  73 +++
 Documentation/cgroup-v2.txt                     |  18 +
 arch/powerpc/platforms/cell/cpufreq_spudemand.c |   2 +-
 arch/powerpc/platforms/cell/spufs/sched.c       |   9 +-
 arch/s390/appldata/appldata_os.c                |   4 -
 drivers/cpuidle/governors/menu.c                |   4 -
 fs/proc/loadavg.c                               |   3 -
 include/linux/cgroup-defs.h                     |   4 +
 include/linux/cgroup.h                          |  15 +
 include/linux/delayacct.h                       |  23 +
 include/linux/mmzone.h                          |   1 +
 include/linux/page-flags.h                      |   5 +-
 include/linux/psi.h                             |  52 ++
 include/linux/psi_types.h                       |  87 +++
 include/linux/sched.h                           |  10 +
 include/linux/sched/loadavg.h                   |  24 +-
 include/linux/swap.h                            |   2 +-
 include/trace/events/mmflags.h                  |   1 +
 include/uapi/linux/taskstats.h                  |   6 +-
 init/Kconfig                                    |  19 +
 kernel/cgroup/cgroup.c                          |  45 +-
 kernel/debug/kdb/kdb_main.c                     |   7 +-
 kernel/delayacct.c                              |  15 +
 kernel/fork.c                                   |   4 +
 kernel/sched/Makefile                           |   1 +
 kernel/sched/core.c                             |  15 +-
 kernel/sched/loadavg.c                          | 139 ++---
 kernel/sched/psi.c                              | 720 ++++++++++++++++++++++
 kernel/sched/sched.h                            | 178 +++---
 kernel/sched/stats.h                            |  80 +++
 mm/compaction.c                                 |   5 +
 mm/filemap.c                                    |  27 +-
 mm/huge_memory.c                                |   1 +
 mm/memcontrol.c                                 |   2 +
 mm/migrate.c                                    |   2 +
 mm/page_alloc.c                                 |  10 +
 mm/swap_state.c                                 |   1 +
 mm/vmscan.c                                     |  14 +
 mm/vmstat.c                                     |   1 +
 mm/workingset.c                                 | 113 ++--
 tools/accounting/getdelays.c                    |   8 +-
 41 files changed, 1503 insertions(+), 247 deletions(-)

Changes in v2:
- Extensive documentation and comment update. Per everybody. In particular, I've added a much more detailed explanation of the SMP model, which caused some misunderstandings last time.
- Uninlined calc_load_n(), as it was just too fat. Per Peter.
- Split kernel/sched/stats.h churn into its own commit to avoid noise in the main patch and explain the reshuffle. Per Peter.
- Abstracted this_rq_lock_irq(). Per Peter.
- Eliminated cumulative clock drift error. Per Peter.
- Packed the per-cpu datastructure. Per Peter.
- Fixed 64-bit divisions on 32 bit. Per Peter.
- Added outer-most psi_disabled checks. Per Peter.
- Fixed some coding style issues. Per Peter.
- Fixed a bug in the lazy clock. Per Suren.
- On-demand stat aggregation when user reads. Per Suren.
- Fixed task state corruption on preemption race. Per Suren.
- Fixed a CONFIG_PSI=n build error.
- Minor cleanups, optimizations.

Changes in v3:
- Packed scheduler hotpath data into one cacheline, as per Peter and Linus
- Implemented live state aggregation without the rq lock, as per Peter
- do_div -> div64_ul and some other cleanups, as per Peter
- Dropped unnecessary SCHED_INFO dependency, as per Peter
- Realtime sampling period and slipped sample handling, as per Tejun
- Fixed 64-bit division on 32 bit & checkpatch warnings, as per Andrew