From patchwork Sun Sep 26 03:27:16 2021
X-Patchwork-Submitter: Zhaoyang Huang
X-Patchwork-Id: 12517905
From: Huangzhaoyang
To: Johannes Weiner, Zhaoyang Huang, linux-mm@kvack.org, linux-kernel@vger.kernel.org, xuewen.yan@unisoc.com, ke.wang@unisoc.com
Subject: [Resend PATCH] psi: calc cfs task memstall time more precisely
Date: Sun, 26 Sep 2021 11:27:16 +0800
Message-Id: <1632626836-27923-1-git-send-email-huangzhaoyang@gmail.com>
X-Mailer: git-send-email 1.7.9.5

From: Zhaoyang Huang

In an EAS enabled system, two scenarios are at odds with the current design:

1. The workload tends to be distributed unevenly across cores by the
   scheduler policy, and RT tasks usually preempt CFS tasks on the little
   cores.
2. A CFS task's memstall time is currently counted simply as exit - entry,
   which ignores the time during which the task was preempted by RT, DL
   and IRQs.

Under these constraints, the per-cpu non-idle time is mostly consumed by
non-CFS tasks and cannot be averaged meaningfully. Eliminate that skew by
calculating the time growth via the proportion of the cfs_rq's utilization
on the whole rq.

Here is an example of the scenario this commit wants to fix: RT and IRQ
consume part of the whole rq's utilization. This is typical on a core that
is assigned to handle all IRQs; furthermore, under EAS the RT task usually
runs on a little core.
Binder:305_3-314 [002] d..1 257.880195: psi_memtime_fixup: original:30616,adjusted:25951,se:89,cfs:353,rt:139,dl:0,irq:18
droid.phone-1525 [001] d..1 265.145492: psi_memtime_fixup: original:61616,adjusted:53492,se:55,cfs:225,rt:121,dl:0,irq:15

Signed-off-by: Zhaoyang Huang
---
 kernel/sched/psi.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index cc25a3c..754a836 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -182,6 +182,8 @@ struct psi_group psi_system = {
 
 static void psi_avgs_work(struct work_struct *work);
 
+static unsigned long psi_memtime_fixup(u32 growth);
+
 static void group_init(struct psi_group *group)
 {
 	int cpu;
@@ -492,6 +494,21 @@ static u64 window_update(struct psi_window *win, u64 now, u64 value)
 	return growth;
 }
 
+static unsigned long psi_memtime_fixup(u32 growth)
+{
+	struct rq *rq = task_rq(current);
+	unsigned long growth_fixed = (unsigned long)growth;
+
+	if (!(current->policy == SCHED_NORMAL || current->policy == SCHED_BATCH))
+		return growth_fixed;
+
+	if (current->in_memstall)
+		growth_fixed = div64_ul((1024 - rq->avg_rt.util_avg - rq->avg_dl.util_avg
+				- rq->avg_irq.util_avg + 1) * growth, 1024);
+
+	return growth_fixed;
+}
+
 static void init_triggers(struct psi_group *group, u64 now)
 {
 	struct psi_trigger *t;
@@ -658,6 +675,7 @@ static void record_times(struct psi_group_cpu *groupc, u64 now)
 	}
 
 	if (groupc->state_mask & (1 << PSI_MEM_SOME)) {
+		delta = psi_memtime_fixup(delta);
 		groupc->times[PSI_MEM_SOME] += delta;
 		if (groupc->state_mask & (1 << PSI_MEM_FULL))
 			groupc->times[PSI_MEM_FULL] += delta;
@@ -928,8 +946,8 @@ void psi_memstall_leave(unsigned long *flags)
 	 */
 	rq = this_rq_lock_irq(&rf);
 
-	current->in_memstall = 0;
 	psi_task_change(current, TSK_MEMSTALL, 0);
+	current->in_memstall = 0;
 
 	rq_unlock_irq(rq, &rf);
 }
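For reference, below is a small standalone C sketch (not part of the patch;
scale_memstall() is a hypothetical userspace stand-in) that mirrors the
arithmetic psi_memtime_fixup() performs and reproduces the adjusted values
from the trace above, assuming the rt/dl/irq fields in the trace are
snapshots of the rq's avg_rt/avg_dl/avg_irq.util_avg:

/* Illustrative userspace sketch only; mirrors the scaling in the patch. */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Scale raw memstall growth by the share of rq capacity left to CFS. */
static unsigned long scale_memstall(unsigned long growth, unsigned long util_rt,
				    unsigned long util_dl, unsigned long util_irq)
{
	unsigned long cfs_share = SCHED_CAPACITY_SCALE - util_rt - util_dl
				  - util_irq + 1;

	return cfs_share * growth / SCHED_CAPACITY_SCALE;
}

int main(void)
{
	/* first trace line: original:30616, rt:139, dl:0, irq:18 -> prints 25951 */
	printf("%lu\n", scale_memstall(30616, 139, 0, 18));
	/* second trace line: original:61616, rt:121, dl:0, irq:15 -> prints 53492 */
	printf("%lu\n", scale_memstall(61616, 121, 0, 15));
	return 0;
}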