From patchwork Fri Apr 21 14:17:15 2023
X-Patchwork-Submitter: Peter Newman
X-Patchwork-Id: 13220129
Date: Fri, 21 Apr 2023 16:17:15 +0200
In-Reply-To: <20230421141723.2405942-1-peternewman@google.com>
References: <20230421141723.2405942-1-peternewman@google.com>
Message-ID: <20230421141723.2405942-2-peternewman@google.com>
Subject: [PATCH v1 1/9] selftests/resctrl: Verify all RMIDs count together
From: Peter Newman
To: Fenghua Yu, Reinette Chatre
Cc: Babu Moger, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Stephane Eranian,
    James Morse, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Peter Newman
List-ID: linux-kselftest@vger.kernel.org

AMD CPUs in particular implement fewer monitors than RMIDs, so add a
test case to see if a large number of monitoring groups can be measured
together.
Signed-off-by: Peter Newman
---
 tools/testing/selftests/resctrl/test_rmids.sh | 93 +++++++++++++++++++
 1 file changed, 93 insertions(+)
 create mode 100755 tools/testing/selftests/resctrl/test_rmids.sh

diff --git a/tools/testing/selftests/resctrl/test_rmids.sh b/tools/testing/selftests/resctrl/test_rmids.sh
new file mode 100755
index 000000000000..475e69c0217e
--- /dev/null
+++ b/tools/testing/selftests/resctrl/test_rmids.sh
@@ -0,0 +1,93 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+cd /sys/fs/resctrl
+
+grep -q mbm_total_bytes info/L3_MON/mon_features || {
+	echo "MBM required"
+	exit 4
+}
+
+which perf > /dev/null || {
+	echo "perf tool required"
+	exit 4
+}
+
+num_rmids=$(cat info/L3_MON/num_rmids)
+
+count=0
+
+result=0
+
+# use as many RMIDs as possible, up to the number of RMIDs
+for i in `seq $num_rmids`; do
+	mkdir mon_groups/_test_m$((count+1)) 2> /dev/null || break
+	if [[ -d mon_groups/_test_m$((count+1))/mon_data ]]; then
+		count=$((count+1))
+	else
+		break;
+	fi
+done
+
+echo "Created $count monitoring groups."
+
+if [[ $count -eq 0 ]]; then
+	echo "need monitoring groups to continue."
+	exit 4
+fi
+
+declare -a bytes_array
+
+unavailable=0
+unavailable0=0
+
+for i in `seq $count`; do
+	bytes_array[$i]=$(cat mon_groups/_test_m${i}/mon_data/mon_L3_00/mbm_total_bytes)
+
+	if [[ "${bytes_array[$i]}" = "Unavailable" ]]; then
+		unavailable0=$((unavailable0 + 1))
+	fi
+done
+
+for i in `seq $count`; do
+	echo $$ > mon_groups/_test_m${i}/tasks
+	taskset 0x1 perf bench mem memcpy -s 100MB -f default > /dev/null
+done
+echo $$ > tasks
+
+# zero non-integer values
+declare -i bytes bytes0
+
+success_count=0
+
+for i in `seq $count`; do
+	raw_bytes=$(cat mon_groups/_test_m${i}/mon_data/mon_L3_00/mbm_total_bytes)
+	raw_bytes0=${bytes_array[$i]}
+
+	# coerce the value to an integer for math
+	bytes=$raw_bytes
+	bytes0=$raw_bytes0
+
+	echo -n "g${i}: mbm_total_bytes: $raw_bytes0 -> $raw_bytes"
+
+	if [[ "$raw_bytes" = "Unavailable" ]]; then
+		unavailable=$((unavailable + 1))
+	fi
+
+	if [[ $bytes -gt $bytes0 ]]; then
+		success_count=$((success_count+1))
+		echo ""
+	else
+		echo " (FAIL)"
+		result=1
+	fi
+done
+
+first=$((count-unavailable0))
+second=$((count-unavailable))
+echo "$count groups, $first returned counts in first pass, $second in second"
+echo "successfully measured bandwidth from ${success_count}/${count} groups"
+
+rmdir mon_groups/_test_m*
+
+exit $result

From patchwork Fri Apr 21 14:17:16 2023
X-Patchwork-Submitter: Peter Newman
X-Patchwork-Id: 13220130
Date: Fri, 21 Apr 2023 16:17:16 +0200
In-Reply-To: <20230421141723.2405942-1-peternewman@google.com>
References: <20230421141723.2405942-1-peternewman@google.com>
Message-ID: <20230421141723.2405942-3-peternewman@google.com>
Subject: [PATCH v1 2/9] x86/resctrl: Hold a spinlock in __rmid_read() on AMD
From: Peter Newman
To: Fenghua Yu, Reinette Chatre
Cc: Babu Moger, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Stephane Eranian,
    James Morse, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Peter Newman
List-ID: linux-kselftest@vger.kernel.org

From: Stephane Eranian

In AMD PQoS Versions 1.0 and 2.0, the IA32_QM_EVTSEL MSR is shared by
all processors in a QOS domain, so a processor can read back a
different event's count when two processors program and read the
counter concurrently. Add a spinlock to prevent this race.

Co-developed-by: Peter Newman
Signed-off-by: Peter Newman
Signed-off-by: Stephane Eranian
---
 arch/x86/kernel/cpu/resctrl/core.c     | 41 ++++++++++++++++++++++++++
 arch/x86/kernel/cpu/resctrl/internal.h |  5 ++++
 arch/x86/kernel/cpu/resctrl/monitor.c  | 14 +++++++--
 3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 030d3b409768..47b1c37a81f8 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -25,6 +25,8 @@
 #include
 #include "internal.h"
 
+DEFINE_STATIC_KEY_FALSE(rmid_read_locked);
+
 /* Mutex to protect rdtgroup access. */
 DEFINE_MUTEX(rdtgroup_mutex);
@@ -529,6 +531,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
 	d->id = id;
 	cpumask_set_cpu(cpu, &d->cpu_mask);
 
+	raw_spin_lock_init(&hw_dom->evtsel_lock);
+
 	rdt_domain_reconfigure_cdp(r);
 
 	if (r->alloc_capable && domain_setup_ctrlval(r, d)) {
@@ -829,6 +833,41 @@ static __init bool get_rdt_mon_resources(void)
 	return !rdt_get_mon_l3_config(r);
 }
 
+static __init bool amd_shared_qm_evtsel(void)
+{
+	/*
+	 * From AMD64 Technology Platform Quality of Service Extensions,
+	 * Revision 1.03:
+	 *
+	 * "For PQoS Version 1.0 and 2.0, as identified by Family/Model, the
+	 * QM_EVTSEL register is shared by all the processors in a QOS domain."
+	 *
+	 * Check the inclusive Family/Model ranges for PQoS Extension versions
+	 * 1.0 and 2.0 from the PQoS Extension Versions table.
+	 */
+	if (boot_cpu_data.x86 == 0x17)
+		/* V1.0 */
+		return boot_cpu_data.x86_model >= 0x30 &&
+		       boot_cpu_data.x86_model <= 0x9f;
+
+	if (boot_cpu_data.x86 == 0x19)
+		/* V2.0 */
+		return (boot_cpu_data.x86_model <= 0xf) ||
+		       ((boot_cpu_data.x86_model >= 0x20) &&
+			(boot_cpu_data.x86_model <= 0x5f));
+
+	return false;
+}
+
+static __init void __check_quirks_amd(void)
+{
+	if (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL) ||
+	    rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL)) {
+		if (amd_shared_qm_evtsel())
+			static_branch_enable(&rmid_read_locked);
+	}
+}
+
 static __init void __check_quirks_intel(void)
 {
 	switch (boot_cpu_data.x86_model) {
@@ -852,6 +891,8 @@ static __init void check_quirks(void)
 {
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
 		__check_quirks_intel();
+	else if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
+		__check_quirks_amd();
 }
 
 static __init bool get_rdt_resources(void)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 85ceaf9a31ac..02a062558c67 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -325,6 +325,7 @@ struct arch_mbm_state {
  * @ctrl_val:	array of cache or mem ctrl values (indexed by CLOSID)
  * @arch_mbm_total:	arch private state for MBM total bandwidth
  * @arch_mbm_local:	arch private state for MBM local bandwidth
+ * @lock:	serializes counter reads when QM_EVTSEL MSR is shared per-domain
  *
  * Members of this structure are accessed via helpers that provide abstraction.
  */
@@ -333,6 +334,7 @@ struct rdt_hw_domain {
 	u32 *ctrl_val;
 	struct arch_mbm_state *arch_mbm_total;
 	struct arch_mbm_state *arch_mbm_local;
+	raw_spinlock_t evtsel_lock;
 };
 
 static inline struct rdt_hw_domain *resctrl_to_arch_dom(struct rdt_domain *r)
@@ -428,6 +430,9 @@ extern struct rdt_hw_resource rdt_resources_all[];
 extern struct rdtgroup rdtgroup_default;
 DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
 
+/* Serialization required in resctrl_arch_rmid_read(). */
+DECLARE_STATIC_KEY_FALSE(rmid_read_locked);
+
 extern struct dentry *debugfs_resctrl;
 
 enum resctrl_res_level {
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 20952419be75..2de8397f91cd 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -146,10 +146,15 @@ static inline struct rmid_entry *__rmid_entry(u32 rmid)
 	return entry;
 }
 
-static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
+static int __rmid_read(struct rdt_hw_domain *hw_dom, u32 rmid,
+		       enum resctrl_event_id eventid, u64 *val)
 {
+	unsigned long flags;
 	u64 msr_val;
 
+	if (static_branch_likely(&rmid_read_locked))
+		raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);
+
 	/*
 	 * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
 	 * with a valid event code for supported resource type and the bits
@@ -161,6 +166,9 @@ static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
 	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
 	rdmsrl(MSR_IA32_QM_CTR, msr_val);
 
+	if (static_branch_likely(&rmid_read_locked))
+		raw_spin_unlock_irqrestore(&hw_dom->evtsel_lock, flags);
+
 	if (msr_val & RMID_VAL_ERROR)
 		return -EIO;
 	if (msr_val & RMID_VAL_UNAVAIL)
@@ -200,7 +208,7 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
 		memset(am, 0, sizeof(*am));
 
 		/* Record any initial, non-zero count value. */
-		__rmid_read(rmid, eventid, &am->prev_msr);
+		__rmid_read(hw_dom, rmid, eventid, &am->prev_msr);
 	}
 }
 
@@ -241,7 +249,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
 	if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
 		return -EINVAL;
 
-	ret = __rmid_read(rmid, eventid, &msr_val);
+	ret = __rmid_read(hw_dom, rmid, eventid, &msr_val);
 	if (ret)
 		return ret;
From patchwork Fri Apr 21 14:17:17 2023
X-Patchwork-Submitter: Peter Newman
X-Patchwork-Id: 13220131
Date: Fri, 21 Apr 2023 16:17:17 +0200
In-Reply-To: <20230421141723.2405942-1-peternewman@google.com>
References: <20230421141723.2405942-1-peternewman@google.com>
Message-ID: <20230421141723.2405942-4-peternewman@google.com>
Subject: [PATCH v1 3/9] x86/resctrl: Add resctrl_mbm_flush_cpu() to collect CPUs' MBM events
From: Peter Newman
To: Fenghua Yu, Reinette Chatre
Cc: Babu Moger, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Stephane Eranian,
    James Morse, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Peter Newman
List-ID: linux-kselftest@vger.kernel.org

AMD implementations so far are only guaranteed to provide MBM event
counts for RMIDs which are currently assigned in CPUs' PQR_ASSOC MSRs.
Hardware can reallocate the counter resources of all other RMIDs, which
are not currently assigned, to those which are, zeroing the event
counts of the unassigned RMIDs.

In practice, this makes it impossible to simultaneously calculate the
memory bandwidth of all RMIDs on a busy system where all RMIDs are in
use. Over a multiple-second measurement window, an RMID would need to
remain assigned in all of the L3 cache domains where it has been
assigned for the duration of the measurement, otherwise portions of the
final count will be zero. In general, it is not possible to bound the
number of RMIDs which will be assigned in an L3 domain over any
interval of time.

To provide reliable MBM counts on such systems, introduce "soft" RMIDs:
when enabled, each CPU is permanently assigned a hardware RMID whose
event counts are flushed to the current soft RMID during context
switches which change the soft RMID, as well as whenever userspace
requests the current event count for a domain.

Implement resctrl_mbm_flush_cpu(), which collects a domain's current
MBM event counts into its current software RMID. The delta for each CPU
is determined by tracking the previous event counts in per-CPU data.
The software byte counts reside in the arch-independent mbm_state
structures.
Co-developed-by: Stephane Eranian
Signed-off-by: Stephane Eranian
Signed-off-by: Peter Newman
---
 arch/x86/include/asm/resctrl.h         |  2 +
 arch/x86/kernel/cpu/resctrl/internal.h | 10 ++--
 arch/x86/kernel/cpu/resctrl/monitor.c  | 78 ++++++++++++++++++++++++++
 3 files changed, 86 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index 255a78d9d906..e7acf118d770 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -13,6 +13,7 @@
  * @cur_closid: The cached Class Of Service ID
  * @default_rmid: The user assigned Resource Monitoring ID
  * @default_closid: The user assigned cached Class Of Service ID
+ * @hw_rmid: The permanently-assigned RMID when soft RMIDs are in use
  *
  * The upper 32 bits of MSR_IA32_PQR_ASSOC contain closid and the
  * lower 10 bits rmid. The update to MSR_IA32_PQR_ASSOC always
@@ -27,6 +28,7 @@ struct resctrl_pqr_state {
 	u32 cur_closid;
 	u32 default_rmid;
 	u32 default_closid;
+	u32 hw_rmid;
 };
 
 DECLARE_PER_CPU(struct resctrl_pqr_state, pqr_state);
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 02a062558c67..256eee05d447 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -298,12 +298,14 @@ struct rftype {
  * @prev_bw:	The most recent bandwidth in MBps
  * @delta_bw:	Difference between the current and previous bandwidth
  * @delta_comp:	Indicates whether to compute the delta_bw
+ * @soft_rmid_bytes: Recent bandwidth count in bytes when using soft RMIDs
  */
 struct mbm_state {
-	u64 prev_bw_bytes;
-	u32 prev_bw;
-	u32 delta_bw;
-	bool delta_comp;
+	u64 prev_bw_bytes;
+	u32 prev_bw;
+	u32 delta_bw;
+	bool delta_comp;
+	atomic64_t soft_rmid_bytes;
 };
 
 /**
diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 2de8397f91cd..3671100d3cc7 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -404,6 +404,84 @@ static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 rmid,
 	}
 }
 
+struct mbm_soft_counter {
+	u64 prev_bytes;
+	bool initialized;
+};
+
+struct mbm_flush_state {
+	struct mbm_soft_counter local;
+	struct mbm_soft_counter total;
+};
+
+DEFINE_PER_CPU(struct mbm_flush_state, flush_state);
+
+/*
+ * flushes the value of the cpu_rmid to the current soft rmid
+ */
+static void __mbm_flush(int evtid, struct rdt_resource *r, struct rdt_domain *d)
+{
+	struct mbm_flush_state *state = this_cpu_ptr(&flush_state);
+	u32 soft_rmid = this_cpu_ptr(&pqr_state)->cur_rmid;
+	u32 hw_rmid = this_cpu_ptr(&pqr_state)->hw_rmid;
+	struct mbm_soft_counter *counter;
+	struct mbm_state *m;
+	u64 val;
+
+	/* cache occupancy events are disabled in this mode */
+	WARN_ON(!is_mbm_event(evtid));
+
+	if (evtid == QOS_L3_MBM_LOCAL_EVENT_ID) {
+		counter = &state->local;
+	} else {
+		WARN_ON(evtid != QOS_L3_MBM_TOTAL_EVENT_ID);
+		counter = &state->total;
+	}
+
+	/*
+	 * Propagate the value read from the hw_rmid assigned to the current CPU
+	 * into the "soft" rmid associated with the current task or CPU.
+	 */
+	m = get_mbm_state(d, soft_rmid, evtid);
+	if (!m)
+		return;
+
+	if (resctrl_arch_rmid_read(r, d, hw_rmid, evtid, &val))
+		return;
+
+	/* Count bandwidth after the first successful counter read. */
+	if (counter->initialized) {
+		/* Assume that mbm_update() will prevent double-overflows. */
+		if (val != counter->prev_bytes)
+			atomic64_add(val - counter->prev_bytes,
+				     &m->soft_rmid_bytes);
+	} else {
+		counter->initialized = true;
+	}
+
+	counter->prev_bytes = val;
+}
+
+/*
+ * Called from context switch code __resctrl_sched_in() when the current soft
+ * RMID is changing or before reporting event counts to user space.
+ */
+void resctrl_mbm_flush_cpu(void)
+{
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	int cpu = smp_processor_id();
+	struct rdt_domain *d;
+
+	d = get_domain_from_cpu(cpu, r);
+	if (!d)
+		return;
+
+	if (is_mbm_local_enabled())
+		__mbm_flush(QOS_L3_MBM_LOCAL_EVENT_ID, r, d);
+	if (is_mbm_total_enabled())
+		__mbm_flush(QOS_L3_MBM_TOTAL_EVENT_ID, r, d);
+}
+
 static int __mon_event_count(u32 rmid, struct rmid_read *rr)
 {
 	struct mbm_state *m;
From patchwork Fri Apr 21 14:17:18 2023
X-Patchwork-Submitter: Peter Newman
X-Patchwork-Id: 13220133
Date: Fri, 21 Apr 2023 16:17:18 +0200
In-Reply-To: <20230421141723.2405942-1-peternewman@google.com>
References: <20230421141723.2405942-1-peternewman@google.com>
Message-ID: <20230421141723.2405942-5-peternewman@google.com>
Subject: [PATCH v1 4/9] x86/resctrl: Flush MBM event counts on soft RMID change
From: Peter Newman
To: Fenghua Yu, Reinette Chatre
Cc: Babu Moger, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Stephane Eranian,
    James Morse, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Peter Newman
List-ID: linux-kselftest@vger.kernel.org

To implement soft RMIDs, the context switch path must detect when the
current soft RMID is changing and, if so, flush the CPU's MBM event
counts to the outgoing soft RMID.

To avoid impacting context switch performance in the non-soft-RMID
case, protect the new logic with a static branch.

Co-developed-by: Stephane Eranian
Signed-off-by: Stephane Eranian
Signed-off-by: Peter Newman
---
 arch/x86/include/asm/resctrl.h         | 27 +++++++++++++++++++++++++-
 arch/x86/kernel/cpu/resctrl/rdtgroup.c |  1 +
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index e7acf118d770..50d05e883dbb 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -36,6 +36,9 @@ DECLARE_PER_CPU(struct resctrl_pqr_state, pqr_state);
 DECLARE_STATIC_KEY_FALSE(rdt_enable_key);
 DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
 DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
+DECLARE_STATIC_KEY_FALSE(rdt_soft_rmid_enable_key);
+
+void resctrl_mbm_flush_cpu(void);
 
 /*
  * __resctrl_sched_in() - Writes the task's CLOSid/RMID to IA32_PQR_MSR
@@ -75,9 +78,31 @@ static inline void __resctrl_sched_in(struct task_struct *tsk)
 	}
 
 	if (closid != state->cur_closid || rmid != state->cur_rmid) {
+		if (static_branch_likely(&rdt_soft_rmid_enable_key)) {
+			/*
+			 * Flush current event counts to outgoing soft rmid
+			 * when it changes.
+			 */
+			if (rmid != state->cur_rmid)
+				resctrl_mbm_flush_cpu();
+
+			/*
+			 * rmid never changes in this mode, so skip wrmsr if the
+			 * closid is not changing.
+			 */
+			if (closid != state->cur_closid)
+				wrmsr(MSR_IA32_PQR_ASSOC, state->hw_rmid,
+				      closid);
+		} else {
+			wrmsr(MSR_IA32_PQR_ASSOC, rmid, closid);
+		}
+
+		/*
+		 * Record new closid/rmid last so soft rmid case can detect
+		 * changes.
+		 */
 		state->cur_closid = closid;
 		state->cur_rmid = rmid;
-		wrmsr(MSR_IA32_PQR_ASSOC, rmid, closid);
 	}
 }
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 6ad33f355861..c10f4798156a 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -35,6 +35,7 @@
 DEFINE_STATIC_KEY_FALSE(rdt_enable_key);
 DEFINE_STATIC_KEY_FALSE(rdt_mon_enable_key);
 DEFINE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
+DEFINE_STATIC_KEY_FALSE(rdt_soft_rmid_enable_key);
 
 static struct kernfs_root *rdt_root;
 struct rdtgroup rdtgroup_default;
 LIST_HEAD(rdt_all_groups);
From patchwork Fri Apr 21 14:17:19 2023
Date: Fri, 21 Apr 2023 16:17:19 +0200
Message-ID: <20230421141723.2405942-6-peternewman@google.com>
Subject: [PATCH v1 5/9] x86/resctrl: Call mon_event_count() directly for soft RMIDs
From: Peter Newman

There is no point in using IPIs to call mon_event_count() when it is only
reading software counters from memory. When RMIDs are soft,
mon_event_read() just calls mon_event_count() directly.

Signed-off-by: Peter Newman
---
 arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 9 ++++++++-
 arch/x86/kernel/cpu/resctrl/internal.h    | 1 +
 arch/x86/kernel/cpu/resctrl/monitor.c     | 5 +++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index b44c487727d4..b2ed25a08f6f 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -534,7 +534,14 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
 	rr->val = 0;
 	rr->first = first;
 
-	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
+	if (rdt_mon_soft_rmid)
+		/*
+		 * Soft RMID counters reside in memory, so they can be read from
+		 * anywhere.
+		 */
+		mon_event_count(rr);
+	else
+		smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
 }
 
 int rdtgroup_mondata_show(struct seq_file *m, void *arg)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 256eee05d447..e6ff31a4dbc4 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -115,6 +115,7 @@ struct rmid_read {
 
 extern bool rdt_alloc_capable;
 extern bool rdt_mon_capable;
+extern bool rdt_mon_soft_rmid;
 extern unsigned int rdt_mon_features;
 extern struct list_head resctrl_schema_all;

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 3671100d3cc7..bb857eefa3b0 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -57,6 +57,11 @@ static struct rmid_entry *rmid_ptrs;
  */
 bool rdt_mon_capable;
 
+/*
+ * Global boolean to indicate when RMIDs are implemented in software.
+ */
+bool rdt_mon_soft_rmid;
+
 /*
  * Global to indicate which monitoring events are enabled.
  */
From patchwork Fri Apr 21 14:17:20 2023
Date: Fri, 21 Apr 2023 16:17:20 +0200
Message-ID: <20230421141723.2405942-7-peternewman@google.com>
Subject: [PATCH v1 6/9] x86/resctrl: Create soft RMID version of __mon_event_count()
From: Peter Newman

When RMIDs are soft, __mon_event_count() only needs to report the current
byte count in memory and should not touch the hardware RMIDs.
Create a parallel version for the soft RMID configuration and update
__mon_event_count() to choose between it and the original depending on
whether the soft RMID static key is enabled.

Signed-off-by: Peter Newman
---
 arch/x86/kernel/cpu/resctrl/monitor.c | 33 ++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index bb857eefa3b0..3d54a634471a 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -487,7 +487,30 @@ void resctrl_mbm_flush_cpu(void)
 	__mbm_flush(QOS_L3_MBM_TOTAL_EVENT_ID, r, d);
 }
 
-static int __mon_event_count(u32 rmid, struct rmid_read *rr)
+static int __mon_event_count_soft_rmid(u32 rmid, struct rmid_read *rr)
+{
+	struct mbm_state *m;
+
+	WARN_ON(!is_mbm_event(rr->evtid));
+	m = get_mbm_state(rr->d, rmid, rr->evtid);
+	if (!m)
+		/* implies !is_mbm_event(...) */
+		return -1;
+
+	rr->val += atomic64_read(&m->soft_rmid_bytes);
+
+	if (rr->first) {
+		/*
+		 * Discard any bandwidth resulting from the initial HW counter
+		 * reads.
+		 */
+		atomic64_set(&m->soft_rmid_bytes, 0);
+	}
+
+	return 0;
+}
+
+static int __mon_event_count_default(u32 rmid, struct rmid_read *rr)
 {
 	struct mbm_state *m;
 	u64 tval = 0;
@@ -509,6 +532,14 @@ static int __mon_event_count(u32 rmid, struct rmid_read *rr)
 	return 0;
 }
 
+static int __mon_event_count(u32 rmid, struct rmid_read *rr)
+{
+	if (rdt_mon_soft_rmid)
+		return __mon_event_count_soft_rmid(rmid, rr);
+	else
+		return __mon_event_count_default(rmid, rr);
+}
+
 /*
  * mbm_bw_count() - Update bw count from values previously read by
  * __mon_event_count().
From patchwork Fri Apr 21 14:17:21 2023
Date: Fri, 21 Apr 2023 16:17:21 +0200
Message-ID: <20230421141723.2405942-8-peternewman@google.com>
Subject: [PATCH v1 7/9] x86/resctrl: Assign HW RMIDs to CPUs for soft RMID
From: Peter Newman

To implement soft RMIDs, each CPU needs a HW RMID that is unique within
its L3 cache domain. This is the minimum number of RMIDs needed to monitor
all CPUs.

This is accomplished by determining the rank of each CPU's mask bit within
its L3 shared_cpu_mask in resctrl_online_cpu().
Signed-off-by: Peter Newman
---
 arch/x86/kernel/cpu/resctrl/core.c | 39 +++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
index 47b1c37a81f8..b0d873231b1e 100644
--- a/arch/x86/kernel/cpu/resctrl/core.c
+++ b/arch/x86/kernel/cpu/resctrl/core.c
@@ -596,6 +596,38 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
 	}
 }
 
+/* Assign each CPU an RMID that is unique within its cache domain. */
+static u32 determine_hw_rmid_for_cpu(int cpu)
+{
+	struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
+	struct cacheinfo *l3ci = NULL;
+	u32 rmid;
+	int i;
+
+	/* Locate the cacheinfo for this CPU's L3 cache. */
+	for (i = 0; i < ci->num_leaves; i++) {
+		if (ci->info_list[i].level == 3 &&
+		    (ci->info_list[i].attributes & CACHE_ID)) {
+			l3ci = &ci->info_list[i];
+			break;
+		}
+	}
+	WARN_ON(!l3ci);
+
+	if (!l3ci)
+		return 0;
+
+	/* Use the position of cpu in its shared_cpu_mask as its RMID. */
+	rmid = 0;
+	for_each_cpu(i, &l3ci->shared_cpu_map) {
+		if (i == cpu)
+			break;
+		rmid++;
+	}
+
+	return rmid;
+}
+
 static void clear_closid_rmid(int cpu)
 {
 	struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
@@ -604,7 +636,12 @@ static void clear_closid_rmid(int cpu)
 	state->default_rmid = 0;
 	state->cur_closid = 0;
 	state->cur_rmid = 0;
-	wrmsr(MSR_IA32_PQR_ASSOC, 0, 0);
+	state->hw_rmid = 0;
+
+	if (static_branch_likely(&rdt_soft_rmid_enable_key))
+		state->hw_rmid = determine_hw_rmid_for_cpu(cpu);
+
+	wrmsr(MSR_IA32_PQR_ASSOC, state->hw_rmid, 0);
 }
 
 static int resctrl_online_cpu(unsigned int cpu)
From patchwork Fri Apr 21 14:17:22 2023
Date: Fri, 21 Apr 2023 16:17:22 +0200
Message-ID: <20230421141723.2405942-9-peternewman@google.com>
Subject: [PATCH v1 8/9] x86/resctrl: Use mbm_update() to push soft RMID counts
From: Peter Newman

__mon_event_count() only reads the current software count and does not
cause the CPUs in the domain to flush. For mbm_update() to be effective in
preventing overflow in the hardware counters with soft RMIDs, it needs to
flush the domain CPUs so that all of the HW RMIDs are read.

When RMIDs are soft, mbm_update() is intended to push bandwidth counts to
the software counters rather than pulling the counts from hardware when
userspace reads event counts, as this is a lot more efficient when the
number of HW RMIDs is fixed. Therefore mbm_update() only calls
mbm_flush_cpu_handler() on each CPU in the domain rather than reading all
RMIDs.

Signed-off-by: Peter Newman
---
 arch/x86/kernel/cpu/resctrl/monitor.c | 28 +++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 3d54a634471a..9575cb79b8ee 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -487,6 +487,11 @@ void resctrl_mbm_flush_cpu(void)
 	__mbm_flush(QOS_L3_MBM_TOTAL_EVENT_ID, r, d);
 }
 
+static void mbm_flush_cpu_handler(void *p)
+{
+	resctrl_mbm_flush_cpu();
+}
+
 static int __mon_event_count_soft_rmid(u32 rmid, struct rmid_read *rr)
 {
 	struct mbm_state *m;
@@ -806,12 +811,27 @@ void mbm_handle_overflow(struct work_struct *work)
 	r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
 	d = container_of(work, struct rdt_domain, mbm_over.work);
 
+	if (rdt_mon_soft_rmid) {
+		/*
+		 * HW RMIDs are permanently assigned to CPUs, so only a per-CPU
+		 * flush is needed.
+		 */
+		on_each_cpu_mask(&d->cpu_mask, mbm_flush_cpu_handler, NULL,
+				 false);
+	}
+
 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
-		mbm_update(r, d, prgrp->mon.rmid);
+		/*
+		 * mbm_update() on every RMID would result in excessive IPIs
+		 * when RMIDs are soft.
+		 */
+		if (!rdt_mon_soft_rmid) {
+			mbm_update(r, d, prgrp->mon.rmid);
 
-		head = &prgrp->mon.crdtgrp_list;
-		list_for_each_entry(crgrp, head, mon.crdtgrp_list)
-			mbm_update(r, d, crgrp->mon.rmid);
+			head = &prgrp->mon.crdtgrp_list;
+			list_for_each_entry(crgrp, head, mon.crdtgrp_list)
+				mbm_update(r, d, crgrp->mon.rmid);
+		}
 
 		if (is_mba_sc(NULL))
 			update_mba_bw(prgrp, d);
From patchwork Fri Apr 21 14:17:23 2023
Date: Fri, 21 Apr 2023 16:17:23 +0200
Message-ID: <20230421141723.2405942-10-peternewman@google.com>
Subject: [PATCH v1 9/9] x86/resctrl: Add mount option to enable soft RMID
From: Peter Newman

Add the 'mbm_soft_rmid' mount option to enable soft RMIDs.

This requires adding a mechanism for disabling a monitoring event at mount
time to prevent the llc_occupancy event from being presented to the user.

Signed-off-by: Peter Newman
---
 arch/x86/kernel/cpu/resctrl/internal.h |  3 ++
 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 51 ++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index e6ff31a4dbc4..604e3d550601 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -59,6 +59,7 @@ struct rdt_fs_context {
 	bool enable_cdpl2;
 	bool enable_cdpl3;
 	bool enable_mba_mbps;
+	bool enable_mbm_soft_rmid;
 };
 
 static inline struct rdt_fs_context *rdt_fc2context(struct fs_context *fc)
@@ -76,12 +77,14 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
  * @evtid: event id
  * @name: name of the event
  * @configurable: true if the event is configurable
+ * @disabled: true if the event is disabled
  * @list: entry in &rdt_resource->evt_list
  */
 struct mon_evt {
 	enum resctrl_event_id evtid;
 	char *name;
 	bool configurable;
+	bool disabled;
 	struct list_head list;
 };

diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index c10f4798156a..c2abf69c2dcf 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1013,6 +1013,8 @@ static int rdt_mon_features_show(struct kernfs_open_file *of,
 	struct mon_evt *mevt;
 
 	list_for_each_entry(mevt, &r->evt_list, list) {
+		if (mevt->disabled)
+			continue;
 		seq_printf(seq, "%s\n", mevt->name);
 		if (mevt->configurable)
 			seq_printf(seq, "%s_config\n", mevt->name);
@@ -2204,6 +2206,37 @@ static bool supports_mba_mbps(void)
 		r->alloc_capable && is_mba_linear());
 }
 
+static bool supports_mbm_soft_rmid(void)
+{
+	return is_mbm_enabled();
+}
+
+int set_mbm_soft_rmid(bool mbm_soft_rmid)
+{
+	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+	struct mon_evt *mevt = NULL;
+
+	/*
+	 * is_llc_occupancy_enabled() will always return false when disabling,
+	 * so search for the llc_occupancy event unconditionally.
+	 */
+	list_for_each_entry(mevt, &r->evt_list, list) {
+		if (strcmp(mevt->name, "llc_occupancy") == 0) {
+			mevt->disabled = mbm_soft_rmid;
+			break;
+		}
+	}
+
+	rdt_mon_soft_rmid = mbm_soft_rmid;
+
+	if (mbm_soft_rmid)
+		static_branch_enable_cpuslocked(&rdt_soft_rmid_enable_key);
+	else
+		static_branch_disable_cpuslocked(&rdt_soft_rmid_enable_key);
+
+	return 0;
+}
+
 /*
  * Enable or disable the MBA software controller
  * which helps user specify bandwidth in MBps.
@@ -2359,6 +2392,9 @@ static int rdt_enable_ctx(struct rdt_fs_context *ctx)
 	if (!ret && ctx->enable_mba_mbps)
 		ret = set_mba_sc(true);
 
+	if (!ret && ctx->enable_mbm_soft_rmid)
+		ret = set_mbm_soft_rmid(true);
+
 	return ret;
 }
@@ -2534,6 +2570,8 @@ static int rdt_get_tree(struct fs_context *fc)
 out_mba:
 	if (ctx->enable_mba_mbps)
 		set_mba_sc(false);
+	if (ctx->enable_mbm_soft_rmid)
+		set_mbm_soft_rmid(false);
 out_cdp:
 	cdp_disable_all();
 out:
@@ -2547,6 +2585,7 @@ enum rdt_param {
 	Opt_cdp,
 	Opt_cdpl2,
 	Opt_mba_mbps,
+	Opt_mbm_soft_rmid,
 	nr__rdt_params
 };
@@ -2554,6 +2593,7 @@ static const struct fs_parameter_spec rdt_fs_parameters[] = {
 	fsparam_flag("cdp", Opt_cdp),
 	fsparam_flag("cdpl2", Opt_cdpl2),
 	fsparam_flag("mba_MBps", Opt_mba_mbps),
+	fsparam_flag("mbm_soft_rmid", Opt_mbm_soft_rmid),
 	{}
 };
@@ -2579,6 +2619,11 @@ static int rdt_parse_param(struct fs_context *fc, struct fs_parameter *param)
 			return -EINVAL;
 		ctx->enable_mba_mbps = true;
 		return 0;
+	case Opt_mbm_soft_rmid:
+		if (!supports_mbm_soft_rmid())
+			return -EINVAL;
+		ctx->enable_mbm_soft_rmid = true;
+		return 0;
 	}
 
 	return -EINVAL;
@@ -2767,6 +2812,7 @@ static void rdt_kill_sb(struct super_block *sb)
 	cpus_read_lock();
 	mutex_lock(&rdtgroup_mutex);
 
+	set_mbm_soft_rmid(false);
 	set_mba_sc(false);
 
 	/*Put everything back to default values. */
@@ -2861,6 +2907,8 @@ static int mkdir_mondata_subdir(struct kernfs_node *parent_kn,
 	priv.u.rid = r->rid;
 	priv.u.domid = d->id;
 	list_for_each_entry(mevt, &r->evt_list, list) {
+		if (mevt->disabled)
+			continue;
 		priv.u.evtid = mevt->evtid;
 		ret = mon_addfile(kn, mevt->name, priv.priv);
 		if (ret)
@@ -3517,6 +3565,9 @@ static int rdtgroup_show_options(struct seq_file *seq, struct kernfs_root *kf)
 	if (is_mba_sc(&rdt_resources_all[RDT_RESOURCE_MBA].r_resctrl))
 		seq_puts(seq, ",mba_MBps");
 
+	if (static_branch_likely(&rdt_soft_rmid_enable_key))
+		seq_puts(seq, ",mbm_soft_rmid");
+
 	return 0;
 }