From patchwork Mon Oct 24 09:12:00 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 01/24] x86/pmu: Add PDCM check before accessing PERF_CAP register
Date: Mon, 24 Oct 2022 17:12:00 +0800
Message-Id: <20221024091223.42631-2-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

On virtual platforms without PDCM support (e.g. AMD), the #GP on reading
MSR_IA32_PERF_CAPABILITIES is completely avoidable: check for PDCM before
accessing the MSR.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 lib/x86/processor.h | 8 ++++++++
 x86/pmu.c           | 2 +-
 x86/pmu_lbr.c       | 2 +-
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 0324220..f85abe3 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -847,4 +847,12 @@ static inline bool pmu_gp_counter_is_available(int i)
 	return !(cpuid(10).b & BIT(i));
 }
 
+static inline u64 this_cpu_perf_capabilities(void)
+{
+	if (!this_cpu_has(X86_FEATURE_PDCM))
+		return 0;
+
+	return rdmsr(MSR_IA32_PERF_CAPABILITIES);
+}
+
 #endif
diff --git a/x86/pmu.c b/x86/pmu.c
index d59baf1..d278bb5 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -660,7 +660,7 @@ int main(int ac, char **av)
 
 	check_counters();
 
-	if (rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES) {
+	if (this_cpu_perf_capabilities() & PMU_CAP_FW_WRITES) {
 		gp_counter_base = MSR_IA32_PMC0;
 		report_prefix_push("full-width writes");
 		check_counters();
diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c
index 8dad1f1..c040b14 100644
--- a/x86/pmu_lbr.c
+++ b/x86/pmu_lbr.c
@@ -72,7 +72,7 @@ int main(int ac, char **av)
 		return report_summary();
 	}
 
-	perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
+	perf_cap = this_cpu_perf_capabilities();
 
 	if (!(perf_cap & PMU_CAP_LBR_FMT)) {
 		report_skip("(Architectural) LBR is not supported.");
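
As an illustration only (not part of the patch): with the helper above, a
caller tests capability bits without ever touching MSR_IA32_PERF_CAPABILITIES
directly, so no #GP can occur on CPUs that do not advertise PDCM. The sketch
uses only names already present in this series (this_cpu_perf_capabilities(),
PMU_CAP_FW_WRITES, printf):

	static void show_fw_writes_support(void)
	{
		/* Returns 0 when X86_FEATURE_PDCM is absent, so the bit test is safe. */
		u64 perf_cap = this_cpu_perf_capabilities();

		if (perf_cap & PMU_CAP_FW_WRITES)
			printf("full-width counter writes are supported\n");
		else
			printf("full-width counter writes are not reported\n");
	}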

From patchwork Mon Oct 24 09:12:01 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 02/24] x86/pmu: Test emulation instructions on full-width counters
Date: Mon, 24 Oct 2022 17:12:01 +0800
Message-Id: <20221024091223.42631-3-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

Move check_emulated_instr() into check_counters() so that full-width
counters can be tested with ease by the same test case.

Signed-off-by: Like Xu
---
 x86/pmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index d278bb5..533851b 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -520,6 +520,9 @@ static void check_emulated_instr(void)
 
 static void check_counters(void)
 {
+	if (is_fep_available())
+		check_emulated_instr();
+
 	check_gp_counters();
 	check_fixed_counters();
 	check_rdpmc();
@@ -655,9 +658,6 @@ int main(int ac, char **av)
 
 	apic_write(APIC_LVTPC, PC_VECTOR);
 
-	if (is_fep_available())
-		check_emulated_instr();
-
 	check_counters();
 
 	if (this_cpu_perf_capabilities() & PMU_CAP_FW_WRITES) {
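
Illustrative flow after this change (a sketch, not part of the patch): because
check_emulated_instr() now lives inside check_counters(), the FEP-based
emulated-instruction test automatically runs a second time when main()
re-invokes check_counters() for full-width writes, mirroring the main()
hunks from patches 01 and 02:

	check_counters();		/* includes check_emulated_instr() when FEP is available */

	if (this_cpu_perf_capabilities() & PMU_CAP_FW_WRITES) {
		gp_counter_base = MSR_IA32_PMC0;
		report_prefix_push("full-width writes");
		check_counters();	/* emulated-instruction test runs again, on full-width counters */
		check_gp_counters_write_width();
	}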

From patchwork Mon Oct 24 09:12:02 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 03/24] x86/pmu: Pop up FW prefix to avoid out-of-context propagation
Date: Mon, 24 Oct 2022 17:12:02 +0800
Message-Id: <20221024091223.42631-4-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

If the "full-width writes" prefix is not popped off, it is wrongly
propagated to the output of later test cases.

Signed-off-by: Like Xu
Reviewed-by: Sean Christopherson
---
 x86/pmu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/x86/pmu.c b/x86/pmu.c
index 533851b..c8a2e91 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -665,6 +665,7 @@ int main(int ac, char **av)
 		report_prefix_push("full-width writes");
 		check_counters();
 		check_gp_counters_write_width();
+		report_prefix_pop();
 	}
 
 	return report_summary();

From patchwork Mon Oct 24 09:12:03 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 04/24] x86/pmu: Report SKIP when testing Intel LBR on AMD platforms
Date: Mon, 24 Oct 2022 17:12:03 +0800
Message-Id: <20221024091223.42631-5-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

The conclusion of running the Intel LBR test on AMD platforms should be
SKIP, not PASS; fix it.

Signed-off-by: Like Xu
Reviewed-by: Sean Christopherson
---
 x86/pmu_lbr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c
index c040b14..a641d79 100644
--- a/x86/pmu_lbr.c
+++ b/x86/pmu_lbr.c
@@ -59,7 +59,7 @@ int main(int ac, char **av)
 
 	if (!is_intel()) {
 		report_skip("PMU_LBR test is for intel CPU's only");
-		return 0;
+		return report_summary();
 	}
 
 	if (!this_cpu_has_pmu()) {

From patchwork Mon Oct 24 09:12:04 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org, Sandipan Das
Subject: [kvm-unit-tests PATCH v4 05/24] x86/pmu: Fix printed messages for emulated instruction test
Date: Mon, 24 Oct 2022 17:12:04 +0800
Message-Id: <20221024091223.42631-6-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

This test case uses MSR_IA32_PERFCTR0 to count branch instructions and
PERFCTR1 to count instruction events. The same correspondence must hold
in the report() calls: status bit 0 is the branch counter and status
bit 1 is the instruction counter.

Fixes: 20cf914 ("x86/pmu: Test PMU virtualization on emulated instructions")
Reported-by: Sandipan Das
Signed-off-by: Like Xu
---
 x86/pmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index c8a2e91..d303a36 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -512,8 +512,8 @@ static void check_emulated_instr(void)
 	       "branch count");
 	// Additionally check that those counters overflowed properly.
 	status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
-	report(status & 1, "instruction counter overflow");
-	report(status & 2, "branch counter overflow");
+	report(status & 1, "branch counter overflow");
+	report(status & 2, "instruction counter overflow");
 
 	report_prefix_pop();
 }
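
The rule being restored here, sketched for illustration only (not part of the
patch): bit i of MSR_CORE_PERF_GLOBAL_STATUS reports overflow of the counter
programmed through MSR_IA32_PERFCTR0 + i, so the reports must match how the
counters were programmed:

	/* PERFCTR0 counts branches, PERFCTR1 counts instructions. */
	status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
	report(status & BIT(0), "branch counter overflow");		/* PERFCTR0     -> bit 0 */
	report(status & BIT(1), "instruction counter overflow");	/* PERFCTR0 + 1 -> bit 1 */

(BIT(0)/BIT(1) are simply the 1 and 2 masks used in the hunk above.)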

From patchwork Mon Oct 24 09:12:05 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 06/24] x86/pmu: Introduce __start_event() to drop all of the manual zeroing
Date: Mon, 24 Oct 2022 17:12:05 +0800
Message-Id: <20221024091223.42631-7-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

Most invocations of start_event() and measure() first set evt.count = 0.
Requiring callers to zero out the field in the common case isn't exactly
flexible. Instead of forcing each caller to ensure the count is zeroed,
zero it in start_event() and drop all of the manual zeroing.

Accumulating counts can be handled by reading the current count before
start_event(), and stuffing a high count to test an edge case can be
handled by an inner helper, __start_event(). For overflow, just open
code measure() for that one-off case.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 x86/pmu.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index d303a36..ba67aa6 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -137,9 +137,9 @@ static void global_disable(pmu_counter_t *cnt)
 			~(1ull << cnt->idx));
 }
 
-
-static void start_event(pmu_counter_t *evt)
+static void __start_event(pmu_counter_t *evt, uint64_t count)
 {
+	evt->count = count;
 	wrmsr(evt->ctr, evt->count);
 	if (is_gp(evt))
 		wrmsr(MSR_P6_EVNTSEL0 + event_to_global_idx(evt),
@@ -162,6 +162,11 @@ static void start_event(pmu_counter_t *evt)
 	apic_write(APIC_LVTPC, PC_VECTOR);
 }
 
+static void start_event(pmu_counter_t *evt)
+{
+	__start_event(evt, 0);
+}
+
 static void stop_event(pmu_counter_t *evt)
 {
 	global_disable(evt);
@@ -186,6 +191,13 @@ static void measure(pmu_counter_t *evt, int count)
 		stop_event(&evt[i]);
 }
 
+static void __measure(pmu_counter_t *evt, uint64_t count)
+{
+	__start_event(evt, count);
+	loop();
+	stop_event(evt);
+}
+
 static bool verify_event(uint64_t count, struct pmu_event *e)
 {
 	// printf("%d <= %ld <= %d\n", e->min, count, e->max);
@@ -208,7 +220,6 @@ static void check_gp_counter(struct pmu_event *evt)
 	int i;
 
 	for (i = 0; i < nr_gp_counters; i++, cnt.ctr++) {
-		cnt.count = 0;
 		measure(&cnt, 1);
 		report(verify_event(cnt.count, evt), "%s-%d", evt->name, i);
 	}
@@ -235,7 +246,6 @@ static void check_fixed_counters(void)
 	int i;
 
 	for (i = 0; i < nr_fixed_counters; i++) {
-		cnt.count = 0;
 		cnt.ctr = fixed_events[i].unit_sel;
 		measure(&cnt, 1);
 		report(verify_event(cnt.count, &fixed_events[i]), "fixed-%d", i);
@@ -253,14 +263,12 @@ static void check_counters_many(void)
 		if (!pmu_gp_counter_is_available(i))
 			continue;
 
-		cnt[n].count = 0;
 		cnt[n].ctr = gp_counter_base + n;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR |
 			gp_events[i % ARRAY_SIZE(gp_events)].unit_sel;
 		n++;
 	}
 	for (i = 0; i < nr_fixed_counters; i++) {
-		cnt[n].count = 0;
 		cnt[n].ctr = fixed_events[i].unit_sel;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR;
 		n++;
@@ -283,9 +291,8 @@ static void check_counter_overflow(void)
 	pmu_counter_t cnt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
-		.count = 0,
 	};
-	measure(&cnt, 1);
+	__measure(&cnt, 0);
 	count = cnt.count;
 
 	/* clear status before test */
@@ -311,7 +318,7 @@ static void check_counter_overflow(void)
 		else
 			cnt.config &= ~EVNTSEL_INT;
 		idx = event_to_global_idx(&cnt);
-		measure(&cnt, 1);
+		__measure(&cnt, cnt.count);
 		report(cnt.count == 1, "cntr-%d", i);
 		status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
 		report(status & (1ull << idx), "status-%d", i);
@@ -329,7 +336,6 @@ static void check_gp_counter_cmask(void)
 	pmu_counter_t cnt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
-		.count = 0,
 	};
 	cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT);
 	measure(&cnt, 1);
@@ -415,7 +421,6 @@ static void check_running_counter_wrmsr(void)
 	pmu_counter_t evt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel,
-		.count = 0,
 	};
 
 	report_prefix_push("running counter wrmsr");
@@ -430,7 +435,6 @@ static void check_running_counter_wrmsr(void)
 	wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	      rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
 
-	evt.count = 0;
 	start_event(&evt);
 
 	count = -1;
@@ -454,13 +458,11 @@ static void check_emulated_instr(void)
 		.ctr = MSR_IA32_PERFCTR0,
 		/* branch instructions */
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[5].unit_sel,
-		.count = 0,
 	};
 	pmu_counter_t instr_cnt = {
 		.ctr = MSR_IA32_PERFCTR0 + 1,
 		/* instructions */
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel,
-		.count = 0,
 	};
 
 	report_prefix_push("emulated instruction");
@@ -592,7 +594,6 @@ static void set_ref_cycle_expectations(void)
 	pmu_counter_t cnt = {
 		.ctr = MSR_IA32_PERFCTR0,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[2].unit_sel,
-		.count = 0,
 	};
 	uint64_t tsc_delta;
 	uint64_t t0, t1, t2, t3;
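
For illustration only (not part of the patch), the three calling idioms the
refactor leaves behind:

	/* Common case: __start_event() zeroes the count, no manual zeroing needed. */
	measure(&cnt, 1);

	/* Accumulate across runs: feed the previous count back in as the initial value. */
	__measure(&cnt, cnt.count);

	/* Edge case such as overflow testing: stuff an arbitrary initial value. */
	cnt.count = 1 - count;		/* negative expected count, as in check_counter_overflow() */
	__measure(&cnt, cnt.count);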

From patchwork Mon Oct 24 09:12:06 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 07/24] x86/pmu: Introduce measure_{one, many}() to improve readability
Date: Mon, 24 Oct 2022 17:12:06 +0800
Message-Id: <20221024091223.42631-8-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

The current measure() forces the common case to pass in unnecessary
information in order to give flexibility to a single use case. The split
is just syntactic sugar, but it really does help readers: it is not
obvious that the "1" specifies the number of events, whereas
measure_many() and measure_one() are self-explanatory.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 x86/pmu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index ba67aa6..3b1ed16 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -181,7 +181,7 @@ static void stop_event(pmu_counter_t *evt)
 	evt->count = rdmsr(evt->ctr);
 }
 
-static void measure(pmu_counter_t *evt, int count)
+static void measure_many(pmu_counter_t *evt, int count)
 {
 	int i;
 	for (i = 0; i < count; i++)
@@ -191,6 +191,11 @@ static void measure(pmu_counter_t *evt, int count)
 		stop_event(&evt[i]);
 }
 
+static void measure_one(pmu_counter_t *evt)
+{
+	measure_many(evt, 1);
+}
+
 static void __measure(pmu_counter_t *evt, uint64_t count)
 {
 	__start_event(evt, count);
@@ -220,7 +225,7 @@ static void check_gp_counter(struct pmu_event *evt)
 	int i;
 
 	for (i = 0; i < nr_gp_counters; i++, cnt.ctr++) {
-		measure(&cnt, 1);
+		measure_one(&cnt);
 		report(verify_event(cnt.count, evt), "%s-%d", evt->name, i);
 	}
 }
@@ -247,7 +252,7 @@ static void check_fixed_counters(void)
 
 	for (i = 0; i < nr_fixed_counters; i++) {
 		cnt.ctr = fixed_events[i].unit_sel;
-		measure(&cnt, 1);
+		measure_one(&cnt);
 		report(verify_event(cnt.count, &fixed_events[i]), "fixed-%d", i);
 	}
 }
@@ -274,7 +279,7 @@ static void check_counters_many(void)
 		n++;
 	}
 
-	measure(cnt, n);
+	measure_many(cnt, n);
 
 	for (i = 0; i < n; i++)
 		if (!verify_counter(&cnt[i]))
@@ -338,7 +343,7 @@ static void check_gp_counter_cmask(void)
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
 	};
 	cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT);
-	measure(&cnt, 1);
+	measure_one(&cnt);
 	report(cnt.count < gp_events[1].min, "cmask");
 }

From patchwork Mon Oct 24 09:12:07 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 08/24] x86/pmu: Reset the expected count of the fixed counter 0 when i386
Date: Mon, 24 Oct 2022 17:12:07 +0800
Message-Id: <20221024091223.42631-9-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

The pmu test check_counter_overflow() always fails with 32-bit binaries.
The cnt.count obtained from the second run of measure() (based on fixed
counter 0) is not equal to the expected value (based on gp counter 0);
it is off by 2. The two extra instructions come from the inline wrmsr()
and rdmsr() inside the global_disable() code block. Specifically, for
each MSR access the i386 code needs two assembly mov instructions before
rdmsr/wrmsr (for fixed counter 0 the global-control mask involves bit 32,
i.e. the upper dword), whereas only one mov is needed on x86_64, and for
gp counter 0 on i386. In short, the instruction sequence executed while
counting differs between the gp and fixed counters.

Thus the fix is at a fairly high level: use the same counter (with the
same instruction sequence) to set the initial value for that same
counter, i.e. derive the expected initial cnt.count for the fixed
counter 0 overflow case from fixed counter 0 itself, instead of always
using gp counter 0.

The offset of 1 in the preset enables the interrupt to be generated
immediately after the selected event count has been reached, instead of
waiting for the overflow to propagate through the counter.

Add a helper to measure/compute the overflow preset value. It provides
a convenient location to document the weird behavior that's necessary
to ensure immediate event delivery.

Signed-off-by: Like Xu
---
 x86/pmu.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 3b1ed16..bb6e97e 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -288,17 +288,30 @@ static void check_counters_many(void)
 	report(i == n, "all counters");
 }
 
+static uint64_t measure_for_overflow(pmu_counter_t *cnt)
+{
+	__measure(cnt, 0);
+	/*
+	 * To generate overflow, i.e. roll over to '0', the initial count just
+	 * needs to be preset to the negative expected count.  However, as per
+	 * Intel's SDM, the preset count needs to be incremented by 1 to ensure
+	 * the overflow interrupt is generated immediately instead of possibly
+	 * waiting for the overflow to propagate through the counter.
+	 */
+	assert(cnt->count > 1);
+	return 1 - cnt->count;
+}
+
 static void check_counter_overflow(void)
 {
 	int nr_gp_counters = pmu_nr_gp_counters();
-	uint64_t count;
+	uint64_t overflow_preset;
 	int i;
 	pmu_counter_t cnt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
 	};
-	__measure(&cnt, 0);
-	count = cnt.count;
+	overflow_preset = measure_for_overflow(&cnt);
 
 	/* clear status before test */
 	wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
@@ -309,12 +322,13 @@ static void check_counter_overflow(void)
 		uint64_t status;
 		int idx;
 
-		cnt.count = 1 - count;
+		cnt.count = overflow_preset;
 		if (gp_counter_base == MSR_IA32_PMC0)
 			cnt.count &= (1ull << pmu_gp_counter_width()) - 1;
 
 		if (i == nr_gp_counters) {
 			cnt.ctr = fixed_events[0].unit_sel;
+			cnt.count = measure_for_overflow(&cnt);
 			cnt.count &= (1ull << pmu_fixed_counter_width()) - 1;
 		}
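
A worked example of the preset arithmetic, for illustration only (N is
hypothetical): if the loop() workload retires N instructions,
measure_for_overflow() returns 1 - N, which masked to the counter width is
2^width - (N - 1). Re-running the same workload from that preset makes the
counter wrap through zero on event N - 1 (setting the GLOBAL_STATUS overflow
bit and raising the PMI right away) and leaves it at 1 after event N, which
is exactly what the existing "cnt.count == 1" check in check_counter_overflow()
expects:

	uint64_t preset = measure_for_overflow(&cnt);	/* 1 - N */

	cnt.count = preset & ((1ull << pmu_gp_counter_width()) - 1);
	__measure(&cnt, cnt.count);
	report(cnt.count == 1, "counter reads 1 after overflowing");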

From patchwork Mon Oct 24 09:12:08 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 09/24] x86: create pmu group for quick pmu-scope testing
Date: Mon, 24 Oct 2022 17:12:08 +0800
Message-Id: <20221024091223.42631-10-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

Any agent can run "./run_tests.sh -g pmu" to run all PMU tests easily,
e.g. when verifying the x86/PMU KVM changes.

Signed-off-by: Like Xu
Reviewed-by: Sean Christopherson
---
 x86/unittests.cfg | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index ed65185..07d0507 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -189,6 +189,7 @@ file = pmu.flat
 extra_params = -cpu max
 check = /proc/sys/kernel/nmi_watchdog=0
 accel = kvm
+groups = pmu
 
 [pmu_lbr]
 arch = x86_64
@@ -197,6 +198,7 @@ extra_params = -cpu host,migratable=no
 check = /sys/module/kvm/parameters/ignore_msrs=N
 check = /proc/sys/kernel/nmi_watchdog=0
 accel = kvm
+groups = pmu
 
 [vmware_backdoors]
 file = vmware_backdoors.flat

From patchwork Mon Oct 24 09:12:09 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 10/24] x86/pmu: Refine info to clarify the current support
Date: Mon, 24 Oct 2022 17:12:09 +0800
Message-Id: <20221024091223.42631-11-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

Existing unit tests cover neither the AMD PMU nor Intel PMUs that are
not architectural (on some older CPUs). AMD PMU support will be added
in subsequent commits, so clarify the skip message accordingly.

Signed-off-by: Like Xu
---
 x86/pmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index bb6e97e..15572e3 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -658,7 +658,7 @@ int main(int ac, char **av)
 	buf = malloc(N*64);
 
 	if (!pmu_version()) {
-		report_skip("No pmu is detected!");
+		report_skip("No Intel Arch PMU is detected!");
 		return report_summary();
 	}

From patchwork Mon Oct 24 09:12:10 2022
From: Like Xu
To: Sean Christopherson
Cc: Paolo Bonzini, Jim Mattson, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v4 11/24] x86/pmu: Update rdpmc testcase to cover #GP path
Date: Mon, 24 Oct 2022 17:12:10 +0800
Message-Id: <20221024091223.42631-12-likexu@tencent.com>
In-Reply-To: <20221024091223.42631-1-likexu@tencent.com>

Specifying an unsupported PMC encoding causes a #GP(0). There are
multiple reasons RDPMC can #GP; the one relied on here to guarantee a
#GP is specifically that the PMC index is invalid. The most extensible
solution is to provide a safe variant.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 lib/x86/processor.h | 21 ++++++++++++++++++---
 x86/pmu.c           | 10 ++++++++++
 2 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index f85abe3..cb396ed 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -438,11 +438,26 @@ static inline int wrmsr_safe(u32 index, u64 val)
 	return exception_vector();
 }
 
-static inline uint64_t rdpmc(uint32_t index)
+static inline int rdpmc_safe(u32 index, uint64_t *val)
 {
 	uint32_t a, d;
-	asm volatile ("rdpmc" : "=a"(a), "=d"(d) : "c"(index));
-	return a | ((uint64_t)d << 32);
+
+	asm volatile (ASM_TRY("1f")
+		      "rdpmc\n\t"
+		      "1:"
+		      : "=a"(a), "=d"(d) : "c"(index) : "memory");
+	*val = (uint64_t)a | ((uint64_t)d << 32);
+	return exception_vector();
+}
+
+static inline uint64_t rdpmc(uint32_t index)
+{
+	uint64_t val;
+	int vector = rdpmc_safe(index, &val);
+
+	assert_msg(!vector, "Unexpected %s on RDPMC(%d)",
+		   exception_mnemonic(vector), index);
+	return val;
 }
 
 static inline int write_cr0_safe(ulong val)
diff --git a/x86/pmu.c b/x86/pmu.c
index 15572e3..d0de196 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -651,12 +651,22 @@ static void set_ref_cycle_expectations(void)
 	gp_events[2].max = (gp_events[2].max * cnt.count) / tsc_delta;
 }
 
+static void check_invalid_rdpmc_gp(void)
+{
+	uint64_t val;
+
+	report(rdpmc_safe(64, &val) == GP_VECTOR,
+	       "Expected #GP on RDPMC(64)");
+}
+
 int main(int ac, char **av)
 {
 	setup_vm();
 	handle_irq(PC_VECTOR, cnt_overflow);
 	buf = malloc(N*64);
 
+	check_invalid_rdpmc_gp();
+
 	if (!pmu_version()) {
 		report_skip("No Intel Arch PMU is detected!");
 		return report_summary();
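
Both sides of the new API in one sketch (illustrative only; treating index 0
as valid assumes at least one GP counter is exposed):

	uint64_t val;

	/* An out-of-range PMC index is expected to fault with #GP(0). */
	report(rdpmc_safe(64, &val) == GP_VECTOR, "RDPMC(64) -> #GP");

	/* A valid index should not fault; the plain rdpmc() wrapper now asserts on any vector. */
	report(rdpmc_safe(0, &val) == 0, "RDPMC(0) succeeds");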
from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8A62BECDFA1 for ; Mon, 24 Oct 2022 09:13:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230439AbiJXJNq (ORCPT ); Mon, 24 Oct 2022 05:13:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229721AbiJXJNi (ORCPT ); Mon, 24 Oct 2022 05:13:38 -0400 Received: from mail-pg1-x52f.google.com (mail-pg1-x52f.google.com [IPv6:2607:f8b0:4864:20::52f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A95CA5C951 for ; Mon, 24 Oct 2022 02:13:21 -0700 (PDT) Received: by mail-pg1-x52f.google.com with SMTP id q71so8180867pgq.8 for ; Mon, 24 Oct 2022 02:13:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JI6xgwjY9qAqH7+ofTPnYV1zYKHo6juwDwO6LRgYnj8=; b=jeXWHrRuhJfHKrMJw+Wm/v2mHZG+d3nQmVwwV1pgTkEUdVTF6TgAIQA9ozunZ7qdAj 55VuwScf4i9VwhiKJKX0BbTOkVdW13lqFUhgxRsh1a2VCSbDFL8BNAARh/wNPJ3t9fo+ kZ9ff7FasfbmNTbJGAkEXpnBmwYUb9+4MjHZPXgk+iVuVCcgNufTbsjE6Aa9ZnWGOHJ/ 5lPHAF9poq77+TpF1OrFzo01495+c1RVlLEAa2eFgegAO1PGywJ4Nl2jLoYh4XKec18f Kb0dKCI2PcIPg/U3ZX9VYrUEVizVGOLFc/H1G0EZg9klnHA1kxPlY/+qyX9AyVdyKuOZ f45A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JI6xgwjY9qAqH7+ofTPnYV1zYKHo6juwDwO6LRgYnj8=; b=0W3jZl28tmoGoKbahXlE23mrnSpySYss3ANm2RNBM9cYlzX6p68KfwU3yfreVhUgXJ YEMcqA/lDiOJI31OMHlS9owzjbRIWPGDodhToKE0XznuYJahNzpX5UKCR3NqGZ2o+Cc8 TKOT/YyUsXraU7GEEz3F83KVYYFTTplbtTgkF2yzHbvk9QsPQ3l2x/1QfuNX6WwqmeaC 9FIa7Q3cXX+/j525uGH8yU9BnRkFqhDTeTvSz524aEurc83upNzhLLLFwUGxyoAfC7uh 2KHwL/nL1SKGMRHAvzo6ddk/88hMhO7RZoBO0IJ0zhavWYcUbcM5+XwEQ0eEjxUSgF/L Ic7w== X-Gm-Message-State: ACrzQf3iLar9DxnBojsrceWwI7jTK184p7Zqw75+XY/1Tv//stPqkUjj Xsd+l1Ss2gxh/Y920DQUBII= X-Google-Smtp-Source: AMsMyM5Axq2t8+gW/6YYwbpplXF8QRbtngzxHk8TeeJr6uPNp+k0P1ehnJzU5s4MN4RCt+4w5Bdwbg== X-Received: by 2002:a63:211a:0:b0:451:f444:3b55 with SMTP id h26-20020a63211a000000b00451f4443b55mr26561891pgh.60.1666602801164; Mon, 24 Oct 2022 02:13:21 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:20 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 12/24] x86/pmu: Rename PC_VECTOR to PMI_VECTOR for better readability Date: Mon, 24 Oct 2022 17:12:11 +0800 Message-Id: <20221024091223.42631-13-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu The original name "PC_VECTOR" comes from the LVT Performance Counter Register. Rename it to PMI_VECTOR. That's much more familiar for KVM developers and it's still correct, e.g. it's the PMI vector that's programmed into the LVT PC register. 
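For reference, a minimal sketch of the wiring that delivers counter overflows on this vector; apic_write(), handle_irq() and isr_regs_t are the existing kvm-unit-tests helpers already used by x86/pmu.c, and the handler body here is illustrative only, not part of the patch:

#include "x86/apic.h"
#include "x86/isr.h"

/* Vector programmed into the APIC LVT Performance Counter register */
#define PMI_VECTOR	32

static volatile int pmi_count;

static void pmi_isr(isr_regs_t *regs)
{
	pmi_count++;			/* one overflow interrupt observed */
	apic_write(APIC_EOI, 0);	/* ack the local APIC */
}

static void wire_up_pmi(void)
{
	handle_irq(PMI_VECTOR, pmi_isr);	/* install the ISR for the vector */
	apic_write(APIC_LVTPC, PMI_VECTOR);	/* route PMIs to that vector */
}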
Suggested-by: Sean Christopherson Signed-off-by: Like Xu --- x86/pmu.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index d0de196..3b36caa 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -11,7 +11,9 @@ #include #define FIXED_CNT_INDEX 32 -#define PC_VECTOR 32 + +/* Performance Counter Vector for the LVT PC Register */ +#define PMI_VECTOR 32 #define EVNSEL_EVENT_SHIFT 0 #define EVNTSEL_UMASK_SHIFT 8 @@ -159,7 +161,7 @@ static void __start_event(pmu_counter_t *evt, uint64_t count) wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl); } global_enable(evt); - apic_write(APIC_LVTPC, PC_VECTOR); + apic_write(APIC_LVTPC, PMI_VECTOR); } static void start_event(pmu_counter_t *evt) @@ -662,7 +664,7 @@ static void check_invalid_rdpmc_gp(void) int main(int ac, char **av) { setup_vm(); - handle_irq(PC_VECTOR, cnt_overflow); + handle_irq(PMI_VECTOR, cnt_overflow); buf = malloc(N*64); check_invalid_rdpmc_gp(); @@ -686,7 +688,7 @@ int main(int ac, char **av) printf("Fixed counters: %d\n", pmu_nr_fixed_counters()); printf("Fixed counter width: %d\n", pmu_fixed_counter_width()); - apic_write(APIC_LVTPC, PC_VECTOR); + apic_write(APIC_LVTPC, PMI_VECTOR); check_counters(); From patchwork Mon Oct 24 09:12:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016919 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01DB8C38A2D for ; Mon, 24 Oct 2022 09:13:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229798AbiJXJNn (ORCPT ); Mon, 24 Oct 2022 05:13:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46588 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231199AbiJXJNg (ORCPT ); Mon, 24 Oct 2022 05:13:36 -0400 Received: from mail-pf1-x430.google.com (mail-pf1-x430.google.com [IPv6:2607:f8b0:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A94295C9DA for ; Mon, 24 Oct 2022 02:13:25 -0700 (PDT) Received: by mail-pf1-x430.google.com with SMTP id g16so3033862pfr.12 for ; Mon, 24 Oct 2022 02:13:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=H2wDWa5anhn02VmOEJnPcmUczDdx54RmT/GSoURgY3U=; b=ldm/rcGTCIIntLzN6owiG3h+Q6ZFb3LhNBqnj7fw1CvrDjo4OgS9WNPtfA7J1ido8Q 6jqS+8EES7ep9I/kYCdqUhdjUJMguTcoi9CnD7om8+xPAc4pcYBwVXb7uVnVKQm5KcK6 T4JykQSEeSAsZZGS/S70la2UCdFcdJKsTa+RtK8OX27KzoVnwS8OD9STVSJdefbmNtrt EzulDM7QWhI1Hu48tSvpRFRxs42z93YLdKj4MCSrkqv5PZLu8GYue9wqMTSMMhxJAlzm oBDXx/qU9LkmqrcmpoQdEzLUuGSw0sxz0NATKRv8TGex93h7NQ5VawIRrX5nIwUFJXj7 EhFw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=H2wDWa5anhn02VmOEJnPcmUczDdx54RmT/GSoURgY3U=; b=ZS5HieuLYLKX4AMX47qgkuq0jSYfAds7Yw6zNrPOIrQ7ti/kfXdUuXVbSIjyFBpT4H f+QmhtyUKv2iEzANrPDKZXeRuIXA/2++nre2u3JnyuHNXUD/pWTZuRHtMeMSGY7AOyj1 /gU5rWAfwolFL7dFpkw0NOdR72y2CgROJ+fCXnO3iQ19cwDa9qJWkV3mVc+Xa3nBQvHQ sf0tfpQRG8KseMgt+8eC8pnBYvoc0dJ2p16JMop0gwSy95e3PoM/vgpAOAjaPioRSKzb 
Bf1WviLy/n7JgjL6Tpucij/LD3eeMhJsivs+9SX254VHkE2JKsu5FEwbw7KdgwJTpwjN stHQ== X-Gm-Message-State: ACrzQf3EbMNlGdlRwSzWcrzujlS79OlICB28JPFdUclA/EcQmP4n2nD1 gVmdpQrKAX4hayWtQ3O0hlAWFqV4ceLgk29/ X-Google-Smtp-Source: AMsMyM717Hs7aIxOFDvX+okvXZObJBR5cb8WNjZiaxo2vIZrB+xzBiUaEt4vwGi0thAnIH2+Ty8TYA== X-Received: by 2002:a05:6a00:450d:b0:562:51ad:7cdf with SMTP id cw13-20020a056a00450d00b0056251ad7cdfmr32224466pfb.54.1666602802821; Mon, 24 Oct 2022 02:13:22 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:22 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 13/24] x86/pmu: Add lib/x86/pmu.[c.h] and move common code to header files Date: Mon, 24 Oct 2022 17:12:12 +0800 Message-Id: <20221024091223.42631-14-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Given all the PMU stuff coming in, we need e.g. lib/x86/pmu.h to hold all of the hardware-defined stuff, e.g. #defines, accessors, helpers and structs that are dictated by hardware. This will greatly help with code reuse and reduce unnecessary vm-exit. Opportunistically move lbr msrs definition to header processor.h. Suggested-by: Sean Christopherson Signed-off-by: Like Xu --- lib/x86/msr.h | 7 ++++ lib/x86/pmu.c | 1 + lib/x86/pmu.h | 100 ++++++++++++++++++++++++++++++++++++++++++++ lib/x86/processor.h | 64 ---------------------------- x86/Makefile.common | 1 + x86/pmu.c | 25 +---------- x86/pmu_lbr.c | 11 +---- x86/vmx_tests.c | 1 + 8 files changed, 112 insertions(+), 98 deletions(-) create mode 100644 lib/x86/pmu.c create mode 100644 lib/x86/pmu.h diff --git a/lib/x86/msr.h b/lib/x86/msr.h index fa1c0c8..bbe29fd 100644 --- a/lib/x86/msr.h +++ b/lib/x86/msr.h @@ -86,6 +86,13 @@ #define DEBUGCTLMSR_BTS_OFF_USR (1UL << 10) #define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1UL << 11) +#define MSR_LBR_NHM_FROM 0x00000680 +#define MSR_LBR_NHM_TO 0x000006c0 +#define MSR_LBR_CORE_FROM 0x00000040 +#define MSR_LBR_CORE_TO 0x00000060 +#define MSR_LBR_TOS 0x000001c9 +#define MSR_LBR_SELECT 0x000001c8 + #define MSR_IA32_MC0_CTL 0x00000400 #define MSR_IA32_MC0_STATUS 0x00000401 #define MSR_IA32_MC0_ADDR 0x00000402 diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c new file mode 100644 index 0000000..9d048ab --- /dev/null +++ b/lib/x86/pmu.c @@ -0,0 +1 @@ +#include "pmu.h" diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h new file mode 100644 index 0000000..078a974 --- /dev/null +++ b/lib/x86/pmu.h @@ -0,0 +1,100 @@ +#ifndef _X86_PMU_H_ +#define _X86_PMU_H_ + +#include "processor.h" +#include "libcflat.h" + +#define FIXED_CNT_INDEX 32 +#define MAX_NUM_LBR_ENTRY 32 + +/* Performance Counter Vector for the LVT PC Register */ +#define PMI_VECTOR 32 + +#define DEBUGCTLMSR_LBR (1UL << 0) + +#define PMU_CAP_LBR_FMT 0x3f +#define PMU_CAP_FW_WRITES (1ULL << 13) + +#define EVNSEL_EVENT_SHIFT 0 +#define EVNTSEL_UMASK_SHIFT 8 +#define EVNTSEL_USR_SHIFT 16 +#define EVNTSEL_OS_SHIFT 17 +#define EVNTSEL_EDGE_SHIFT 18 +#define EVNTSEL_PC_SHIFT 19 +#define EVNTSEL_INT_SHIFT 20 +#define EVNTSEL_EN_SHIF 22 +#define EVNTSEL_INV_SHIF 23 +#define EVNTSEL_CMASK_SHIFT 24 + +#define 
EVNTSEL_EN (1 << EVNTSEL_EN_SHIF) +#define EVNTSEL_USR (1 << EVNTSEL_USR_SHIFT) +#define EVNTSEL_OS (1 << EVNTSEL_OS_SHIFT) +#define EVNTSEL_PC (1 << EVNTSEL_PC_SHIFT) +#define EVNTSEL_INT (1 << EVNTSEL_INT_SHIFT) +#define EVNTSEL_INV (1 << EVNTSEL_INV_SHIF) + +static inline u8 pmu_version(void) +{ + return cpuid(10).a & 0xff; +} + +static inline bool this_cpu_has_pmu(void) +{ + return !!pmu_version(); +} + +static inline bool this_cpu_has_perf_global_ctrl(void) +{ + return pmu_version() > 1; +} + +static inline u8 pmu_nr_gp_counters(void) +{ + return (cpuid(10).a >> 8) & 0xff; +} + +static inline u8 pmu_gp_counter_width(void) +{ + return (cpuid(10).a >> 16) & 0xff; +} + +static inline u8 pmu_gp_counter_mask_length(void) +{ + return (cpuid(10).a >> 24) & 0xff; +} + +static inline u8 pmu_nr_fixed_counters(void) +{ + struct cpuid id = cpuid(10); + + if ((id.a & 0xff) > 1) + return id.d & 0x1f; + else + return 0; +} + +static inline u8 pmu_fixed_counter_width(void) +{ + struct cpuid id = cpuid(10); + + if ((id.a & 0xff) > 1) + return (id.d >> 5) & 0xff; + else + return 0; +} + +static inline bool pmu_gp_counter_is_available(int i) +{ + /* CPUID.0xA.EBX bit is '1 if they counter is NOT available. */ + return !(cpuid(10).b & BIT(i)); +} + +static inline u64 this_cpu_perf_capabilities(void) +{ + if (!this_cpu_has(X86_FEATURE_PDCM)) + return 0; + + return rdmsr(MSR_IA32_PERF_CAPABILITIES); +} + +#endif /* _X86_PMU_H_ */ diff --git a/lib/x86/processor.h b/lib/x86/processor.h index cb396ed..ee2b5a2 100644 --- a/lib/x86/processor.h +++ b/lib/x86/processor.h @@ -806,68 +806,4 @@ static inline void flush_tlb(void) write_cr4(cr4); } -static inline u8 pmu_version(void) -{ - return cpuid(10).a & 0xff; -} - -static inline bool this_cpu_has_pmu(void) -{ - return !!pmu_version(); -} - -static inline bool this_cpu_has_perf_global_ctrl(void) -{ - return pmu_version() > 1; -} - -static inline u8 pmu_nr_gp_counters(void) -{ - return (cpuid(10).a >> 8) & 0xff; -} - -static inline u8 pmu_gp_counter_width(void) -{ - return (cpuid(10).a >> 16) & 0xff; -} - -static inline u8 pmu_gp_counter_mask_length(void) -{ - return (cpuid(10).a >> 24) & 0xff; -} - -static inline u8 pmu_nr_fixed_counters(void) -{ - struct cpuid id = cpuid(10); - - if ((id.a & 0xff) > 1) - return id.d & 0x1f; - else - return 0; -} - -static inline u8 pmu_fixed_counter_width(void) -{ - struct cpuid id = cpuid(10); - - if ((id.a & 0xff) > 1) - return (id.d >> 5) & 0xff; - else - return 0; -} - -static inline bool pmu_gp_counter_is_available(int i) -{ - /* CPUID.0xA.EBX bit is '1 if they counter is NOT available. 
*/ - return !(cpuid(10).b & BIT(i)); -} - -static inline u64 this_cpu_perf_capabilities(void) -{ - if (!this_cpu_has(X86_FEATURE_PDCM)) - return 0; - - return rdmsr(MSR_IA32_PERF_CAPABILITIES); -} - #endif diff --git a/x86/Makefile.common b/x86/Makefile.common index b7010e2..8cbdd2a 100644 --- a/x86/Makefile.common +++ b/x86/Makefile.common @@ -22,6 +22,7 @@ cflatobjs += lib/x86/acpi.o cflatobjs += lib/x86/stack.o cflatobjs += lib/x86/fault_test.o cflatobjs += lib/x86/delay.o +cflatobjs += lib/x86/pmu.o ifeq ($(CONFIG_EFI),y) cflatobjs += lib/x86/amd_sev.o cflatobjs += lib/efi.o diff --git a/x86/pmu.c b/x86/pmu.c index 3b36caa..46e9fca 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -1,6 +1,7 @@ #include "x86/msr.h" #include "x86/processor.h" +#include "x86/pmu.h" #include "x86/apic-defs.h" #include "x86/apic.h" #include "x86/desc.h" @@ -10,29 +11,6 @@ #include "libcflat.h" #include -#define FIXED_CNT_INDEX 32 - -/* Performance Counter Vector for the LVT PC Register */ -#define PMI_VECTOR 32 - -#define EVNSEL_EVENT_SHIFT 0 -#define EVNTSEL_UMASK_SHIFT 8 -#define EVNTSEL_USR_SHIFT 16 -#define EVNTSEL_OS_SHIFT 17 -#define EVNTSEL_EDGE_SHIFT 18 -#define EVNTSEL_PC_SHIFT 19 -#define EVNTSEL_INT_SHIFT 20 -#define EVNTSEL_EN_SHIF 22 -#define EVNTSEL_INV_SHIF 23 -#define EVNTSEL_CMASK_SHIFT 24 - -#define EVNTSEL_EN (1 << EVNTSEL_EN_SHIF) -#define EVNTSEL_USR (1 << EVNTSEL_USR_SHIFT) -#define EVNTSEL_OS (1 << EVNTSEL_OS_SHIFT) -#define EVNTSEL_PC (1 << EVNTSEL_PC_SHIFT) -#define EVNTSEL_INT (1 << EVNTSEL_INT_SHIFT) -#define EVNTSEL_INV (1 << EVNTSEL_INV_SHIF) - #define N 1000000 // These values match the number of instructions and branches in the @@ -66,7 +44,6 @@ struct pmu_event { {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N} }; -#define PMU_CAP_FW_WRITES (1ULL << 13) static u64 gp_counter_base = MSR_IA32_PERFCTR0; char *buf; diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c index a641d79..e6d9823 100644 --- a/x86/pmu_lbr.c +++ b/x86/pmu_lbr.c @@ -1,18 +1,9 @@ #include "x86/msr.h" #include "x86/processor.h" +#include "x86/pmu.h" #include "x86/desc.h" #define N 1000000 -#define MAX_NUM_LBR_ENTRY 32 -#define DEBUGCTLMSR_LBR (1UL << 0) -#define PMU_CAP_LBR_FMT 0x3f - -#define MSR_LBR_NHM_FROM 0x00000680 -#define MSR_LBR_NHM_TO 0x000006c0 -#define MSR_LBR_CORE_FROM 0x00000040 -#define MSR_LBR_CORE_TO 0x00000060 -#define MSR_LBR_TOS 0x000001c9 -#define MSR_LBR_SELECT 0x000001c8 volatile int count; u32 lbr_from, lbr_to; diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c index aa2ecbb..fd36e43 100644 --- a/x86/vmx_tests.c +++ b/x86/vmx_tests.c @@ -9,6 +9,7 @@ #include "vmx.h" #include "msr.h" #include "processor.h" +#include "pmu.h" #include "vm.h" #include "pci.h" #include "fwcfg.h" From patchwork Mon Oct 24 09:12:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016921 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7BD77ECDFA1 for ; Mon, 24 Oct 2022 09:13:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229571AbiJXJNs (ORCPT ); Mon, 24 Oct 2022 05:13:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231168AbiJXJNj (ORCPT ); Mon, 24 Oct 2022 05:13:39 -0400 Received: from 
mail-pf1-x42e.google.com (mail-pf1-x42e.google.com [IPv6:2607:f8b0:4864:20::42e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5AB0E68CF7 for ; Mon, 24 Oct 2022 02:13:27 -0700 (PDT) Received: by mail-pf1-x42e.google.com with SMTP id g62so4456016pfb.10 for ; Mon, 24 Oct 2022 02:13:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SFaZTlHcQRjsut1B3hTFdCAleH/8KeDEboCMJpbltJY=; b=OdXXQ9l/AcIKaLGrGL3g8hXErt8vIm/WWPm1dNWLtlBDqgf9GZgu+sivR/Fm+Hizk7 uQ05z/67P127edRmQgmwKWfINEHB+ch3jWxMpPmhSvoAaQ0Tlz2TFBa4uAqR7E4uU0o6 HxwLB9kxElUo8bFMRG5SqJ7SdvOzb1dYOnQuI14gNk8j9xFLFlNcxwzL7qJlDL3JPlVB +8KipnlHEb1ruLQOv9h+n46ruGzwgnox6U9++7zfADacO8N5OATRC8c195o/AgEnkqMF LfrNF9GKbG8nZMg8xCir68xNS+o43/KTF4PCFeCUF0pJfXMjQVRvHNVFiJG0GjzcRBWC 34/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SFaZTlHcQRjsut1B3hTFdCAleH/8KeDEboCMJpbltJY=; b=nZhaKUDslAI0nQa7ithw43H27SrWvC22Y7r5IqOLKreo4kCWe56oo/kgNzYU0a5nqw LgVBrUpM/Ff6XccnuBiEC0b3kbEvB91i0hOLvxiJFqgzzO+2ELhXWbvhewk6UyekYh/t ViNfDYvpJa8ENU5rtViOI732MMqA4n0a6tNdIdHdBBCQWKMAXJlxzkyw0EI9FKt2jVh6 5plLA072vkooBDpcSEosJo71upr7kZoD/bcIU6nAXcPTcfnRD+QPuO7WRzYG0HVxc+3n iy2ajRuWNKTGZ1RZ3QOE1yg+mtDIbPiX19W8hHOh3NwYS0/qoRr3nXTem5lhtVASSNPm 8ajw== X-Gm-Message-State: ACrzQf2+Hmo7uaNVi2Oa0cTPsdxU1P8RtTAE7fLdfJHkNXUYyHbkoYyi vU5B90eMXOZYWjTZX525ZPgeWiT+NKFqAkOO X-Google-Smtp-Source: AMsMyM7fHyZ1hIs9Pr45JadiCY0b96PPVnevD/C0zJPdTCFChJlvxltoticSGBssd5do5lKKl9AVMQ== X-Received: by 2002:aa7:9292:0:b0:56b:c4d3:a723 with SMTP id j18-20020aa79292000000b0056bc4d3a723mr5168641pfa.57.1666602804465; Mon, 24 Oct 2022 02:13:24 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:24 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 14/24] x86/pmu: Read cpuid(10) in the pmu_init() to reduce VM-Exit Date: Mon, 24 Oct 2022 17:12:13 +0800 Message-Id: <20221024091223.42631-15-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu The type of CPUID accessors can also go in the common pmu. Re-reading cpuid(10) each time when needed, adding the overhead of eimulating CPUID isn't meaningless in the grand scheme of the test. A common "PMU init" routine would allow the library to provide helpers access to more PMU common information. 
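A minimal sketch of the caching pattern, assuming plain C with GCC-style inline asm rather than the literal lib/x86 code: CPUID leaf 0xA is read once at init, and the accessors decode the cached registers (EAX[7:0] is the PMU version, EAX[15:8] the number of GP counters), so later queries take no CPUID VM-exit.

#include <stdint.h>

struct cpuid_leaf { uint32_t a, b, c, d; };

static struct cpuid_leaf cpuid_10;		/* cached copy of CPUID.0xA */

static struct cpuid_leaf cpuid_read(uint32_t leaf)
{
	struct cpuid_leaf r;

	asm volatile("cpuid"
		     : "=a"(r.a), "=b"(r.b), "=c"(r.c), "=d"(r.d)
		     : "a"(leaf), "c"(0));
	return r;
}

static void pmu_init_cache(void)
{
	cpuid_10 = cpuid_read(0xa);		/* single CPUID VM-exit, at init */
}

static uint8_t pmu_version(void)	{ return cpuid_10.a & 0xff; }
static uint8_t pmu_nr_gp_counters(void)	{ return (cpuid_10.a >> 8) & 0xff; }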
Suggested-by: Sean Christopherson Signed-off-by: Like Xu --- lib/x86/pmu.c | 7 +++++++ lib/x86/pmu.h | 26 +++++++++++++------------- lib/x86/smp.c | 2 ++ 3 files changed, 22 insertions(+), 13 deletions(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 9d048ab..e8b9ae9 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -1 +1,8 @@ #include "pmu.h" + +struct cpuid cpuid_10; + +void pmu_init(void) +{ + cpuid_10 = cpuid(10); +} \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 078a974..7f4e797 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -33,9 +33,13 @@ #define EVNTSEL_INT (1 << EVNTSEL_INT_SHIFT) #define EVNTSEL_INV (1 << EVNTSEL_INV_SHIF) +extern struct cpuid cpuid_10; + +void pmu_init(void); + static inline u8 pmu_version(void) { - return cpuid(10).a & 0xff; + return cpuid_10.a & 0xff; } static inline bool this_cpu_has_pmu(void) @@ -50,35 +54,31 @@ static inline bool this_cpu_has_perf_global_ctrl(void) static inline u8 pmu_nr_gp_counters(void) { - return (cpuid(10).a >> 8) & 0xff; + return (cpuid_10.a >> 8) & 0xff; } static inline u8 pmu_gp_counter_width(void) { - return (cpuid(10).a >> 16) & 0xff; + return (cpuid_10.a >> 16) & 0xff; } static inline u8 pmu_gp_counter_mask_length(void) { - return (cpuid(10).a >> 24) & 0xff; + return (cpuid_10.a >> 24) & 0xff; } static inline u8 pmu_nr_fixed_counters(void) { - struct cpuid id = cpuid(10); - - if ((id.a & 0xff) > 1) - return id.d & 0x1f; + if ((cpuid_10.a & 0xff) > 1) + return cpuid_10.d & 0x1f; else return 0; } static inline u8 pmu_fixed_counter_width(void) { - struct cpuid id = cpuid(10); - - if ((id.a & 0xff) > 1) - return (id.d >> 5) & 0xff; + if ((cpuid_10.a & 0xff) > 1) + return (cpuid_10.d >> 5) & 0xff; else return 0; } @@ -86,7 +86,7 @@ static inline u8 pmu_fixed_counter_width(void) static inline bool pmu_gp_counter_is_available(int i) { /* CPUID.0xA.EBX bit is '1 if they counter is NOT available. 
*/ - return !(cpuid(10).b & BIT(i)); + return !(cpuid_10.b & BIT(i)); } static inline u64 this_cpu_perf_capabilities(void) diff --git a/lib/x86/smp.c b/lib/x86/smp.c index b9b91c7..29197fc 100644 --- a/lib/x86/smp.c +++ b/lib/x86/smp.c @@ -4,6 +4,7 @@ #include #include "processor.h" +#include "pmu.h" #include "atomic.h" #include "smp.h" #include "apic.h" @@ -155,6 +156,7 @@ void smp_init(void) on_cpu(i, setup_smp_id, 0); atomic_inc(&active_cpus); + pmu_init(); } static void do_reset_apic(void *data) From patchwork Mon Oct 24 09:12:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016923 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE4F9ECDFA1 for ; Mon, 24 Oct 2022 09:13:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231168AbiJXJNx (ORCPT ); Mon, 24 Oct 2022 05:13:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46778 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229972AbiJXJNo (ORCPT ); Mon, 24 Oct 2022 05:13:44 -0400 Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2374C69BFE for ; Mon, 24 Oct 2022 02:13:28 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id g28so8463476pfk.8 for ; Mon, 24 Oct 2022 02:13:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=BDxVmBPiY3G5DtYBGeO+YP21IvKefS3GSWM3I4vd7Lo=; b=iW5a1ThUt66QtDBNy2YNm6kq5EfVRDlawHglCEuDZvzncjdEmUmQuRk7STVqBD+RSd RMQgDmCPpzk531w+y6WISQGcz/IGiE6yQ7473nJ98CHruQFsO89a/PFCsb8KzqpTmwhc mlJj3UB7trCPnIkJbsV7ncqtpt6gZnFYXwKoolvRMv9/l5Av837Rd9C2cci8K7StJSHs qdb/bd5AIyRccvVQzl6viBEi6zXhXwynJzF3aX7HpFqrOYjxIGhWGQXQSCbrHZD1jcxb CXN43gd4ZF5pnb2XNlqgRE9mdkrQqkuzS7qYWoMlFNc/krD3EBtwkSitGjCGt6Op07eY 8ZDw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BDxVmBPiY3G5DtYBGeO+YP21IvKefS3GSWM3I4vd7Lo=; b=KempdjZ0KRjrctYQFJ0Im1s4NbtEw2OxO7vtBQF7At2cKNCzUrsft5n/BBnFUxh7qS 6koKepPiRBBzcDj2FGG/OkXeBfP5PP9YfX6gS1zY0rV3JwVHlnX1G9a/yy02f21QZzG/ 10fSwMJyR+C1/qfAFMwbWjgSuoVaUDs/ZWiyZm4be95iAL9dg9y2kWPn6pMMkU4mGJ7t N+CoEOmidX0WWfdqF5NiF3PYlNM17UGrdgO4DifM2vKNWS1JuJkdmCdRohExb6L7qCrb w+op3vlxdzhkdxRnElmLFLu0tbRPgCBNMVgt3LMfbF2fJJTi6WFeIRWj6IK0px/m1Puz URJw== X-Gm-Message-State: ACrzQf1JeGrezgW0q1hZ+KNUAb2UKe9mtlT1rU/V9b//a0P7FXR4ipf7 rNDN6jWEIJ9VDhaa+vhR2Vg= X-Google-Smtp-Source: AMsMyM66CEvOL3DQgH3BFlJIAHocznhu6f9B+0yMKhylH1zKk6Ai4LQp8Fo+vzFOkKGKYDanwpz7iw== X-Received: by 2002:aa7:88c4:0:b0:563:9fe9:5da9 with SMTP id k4-20020aa788c4000000b005639fe95da9mr32434577pff.41.1666602806074; Mon, 24 Oct 2022 02:13:26 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:25 -0700 (PDT) 
From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 15/24] x86/pmu: Initialize PMU perf_capabilities at pmu_init() Date: Mon, 24 Oct 2022 17:12:14 +0800 Message-Id: <20221024091223.42631-16-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Re-reading PERF_CAPABILITIES each time when needed, adding the overhead of eimulating RDMSR isn't also meaningless in the grand scheme of the test. Based on this, more helpers for full_writes and lbr_fmt can also be added to increase the readability of the test cases. Suggested-by: Sean Christopherson Signed-off-by: Like Xu --- lib/x86/pmu.c | 3 +++ lib/x86/pmu.h | 18 +++++++++++++++--- x86/pmu.c | 2 +- x86/pmu_lbr.c | 7 ++----- 4 files changed, 21 insertions(+), 9 deletions(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index e8b9ae9..35b7efb 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -1,8 +1,11 @@ #include "pmu.h" struct cpuid cpuid_10; +struct pmu_caps pmu; void pmu_init(void) { cpuid_10 = cpuid(10); + if (this_cpu_has(X86_FEATURE_PDCM)) + pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 7f4e797..95b17da 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -33,7 +33,12 @@ #define EVNTSEL_INT (1 << EVNTSEL_INT_SHIFT) #define EVNTSEL_INV (1 << EVNTSEL_INV_SHIF) +struct pmu_caps { + u64 perf_cap; +}; + extern struct cpuid cpuid_10; +extern struct pmu_caps pmu; void pmu_init(void); @@ -91,10 +96,17 @@ static inline bool pmu_gp_counter_is_available(int i) static inline u64 this_cpu_perf_capabilities(void) { - if (!this_cpu_has(X86_FEATURE_PDCM)) - return 0; + return pmu.perf_cap; +} - return rdmsr(MSR_IA32_PERF_CAPABILITIES); +static inline u64 pmu_lbr_version(void) +{ + return this_cpu_perf_capabilities() & PMU_CAP_LBR_FMT; +} + +static inline bool pmu_has_full_writes(void) +{ + return this_cpu_perf_capabilities() & PMU_CAP_FW_WRITES; } #endif /* _X86_PMU_H_ */ diff --git a/x86/pmu.c b/x86/pmu.c index 46e9fca..a6329cd 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -669,7 +669,7 @@ int main(int ac, char **av) check_counters(); - if (this_cpu_perf_capabilities() & PMU_CAP_FW_WRITES) { + if (pmu_has_full_writes()) { gp_counter_base = MSR_IA32_PMC0; report_prefix_push("full-width writes"); check_counters(); diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c index e6d9823..d013552 100644 --- a/x86/pmu_lbr.c +++ b/x86/pmu_lbr.c @@ -43,7 +43,6 @@ static bool test_init_lbr_from_exception(u64 index) int main(int ac, char **av) { - u64 perf_cap; int max, i; setup_vm(); @@ -63,15 +62,13 @@ int main(int ac, char **av) return report_summary(); } - perf_cap = this_cpu_perf_capabilities(); - - if (!(perf_cap & PMU_CAP_LBR_FMT)) { + if (!pmu_lbr_version()) { report_skip("(Architectural) LBR is not supported."); return report_summary(); } printf("PMU version: %d\n", pmu_version()); - printf("LBR version: %ld\n", perf_cap & PMU_CAP_LBR_FMT); + printf("LBR version: %ld\n", pmu_lbr_version()); /* Look for LBR from and to MSRs */ lbr_from = MSR_LBR_CORE_FROM; From patchwork Mon Oct 24 09:12:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016924 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
(2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1AE96C38A2D for ; Mon, 24 Oct 2022 09:14:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230505AbiJXJN5 (ORCPT ); Mon, 24 Oct 2022 05:13:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46496 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229635AbiJXJNo (ORCPT ); Mon, 24 Oct 2022 05:13:44 -0400 Received: from mail-pj1-x102e.google.com (mail-pj1-x102e.google.com [IPv6:2607:f8b0:4864:20::102e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C597669F53 for ; Mon, 24 Oct 2022 02:13:31 -0700 (PDT) Received: by mail-pj1-x102e.google.com with SMTP id v4-20020a17090a088400b00212cb0ed97eso8394145pjc.5 for ; Mon, 24 Oct 2022 02:13:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=JKgpAoNh2IEQOZ+LvJjkBexGM/DA1qHleGUwQYu4zQo=; b=KqgSBjYdaME/Oa9ujefmR28w8m4Mc6v4q8nvT4Tb9pf5BEmH/oytYTsGTU+rrr3n7W tGC+GSFECoRPHhHG/ftWto5wMRn0VzpEw3NdwtMEIUIONUi0SuIaR1XoogIbcRhEvVlt yujrNtr/QqOHEopYDO9sLsNLLlRwJOn3dbH2yVM93ADws6cu/gzIdI3ddydLJg3H8u4b DzlsOkWrwvcLu429Pv1ZteI/kNy7XEFacvlmJ0mE4973YyPnw//Gk9qp4o4B2BtsugyY 6wqwjp3M5cHSmQhh2Z+VB5FK9oDY3Rsp/7YDYJVwl8xvOZp9gylI2X4ZXNYovIEWEQc/ F2Mg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JKgpAoNh2IEQOZ+LvJjkBexGM/DA1qHleGUwQYu4zQo=; b=p6IOQX410Q6qRiFOYmVR3PheQwpNWL0JE58URmI0OPt+Vyro29yDNWXlOx6YCUbfbZ cbevFC6sAxo76i+Mj0GRfFZxfQKxYV7A2tR+NKbm+fGa6iHPYoSDxWPjO5YjpLaaqT+P Tm6cpSnPjFRlbxRblQW0Nw5SzJoCxjQgPBiNqy1IcD4ok+wjgtDe9ouaP+ULfvsiM4n+ 8/6OCCND/si6myVxE3CkkQ6d1C7lU9THp+1apVIwkFYzc4nuUTxIBv94CbaTQLC6OFmK bKR/IF5frU3WeSrRaQexnyXGfqYMmBuD0rxZpW47FQkNGAg+spV1gcGjVYW2uHYCk94D XSng== X-Gm-Message-State: ACrzQf1eA+yUaxlVySYzS610dxVCtJQSuFL07r6IjwbAi9BVbQKeS0jO Gg7SXFSFHz6P49/2tcMudj4= X-Google-Smtp-Source: AMsMyM5WR+CvU0aW0pc7zqruVS1CK8HBuxVr23B4XBUBC/vTW/wHkzZxRuDID+BWOdwqDaHWfVC6Fw== X-Received: by 2002:a17:90a:6045:b0:212:fe9a:5792 with SMTP id h5-20020a17090a604500b00212fe9a5792mr7403798pjm.178.1666602807702; Mon, 24 Oct 2022 02:13:27 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:27 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 16/24] x86/pmu: Add GP counter related helpers Date: Mon, 24 Oct 2022 17:12:15 +0800 Message-Id: <20221024091223.42631-17-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Continuing the theme of code reuse, tests shouldn't need to manually compute gp_counter_base and gp_event_select_base. 
They can be accessed directly after initialization and changed via setters when they need to be changed in some cases, e.g. full writes. Suggested-by: Sean Christopherson Signed-off-by: Like Xu --- lib/x86/pmu.c | 2 ++ lib/x86/pmu.h | 47 +++++++++++++++++++++++++++++++++++++++++++++++ x86/pmu.c | 50 ++++++++++++++++++++++++-------------------------- 3 files changed, 73 insertions(+), 26 deletions(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 35b7efb..c0d100d 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -8,4 +8,6 @@ void pmu_init(void) cpuid_10 = cpuid(10); if (this_cpu_has(X86_FEATURE_PDCM)) pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); + pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; + pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 95b17da..7487a30 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -35,6 +35,8 @@ struct pmu_caps { u64 perf_cap; + u32 msr_gp_counter_base; + u32 msr_gp_event_select_base; }; extern struct cpuid cpuid_10; @@ -42,6 +44,46 @@ extern struct pmu_caps pmu; void pmu_init(void); +static inline u32 gp_counter_base(void) +{ + return pmu.msr_gp_counter_base; +} + +static inline void set_gp_counter_base(u32 new_base) +{ + pmu.msr_gp_counter_base = new_base; +} + +static inline u32 gp_event_select_base(void) +{ + return pmu.msr_gp_event_select_base; +} + +static inline void set_gp_event_select_base(u32 new_base) +{ + pmu.msr_gp_event_select_base = new_base; +} + +static inline u32 gp_counter_msr(unsigned int i) +{ + return gp_counter_base() + i; +} + +static inline u32 gp_event_select_msr(unsigned int i) +{ + return gp_event_select_base() + i; +} + +static inline void write_gp_counter_value(unsigned int i, u64 value) +{ + wrmsr(gp_counter_msr(i), value); +} + +static inline void write_gp_event_select(unsigned int i, u64 value) +{ + wrmsr(gp_event_select_msr(i), value); +} + static inline u8 pmu_version(void) { return cpuid_10.a & 0xff; @@ -109,4 +151,9 @@ static inline bool pmu_has_full_writes(void) return this_cpu_perf_capabilities() & PMU_CAP_FW_WRITES; } +static inline bool pmu_use_full_writes(void) +{ + return gp_counter_base() == MSR_IA32_PMC0; +} + #endif /* _X86_PMU_H_ */ diff --git a/x86/pmu.c b/x86/pmu.c index a6329cd..589c7cb 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -44,8 +44,6 @@ struct pmu_event { {"fixed 3", MSR_CORE_PERF_FIXED_CTR0 + 2, 0.1*N, 30*N} }; -static u64 gp_counter_base = MSR_IA32_PERFCTR0; - char *buf; static inline void loop(void) @@ -84,7 +82,7 @@ static bool is_gp(pmu_counter_t *evt) static int event_to_global_idx(pmu_counter_t *cnt) { - return cnt->ctr - (is_gp(cnt) ? gp_counter_base : + return cnt->ctr - (is_gp(cnt) ? 
gp_counter_base() : (MSR_CORE_PERF_FIXED_CTR0 - FIXED_CNT_INDEX)); } @@ -121,8 +119,7 @@ static void __start_event(pmu_counter_t *evt, uint64_t count) evt->count = count; wrmsr(evt->ctr, evt->count); if (is_gp(evt)) - wrmsr(MSR_P6_EVNTSEL0 + event_to_global_idx(evt), - evt->config | EVNTSEL_EN); + write_gp_event_select(event_to_global_idx(evt), evt->config | EVNTSEL_EN); else { uint32_t ctrl = rdmsr(MSR_CORE_PERF_FIXED_CTR_CTRL); int shift = (evt->ctr - MSR_CORE_PERF_FIXED_CTR0) * 4; @@ -150,8 +147,7 @@ static void stop_event(pmu_counter_t *evt) { global_disable(evt); if (is_gp(evt)) - wrmsr(MSR_P6_EVNTSEL0 + event_to_global_idx(evt), - evt->config & ~EVNTSEL_EN); + write_gp_event_select(event_to_global_idx(evt), evt->config & ~EVNTSEL_EN); else { uint32_t ctrl = rdmsr(MSR_CORE_PERF_FIXED_CTR_CTRL); int shift = (evt->ctr - MSR_CORE_PERF_FIXED_CTR0) * 4; @@ -198,12 +194,12 @@ static void check_gp_counter(struct pmu_event *evt) { int nr_gp_counters = pmu_nr_gp_counters(); pmu_counter_t cnt = { - .ctr = gp_counter_base, .config = EVNTSEL_OS | EVNTSEL_USR | evt->unit_sel, }; int i; - for (i = 0; i < nr_gp_counters; i++, cnt.ctr++) { + for (i = 0; i < nr_gp_counters; i++) { + cnt.ctr = gp_counter_msr(i); measure_one(&cnt); report(verify_event(cnt.count, evt), "%s-%d", evt->name, i); } @@ -247,7 +243,7 @@ static void check_counters_many(void) if (!pmu_gp_counter_is_available(i)) continue; - cnt[n].ctr = gp_counter_base + n; + cnt[n].ctr = gp_counter_msr(n); cnt[n].config = EVNTSEL_OS | EVNTSEL_USR | gp_events[i % ARRAY_SIZE(gp_events)].unit_sel; n++; @@ -287,7 +283,7 @@ static void check_counter_overflow(void) uint64_t overflow_preset; int i; pmu_counter_t cnt = { - .ctr = gp_counter_base, + .ctr = gp_counter_msr(0), .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */, }; overflow_preset = measure_for_overflow(&cnt); @@ -297,18 +293,20 @@ static void check_counter_overflow(void) report_prefix_push("overflow"); - for (i = 0; i < nr_gp_counters + 1; i++, cnt.ctr++) { + for (i = 0; i < nr_gp_counters + 1; i++) { uint64_t status; int idx; cnt.count = overflow_preset; - if (gp_counter_base == MSR_IA32_PMC0) + if (pmu_use_full_writes()) cnt.count &= (1ull << pmu_gp_counter_width()) - 1; if (i == nr_gp_counters) { cnt.ctr = fixed_events[0].unit_sel; cnt.count = measure_for_overflow(&cnt); cnt.count &= (1ull << pmu_fixed_counter_width()) - 1; + } else { + cnt.ctr = gp_counter_msr(i); } if (i % 2) @@ -332,7 +330,7 @@ static void check_counter_overflow(void) static void check_gp_counter_cmask(void) { pmu_counter_t cnt = { - .ctr = gp_counter_base, + .ctr = gp_counter_msr(0), .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */, }; cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT); @@ -367,7 +365,7 @@ static void check_rdpmc(void) for (i = 0; i < nr_gp_counters; i++) { uint64_t x; pmu_counter_t cnt = { - .ctr = gp_counter_base + i, + .ctr = gp_counter_msr(i), .idx = i }; @@ -375,7 +373,7 @@ static void check_rdpmc(void) * Without full-width writes, only the low 32 bits are writable, * and the value is sign-extended. 
*/ - if (gp_counter_base == MSR_IA32_PERFCTR0) + if (gp_counter_base() == MSR_IA32_PERFCTR0) x = (uint64_t)(int64_t)(int32_t)val; else x = (uint64_t)(int64_t)val; @@ -383,7 +381,7 @@ static void check_rdpmc(void) /* Mask according to the number of supported bits */ x &= (1ull << gp_counter_width) - 1; - wrmsr(gp_counter_base + i, val); + write_gp_counter_value(i, val); report(rdpmc(i) == x, "cntr-%d", i); exc = test_for_exception(GP_VECTOR, do_rdpmc_fast, &cnt); @@ -417,7 +415,7 @@ static void check_running_counter_wrmsr(void) uint64_t status; uint64_t count; pmu_counter_t evt = { - .ctr = gp_counter_base, + .ctr = gp_counter_msr(0), .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel, }; @@ -425,7 +423,7 @@ static void check_running_counter_wrmsr(void) start_event(&evt); loop(); - wrmsr(gp_counter_base, 0); + write_gp_counter_value(0, 0); stop_event(&evt); report(evt.count < gp_events[1].min, "cntr"); @@ -436,10 +434,10 @@ static void check_running_counter_wrmsr(void) start_event(&evt); count = -1; - if (gp_counter_base == MSR_IA32_PMC0) + if (pmu_use_full_writes()) count &= (1ull << pmu_gp_counter_width()) - 1; - wrmsr(gp_counter_base, count); + write_gp_counter_value(0, count); loop(); stop_event(&evt); @@ -453,12 +451,12 @@ static void check_emulated_instr(void) { uint64_t status, instr_start, brnch_start; pmu_counter_t brnch_cnt = { - .ctr = MSR_IA32_PERFCTR0, + .ctr = gp_counter_msr(0), /* branch instructions */ .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[5].unit_sel, }; pmu_counter_t instr_cnt = { - .ctr = MSR_IA32_PERFCTR0 + 1, + .ctr = gp_counter_msr(1), /* instructions */ .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel, }; @@ -472,8 +470,8 @@ static void check_emulated_instr(void) brnch_start = -EXPECTED_BRNCH; instr_start = -EXPECTED_INSTR; - wrmsr(MSR_IA32_PERFCTR0, brnch_start); - wrmsr(MSR_IA32_PERFCTR0 + 1, instr_start); + write_gp_counter_value(0, brnch_start); + write_gp_counter_value(1, instr_start); // KVM_FEP is a magic prefix that forces emulation so // 'KVM_FEP "jne label\n"' just counts as a single instruction. 
asm volatile( @@ -670,7 +668,7 @@ int main(int ac, char **av) check_counters(); if (pmu_has_full_writes()) { - gp_counter_base = MSR_IA32_PMC0; + set_gp_counter_base(MSR_IA32_PMC0); report_prefix_push("full-width writes"); check_counters(); check_gp_counters_write_width(); From patchwork Mon Oct 24 09:12:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016931 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6990FC38A2D for ; Mon, 24 Oct 2022 09:14:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230106AbiJXJO4 (ORCPT ); Mon, 24 Oct 2022 05:14:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231262AbiJXJOF (ORCPT ); Mon, 24 Oct 2022 05:14:05 -0400 Received: from mail-oi1-x22c.google.com (mail-oi1-x22c.google.com [IPv6:2607:f8b0:4864:20::22c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE32968CC1 for ; Mon, 24 Oct 2022 02:13:44 -0700 (PDT) Received: by mail-oi1-x22c.google.com with SMTP id p127so10180847oih.9 for ; Mon, 24 Oct 2022 02:13:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=W0exMknsHAhrgvmnsuI1Ey6Y7vpYTw5SLOBYL5aM5Jk=; b=WtxS9JPp1hXw2GSTQ2s3TI1XBjBD6fIxtOlg70S8Kf4rv7FXW3eweBAkRmMvpcgws9 tNcfdHdzdLL24gyVHX523elXKo7sFqV2Ba9YQorJMKTnBhW966TxVVNlME6VhNnpkoJN 501N3yRMEYlFPl5bElx9NdAAh3TzSi/VkcX1IXdZ+Tlod8ab+Y2B7Okp+aVKf/dXTRDb UstUGWvFKdLvXFbTuqNqFKt+vzd+i/xKwjxMlacl5zj/oEYxsDOYR6K5ZSazxz2Vouzn l+RaC+GuQaqA7c9YNRMtAnI8r5AZK2wxgq3x1X1x+T2ipIJWZuonmtZzfEBJTlKFTPFH VMVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=W0exMknsHAhrgvmnsuI1Ey6Y7vpYTw5SLOBYL5aM5Jk=; b=NzOcu86GCxPVH2+YP/sTwB4xilqt7Q/nrgSgU6CLkWcXtD8P43n89E007n7AqsKaNn 7Mb0upiNdnDHEB6cjaYn5jMG3FKWwEcJSDnxcaXmP1HdFOtRCd4Gt6ZH0a/ugDx/JEBQ lv58puPz6QsdzcgXlYAmrMt1d7qW4oEh/S7W24CN8xYsbSWlvVahdRm4BZctMqFsDAkR rmjoWNuFrLqiSvpA4s5g3ZJydT/EvJOcls9yr46Gn0I9IWjKUud0MrLqTsSbIbkbqOCN gHS+9vLvQdgArM9nEBrEkW8pGOxN4Enu18hWyWdyHuW7DaVNifSaLWPm0b2Byqgk8Prk 5wIg== X-Gm-Message-State: ACrzQf0/+AvYOf8BZI+twXYXgwF0jaTNPchwldru1hZ4pghhV9wS3IHq WR9oZIKh2CCLFTK2VT23MlttVEdMGtF0Mg6S X-Google-Smtp-Source: AMsMyM5S/gaqAUvKCVpIP7PB6eXwqWlCU4W/6EdGOMi6SQZDAqlJBYkE3uXngjWWEH8Vn/vSdDK/Gw== X-Received: by 2002:a17:90b:1b05:b0:20d:3b10:3800 with SMTP id nu5-20020a17090b1b0500b0020d3b103800mr73181402pjb.91.1666602809264; Mon, 24 Oct 2022 02:13:29 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:29 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 17/24] x86/pmu: Add GP/Fixed counters reset helpers Date: 
Mon, 24 Oct 2022 17:12:16 +0800 Message-Id: <20221024091223.42631-18-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu In generic pmu testing, it is very common to initialize the test env by resetting counters registers. Add these helpers to for code reusability. Signed-off-by: Like Xu --- lib/x86/pmu.c | 1 + lib/x86/pmu.h | 38 ++++++++++++++++++++++++++++++++++++++ x86/pmu.c | 2 +- 3 files changed, 40 insertions(+), 1 deletion(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index c0d100d..0ce1691 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -10,4 +10,5 @@ void pmu_init(void) pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; + reset_all_counters(); } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 7487a30..564b672 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -156,4 +156,42 @@ static inline bool pmu_use_full_writes(void) return gp_counter_base() == MSR_IA32_PMC0; } +static inline u32 fixed_counter_msr(unsigned int i) +{ + return MSR_CORE_PERF_FIXED_CTR0 + i; +} + +static inline void write_fixed_counter_value(unsigned int i, u64 value) +{ + wrmsr(fixed_counter_msr(i), value); +} + +static inline void reset_all_gp_counters(void) +{ + unsigned int idx; + + for (idx = 0; idx < pmu_nr_gp_counters(); idx++) { + write_gp_event_select(idx, 0); + write_gp_counter_value(idx, 0); + } +} + +static inline void reset_all_fixed_counters(void) +{ + unsigned int idx; + + if (!pmu_nr_fixed_counters()) + return; + + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, 0); + for (idx = 0; idx < pmu_nr_fixed_counters(); idx++) + write_fixed_counter_value(idx, 0); +} + +static inline void reset_all_counters(void) +{ + reset_all_gp_counters(); + reset_all_fixed_counters(); +} + #endif /* _X86_PMU_H_ */ diff --git a/x86/pmu.c b/x86/pmu.c index 589c7cb..7786b49 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -397,7 +397,7 @@ static void check_rdpmc(void) .idx = i }; - wrmsr(MSR_CORE_PERF_FIXED_CTR0 + i, x); + write_fixed_counter_value(i, x); report(rdpmc(i | (1 << 30)) == x, "fixed cntr-%d", i); exc = test_for_exception(GP_VECTOR, do_rdpmc_fast, &cnt); From patchwork Mon Oct 24 09:12:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016926 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82BD3ECDFA1 for ; Mon, 24 Oct 2022 09:14:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231238AbiJXJOD (ORCPT ); Mon, 24 Oct 2022 05:14:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46446 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230250AbiJXJNq (ORCPT ); Mon, 24 Oct 2022 05:13:46 -0400 Received: from mail-pj1-x1029.google.com (mail-pj1-x1029.google.com [IPv6:2607:f8b0:4864:20::1029]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8FC835D0E3 for ; Mon, 24 Oct 2022 02:13:33 -0700 (PDT) Received: by mail-pj1-x1029.google.com with SMTP id l6so4084104pjj.0 for ; Mon, 24 Oct 2022 02:13:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Cq7UpDqVP2pq0R7JK/J7HN1k4KkbHGM+tLfSJ1nuzPM=; b=Y+tPcHWDAtkTQ1Km1fi5NpiRDDtqfPbyaN+DXZ/rSqweY/XsQQiu69Zfp68e7A+B4m 6ZNAZjIX22DaCaxAAt8LwAQdVH7MIs3rL2sLRaN0j3ChfKKzJmudck3bQSidxq/8Rp/J SMlhAGRqwIyvQ5fprgxCRuZBXRkNpN0nZI7qNprEO1QowTnNjKog5oc4nbKnQco9qxpC 17S7ucHC++kXCnW++le03Bv+NrRQgzJ6Am94SBgN6swsXhXk75XATCLLj54eB6kJikpG Cd96Zccdpw2g/+ymBFjFWXCmF3repdRKdkdmTNi3Gl9bvvSnhyeDz3o6zoMvB+LGUTWz lCBg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Cq7UpDqVP2pq0R7JK/J7HN1k4KkbHGM+tLfSJ1nuzPM=; b=nYfRxveDk/qSSn/nefUoPLggurRdCyb3U3CcEc6sHMk1zyU/rdI/w7gpdffsxaIhTd WqRm9+rr7azSG1f9dpxPMoZUdStRR11JQkEik1Qfu/kJu6ticB02VD7bSAitcp4O0hio LMmZQylnqERHeGdGmc2QUgjo14fgy7jD7eaQuzgSB4X1g7+3sA17Kp/e9DvOhVCgEGhY 88nvUxRiIUKYHNyo8CD1VJ+SnZvReasP0udY8Y6c4zkvWajn4y1HFLlQCUvwUYMTECsu Kgpj1qZS301YiB3VJdFAVCeAbnHXohKK7LI0Eb9DekybGOzM77yMFsAvXAOsgAsnZtIY M7wA== X-Gm-Message-State: ACrzQf2tfIt2ln1H8e7GECFz1opyZATPir+r7XQJiTfP5jeYal0bVi3a 0CBSQFjEV9PZofscSXnKm8Q= X-Google-Smtp-Source: AMsMyM7bNH05BvRIAkZLQgRzmL4gdynVW4Z2bJc7dA31VJ50Mwr8gypUQ54X/SHKXg7x1+XMIcJOHA== X-Received: by 2002:a17:903:124c:b0:184:cb7e:67c5 with SMTP id u12-20020a170903124c00b00184cb7e67c5mr32945163plh.117.1666602810855; Mon, 24 Oct 2022 02:13:30 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:30 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 18/24] x86/pmu: Add a set of helpers related to global registers Date: Mon, 24 Oct 2022 17:12:17 +0800 Message-Id: <20221024091223.42631-19-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Although AMD and Intel's pmu have the same semantics in terms of global control features (including ctl and status), their msr indexes are not the same, and the tests can be fully reused by adding helpers. 
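A condensed sketch of the indirection (the real helpers live in lib/x86/pmu.h; the struct here is trimmed to the three global-control MSRs and the includes mirror x86/pmu.c): tests only go through the cached MSR indexes, which pmu_init() points at the Intel architectural registers, so a vendor-specific setup path can later substitute its own indexes without touching the tests.

#include "libcflat.h"
#include "x86/msr.h"
#include "x86/processor.h"

struct pmu_global_msrs {
	u32 ctl;		/* per-counter enable bits */
	u32 status;		/* overflow status (read-only) */
	u32 status_clr;		/* write-1-to-clear for status bits */
};

/* Intel architectural defaults, as assigned in pmu_init() */
static struct pmu_global_msrs global = {
	.ctl		= MSR_CORE_PERF_GLOBAL_CTRL,
	.status		= MSR_CORE_PERF_GLOBAL_STATUS,
	.status_clr	= MSR_CORE_PERF_GLOBAL_OVF_CTRL,
};

static inline void global_enable_counter(int idx)
{
	wrmsr(global.ctl, rdmsr(global.ctl) | BIT_ULL(idx));
}

static inline void global_clear_status(void)
{
	wrmsr(global.status_clr, rdmsr(global.status));
}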
Signed-off-by: Like Xu --- lib/x86/pmu.c | 3 +++ lib/x86/pmu.h | 33 +++++++++++++++++++++++++++++++++ x86/pmu.c | 31 +++++++++++++------------------ 3 files changed, 49 insertions(+), 18 deletions(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 0ce1691..3b6be37 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -10,5 +10,8 @@ void pmu_init(void) pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; + pmu.msr_global_status = MSR_CORE_PERF_GLOBAL_STATUS; + pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL; + pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL; reset_all_counters(); } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 564b672..ef83934 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -37,6 +37,9 @@ struct pmu_caps { u64 perf_cap; u32 msr_gp_counter_base; u32 msr_gp_event_select_base; + u32 msr_global_status; + u32 msr_global_ctl; + u32 msr_global_status_clr; }; extern struct cpuid cpuid_10; @@ -194,4 +197,34 @@ static inline void reset_all_counters(void) reset_all_fixed_counters(); } +static inline void pmu_clear_global_status(void) +{ + wrmsr(pmu.msr_global_status_clr, rdmsr(pmu.msr_global_status)); +} + +static inline u64 pmu_get_global_status(void) +{ + return rdmsr(pmu.msr_global_status); +} + +static inline u64 pmu_get_global_enable(void) +{ + return rdmsr(pmu.msr_global_ctl); +} + +static inline void pmu_set_global_enable(u64 bitmask) +{ + wrmsr(pmu.msr_global_ctl, bitmask); +} + +static inline void pmu_reset_global_enable(void) +{ + wrmsr(pmu.msr_global_ctl, 0); +} + +static inline void pmu_ack_global_status(u64 value) +{ + wrmsr(pmu.msr_global_status_clr, value); +} + #endif /* _X86_PMU_H_ */ diff --git a/x86/pmu.c b/x86/pmu.c index 7786b49..015591f 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -103,15 +103,12 @@ static struct pmu_event* get_counter_event(pmu_counter_t *cnt) static void global_enable(pmu_counter_t *cnt) { cnt->idx = event_to_global_idx(cnt); - - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_CTRL) | - (1ull << cnt->idx)); + pmu_set_global_enable(pmu_get_global_enable() | BIT_ULL(cnt->idx)); } static void global_disable(pmu_counter_t *cnt) { - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_CTRL) & - ~(1ull << cnt->idx)); + pmu_set_global_enable(pmu_get_global_enable() & ~BIT_ULL(cnt->idx)); } static void __start_event(pmu_counter_t *evt, uint64_t count) @@ -289,7 +286,7 @@ static void check_counter_overflow(void) overflow_preset = measure_for_overflow(&cnt); /* clear status before test */ - wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_STATUS)); + pmu_clear_global_status(); report_prefix_push("overflow"); @@ -316,10 +313,10 @@ static void check_counter_overflow(void) idx = event_to_global_idx(&cnt); __measure(&cnt, cnt.count); report(cnt.count == 1, "cntr-%d", i); - status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS); + status = pmu_get_global_status(); report(status & (1ull << idx), "status-%d", i); - wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, status); - status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS); + pmu_ack_global_status(status); + status = pmu_get_global_status(); report(!(status & (1ull << idx)), "status clear-%d", i); report(check_irq() == (i % 2), "irq-%d", i); } @@ -428,8 +425,7 @@ static void check_running_counter_wrmsr(void) report(evt.count < gp_events[1].min, "cntr"); /* clear status before overflow test */ - wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, - rdmsr(MSR_CORE_PERF_GLOBAL_STATUS)); + 
pmu_clear_global_status(); start_event(&evt); @@ -441,8 +437,8 @@ static void check_running_counter_wrmsr(void) loop(); stop_event(&evt); - status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS); - report(status & 1, "status"); + status = pmu_get_global_status(); + report(status & 1, "status msr bit"); report_prefix_pop(); } @@ -462,8 +458,7 @@ static void check_emulated_instr(void) }; report_prefix_push("emulated instruction"); - wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, - rdmsr(MSR_CORE_PERF_GLOBAL_STATUS)); + pmu_clear_global_status(); start_event(&brnch_cnt); start_event(&instr_cnt); @@ -497,7 +492,7 @@ static void check_emulated_instr(void) : : "eax", "ebx", "ecx", "edx"); - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + pmu_reset_global_enable(); stop_event(&brnch_cnt); stop_event(&instr_cnt); @@ -509,7 +504,7 @@ static void check_emulated_instr(void) report(brnch_cnt.count - brnch_start >= EXPECTED_BRNCH, "branch count"); // Additionally check that those counters overflowed properly. - status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS); + status = pmu_get_global_status(); report(status & 1, "branch counter overflow"); report(status & 2, "instruction counter overflow"); @@ -598,7 +593,7 @@ static void set_ref_cycle_expectations(void) if (!pmu_nr_gp_counters() || !pmu_gp_counter_is_available(2)) return; - wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + pmu_reset_global_enable(); t0 = fenced_rdtsc(); start_event(&cnt); From patchwork Mon Oct 24 09:12:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016922 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4668FA373E for ; Mon, 24 Oct 2022 09:13:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230092AbiJXJNv (ORCPT ); Mon, 24 Oct 2022 05:13:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46490 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230355AbiJXJNi (ORCPT ); Mon, 24 Oct 2022 05:13:38 -0400 Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com [IPv6:2607:f8b0:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E79C66A4A2 for ; Mon, 24 Oct 2022 02:13:34 -0700 (PDT) Received: by mail-pl1-x634.google.com with SMTP id jo13so4339524plb.13 for ; Mon, 24 Oct 2022 02:13:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hSkj+WOujRg4RHSzSLHE3d4VitztuhVOwma0quWKcQs=; b=hyvTG+4tbavxhnPhjBD7r51AEUFAGFo2kHMQwifgsJZj3WWx5Qye1ujFxbRS+npOkR W12ZeJB7ZxPs8UpG7QZR8FJtscmr0EbdJg/xADKqKHYKW6EJ5q5kera2BKfJe58m9cs5 0+8VsD+DQ2cVBmYpIz8JPmfGgklI2GnEMsT2H8DuT7aRUwEojArAm5T9TjBWhpznl4kX sFRQxiKvMN/nBJjGbM26/T8X4UsRLgnGlVKISo7ERKLGtub7kBp5fQKc2dkduF6EDZ5Y vZboePB9sQtUMdF35GJJ8WrGNk0eNKAJxHH6oE2AP3BCHrmrho3isAaWOL1QdJWsNqG2 qYWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hSkj+WOujRg4RHSzSLHE3d4VitztuhVOwma0quWKcQs=; b=MsQHvIROH7+prInE09fFxeUDcdPrAdCsTxYCL8nwzMLTZU+MowO8EcbixY3VGVSRUn 
Lb+P3DYGAhtaNxuBjijlW0YVCxWLEjfPcFfEqjSnD7G/YJvFMHR8Z+5mPnQw4Xu0qK/j bqabkM4kblS4scKu2JmdxIZfmGJPYtKH55pa0oV5EJEgG6v2J+7Tq77z/UrWTDmDS46M /OD3rGPjUeuaUKLhf8XyOkQHH1bwIk5HILKdKXxBcHiZqCzT3QYWm5f95ZxS0H3yhYwN /xLhFAgaxW0ikLSBEpphglAE7Wr+x9wnOPNog6cYMbM3KhFGZjX2qOtk0oIdV/MHtyJv R68Q== X-Gm-Message-State: ACrzQf2/Gr0O0Np0PGV6WON89Bqyhb/02jCYFpT5FWcKpwEiJHeuqUWc kZs4RN4NB4IujWDbOeD11dqvSSSTbtvyKCKJ X-Google-Smtp-Source: AMsMyM6e63k/DVPeOawUSLE70Wao84sngPPMOOvAdWESE09QvtC2DL2RpDHS+MA6s0OKdNEpBlo8Mg== X-Received: by 2002:a17:90b:1f87:b0:212:533:3184 with SMTP id so7-20020a17090b1f8700b0021205333184mr25537554pjb.3.1666602812499; Mon, 24 Oct 2022 02:13:32 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:32 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 19/24] x86: Add tests for Guest Processor Event Based Sampling (PEBS) Date: Mon, 24 Oct 2022 17:12:18 +0800 Message-Id: <20221024091223.42631-20-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu This unit-test is intended to test the KVM's support for the Processor Event Based Sampling (PEBS) which is another PMU feature on Intel processors (start from Ice Lake Server). If a bit in PEBS_ENABLE is set to 1, its corresponding counter will write at least one PEBS records (including partial state of the vcpu at the time of the current hardware event) to the guest memory on counter overflow, and trigger an interrupt at a specific DS state. The format of a PEBS record can be configured by another register. These tests cover most usage scenarios, for example there are some specially constructed scenarios (not a typical behaviour of Linux PEBS driver). It lowers the threshold for others to understand this feature and opens up more exploration of KVM implementation or hw feature itself. 
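As a reading aid for the test below: each PEBS record starts with a basic header whose format_size field packs two things, the record size in bytes in bits [63:48] and, with adaptive PEBS, an echo of the PEBS_DATA_CFG value in bits [47:0]. A minimal sketch of that split, using a hypothetical decode_format_size() helper rather than anything from the patch itself:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: splits format_size the same way the
     * check_pebs_records() loop in this patch does. */
    static void decode_format_size(uint64_t format_size)
    {
            uint64_t record_bytes = format_size >> 48;              /* bits [63:48] */
            uint64_t data_cfg = format_size & ((1ULL << 48) - 1);   /* bits [47:0]  */

            printf("record size: %llu bytes, data cfg: 0x%llx\n",
                   (unsigned long long)record_bytes,
                   (unsigned long long)data_cfg);
    }

The test walks the DS buffer record by record using exactly this size field, so a malformed size would also break the traversal, which is part of what the record checks are meant to catch.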
Signed-off-by: Like Xu --- lib/x86/msr.h | 1 + lib/x86/pmu.h | 29 +++ x86/Makefile.x86_64 | 1 + x86/pmu_pebs.c | 433 ++++++++++++++++++++++++++++++++++++++++++++ x86/unittests.cfg | 8 + 5 files changed, 472 insertions(+) create mode 100644 x86/pmu_pebs.c diff --git a/lib/x86/msr.h b/lib/x86/msr.h index bbe29fd..68d8837 100644 --- a/lib/x86/msr.h +++ b/lib/x86/msr.h @@ -52,6 +52,7 @@ #define MSR_IA32_MCG_CTL 0x0000017b #define MSR_IA32_PEBS_ENABLE 0x000003f1 +#define MSR_PEBS_DATA_CFG 0x000003f2 #define MSR_IA32_DS_AREA 0x00000600 #define MSR_IA32_PERF_CAPABILITIES 0x00000345 diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index ef83934..9ba2419 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -14,6 +14,8 @@ #define PMU_CAP_LBR_FMT 0x3f #define PMU_CAP_FW_WRITES (1ULL << 13) +#define PMU_CAP_PEBS_BASELINE (1ULL << 14) +#define PERF_CAP_PEBS_FORMAT 0xf00 #define EVNSEL_EVENT_SHIFT 0 #define EVNTSEL_UMASK_SHIFT 8 @@ -33,6 +35,18 @@ #define EVNTSEL_INT (1 << EVNTSEL_INT_SHIFT) #define EVNTSEL_INV (1 << EVNTSEL_INV_SHIF) +#define GLOBAL_STATUS_BUFFER_OVF_BIT 62 +#define GLOBAL_STATUS_BUFFER_OVF BIT_ULL(GLOBAL_STATUS_BUFFER_OVF_BIT) + +#define PEBS_DATACFG_MEMINFO BIT_ULL(0) +#define PEBS_DATACFG_GP BIT_ULL(1) +#define PEBS_DATACFG_XMMS BIT_ULL(2) +#define PEBS_DATACFG_LBRS BIT_ULL(3) + +#define ICL_EVENTSEL_ADAPTIVE (1ULL << 34) +#define PEBS_DATACFG_LBR_SHIFT 24 +#define MAX_NUM_LBR_ENTRY 32 + struct pmu_caps { u64 perf_cap; u32 msr_gp_counter_base; @@ -227,4 +241,19 @@ static inline void pmu_ack_global_status(u64 value) wrmsr(pmu.msr_global_status_clr, value); } +static inline bool pmu_version_support_pebs(void) +{ + return pmu_version() > 1; +} + +static inline u8 pmu_pebs_format(void) +{ + return (pmu.perf_cap & PERF_CAP_PEBS_FORMAT ) >> 8; +} + +static inline bool pebs_has_baseline(void) +{ + return pmu.perf_cap & PMU_CAP_PEBS_BASELINE; +} + #endif /* _X86_PMU_H_ */ diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64 index 8f9463c..bd827fe 100644 --- a/x86/Makefile.x86_64 +++ b/x86/Makefile.x86_64 @@ -33,6 +33,7 @@ tests += $(TEST_DIR)/vmware_backdoors.$(exe) tests += $(TEST_DIR)/rdpru.$(exe) tests += $(TEST_DIR)/pks.$(exe) tests += $(TEST_DIR)/pmu_lbr.$(exe) +tests += $(TEST_DIR)/pmu_pebs.$(exe) ifeq ($(CONFIG_EFI),y) tests += $(TEST_DIR)/amd_sev.$(exe) diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c new file mode 100644 index 0000000..b318a2c --- /dev/null +++ b/x86/pmu_pebs.c @@ -0,0 +1,433 @@ +#include "x86/msr.h" +#include "x86/processor.h" +#include "x86/pmu.h" +#include "x86/isr.h" +#include "x86/apic.h" +#include "x86/apic-defs.h" +#include "x86/desc.h" +#include "alloc.h" + +#include "vm.h" +#include "types.h" +#include "processor.h" +#include "vmalloc.h" +#include "alloc_page.h" + +/* bits [63:48] provides the size of the current record in bytes */ +#define RECORD_SIZE_OFFSET 48 + +static unsigned int max_nr_gp_events; +static unsigned long *ds_bufer; +static unsigned long *pebs_buffer; +static u64 ctr_start_val; +static bool has_baseline; + +struct debug_store { + u64 bts_buffer_base; + u64 bts_index; + u64 bts_absolute_maximum; + u64 bts_interrupt_threshold; + u64 pebs_buffer_base; + u64 pebs_index; + u64 pebs_absolute_maximum; + u64 pebs_interrupt_threshold; + u64 pebs_event_reset[64]; +}; + +struct pebs_basic { + u64 format_size; + u64 ip; + u64 applicable_counters; + u64 tsc; +}; + +struct pebs_meminfo { + u64 address; + u64 aux; + u64 latency; + u64 tsx_tuning; +}; + +struct pebs_gprs { + u64 flags, ip, ax, cx, dx, bx, sp, bp, si, di; + u64 r8, r9, r10, r11, r12, r13, r14, r15; 
+}; + +struct pebs_xmm { + u64 xmm[16*2]; /* two entries for each register */ +}; + +struct lbr_entry { + u64 from; + u64 to; + u64 info; +}; + +enum pmc_type { + GP = 0, + FIXED, +}; + +static uint32_t intel_arch_events[] = { + 0x00c4, /* PERF_COUNT_HW_BRANCH_INSTRUCTIONS */ + 0x00c5, /* PERF_COUNT_HW_BRANCH_MISSES */ + 0x0300, /* PERF_COUNT_HW_REF_CPU_CYCLES */ + 0x003c, /* PERF_COUNT_HW_CPU_CYCLES */ + 0x00c0, /* PERF_COUNT_HW_INSTRUCTIONS */ + 0x013c, /* PERF_COUNT_HW_BUS_CYCLES */ + 0x4f2e, /* PERF_COUNT_HW_CACHE_REFERENCES */ + 0x412e, /* PERF_COUNT_HW_CACHE_MISSES */ +}; + +static u64 pebs_data_cfgs[] = { + PEBS_DATACFG_MEMINFO, + PEBS_DATACFG_GP, + PEBS_DATACFG_XMMS, + PEBS_DATACFG_LBRS | ((MAX_NUM_LBR_ENTRY -1) << PEBS_DATACFG_LBR_SHIFT), +}; + +/* Iterating each counter value is a waste of time, pick a few typical values. */ +static u64 counter_start_values[] = { + /* if PEBS counter doesn't overflow at all */ + 0, + 0xfffffffffff0, + /* normal counter overflow to have PEBS records */ + 0xfffffffffffe, + /* test whether emulated instructions should trigger PEBS */ + 0xffffffffffff, +}; + +static unsigned int get_adaptive_pebs_record_size(u64 pebs_data_cfg) +{ + unsigned int sz = sizeof(struct pebs_basic); + + if (!has_baseline) + return sz; + + if (pebs_data_cfg & PEBS_DATACFG_MEMINFO) + sz += sizeof(struct pebs_meminfo); + if (pebs_data_cfg & PEBS_DATACFG_GP) + sz += sizeof(struct pebs_gprs); + if (pebs_data_cfg & PEBS_DATACFG_XMMS) + sz += sizeof(struct pebs_xmm); + if (pebs_data_cfg & PEBS_DATACFG_LBRS) + sz += MAX_NUM_LBR_ENTRY * sizeof(struct lbr_entry); + + return sz; +} + +static void cnt_overflow(isr_regs_t *regs) +{ + apic_write(APIC_EOI, 0); +} + +static inline void workload(void) +{ + asm volatile( + "mov $0x0, %%eax\n" + "cmp $0x0, %%eax\n" + "jne label2\n" + "jne label2\n" + "jne label2\n" + "jne label2\n" + "mov $0x0, %%eax\n" + "cmp $0x0, %%eax\n" + "jne label2\n" + "jne label2\n" + "jne label2\n" + "jne label2\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "label2:\n" + : + : + : "eax", "ebx", "ecx", "edx"); +} + +static inline void workload2(void) +{ + asm volatile( + "mov $0x0, %%eax\n" + "cmp $0x0, %%eax\n" + "jne label3\n" + "jne label3\n" + "jne label3\n" + "jne label3\n" + "mov $0x0, %%eax\n" + "cmp $0x0, %%eax\n" + "jne label3\n" + "jne label3\n" + "jne label3\n" + "jne label3\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "mov $0xa, %%eax\n" + "cpuid\n" + "label3:\n" + : + : + : "eax", "ebx", "ecx", "edx"); +} + +static void alloc_buffers(void) +{ + ds_bufer = alloc_page(); + force_4k_page(ds_bufer); + memset(ds_bufer, 0x0, PAGE_SIZE); + + pebs_buffer = alloc_page(); + force_4k_page(pebs_buffer); + memset(pebs_buffer, 0x0, PAGE_SIZE); +} + +static void free_buffers(void) +{ + if (ds_bufer) + free_page(ds_bufer); + + if (pebs_buffer) + free_page(pebs_buffer); +} + +static void pebs_enable(u64 bitmask, u64 pebs_data_cfg) +{ + static struct debug_store *ds; + u64 baseline_extra_ctrl = 0, fixed_ctr_ctrl = 0; + unsigned int idx; + + if (has_baseline) + wrmsr(MSR_PEBS_DATA_CFG, pebs_data_cfg); + + ds = (struct debug_store *)ds_bufer; + ds->pebs_index = ds->pebs_buffer_base = (unsigned long)pebs_buffer; + ds->pebs_absolute_maximum = (unsigned long)pebs_buffer + PAGE_SIZE; + 
ds->pebs_interrupt_threshold = ds->pebs_buffer_base + + get_adaptive_pebs_record_size(pebs_data_cfg); + + for (idx = 0; idx < pmu_nr_fixed_counters(); idx++) { + if (!(BIT_ULL(FIXED_CNT_INDEX + idx) & bitmask)) + continue; + if (has_baseline) + baseline_extra_ctrl = BIT(FIXED_CNT_INDEX + idx * 4); + write_fixed_counter_value(idx, ctr_start_val); + fixed_ctr_ctrl |= (0xbULL << (idx * 4) | baseline_extra_ctrl); + } + if (fixed_ctr_ctrl) + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, fixed_ctr_ctrl); + + for (idx = 0; idx < max_nr_gp_events; idx++) { + if (!(BIT_ULL(idx) & bitmask)) + continue; + if (has_baseline) + baseline_extra_ctrl = ICL_EVENTSEL_ADAPTIVE; + write_gp_event_select(idx, EVNTSEL_EN | EVNTSEL_OS | EVNTSEL_USR | + intel_arch_events[idx] | baseline_extra_ctrl); + write_gp_counter_value(idx, ctr_start_val); + } + + wrmsr(MSR_IA32_DS_AREA, (unsigned long)ds_bufer); + wrmsr(MSR_IA32_PEBS_ENABLE, bitmask); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, bitmask); +} + +static void reset_pebs(void) +{ + memset(ds_bufer, 0x0, PAGE_SIZE); + memset(pebs_buffer, 0x0, PAGE_SIZE); + wrmsr(MSR_IA32_PEBS_ENABLE, 0); + wrmsr(MSR_IA32_DS_AREA, 0); + if (has_baseline) + wrmsr(MSR_PEBS_DATA_CFG, 0); + + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_STATUS)); + + reset_all_counters(); +} + +static void pebs_disable(unsigned int idx) +{ + /* + * If we only clear the PEBS_ENABLE bit, the counter will continue to increment. + * In this very tiny time window, if the counter overflows no pebs record will be generated, + * but a normal counter irq. Test this fully with two ways. + */ + if (idx % 2) + wrmsr(MSR_IA32_PEBS_ENABLE, 0); + + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); +} + +static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg) +{ + struct pebs_basic *pebs_rec = (struct pebs_basic *)pebs_buffer; + struct debug_store *ds = (struct debug_store *)ds_bufer; + unsigned int pebs_record_size = get_adaptive_pebs_record_size(pebs_data_cfg); + unsigned int count = 0; + bool expected, pebs_idx_match, pebs_size_match, data_cfg_match; + void *cur_record; + + expected = (ds->pebs_index == ds->pebs_buffer_base) && !pebs_rec->format_size; + if (!(rdmsr(MSR_CORE_PERF_GLOBAL_STATUS) & GLOBAL_STATUS_BUFFER_OVF)) { + report(expected, "No OVF irq, none PEBS records."); + return; + } + + if (expected) { + report(!expected, "A OVF irq, but none PEBS records."); + return; + } + + expected = ds->pebs_index >= ds->pebs_interrupt_threshold; + cur_record = (void *)pebs_buffer; + do { + pebs_rec = (struct pebs_basic *)cur_record; + pebs_record_size = pebs_rec->format_size >> RECORD_SIZE_OFFSET; + pebs_idx_match = + pebs_rec->applicable_counters & bitmask; + pebs_size_match = + pebs_record_size == get_adaptive_pebs_record_size(pebs_data_cfg); + data_cfg_match = + (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg; + expected = pebs_idx_match && pebs_size_match && data_cfg_match; + report(expected, + "PEBS record (written seq %d) is verified (inclduing size, counters and cfg).", count); + cur_record = cur_record + pebs_record_size; + count++; + } while (expected && (void *)cur_record < (void *)ds->pebs_index); + + if (!expected) { + if (!pebs_idx_match) + printf("FAIL: The applicable_counters (0x%lx) doesn't match with pmc_bitmask (0x%lx).\n", + pebs_rec->applicable_counters, bitmask); + if (!pebs_size_match) + printf("FAIL: The pebs_record_size (%d) doesn't match with MSR_PEBS_DATA_CFG (%d).\n", + pebs_record_size, get_adaptive_pebs_record_size(pebs_data_cfg)); + if 
(!data_cfg_match) + printf("FAIL: The pebs_data_cfg (0x%lx) doesn't match with MSR_PEBS_DATA_CFG (0x%lx).\n", + pebs_rec->format_size & 0xffffffffffff, pebs_data_cfg); + } +} + +static void check_one_counter(enum pmc_type type, + unsigned int idx, u64 pebs_data_cfg) +{ + int pebs_bit = BIT_ULL(type == FIXED ? FIXED_CNT_INDEX + idx : idx); + + report_prefix_pushf("%s counter %d (0x%lx)", + type == FIXED ? "Extended Fixed" : "GP", idx, ctr_start_val); + reset_pebs(); + pebs_enable(pebs_bit, pebs_data_cfg); + workload(); + pebs_disable(idx); + check_pebs_records(pebs_bit, pebs_data_cfg); + report_prefix_pop(); +} + +/* more than one PEBS records will be generated. */ +static void check_multiple_counters(u64 bitmask, u64 pebs_data_cfg) +{ + reset_pebs(); + pebs_enable(bitmask, pebs_data_cfg); + workload2(); + pebs_disable(0); + check_pebs_records(bitmask, pebs_data_cfg); +} + +static void check_pebs_counters(u64 pebs_data_cfg) +{ + unsigned int idx; + u64 bitmask = 0; + + for (idx = 0; idx < pmu_nr_fixed_counters(); idx++) + check_one_counter(FIXED, idx, pebs_data_cfg); + + for (idx = 0; idx < max_nr_gp_events; idx++) + check_one_counter(GP, idx, pebs_data_cfg); + + for (idx = 0; idx < pmu_nr_fixed_counters(); idx++) + bitmask |= BIT_ULL(FIXED_CNT_INDEX + idx); + for (idx = 0; idx < max_nr_gp_events; idx += 2) + bitmask |= BIT_ULL(idx); + report_prefix_pushf("Multiple (0x%lx)", bitmask); + check_multiple_counters(bitmask, pebs_data_cfg); + report_prefix_pop(); +} + +/* + * Known reasons for none PEBS records: + * 1. The selected event does not support PEBS; + * 2. From a core pmu perspective, the vCPU and pCPU models are not same; + * 3. Guest counter has not yet overflowed or been cross-mapped by the host; + */ +int main(int ac, char **av) +{ + unsigned int i, j; + + setup_vm(); + + max_nr_gp_events = MIN(pmu_nr_gp_counters(), ARRAY_SIZE(intel_arch_events)); + + printf("PMU version: %d\n", pmu_version()); + + has_baseline = pebs_has_baseline(); + if (pmu_has_full_writes()) + set_gp_counter_base(MSR_IA32_PMC0); + + if (!is_intel()) { + report_skip("PEBS requires Intel ICX or later, non-Intel detected"); + return report_summary(); + } else if (!pmu_version_support_pebs()) { + report_skip("PEBS required PMU version 2, reported version is %d", pmu_version()); + return report_summary(); + } else if (!pmu_pebs_format()) { + report_skip("PEBS not enumerated in PERF_CAPABILITIES"); + return report_summary(); + } else if (rdmsr(MSR_IA32_MISC_ENABLE) & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL) { + report_skip("PEBS unavailable according to MISC_ENABLE"); + return report_summary(); + } + + printf("PEBS format: %d\n", pmu_pebs_format()); + printf("PEBS GP counters: %d\n", pmu_nr_gp_counters()); + printf("PEBS Fixed counters: %d\n", pmu_nr_fixed_counters()); + printf("PEBS baseline (Adaptive PEBS): %d\n", has_baseline); + + handle_irq(PMI_VECTOR, cnt_overflow); + alloc_buffers(); + + for (i = 0; i < ARRAY_SIZE(counter_start_values); i++) { + ctr_start_val = counter_start_values[i]; + check_pebs_counters(0); + if (!has_baseline) + continue; + + for (j = 0; j < ARRAY_SIZE(pebs_data_cfgs); j++) { + report_prefix_pushf("Adaptive (0x%lx)", pebs_data_cfgs[j]); + check_pebs_counters(pebs_data_cfgs[j]); + report_prefix_pop(); + } + } + + free_buffers(); + + return report_summary(); +} diff --git a/x86/unittests.cfg b/x86/unittests.cfg index 07d0507..54f0437 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -200,6 +200,14 @@ check = /proc/sys/kernel/nmi_watchdog=0 accel = kvm groups = pmu +[pmu_pebs] +arch = 
x86_64 +file = pmu_pebs.flat +extra_params = -cpu host,migratable=no +check = /proc/sys/kernel/nmi_watchdog=0 +accel = kvm +groups = pmu + [vmware_backdoors] file = vmware_backdoors.flat extra_params = -machine vmport=on -cpu max From patchwork Mon Oct 24 09:12:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016927 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D147EFA373E for ; Mon, 24 Oct 2022 09:14:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230382AbiJXJOZ (ORCPT ); Mon, 24 Oct 2022 05:14:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47272 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230355AbiJXJNw (ORCPT ); Mon, 24 Oct 2022 05:13:52 -0400 Received: from mail-pf1-x433.google.com (mail-pf1-x433.google.com [IPv6:2607:f8b0:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C3D566A507 for ; Mon, 24 Oct 2022 02:13:36 -0700 (PDT) Received: by mail-pf1-x433.google.com with SMTP id f140so8498261pfa.1 for ; Mon, 24 Oct 2022 02:13:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=6J1OnS3XdfNtdXPYWlgN4cK2RdhyhyKRrLj4TTIwxis=; b=kf39dU7bz8GMy9tWXwer6RL/+zHvrScsserW8FoK4YctR2Qu4bodz8hz44oZmP7gG+ v4vu/cqg1JQTujNUwIT8zwlywelFzNuxvqNaR9MsfrehM/ezcPptep9TRTjOAGOY5vng ssMypcBc0jVdT5XaUjckzZhvMrwCp19Oa+kSU+QM83vHLKVRBbkClmY3gJ+a4tEe8esN 1P3RZViPleZDwTjJt6IckVGmLGqvdUYuKtx+BKlw5cIVNyCmS5zVvmpwdTMObUae/DK4 +0J8zzOI6r+/Ng/ZOBialzili8MBA0NEiE+CBmUD6j4b1IxzzqFxwM/RtLTUtnI/e99I Zydw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=6J1OnS3XdfNtdXPYWlgN4cK2RdhyhyKRrLj4TTIwxis=; b=f3IWuOqxQMkD+VSOH4+Yjx110ITLyQj90j0k2gsXQZz8ba60DFbq3LI0nxIw5akjZi V/goC/5z7YbDpv9V9dNEFOY/vxHvx1EXgLwwfGvhorptDTdaDcMof4QHhbVVX+hSZqJ6 rUf1EfgfGwEmLH2I85xrWMkRJGfCvPzkRty4IIabCpUZcs4aa+wzUmPDhvF6D7vwsLNv 8uEd9btWpLbgxPzWA7hAm0kZzmqtpBRQKMXhencK301Rs6A7tyk5MgNddsDbV6HLgg5/ ZuGNObLAPfV3A0l3ws1rxrNh0vXIwHLZ0rNBMaYVquv8DnAXue4DUAKb1KriepVA8lGE 3IaA== X-Gm-Message-State: ACrzQf1Dca2wgXanxSZ3UtWrkpSw1LGlaC53ZI31gHKFyAXKz23YhmYZ Y8lzIyEjWBBUySCd9XAHYJ4= X-Google-Smtp-Source: AMsMyM4pxCnoDBeV5lGF0gL39+trKeU0gIXAs58IRmTe9mmQsohghWMHFbhqhhsT6Bh2WmVwwoAkug== X-Received: by 2002:a65:6849:0:b0:461:8779:2452 with SMTP id q9-20020a656849000000b0046187792452mr26515942pgt.383.1666602814319; Mon, 24 Oct 2022 02:13:34 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:34 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 20/24] x86/pmu: Add global helpers to cover Intel Arch PMU Version 1 Date: Mon, 24 Oct 2022 17:12:19 +0800 
Message-Id: <20221024091223.42631-21-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu To test Intel arch pmu version 1, most of the basic framework and use cases which test any PMU counter do not require any changes, except no access to registers introduced only in PMU version 2. Adding some guardian's checks can seamlessly support version 1, while opening the door for normal AMD PMUs tests. Signed-off-by: Like Xu --- lib/x86/pmu.c | 8 +++++--- lib/x86/pmu.h | 5 +++++ x86/pmu.c | 47 +++++++++++++++++++++++++++++++---------------- 3 files changed, 41 insertions(+), 19 deletions(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 3b6be37..43e6a43 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -10,8 +10,10 @@ void pmu_init(void) pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; - pmu.msr_global_status = MSR_CORE_PERF_GLOBAL_STATUS; - pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL; - pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL; + if (this_cpu_support_perf_status()) { + pmu.msr_global_status = MSR_CORE_PERF_GLOBAL_STATUS; + pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL; + pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL; + } reset_all_counters(); } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 9ba2419..fa49a8f 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -116,6 +116,11 @@ static inline bool this_cpu_has_perf_global_ctrl(void) return pmu_version() > 1; } +static inline bool this_cpu_support_perf_status(void) +{ + return pmu_version() > 1; +} + static inline u8 pmu_nr_gp_counters(void) { return (cpuid_10.a >> 8) & 0xff; diff --git a/x86/pmu.c b/x86/pmu.c index 015591f..daeb7a2 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -102,12 +102,18 @@ static struct pmu_event* get_counter_event(pmu_counter_t *cnt) static void global_enable(pmu_counter_t *cnt) { + if (!this_cpu_has_perf_global_ctrl()) + return; + cnt->idx = event_to_global_idx(cnt); pmu_set_global_enable(pmu_get_global_enable() | BIT_ULL(cnt->idx)); } static void global_disable(pmu_counter_t *cnt) { + if (!this_cpu_has_perf_global_ctrl()) + return; + pmu_set_global_enable(pmu_get_global_enable() & ~BIT_ULL(cnt->idx)); } @@ -286,7 +292,8 @@ static void check_counter_overflow(void) overflow_preset = measure_for_overflow(&cnt); /* clear status before test */ - pmu_clear_global_status(); + if (this_cpu_support_perf_status()) + pmu_clear_global_status(); report_prefix_push("overflow"); @@ -313,6 +320,10 @@ static void check_counter_overflow(void) idx = event_to_global_idx(&cnt); __measure(&cnt, cnt.count); report(cnt.count == 1, "cntr-%d", i); + + if (!this_cpu_support_perf_status()) + continue; + status = pmu_get_global_status(); report(status & (1ull << idx), "status-%d", i); pmu_ack_global_status(status); @@ -425,7 +436,8 @@ static void check_running_counter_wrmsr(void) report(evt.count < gp_events[1].min, "cntr"); /* clear status before overflow test */ - pmu_clear_global_status(); + if (this_cpu_support_perf_status()) + pmu_clear_global_status(); start_event(&evt); @@ -437,8 +449,11 @@ static void check_running_counter_wrmsr(void) loop(); stop_event(&evt); - status = pmu_get_global_status(); - report(status & 1, "status msr bit"); + + if (this_cpu_support_perf_status()) { + status = 
pmu_get_global_status(); + report(status & 1, "status msr bit"); + } report_prefix_pop(); } @@ -458,7 +473,8 @@ static void check_emulated_instr(void) }; report_prefix_push("emulated instruction"); - pmu_clear_global_status(); + if (this_cpu_support_perf_status()) + pmu_clear_global_status(); start_event(&brnch_cnt); start_event(&instr_cnt); @@ -492,7 +508,8 @@ static void check_emulated_instr(void) : : "eax", "ebx", "ecx", "edx"); - pmu_reset_global_enable(); + if (this_cpu_has_perf_global_ctrl()) + pmu_reset_global_enable(); stop_event(&brnch_cnt); stop_event(&instr_cnt); @@ -503,10 +520,12 @@ static void check_emulated_instr(void) "instruction count"); report(brnch_cnt.count - brnch_start >= EXPECTED_BRNCH, "branch count"); - // Additionally check that those counters overflowed properly. - status = pmu_get_global_status(); - report(status & 1, "branch counter overflow"); - report(status & 2, "instruction counter overflow"); + if (this_cpu_support_perf_status()) { + // Additionally check that those counters overflowed properly. + status = pmu_get_global_status(); + report(status & 1, "branch counter overflow"); + report(status & 2, "instruction counter overflow"); + } report_prefix_pop(); } @@ -593,7 +612,8 @@ static void set_ref_cycle_expectations(void) if (!pmu_nr_gp_counters() || !pmu_gp_counter_is_available(2)) return; - pmu_reset_global_enable(); + if (this_cpu_has_perf_global_ctrl()) + pmu_reset_global_enable(); t0 = fenced_rdtsc(); start_event(&cnt); @@ -644,11 +664,6 @@ int main(int ac, char **av) return report_summary(); } - if (pmu_version() == 1) { - report_skip("PMU version 1 is not supported."); - return report_summary(); - } - set_ref_cycle_expectations(); printf("PMU version: %d\n", pmu_version()); From patchwork Mon Oct 24 09:12:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D14CC38A2D for ; Mon, 24 Oct 2022 09:14:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231182AbiJXJOA (ORCPT ); Mon, 24 Oct 2022 05:14:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46446 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230526AbiJXJNm (ORCPT ); Mon, 24 Oct 2022 05:13:42 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C4B655AA36 for ; Mon, 24 Oct 2022 02:13:37 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id io19so2986936plb.8 for ; Mon, 24 Oct 2022 02:13:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=X01d27bImszq5JJeDGG8+c38fFyhTDHSDxshjcb4ihQ=; b=QDssjgFV/CduyXGykFBw8agaHCNPslVwhFaonbpfD7POcvDENv/X95dS0v+C4ebA6S SwQiRbCpJDkrjiCPUysOfflUyaMYtQHPcyFr5qdl+5kZ7RAS+eDgqVq9lUTnPnC65cO4 qXf1Ylo4/XM054nfK1V8jpqr+nXiBygER7EJU2YAqXvShrpxQhqbpVL1LCbQxhL2Qck5 C0V5RTiNArlUWEfkq1sIvr1SdlrBNBRz1dWw2dL9O0Z3zcXe7DCUZ9RZ6kivDzBvQjrg BS4Tzm+ybQQEGus4gi+WBCij93CIxxE1iOO+GpR4YioBx/y7I0UDNLImubuzl6UqGuBh IzIw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=X01d27bImszq5JJeDGG8+c38fFyhTDHSDxshjcb4ihQ=; b=IaFAfGujLpXEwxOdCSbETLgiPNv6oHD8TW6fj0qN7NNI5Pj9zYKnOLtFtmbV3ZInyk 47b5xR/JBP0PIz60GEBoiwyssOz2wuF9B4wxHxpmcVFtYq5ILxh5D10lZq9MRJ2ZMbEV Adkz5adcChgTAdUH4O8VIzH7zYVpYKI9izie42gl4lv8rqYnqtxz3iYI1S5TKrl02Vss RuQG+3i5LioiCJGV0msI2gjDIwHqlTUq4WX9ScHqUe8ItBIh181HI9K/JyT8T9tpse0B iK/LAlmWrXZgn2FB428OL6MeHamZUVDEJ4d/5UDZfBY8p6xFM/a4Wgymrl0gvq/AhGWw mQ7g== X-Gm-Message-State: ACrzQf0SexyaWk0R3VoCzFiJsW747DuMpZDMSFkaSy0NpEpNnkv8P8xO cn5KOBZR1EM08hFk+xCJcmA= X-Google-Smtp-Source: AMsMyM4fZmAyowCIhKQGwQyMPqqnU1Lt0p9YbsEpgnQve25UYFSRSleYoraf8D/Rvvj8KbzaETl4uQ== X-Received: by 2002:a17:902:d512:b0:181:f1f4:fcb4 with SMTP id b18-20020a170902d51200b00181f1f4fcb4mr32938538plg.102.1666602815972; Mon, 24 Oct 2022 02:13:35 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:35 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 21/24] x86/pmu: Add gp_events pointer to route different event tables Date: Mon, 24 Oct 2022 17:12:20 +0800 Message-Id: <20221024091223.42631-22-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu AMD and Intel do not share the same set of coding rules for performance events, and code to test the same performance event can be reused by pointing to a different coding table, noting that the table size also needs to be updated. 
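The pattern is simply a shared table pointer plus an explicit element count, switched once at start-up. A rough, self-contained illustration (placeholder tables, not the encodings used by this patch):

    #include <stddef.h>

    struct pmu_event {
            const char *name;
            unsigned int unit_sel;
    };

    /* Placeholder tables for illustration only. */
    static const struct pmu_event intel_events[] = { {"instructions", 0x00c0} };
    static const struct pmu_event amd_events[]   = { {"instructions", 0x00c0} };

    static const struct pmu_event *events;
    static size_t nr_events;

    static void select_event_table(int vendor_is_intel)
    {
            if (vendor_is_intel) {
                    events = intel_events;
                    nr_events = sizeof(intel_events) / sizeof(intel_events[0]);
            } else {
                    events = amd_events;
                    nr_events = sizeof(amd_events) / sizeof(amd_events[0]);
            }
    }

Any loop that previously computed sizeof(gp_events) directly must now use the stored count, which is why the size has to be updated together with the pointer.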
Signed-off-by: Like Xu --- x86/pmu.c | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/x86/pmu.c b/x86/pmu.c index daeb7a2..24d015e 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -30,7 +30,7 @@ struct pmu_event { uint32_t unit_sel; int min; int max; -} gp_events[] = { +} intel_gp_events[] = { {"core cycles", 0x003c, 1*N, 50*N}, {"instructions", 0x00c0, 10*N, 10.2*N}, {"ref cycles", 0x013c, 1*N, 30*N}, @@ -46,6 +46,9 @@ struct pmu_event { char *buf; +static struct pmu_event *gp_events; +static unsigned int gp_events_size; + static inline void loop(void) { unsigned long tmp, tmp2, tmp3; @@ -91,7 +94,7 @@ static struct pmu_event* get_counter_event(pmu_counter_t *cnt) if (is_gp(cnt)) { int i; - for (i = 0; i < sizeof(gp_events)/sizeof(gp_events[0]); i++) + for (i = 0; i < gp_events_size; i++) if (gp_events[i].unit_sel == (cnt->config & 0xffff)) return &gp_events[i]; } else @@ -212,7 +215,7 @@ static void check_gp_counters(void) { int i; - for (i = 0; i < sizeof(gp_events)/sizeof(gp_events[0]); i++) + for (i = 0; i < gp_events_size; i++) if (pmu_gp_counter_is_available(i)) check_gp_counter(&gp_events[i]); else @@ -248,7 +251,7 @@ static void check_counters_many(void) cnt[n].ctr = gp_counter_msr(n); cnt[n].config = EVNTSEL_OS | EVNTSEL_USR | - gp_events[i % ARRAY_SIZE(gp_events)].unit_sel; + gp_events[i % gp_events_size].unit_sel; n++; } for (i = 0; i < nr_fixed_counters; i++) { @@ -603,7 +606,7 @@ static void set_ref_cycle_expectations(void) { pmu_counter_t cnt = { .ctr = MSR_IA32_PERFCTR0, - .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[2].unit_sel, + .config = EVNTSEL_OS | EVNTSEL_USR | intel_gp_events[2].unit_sel, }; uint64_t tsc_delta; uint64_t t0, t1, t2, t3; @@ -639,8 +642,8 @@ static void set_ref_cycle_expectations(void) if (!tsc_delta) return; - gp_events[2].min = (gp_events[2].min * cnt.count) / tsc_delta; - gp_events[2].max = (gp_events[2].max * cnt.count) / tsc_delta; + intel_gp_events[2].min = (intel_gp_events[2].min * cnt.count) / tsc_delta; + intel_gp_events[2].max = (intel_gp_events[2].max * cnt.count) / tsc_delta; } static void check_invalid_rdpmc_gp(void) @@ -664,6 +667,8 @@ int main(int ac, char **av) return report_summary(); } + gp_events = (struct pmu_event *)intel_gp_events; + gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]); set_ref_cycle_expectations(); printf("PMU version: %d\n", pmu_version()); From patchwork Mon Oct 24 09:12:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016928 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ED70EC38A2D for ; Mon, 24 Oct 2022 09:14:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231194AbiJXJO1 (ORCPT ); Mon, 24 Oct 2022 05:14:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231208AbiJXJN4 (ORCPT ); Mon, 24 Oct 2022 05:13:56 -0400 Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com [IPv6:2607:f8b0:4864:20::629]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0DD636A535 for ; Mon, 24 Oct 2022 02:13:41 -0700 (PDT) Received: by mail-pl1-x629.google.com with SMTP id y4so7951533plb.2 for ; Mon, 24 Oct 2022 02:13:41 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=x5bWOD+v2BlHUhIdaS0K1ucB/W7XeYKRWhxTgH+JxYc=; b=Bto57ZTybUsuftiCJ9+JzaXGtwjk2Qvd16kA4CyTfzQ1lgJpMcaaAiiQsIL6JeMmSF E9mq5q80o9ypVZaiHQ2gaUQltMN/y/A8alzwdBc0tN9sjw/XEfwRBHgKRnpt6KEJ7CbE k0DLNGIIaSrEehiqai8I6vOfV+s2eMuJzzON0jbgnWJKTj2Pujk3VdtW4sJ2DF15RG5A AekrNUrg6daITmebrq/2mMHrP/fcqmnVm3nhHS7ZK5muzwUypog75FNA7iLLHMf1sFSt 3pl3m9Te8PEyGHJpT2xALiZvmDvQs3253SndDPJ5tOImNJm+appYXLXX5dqaog3v0LBq VhfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=x5bWOD+v2BlHUhIdaS0K1ucB/W7XeYKRWhxTgH+JxYc=; b=OJx+hBZZJU2bp9JBd4C95vgJV3bwnUNhMXXoeZmcAEH2tOnNjvsM+tZ0PJW18JGJqG OrtTJQcuDIkwKJc3EEqcSEXCSnYoRb/NE8egw2iPjiBNlyjynOunQ9YV4COI3gFDQKSm AJ8cgU2AkxHWMrXI00OOjSUBcHxmX95anpCiJAWIshgi/MVznnelTG60yC4/lowR3IZJ 4HG1+0BBrwygGKMMPW78EceXdJUisYVWhCJxZSWqf6dCsSRGLsFRj7f4YAyKGnGW0VGS ZxLEIspgt4U2QBUrWm63DfBFzxTY/+RcOYUxjSskeEwXtOoeozQ6CL/CWroDb01AE4CK It6w== X-Gm-Message-State: ACrzQf1y4MnpcuHUaZbX/hy8Tx0enrXUs348LQkqnzuXp/IvX9/FZ+xu M2sW8SWk83K4Z5c18JRLf6mIqZKKgbSc8eT9 X-Google-Smtp-Source: AMsMyM5A1GEBXISLDJOCrZgBb9x6sv3zCe9ItgsiYhxMi3eqfFq7LJoGRdYyTrPN1PMUDltfuQmoGw== X-Received: by 2002:a17:902:690a:b0:17a:32d:7acc with SMTP id j10-20020a170902690a00b0017a032d7accmr32790443plk.18.1666602817582; Mon, 24 Oct 2022 02:13:37 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:37 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 22/24] x86/pmu: Add nr_gp_counters to limit the number of test counters Date: Mon, 24 Oct 2022 17:12:21 +0800 Message-Id: <20221024091223.42631-23-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu The number of counters in amd is fixed (4 or 6), and the test code can be reused by dynamically switching the maximum number of counters (and register base addresses), with no change for Intel side. 
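The intended semantics are "clamp, never raise": the new helper only lowers the number of counters the tests iterate over, so Intel runs are unaffected. A sketch of that behaviour (the name follows the patch, but this is illustrative code, not the library itself):

    /* Illustrative only: shrink the tested counter range, never grow it. */
    static unsigned int nr_gp_counters = 6;

    static void set_nr_gp_counters(unsigned int new_num)
    {
            if (new_num < nr_gp_counters)
                    nr_gp_counters = new_num;
    }

A later patch in the series uses this to re-run the same counter tests against the four legacy K7 counters after switching the MSR base addresses.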
Signed-off-by: Like Xu --- lib/x86/pmu.c | 1 + lib/x86/pmu.h | 9 ++++++++- 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 43e6a43..25e21e5 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -10,6 +10,7 @@ void pmu_init(void) pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; + pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff; if (this_cpu_support_perf_status()) { pmu.msr_global_status = MSR_CORE_PERF_GLOBAL_STATUS; pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL; diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index fa49a8f..4312b6e 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -54,6 +54,7 @@ struct pmu_caps { u32 msr_global_status; u32 msr_global_ctl; u32 msr_global_status_clr; + unsigned int nr_gp_counters; }; extern struct cpuid cpuid_10; @@ -123,7 +124,13 @@ static inline bool this_cpu_support_perf_status(void) static inline u8 pmu_nr_gp_counters(void) { - return (cpuid_10.a >> 8) & 0xff; + return pmu.nr_gp_counters; +} + +static inline void set_nr_gp_counters(u8 new_num) +{ + if (new_num < pmu_nr_gp_counters()) + pmu.nr_gp_counters = new_num; } static inline u8 pmu_gp_counter_width(void) From patchwork Mon Oct 24 09:12:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31F3CC38A2D for ; Mon, 24 Oct 2022 09:14:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230179AbiJXJO3 (ORCPT ); Mon, 24 Oct 2022 05:14:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231252AbiJXJOE (ORCPT ); Mon, 24 Oct 2022 05:14:04 -0400 Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com [IPv6:2607:f8b0:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE7C868CD1 for ; Mon, 24 Oct 2022 02:13:44 -0700 (PDT) Received: by mail-pg1-x533.google.com with SMTP id b5so8184106pgb.6 for ; Mon, 24 Oct 2022 02:13:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=sefsVn/KyRtcLJhX7LG0gKatFJ95SB2+vN4YB2WOoh8=; b=MMtep/gwA5IJdcAjGR+yqgxvNBtX8NNAULQar8HRWxxnt7CLYC8y7XWhISgLUv2rxy GQbrfuyLF3r+sXzVpy2vKFMYEyoqttGMku96mFb2+au11tNOXBAhU9xfjeRc6GcHDDSY uBLYHkqIvpFci0FYBIPcfr07VljR/ANt8w0z8xDkmcGENyfd5f2iIqKUdyhZUtU5NYjn aySKHe95zeGRZ8V63nd8mwHCgouo9ASwery5agqvWOPr5ozQkQEKhbV9cxEV9YIF8s7D tZVGW9VO/IrIm5zg2s5X01qVba4mzGU8cuD2ZflYhHZXeFPgYEGepiWiTL8eghDPK4sB FbSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=sefsVn/KyRtcLJhX7LG0gKatFJ95SB2+vN4YB2WOoh8=; b=oRN4zMlndrBQWtBg8dwtlJBdk43YKooQkWe3ViG8Uh17Rsijoyhm3F6tCgxaXX373y 4vtGuYs5MXA16M8NIg+KfMN43xI9+EkjHk9R55JgdwjPf9pKRupfFsc7I9WWJkk7lVe+ ldN7lDObhBqVPK7Kw+75fG9Oa0K9TWz21RVdY1IH4yFhS+Hon3V1wHDYhEkSdvsRR5Ax 
1e+u9ls31fEv8Ls9GopRnsHqM1HIEOmM6mfx/keOur3ikh9Egmzuw2GIIVex7nn9VF+Q RBayQQ2FegwluOLZftfadgj+kPfrV43v1lbUWalFfSYvFUwc2jBan+bz8PhAlivRuH6d Pt7g== X-Gm-Message-State: ACrzQf2zm5wpdlGGVFrPpuyJeawtIT1TrDuRkiJYom9bNmLA0PbAT2vE RKYovUftsDDfg6mOJ1wB+AA= X-Google-Smtp-Source: AMsMyM4S6Bh/PynIW2ANQJbYgKvWwc0IOzISsgRT2CFbrEMPuR23qClBwpatca0HPHXBsRGPr0VnJA== X-Received: by 2002:a63:516:0:b0:46e:d2ea:22cb with SMTP id 22-20020a630516000000b0046ed2ea22cbmr10760849pgf.144.1666602819549; Mon, 24 Oct 2022 02:13:39 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:39 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org, Sandipan Das Subject: [kvm-unit-tests PATCH v4 23/24] x86/pmu: Update testcases to cover AMD PMU Date: Mon, 24 Oct 2022 17:12:22 +0800 Message-Id: <20221024091223.42631-24-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu AMD core PMU before Zen4 did not have version numbers, there were no fixed counters, it had a hard-coded number of generic counters, bit-width, and only hardware events common across amd generations (starting with K7) were added to amd_gp_events[] table. All above differences are instantiated at the detection step, and it also covers the K7 PMU registers, which is consistent with bare-metal. Signed-off-by: Like Xu Reviewed-by: Sandipan Das --- lib/x86/msr.h | 17 +++++++++++++ lib/x86/pmu.c | 29 +++++++++++++++-------- lib/x86/pmu.h | 35 +++++++++++++++++++++++++-- lib/x86/processor.h | 1 + x86/pmu.c | 58 ++++++++++++++++++++++++++++++++++++--------- 5 files changed, 117 insertions(+), 23 deletions(-) diff --git a/lib/x86/msr.h b/lib/x86/msr.h index 68d8837..6cf8f33 100644 --- a/lib/x86/msr.h +++ b/lib/x86/msr.h @@ -146,6 +146,23 @@ #define FAM10H_MMIO_CONF_BASE_SHIFT 20 #define MSR_FAM10H_NODE_ID 0xc001100c +/* Fam 15h MSRs */ +#define MSR_F15H_PERF_CTL 0xc0010200 +#define MSR_F15H_PERF_CTL0 MSR_F15H_PERF_CTL +#define MSR_F15H_PERF_CTL1 (MSR_F15H_PERF_CTL + 2) +#define MSR_F15H_PERF_CTL2 (MSR_F15H_PERF_CTL + 4) +#define MSR_F15H_PERF_CTL3 (MSR_F15H_PERF_CTL + 6) +#define MSR_F15H_PERF_CTL4 (MSR_F15H_PERF_CTL + 8) +#define MSR_F15H_PERF_CTL5 (MSR_F15H_PERF_CTL + 10) + +#define MSR_F15H_PERF_CTR 0xc0010201 +#define MSR_F15H_PERF_CTR0 MSR_F15H_PERF_CTR +#define MSR_F15H_PERF_CTR1 (MSR_F15H_PERF_CTR + 2) +#define MSR_F15H_PERF_CTR2 (MSR_F15H_PERF_CTR + 4) +#define MSR_F15H_PERF_CTR3 (MSR_F15H_PERF_CTR + 6) +#define MSR_F15H_PERF_CTR4 (MSR_F15H_PERF_CTR + 8) +#define MSR_F15H_PERF_CTR5 (MSR_F15H_PERF_CTR + 10) + /* K8 MSRs */ #define MSR_K8_TOP_MEM1 0xc001001a #define MSR_K8_TOP_MEM2 0xc001001d diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 25e21e5..7fd2279 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -5,16 +5,25 @@ struct pmu_caps pmu; void pmu_init(void) { - cpuid_10 = cpuid(10); - if (this_cpu_has(X86_FEATURE_PDCM)) - pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); - pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; - pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; - pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff; - if (this_cpu_support_perf_status()) { - pmu.msr_global_status = 
MSR_CORE_PERF_GLOBAL_STATUS; - pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL; - pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL; + if (is_intel()) { + cpuid_10 = cpuid(10); + if (this_cpu_has(X86_FEATURE_PDCM)) + pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES); + pmu.msr_gp_counter_base = MSR_IA32_PERFCTR0; + pmu.msr_gp_event_select_base = MSR_P6_EVNTSEL0; + pmu.nr_gp_counters = (cpuid_10.a >> 8) & 0xff; + if (this_cpu_support_perf_status()) { + pmu.msr_global_status = MSR_CORE_PERF_GLOBAL_STATUS; + pmu.msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL; + pmu.msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL; + } + } else { + pmu.msr_gp_counter_base = MSR_F15H_PERF_CTR0; + pmu.msr_gp_event_select_base = MSR_F15H_PERF_CTL0; + if (!has_amd_perfctr_core()) + pmu.nr_gp_counters = AMD64_NUM_COUNTERS; + else + pmu.nr_gp_counters = AMD64_NUM_COUNTERS_CORE; } reset_all_counters(); } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index 4312b6e..a4e00c5 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -10,6 +10,11 @@ /* Performance Counter Vector for the LVT PC Register */ #define PMI_VECTOR 32 +#define AMD64_NUM_COUNTERS 4 +#define AMD64_NUM_COUNTERS_CORE 6 + +#define PMC_DEFAULT_WIDTH 48 + #define DEBUGCTLMSR_LBR (1UL << 0) #define PMU_CAP_LBR_FMT 0x3f @@ -84,11 +89,17 @@ static inline void set_gp_event_select_base(u32 new_base) static inline u32 gp_counter_msr(unsigned int i) { + if (gp_counter_base() == MSR_F15H_PERF_CTR0) + return gp_counter_base() + 2 * i; + return gp_counter_base() + i; } static inline u32 gp_event_select_msr(unsigned int i) { + if (gp_event_select_base() == MSR_F15H_PERF_CTL0) + return gp_event_select_base() + 2 * i; + return gp_event_select_base() + i; } @@ -104,11 +115,17 @@ static inline void write_gp_event_select(unsigned int i, u64 value) static inline u8 pmu_version(void) { + if (!is_intel()) + return 0; + return cpuid_10.a & 0xff; } static inline bool this_cpu_has_pmu(void) { + if (!is_intel()) + return true; + return !!pmu_version(); } @@ -135,12 +152,18 @@ static inline void set_nr_gp_counters(u8 new_num) static inline u8 pmu_gp_counter_width(void) { - return (cpuid_10.a >> 16) & 0xff; + if (is_intel()) + return (cpuid_10.a >> 16) & 0xff; + else + return PMC_DEFAULT_WIDTH; } static inline u8 pmu_gp_counter_mask_length(void) { - return (cpuid_10.a >> 24) & 0xff; + if (is_intel()) + return (cpuid_10.a >> 24) & 0xff; + else + return pmu_nr_gp_counters(); } static inline u8 pmu_nr_fixed_counters(void) @@ -161,6 +184,9 @@ static inline u8 pmu_fixed_counter_width(void) static inline bool pmu_gp_counter_is_available(int i) { + if (!is_intel()) + return i < pmu_nr_gp_counters(); + /* CPUID.0xA.EBX bit is '1 if they counter is NOT available. */ return !(cpuid_10.b & BIT(i)); } @@ -268,4 +294,9 @@ static inline bool pebs_has_baseline(void) return pmu.perf_cap & PMU_CAP_PEBS_BASELINE; } +static inline bool has_amd_perfctr_core(void) +{ + return this_cpu_has(X86_FEATURE_PERFCTR_CORE); +} + #endif /* _X86_PMU_H_ */ diff --git a/lib/x86/processor.h b/lib/x86/processor.h index ee2b5a2..64b36cf 100644 --- a/lib/x86/processor.h +++ b/lib/x86/processor.h @@ -252,6 +252,7 @@ static inline bool is_intel(void) * Extended Leafs, a.k.a. 
AMD defined */ #define X86_FEATURE_SVM (CPUID(0x80000001, 0, ECX, 2)) +#define X86_FEATURE_PERFCTR_CORE (CPUID(0x80000001, 0, ECX, 23)) #define X86_FEATURE_NX (CPUID(0x80000001, 0, EDX, 20)) #define X86_FEATURE_GBPAGES (CPUID(0x80000001, 0, EDX, 26)) #define X86_FEATURE_RDTSCP (CPUID(0x80000001, 0, EDX, 27)) diff --git a/x86/pmu.c b/x86/pmu.c index 24d015e..d4ef685 100644 --- a/x86/pmu.c +++ b/x86/pmu.c @@ -38,6 +38,11 @@ struct pmu_event { {"llc misses", 0x412e, 1, 1*N}, {"branches", 0x00c4, 1*N, 1.1*N}, {"branch misses", 0x00c5, 0, 0.1*N}, +}, amd_gp_events[] = { + {"core cycles", 0x0076, 1*N, 50*N}, + {"instructions", 0x00c0, 10*N, 10.2*N}, + {"branches", 0x00c2, 1*N, 1.1*N}, + {"branch misses", 0x00c3, 0, 0.1*N}, }, fixed_events[] = { {"fixed 1", MSR_CORE_PERF_FIXED_CTR0, 10*N, 10.2*N}, {"fixed 2", MSR_CORE_PERF_FIXED_CTR0 + 1, 1*N, 30*N}, @@ -79,14 +84,23 @@ static bool check_irq(void) static bool is_gp(pmu_counter_t *evt) { + if (!is_intel()) + return true; + return evt->ctr < MSR_CORE_PERF_FIXED_CTR0 || evt->ctr >= MSR_IA32_PMC0; } static int event_to_global_idx(pmu_counter_t *cnt) { - return cnt->ctr - (is_gp(cnt) ? gp_counter_base() : - (MSR_CORE_PERF_FIXED_CTR0 - FIXED_CNT_INDEX)); + if (is_intel()) + return cnt->ctr - (is_gp(cnt) ? gp_counter_base() : + (MSR_CORE_PERF_FIXED_CTR0 - FIXED_CNT_INDEX)); + + if (gp_counter_base() == MSR_F15H_PERF_CTR0) + return (cnt->ctr - gp_counter_base()) / 2; + else + return cnt->ctr - gp_counter_base(); } static struct pmu_event* get_counter_event(pmu_counter_t *cnt) @@ -309,6 +323,9 @@ static void check_counter_overflow(void) cnt.count &= (1ull << pmu_gp_counter_width()) - 1; if (i == nr_gp_counters) { + if (!is_intel()) + break; + cnt.ctr = fixed_events[0].unit_sel; cnt.count = measure_for_overflow(&cnt); cnt.count &= (1ull << pmu_fixed_counter_width()) - 1; @@ -322,7 +339,10 @@ static void check_counter_overflow(void) cnt.config &= ~EVNTSEL_INT; idx = event_to_global_idx(&cnt); __measure(&cnt, cnt.count); - report(cnt.count == 1, "cntr-%d", i); + if (is_intel()) + report(cnt.count == 1, "cntr-%d", i); + else + report(cnt.count == 0xffffffffffff || cnt.count < 7, "cntr-%d", i); if (!this_cpu_support_perf_status()) continue; @@ -464,10 +484,11 @@ static void check_running_counter_wrmsr(void) static void check_emulated_instr(void) { uint64_t status, instr_start, brnch_start; + unsigned int branch_idx = is_intel() ? 
5 : 2; pmu_counter_t brnch_cnt = { .ctr = gp_counter_msr(0), /* branch instructions */ - .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[5].unit_sel, + .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[branch_idx].unit_sel, }; pmu_counter_t instr_cnt = { .ctr = gp_counter_msr(1), @@ -662,15 +683,21 @@ int main(int ac, char **av) check_invalid_rdpmc_gp(); - if (!pmu_version()) { - report_skip("No Intel Arch PMU is detected!"); - return report_summary(); + if (is_intel()) { + if (!pmu_version()) { + report_skip("No Intel Arch PMU is detected!"); + return report_summary(); + } + gp_events = (struct pmu_event *)intel_gp_events; + gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]); + report_prefix_push("Intel"); + set_ref_cycle_expectations(); + } else { + gp_events_size = sizeof(amd_gp_events)/sizeof(amd_gp_events[0]); + gp_events = (struct pmu_event *)amd_gp_events; + report_prefix_push("AMD"); } - gp_events = (struct pmu_event *)intel_gp_events; - gp_events_size = sizeof(intel_gp_events)/sizeof(intel_gp_events[0]); - set_ref_cycle_expectations(); - printf("PMU version: %d\n", pmu_version()); printf("GP counters: %d\n", pmu_nr_gp_counters()); printf("GP counter width: %d\n", pmu_gp_counter_width()); @@ -690,5 +717,14 @@ int main(int ac, char **av) report_prefix_pop(); } + if (!is_intel()) { + report_prefix_push("K7"); + set_nr_gp_counters(AMD64_NUM_COUNTERS); + set_gp_counter_base(MSR_K7_PERFCTR0); + set_gp_event_select_base(MSR_K7_EVNTSEL0); + check_counters(); + report_prefix_pop(); + } + return report_summary(); } From patchwork Mon Oct 24 09:12:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 13016930 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7CF77C38A2D for ; Mon, 24 Oct 2022 09:14:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229615AbiJXJOc (ORCPT ); Mon, 24 Oct 2022 05:14:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47358 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231276AbiJXJOF (ORCPT ); Mon, 24 Oct 2022 05:14:05 -0400 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 154EE696E6 for ; Mon, 24 Oct 2022 02:13:51 -0700 (PDT) Received: by mail-pg1-x535.google.com with SMTP id 20so8195661pgc.5 for ; Mon, 24 Oct 2022 02:13:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Lw9Y59Kn+xhaw6POjvDvw41k8paQq3td8sumIgfruR8=; b=HUzOQscH4o8H+v0dhVH4qz3Igep9Vnd4woKHo4u5qbwBxLTla8t8JDanJ82vkvFS9g 80pniZniMkSxZbpolBGzaYYkIHRbQ5AUEZN/07IOOTeLrrksE3K8biXdU0wdcwrCZsXP owRhNPOzXVxMfl7lQb32IIS9jKzFlU3aPejJ02wpiEk6nZkyHdLGnVj/KryUlNbW8RCr 9jAyQwiVmofqNeNafuTCpfITM4MXSZ16HeEPgPt87xGRvuq0jUqKDi3ZzDCRJFs7yZ0l 4+027pJeXhIh5O6CeQ7b5SYHIj+rE79NO4LL76wwNlnPZoGxhCOxDRQRvToNESGwhWVN 6cHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=Lw9Y59Kn+xhaw6POjvDvw41k8paQq3td8sumIgfruR8=; b=dlHEN0I/JEnUEa5dTN3UFqTGJAhA9EtzsH1KmaoYWqRnQLK8Peun1WQyLRnSsA2kt6 kymg7zzc1k35HKcuINHbQXj9KT8os+GH7o0aBxcvzvBjHNpaEkq9kZA0yJ1CAfg9gwMB oB6GTWE4AKjJV9KLffO5xaGjftVNmOg9xTOtwFONLqJGripwGqtyQKLZvMZgsarBmbt6 aMXDP6KlkeE5j0uwAI5T1WTIZhGX63CwqLKH+OP12f+raZIfeMRSesIjgzGkEzZ0/EyG jlDvQrBg4ojmg/8huBuN2t/ouRxzBgu0J6EAp6V9QDUF0/mmfDSUPkC/tLfbeNebCfcf mNnw== X-Gm-Message-State: ACrzQf3RwPUfmIp1QzY0p0OndpweWg01loJ07s6syfm4ZJGLb2uzaKCO V37VFaDIOJ5h5eMn4yoVb+o= X-Google-Smtp-Source: AMsMyM7lLAeTdPJNtwePcwQAkQ9ZpRNb4pQnz0KKu5t4nwKFLciG9wrwyepKL9O9dQ7qsLMM+7PdFA== X-Received: by 2002:a63:4e66:0:b0:456:b3a7:7a80 with SMTP id o38-20020a634e66000000b00456b3a77a80mr26864919pgl.467.1666602821130; Mon, 24 Oct 2022 02:13:41 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id r15-20020aa79ecf000000b00535da15a252sm19642213pfq.165.2022.10.24.02.13.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 02:13:40 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson Cc: Paolo Bonzini , Jim Mattson , kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v4 24/24] x86/pmu: Add AMD Guest PerfMonV2 testcases Date: Mon, 24 Oct 2022 17:12:23 +0800 Message-Id: <20221024091223.42631-25-likexu@tencent.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221024091223.42631-1-likexu@tencent.com> References: <20221024091223.42631-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Updated test cases to cover KVM enabling code for AMD Guest PerfMonV2. The Intel-specific PMU helpers were added to check for AMD cpuid, and some of the same semantics of MSRs were assigned during the initialization phase. The vast majority of pmu test cases are reused seamlessly. On some x86 machines (AMD only), even with retired events, the same workload is measured repeatedly and the number of events collected is erratic, which essentially reflects the details of hardware implementation, and from a software perspective, the type of event is an unprecise event, which brings a tolerance check in the counter overflow testcases. 
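For reference, detection of AMD PerfMonV2 hinges on CPUID leaf 0x80000022: EAX bit 0 advertises the feature and EBX[3:0] reports the number of core counters, with the global status/control MSRs living at 0xc0000300-0xc0000302. A rough user-space sketch of that enumeration (the MSRs are only named for reference since accessing them is privileged; this is not code from the patch):

    #include <cpuid.h>
    #include <stdio.h>

    #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS      0xc0000300
    #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL         0xc0000301
    #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR  0xc0000302

    int main(void)
    {
            unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

            if (__get_cpuid_count(0x80000022, 0, &eax, &ebx, &ecx, &edx) &&
                (eax & 1)) {
                    printf("AMD PerfMonV2: %u core counters\n", ebx & 0xf);
                    printf("global ctl MSR: 0x%x\n",
                           MSR_AMD64_PERF_CNTR_GLOBAL_CTL);
            } else {
                    printf("PerfMonV2 not enumerated\n");
            }
            return 0;
    }

With those MSRs mapped into the same pmu_caps fields used on Intel, the existing global-control paths in the tests run on AMD without further changes, which is what the diff below arranges.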
Signed-off-by: Like Xu --- lib/x86/msr.h | 5 +++++ lib/x86/pmu.c | 9 ++++++++- lib/x86/pmu.h | 6 +++++- lib/x86/processor.h | 2 +- 4 files changed, 19 insertions(+), 3 deletions(-) diff --git a/lib/x86/msr.h b/lib/x86/msr.h index 6cf8f33..c9869be 100644 --- a/lib/x86/msr.h +++ b/lib/x86/msr.h @@ -426,6 +426,11 @@ #define MSR_CORE_PERF_GLOBAL_CTRL 0x0000038f #define MSR_CORE_PERF_GLOBAL_OVF_CTRL 0x00000390 +/* AMD Performance Counter Global Status and Control MSRs */ +#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300 +#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301 +#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302 + /* Geode defined MSRs */ #define MSR_GEODE_BUSCONT_CONF0 0x00001900 diff --git a/lib/x86/pmu.c b/lib/x86/pmu.c index 7fd2279..d4034cb 100644 --- a/lib/x86/pmu.c +++ b/lib/x86/pmu.c @@ -20,10 +20,17 @@ void pmu_init(void) } else { pmu.msr_gp_counter_base = MSR_F15H_PERF_CTR0; pmu.msr_gp_event_select_base = MSR_F15H_PERF_CTL0; - if (!has_amd_perfctr_core()) + if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) + pmu.nr_gp_counters = cpuid(0x80000022).b & 0xf; + else if (!has_amd_perfctr_core()) pmu.nr_gp_counters = AMD64_NUM_COUNTERS; else pmu.nr_gp_counters = AMD64_NUM_COUNTERS_CORE; + if (this_cpu_support_perf_status()) { + pmu.msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS; + pmu.msr_global_ctl = MSR_AMD64_PERF_CNTR_GLOBAL_CTL; + pmu.msr_global_status_clr = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR; + } } reset_all_counters(); } \ No newline at end of file diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h index a4e00c5..8f5b5ac 100644 --- a/lib/x86/pmu.h +++ b/lib/x86/pmu.h @@ -115,8 +115,12 @@ static inline void write_gp_event_select(unsigned int i, u64 value) static inline u8 pmu_version(void) { - if (!is_intel()) + if (!is_intel()) { + /* Performance Monitoring Version 2 Supported */ + if (this_cpu_has(X86_FEATURE_AMD_PMU_V2)) + return 2; return 0; + } return cpuid_10.a & 0xff; } diff --git a/lib/x86/processor.h b/lib/x86/processor.h index 64b36cf..7f884f7 100644 --- a/lib/x86/processor.h +++ b/lib/x86/processor.h @@ -266,7 +266,7 @@ static inline bool is_intel(void) #define X86_FEATURE_PAUSEFILTER (CPUID(0x8000000A, 0, EDX, 10)) #define X86_FEATURE_PFTHRESHOLD (CPUID(0x8000000A, 0, EDX, 12)) #define X86_FEATURE_VGIF (CPUID(0x8000000A, 0, EDX, 16)) - +#define X86_FEATURE_AMD_PMU_V2 (CPUID(0x80000022, 0, EAX, 0)) static inline bool this_cpu_has(u64 feature) {