From patchwork Tue Aug 16 08:09:05 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12944538
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v2 1/5] x86/pmu: Introduce __start_event() to drop all of the manual zeroing
Date: Tue, 16 Aug 2022 16:09:05 +0800
Message-Id: <20220816080909.90622-2-likexu@tencent.com>
In-Reply-To: <20220816080909.90622-1-likexu@tencent.com>
References: <20220816080909.90622-1-likexu@tencent.com>
List-ID: kvm@vger.kernel.org

Most invocations of start_event() and measure() first set evt.count = 0.
Instead of forcing each caller to ensure count is zeroed, zero the count
in start_event() itself, then drop all of the manual zeroing.
Accumulating counts can be handled by reading the current count before
start_event(), and something like stuffing a high count to test an edge
case can be handled by an inner helper, __start_event(). For overflow,
just open code measure() for that one-off case. Requiring callers to
zero out a field in the most common cases isn't exactly flexible.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 x86/pmu.c | 31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index d59baf1..817b4d0 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -137,9 +137,9 @@ static void global_disable(pmu_counter_t *cnt)
 			~(1ull << cnt->idx));
 }
 
-
-static void start_event(pmu_counter_t *evt)
+static void __start_event(pmu_counter_t *evt, uint64_t count)
 {
+	evt->count = count;
 	wrmsr(evt->ctr, evt->count);
 	if (is_gp(evt))
 		wrmsr(MSR_P6_EVNTSEL0 + event_to_global_idx(evt),
@@ -162,6 +162,11 @@ static void start_event(pmu_counter_t *evt)
 	apic_write(APIC_LVTPC, PC_VECTOR);
 }
 
+static void start_event(pmu_counter_t *evt)
+{
+	__start_event(evt, 0);
+}
+
 static void stop_event(pmu_counter_t *evt)
 {
 	global_disable(evt);
@@ -186,6 +191,13 @@ static void measure(pmu_counter_t *evt, int count)
 		stop_event(&evt[i]);
 }
 
+static void __measure(pmu_counter_t *evt, uint64_t count)
+{
+	__start_event(evt, count);
+	loop();
+	stop_event(evt);
+}
+
 static bool verify_event(uint64_t count, struct pmu_event *e)
 {
 	// printf("%d <= %ld <= %d\n", e->min, count, e->max);
@@ -208,7 +220,6 @@ static void check_gp_counter(struct pmu_event *evt)
 	int i;
 
 	for (i = 0; i < nr_gp_counters; i++, cnt.ctr++) {
-		cnt.count = 0;
 		measure(&cnt, 1);
 		report(verify_event(cnt.count, evt), "%s-%d", evt->name, i);
 	}
@@ -235,7 +246,6 @@ static void check_fixed_counters(void)
 	int i;
 
 	for (i = 0; i < nr_fixed_counters; i++) {
-		cnt.count = 0;
 		cnt.ctr = fixed_events[i].unit_sel;
 		measure(&cnt, 1);
 		report(verify_event(cnt.count, &fixed_events[i]), "fixed-%d", i);
@@ -253,14 +263,12 @@ static void check_counters_many(void)
 		if (!pmu_gp_counter_is_available(i))
 			continue;
 
-		cnt[n].count = 0;
 		cnt[n].ctr = gp_counter_base + n;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR |
 			gp_events[i % ARRAY_SIZE(gp_events)].unit_sel;
 		n++;
 	}
 	for (i = 0; i < nr_fixed_counters; i++) {
-		cnt[n].count = 0;
 		cnt[n].ctr = fixed_events[i].unit_sel;
 		cnt[n].config = EVNTSEL_OS | EVNTSEL_USR;
 		n++;
@@ -283,9 +291,8 @@ static void check_counter_overflow(void)
 	pmu_counter_t cnt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
-		.count = 0,
 	};
-	measure(&cnt, 1);
+	__measure(&cnt, 0);
 	count = cnt.count;
 
 	/* clear status before test */
@@ -311,7 +318,7 @@ static void check_counter_overflow(void)
 		else
 			cnt.config &= ~EVNTSEL_INT;
 		idx = event_to_global_idx(&cnt);
-		measure(&cnt, 1);
+		__measure(&cnt, cnt.count);
 		report(cnt.count == 1, "cntr-%d", i);
 		status = rdmsr(MSR_CORE_PERF_GLOBAL_STATUS);
 		report(status & (1ull << idx), "status-%d", i);
@@ -329,7 +336,6 @@ static void check_gp_counter_cmask(void)
 	pmu_counter_t cnt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
-		.count = 0,
 	};
 	cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT);
 	measure(&cnt, 1);
@@ -415,7 +421,6 @@ static void check_running_counter_wrmsr(void)
 	pmu_counter_t evt = {
 		.ctr = gp_counter_base,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel,
-		.count = 0,
 	};
 
 	report_prefix_push("running counter wrmsr");
@@ -430,7 +435,6 @@ static void check_running_counter_wrmsr(void)
 	wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	      rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
 
-	evt.count = 0;
 	start_event(&evt);
 
 	count = -1;
@@ -454,13 +458,11 @@ static void check_emulated_instr(void)
 		.ctr = MSR_IA32_PERFCTR0,
 		/* branch instructions */
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[5].unit_sel,
-		.count = 0,
 	};
 	pmu_counter_t instr_cnt = {
 		.ctr = MSR_IA32_PERFCTR0 + 1,
 		/* instructions */
 		.config = EVNTSEL_OS | EVNTSEL_USR |
 			gp_events[1].unit_sel,
-		.count = 0,
 	};
 	report_prefix_push("emulated instruction");
@@ -589,7 +591,6 @@ static void set_ref_cycle_expectations(void)
 	pmu_counter_t cnt = {
 		.ctr = MSR_IA32_PERFCTR0,
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[2].unit_sel,
-		.count = 0,
 	};
 	uint64_t tsc_delta;
 	uint64_t t0, t1, t2, t3;

From patchwork Tue Aug 16 08:09:06 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12944539
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v2 2/5] x86/pmu: Introduce multiple_{one, many}() to improve readability
Date: Tue, 16 Aug 2022 16:09:06 +0800
Message-Id: <20220816080909.90622-3-likexu@tencent.com>
In-Reply-To: <20220816080909.90622-1-likexu@tencent.com>
References: <20220816080909.90622-1-likexu@tencent.com>
List-ID: kvm@vger.kernel.org

The current measure() forces the common case to pass in unnecessary
information in order to give flexibility to a
single use case. It's just syntactic sugar, but it really does help
readers, as it's not obvious that the "1" specifies the number of
events, whereas multiple_many() and measure_one() are relatively
self-explanatory.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 x86/pmu.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 817b4d0..277fa6c 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -181,7 +181,7 @@ static void stop_event(pmu_counter_t *evt)
 	evt->count = rdmsr(evt->ctr);
 }
 
-static void measure(pmu_counter_t *evt, int count)
+static void multiple_many(pmu_counter_t *evt, int count)
 {
 	int i;
 	for (i = 0; i < count; i++)
@@ -191,6 +191,11 @@
 		stop_event(&evt[i]);
 }
 
+static void measure_one(pmu_counter_t *evt)
+{
+	multiple_many(evt, 1);
+}
+
 static void __measure(pmu_counter_t *evt, uint64_t count)
 {
 	__start_event(evt, count);
@@ -220,7 +225,7 @@ static void check_gp_counter(struct pmu_event *evt)
 	int i;
 
 	for (i = 0; i < nr_gp_counters; i++, cnt.ctr++) {
-		measure(&cnt, 1);
+		measure_one(&cnt);
 		report(verify_event(cnt.count, evt), "%s-%d", evt->name, i);
 	}
 }
@@ -247,7 +252,7 @@ static void check_fixed_counters(void)
 	for (i = 0; i < nr_fixed_counters; i++) {
 		cnt.ctr = fixed_events[i].unit_sel;
-		measure(&cnt, 1);
+		measure_one(&cnt);
 		report(verify_event(cnt.count, &fixed_events[i]), "fixed-%d", i);
 	}
 }
@@ -274,7 +279,7 @@ static void check_counters_many(void)
 		n++;
 	}
 
-	measure(cnt, n);
+	multiple_many(cnt, n);
 
 	for (i = 0; i < n; i++)
 		if (!verify_counter(&cnt[i]))
@@ -338,7 +343,7 @@ static void check_gp_counter_cmask(void)
 		.config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
 	};
 	cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT);
-	measure(&cnt, 1);
+	measure_one(&cnt);
 	report(cnt.count < gp_events[1].min, "cmask");
 }

From patchwork Tue Aug 16 08:09:07 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12944528
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v2 3/5] x86/pmu: Reset the expected count of the fixed counter 0 on i386
Date: Tue, 16 Aug 2022 16:09:07 +0800
Message-Id: <20220816080909.90622-4-likexu@tencent.com>
In-Reply-To: <20220816080909.90622-1-likexu@tencent.com>
References: <20220816080909.90622-1-likexu@tencent.com>
List-ID: kvm@vger.kernel.org

The pmu test check_counter_overflow() always fails with
"./configure --arch=i386". The cnt.count obtained from the latter run of
measure() (based on fixed counter 0) is not equal to the expected value
(based on gp counter 0); there is a positive error with a value of 2.
The two extra instructions come from the inline wrmsr() and inline
rdmsr() inside the global_disable() binary code block.
Specifically, for each MSR access, the i386 code will have two assembly
mov instructions before rdmsr/wrmsr (mark it for fixed counter 0,
bit 32), but only one assembly mov is needed for x86_64 and for gp
counter 0 on i386. Fix the expected initial cnt.count for fixed counter
0 overflow by basing it on the same fixed counter 0, not always on gp
counter 0.

Signed-off-by: Like Xu
---
 x86/pmu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/x86/pmu.c b/x86/pmu.c
index 277fa6c..0ed2890 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -315,6 +315,9 @@ static void check_counter_overflow(void)
 
 		if (i == nr_gp_counters) {
 			cnt.ctr = fixed_events[0].unit_sel;
+			__measure(&cnt, 0);
+			count = cnt.count;
+			cnt.count = 1 - count;
 			cnt.count &= (1ull << pmu_fixed_counter_width()) - 1;
 		}

From patchwork Tue Aug 16 08:09:08 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12944529
From: Like Xu
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH v2 4/5] x86/pmu: Add tests for Intel Processor Event Based Sampling (PEBS)
Date: Tue, 16 Aug 2022 16:09:08 +0800
Message-Id: <20220816080909.90622-5-likexu@tencent.com>
In-Reply-To: <20220816080909.90622-1-likexu@tencent.com>
References: <20220816080909.90622-1-likexu@tencent.com>
List-ID: kvm@vger.kernel.org

This unit test exercises KVM's support for Processor Event Based
Sampling (PEBS), another PMU feature on Intel processors (starting with
Ice Lake Server). If a bit in PEBS_ENABLE is set to 1, its corresponding
counter will write at least one PEBS record (including partial state of
the vCPU at the time of the current hardware event) to guest memory on
counter overflow, and trigger an interrupt at a specific DS state. The
format of a PEBS record can be configured by another register.

These tests cover most usage scenarios, including some specially
constructed ones that are not typical behaviour of the Linux PEBS
driver. They lower the barrier for others to understand this feature
and open up more exploration of the KVM implementation and of the
hardware feature itself.
Signed-off-by: Like Xu
---
 lib/x86/msr.h       |   1 +
 x86/Makefile.x86_64 |   1 +
 x86/pmu_pebs.c      | 486 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |   7 +
 4 files changed, 495 insertions(+)
 create mode 100644 x86/pmu_pebs.c

diff --git a/lib/x86/msr.h b/lib/x86/msr.h
index fa1c0c8..252e041 100644
--- a/lib/x86/msr.h
+++ b/lib/x86/msr.h
@@ -52,6 +52,7 @@
 #define MSR_IA32_MCG_CTL		0x0000017b
 
 #define MSR_IA32_PEBS_ENABLE		0x000003f1
+#define MSR_PEBS_DATA_CFG		0x000003f2
 #define MSR_IA32_DS_AREA		0x00000600
 #define MSR_IA32_PERF_CAPABILITIES	0x00000345
 
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index 8f9463c..bd827fe 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -33,6 +33,7 @@ tests += $(TEST_DIR)/vmware_backdoors.$(exe)
 tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
+tests += $(TEST_DIR)/pmu_pebs.$(exe)
 
 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)
diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
new file mode 100644
index 0000000..db4ecbf
--- /dev/null
+++ b/x86/pmu_pebs.c
@@ -0,0 +1,486 @@
+#include "x86/msr.h"
+#include "x86/processor.h"
+#include "x86/isr.h"
+#include "x86/apic.h"
+#include "x86/apic-defs.h"
+#include "x86/desc.h"
+#include "alloc.h"
+
+#include "vm.h"
+#include "types.h"
+#include "processor.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+
+#define PC_VECTOR	32
+
+#define X86_FEATURE_PDCM	(CPUID(0x1, 0, ECX, 15))
+
+#define PERF_CAP_PEBS_FORMAT	0xf00
+#define PMU_CAP_FW_WRITES	(1ULL << 13)
+#define PMU_CAP_PEBS_BASELINE	(1ULL << 14)
+
+#define INTEL_PMC_IDX_FIXED	32
+
+#define GLOBAL_STATUS_BUFFER_OVF_BIT	62
+#define GLOBAL_STATUS_BUFFER_OVF	BIT_ULL(GLOBAL_STATUS_BUFFER_OVF_BIT)
+
+#define EVNTSEL_USR_SHIFT	16
+#define EVNTSEL_OS_SHIFT	17
+#define EVNTSEL_EN_SHIFT	22
+
+#define EVNTSEL_EN	(1 << EVNTSEL_EN_SHIFT)
+#define EVNTSEL_USR	(1 << EVNTSEL_USR_SHIFT)
+#define EVNTSEL_OS	(1 << EVNTSEL_OS_SHIFT)
+
+#define PEBS_DATACFG_MEMINFO	BIT_ULL(0)
+#define PEBS_DATACFG_GP		BIT_ULL(1)
+#define PEBS_DATACFG_XMMS	BIT_ULL(2)
+#define PEBS_DATACFG_LBRS	BIT_ULL(3)
+
+#define ICL_EVENTSEL_ADAPTIVE	(1ULL << 34)
+#define PEBS_DATACFG_LBR_SHIFT	24
+#define MAX_NUM_LBR_ENTRY	32
+
+static u64 gp_counter_base = MSR_IA32_PERFCTR0;
+static unsigned int max_nr_gp_events;
+static unsigned long *ds_buffer;
+static unsigned long *pebs_buffer;
+static u64 ctr_start_val;
+static u64 perf_cap;
+
+struct debug_store {
+	u64 bts_buffer_base;
+	u64 bts_index;
+	u64 bts_absolute_maximum;
+	u64 bts_interrupt_threshold;
+	u64 pebs_buffer_base;
+	u64 pebs_index;
+	u64 pebs_absolute_maximum;
+	u64 pebs_interrupt_threshold;
+	u64 pebs_event_reset[64];
+};
+
+struct pebs_basic {
+	u64 format_size;
+	u64 ip;
+	u64 applicable_counters;
+	u64 tsc;
+};
+
+struct pebs_meminfo {
+	u64 address;
+	u64 aux;
+	u64 latency;
+	u64 tsx_tuning;
+};
+
+struct pebs_gprs {
+	u64 flags, ip, ax, cx, dx, bx, sp, bp, si, di;
+	u64 r8, r9, r10, r11, r12, r13, r14, r15;
+};
+
+struct pebs_xmm {
+	u64 xmm[16 * 2];	/* two entries for each register */
+};
+
+struct lbr_entry {
+	u64 from;
+	u64 to;
+	u64 info;
+};
+
+enum pmc_type {
+	GP = 0,
+	FIXED,
+};
+
+static uint32_t intel_arch_events[] = {
+	0x00c4, /* PERF_COUNT_HW_BRANCH_INSTRUCTIONS */
+	0x00c5, /* PERF_COUNT_HW_BRANCH_MISSES */
+	0x0300, /* PERF_COUNT_HW_REF_CPU_CYCLES */
+	0x003c, /* PERF_COUNT_HW_CPU_CYCLES */
+	0x00c0, /* PERF_COUNT_HW_INSTRUCTIONS */
+	0x013c, /* PERF_COUNT_HW_BUS_CYCLES */
+	0x4f2e, /* PERF_COUNT_HW_CACHE_REFERENCES */
+	0x412e, /* PERF_COUNT_HW_CACHE_MISSES */
+};
+
+static u64 pebs_data_cfgs[] = {
+	PEBS_DATACFG_MEMINFO,
+	PEBS_DATACFG_GP,
+	PEBS_DATACFG_XMMS,
+	PEBS_DATACFG_LBRS | ((MAX_NUM_LBR_ENTRY - 1) << PEBS_DATACFG_LBR_SHIFT),
+};
+
+/* Iterating over each counter value is a waste of time; pick a few typical values. */
+static u64 counter_start_values[] = {
+	/* if the PEBS counter doesn't overflow at all */
+	0,
+	0xfffffffffff0,
+	/* normal counter overflow to have PEBS records */
+	0xfffffffffffe,
+	/* test whether emulated instructions should trigger PEBS */
+	0xffffffffffff,
+};
+
+static inline u8 pebs_format(void)
+{
+	return (perf_cap & PERF_CAP_PEBS_FORMAT) >> 8;
+}
+
+static inline bool pebs_has_baseline(void)
+{
+	return perf_cap & PMU_CAP_PEBS_BASELINE;
+}
+
+static unsigned int get_adaptive_pebs_record_size(u64 pebs_data_cfg)
+{
+	unsigned int sz = sizeof(struct pebs_basic);
+
+	if (!pebs_has_baseline())
+		return sz;
+
+	if (pebs_data_cfg & PEBS_DATACFG_MEMINFO)
+		sz += sizeof(struct pebs_meminfo);
+	if (pebs_data_cfg & PEBS_DATACFG_GP)
+		sz += sizeof(struct pebs_gprs);
+	if (pebs_data_cfg & PEBS_DATACFG_XMMS)
+		sz += sizeof(struct pebs_xmm);
+	if (pebs_data_cfg & PEBS_DATACFG_LBRS)
+		sz += MAX_NUM_LBR_ENTRY * sizeof(struct lbr_entry);
+
+	return sz;
+}
+
+static void cnt_overflow(isr_regs_t *regs)
+{
+	apic_write(APIC_EOI, 0);
+}
+
+static inline void workload(void)
+{
+	asm volatile(
+		"mov $0x0, %%eax\n"
+		"cmp $0x0, %%eax\n"
+		"jne label2\n"
+		"jne label2\n"
+		"jne label2\n"
+		"jne label2\n"
+		"mov $0x0, %%eax\n"
+		"cmp $0x0, %%eax\n"
+		"jne label2\n"
+		"jne label2\n"
+		"jne label2\n"
+		"jne label2\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"label2:\n"
+		:
+		:
+		: "eax", "ebx", "ecx", "edx");
+}
+
+static inline void workload2(void)
+{
+	asm volatile(
+		"mov $0x0, %%eax\n"
+		"cmp $0x0, %%eax\n"
+		"jne label3\n"
+		"jne label3\n"
+		"jne label3\n"
+		"jne label3\n"
+		"mov $0x0, %%eax\n"
+		"cmp $0x0, %%eax\n"
+		"jne label3\n"
+		"jne label3\n"
+		"jne label3\n"
+		"jne label3\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"mov $0xa, %%eax\n"
+		"cpuid\n"
+		"label3:\n"
+		:
+		:
+		: "eax", "ebx", "ecx", "edx");
+}
+
+static void alloc_buffers(void)
+{
+	ds_buffer = alloc_page();
+	force_4k_page(ds_buffer);
+	memset(ds_buffer, 0x0, PAGE_SIZE);
+
+	pebs_buffer = alloc_page();
+	force_4k_page(pebs_buffer);
+	memset(pebs_buffer, 0x0, PAGE_SIZE);
+}
+
+static void free_buffers(void)
+{
+	if (ds_buffer)
+		free_page(ds_buffer);
+
+	if (pebs_buffer)
+		free_page(pebs_buffer);
+}
+
+static void pebs_enable(u64 bitmask, u64 pebs_data_cfg)
+{
+	static struct debug_store *ds;
+	u64 baseline_extra_ctrl, fixed_ctr_ctrl = 0;
+	unsigned int idx;
+
+	if (pebs_has_baseline())
+		wrmsr(MSR_PEBS_DATA_CFG, pebs_data_cfg);
+
+	ds = (struct debug_store *)ds_buffer;
+	ds->pebs_index = ds->pebs_buffer_base = (unsigned long)pebs_buffer;
+	ds->pebs_absolute_maximum = (unsigned long)pebs_buffer + PAGE_SIZE;
+	ds->pebs_interrupt_threshold = ds->pebs_buffer_base +
+		get_adaptive_pebs_record_size(pebs_data_cfg);
+
+	for (idx = 0; idx < pmu_nr_fixed_counters(); idx++) {
+		if (!(BIT_ULL(INTEL_PMC_IDX_FIXED + idx) & bitmask))
+			continue;
+		baseline_extra_ctrl = pebs_has_baseline() ?
+			(1ULL << (INTEL_PMC_IDX_FIXED + idx * 4)) : 0;
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + idx, ctr_start_val);
+		fixed_ctr_ctrl |= (0xbULL << (idx * 4) | baseline_extra_ctrl);
+	}
+	if (fixed_ctr_ctrl)
+		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, fixed_ctr_ctrl);
+
+	for (idx = 0; idx < max_nr_gp_events; idx++) {
+		if (!(BIT_ULL(idx) & bitmask))
+			continue;
+		baseline_extra_ctrl = pebs_has_baseline() ?
+			ICL_EVENTSEL_ADAPTIVE : 0;
+		wrmsr(MSR_P6_EVNTSEL0 + idx,
+		      EVNTSEL_EN | EVNTSEL_OS | EVNTSEL_USR |
+		      intel_arch_events[idx] | baseline_extra_ctrl);
+		wrmsr(gp_counter_base + idx, ctr_start_val);
+	}
+
+	wrmsr(MSR_IA32_DS_AREA, (unsigned long)ds_buffer);
+	wrmsr(MSR_IA32_PEBS_ENABLE, bitmask);
+	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, bitmask);
+}
+
+static void pmu_env_cleanup(void)
+{
+	unsigned int idx;
+
+	memset(ds_buffer, 0x0, PAGE_SIZE);
+	memset(pebs_buffer, 0x0, PAGE_SIZE);
+	wrmsr(MSR_IA32_PEBS_ENABLE, 0);
+	wrmsr(MSR_IA32_DS_AREA, 0);
+	if (pebs_has_baseline())
+		wrmsr(MSR_PEBS_DATA_CFG, 0);
+
+	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+	wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, 0);
+	for (idx = 0; idx < pmu_nr_fixed_counters(); idx++) {
+		wrmsr(MSR_CORE_PERF_FIXED_CTR0 + idx, 0);
+	}
+
+	for (idx = 0; idx < pmu_nr_gp_counters(); idx++) {
+		wrmsr(MSR_P6_EVNTSEL0 + idx, 0);
+		wrmsr(MSR_IA32_PERFCTR0 + idx, 0);
+	}
+
+	wrmsr(MSR_CORE_PERF_GLOBAL_OVF_CTRL, rdmsr(MSR_CORE_PERF_GLOBAL_STATUS));
+}
+
+static inline void pebs_disable_1(void)
+{
+	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+}
+
+static inline void pebs_disable_2(void)
+{
+	wrmsr(MSR_IA32_PEBS_ENABLE, 0);
+	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+}
+
+static void pebs_disable(unsigned int idx)
+{
+	if (idx % 2) {
+		pebs_disable_1();
+	} else {
+		pebs_disable_2();
+	}
+}
+
+static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg)
+{
+	struct pebs_basic *pebs_rec = (struct pebs_basic *)pebs_buffer;
+	struct debug_store *ds = (struct debug_store *)ds_buffer;
+	unsigned int pebs_record_size = get_adaptive_pebs_record_size(pebs_data_cfg);
+	unsigned int count = 0;
+	bool expected, pebs_idx_match, pebs_size_match, data_cfg_match;
+	void *vernier;
+
+	expected = (ds->pebs_index == ds->pebs_buffer_base) && !pebs_rec->format_size;
+	if (!(rdmsr(MSR_CORE_PERF_GLOBAL_STATUS) & GLOBAL_STATUS_BUFFER_OVF)) {
+		report(expected, "No OVF irq, no PEBS records.");
+		return;
+	}
+
+	if (expected) {
+		report(!expected, "An OVF irq, but no PEBS records.");
+		return;
+	}
+
+	expected = ds->pebs_index >= ds->pebs_interrupt_threshold;
+	vernier = (void *)pebs_buffer;
+	do {
+		pebs_rec = (struct pebs_basic *)vernier;
+		pebs_record_size = pebs_rec->format_size >> 48;
+		pebs_idx_match =
+			pebs_rec->applicable_counters & bitmask;
+		pebs_size_match =
+			pebs_record_size == get_adaptive_pebs_record_size(pebs_data_cfg);
+		data_cfg_match =
+			(pebs_rec->format_size & 0xffffffffffff) == pebs_data_cfg;
+		expected = pebs_idx_match && pebs_size_match && data_cfg_match;
+		report(expected,
+		       "PEBS record (written seq %d) is verified (including size, counters and cfg).", count);
+		vernier = vernier + pebs_record_size;
+		count++;
+	} while (expected && (void *)vernier < (void *)ds->pebs_index);
+
+	if (!expected) {
+		if (!pebs_idx_match)
+			printf("FAIL: The applicable_counters (0x%lx) doesn't match with pmc_bitmask (0x%lx).\n",
+			       pebs_rec->applicable_counters, bitmask);
+		if (!pebs_size_match)
+			printf("FAIL: The pebs_record_size (%d) doesn't match with MSR_PEBS_DATA_CFG (%d).\n",
+			       pebs_record_size, get_adaptive_pebs_record_size(pebs_data_cfg));
+		if (!data_cfg_match)
+			printf("FAIL: The pebs_data_cfg (0x%lx) doesn't match with MSR_PEBS_DATA_CFG (0x%lx).\n",
+			       pebs_rec->format_size & 0xffffffffffff, pebs_data_cfg);
+	}
+}
+
+static void check_one_counter(enum pmc_type type,
+			      unsigned int idx, u64 pebs_data_cfg)
+{
+	report_prefix_pushf("%s counter %d (0x%lx)",
+			    type == FIXED ? "Extended Fixed" : "GP", idx, ctr_start_val);
+	pmu_env_cleanup();
+	pebs_enable(BIT_ULL(type == FIXED ? INTEL_PMC_IDX_FIXED + idx : idx), pebs_data_cfg);
+	workload();
+	pebs_disable(idx);
+	check_pebs_records(BIT_ULL(type == FIXED ? INTEL_PMC_IDX_FIXED + idx : idx), pebs_data_cfg);
+	report_prefix_pop();
+}
+
+static void check_multiple_counters(u64 bitmask, u64 pebs_data_cfg)
+{
+	pmu_env_cleanup();
+	pebs_enable(bitmask, pebs_data_cfg);
+	workload2();
+	pebs_disable(0);
+	check_pebs_records(bitmask, pebs_data_cfg);
+}
+
+static void check_pebs_counters(u64 pebs_data_cfg)
+{
+	unsigned int idx;
+	u64 bitmask = 0;
+
+	for (idx = 0; idx < pmu_nr_fixed_counters(); idx++)
+		check_one_counter(FIXED, idx, pebs_data_cfg);
+
+	for (idx = 0; idx < max_nr_gp_events; idx++)
+		check_one_counter(GP, idx, pebs_data_cfg);
+
+	for (idx = 0; idx < pmu_nr_fixed_counters(); idx++)
+		bitmask |= BIT_ULL(INTEL_PMC_IDX_FIXED + idx);
+	for (idx = 0; idx < max_nr_gp_events; idx += 2)
+		bitmask |= BIT_ULL(idx);
+	report_prefix_pushf("Multiple (0x%lx)", bitmask);
+	check_multiple_counters(bitmask, pebs_data_cfg);
+	report_prefix_pop();
+}
+
+int main(int ac, char **av)
+{
+	unsigned int i, j;
+
+	setup_vm();
+
+	max_nr_gp_events = MIN(pmu_nr_gp_counters(), ARRAY_SIZE(intel_arch_events));
+
+	printf("PMU version: %d\n", pmu_version());
+	if (this_cpu_has(X86_FEATURE_PDCM))
+		perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
+
+	if (perf_cap & PMU_CAP_FW_WRITES)
+		gp_counter_base = MSR_IA32_PMC0;
+
+	if (!is_intel()) {
+		report_skip("PEBS is only supported on Intel CPUs (ICX or later)");
+		return report_summary();
+	} else if (pmu_version() < 2) {
+		report_skip("Architectural PMU version is not high enough");
+		return report_summary();
+	} else if (!pebs_format()) {
+		report_skip("PEBS not enumerated in PERF_CAPABILITIES");
+		return report_summary();
+	} else if (rdmsr(MSR_IA32_MISC_ENABLE) & MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL) {
+		report_skip("PEBS unavailable according to MISC_ENABLE");
+		return report_summary();
+	}
+
+	printf("PEBS format: %d\n", pebs_format());
+	printf("PEBS GP counters: %d\n", pmu_nr_gp_counters());
+	printf("PEBS Fixed counters: %d\n", pmu_nr_fixed_counters());
+	printf("PEBS baseline (Adaptive PEBS): 
%d\n", pebs_has_baseline()); + + printf("Known reasons for none PEBS records:\n"); + printf("1. The selected event does not support PEBS;\n"); + printf("2. From a core pmu perspective, the vCPU and pCPU models are not same;\n"); + printf("3. Guest counter has not yet overflowed or been cross-mapped by the host;\n"); + + handle_irq(PC_VECTOR, cnt_overflow); + alloc_buffers(); + + for (i = 0; i < ARRAY_SIZE(counter_start_values); i++) { + ctr_start_val = counter_start_values[i]; + check_pebs_counters(0); + if (!pebs_has_baseline()) + continue; + + for (j = 0; j < ARRAY_SIZE(pebs_data_cfgs); j++) { + report_prefix_pushf("Adaptive (0x%lx)", pebs_data_cfgs[j]); + check_pebs_counters(pebs_data_cfgs[j]); + report_prefix_pop(); + } + } + + free_buffers(); + + return report_summary(); +} diff --git a/x86/unittests.cfg b/x86/unittests.cfg index ed65185..c5efb25 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -198,6 +198,13 @@ check = /sys/module/kvm/parameters/ignore_msrs=N check = /proc/sys/kernel/nmi_watchdog=0 accel = kvm +[pmu_pebs] +arch = x86_64 +file = pmu_pebs.flat +extra_params = -cpu host,migratable=no +check = /proc/sys/kernel/nmi_watchdog=0 +accel = kvm + [vmware_backdoors] file = vmware_backdoors.flat extra_params = -machine vmport=on -cpu max From patchwork Tue Aug 16 08:09:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Like Xu X-Patchwork-Id: 12944569 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B61E2C2BB41 for ; Tue, 16 Aug 2022 10:07:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232831AbiHPKHo (ORCPT ); Tue, 16 Aug 2022 06:07:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56120 "EHLO lindbergh.monkeyblade.net" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234016AbiHPKHN (ORCPT ); Tue, 16 Aug 2022 06:07:13 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7F8C7757A for ; Tue, 16 Aug 2022 01:10:19 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id d16so8516719pll.11 for ; Tue, 16 Aug 2022 01:10:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=ql4Rh8J556MPf2Buydi+F1rT3HDm+TzYYIHtdS57x/8=; b=QlXIPk1tnsd/zpFN6hx30gy9IddpPd972R7XiCXk/QO5HZqnL7w+WLw8PfmqEbnF7k 8tOHqeALgK7l4K2dZlkqB/is5fCm911IpN31elSAU4ausVzKSDJ8Ngt2R0jiXlg+7mbK HAMz/mkBtxVrxvioMTaoSzTbyYt4ZgoRd3vV+HdmItFMCY4pEiG/okub0N7rGVi8YG8/ Zc9JUraJpbcR+uj1SBWj6osq1TTDQqW/O9jjLB9n40vWlpyaNJdc0yMLWFUj71ayjmrz Vn9dMuZHzdR3gaFKST6GMZCDgLCF3w/t4yCP87ziPcXrYGpv0z5RApbI5SuRWNuU0ENm jRHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=ql4Rh8J556MPf2Buydi+F1rT3HDm+TzYYIHtdS57x/8=; b=W2bSK1mD/kT16ozFc9lridm+teqZNLGQiO8Ci2KchMXhCj6nOkkBRCoZLWbfzr7y0L MMX6IyiVq4eRi43GEFgkFPwrysbRL76gYGlI3Um2N1jje7uGEP/Xj5HBK0t9+/XgkvWm bywzmKqByGWuaoWSDOvZa2jUkOh2xq/HzG/RnOxK1+Dmru4zVLDbtpgQeQ3G0L06TX0X prnirshtHl6NLaNJ6hDMAjDALFhL9hqEv0gv4fJP+YjojVE0jl7keZPCx9yUj1z6cFIQ rcnFQuOl4ZHlJHAoN6ZkhVI9k2Gdpz+mhe9GOWqxYupFEHv2tC9WrZH8UddPNxfOR1L8 JGZw== X-Gm-Message-State: ACgBeo2pbSFRp8yk2CBlI/UDHDXuRRWFVJlXwelvABoBqDIEfL9QeIDg V0lY1D5Hi8nmMFPnOQvWe7c= X-Google-Smtp-Source: AA6agR7+pKsXuERDxAOvvLf6ypBbENtc17iJHdcIDjZC9cM6sHMog5M8OtfIf5ZukQDXfwFioSRTUg== X-Received: by 2002:a17:902:ebcb:b0:168:e3ba:4b5a with SMTP id p11-20020a170902ebcb00b00168e3ba4b5amr20815261plg.11.1660637419209; Tue, 16 Aug 
2022 01:10:19 -0700 (PDT) Received: from localhost.localdomain ([103.7.29.32]) by smtp.gmail.com with ESMTPSA id m12-20020a170902db0c00b0016d7b2352desm8400920plx.244.2022.08.16.01.10.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Aug 2022 01:10:19 -0700 (PDT) From: Like Xu X-Google-Original-From: Like Xu To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org Subject: [kvm-unit-tests PATCH v2 5/5] x86: create pmu group for quick pmu-scope testing Date: Tue, 16 Aug 2022 16:09:09 +0800 Message-Id: <20220816080909.90622-6-likexu@tencent.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220816080909.90622-1-likexu@tencent.com> References: <20220816080909.90622-1-likexu@tencent.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Like Xu Any agent can run "./run_tests.sh -g pmu" for vPMU-related testcases. Signed-off-by: Like Xu --- x86/unittests.cfg | 3 +++ 1 file changed, 3 insertions(+) diff --git a/x86/unittests.cfg b/x86/unittests.cfg index c5efb25..54f0437 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -189,6 +189,7 @@ file = pmu.flat extra_params = -cpu max check = /proc/sys/kernel/nmi_watchdog=0 accel = kvm +groups = pmu [pmu_lbr] arch = x86_64 @@ -197,6 +198,7 @@ extra_params = -cpu host,migratable=no check = /sys/module/kvm/parameters/ignore_msrs=N check = /proc/sys/kernel/nmi_watchdog=0 accel = kvm +groups = pmu [pmu_pebs] arch = x86_64 @@ -204,6 +206,7 @@ file = pmu_pebs.flat extra_params = -cpu host,migratable=no check = /proc/sys/kernel/nmi_watchdog=0 accel = kvm +groups = pmu [vmware_backdoors] file = vmware_backdoors.flat