From patchwork Wed Feb 15 01:07:13 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13141152
Date: Wed, 15 Feb 2023 01:07:13 +0000
In-Reply-To: <20230215010717.3612794-1-rananta@google.com>
References: <20230215010717.3612794-1-rananta@google.com>
Message-ID: <20230215010717.3612794-13-rananta@google.com>
Subject: [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
 James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to also validate the vPMU's functionality
when set up for overflow conditions.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
 1 file changed, 198 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 0c9d801f4e602..066dc17fa3906 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -21,7 +21,9 @@
  *
  * 4. Since the PMU registers are per-cpu, stress KVM by frequently
  *    migrating the guest vCPU to random pCPUs in the system, and check
- *    if the vPMU is still behaving as expected.
+ *    if the vPMU is still behaving as expected. The sub-tests cover
+ *    basic functionality such as counter behavior, overflow, and
+ *    overflow interrupts.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -41,13 +43,27 @@
 #include
 
 #include "delay.h"
+#include "gic.h"
+#include "spinlock.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The cycle counter bit position that's common among the PMU registers */
+#define ARMV8_PMU_CYCLE_COUNTER_IDX	31
+
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS	64
 
+#define PMU_IRQ			23
+
+#define COUNT_TO_OVERFLOW	0xFULL
+#define PRE_OVERFLOW_32		(GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
+#define PRE_OVERFLOW_64		(GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
 #define msecs_to_usecs(msec)	((msec) * 1000LL)
 
 /*
@@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
 	isb();
 }
 
+static inline void write_pmovsclr(unsigned long val)
+{
+	write_sysreg(val, pmovsclr_el0);
+	isb();
+}
+
+static unsigned long read_pmovsclr(void)
+{
+	return read_sysreg(pmovsclr_el0);
+}
+
 static inline void enable_counter(int idx)
 {
 	uint64_t v = read_sysreg(pmcntenset_el0);
@@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline void enable_irq(int idx)
+{
+	uint64_t v = read_sysreg(pmintenset_el1);
+
+	write_sysreg(BIT(idx) | v, pmintenset_el1);
+	isb();
+}
+
+static inline void disable_irq(int idx)
+{
+	uint64_t v = read_sysreg(pmintenset_el1);
+
+	write_sysreg(BIT(idx) | v, pmintenclr_el1);
+	isb();
+}
+
 static inline uint64_t read_cycle_counter(void)
 {
 	return read_sysreg(pmccntr_el0);
 }
 
+static inline void write_cycle_counter(uint64_t v)
+{
+	write_sysreg(v, pmccntr_el0);
+	isb();
+}
+
 static inline void reset_cycle_counter(void)
 {
 	uint64_t v = read_sysreg(pmcr_el0);
@@ -289,6 +338,15 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+/* Data to communicate among guest threads */
+struct guest_irq_data {
+	uint32_t pmc_idx_bmap;
+	uint32_t irq_received_bmap;
+	struct spinlock lock;
+};
+
+static struct guest_irq_data guest_irq_data;
+
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
 
@@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
 	expected_ec = INVALID_EC;
 }
 
+static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)
+{
+	/*
+	 * Fail if there's an interrupt from unexpected PMCs.
+	 * All the expected events' IRQs may not arrive at the same time.
+	 * Hence, check if the interrupt is valid only if it's expected.
+	 */
+	if (pmovsclr & BIT(pmc_idx)) {
+		GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
+		write_pmovsclr(BIT(pmc_idx));
+	}
+}
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+	uint32_t pmc_idx_bmap;
+	uint64_t i, pmcr_n = get_pmcr_n();
+	uint32_t pmovsclr = read_pmovsclr();
+	unsigned int intid = gic_get_and_ack_irq();
+
+	/* No other IRQ apart from the PMU IRQ is expected */
+	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
+
+	spin_lock(&guest_irq_data.lock);
+	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+
+	for (i = 0; i < pmcr_n; i++)
+		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
+	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
+
+	/* Mark the IRQ as received for the corresponding PMCs */
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
+	spin_unlock(&guest_irq_data.lock);
+
+	gic_set_eoi(intid);
+}
+
+static int pmu_irq_received(int pmc_idx)
+{
+	bool irq_received;
+
+	spin_lock(&guest_irq_data.lock);
+	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	return irq_received;
+}
+
+static void pmu_irq_init(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	enable_irq(pmc_idx);
+}
+
+static void pmu_irq_exit(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	disable_irq(pmc_idx);
+}
+
 /*
  * Run the given operation that should trigger an exception with the
  * given exception class. The exception handler (guest_sync_handler)
@@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
 	precise_instrs_loop(loop, pmcr);
 }
 
-static void test_instructions_count(int pmc_idx, bool expect_count)
+static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
 {
 	int i;
 	struct pmc_accessor *acc;
-	uint64_t cnt;
-	int instrs_count = 100;
+	uint64_t cntr_val = 0;
+	int instrs_count = 500;
+
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT_1(expect_count, pmc_idx);
+
+		cntr_val = PRE_OVERFLOW_32;
+		pmu_irq_init(pmc_idx);
+	}
 
 	enable_counter(pmc_idx);
 
@@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		acc = &pmc_accessors[i];
 
-		pmu_disable_reset();
-
+		acc->write_cntr(pmc_idx, cntr_val);
 		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
 
-		/* Enable the PMU and execute precisely number of instructions as a workload */
-		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+		/*
+		 * Enable the PMU and execute a precise number of instructions as a workload.
+		 * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
+		 * should have enough instructions to raise an IRQ.
+		 */
+		execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
 
-		/* If a count is expected, the counter should be increased by 'instrs_count' */
-		cnt = acc->read_cntr(pmc_idx);
-		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
-			       i, expect_count, cnt, instrs_count);
+		/*
+		 * If an overflow is expected, only check for the overflow flag.
+		 * As the overflow interrupt is enabled, the interrupt would add additional
+		 * instructions and mess up the precise instruction count. Hence, measure
+		 * the instruction count only when the test is not set up for an overflow.
+		 */
+		if (test_overflow) {
+			GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
+		} else {
+			uint64_t cnt = acc->read_cntr(pmc_idx);
+
+			GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+				       pmc_idx, i, cnt, expect_count);
+		}
 	}
 
-	disable_counter(pmc_idx);
+	if (test_overflow)
+		pmu_irq_exit(pmc_idx);
 }
 
-static void test_cycles_count(bool expect_count)
+static void test_cycles_count(bool expect_count, bool test_overflow)
 {
 	uint64_t cnt;
 
-	pmu_enable();
-	reset_cycle_counter();
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT(expect_count);
+
+		write_cycle_counter(PRE_OVERFLOW_64);
+		pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	} else {
+		reset_cycle_counter();
+	}
 
 	/* Count cycles in EL0 and EL1 */
 	write_pmccfiltr(0);
 	enable_cycle_counter();
 
+	/* Enable the PMU and execute a precise number of instructions as a workload */
+	execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
 	cnt = read_cycle_counter();
 
 	/*
 	 * If a count is expected by the test, the cycle counter should be increased by
-	 * at least 1, as there is at least one instruction between enabling the
+	 * at least 1, as there are a number of instructions between enabling the
 	 * counter and reading the counter.
 	 */
 	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
 
+	if (test_overflow) {
+		GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
+		pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	}
 	disable_cycle_counter();
 	pmu_disable_reset();
@@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
 	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
-		test_instructions_count(pmc_idx, expect_count);
+		test_instructions_count(pmc_idx, expect_count, false);
 		break;
 	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
-		test_cycles_count(expect_count);
+		test_cycles_count(expect_count, false);
 		break;
 	}
 }
 
 static void test_basic_pmu_functionality(void)
 {
+	local_irq_disable();
+	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_irq_enable(PMU_IRQ);
+	local_irq_enable();
+
 	/* Test events on generic and cycle counters */
-	test_instructions_count(0, true);
-	test_cycles_count(true);
+	test_instructions_count(0, true, false);
+	test_cycles_count(true, false);
+
+	/* Test overflow with interrupts on generic and cycle counters */
+	test_instructions_count(0, true, true);
+	test_cycles_count(true, true);
 }
 
 /*
@@ -813,9 +988,6 @@ static void guest_code(void)
 	GUEST_DONE();
 }
 
-#define GICD_BASE_GPA	0x8000000ULL
-#define GICR_BASE_GPA	0x80A0000ULL
-
 static unsigned long *
 set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
 {
@@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
-	uint64_t dfr0, irq = 23;
+	uint64_t dfr0, irq = PMU_IRQ;
 	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
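
As background for the overflow setup in the hunks above: PRE_OVERFLOW_32 seeds a counter with GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1 (0xfffffff1), so that COUNT_TO_OVERFLOW (15) further events wrap the 32-bit counter to zero, which is what sets the counter's PMOVS bit and raises the PMU interrupt the test waits for. Below is a minimal standalone C sketch of that arithmetic; it is not part of the patch and only mirrors the macro values for illustration.

/* Standalone sketch (not from the patch) of the PRE_OVERFLOW_32 arithmetic. */
#include <stdint.h>
#include <stdio.h>

#define COUNT_TO_OVERFLOW	0xFULL
#define PRE_OVERFLOW_32		((uint64_t)UINT32_MAX - COUNT_TO_OVERFLOW + 1)

int main(void)
{
	uint32_t cnt = (uint32_t)PRE_OVERFLOW_32;	/* 0xfffffff1 */
	unsigned int i;

	/* Count COUNT_TO_OVERFLOW events, as the guest workload would. */
	for (i = 0; i < COUNT_TO_OVERFLOW; i++)
		cnt++;

	/* The 32-bit counter has wrapped to 0; on hardware this sets PMOVS. */
	printf("counter = 0x%x\n", cnt);
	return 0;
}

The 64-bit cycle counter case (PRE_OVERFLOW_64) works the same way, just against a 64-bit wrap.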