From patchwork Mon Feb 13 18:02:22 2023
Date: Mon, 13 Feb 2023 18:02:22 +0000
Message-ID: <20230213180234.2885032-2-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The upcoming patches will add more vPMU-related tests to the file.
Hence, rename it to be more generic.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/testing/selftests/kvm/Makefile                           | 2 +-
 .../kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c}         | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
 rename tools/testing/selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} (99%)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b27fea0ce5918..a4d262e139b18 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,7 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
-TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_test
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
similarity index 99%
rename from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
rename to tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 453f0dd240f44..581be0c463ad1 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * vpmu_counter_access - Test vPMU event counter access
+ * vpmu_test - Test the vPMU
  *
  * Copyright (c) 2022 Google LLC.
  *
From patchwork Mon Feb 13 18:02:23 2023
Date: Mon, 13 Feb 2023 18:02:23 +0000
Message-ID: <20230213180234.2885032-3-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Refactor the existing counter access tests into their own independent
functions and make running the tests generic, to make way for the
upcoming tests. No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 140 ++++++++++++------
 1 file changed, 98 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 581be0c463ad1..d72c3c9b9c39f 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,11 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t get_pmcr_n(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+}
+
 /*
  * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
  * accessors that test cases will use. Each of the accessors will
@@ -183,6 +188,23 @@ struct pmc_accessor pmc_accessors[] = {
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
 
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+enum test_stage {
+	TEST_STAGE_COUNTER_ACCESS = 1,
+};
+
+struct guest_data {
+	enum test_stage test_stage;
+	uint64_t expected_pmcr_n;
+};
+
+static struct guest_data guest_data;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -295,7 +317,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
 		write_sysreg(test_bit, pmovsset_el0);
 
 		/* The bit will be set only if the counter is implemented */
-		pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+		pmcr_n = get_pmcr_n();
 		set_expected = (pmc_idx < pmcr_n) ? true : false;
 	} else {
 		write_sysreg(test_bit, pmcntenclr_el0);
@@ -424,15 +446,14 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
  * if reading/writing PMU registers for implemented or unimplemented
  * counters can work as expected.
  */
-static void guest_code(uint64_t expected_pmcr_n)
+static void guest_counter_access_test(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n, unimp_mask;
+	uint64_t pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
-	pmcr = read_sysreg(pmcr_el0);
-	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+	pmcr_n = get_pmcr_n();
 
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
@@ -462,6 +483,18 @@ static void guest_code(uint64_t expected_pmcr_n)
 		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
 			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
 	}
+}
+
+static void guest_code(void)
+{
+	switch (guest_data.test_stage) {
+	case TEST_STAGE_COUNTER_ACCESS:
+		guest_counter_access_test(guest_data.expected_pmcr_n);
+		break;
+	default:
+		GUEST_ASSERT_1(0, guest_data.test_stage);
+	}
+
 	GUEST_DONE();
 }
 
@@ -469,14 +502,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 #define GICR_BASE_GPA	0x80A0000ULL
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
-				     int *gic_fd)
+static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
@@ -487,7 +520,10 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
 
-	vm = vm_create(1);
+	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
+
+	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
@@ -498,9 +534,9 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
 	vcpu_init_descriptor_tables(vcpu);
-	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
@@ -513,15 +549,21 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
-	*vcpup = vcpu;
-	return vm;
+	return vpmu_vm;
+}
+
+static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+	close(vpmu_vm->gic_fd);
+	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm);
 }
 
-static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+static void run_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
 
-	vcpu_args_set(vcpu, 1, pmcr_n);
+	sync_global_to_guest(vcpu->vm, guest_data);
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -539,16 +581,18 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 /*
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_test(uint64_t pmcr_n)
+static void run_counter_access_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd;
 	uint64_t sp, pmcr, pmcr_orig;
 	struct kvm_vcpu_init init;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -559,23 +603,22 @@ static void run_test(uint64_t pmcr_n)
 	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
 	/*
 	 * Reset and re-initialize the vCPU, and run the guest code again to
 	 * check if PMCR_EL0.N is preserved.
 	 */
-	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
 	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -583,15 +626,18 @@ static void run_test(uint64_t pmcr_n)
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_counter_access_error_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd, ret;
+	int ret;
 	uint64_t pmcr, pmcr_orig;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -603,8 +649,25 @@ static void run_error_test(uint64_t pmcr_n)
 	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
 		    pmcr, pmcr_orig);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
+static void run_counter_access_tests(uint64_t pmcr_n)
+{
+	uint64_t i;
+
+	guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS;
+
+	for (i = 0; i <= pmcr_n; i++)
+		run_counter_access_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_counter_access_error_test(i);
+}
+
+static void run_tests(uint64_t pmcr_n)
+{
+	run_counter_access_tests(pmcr_n);
 }
 
 /*
@@ -613,30 +676,23 @@ static void run_error_test(uint64_t pmcr_n)
  */
 static uint64_t get_pmcr_n_limit(void)
 {
-	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
-	int gic_fd;
+	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
-	close(gic_fd);
-	kvm_vm_free(vm);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	destroy_vpmu_vm(vpmu_vm);
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
 int main(void)
 {
-	uint64_t i, pmcr_n;
+	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
 	pmcr_n = get_pmcr_n_limit();
-	for (i = 0; i <= pmcr_n; i++)
-		run_test(i);
-
-	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+	run_tests(pmcr_n);
 
 	return 0;
 }
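For context on the pattern this refactor sets up: the host publishes test parameters through the global guest_data, run_vcpu() copies that struct into the guest with sync_global_to_guest() before entering it, and guest_code() dispatches on the stage. A rough sketch of how a later patch can plug in a new stage (TEST_STAGE_FOO and guest_foo_test() are hypothetical names, not part of this series):

	/* Hypothetical new stage; illustration only. */
	enum test_stage {
		TEST_STAGE_COUNTER_ACCESS = 1,
		TEST_STAGE_FOO,
	};

	/* Guest side: one more case in the guest_code() dispatcher. */
	case TEST_STAGE_FOO:
		guest_foo_test();
		break;

	/* Host side: select the stage; run_vcpu() syncs guest_data
	 * into the guest before running it.
	 */
	guest_data.test_stage = TEST_STAGE_FOO;
	run_vcpu(vpmu_vm->vcpu);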
From patchwork Mon Feb 13 18:02:24 2023
Date: Mon, 13 Feb 2023 18:02:24 +0000
Message-ID: <20230213180234.2885032-4-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add the definitions of ARMV8_PMU_CNTOVS_C (the cycle counter overflow
bit) for the overflow status registers and ARMV8_PMU_CNTENSET_C (the
cycle counter enable bit) for the PMCNTENSET_EL0 register.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
index 97e49a4d4969f..8ce23aabf6fe6 100644
--- a/tools/arch/arm64/include/asm/perf_event.h
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -222,9 +222,11 @@
 /*
  * PMOVSR: counters overflow flag status reg
  */
+#define ARMV8_PMU_CNTOVS_C	(1 << 31) /* Cycle counter overflow bit */
 #define ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
 #define ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
 
+
 /*
  * PMXEVTYPER: Event selection reg
  */
@@ -247,6 +249,11 @@
 #define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
 
+/*
+ * PMCNTENSET: Count Enable set reg
+ */
+#define ARMV8_PMU_CNTENSET_C	(1 << 31) /* Cycle counter enable bit */
+
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK	0xff
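Both definitions name bit 31 of their respective registers: in PMCNTENSET_EL0 that bit enables the cycle counter, and in the overflow status registers it reports a cycle counter overflow. A minimal guest-side sketch of the intended use (an assumption about usage, relying on the read_sysreg()/write_sysreg() helpers already used by these selftests):

	/* Enable the cycle counter: set bit 31 of PMCNTENSET_EL0. */
	write_sysreg(read_sysreg(pmcntenset_el0) | ARMV8_PMU_CNTENSET_C,
		     pmcntenset_el0);
	isb();

	/* ... workload ... */

	/* Did the cycle counter overflow? Check bit 31 of PMOVSSET_EL0. */
	bool cycle_overflowed = read_sysreg(pmovsset_el0) & ARMV8_PMU_CNTOVS_C;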
From patchwork Mon Feb 13 18:02:25 2023
Date: Mon, 13 Feb 2023 18:02:25 +0000
Message-ID: <20230213180234.2885032-5-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add basic helpers for the test to access the cycle counter registers.
The helpers will be used in the upcoming patches to run the tests
related to the cycle counter. No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index d72c3c9b9c39f..15aebc7d7dc94 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t read_cycle_counter(void)
+{
+	return read_sysreg(pmccntr_el0);
+}
+
+static inline void reset_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcr_el0);
+
+	write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
+	isb();
+}
+
+static inline void enable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
+	isb();
+}
+
+static inline void write_pmccfiltr(unsigned long val)
+{
+	write_sysreg(val, pmccfiltr_el0);
+	isb();
+}
+
+static inline uint64_t read_pmccfiltr(void)
+{
+	return read_sysreg(pmccfiltr_el0);
+}
+
 static inline uint64_t get_pmcr_n(void)
 {
 	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
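A sketch of how these helpers compose in a guest-side test (this mirrors the cycle counter check added later in this series; pmu_enable() and pmu_disable_reset() already exist in the test file):

	pmu_enable();
	reset_cycle_counter();

	write_pmccfiltr(0);		/* count cycles in EL0 and EL1 */
	enable_cycle_counter();

	/* ... workload to be measured ... */

	uint64_t cycles = read_cycle_counter();
	disable_cycle_counter();
	pmu_disable_reset();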
From patchwork Mon Feb 13 18:02:26 2023
Date: Mon, 13 Feb 2023 18:02:26 +0000
Message-ID: <20230213180234.2885032-6-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Accept a list of KVM PMU event filters as an argument while creating
a VM via create_vpmu_vm(). Upcoming patches will leverage this to test
the event filters' functionality. No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include
 #include
 #include
+#include
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS		64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM	10
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	int gic_fd;
+	unsigned long *pmu_filter;
 };
 
 enum test_stage {
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+	int j;
+	unsigned long *pmu_filter;
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+	};
+
+	/*
+	 * Setting up of the bitmap is similar to what KVM does.
+	 * If the first filter denys an event, default all the others to allow, and vice-versa.
+	 */
+	pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+	TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+	if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+		bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+	for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+		struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+		if (!pmu_event_filter->nevents)
+			break;
+
+		pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+			 pmu_event_filter->base_event,
+			 pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+		filter_attr.addr = (uint64_t) pmu_event_filter;
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+		if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+			__set_bit(pmu_event_filter->base_event, pmu_filter);
+		else
+			__clear_bit(pmu_event_filter->base_event, pmu_filter);
+	}
+
+	return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
 	/* Initialize vPMU */
+	if (pmu_event_filters)
+		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 
 static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+	if (vpmu_vm->pmu_filter)
+		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
 	free(vpmu_vm);
@@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
+
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
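To make the new interface concrete, a sketch of a caller (the filter values are illustrative; the EVENT_ALLOW()/EVENT_DENY() initializer macros and the test that exercises this arrive in the next patch):

	/* Allow only INST_RETIRED; unused slots keep .nevents == 0,
	 * which set_event_filters() treats as end-of-list.
	 */
	struct kvm_pmu_event_filter filters[MAX_EVENT_FILTERS_PER_VM] = {
		{ .base_event = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
		  .nevents = 1, .action = KVM_PMU_EVENT_ALLOW },
	};
	struct vpmu_vm *vpmu_vm = create_vpmu_vm(guest_code, filters);

	/* vpmu_vm->pmu_filter now holds the allow/deny bitmap the guest
	 * is expected to observe; destroy_vpmu_vm() frees it.
	 */
	destroy_vpmu_vm(vpmu_vm);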
From patchwork Mon Feb 13 18:02:27 2023
Date: Mon, 13 Feb 2023 18:02:27 +0000
Message-ID: <20230213180234.2885032-7-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
Subject: [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event filter test
From: Raghavendra Rao Ananta <rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER attribute by
applying a series of filters to allow or deny events from userspace.
The guest validates the configuration by checking that it can count
only the events that are allowed.

The workload that executes a precise number of instructions
(execute_precise_instrs() and precise_instrs_loop()) is taken from
kvm-unit-tests' arm/pmu.c.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
 1 file changed, 258 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 2b3a4fa3afa9c..3dfb770b538e9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -2,12 +2,21 @@
 /*
  * vpmu_test - Test the vPMU
  *
- * Copyright (c) 2022 Google LLC.
+ * The test suit contains a series of checks to validate the vPMU
+ * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
+ * supported on the host. The tests include:
  *
- * This test checks if the guest can see the same number of the PMU event
+ * 1. Check if the guest can see the same number of the PMU event
  *    counters (PMCR_EL0.N) that userspace sets, if the guest can access
  *    those counters, and if the guest cannot access any other counters.
- * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ *
+ * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
+ *    attribute by applying a series of filters in various combinations
+ *    of allowing or denying the events. The guest validates it by
+ *    checking if it's able to count only the events that are allowed.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
  */
 #include
 #include
@@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
 
 #define MAX_EVENT_FILTERS_PER_VM	10
 
+#define EVENT_ALLOW(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
+
+#define EVENT_DENY(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -243,11 +258,13 @@ struct vpmu_vm {
 
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
+	TEST_STAGE_KVM_EVENT_FILTER,
 };
 
 struct guest_data {
 	enum test_stage test_stage;
 	uint64_t expected_pmcr_n;
+	unsigned long *pmu_filter;
 };
 
 static struct guest_data guest_data;
@@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
 	GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
+
+/*
+ * Extra instructions inserted by the compiler would be difficult to compensate
+ * for, so hand assemble everything between, and including, the PMCR accesses
+ * to start and stop counting. isb instructions are inserted to make sure
+ * pmccntr read after this function returns the exact instructions executed
+ * in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
+ */
+static inline void precise_instrs_loop(int loop, uint32_t pmcr)
+{
+	uint64_t pmcr64 = pmcr;
+
+	asm volatile(
+	"	msr	pmcr_el0, %[pmcr]\n"
+	"	isb\n"
+	"1:	subs	%w[loop], %w[loop], #1\n"
+	"	b.gt	1b\n"
+	"	nop\n"
+	"	msr	pmcr_el0, xzr\n"
+	"	isb\n"
+	: [loop] "+r" (loop)
+	: [pmcr] "r" (pmcr64)
+	: "cc");
+}
+
+/*
+ * Execute a known number of guest instructions. Only even instruction counts
+ * greater than or equal to 4 are supported by the in-line assembly code. The
+ * control register (PMCR_EL0) is initialized with the provided value (allowing
+ * for example for the cycle counter or event counters to be reset). At the end
+ * of the exact instruction loop, zero is written to PMCR_EL0 to disable
+ * counting, allowing the cycle counter or event counters to be read at the
+ * leisure of the calling code.
+ */
+static void execute_precise_instrs(int num, uint32_t pmcr)
+{
+	int loop = (num - 2) / 2;
+
+	GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
+	precise_instrs_loop(loop, pmcr);
+}
+
+static void test_instructions_count(int pmc_idx, bool expect_count)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t cnt;
+	int instrs_count = 100;
+
+	enable_counter(pmc_idx);
+
+	/* Test the event using all the possible way to configure the event */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		pmu_disable_reset();
+
+		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+		/* Enable the PMU and execute precisely number of instructions as a workload */
+		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+
+		/* If a count is expected, the counter should be increased by 'instrs_count' */
+		cnt = acc->read_cntr(pmc_idx);
+		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+			       i, expect_count, cnt, instrs_count);
+	}
+
+	disable_counter(pmc_idx);
+}
+
+static void test_cycles_count(bool expect_count)
+{
+	uint64_t cnt;
+
+	pmu_enable();
+	reset_cycle_counter();
+
+	/* Count cycles in EL0 and EL1 */
+	write_pmccfiltr(0);
+	enable_cycle_counter();
+
+	cnt = read_cycle_counter();
+
+	/*
+	 * If a count is expected by the test, the cycle counter should be increased by
+	 * at least 1, as there is at least one instruction between enabling the
+	 * counter and reading the counter.
+	 */
+	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+
+	disable_cycle_counter();
+	pmu_disable_reset();
+}
+
+static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
+{
+	switch (event) {
+	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
+		test_instructions_count(pmc_idx, expect_count);
+		break;
+	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
+		test_cycles_count(expect_count);
+		break;
+	}
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
 	}
 }
 
+static void guest_event_filter_test(unsigned long *pmu_filter)
+{
+	uint64_t event;
+
+	/*
+	 * Check if PMCEIDx_EL0 is advertized as configured by the userspace.
+	 * It's possible that even though the userspace allowed it, it may not be supported
+	 * by the hardware and could be advertized as 'disabled'. Hence, only validate against
+	 * the events that are advertized.
+	 *
+	 * Furthermore, check if the event is in fact counting if enabled, or vice-versa.
+	 */
+	for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
+		if (pmu_event_is_supported(event)) {
+			GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
+			test_event_count(event, 0, true);
+		} else {
+			test_event_count(event, 0, false);
+		}
+	}
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
 	case TEST_STAGE_COUNTER_ACCESS:
 		guest_counter_access_test(guest_data.expected_pmcr_n);
 		break;
+	case TEST_STAGE_KVM_EVENT_FILTER:
+		guest_event_filter_test(guest_data.pmu_filter);
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
 		run_counter_access_error_test(i);
 }
 
+static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {
+	/*
+	 * Each set of events denotes a filter configuration for that VM.
+	 * During VM creation, the filters will be applied in the sequence mentioned here.
+	 */
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+};
+
+static void run_kvm_event_filter_error_tests(void)
+{
+	int ret;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vcpu_init init;
+	struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+		.addr = (uint64_t) &pmu_event_filter,
+	};
+
+	/* KVM should not allow configuring filters after the PMU is initialized */
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EBUSY,
+		    "Failed to disallow setting an event filter after PMU init");
+	destroy_vpmu_vm(vpmu_vm);
+
+	/* Check for invalid event filter setting */
+	vm = vm_create(1);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+
+	pmu_event_filter.base_event = UINT16_MAX;
+	pmu_event_filter.nevents = 5;
+	ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
+	kvm_vm_free(vm);
+}
+
+static void run_kvm_event_filter_test(void)
+{
+	int i;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vm *vm;
+	vm_vaddr_t pmu_filter_gva;
+	size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
+
+	/* Test for valid filter configurations */
+	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
+		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vm = vpmu_vm->vm;
+
+		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
+		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
+		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
+
+		run_vcpu(vpmu_vm->vcpu);
+
+		destroy_vpmu_vm(vpmu_vm);
+	}
+
+	/* Check if KVM is handling the errors correctly */
+	run_kvm_event_filter_error_tests();
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
+	run_kvm_event_filter_test();
 }
 
 /*
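A worked example of the instruction-count arithmetic behind execute_precise_instrs(), following the formula in the comment above precise_instrs_loop() (illustration only):

	/* For num = 100:
	 *   loop = (100 - 2) / 2 = 49
	 *   instructions counted = isb + nop + 2*loop = 2 + 2*49 = 100
	 * so a counter configured for INST_RETIRED should read exactly 100,
	 * which is what test_instructions_count() asserts.
	 */
	execute_precise_instrs(100, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);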
From patchwork Mon Feb 13 18:02:28 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138789
Date: Mon, 13 Feb 2023 18:02:28 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Mime-Version: 1.0
Message-ID: <20230213180234.2885032-8-rananta@google.com>
Subject: [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

KVM doesn't allow the guests to modify the event filter types, such as
counting events in Non-secure/Secure EL2, EL3, and so on. Validate this
by force-configuring those bits in the PMXEVTYPER_EL0, PMEVTYPERn_EL0,
and PMCCFILTR_EL0 registers. The test goes one step further by creating
an event to count only in EL2 and validating that the counter does not
make progress.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 3dfb770b538e9..5c166df245589 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,6 +15,10 @@
  * of allowing or denying the events. The guest validates it by
  * checking if it's able to count only the events that are allowed.
  *
+ * 3. KVM doesn't allow the guest to count the events attributed with
+ *    higher exception levels (EL2, EL3). Verify this functionality by
+ *    configuring and trying to count the events for EL2 in the guest.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
@@ -23,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -259,6 +264,7 @@ struct vpmu_vm {
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
+	TEST_STAGE_KVM_EVTYPE_FILTER,
 };
 
 struct guest_data {
@@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu_filter)
 	}
 }
 
+static void guest_evtype_filter_test(void)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t typer, cnt;
+	struct arm_smccc_res res;
+
+	pmu_enable();
+
+	/*
+	 * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
+	 * Monitor (EL3), and Multithreading configuration. It applies the mask
+	 * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
+	 * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
+	 * ways to configure the EVTYPER.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Set all filter bits (31-24), read back, and check against the mask */
+		acc->write_typer(0, 0xff000000);
+		typer = acc->read_typer(0);
+
+		GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+				typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+		/*
+		 * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
+		 * to not count NS-EL2 events. Verify this functionality by configuring
+		 * a NS-EL2 event, for which the count shouldn't increment.
+		 */
+		typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
+		typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+		acc->write_typer(0, typer);
+		acc->write_cntr(0, 0);
+		enable_counter(0);
+
+		/* Issue a hypercall to enter EL2 and return */
+		memset(&res, 0, sizeof(res));
+		smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+		cnt = acc->read_cntr(0);
+		GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
+	}
+
+	/* Check the same sequence for the Cycle counter */
+	write_pmccfiltr(0xff000000);
+	typer = read_pmccfiltr();
+	GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+			typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+	typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+	write_pmccfiltr(typer);
+	reset_cycle_counter();
+	enable_cycle_counter();
+
+	/* Issue a hypercall to enter EL2 and return */
+	memset(&res, 0, sizeof(res));
+	smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+	cnt = read_cycle_counter();
+	GUEST_ASSERT_2(cnt == 0, cnt, typer);
+}
+
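/*
 * A host-runnable sanity check of the mask arithmetic asserted above,
 * assuming the kernel-side constants of this era: ARMV8_PMU_EVTYPE_MASK ==
 * 0xc800ffff (the writable bits) and ARMV8_PMU_EVTYPE_EVENT == 0xffff (the
 * event field). Treat the exact values as an assumption here.
 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	const uint64_t evtype_mask  = 0xc800ffff;
	const uint64_t evtype_event = 0x0000ffff;
	uint64_t written  = 0xff000000;			/* all filter bits 31..24 set */
	uint64_t readback = written & evtype_mask;	/* what a masking KVM returns */

	/*
	 * ORing the event field back in reproduces the full writable mask
	 * only if every non-writable filter bit was silently cleared.
	 */
	assert((readback | evtype_event) == evtype_mask);
	return 0;
}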
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -687,6 +757,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVENT_FILTER:
 		guest_event_filter_test(guest_data.pmu_filter);
 		break;
+	case TEST_STAGE_KVM_EVTYPE_FILTER:
+		guest_evtype_filter_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
 	run_kvm_event_filter_error_tests();
 }
 
+static void run_kvm_evtype_filter_test(void)
+{
+	struct vpmu_vm *vpmu_vm;
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
+
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpu);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
+	run_kvm_evtype_filter_test();
 }
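The "enter EL2 and return" trick in guest_evtype_filter_test() above leans on
the fact that any SMCCC hypercall traps to the hypervisor. A minimal sketch of
what such a call boils down to in the guest; the selftest's smccc_hvc() helper
generalizes this to all eight argument registers, so the inline-asm form below
is illustrative rather than the harness's actual code:

#include <stdint.h>

#define ARM_SMCCC_VERSION_FUNC_ID	0x80000000U

/* Returns the SMCCC version, e.g. 0x10001 for SMCCC v1.1 */
static inline uint64_t sample_hvc_version(void)
{
	register uint64_t x0 asm("x0") = ARM_SMCCC_VERSION_FUNC_ID;

	/* "hvc #0" traps to EL2; per the SMCCC convention, x0 holds the result */
	asm volatile("hvc #0"
		     : "+r" (x0)
		     :
		     : "x1", "x2", "x3", "memory");
	return x0;
}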
From patchwork Mon Feb 13 18:02:29 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138791
Date: Mon, 13 Feb 2023 18:02:29 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-9-rananta@google.com>
Subject: [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Implement a stress test for KVM by frequently force-migrating the vCPU
to random pCPUs in the system. This validates KVM's save/restore of the
vPMU state and its starting/stopping of the PMU counters as the vCPU
moves.
Signed-off-by: Raghavendra Rao Ananta --- .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++- 1 file changed, 193 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c index 5c166df245589..0c9d801f4e602 100644 --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c @@ -19,9 +19,15 @@ * higher exception levels (EL2, EL3). Verify this functionality by * configuring and trying to count the events for EL2 in the guest. * + * 4. Since the PMU registers are per-cpu, stress KVM by frequently + * migrating the guest vCPU to random pCPUs in the system, and check + * if the vPMU is still behaving as expected. + * * Copyright (c) 2022 Google LLC. * */ +#define _GNU_SOURCE + #include #include #include @@ -30,6 +36,11 @@ #include #include #include +#include +#include +#include + +#include "delay.h" /* The max number of the PMU event counters (excluding the cycle counter) */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) @@ -37,6 +48,8 @@ /* The max number of event numbers that's supported */ #define ARMV8_PMU_MAX_EVENTS 64 +#define msecs_to_usecs(msec) ((msec) * 1000LL) + /* * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0 * were basically copied from arch/arm64/kernel/perf_event.c. @@ -265,6 +278,7 @@ enum test_stage { TEST_STAGE_COUNTER_ACCESS = 1, TEST_STAGE_KVM_EVENT_FILTER, TEST_STAGE_KVM_EVTYPE_FILTER, + TEST_STAGE_VCPU_MIGRATION, }; struct guest_data { @@ -275,6 +289,19 @@ struct guest_data { static struct guest_data guest_data; +#define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 +#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 + +struct test_args { + int vcpu_migration_test_iter; + int vcpu_migration_test_migrate_freq_ms; +}; + +static struct test_args test_args = { + .vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF, + .vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS, +}; + static void guest_sync_handler(struct ex_regs *regs) { uint64_t esr, ec; @@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event) GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\ } - /* * Extra instructions inserted by the compiler would be difficult to compensate * for, so hand assemble everything between, and including, the PMCR accesses @@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) } } +static void test_basic_pmu_functionality(void) +{ + /* Test events on generic and cycle counters */ + test_instructions_count(0, true); + test_cycles_count(true); +} + /* * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers * are set or cleared as specified in @set_expected. @@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void) GUEST_ASSERT_2(cnt == 0, cnt, typer); } +static void guest_vcpu_migration_test(void) +{ + /* + * While the userspace continuously migrates this vCPU to random pCPUs, + * run basic PMU functionalities and verify the results. 
+ */ + while (test_args.vcpu_migration_test_iter--) + test_basic_pmu_functionality(); +} + static void guest_code(void) { switch (guest_data.test_stage) { @@ -760,6 +803,9 @@ static void guest_code(void) case TEST_STAGE_KVM_EVTYPE_FILTER: guest_evtype_filter_test(); break; + case TEST_STAGE_VCPU_MIGRATION: + guest_vcpu_migration_test(); + break; default: GUEST_ASSERT_1(0, guest_data.test_stage); } @@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) vpmu_vm->vm = vm = vm_create(1); vm_init_descriptor_tables(vm); + /* Catch exceptions for easier debugging */ for (ec = 0; ec < ESR_EC_NUM; ec++) { vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec, @@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu) struct ucall uc; sync_global_to_guest(vcpu->vm, guest_data); + sync_global_to_guest(vcpu->vm, test_args); + vcpu_run(vcpu); switch (get_ucall(vcpu, &uc)) { case UCALL_ABORT: @@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void) destroy_vpmu_vm(vpmu_vm); } +struct vcpu_migrate_data { + struct vpmu_vm *vpmu_vm; + pthread_t *pt_vcpu; + bool vcpu_done; +}; + +static void *run_vcpus_migrate_test_func(void *arg) +{ + struct vcpu_migrate_data *migrate_data = arg; + struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm; + + run_vcpu(vpmu_vm->vcpu); + migrate_data->vcpu_done = true; + + return NULL; +} + +static uint32_t get_pcpu(void) +{ + uint32_t pcpu; + unsigned int nproc_conf; + cpu_set_t online_cpuset; + + nproc_conf = get_nprocs_conf(); + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset); + + /* Randomly find an available pCPU to place the vCPU on */ + do { + pcpu = rand() % nproc_conf; + } while (!CPU_ISSET(pcpu, &online_cpuset)); + + return pcpu; +} + +static int migrate_vcpu(struct vcpu_migrate_data *migrate_data) +{ + int ret; + cpu_set_t cpuset; + uint32_t new_pcpu = get_pcpu(); + + CPU_ZERO(&cpuset); + CPU_SET(new_pcpu, &cpuset); + + pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu); + + ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset); + + /* Allow the error where the vCPU thread is already finished */ + TEST_ASSERT(ret == 0 || ret == ESRCH, + "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret); + + return ret; +} + +static void *vcpus_migrate_func(void *arg) +{ + struct vcpu_migrate_data *migrate_data = arg; + + while (!migrate_data->vcpu_done) { + usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms)); + migrate_vcpu(migrate_data); + } + + return NULL; +} + +static void run_vcpu_migration_test(uint64_t pmcr_n) +{ + int ret; + struct vpmu_vm *vpmu_vm; + pthread_t pt_vcpu, pt_sched; + struct vcpu_migrate_data migrate_data = { + .pt_vcpu = &pt_vcpu, + .vcpu_done = false, + }; + + __TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test"); + + guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION; + guest_data.expected_pmcr_n = pmcr_n; + + migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL); + + /* Initialize random number generation for migrating vCPUs to random pCPUs */ + srand(time(NULL)); + + /* Spawn a vCPU thread */ + ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data); + TEST_ASSERT(!ret, "Failed to create the vCPU thread"); + + /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */ + ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data); + TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs"); + + pthread_join(pt_sched, 
NULL); + pthread_join(pt_vcpu, NULL); + + destroy_vpmu_vm(vpmu_vm); +} + static void run_tests(uint64_t pmcr_n) { run_counter_access_tests(pmcr_n); run_kvm_event_filter_test(); run_kvm_evtype_filter_test(); + run_vcpu_migration_test(pmcr_n); } /* @@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void) return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr); } -int main(void) +static void print_help(char *name) +{ + pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n", + name); + pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n", + VCPU_MIGRATIONS_TEST_ITERS_DEF); + pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n", + VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS); + pr_info("\t-h: print this help screen\n"); +} + +static bool parse_args(int argc, char *argv[]) +{ + int opt; + + while ((opt = getopt(argc, argv, "hi:m:")) != -1) { + switch (opt) { + case 'i': + test_args.vcpu_migration_test_iter = + atoi_positive("Nr vCPU migration iterations", optarg); + break; + case 'm': + test_args.vcpu_migration_test_migrate_freq_ms = + atoi_positive("vCPU migration frequency", optarg); + break; + case 'h': + default: + goto err; + } + } + + return true; + +err: + print_help(argv[0]); + return false; +} + +int main(int argc, char *argv[]) { uint64_t pmcr_n; TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3)); + if (!parse_args(argc, argv)) + exit(KSFT_SKIP); + pmcr_n = get_pmcr_n_limit(); run_tests(pmcr_n); From patchwork Mon Feb 13 18:02:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13138792 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8A6A3C636D4 for ; Mon, 13 Feb 2023 18:05:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=aHDa1ItQyvsoPZ1syexOLXxdMiHoUC2fdOcmQw1mb3g=; b=fhsE3W6TTvw5/lgUm78bh5JVq8 lr4TnBNDTHnXbA0IO99UU4aSioTVWRw4NTF2QG6gjXLu/phL315Y6UH8fS5CiKimaskuiSNkHppM1 F6qYOP2eNKJzQFdqCQyab8a+jT1uNB+WR0KnRdp79D+1Uq/9L/vAbdR8dqNymGX1aZE4SfwKibPQH j9/4fxBWUZIGEAOpC7ohv6BqPwzIN8ziutexsxhL9fUSSF1PO7opUV2xKkBWnWFc/q4SyxGB6kyjL VMGg9fOkHaJGJIR7yKvpHaQfDZsEf6CiYv4WctRwVRbIJWjRVU2E0I+Tgp4DKIxWT7beKH3AhVKrr 9YnC/Buw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pRdBu-00FmWU-Dt; Mon, 13 Feb 2023 18:04:38 +0000 Received: from mail-io1-xd4a.google.com ([2607:f8b0:4864:20::d4a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pRdAA-00Fld7-JZ for linux-arm-kernel@lists.infradead.org; Mon, 13 Feb 2023 18:02:52 +0000 Received: by mail-io1-xd4a.google.com with SMTP id e16-20020a6b5010000000b00719041c51ebso8831275iob.12 for ; Mon, 13 Feb 2023 10:02:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
Date: Mon, 13 Feb 2023 18:02:30 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-10-rananta@google.com>
Subject: [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to also validate the vPMU's functionality
when set up for overflow conditions.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
 1 file changed, 198 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 0c9d801f4e602..066dc17fa3906 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -21,7 +21,9 @@
  *
  * 4. Since the PMU registers are per-cpu, stress KVM by frequently
  * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected.
+ * if the vPMU is still behaving as expected. The sub-tests include + * testing basic functionalities such as basic counters behavior, + * overflow, and overflow interrupts. * * Copyright (c) 2022 Google LLC. * @@ -41,13 +43,27 @@ #include #include "delay.h" +#include "gic.h" +#include "spinlock.h" /* The max number of the PMU event counters (excluding the cycle counter) */ #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) +/* The cycle counter bit position that's common among the PMU registers */ +#define ARMV8_PMU_CYCLE_COUNTER_IDX 31 + /* The max number of event numbers that's supported */ #define ARMV8_PMU_MAX_EVENTS 64 +#define PMU_IRQ 23 + +#define COUNT_TO_OVERFLOW 0xFULL +#define PRE_OVERFLOW_32 (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1) +#define PRE_OVERFLOW_64 (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1) + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + #define msecs_to_usecs(msec) ((msec) * 1000LL) /* @@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val) isb(); } +static inline void write_pmovsclr(unsigned long val) +{ + write_sysreg(val, pmovsclr_el0); + isb(); +} + +static unsigned long read_pmovsclr(void) +{ + return read_sysreg(pmovsclr_el0); +} + static inline void enable_counter(int idx) { uint64_t v = read_sysreg(pmcntenset_el0); @@ -178,11 +205,33 @@ static inline void disable_counter(int idx) isb(); } +static inline void enable_irq(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmintenset_el1); + isb(); +} + +static inline void disable_irq(int idx) +{ + uint64_t v = read_sysreg(pmcntenset_el0); + + write_sysreg(BIT(idx) | v, pmintenclr_el1); + isb(); +} + static inline uint64_t read_cycle_counter(void) { return read_sysreg(pmccntr_el0); } +static inline void write_cycle_counter(uint64_t v) +{ + write_sysreg(v, pmccntr_el0); + isb(); +} + static inline void reset_cycle_counter(void) { uint64_t v = read_sysreg(pmcr_el0); @@ -289,6 +338,15 @@ struct guest_data { static struct guest_data guest_data; +/* Data to communicate among guest threads */ +struct guest_irq_data { + uint32_t pmc_idx_bmap; + uint32_t irq_received_bmap; + struct spinlock lock; +}; + +static struct guest_irq_data guest_irq_data; + #define VCPU_MIGRATIONS_TEST_ITERS_DEF 1000 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2 @@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs) expected_ec = INVALID_EC; } +static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap) +{ + /* + * Fail if there's an interrupt from unexpected PMCs. + * All the expected events' IRQs may not arrive at the same time. + * Hence, check if the interrupt is valid only if it's expected. 
+	 */
+	if (pmovsclr & BIT(pmc_idx)) {
+		GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
+		write_pmovsclr(BIT(pmc_idx));
+	}
+}
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+	uint32_t pmc_idx_bmap;
+	uint64_t i, pmcr_n = get_pmcr_n();
+	uint32_t pmovsclr = read_pmovsclr();
+	unsigned int intid = gic_get_and_ack_irq();
+
+	/* No other IRQ apart from the PMU IRQ is expected */
+	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
+
+	spin_lock(&guest_irq_data.lock);
+	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+
+	for (i = 0; i < pmcr_n; i++)
+		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
+	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
+
+	/* Mark the IRQ as received for the corresponding PMCs */
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
+	spin_unlock(&guest_irq_data.lock);
+
+	gic_set_eoi(intid);
+}
+
+static int pmu_irq_received(int pmc_idx)
+{
+	bool irq_received;
+
+	spin_lock(&guest_irq_data.lock);
+	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	return irq_received;
+}
+
+static void pmu_irq_init(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	enable_irq(pmc_idx);
+}
+
+static void pmu_irq_exit(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	disable_irq(pmc_idx);
+}
+
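/*
 * A quick host-runnable sanity check of the overflow arithmetic these
 * helpers are built around: a 32-bit counter primed to PRE_OVERFLOW_32
 * (== GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1) wraps, and thus latches its
 * PMOVS bit, after exactly COUNT_TO_OVERFLOW (0xf) increments.
 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	const uint32_t count_to_overflow = 0xf;
	const uint32_t pre_overflow = UINT32_MAX - count_to_overflow + 1;
	uint32_t cnt = pre_overflow;
	unsigned int events = 0;

	/* Emulate the PMC ticking once per counted event until it wraps */
	do {
		cnt++;
		events++;
	} while (cnt != 0);

	assert(events == count_to_overflow);
	return 0;
}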
 /*
  * Run the given operation that should trigger an exception with the
  * given exception class. The exception handler (guest_sync_handler)
@@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
 	precise_instrs_loop(loop, pmcr);
 }
 
-static void test_instructions_count(int pmc_idx, bool expect_count)
+static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
 {
 	int i;
 	struct pmc_accessor *acc;
-	uint64_t cnt;
-	int instrs_count = 100;
+	uint64_t cntr_val = 0;
+	int instrs_count = 500;
+
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT_1(expect_count, pmc_idx);
+
+		cntr_val = PRE_OVERFLOW_32;
+		pmu_irq_init(pmc_idx);
+	}
 
 	enable_counter(pmc_idx);
 
@@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		acc = &pmc_accessors[i];
 
-		pmu_disable_reset();
-
+		acc->write_cntr(pmc_idx, cntr_val);
 		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
 
-		/* Enable the PMU and execute a precise number of instructions as a workload */
-		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+		/*
+		 * Enable the PMU and execute a precise number of instructions as a workload.
+		 * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
+		 * should have enough instructions to raise an IRQ.
+		 */
+		execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
 
-		/* If a count is expected, the counter should be increased by 'instrs_count' */
-		cnt = acc->read_cntr(pmc_idx);
-		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
-			       i, expect_count, cnt, instrs_count);
+		/*
+		 * If an overflow is expected, only check for the overflow flag.
+		 * As the overflow interrupt is enabled, the handler would execute additional
+		 * instructions and mess up the precise instruction count. Hence, measure
+		 * the instruction count only when the test is not set up for an overflow.
+		 */
+		if (test_overflow) {
+			GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
+		} else {
+			uint64_t cnt = acc->read_cntr(pmc_idx);
+
+			GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+					pmc_idx, i, cnt, expect_count);
+		}
 	}
 
-	disable_counter(pmc_idx);
+	if (test_overflow)
+		pmu_irq_exit(pmc_idx);
 }
 
-static void test_cycles_count(bool expect_count)
+static void test_cycles_count(bool expect_count, bool test_overflow)
 {
 	uint64_t cnt;
 
-	pmu_enable();
-	reset_cycle_counter();
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT(expect_count);
+
+		write_cycle_counter(PRE_OVERFLOW_64);
+		pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	} else {
+		reset_cycle_counter();
+	}
 
 	/* Count cycles in EL0 and EL1 */
 	write_pmccfiltr(0);
 	enable_cycle_counter();
 
+	/* Enable the PMU and execute a precise number of instructions as a workload */
+	execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
 	cnt = read_cycle_counter();
 
 	/*
 	 * If a count is expected by the test, the cycle counter should be increased by
-	 * at least 1, as there is at least one instruction between enabling the
+	 * at least 1, as there are a number of instructions between enabling the
 	 * counter and reading the counter.
*/ GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count); + if (test_overflow) { + GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count); + pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX); + } disable_cycle_counter(); pmu_disable_reset(); @@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) { switch (event) { case ARMV8_PMUV3_PERFCTR_INST_RETIRED: - test_instructions_count(pmc_idx, expect_count); + test_instructions_count(pmc_idx, expect_count, false); break; case ARMV8_PMUV3_PERFCTR_CPU_CYCLES: - test_cycles_count(expect_count); + test_cycles_count(expect_count, false); break; } } static void test_basic_pmu_functionality(void) { + local_irq_disable(); + gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA); + gic_irq_enable(PMU_IRQ); + local_irq_enable(); + /* Test events on generic and cycle counters */ - test_instructions_count(0, true); - test_cycles_count(true); + test_instructions_count(0, true, false); + test_cycles_count(true, false); + + /* Test overflow with interrupts on generic and cycle counters */ + test_instructions_count(0, true, true); + test_cycles_count(true, true); } /* @@ -813,9 +988,6 @@ static void guest_code(void) GUEST_DONE(); } -#define GICD_BASE_GPA 0x8000000ULL -#define GICR_BASE_GPA 0x80A0000ULL - static unsigned long * set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters) { @@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) struct kvm_vcpu *vcpu; struct kvm_vcpu_init init; uint8_t pmuver, ec; - uint64_t dfr0, irq = 23; + uint64_t dfr0, irq = PMU_IRQ; struct vpmu_vm *vpmu_vm; struct kvm_device_attr irq_attr = { .group = KVM_ARM_VCPU_PMU_V3_CTRL, @@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters) vpmu_vm->vm = vm = vm_create(1); vm_init_descriptor_tables(vm); + vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler); /* Catch exceptions for easier debugging */ for (ec = 0; ec < ESR_EC_NUM; ec++) { From patchwork Mon Feb 13 18:02:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13138793 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B0F0BC6379F for ; Mon, 13 Feb 2023 18:06:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=sslrW0XEYosRArR+85rHDrOETGEIoyPzIHSMDC6PueM=; b=nWbh2SqT1dSwkcpmLAiP/4mWNh C+u/YYaygJycrxgUWk1fcppr0IPfAqHNAQn1NdfqTFS+EcAaQ4LaZD85kxDw8gPxZe5fnPHNNOvcf WyDTHXm8fNqTqjgBpHAy2GeF050HTLatsLh2In4nFLVkiD1oURXVD44MLsbBUeE921aEEWT8tAnRf xJyrLTW3OvqiliihGxs4SkG1ovMVnAyWpM+o9n/EWEdCO0nd+fP+DDvoQ3HccfO+mSNNEqWxefxYI bhRDztPEbyg2DEKi2ghzLhvieLqC/S7v7KftD7WzczZ4gFgAC2Scg8HyQvKm4GwOuXz9qQ2Vjy7i0 FyQtQdyg==; Received: 
from localhost
Date: Mon, 13 Feb 2023 18:02:31 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-11-rananta@google.com>
Subject: [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vPMU's vCPU migration test to validate chained events and
their overflow conditions.
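For background: with the architectural CHAIN event (0x1e), an odd-numbered
counter increments each time its even-numbered neighbor overflows, so the pair
behaves like one 64-bit counter. A small sketch of the resulting arithmetic;
the helper below is hypothetical and not part of the test:

#include <stdint.h>

/* Hypothetical view of a chained pair: even PMC = low half, odd PMC = high half */
static uint64_t chained_pair_value(uint32_t even_cnt, uint32_t odd_cnt)
{
	return ((uint64_t)odd_cnt << 32) | even_cnt;
}

/*
 * Example matching the test below: prime the even counter with PRE_OVERFLOW_32
 * and the odd (CHAIN) counter with 1. One overflow of the even counter bumps
 * the odd counter to 2, i.e. the combined 64-bit value crosses a 2^32
 * boundary exactly once.
 */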
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 066dc17fa3906..de725f4339ad5 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -23,7 +23,7 @@
  * migrating the guest vCPU to random pCPUs in the system, and check
  * if the vPMU is still behaving as expected. The sub-tests include
  * testing basic functionalities such as basic counters behavior,
- * overflow, and overflow interrupts.
+ * overflow, overflow interrupts, and chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -61,6 +61,8 @@
 #define PRE_OVERFLOW_32		(GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
 #define PRE_OVERFLOW_64		(GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
 
+#define ALL_SET_64		GENMASK(63, 0)
+
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
@@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool test_overflow)
 	pmu_disable_reset();
 }
 
+static void test_chained_count(int pmc_idx)
+{
+	int i, chained_pmc_idx;
+	struct pmc_accessor *acc;
+	uint64_t pmcr_n, cnt, cntr_val;
+
+	/* The test needs at least two PMCs */
+	pmcr_n = get_pmcr_n();
+	GUEST_ASSERT_1(pmcr_n >= 2, pmcr_n);
+
+	/*
+	 * The counter at pmc_idx is always chained with the one at (pmc_idx + 1).
+	 * pmc_idx should be even, as the chained event doesn't count on
+	 * odd numbered counters.
+	 */
+	GUEST_ASSERT_1(pmc_idx % 2 == 0, pmc_idx);
+
+	/*
+	 * The max counter idx that the chained counter can occupy is
+	 * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2).
+	 */
+	chained_pmc_idx = pmc_idx + 1;
+	GUEST_ASSERT(chained_pmc_idx < pmcr_n);
+
+	enable_counter(chained_pmc_idx);
+	pmu_irq_init(chained_pmc_idx);
+
+	/* Configure the chained event using all the possible ways */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Test if the chained counter increments when the base event overflows */
+
+		cntr_val = 1;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+		acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN);
+
+		/* Chain the counter with pmc_idx that's configured for an overflow */
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * pmc_idx is also configured to run for all the ARRAY_SIZE(pmc_accessors)
+		 * combinations. Hence, the chained counter (chained_pmc_idx) is expected
+		 * to read cntr_val + ARRAY_SIZE(pmc_accessors).
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == cntr_val + ARRAY_SIZE(pmc_accessors),
+				pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors));
+
+		/* Test for the overflow of the chained counter itself */
+
+		cntr_val = ALL_SET_64;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * At this point, an interrupt should've been fired for the chained
+		 * counter (which validates the overflow bit), and the counter should've
+		 * wrapped around to ARRAY_SIZE(pmc_accessors) - 1.
+ */ + cnt = acc->read_cntr(chained_pmc_idx); + GUEST_ASSERT_4(cnt == ARRAY_SIZE(pmc_accessors) - 1, + pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors)); + } + + pmu_irq_exit(chained_pmc_idx); +} + static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) { switch (event) { @@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void) /* Test overflow with interrupts on generic and cycle counters */ test_instructions_count(0, true, true); test_cycles_count(true, true); + + /* Test chained events */ + test_chained_count(0); } /* From patchwork Mon Feb 13 18:02:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13138794 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A776AC636D4 for ; Mon, 13 Feb 2023 18:06:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=jP/6n6JcjPhbWu1iAgkhQusscFGIM/D+qDCs50i9qaQ=; b=fL22tpNxnxdjMq3JkZI6XjjkYZ yjhZlosbWtHwmB7JMXJgBXrCUPFj2EXGcvWcTj3PsX/+muuRAhUO/2bKqf7rD2Y/hl8502UwUC4Q6 nROtkWkf8pZwWyn4DmRmfXRJ7BqYOxTZ8OHZPmb/VHkSOwnydbH7/G3W2PQLtsXyHOAyQhpwVEco3 umH72ynodCm7s+ebdj30n3/tVDQBpUY1g+aqqJ86sqlSXhA2Hb/+zKnkSm2Bsh3yfj/6CLU9RGCFc yLU/xSvwBx/FahbSJ6S8xdvCUUEcrRivciIjIoMkyTJjEign1QAJiuLS8s9fFF//kZuIP8GVs6AOQ oXTIviqA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pRdCj-00Fmzi-P1; Mon, 13 Feb 2023 18:05:30 +0000 Received: from mail-il1-x149.google.com ([2607:f8b0:4864:20::149]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pRdAD-00FleW-7x for linux-arm-kernel@lists.infradead.org; Mon, 13 Feb 2023 18:02:54 +0000 Received: by mail-il1-x149.google.com with SMTP id s12-20020a056e021a0c00b0030efd0ed890so9727235ild.7 for ; Mon, 13 Feb 2023 10:02:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=E305a934JyKR6d0LbQnwDb1KLXy1uRhITDhO1feUNVQ=; b=klshuBOzJGsDndpswreQ0XlinbMaJK4kRqAQ+37wZY8lmbOhHgx/ZdZOFpaN1lJ6Yy GKSvylJPXQGCRQtm8PG+CoJXz3i/KEj97JV/Ab045b8U+Wj43GYkWufOUVvppWmGFm0v WaQbsZTE/P4Putg06m5v8Od47rRyUpcHCvrxvuzKG+FvCNge/NQA8/pfj9YEQUHWkd1n t1T8Jk93C0yORnKG07Q1162qgwKRkb9xUo9DJnwrkywAG4Rmh/eaGJg6ea6LmI7s1gpt sHUJKKq/+DnRWgfVNv9ltO9gOAFPNz18MlmUrWzqIN8X8XsBaKgJsikEtqscWqdJj/h9 2Zgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=E305a934JyKR6d0LbQnwDb1KLXy1uRhITDhO1feUNVQ=; b=ujj0waT3Shxa8qRK+9d4AtgPJVR/CoVqcI+aK7W1a6z6aVLcb7tdzFsAQp80jSG6hA 
Date: Mon, 13 Feb 2023 18:02:32 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-12-rananta@google.com>
Subject: [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Extend the vCPU migration test to occupy all the vPMU counters by
configuring chained events on alternate counter-ids, chaining each with
its predecessor counter, and verifying the extended behavior.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index de725f4339ad5..fd00acb9391c8 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx)
 	pmu_irq_exit(chained_pmc_idx);
 }
 
+static void test_chain_all_counters(void)
+{
+	int i;
+	uint64_t cnt, pmcr_n = get_pmcr_n();
+	struct pmc_accessor *acc = &pmc_accessors[0];
+
+	/*
+	 * Test the occupancy of all the event counters, by chaining the
+	 * alternate counters. The test assumes that the host hasn't
+	 * occupied any counters. Hence, if the test fails, it could be
+	 * because all the counters weren't available to the guest or
+	 * there's actually a bug in KVM.
+	 */
+
+	/*
+	 * Configure even numbered counters to count cpu-cycles, and chain
+	 * each of them with its odd numbered counter.
+ */ + for (i = 0; i < pmcr_n; i++) { + if (i % 2) { + acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN); + acc->write_cntr(i, 1); + } else { + pmu_irq_init(i); + acc->write_cntr(i, PRE_OVERFLOW_32); + acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES); + } + enable_counter(i); + } + + /* Introduce some cycles */ + execute_precise_instrs(500, ARMV8_PMU_PMCR_E); + + /* + * An overflow interrupt should've arrived for all the even numbered + * counters but none for the odd numbered ones. The odd numbered ones + * should've incremented exactly by 1. + */ + for (i = 0; i < pmcr_n; i++) { + if (i % 2) { + GUEST_ASSERT_1(!pmu_irq_received(i), i); + + cnt = acc->read_cntr(i); + GUEST_ASSERT_2(cnt == 2, i, cnt); + } else { + GUEST_ASSERT_1(pmu_irq_received(i), i); + } + } + + /* Cleanup the states */ + for (i = 0; i < pmcr_n; i++) { + if (i % 2 == 0) + pmu_irq_exit(i); + disable_counter(i); + } +} + static void test_event_count(uint64_t event, int pmc_idx, bool expect_count) { switch (event) { @@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void) /* Test chained events */ test_chained_count(0); + + /* Test running chained events on all the implemented counters */ + test_chain_all_counters(); } /* From patchwork Mon Feb 13 18:02:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Raghavendra Rao Ananta X-Patchwork-Id: 13138795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 39079C636D4 for ; Mon, 13 Feb 2023 18:07:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=gMJewwI+YLWayjcQsCXE43KdZTNXH/+aP0WaZKapg9A=; b=eMJyMaIqz4E56nTEJrEy1/N/uK VAhDU5t2ZAwtfIyoIB5HNkJMOkgjnsI+MR1LVbW0870XExAw9Aob1KiNqcbjd04Oi/S0K9Mkf4xVm rc+gcACwNZqg8/G4+tADaY/i4vcVwKfoDgNqUNXMluq6jw0TO0ZH7hWB+hbm9vF7gakR7oz9a5pT9 f0+C33/ScYdVT1O3vxaZVDrs5XhnJnVm6EsyySkNU/68/AmQ2IIDJ3Mvlw/d7wUIHmdz4GQshv5uG jf95z+bIGbw57G8MbczyHIoEQYEAsd8l2SBWmKEdk4WI9WMiDeYjH6wV6hiNjLBqbmj5/VBoKulPs /J92YRLw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1pRdDL-00FnOU-CU; Mon, 13 Feb 2023 18:06:08 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1pRdAF-00FlfE-0S for linux-arm-kernel@lists.infradead.org; Mon, 13 Feb 2023 18:02:56 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-52ec987d600so106897307b3.21 for ; Mon, 13 Feb 2023 10:02:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=0LRKHv8C7GNrFb1gl+E1gfO7HmO+3D0ztdjnWC4dVGc=; b=Dl8jDBFxwwxPG5dvCnLeljlHX+mEMM3Lt7CRmMJSCztwob6fwXljZpLM3RUkog1USt 
Date: Mon, 13 Feb 2023 18:02:33 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-13-rananta@google.com>
Subject: [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kvm@vger.kernel.org

The PMU test's create_vpmu_vm() currently creates a VM with only one
vCPU. Extend it to accept the number of vCPUs as an argument, so that a
multi-vCPU VM can be created. This will help the upcoming patches test
the vPMU context across multiple vCPUs. No functional change intended.
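With the reworked signature, callers index into the new vcpus array rather
than the old single-vCPU field. A hypothetical two-vCPU caller, using only
helpers this series already defines (the sequential loop is illustrative):

	/* Sketch: create a two-vCPU VM with no event filter and run each vCPU */
	struct vpmu_vm *vpmu_vm = create_vpmu_vm(2, guest_code, NULL);
	int i;

	for (i = 0; i < vpmu_vm->nr_vcpus; i++)
		run_vcpu(vpmu_vm->vcpus[i]);

	destroy_vpmu_vm(vpmu_vm);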
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 82 +++++++++++--------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index fd00acb9391c8..239fc7e06b3b9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -320,7 +320,8 @@ uint64_t op_end_addr;
 
 struct vpmu_vm {
 	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
+	int nr_vcpus;
+	struct kvm_vcpu **vcpus;
 	int gic_fd;
 	unsigned long *pmu_filter;
 };
@@ -1164,10 +1165,11 @@ set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_
 	return pmu_filter;
 }
 
-/* Create a VM that has one vCPU with PMUv3 configured. */
+/* Create a VM with PMUv3 configured. */
 static struct vpmu_vm *
-create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
+create_vpmu_vm(int nr_vcpus, void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
+	int i;
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
@@ -1187,7 +1189,11 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
 	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
 
-	vpmu_vm->vm = vm = vm_create(1);
+	vpmu_vm->vcpus = calloc(nr_vcpus, sizeof(struct kvm_vcpu *));
+	TEST_ASSERT(vpmu_vm->vcpus, "Failed to allocate kvm_vcpus");
+	vpmu_vm->nr_vcpus = nr_vcpus;
+
+	vpmu_vm->vm = vm = vm_create(nr_vcpus);
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
@@ -1197,26 +1203,35 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 			guest_sync_handler);
 	}
 
-	/* Create vCPU with PMUv3 */
+	/* Create vCPUs with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
-	vcpu_init_descriptor_tables(vcpu);
-	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
-
-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+	for (i = 0; i < nr_vcpus; i++) {
+		vpmu_vm->vcpus[i] = vcpu = aarch64_vcpu_add(vm, i, &init, guest_code);
+		vcpu_init_descriptor_tables(vcpu);
+	}
 
-	/* Initialize vPMU */
-	if (pmu_event_filters)
-		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+	/* vGIC setup is expected after the vCPUs are created but before the vPMU is initialized */
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	for (i = 0; i < nr_vcpus; i++) {
+		vcpu = vpmu_vm->vcpus[i];
+
+		/* Make sure that PMUv3 support is indicated in the ID register */
+		vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+		pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+		TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+			    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+			    "Unexpected PMUVER (0x%x) on vCPU %d with PMUv3", pmuver, i);
+
+		/* Initialize vPMU */
+		if (pmu_event_filters)
+			vpmu_vm->pmu_filter =
+				set_event_filters(vcpu, pmu_event_filters);
+
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	}
 
 	return vpmu_vm;
 }
@@ -1227,6 +1242,7 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 	bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm->vcpus);
 	free(vpmu_vm);
 }
 
@@ -1264,8 +1280,8 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -1309,8 +1325,8 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -1396,8 +1412,8 @@ static void run_kvm_event_filter_error_tests(void)
 	};
 
 	/* KVM should not allow configuring filters after the PMU is initialized */
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpus[0], KVM_SET_DEVICE_ATTR, &filter_attr);
 	TEST_ASSERT(ret == -1 && errno == EBUSY,
 		    "Failed to disallow setting an event filter after PMU init");
 	destroy_vpmu_vm(vpmu_vm);
@@ -1427,14 +1443,14 @@ static void run_kvm_event_filter_test(void)
 
 	/* Test for valid filter configurations */
 	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
-		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vpmu_vm = create_vpmu_vm(1, guest_code, pmu_event_filters[i]);
 		vm = vpmu_vm->vm;
 
 		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
 		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
 		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
 
-		run_vcpu(vpmu_vm->vcpu);
+		run_vcpu(vpmu_vm->vcpus[0]);
 
 		destroy_vpmu_vm(vpmu_vm);
 	}
@@ -1449,8 +1465,8 @@ static void run_kvm_evtype_filter_test(void)
 
 	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	run_vcpu(vpmu_vm->vcpu);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpus[0]);
 
 	destroy_vpmu_vm(vpmu_vm);
 }
@@ -1465,7 +1481,7 @@ static void *run_vcpus_migrate_test_func(void *arg)
 	struct vcpu_migrate_data *migrate_data = arg;
 	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
 
-	run_vcpu(vpmu_vm->vcpu);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	migrate_data->vcpu_done = true;
 
 	return NULL;
@@ -1535,7 +1551,7 @@ static void run_vcpu_migration_test(uint64_t pmcr_n)
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
@@ -1571,8 +1587,8 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu_get_reg(vpmu_vm->vcpus[0], KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
 
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
From patchwork Mon Feb 13 18:02:34 2023
From: Raghavendra Rao Ananta
Date: Mon, 13 Feb 2023 18:02:34 +0000
Subject: [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
Message-ID: <20230213180234.2885032-14-rananta@google.com>
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kvm@vger.kernel.org

To test KVM's handling of multiple vCPU contexts that are frequently
migrated across random pCPUs in the system, extend the test to create
a multi-vCPU VM and validate the behavior.
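The forced migration underneath this test is plain pthread affinity manipulation. A self-contained sketch of the mechanism (pin_thread_to_pcpu is a hypothetical helper used only for illustration; migrate_vcpu() in the diff below does the equivalent for each vCPU thread):

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>

	/*
	 * Restrict a thread to a single pCPU. Calling this repeatedly with
	 * randomly chosen pCPUs forces the scheduler to migrate the thread,
	 * which is exactly the stress this patch applies to the vCPU threads.
	 */
	static int pin_thread_to_pcpu(pthread_t thread, int pcpu)
	{
		cpu_set_t cpuset;

		CPU_ZERO(&cpuset);
		CPU_SET(pcpu, &cpuset);
		return pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
	}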
Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------
 1 file changed, 114 insertions(+), 52 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 239fc7e06b3b9..c9d8e5f9a22ab 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,11 +19,12 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
- * 4. Since the PMU registers are per-cpu, stress KVM by frequently
- * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected. The sub-tests include
- * testing basic functionalities such as basic counters behavior,
- * overflow, overflow interrupts, and chained events.
+ * 4. Since the PMU registers are per-cpu, stress KVM by creating a
+ * multi-vCPU VM, then frequently migrate the guest vCPUs to random
+ * pCPUs in the system, and check if the vPMU is still behaving as
+ * expected. The sub-tests include testing basic functionalities such
+ * as basic counters behavior, overflow, overflow interrupts, and
+ * chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -348,19 +349,22 @@ struct guest_irq_data {
 	struct spinlock lock;
 };
 
-static struct guest_irq_data guest_irq_data;
+static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS];
 
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+#define VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF	2
 
 struct test_args {
 	int vcpu_migration_test_iter;
 	int vcpu_migration_test_migrate_freq_ms;
+	int vcpu_migration_test_nr_vcpus;
 };
 
 static struct test_args test_args = {
 	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
 	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+	.vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF,
 };
 
 static void guest_sync_handler(struct ex_regs *regs)
@@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_
 	}
 }
 
+static struct guest_irq_data *get_irq_data(void)
+{
+	uint32_t cpu = guest_get_vcpuid();
+
+	return &guest_irq_data[cpu];
+}
+
 static void guest_irq_handler(struct ex_regs *regs)
 {
 	uint32_t pmc_idx_bmap;
 	uint64_t i, pmcr_n = get_pmcr_n();
 	uint32_t pmovsclr = read_pmovsclr();
 	unsigned int intid = gic_get_and_ack_irq();
+	struct guest_irq_data *irq_data = get_irq_data();
 
 	/* No other IRQ apart from the PMU IRQ is expected */
 	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
 
-	spin_lock(&guest_irq_data.lock);
-	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+	spin_lock(&irq_data->lock);
+	pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap);
 
 	for (i = 0; i < pmcr_n; i++)
 		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
 	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
 
 	/* Mark IRQ as recived for the corresponding PMCs */
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
-	spin_unlock(&guest_irq_data.lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr);
+	spin_unlock(&irq_data->lock);
 
 	gic_set_eoi(intid);
 }
@@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs)
 static int pmu_irq_received(int pmc_idx)
 {
 	bool irq_received;
+	struct guest_irq_data *irq_data = get_irq_data();
 
-	spin_lock(&guest_irq_data.lock);
-	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	return irq_received;
 }
 
 static void pmu_irq_init(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	enable_irq(pmc_idx);
 }
 
 static void pmu_irq_exit(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	disable_irq(pmc_idx);
 }
@@ -783,7 +800,8 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 static void test_basic_pmu_functionality(void)
 {
 	local_irq_disable();
-	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus,
+		 (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
 	gic_irq_enable(PMU_IRQ);
 	local_irq_enable();
 
@@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void)
 
 static void guest_vcpu_migration_test(void)
 {
+	int iter = test_args.vcpu_migration_test_iter;
+
 	/*
 	 * While the userspace continuously migrates this vCPU to random pCPUs,
 	 * run basic PMU functionalities and verify the results.
 	 */
-	while (test_args.vcpu_migration_test_iter--)
+	while (iter--)
 		test_basic_pmu_functionality();
 }
 
@@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void)
 
 struct vcpu_migrate_data {
 	struct vpmu_vm *vpmu_vm;
-	pthread_t *pt_vcpu;
-	bool vcpu_done;
+	pthread_t *pt_vcpus;
+	unsigned long *vcpu_done_map;
+	pthread_mutex_t vcpu_done_map_lock;
 };
 
+struct vcpu_migrate_data migrate_data;
+
 static void *run_vcpus_migrate_test_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
-	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	unsigned int vcpu_idx = (unsigned long)arg;
 
-	run_vcpu(vpmu_vm->vcpus[0]);
-	migrate_data->vcpu_done = true;
+	run_vcpu(vpmu_vm->vcpus[vcpu_idx]);
+
+	pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+	__set_bit(vcpu_idx, migrate_data.vcpu_done_map);
+	pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
 
 	return NULL;
 }
@@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void)
 	return pcpu;
 }
 
-static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+static int migrate_vcpu(int vcpu_idx)
 {
 	int ret;
 	cpu_set_t cpuset;
@@ -1513,9 +1539,9 @@
 	CPU_ZERO(&cpuset);
 	CPU_SET(new_pcpu, &cpuset);
 
-	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+	pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu);
 
-	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+	ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset);
 
 	/* Allow the error where the vCPU thread is already finished */
 	TEST_ASSERT(ret == 0 || ret == ESRCH,
@@ -1526,48 +1552,74 @@
 
 static void *vcpus_migrate_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus;
+	bool vcpu_done;
 
-	while (!migrate_data->vcpu_done) {
+	do {
 		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
-		migrate_vcpu(migrate_data);
-	}
+		for (n_done = 0, i = 0; i < nr_vcpus; i++) {
+			pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+			vcpu_done = test_bit(i, migrate_data.vcpu_done_map);
+			pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
+
+			if (vcpu_done) {
+				n_done++;
+				continue;
+			}
+
+			migrate_vcpu(i);
+		}
+
+	} while (nr_vcpus != n_done);
 
 	return NULL;
 }
 
 static void run_vcpu_migration_test(uint64_t pmcr_n)
 {
-	int ret;
+	int i, nr_vcpus, ret;
 	struct vpmu_vm *vpmu_vm;
-	pthread_t pt_vcpu, pt_sched;
-	struct vcpu_migrate_data migrate_data = {
-		.pt_vcpu = &pt_vcpu,
-		.vcpu_done = false,
-	};
+	pthread_t pt_sched, *pt_vcpus;
 
 	__TEST_REQUIRE(get_nprocs() >= 2,
 		       "At least two pCPUs needed for vCPU migration test");
 
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	nr_vcpus = test_args.vcpu_migration_test_nr_vcpus;
+
+	migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus);
+	TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap");
+	pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL);
+
+	migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus));
+	TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers");
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
 
-	/* Spawn a vCPU thread */
-	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
-	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+	/* Spawn vCPU threads */
+	for (i = 0; i < nr_vcpus; i++) {
+		ret = pthread_create(&pt_vcpus[i], NULL,
+				     run_vcpus_migrate_test_func, (void *)(unsigned long)i);
+		TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i);
+	}
 
 	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
-	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL);
 	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
 
 	pthread_join(pt_sched, NULL);
-	pthread_join(pt_vcpu, NULL);
+
+	for (i = 0; i < nr_vcpus; i++)
+		pthread_join(pt_vcpus[i], NULL);
 
 	destroy_vpmu_vm(vpmu_vm);
+	free(pt_vcpus);
+	bitmap_free(migrate_data.vcpu_done_map);
 }
 
 static void run_tests(uint64_t pmcr_n)
@@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void)
 
 static void print_help(char *name)
 {
-	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
-		name);
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]"
+		" [-n vcpu_migration_nr_vcpus]\n", name);
 	pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_ITERS_DEF);
 	pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-n: Number of vCPUs for vCPU migrations test. (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF);
 	pr_info("\t-h: print this help screen\n");
 }
 
@@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[])
 {
 	int opt;
 
-	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) {
 		switch (opt) {
 		case 'i':
 			test_args.vcpu_migration_test_iter =
@@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[])
 			test_args.vcpu_migration_test_migrate_freq_ms =
 				atoi_positive("vCPU migration frequency", optarg);
 			break;
+		case 'n':
+			test_args.vcpu_migration_test_nr_vcpus =
+				atoi_positive("Nr vCPUs for vCPU migrations", optarg);
+			if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) {
+				pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS);
+				goto err;
+			}
+			break;
 		case 'h':
 		default:
 			goto err;