From patchwork Mon Feb 13 18:02:26 2023
X-Patchwork-Submitter: Raghavendra Rao Ananta
X-Patchwork-Id: 13138788
Date: Mon, 13 Feb 2023 18:02:26 +0000
In-Reply-To: <20230213180234.2885032-1-rananta@google.com>
References: <20230213180234.2885032-1-rananta@google.com>
Message-ID: <20230213180234.2885032-6-rananta@google.com>
Subject: [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation
From: Raghavendra Rao Ananta
To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org

Accept a list of KVM PMU event filters as an argument while creating
a VM via create_vpmu_vm(). Upcoming patches will leverage this to test
the event filters' functionality.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include 
 #include 
 #include 
+#include 
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS		64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM	10
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	int gic_fd;
+	unsigned long *pmu_filter;
 };
 
 enum test_stage {
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+	int j;
+	unsigned long *pmu_filter;
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+	};
+
+	/*
+	 * Setting up of the bitmap is similar to what KVM does.
+	 * If the first filter denies an event, default all the others to allow,
+	 * and vice versa.
+	 */
+	pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+	TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+	if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+		bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+	for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+		struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+		if (!pmu_event_filter->nevents)
+			break;
+
+		pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+			 pmu_event_filter->base_event,
+			 pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+		filter_attr.addr = (uint64_t) pmu_event_filter;
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+		if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+			__set_bit(pmu_event_filter->base_event, pmu_filter);
+		else
+			__clear_bit(pmu_event_filter->base_event, pmu_filter);
+	}
+
+	return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
 	/* Initialize vPMU */
+	if (pmu_event_filters)
+		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 
 static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+	if (vpmu_vm->pmu_filter)
+		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
 	free(vpmu_vm);
@@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
+
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
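
[Editor's note] As a minimal sketch of how a later test might use the new
create_vpmu_vm() signature: the helper name run_filter_example() and the
choice of event (SW_INCR, event number 0x0) are illustrative assumptions
and are not defined by this patch; only struct kvm_pmu_event_filter,
KVM_PMU_EVENT_DENY, MAX_EVENT_FILTERS_PER_VM, create_vpmu_vm(),
destroy_vpmu_vm() and guest_code come from the series itself.

/*
 * Illustrative only (not part of this patch): deny one event and let
 * set_event_filters() default all remaining events to allow.
 */
static void run_filter_example(void)
{
	struct vpmu_vm *vpmu_vm;
	struct kvm_pmu_event_filter filters[MAX_EVENT_FILTERS_PER_VM] = {
		{
			/* Deny SW_INCR (event 0x0) for the guest's PMU */
			.base_event	= 0x0,
			.nevents	= 1,
			.action		= KVM_PMU_EVENT_DENY,
		},
		/* An entry with nevents == 0 terminates the list in set_event_filters() */
	};

	vpmu_vm = create_vpmu_vm(guest_code, filters);

	/* ...run the guest and check that the filtered event does not count... */

	destroy_vpmu_vm(vpmu_vm);
}

Because the first filter here denies an event, set_event_filters() fills
the whole bitmap and then clears only that event's bit, mirroring how KVM
defaults the remaining events when the first filter is a DENY.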