From patchwork Mon May 23 21:41:07 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12859502
Date: Mon, 23 May 2022 21:41:07 +0000
In-Reply-To: <20220523214110.1282480-1-aaronlewis@google.com>
Message-Id: <20220523214110.1282480-2-aaronlewis@google.com>
References: <20220523214110.1282480-1-aaronlewis@google.com>
Subject: [PATCH 1/4] kvm: x86/pmu: Introduce masked events to the pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

When building an event list for the pmu event filter, fitting all the events in
the limited space can be a challenge. It becomes particularly difficult when
trying to include the various unit mask combinations for a particular event
the guest is allowed or not allowed to program. Instead of increasing the
size of the list to allow for these, add a new encoding in the pmu event
filter's events field. These encoded events can then be tested against the
event the guest is attempting to program to determine if the guest should
have access to it.

The encoded values are: mask, match, and invert. When filtering events the
mask is applied to the guest's unit mask to see if it matches the match
value (i.e. unit_mask & mask == match). If it does and the pmu event filter
is an allow list, the event is allowed; if it is a deny list, the event is
denied. Additionally, the result is reversed if the invert flag is set in
the encoded event.

This feature is enabled by setting the flags field to
KVM_PMU_EVENT_FLAG_MASKED_EVENTS.

Events can be encoded by using KVM_PMU_EVENT_ENCODE_MASKED_EVENT().

It is an error to have a bit set outside the valid encoded bits, and calls
to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such cases, including
for bits that are set in the high nybble[1] for AMD if called on Intel.

[1] bits 35:32 in the encoded event, which correspond to bits 11:8 in the
    eventsel.

Signed-off-by: Aaron Lewis
Change-Id: I64a0d54f0215eb09f3bb9ecae5c2a6dbcec32f93
Reported-by: kernel test robot
---
 Documentation/virt/kvm/api.rst  |  46 ++++++++++--
 arch/x86/include/uapi/asm/kvm.h |   8 ++
 arch/x86/kvm/pmu.c              | 128 +++++++++++++++++++++++++++++---
 arch/x86/kvm/pmu.h              |   1 +
 arch/x86/kvm/svm/pmu.c          |  12 +++
 arch/x86/kvm/vmx/pmu_intel.c    |  12 +++
 6 files changed, 189 insertions(+), 18 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 4a900cdbc62e..671c0bb06eb5 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4951,7 +4951,13 @@ using this ioctl.
 :Architectures: x86
 :Type: vm ioctl
 :Parameters: struct kvm_pmu_event_filter (in)
-:Returns: 0 on success, -1 on error
+:Returns: 0 on success,
+    -EFAULT args[0] cannot be accessed.
+    -EINVAL args[0] contains invalid data in the filter or events field.
+            Note: event validation is only done for modes where
+            the flags field is non-zero.
+    -E2BIG  nevents is too large.
+    -ENOMEM not enough memory to allocate the filter.
 
 ::
 
@@ -4964,14 +4970,42 @@ using this ioctl.
 	__u64 events[0];
 };
 
-This ioctl restricts the set of PMU events that the guest can program.
-The argument holds a list of events which will be allowed or denied.
-The eventsel+umask of each event the guest attempts to program is compared
-against the events field to determine whether the guest should have access.
+This ioctl restricts the set of PMU events the guest can program. The
+argument holds a list of events which will be allowed or denied.
+
 The events field only controls general purpose counters; fixed purpose
 counters are controlled by the fixed_counter_bitmap.
 
-No flags are defined yet, the field must be zero.
+Valid values for 'flags'::
+
+``0``
+
+This is the default behavior for the pmu event filter, and is used when the
+flags field is clear. In this mode the eventsel+umask for the event the
+guest is attempting to program is compared against each event in the events
+field to determine whether the guest should have access to it.
+
+``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
+
+In this mode each event in the events field will be encoded with mask, match,
+and invert values in addition to an eventsel. These encoded events will be
+matched against the event the guest is attempting to program to determine
+whether the guest should have access to it. When matching an encoded event
+with a guest event these steps are followed:
+ 1. Match the encoded eventsel to the guest eventsel.
+ 2. If that matches, match the mask and match values from the encoded event
+    to the guest's unit mask (i.e. unit_mask & mask == match).
+ 3. If that matches, the guest is allowed to program the event if it is an
+    allow list, or is not allowed to program the event if it is a deny list.
+ 4. If the invert value is set in the encoded event, reverse the meaning of
+    #3 (i.e. deny if it is an allow list, allow if it is a deny list).
+
+To encode an event in the pmu_event_filter use
+KVM_PMU_EVENT_ENCODE_MASKED_EVENT().
+
+If a bit is set in an encoded event that is not part of the bits used for
+eventsel, mask, match, or invert, a call to KVM_SET_PMU_EVENT_FILTER will
+return -EINVAL.
 
 Valid values for 'action'::
 
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index bf6e96011dfe..850af8ee724f 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -521,6 +521,14 @@ struct kvm_pmu_event_filter {
 #define KVM_PMU_EVENT_ALLOW 0
 #define KVM_PMU_EVENT_DENY 1
 
+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS (1u << 0)
+
+#define KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert) \
+	(((select) & 0xfful) | (((select) & 0xf00ul) << 24) | \
+	 (((mask) & 0xfful) << 24) | \
+	 (((match) & 0xfful) << 8) | \
+	 (((invert) & 0x1ul) << 23))
+
 /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
 #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
 #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0604bc29f0b8..c2a9d7841922 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -171,14 +171,99 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }
 
-static int cmp_u64(const void *pa, const void *pb)
+static inline u64 get_event(u64 eventsel)
 {
-	u64 a = *(u64 *)pa;
-	u64 b = *(u64 *)pb;
+	return eventsel & AMD64_EVENTSEL_EVENT;
+}
 
+static inline u8 get_unit_mask(u64 eventsel)
+{
+	return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+}
+
+static inline u8 get_counter_mask(u64 eventsel)
+{
+	return (eventsel & ARCH_PERFMON_EVENTSEL_CMASK) >> 24;
+}
+
+static inline bool get_invert_comparison(u64 eventsel)
+{
+	return !!(eventsel & ARCH_PERFMON_EVENTSEL_INV);
+}
+
+static inline int cmp_safe64(u64 a, u64 b)
+{
 	return (a > b) - (a < b);
 }
 
+static int cmp_eventsel_event(const void *pa, const void *pb)
+{
+	return cmp_safe64(*(u64 *)pa & AMD64_EVENTSEL_EVENT,
+			  *(u64 *)pb & AMD64_EVENTSEL_EVENT);
+}
+
+static int cmp_u64(const void *pa, const void *pb)
+{
+	return cmp_safe64(*(u64 *)pa,
+			  *(u64 *)pb);
+}
+
+static bool is_match(u64 masked_event, u64 eventsel)
+{
+	u8 mask = get_counter_mask(masked_event);
+	u8 match = get_unit_mask(masked_event);
+	u8 unit_mask = get_unit_mask(eventsel);
+
+	return (unit_mask & mask) == match;
+}
+
+static bool is_event_allowed(u64 masked_event, u32 action)
+{
+	if (get_invert_comparison(masked_event))
+		return action != KVM_PMU_EVENT_ALLOW;
+
+	return action == KVM_PMU_EVENT_ALLOW;
+}
+
+static bool filter_masked_event(struct kvm_pmu_event_filter *filter,
+				u64 eventsel)
+{
+	u64 key = get_event(eventsel);
+	u64 *event, *evt;
+
+	event = bsearch(&key, filter->events, filter->nevents, sizeof(u64),
+			cmp_eventsel_event);
+
+	if (event) {
+		/* Walk the masked events backward looking for a match. */
+		for (evt = event; evt >= filter->events &&
+		     get_event(*evt) == get_event(eventsel); evt--)
+			if (is_match(*evt, eventsel))
+				return is_event_allowed(*evt, filter->action);
+
+		/* Walk the masked events forward looking for a match. */
+		for (evt = event + 1;
+		     evt < (filter->events + filter->nevents) &&
+		     get_event(*evt) == get_event(eventsel); evt++)
+			if (is_match(*evt, eventsel))
+				return is_event_allowed(*evt, filter->action);
+	}
+
+	return filter->action == KVM_PMU_EVENT_DENY;
+}
+
+static bool filter_default_event(struct kvm_pmu_event_filter *filter,
+				 u64 eventsel)
+{
+	u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
+
+	if (bsearch(&key, filter->events, filter->nevents,
+		    sizeof(u64), cmp_u64))
+		return filter->action == KVM_PMU_EVENT_ALLOW;
+
+	return filter->action == KVM_PMU_EVENT_DENY;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	u64 config;
@@ -200,14 +285,11 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (filter) {
-		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
-
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
+		allow_event = (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) ?
+			filter_masked_event(filter, eventsel) :
+			filter_default_event(filter, eventsel);
 	}
+
 	if (!allow_event)
 		return;
 
@@ -548,8 +630,22 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
 
+static bool has_invalid_event(struct kvm_pmu_event_filter *filter)
+{
+	u64 event_mask;
+	int i;
+
+	event_mask = kvm_x86_ops.pmu_ops->get_event_mask(filter->flags);
+	for (i = 0; i < filter->nevents; i++)
+		if (filter->events[i] & ~event_mask)
+			return true;
+
+	return false;
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
+	int (*cmp)(const void *a, const void *b) = cmp_u64;
 	struct kvm_pmu_event_filter tmp, *filter;
 	size_t size;
 	int r;
@@ -561,7 +657,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	    tmp.action != KVM_PMU_EVENT_DENY)
 		return -EINVAL;
 
-	if (tmp.flags != 0)
+	if (tmp.flags & ~KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
 		return -EINVAL;
 
 	if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS)
@@ -579,10 +675,18 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;
 
+	r = -EINVAL;
+	/* To maintain backwards compatibility don't validate flags == 0. */
+	if (filter->flags != 0 && has_invalid_event(filter))
+		goto cleanup;
+
+	if (filter->flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+		cmp = cmp_eventsel_event;
+
 	/*
 	 * Sort the in-kernel list so that we can search it with bsearch.
 	 */
-	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+	sort(&filter->events, filter->nevents, sizeof(u64), cmp, NULL);
 
 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 22992b049d38..7a0c2ee9f121 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -37,6 +37,7 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+	u64 (*get_event_mask)(u32 flag);
 };
 
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 16a5ebb420cf..0cc66aa2d99a 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -342,6 +342,17 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	}
 }
 
+static u64 amd_pmu_get_event_mask(u32 flag)
+{
+	if (flag == KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+		return AMD64_EVENTSEL_EVENT |
+		       ARCH_PERFMON_EVENTSEL_UMASK |
+		       ARCH_PERFMON_EVENTSEL_INV |
+		       ARCH_PERFMON_EVENTSEL_CMASK;
+	return AMD64_EVENTSEL_EVENT |
+	       ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
 struct kvm_pmu_ops amd_pmu_ops = {
 	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
 	.pmc_is_enabled = amd_pmc_is_enabled,
@@ -355,4 +366,5 @@ struct kvm_pmu_ops amd_pmu_ops = {
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
 	.reset = amd_pmu_reset,
+	.get_event_mask = amd_pmu_get_event_mask,
 };
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b82b6709d7a8..6efddb1a8d9d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -719,6 +719,17 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 	intel_pmu_release_guest_lbr_event(vcpu);
}
 
+static u64 intel_pmu_get_event_mask(u32 flag)
+{
+	if (flag == KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+		return ARCH_PERFMON_EVENTSEL_EVENT |
+		       ARCH_PERFMON_EVENTSEL_UMASK |
+		       ARCH_PERFMON_EVENTSEL_INV |
+		       ARCH_PERFMON_EVENTSEL_CMASK;
+	return ARCH_PERFMON_EVENTSEL_EVENT |
+	       ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
 struct kvm_pmu_ops intel_pmu_ops = {
 	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
 	.pmc_is_enabled = intel_pmc_is_enabled,
@@ -734,4 +745,5 @@ struct kvm_pmu_ops intel_pmu_ops = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.get_event_mask = intel_pmu_get_event_mask,
 };
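To make the encoding concrete, the following minimal userspace sketch (not
part of the patch) mirrors KVM_PMU_EVENT_ENCODE_MASKED_EVENT() from the uapi
change and the unit_mask & mask == match rule implemented by is_match();
the helper names and the event values in main() are purely illustrative.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Mirrors KVM_PMU_EVENT_ENCODE_MASKED_EVENT(): eventsel bits 7:0 stay
	 * in place, eventsel bits 11:8 move to bits 35:32 (the AMD layout),
	 * the mask lands in bits 31:24, the match value in bits 15:8, and the
	 * invert flag in bit 23.
	 */
	static uint64_t encode_masked_event(uint64_t select, uint64_t mask,
					    uint64_t match, uint64_t invert)
	{
		return ((select & 0xffull) | ((select & 0xf00ull) << 24) |
			((mask & 0xffull) << 24) |
			((match & 0xffull) << 8) |
			((invert & 0x1ull) << 23));
	}

	/* The rule from the commit message: unit_mask & mask == match. */
	static bool masked_event_matches(uint64_t masked_event, uint8_t unit_mask)
	{
		uint8_t mask  = (masked_event >> 24) & 0xff;
		uint8_t match = (masked_event >> 8) & 0xff;

		return (unit_mask & mask) == match;
	}

	int main(void)
	{
		/* Hypothetical filter entry: eventsel 0xc0, unit mask exactly 0x01. */
		uint64_t evt = encode_masked_event(0xc0, 0xff, 0x01, 0);

		printf("unit mask 0x01 matches: %d\n", masked_event_matches(evt, 0x01)); /* 1 */
		printf("unit mask 0x02 matches: %d\n", masked_event_matches(evt, 0x02)); /* 0 */
		return 0;
	}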
From patchwork Mon May 23 21:41:08 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12859503
Date: Mon, 23 May 2022 21:41:08 +0000
In-Reply-To: <20220523214110.1282480-1-aaronlewis@google.com>
Message-Id: <20220523214110.1282480-3-aaronlewis@google.com>
References: <20220523214110.1282480-1-aaronlewis@google.com>
Subject: [PATCH 2/4] selftests: kvm/x86: Add flags when creating a pmu event filter
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Now that the flags field can be non-zero, pass it in when creating a pmu
event filter. This is needed in preparation for testing masked events.

No functional change intended.
Signed-off-by: Aaron Lewis
---
 .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 93d77574b255..4bff4c71ac45 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -222,14 +222,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 
 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[],
-			int nevents, uint32_t action)
+create_pmu_event_filter(const uint64_t event_list[], int nevents,
+			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;
 
 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->flags = flags;
 	for (i = 0; i < nevents; i++)
 		f->events[i] = event_list[i];
 
@@ -240,7 +241,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action);
+				       action, 0);
 }
 
 /*
@@ -287,7 +288,7 @@ static void test_amd_deny_list(struct kvm_vm *vm)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;
 
-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
 
 	count = test_with_filter(vm, f);
 	free(f);
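For illustration, the two call shapes the updated helper supports (both are
taken from elsewhere in this series; f, event_list, masked_events, and
nmasked_events are the test's own variables):

	/* Default filter, flags == 0, behaves exactly as before this change. */
	f = create_pmu_event_filter(event_list, ARRAY_SIZE(event_list),
				    KVM_PMU_EVENT_ALLOW, 0);

	/* Masked-event filter, as exercised by the tests added in patch 3. */
	f = create_pmu_event_filter(masked_events, nmasked_events,
				    KVM_PMU_EVENT_ALLOW,
				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);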
From patchwork Mon May 23 21:41:09 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12859505
Date: Mon, 23 May 2022 21:41:09 +0000
In-Reply-To: <20220523214110.1282480-1-aaronlewis@google.com>
Message-Id: <20220523214110.1282480-4-aaronlewis@google.com>
References: <20220523214110.1282480-1-aaronlewis@google.com>
Subject: [PATCH 3/4] selftests: kvm/x86: Add testing for masked events
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Add testing for the pmu event filter's masked events. These tests run
through different ways of finding an event the guest is attempting to
program in an event list. For any given eventsel, there may be multiple
instances of it in an event list. These tests try different ways of
looking up a match to force the matching algorithm to walk the relevant
eventsels and ensure it is able to: a) find a match, and b) stay within
its bounds.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 107 ++++++++++++++++++
 1 file changed, 107 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4bff4c71ac45..4071043bbe26 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -18,8 +18,12 @@
 /*
  * In lieu of copying perf_event.h into tools...
  */
+#define ARCH_PERFMON_EVENTSEL_EVENT 0x000000FFULL
 #define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17)
 #define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
+#define AMD64_EVENTSEL_EVENT \
+	(ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
+
 
 union cpuid10_eax {
 	struct {
@@ -445,6 +449,107 @@ static bool use_amd_pmu(void)
 		    is_zen3(entry->eax));
 }
 
+#define ENCODE_MASKED_EVENT(select, mask, match, invert) \
+	KVM_PMU_EVENT_ENCODE_MASKED_EVENT(select, mask, match, invert)
+
+static void expect_success(uint64_t count)
+{
+	if (count != NUM_BRANCHES)
+		pr_info("masked filter: Branch instructions retired = %lu (expected %u)\n",
+			count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void expect_failure(uint64_t count)
+{
+	if (count)
+		pr_info("masked filter: Branch instructions retired = %lu (expected 0)\n",
+			count);
+	TEST_ASSERT(!count, "Disallowed PMU event is counting");
+}
+
+static void run_masked_filter_test(struct kvm_vm *vm, uint64_t masked_events[],
+				   const int nmasked_events, uint64_t event,
+				   uint32_t action, bool invert,
+				   void (*expected_func)(uint64_t))
+{
+	struct kvm_pmu_event_filter *f;
+	uint64_t old_event;
+	uint64_t count;
+	int i;
+
+	for (i = 0; i < nmasked_events; i++) {
+		if ((masked_events[i] & AMD64_EVENTSEL_EVENT) != EVENT(event, 0))
+			continue;
+
+		old_event = masked_events[i];
+
+		masked_events[i] =
+			ENCODE_MASKED_EVENT(event, ~0x00, 0x00, invert);
+
+		f = create_pmu_event_filter(masked_events, nmasked_events,
+					    action,
+					    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+
+		count = test_with_filter(vm, f);
+		free(f);
+
+		expected_func(count);
+
+		masked_events[i] = old_event;
+	}
+}
+
+static void run_masked_filter_tests(struct kvm_vm *vm, uint64_t masked_events[],
+				    const int nmasked_events, uint64_t event)
+{
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_ALLOW, /*invert=*/false,
+			       expect_success);
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_ALLOW, /*invert=*/true,
+			       expect_failure);
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_DENY, /*invert=*/false,
+			       expect_failure);
+	run_masked_filter_test(vm, masked_events, nmasked_events, event,
+			       KVM_PMU_EVENT_DENY, /*invert=*/true,
+			       expect_success);
+}
+
+static void test_masked_filters(struct kvm_vm *vm)
+{
+	uint64_t masked_events[11];
+	const int nmasked_events = ARRAY_SIZE(masked_events);
+	uint64_t prev_event, event, next_event;
+	int i;
+
+	if (use_intel_pmu()) {
+		/* Instructions retired */
+		prev_event = 0xc0;
+		event = INTEL_BR_RETIRED;
+		/* Branch misses retired */
+		next_event = 0xc5;
+	} else {
+		TEST_ASSERT(use_amd_pmu(), "Unknown platform");
+		/* Retired instructions */
+		prev_event = 0xc0;
+		event = AMD_ZEN_BR_RETIRED;
+		/* Retired branch instructions mispredicted */
+		next_event = 0xc3;
+	}
+
+	for (i = 0; i < nmasked_events; i++)
+		masked_events[i] =
+			ENCODE_MASKED_EVENT(event, ~0x00, i + 1, 0);
+
+	run_masked_filter_tests(vm, masked_events, nmasked_events, event);
+
+	masked_events[0] = ENCODE_MASKED_EVENT(prev_event, ~0x00, 0, 0);
+	masked_events[1] = ENCODE_MASKED_EVENT(next_event, ~0x00, 0, 0);
+
+	run_masked_filter_tests(vm, masked_events, nmasked_events, event);
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void) = NULL;
@@ -489,6 +594,8 @@ int main(int argc, char *argv[])
 	test_not_member_deny_list(vm);
 	test_not_member_allow_list(vm);
 
+	test_masked_filters(vm);
+
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
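The lookup these tests stress can be sketched in isolation. The sketch below
is modeled on filter_masked_event() from patch 1, with illustrative names
only (find_match() and the match callback are not from the patch): bsearch()
may land on any one of several adjacent entries sharing an eventsel, so a
correct implementation walks both directions from the hit while staying
inside the array bounds.

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <stdlib.h>

	/* Same bit layout as AMD64_EVENTSEL_EVENT above. */
	#define EVENTSEL(e) ((e) & 0xf000000ffull)

	static int cmp_eventsel(const void *pa, const void *pb)
	{
		uint64_t a = EVENTSEL(*(const uint64_t *)pa);
		uint64_t b = EVENTSEL(*(const uint64_t *)pb);

		return (a > b) - (a < b);
	}

	/* 'events' must already be sorted with cmp_eventsel(). */
	static bool find_match(const uint64_t *events, size_t nevents,
			       uint64_t key, bool (*match)(uint64_t))
	{
		const uint64_t *hit, *p;

		hit = bsearch(&key, events, nevents, sizeof(*events),
			      cmp_eventsel);
		if (!hit)
			return false;

		/* Walk backward over duplicates of this eventsel... */
		for (p = hit; p >= events && EVENTSEL(*p) == EVENTSEL(key); p--)
			if (match(*p))
				return true;

		/* ...then forward, stopping at the end of the array. */
		for (p = hit + 1;
		     p < events + nevents && EVENTSEL(*p) == EVENTSEL(key); p++)
			if (match(*p))
				return true;

		return false;
	}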
From patchwork Mon May 23 21:41:10 2022
X-Patchwork-Submitter: Aaron Lewis
X-Patchwork-Id: 12859504
Date: Mon, 23 May 2022 21:41:10 +0000
In-Reply-To: <20220523214110.1282480-1-aaronlewis@google.com>
Message-Id: <20220523214110.1282480-5-aaronlewis@google.com>
References: <20220523214110.1282480-1-aaronlewis@google.com>
Subject: [PATCH 4/4] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
From: Aaron Lewis
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, jmattson@google.com, seanjc@google.com, Aaron Lewis

Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by
KVM_SET_PMU_EVENT_FILTER. The only valid bits that can be used for masked
events are the ones set by KVM_PMU_EVENT_ENCODE_MASKED_EVENT(), with one
caveat: if any bits in the high nybble[1] of the eventsel for AMD are used
on Intel, setting the pmu event filter with KVM_SET_PMU_EVENT_FILTER will
fail.

Also, because no validation was done on the event list prior to the
introduction of masked events, verify that this behavior continues for the
original event type (flags == 0). That is, even if invalid bits are set
(bits other than eventsel+umask), the pmu event filter will be accepted by
KVM_SET_PMU_EVENT_FILTER.

[1] bits 35:32 in the encoded event, which correspond to bits 11:8 in the
    eventsel.

Signed-off-by: Aaron Lewis
---
 .../kvm/x86_64/pmu_event_filter_test.c | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 4071043bbe26..403143ee0b6d 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -550,6 +550,36 @@ static void test_masked_filters(struct kvm_vm *vm)
 	run_masked_filter_tests(vm, masked_events, nmasked_events, event);
 }
 
+static void test_filter_ioctl(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f;
+	uint64_t e = ~0ul;
+	int r;
+
+	/*
+	 * Unfortunately having invalid bits set in event data is expected to
+	 * pass when flags == 0 (bits other than eventsel+umask).
+	 */
+	f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW, 0);
+	r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+	free(f);
+
+	f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
+	free(f);
+
+	e = ENCODE_MASKED_EVENT(0xff, 0xff, 0xff, 0xf);
+
+	f = create_pmu_event_filter(&e, 1, KVM_PMU_EVENT_ALLOW,
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r = _vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+	free(f);
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void) = NULL;
@@ -595,6 +625,7 @@ int main(int argc, char *argv[])
 	test_not_member_allow_list(vm);
 
 	test_masked_filters(vm);
+	test_filter_ioctl(vm);
 
 	kvm_vm_free(vm);
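As a rough model of what the ioctl now accepts, the sketch below mirrors
has_invalid_event() and the per-vendor get_event_mask() callbacks from
patch 1. The mask constants are derived here from the perf_event.h bit
definitions rather than copied from the patch, so treat them as an
assumption of this note, not authoritative values.

	#include <stdbool.h>
	#include <stdint.h>

	/* Bits accepted for masked events: eventsel | umask | inv | cmask. */
	#define INTEL_MASKED_EVENT_MASK 0xff80ffffull              /* no bits 35:32 */
	#define AMD_MASKED_EVENT_MASK   ((0xfull << 32) | 0xff80ffffull)

	/* Mirrors has_invalid_event(): any stray bit fails the ioctl (-EINVAL). */
	static bool has_invalid_event(const uint64_t *events, int nevents,
				      uint64_t event_mask)
	{
		int i;

		for (i = 0; i < nevents; i++)
			if (events[i] & ~event_mask)
				return true;

		return false;
	}

This is why e = ~0ul is rejected once KVM_PMU_EVENT_FLAG_MASKED_EVENTS is
set, while ENCODE_MASKED_EVENT(0xff, 0xff, 0xff, 0xf), which sets only
accepted bits, succeeds, and why an AMD-style eventsel with bits 35:32 set
fails on Intel.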