From patchwork Tue Nov 30 07:42:16 2021
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Like Xu
Subject: [PATCH v2 1/6] KVM: x86/pmu: Setup pmc->eventsel for fixed PMCs
Date: Tue, 30 Nov 2021 15:42:16 +0800
Message-Id: <20211130074221.93635-2-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-1-likexu@tencent.com>

From: Like Xu

The pmc->eventsel of a fixed counter is currently underutilised. It can
be set up for all known available fixed counters, since we have a
mapping between the fixed pmc index and the intel_arch_events array.
Having a valid eventsel for both gp and fixed counters simplifies the
later checks for consistency between eventsel and perf_hw_id.

Signed-off-by: Like Xu
---
 arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 1b7456b2177b..b7ab5fd03681 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -459,6 +459,21 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	return 1;
 }
 
+static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
+{
+	size_t size = ARRAY_SIZE(fixed_pmc_events);
+	struct kvm_pmc *pmc;
+	u32 event;
+	int i;
+
+	for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
+		pmc = &pmu->fixed_counters[i];
+		event = fixed_pmc_events[array_index_nospec(i, size)];
+		pmc->eventsel = (intel_arch_events[event].unit_mask << 8) |
+				intel_arch_events[event].eventsel;
+	}
+}
+
 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -506,6 +521,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 			edx.split.bit_width_fixed, x86_pmu.bit_width_fixed);
 		pmu->counter_bitmask[KVM_PMC_FIXED] =
 			((u64)1 << edx.split.bit_width_fixed) - 1;
+		setup_fixed_pmc_eventsel(pmu);
 	}
 
 	pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) |
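
As background for the mapping this patch relies on, here is a minimal
standalone sketch (not kernel code) of how a fixed counter index can be
turned into an eventsel value. Only the two architectural encodings that
are certain are shown (unhalted core cycles 0x3c/0x00, instructions
retired 0xc0/0x00); the struct and array names are local to the sketch,
not taken from pmu_intel.c:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the two tables the patch indexes into. */
struct arch_event { uint8_t eventsel; uint8_t unit_mask; };

static const struct arch_event arch_events[] = {
	[0] = { 0x3c, 0x00 },	/* unhalted core cycles */
	[1] = { 0xc0, 0x00 },	/* instructions retired */
};

/* fixed counter index -> arch_events index (counter 0 counts instructions,
 * counter 1 counts core cycles in this sketch) */
static const int fixed_events[] = { 1, 0 };

int main(void)
{
	for (int i = 0; i < 2; i++) {
		int ev = fixed_events[i];
		uint64_t eventsel = ((uint64_t)arch_events[ev].unit_mask << 8) |
				    arch_events[ev].eventsel;
		printf("fixed counter %d -> eventsel 0x%04llx\n",
		       i, (unsigned long long)eventsel);
	}
	return 0;
}
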
From patchwork Tue Nov 30 07:42:17 2021
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Like Xu
Subject: [PATCH v2 2/6] KVM: x86/pmu: Refactoring find_arch_event() to pmc_perf_hw_id()
Date: Tue, 30 Nov 2021 15:42:17 +0800
Message-Id: <20211130074221.93635-3-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-1-likexu@tencent.com>

From: Like Xu

find_arch_event() returns an "unsigned int" value, which is used by
pmc_reprogram_counter() to program a PERF_TYPE_HARDWARE type
perf_event. The returned value is actually the kernel-defined generic
perf_hw_id, so rename the callback to pmc_perf_hw_id() and give it
simpler incoming parameters for better self-explanation.

Signed-off-by: Like Xu
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/pmu.c           | 8 +-------
 arch/x86/kvm/pmu.h           | 3 +--
 arch/x86/kvm/svm/pmu.c       | 8 ++++----
 arch/x86/kvm/vmx/pmu_intel.c | 9 +++++----
 4 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 09873f6488f7..3b3ccf5b1106 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -174,7 +174,6 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	unsigned config, type = PERF_TYPE_RAW;
-	u8 event_select, unit_mask;
 	struct kvm *kvm = pmc->vcpu->kvm;
 	struct kvm_pmu_event_filter *filter;
 	int i;
@@ -206,17 +205,12 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	if (!allow_event)
 		return;
 
-	event_select = eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
-	unit_mask = (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
-
 	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
 			  ARCH_PERFMON_EVENTSEL_INV |
 			  ARCH_PERFMON_EVENTSEL_CMASK |
 			  HSW_IN_TX |
 			  HSW_IN_TX_CHECKPOINTED))) {
-		config = kvm_x86_ops.pmu_ops->find_arch_event(pmc_to_pmu(pmc),
-							      event_select,
-							      unit_mask);
+		config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
 		if (config != PERF_COUNT_HW_MAX)
 			type = PERF_TYPE_HARDWARE;
 	}
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 59d6b76203d5..dd7dbb1c5048 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -24,8 +24,7 @@ struct kvm_event_hw_type_mapping {
 };
 
 struct kvm_pmu_ops {
-	unsigned (*find_arch_event)(struct kvm_pmu *pmu, u8 event_select,
-				    u8 unit_mask);
+	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
 	unsigned (*find_fixed_event)(int idx);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 0cf05e4caa4c..fb0ce8cda8a7 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -138,10 +138,10 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 	return &pmu->gp_counters[msr_to_index(msr)];
 }
 
-static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
-				    u8 event_select,
-				    u8 unit_mask)
+static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 {
+	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
@@ -323,7 +323,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops = {
-	.find_arch_event = amd_find_arch_event,
+	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
 	.find_fixed_event = amd_find_fixed_event,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b7ab5fd03681..67a0188ecdc5 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -68,10 +68,11 @@ static void global_ctrl_changed(struct kvm_pmu *pmu, u64 data)
 		reprogram_counter(pmu, bit);
 }
 
-static unsigned intel_find_arch_event(struct kvm_pmu *pmu,
-				      u8 event_select,
-				      u8 unit_mask)
+static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 {
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
@@ -719,7 +720,7 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops = {
-	.find_arch_event = intel_find_arch_event,
+	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
 	.find_fixed_event = intel_find_fixed_event,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,

From patchwork Tue Nov 30 07:42:18 2021
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Like Xu
Subject: [PATCH v2 3/6] KVM: x86/pmu: Reuse pmc_perf_hw_id() and drop find_fixed_event()
Date: Tue, 30 Nov 2021 15:42:18 +0800
Message-Id: <20211130074221.93635-4-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-1-likexu@tencent.com>

From: Like Xu

Since we set the same semantic event value for the fixed counter in
pmc->eventsel, returning the perf_hw_id for the fixed counter via
find_fixed_event() can be painlessly replaced by pmc_perf_hw_id()
with the help of the pmc_is_fixed() check.

Signed-off-by: Like Xu
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/pmu.c           |  2 +-
 arch/x86/kvm/pmu.h           |  1 -
 arch/x86/kvm/svm/pmu.c       | 11 ++++-------
 arch/x86/kvm/vmx/pmu_intel.c | 19 +++----------------
 4 files changed, 8 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 3b3ccf5b1106..b7a1ae28ab87 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -262,7 +262,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 	pmc->current_config = (u64)ctrl;
 	pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE,
-			      kvm_x86_ops.pmu_ops->find_fixed_event(idx),
+			      kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc),
 			      !(en_field & 0x2), /* exclude user */
 			      !(en_field & 0x1), /* exclude kernel */
 			      pmi, false, false);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index dd7dbb1c5048..c91d9725aafd 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -25,7 +25,6 @@ struct kvm_event_hw_type_mapping {
 
 struct kvm_pmu_ops {
 	unsigned int (*pmc_perf_hw_id)(struct kvm_pmc *pmc);
-	unsigned (*find_fixed_event)(int idx);
 	bool (*pmc_is_enabled)(struct kvm_pmc *pmc);
 	struct kvm_pmc *(*pmc_idx_to_pmc)(struct kvm_pmu *pmu, int pmc_idx);
 	struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index fb0ce8cda8a7..12d8b301065a 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -144,6 +144,10 @@ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 	int i;
 
+	/* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
+	if (WARN_ON(pmc_is_fixed(pmc)))
+		return PERF_COUNT_HW_MAX;
+
 	for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
 		if (amd_event_mapping[i].eventsel == event_select
 		    && amd_event_mapping[i].unit_mask == unit_mask)
@@ -155,12 +159,6 @@ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	return amd_event_mapping[i].event_type;
 }
 
-/* return PERF_COUNT_HW_MAX as AMD doesn't have fixed events */
-static unsigned amd_find_fixed_event(int idx)
-{
-	return PERF_COUNT_HW_MAX;
-}
-
 /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
  * AMD CPU doesn't have global_ctrl MSR, all PMCs are enabled (return TRUE).
 */
@@ -324,7 +322,6 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops amd_pmu_ops = {
 	.pmc_perf_hw_id = amd_pmc_perf_hw_id,
-	.find_fixed_event = amd_find_fixed_event,
 	.pmc_is_enabled = amd_pmc_is_enabled,
 	.pmc_idx_to_pmc = amd_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = amd_rdpmc_ecx_to_pmc,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 67a0188ecdc5..ad0e53b0d7bf 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -76,9 +76,9 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
-		if (intel_arch_events[i].eventsel == event_select
-		    && intel_arch_events[i].unit_mask == unit_mask
-		    && (pmu->available_event_types & (1 << i)))
+		if (intel_arch_events[i].eventsel == event_select &&
+		    intel_arch_events[i].unit_mask == unit_mask &&
+		    (pmc_is_fixed(pmc) || pmu->available_event_types & (1 << i)))
 			break;
 
 	if (i == ARRAY_SIZE(intel_arch_events))
@@ -87,18 +87,6 @@ static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
 	return intel_arch_events[i].event_type;
 }
 
-static unsigned intel_find_fixed_event(int idx)
-{
-	u32 event;
-	size_t size = ARRAY_SIZE(fixed_pmc_events);
-
-	if (idx >= size)
-		return PERF_COUNT_HW_MAX;
-
-	event = fixed_pmc_events[array_index_nospec(idx, size)];
-	return intel_arch_events[event].event_type;
-}
-
 /* check if a PMC is enabled by comparing it with globl_ctrl bits. */
 static bool intel_pmc_is_enabled(struct kvm_pmc *pmc)
 {
@@ -721,7 +709,6 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 }
 
 struct kvm_pmu_ops intel_pmu_ops = {
 	.pmc_perf_hw_id = intel_pmc_perf_hw_id,
-	.find_fixed_event = intel_find_fixed_event,
 	.pmc_is_enabled = intel_pmc_is_enabled,
 	.pmc_idx_to_pmc = intel_pmc_idx_to_pmc,
 	.rdpmc_ecx_to_pmc = intel_rdpmc_ecx_to_pmc,
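
To illustrate how patches 1-3 combine, here is a small standalone sketch
(not kernel code; it assumes only the same two architectural encodings
as the earlier sketch, and all helper names are local to it) of the
lookup that a pmc_perf_hw_id() implementation effectively performs once
fixed counters carry a valid eventsel:

#include <stdint.h>
#include <stdio.h>

struct event_map { uint8_t eventsel; uint8_t unit_mask; const char *perf_hw_id; };

/* Illustrative subset of an Intel arch-event table. */
static const struct event_map events[] = {
	{ 0x3c, 0x00, "PERF_COUNT_HW_CPU_CYCLES" },
	{ 0xc0, 0x00, "PERF_COUNT_HW_INSTRUCTIONS" },
};

/* Mask out event select and umask, then look the pair up in the table. */
static const char *perf_hw_id(uint64_t eventsel)
{
	uint8_t select = eventsel & 0xff;
	uint8_t umask = (eventsel >> 8) & 0xff;

	for (size_t i = 0; i < sizeof(events) / sizeof(events[0]); i++)
		if (events[i].eventsel == select && events[i].unit_mask == umask)
			return events[i].perf_hw_id;
	return "PERF_COUNT_HW_MAX";
}

int main(void)
{
	/* A fixed "instructions retired" counter now carries eventsel 0x00c0. */
	printf("%s\n", perf_hw_id(0x00c0));
	return 0;
}
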
From patchwork Tue Nov 30 07:42:19 2021
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Like Xu
Subject: [PATCH v2 4/6] KVM: x86/pmu: Add pmc->intr to refactor kvm_perf_overflow{_intr}()
Date: Tue, 30 Nov 2021 15:42:19 +0800
Message-Id: <20211130074221.93635-5-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-1-likexu@tencent.com>

From: Like Xu

Depending on whether an interrupt should be triggered, KVM registers
one of two different event overflow callbacks in the perf_event
context. The code skeletons of these two functions are very similar,
so the intr flag can be stored in the pmc from pmc_reprogram_counter(),
which also results in a smaller instruction footprint and less pressure
on the micro-architectural branch predictor.

__kvm_perf_overflow() can be called in non-NMI contexts, so a flag is
needed to distinguish the caller context and thus avoid a check on
kvm_is_in_guest(); otherwise we might get warnings from suspicious RCU
or check_preemption_disabled().
Suggested-by: Paolo Bonzini
Signed-off-by: Like Xu
Reviewed-by: Jim Mattson
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/pmu.c              | 58 ++++++++++++++++-----------------
 2 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e41ad1ead721..6c2b2331ffeb 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -495,6 +495,7 @@ struct kvm_pmc {
 	 */
 	u64 current_config;
 	bool is_paused;
+	bool intr;
 };
 
 struct kvm_pmu {
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index b7a1ae28ab87..a20207ee4014 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -55,43 +55,41 @@ static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 	kvm_pmu_deliver_pmi(vcpu);
 }
 
-static void kvm_perf_overflow(struct perf_event *perf_event,
-			      struct perf_sample_data *data,
-			      struct pt_regs *regs)
+static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi)
 {
-	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
 
-	if (!test_and_set_bit(pmc->idx, pmu->reprogram_pmi)) {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
-		kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
-	}
+	/* Ignore counters that have been reprogrammed already. */
+	if (test_and_set_bit(pmc->idx, pmu->reprogram_pmi))
+		return;
+
+	__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
+	kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
+
+	if (!pmc->intr)
+		return;
+
+	/*
+	 * Inject PMI. If vcpu was in a guest mode during NMI PMI
+	 * can be ejected on a guest mode re-entry. Otherwise we can't
+	 * be sure that vcpu wasn't executing hlt instruction at the
+	 * time of vmexit and is not going to re-enter guest mode until
+	 * woken up. So we should wake it, but this is impossible from
+	 * NMI context. Do it from irq work instead.
+	 */
+	if (in_pmi && !kvm_is_in_guest())
+		irq_work_queue(&pmc_to_pmu(pmc)->irq_work);
+	else
+		kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
 }
 
-static void kvm_perf_overflow_intr(struct perf_event *perf_event,
-				   struct perf_sample_data *data,
-				   struct pt_regs *regs)
+static void kvm_perf_overflow(struct perf_event *perf_event,
+			      struct perf_sample_data *data,
+			      struct pt_regs *regs)
 {
 	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
-	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
-
-	if (!test_and_set_bit(pmc->idx, pmu->reprogram_pmi)) {
-		__set_bit(pmc->idx, (unsigned long *)&pmu->global_status);
-		kvm_make_request(KVM_REQ_PMU, pmc->vcpu);
-		/*
-		 * Inject PMI. If vcpu was in a guest mode during NMI PMI
-		 * can be ejected on a guest mode re-entry. Otherwise we can't
-		 * be sure that vcpu wasn't executing hlt instruction at the
-		 * time of vmexit and is not going to re-enter guest mode until
-		 * woken up. So we should wake it, but this is impossible from
-		 * NMI context. Do it from irq work instead.
-		 */
-		if (!kvm_is_in_guest())
-			irq_work_queue(&pmc_to_pmu(pmc)->irq_work);
-		else
-			kvm_make_request(KVM_REQ_PMI, pmc->vcpu);
-	}
+	__kvm_perf_overflow(pmc, true);
 }
 
 static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
@@ -126,7 +124,6 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 	}
 
 	event = perf_event_create_kernel_counter(&attr, -1, current,
-						 intr ? kvm_perf_overflow_intr :
 						 kvm_perf_overflow, pmc);
 	if (IS_ERR(event)) {
 		pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
@@ -138,6 +135,7 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
 	pmc_to_pmu(pmc)->event_count++;
 	clear_bit(pmc->idx, pmc_to_pmu(pmc)->reprogram_pmi);
 	pmc->is_paused = false;
+	pmc->intr = intr;
 }
 
 static void pmc_pause_counter(struct kvm_pmc *pmc)
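
A standalone toy model (not kernel code; the function and variable names
are local to this sketch) of the two paths that now funnel into
__kvm_perf_overflow(), showing why only the NMI-context caller has to
defer the vcpu wakeup to irq_work:

#include <stdbool.h>
#include <stdio.h>

/* The perf NMI callback passes in_pmi = true; the instruction-emulation
 * path added by the next patch passes in_pmi = false. */
static void overflow(bool in_pmi, bool intr, bool in_guest)
{
	if (!intr) {
		puts("overflow recorded, no PMI requested");
		return;
	}
	if (in_pmi && !in_guest)
		puts("NMI context outside guest: queue irq_work to inject the PMI");
	else
		puts("safe context: request KVM_REQ_PMI directly");
}

int main(void)
{
	overflow(true, true, false);	/* perf overflow callback (NMI) */
	overflow(false, true, false);	/* emulated-instruction increment */
	return 0;
}
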
From patchwork Tue Nov 30 07:42:20 2021
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Like Xu, Peter Zijlstra
Subject: [PATCH v2 5/6] KVM: x86: Update vPMCs when retiring instructions
Date: Tue, 30 Nov 2021 15:42:20 +0800
Message-Id: <20211130074221.93635-6-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-1-likexu@tencent.com>

From: Like Xu

When KVM retires a guest instruction through emulation, increment any
vPMCs that are configured to monitor "instructions retired," and
update the sample period of those counters so that they will overflow
at the right time.

Signed-off-by: Eric Hankland
[jmattson:
  - Split the code to increment "branch instructions retired" into a
    separate commit.
  - Added 'static' to kvm_pmu_incr_counter() definition.
  - Modified kvm_pmu_incr_counter() to check pmc->perf_event->state ==
    PERF_EVENT_STATE_ACTIVE.
]
Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
Signed-off-by: Jim Mattson
[likexu:
  - Drop checks for pmc->perf_event or event state or event type
  - Increase a counter once its umask bits and the first 8 select bits
    are matched
  - Rewrite kvm_pmu_incr_counter() with a less invasive approach to the
    host perf;
  - Rename kvm_pmu_record_event to kvm_pmu_trigger_event;
  - Add counter enable and CPL check for kvm_pmu_trigger_event();
]
Cc: Peter Zijlstra
Signed-off-by: Like Xu
Signed-off-by: Eric Hankland
Signed-off-by: Jim Mattson
Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/pmu.h |  1 +
 arch/x86/kvm/x86.c |  3 +++
 3 files changed, 64 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index a20207ee4014..8abdadb7e22a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -482,6 +482,66 @@ void kvm_pmu_destroy(struct kvm_vcpu *vcpu)
 	kvm_pmu_reset(vcpu);
 }
 
+static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
+	u64 prev_count;
+
+	prev_count = pmc->counter;
+	pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
+
+	reprogram_counter(pmu, pmc->idx);
+	if (pmc->counter < prev_count)
+		__kvm_perf_overflow(pmc, false);
+}
+
+static inline bool eventsel_match_perf_hw_id(struct kvm_pmc *pmc,
+					     unsigned int perf_hw_id)
+{
+	u64 old_eventsel = pmc->eventsel;
+	unsigned int config;
+
+	pmc->eventsel &= (ARCH_PERFMON_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK);
+	config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
+	pmc->eventsel = old_eventsel;
+	return config == perf_hw_id;
+}
+
+static inline bool cpl_is_matched(struct kvm_pmc *pmc)
+{
+	bool select_os, select_user;
+	u64 config = pmc->current_config;
+
+	if (pmc_is_gp(pmc)) {
+		select_os = config & ARCH_PERFMON_EVENTSEL_OS;
+		select_user = config & ARCH_PERFMON_EVENTSEL_USR;
+	} else {
+		select_os = config & 0x1;
+		select_user = config & 0x2;
+	}
+
+	return (static_call(kvm_x86_get_cpl)(pmc->vcpu) == 0) ?
+			select_os : select_user;
+}
+
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_pmc *pmc;
+	int i;
+
+	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
+		pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, i);
+
+		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
+			continue;
+
+		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
+		if (eventsel_match_perf_hw_id(pmc, perf_hw_id) && cpl_is_matched(pmc))
+			kvm_pmu_incr_counter(pmc);
+	}
+}
+EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_pmu_event_filter tmp, *filter;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index c91d9725aafd..7a7b8d5b775e 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -157,6 +157,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu);
 void kvm_pmu_cleanup(struct kvm_vcpu *vcpu);
 void kvm_pmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp);
+void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id);
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a05a26471f19..83371be00771 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7978,6 +7978,8 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
 	if (unlikely(!r))
 		return 0;
 
+	kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
+
 	/*
 	 * rflags is the old, "raw" value of the flags.  The new value has
 	 * not been saved yet.
@@ -8240,6 +8242,7 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 		if (!ctxt->have_exception ||
 		    exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
+			kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
 			kvm_rip_write(vcpu, ctxt->eip);
 			if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
 				r = kvm_vcpu_do_singlestep(vcpu);
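
As a worked illustration of the CPL filter above, here is a standalone
model (only the two architectural eventsel bits OS/USR are assumed;
everything else is local to the sketch, not KVM code): a counter that is
programmed to count user-mode events only should not advance when the
emulated instruction ran at CPL 0, and the other way around:

#include <stdbool.h>
#include <stdio.h>

#define EVENTSEL_OS  (1u << 17)	/* count at CPL 0 */
#define EVENTSEL_USR (1u << 16)	/* count at CPL > 0 */

/* Mirrors the decision made for a GP counter. */
static bool cpl_is_matched(unsigned int eventsel, int guest_cpl)
{
	bool select_os = eventsel & EVENTSEL_OS;
	bool select_user = eventsel & EVENTSEL_USR;

	return guest_cpl == 0 ? select_os : select_user;
}

int main(void)
{
	unsigned int user_only = EVENTSEL_USR;

	printf("user-only counter, guest at CPL 0: %s\n",
	       cpl_is_matched(user_only, 0) ? "increment" : "skip");
	printf("user-only counter, guest at CPL 3: %s\n",
	       cpl_is_matched(user_only, 3) ? "increment" : "skip");
	return 0;
}
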
From patchwork Tue Nov 30 07:42:21 2021
From: Like Xu
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 6/6] KVM: x86: Update vPMCs when retiring branch instructions
Date: Tue, 30 Nov 2021 15:42:21 +0800
Message-Id: <20211130074221.93635-7-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-1-likexu@tencent.com>

From: Jim Mattson

When KVM retires a guest branch instruction through emulation,
increment any vPMCs that are configured to monitor "branch
instructions retired," and update the sample period of those counters
so that they will overflow at the right time.

Signed-off-by: Eric Hankland
[jmattson:
  - Split the code to increment "branch instructions retired" into a
    separate commit.
  - Moved/consolidated the calls to kvm_pmu_trigger_event() in the
    emulation of VMLAUNCH/VMRESUME to accommodate the evolution of
    that code.
]
Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
Signed-off-by: Jim Mattson
Reviewed-by: Jim Mattson
---
 arch/x86/kvm/emulate.c     | 55 +++++++++++++++++++++-----------------
 arch/x86/kvm/kvm_emulate.h |  1 +
 arch/x86/kvm/vmx/nested.c  |  7 +++--
 arch/x86/kvm/x86.c         |  2 ++
 4 files changed, 39 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 28b1a4e57827..166a145fc1e6 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -175,6 +175,7 @@
 #define No16	    ((u64)1 << 53)  /* No 16 bit operand */
 #define IncSP       ((u64)1 << 54)  /* SP is incremented before ModRM calc */
 #define TwoMemOp    ((u64)1 << 55)  /* Instruction has two memory operand */
+#define IsBranch    ((u64)1 << 56)  /* Instruction is considered a branch. */
 
 #define DstXacc     (DstAccLo | SrcAccHi | SrcWrite)
 
@@ -191,8 +192,9 @@
 #define FASTOP_SIZE 8
 
 struct opcode {
-	u64 flags : 56;
-	u64 intercept : 8;
+	u64 flags;
+	u8 intercept;
+	u8 pad[7];
 	union {
 		int (*execute)(struct x86_emulate_ctxt *ctxt);
 		const struct opcode *group;
@@ -4364,10 +4366,10 @@ static const struct opcode group4[] = {
 static const struct opcode group5[] = {
 	F(DstMem | SrcNone | Lock, em_inc),
 	F(DstMem | SrcNone | Lock, em_dec),
-	I(SrcMem | NearBranch, em_call_near_abs),
-	I(SrcMemFAddr | ImplicitOps, em_call_far),
-	I(SrcMem | NearBranch, em_jmp_abs),
-	I(SrcMemFAddr | ImplicitOps, em_jmp_far),
+	I(SrcMem | NearBranch | IsBranch, em_call_near_abs),
+	I(SrcMemFAddr | ImplicitOps | IsBranch, em_call_far),
+	I(SrcMem | NearBranch | IsBranch, em_jmp_abs),
+	I(SrcMemFAddr | ImplicitOps | IsBranch, em_jmp_far),
 	I(SrcMem | Stack | TwoMemOp, em_push), D(Undefined),
 };
 
@@ -4577,7 +4579,7 @@ static const struct opcode opcode_table[256] = {
 	I2bvIP(DstDI | SrcDX | Mov | String | Unaligned, em_in, ins, check_perm_in), /* insb, insw/insd */
 	I2bvIP(SrcSI | DstDX | String, em_out, outs, check_perm_out), /* outsb, outsw/outsd */
 	/* 0x70 - 0x7F */
-	X16(D(SrcImmByte | NearBranch)),
+	X16(D(SrcImmByte | NearBranch | IsBranch)),
 	/* 0x80 - 0x87 */
 	G(ByteOp | DstMem | SrcImm, group1),
 	G(DstMem | SrcImm, group1),
@@ -4596,7 +4598,7 @@ static const struct opcode opcode_table[256] = {
 	DI(SrcAcc | DstReg, pause), X7(D(SrcAcc | DstReg)),
 	/* 0x98 - 0x9F */
 	D(DstAcc | SrcNone), I(ImplicitOps | SrcAcc, em_cwd),
-	I(SrcImmFAddr | No64, em_call_far), N,
+	I(SrcImmFAddr | No64 | IsBranch, em_call_far), N,
 	II(ImplicitOps | Stack, em_pushf, pushf),
 	II(ImplicitOps | Stack, em_popf, popf),
 	I(ImplicitOps, em_sahf), I(ImplicitOps, em_lahf),
@@ -4616,17 +4618,19 @@ static const struct opcode opcode_table[256] = {
 	X8(I(DstReg | SrcImm64 | Mov, em_mov)),
 	/* 0xC0 - 0xC7 */
 	G(ByteOp | Src2ImmByte, group2), G(Src2ImmByte, group2),
-	I(ImplicitOps | NearBranch | SrcImmU16, em_ret_near_imm),
-	I(ImplicitOps | NearBranch, em_ret),
+	I(ImplicitOps | NearBranch | SrcImmU16 | IsBranch, em_ret_near_imm),
+	I(ImplicitOps | NearBranch | IsBranch, em_ret),
 	I(DstReg | SrcMemFAddr | ModRM | No64 | Src2ES, em_lseg),
 	I(DstReg | SrcMemFAddr | ModRM | No64 | Src2DS, em_lseg),
 	G(ByteOp, group11), G(0, group11),
 	/* 0xC8 - 0xCF */
-	I(Stack | SrcImmU16 | Src2ImmByte, em_enter), I(Stack, em_leave),
-	I(ImplicitOps | SrcImmU16, em_ret_far_imm),
-	I(ImplicitOps, em_ret_far),
-	D(ImplicitOps), DI(SrcImmByte, intn),
-	D(ImplicitOps | No64), II(ImplicitOps, em_iret, iret),
+	I(Stack | SrcImmU16 | Src2ImmByte | IsBranch, em_enter),
+	I(Stack | IsBranch, em_leave),
+	I(ImplicitOps | SrcImmU16 | IsBranch, em_ret_far_imm),
+	I(ImplicitOps | IsBranch, em_ret_far),
+	D(ImplicitOps | IsBranch), DI(SrcImmByte | IsBranch, intn),
+	D(ImplicitOps | No64 | IsBranch),
+	II(ImplicitOps | IsBranch, em_iret, iret),
 	/* 0xD0 - 0xD7 */
 	G(Src2One | ByteOp, group2), G(Src2One, group2),
 	G(Src2CL | ByteOp, group2), G(Src2CL, group2),
@@ -4637,14 +4641,15 @@ static const struct opcode opcode_table[256] = {
 	/* 0xD8 - 0xDF */
 	N, E(0, &escape_d9), N, E(0, &escape_db), N, E(0, &escape_dd), N, N,
 	/* 0xE0 - 0xE7 */
-	X3(I(SrcImmByte | NearBranch, em_loop)),
-	I(SrcImmByte | NearBranch, em_jcxz),
+	X3(I(SrcImmByte | NearBranch | IsBranch, em_loop)),
+	I(SrcImmByte | NearBranch | IsBranch, em_jcxz),
 	I2bvIP(SrcImmUByte | DstAcc, em_in, in, check_perm_in),
 	I2bvIP(SrcAcc | DstImmUByte, em_out, out, check_perm_out),
 	/* 0xE8 - 0xEF */
-	I(SrcImm | NearBranch, em_call), D(SrcImm | ImplicitOps | NearBranch),
-	I(SrcImmFAddr | No64, em_jmp_far),
-	D(SrcImmByte | ImplicitOps | NearBranch),
+	I(SrcImm | NearBranch | IsBranch, em_call),
+	D(SrcImm | ImplicitOps | NearBranch | IsBranch),
+	I(SrcImmFAddr | No64 | IsBranch, em_jmp_far),
+	D(SrcImmByte | ImplicitOps | NearBranch | IsBranch),
 	I2bvIP(SrcDX | DstAcc, em_in, in, check_perm_in),
 	I2bvIP(SrcAcc | DstDX, em_out, out, check_perm_out),
 	/* 0xF0 - 0xF7 */
@@ -4660,7 +4665,7 @@ static const struct opcode opcode_table[256] = {
 static const struct opcode twobyte_table[256] = {
 	/* 0x00 - 0x0F */
 	G(0, group6), GD(0, &group7), N, N,
-	N, I(ImplicitOps | EmulateOnUD, em_syscall),
+	N, I(ImplicitOps | EmulateOnUD | IsBranch, em_syscall),
 	II(ImplicitOps | Priv, em_clts, clts), N,
 	DI(ImplicitOps | Priv, invd), DI(ImplicitOps | Priv, wbinvd), N, N,
 	N, D(ImplicitOps | ModRM | SrcMem | NoAccess), N, N,
@@ -4691,8 +4696,8 @@ static const struct opcode twobyte_table[256] = {
 	IIP(ImplicitOps, em_rdtsc, rdtsc, check_rdtsc),
 	II(ImplicitOps | Priv, em_rdmsr, rdmsr),
 	IIP(ImplicitOps, em_rdpmc, rdpmc, check_rdpmc),
-	I(ImplicitOps | EmulateOnUD, em_sysenter),
-	I(ImplicitOps | Priv | EmulateOnUD, em_sysexit),
+	I(ImplicitOps | EmulateOnUD | IsBranch, em_sysenter),
+	I(ImplicitOps | Priv | EmulateOnUD | IsBranch, em_sysexit),
 	N, N,
 	N, N, N, N, N, N, N, N,
 	/* 0x40 - 0x4F */
@@ -4710,7 +4715,7 @@ static const struct opcode twobyte_table[256] = {
 	N, N, N, N, N, N, N, GP(SrcReg | DstMem | ModRM | Mov, &pfx_0f_6f_0f_7f),
 	/* 0x80 - 0x8F */
-	X16(D(SrcImm | NearBranch)),
+	X16(D(SrcImm | NearBranch | IsBranch)),
 	/* 0x90 - 0x9F */
 	X16(D(ByteOp | DstMem | SrcNone | ModRM| Mov)),
 	/* 0xA0 - 0xA7 */
@@ -5224,6 +5229,8 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len, int
 		ctxt->d |= opcode.flags;
 	}
 
+	ctxt->is_branch = opcode.flags & IsBranch;
+
 	/* Unrecognised? */
 	if (ctxt->d == 0)
 		return EMULATION_FAILED;
 
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index 68b420289d7e..39eded2426ff 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -369,6 +369,7 @@ struct x86_emulate_ctxt {
 	struct fetch_cache fetch;
 	struct read_cache io_read;
 	struct read_cache mem_read;
+	bool is_branch;
 };
 
 /* Repeat String Operation Prefix */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 95f3823b3a9d..6e05d36aedd5 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3504,10 +3504,13 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	if (evmptrld_status == EVMPTRLD_ERROR) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
-	} else if (CC(evmptrld_status == EVMPTRLD_VMFAIL)) {
-		return nested_vmx_failInvalid(vcpu);
 	}
 
+	kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
+
+	if (CC(evmptrld_status == EVMPTRLD_VMFAIL))
+		return nested_vmx_failInvalid(vcpu);
+
 	if (CC(!evmptr_is_valid(vmx->nested.hv_evmcs_vmptr) &&
 	       vmx->nested.current_vmptr == INVALID_GPA))
 		return nested_vmx_failInvalid(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 83371be00771..c28226dcbe4e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8243,6 +8243,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		if (!ctxt->have_exception ||
 		    exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
 			kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_INSTRUCTIONS);
+			if (ctxt->is_branch)
+				kvm_pmu_trigger_event(vcpu, PERF_COUNT_HW_BRANCH_INSTRUCTIONS);
 			kvm_rip_write(vcpu, ctxt->eip);
 			if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
 				r = kvm_vcpu_do_singlestep(vcpu);
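
Taken together with the previous patch, the guest-visible effect can be
summarised with a standalone toy model (the function and field names are
local to this sketch, not KVM code): every emulated instruction bumps
"instructions retired", and instructions the decoder marked IsBranch
additionally bump "branch instructions retired":

#include <stdbool.h>
#include <stdio.h>

struct counters { unsigned long instructions; unsigned long branches; };

static void retire_emulated_insn(struct counters *c, bool is_branch)
{
	c->instructions++;		/* PERF_COUNT_HW_INSTRUCTIONS */
	if (is_branch)
		c->branches++;		/* PERF_COUNT_HW_BRANCH_INSTRUCTIONS */
}

int main(void)
{
	struct counters c = { 0, 0 };

	retire_emulated_insn(&c, false);	/* e.g. an emulated mov */
	retire_emulated_insn(&c, true);		/* e.g. an emulated jmp */

	printf("instructions=%lu branches=%lu\n", c.instructions, c.branches);
	return 0;
}
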