From patchwork Tue May 10 11:57:16 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12844975
From: Like Xu <likexu@tencent.com>
To: Paolo Bonzini
Cc: Jim Mattson, sandipan.das@amd.com, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] KVM: x86/pmu: Add fast-path check for per-vm vPMU disablement
Date: Tue, 10 May 2022 19:57:16 +0800
Message-Id: <20220510115718.93335-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

Since vcpu->kvm->arch.enable_pmu is introduced in a generic way, it
makes more sense to move the relevant checks into generic code rather
than scattering them around, thus saving CPU cycles from static_call()
when the vPMU is disabled.
Signed-off-by: Like Xu <likexu@tencent.com>
---
 arch/x86/kvm/pmu.c           | 6 ++++++
 arch/x86/kvm/svm/pmu.c       | 3 ---
 arch/x86/kvm/vmx/pmu_intel.c | 2 +-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 618f529f1c4d..522498945a4a 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -415,6 +415,9 @@ void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
 
 bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 {
+	if (!vcpu->kvm->arch.enable_pmu)
+		return false;
+
 	return static_call(kvm_x86_pmu_msr_idx_to_pmc)(vcpu, msr) ||
 		static_call(kvm_x86_pmu_is_valid_msr)(vcpu, msr);
 }
@@ -445,6 +448,9 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  */
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 {
+	if (!vcpu->kvm->arch.enable_pmu)
+		return;
+
 	static_call(kvm_x86_pmu_refresh)(vcpu);
 }
 
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 57ab4739eb19..68b9e22c84d2 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -101,9 +101,6 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
 
-	if (!vcpu->kvm->arch.enable_pmu)
-		return NULL;
-
 	switch (msr) {
 	case MSR_F15H_PERF_CTL0:
 	case MSR_F15H_PERF_CTL1:
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 9db662399487..3f15ec2dd4b3 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -493,7 +493,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	pmu->raw_event_mask = X86_RAW_EVENT_MASK;
 
 	entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
-	if (!entry || !vcpu->kvm->arch.enable_pmu)
+	if (!entry)
 		return;
 
 	eax.full = entry->eax;
 	edx.full = entry->edx;

From patchwork Tue May 10 11:57:17 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12844976
From: Like Xu <likexu@tencent.com>
To: Paolo Bonzini
Cc: Jim Mattson, sandipan.das@amd.com, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] KVM: x86/svm/pmu: Direct access pmu->gp_counter[] to implement amd_*_to_pmc()
Date: Tue, 10 May 2022 19:57:17 +0800
Message-Id: <20220510115718.93335-2-likexu@tencent.com>
In-Reply-To: <20220510115718.93335-1-likexu@tencent.com>
References: <20220510115718.93335-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

AMD only has gp counters, and their corresponding vPMCs are initialised
and stored in pmu->gp_counters[] in order of idx, so this array can be
indexed directly with any valid pmc->idx, without help from any other
interface. amd_rdpmc_ecx_to_pmc() can now reuse this code quite
naturally.

Opportunistically apply array_index_nospec() to reduce the attack
surface of speculative execution.
Signed-off-by: Like Xu <likexu@tencent.com>
---
 arch/x86/kvm/svm/pmu.c | 36 +++++++++++-------------------------
 1 file changed, 11 insertions(+), 25 deletions(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 68b9e22c84d2..4668baf762d2 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -45,6 +45,16 @@ static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
 };
 
+static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
+{
+	unsigned int num_counters = pmu->nr_arch_gp_counters;
+
+	if (pmc_idx >= num_counters)
+		return NULL;
+
+	return &pmu->gp_counters[array_index_nospec(pmc_idx, num_counters)];
+}
+
 static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
@@ -164,22 +174,6 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 	return true;
 }
 
-static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
-{
-	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
-	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
-
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
-		/*
-		 * The idx is contiguous. The MSRs are not. The counter MSRs
-		 * are interleaved with the event select MSRs.
-		 */
-		pmc_idx *= 2;
-	}
-
-	return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
-}
-
 static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -193,15 +187,7 @@ static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 static struct kvm_pmc *amd_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,
 	unsigned int idx, u64 *mask)
 {
-	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	struct kvm_pmc *counters;
-
-	idx &= ~(3u << 30);
-	if (idx >= pmu->nr_arch_gp_counters)
-		return NULL;
-	counters = pmu->gp_counters;
-
-	return &counters[idx];
+	return amd_pmc_idx_to_pmc(vcpu_to_pmu(vcpu), idx & ~(3u << 30));
 }
 
 static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)

From patchwork Tue May 10 11:57:18 2022
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12844977
From: Like Xu <likexu@tencent.com>
To: Paolo Bonzini
Cc: Jim Mattson, sandipan.das@amd.com, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] KVM: x86/svm/pmu: Drop 'enum index' for more counters scalability
Date: Tue, 10 May 2022 19:57:18 +0800
Message-Id: <20220510115718.93335-3-likexu@tencent.com>
In-Reply-To: <20220510115718.93335-1-likexu@tencent.com>
References: <20220510115718.93335-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

If the number of AMD gp counters continues to grow, the code will
become very clumsy and the switch-case design of inline
get_gp_pmc_amd() will also bloat the kernel text size.

The target code is taught to manage two groups of MSRs, each
representing a different version of the AMD PMU counter MSRs. The MSR
addresses of each group are contiguous, with no holes, and there is no
intersection between the two sets of addresses, but they are laid out
differently by design, like this:

  [Group A : All counter MSRs are tightly bound to all event select MSRs]

  MSR_K7_EVNTSEL0            0xc0010000
  MSR_K7_EVNTSELi            0xc0010000 + i
  ...
  MSR_K7_EVNTSEL3            0xc0010003
  MSR_K7_PERFCTR0            0xc0010004
  MSR_K7_PERFCTRi            0xc0010004 + i
  ...
  MSR_K7_PERFCTR3            0xc0010007

  [Group B : The counter MSRs are interleaved with the event select MSRs]

  MSR_F15H_PERF_CTL0         0xc0010200
  MSR_F15H_PERF_CTR0         (0xc0010200 + 1)
  ...
  MSR_F15H_PERF_CTLi         (0xc0010200 + 2 * i)
  MSR_F15H_PERF_CTRi         (0xc0010200 + 2 * i + 1)
  ...
  MSR_F15H_PERF_CTL5         (0xc0010200 + 2 * 5)
  MSR_F15H_PERF_CTR5         (0xc0010200 + 2 * 5 + 1)

Rewrite get_gp_pmc_amd() in this way: first determine which group of
registers is accessed by the passed-in 'msr' address, then determine
which msr 'base' is referenced by 'type', applying a different address
scaling ratio for each group, and finally derive the pmc_idx. If the
'base' does not match the 'type', the access remains invalid.
Signed-off-by: Like Xu <likexu@tencent.com>
---
 arch/x86/kvm/svm/pmu.c | 96 ++++++++----------------------------------
 1 file changed, 18 insertions(+), 78 deletions(-)

diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index 4668baf762d2..b1ae249b4779 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -23,16 +23,6 @@ enum pmu_type {
 	PMU_TYPE_EVNTSEL,
 };
 
-enum index {
-	INDEX_ZERO = 0,
-	INDEX_ONE,
-	INDEX_TWO,
-	INDEX_THREE,
-	INDEX_FOUR,
-	INDEX_FIVE,
-	INDEX_ERROR,
-};
-
 /* duplicated from amd_perfmon_event_map, K7 and above should work. */
 static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
@@ -55,11 +45,9 @@ static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 	return &pmu->gp_counters[array_index_nospec(pmc_idx, num_counters)];
 }
 
-static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+static u32 get_msr_base(bool core_ctr, enum pmu_type type)
 {
-	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
-
-	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+	if (core_ctr) {
 		if (type == PMU_TYPE_COUNTER)
 			return MSR_F15H_PERF_CTR;
 		else
@@ -72,77 +60,29 @@ static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
 	}
 }
 
-static enum index msr_to_index(u32 msr)
-{
-	switch (msr) {
-	case MSR_F15H_PERF_CTL0:
-	case MSR_F15H_PERF_CTR0:
-	case MSR_K7_EVNTSEL0:
-	case MSR_K7_PERFCTR0:
-		return INDEX_ZERO;
-	case MSR_F15H_PERF_CTL1:
-	case MSR_F15H_PERF_CTR1:
-	case MSR_K7_EVNTSEL1:
-	case MSR_K7_PERFCTR1:
-		return INDEX_ONE;
-	case MSR_F15H_PERF_CTL2:
-	case MSR_F15H_PERF_CTR2:
-	case MSR_K7_EVNTSEL2:
-	case MSR_K7_PERFCTR2:
-		return INDEX_TWO;
-	case MSR_F15H_PERF_CTL3:
-	case MSR_F15H_PERF_CTR3:
-	case MSR_K7_EVNTSEL3:
-	case MSR_K7_PERFCTR3:
-		return INDEX_THREE;
-	case MSR_F15H_PERF_CTL4:
-	case MSR_F15H_PERF_CTR4:
-		return INDEX_FOUR;
-	case MSR_F15H_PERF_CTL5:
-	case MSR_F15H_PERF_CTR5:
-		return INDEX_FIVE;
-	default:
-		return INDEX_ERROR;
-	}
-}
-
 static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
 					     enum pmu_type type)
 {
 	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+	unsigned int ratio = 0;
+	unsigned int pmc_idx;
+	u32 base;
 
-	switch (msr) {
-	case MSR_F15H_PERF_CTL0:
-	case MSR_F15H_PERF_CTL1:
-	case MSR_F15H_PERF_CTL2:
-	case MSR_F15H_PERF_CTL3:
-	case MSR_F15H_PERF_CTL4:
-	case MSR_F15H_PERF_CTL5:
-		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
-			return NULL;
-		fallthrough;
-	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
-		if (type != PMU_TYPE_EVNTSEL)
-			return NULL;
-		break;
-	case MSR_F15H_PERF_CTR0:
-	case MSR_F15H_PERF_CTR1:
-	case MSR_F15H_PERF_CTR2:
-	case MSR_F15H_PERF_CTR3:
-	case MSR_F15H_PERF_CTR4:
-	case MSR_F15H_PERF_CTR5:
-		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
-			return NULL;
-		fallthrough;
-	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
-		if (type != PMU_TYPE_COUNTER)
-			return NULL;
-		break;
-	default:
-		return NULL;
+	/* MSR_K7_* MSRs are still visible to PERFCTR_CORE guest. */
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE) &&
+	    msr >= MSR_F15H_PERF_CTL0 && msr <= MSR_F15H_PERF_CTR5) {
+		base = get_msr_base(true, type);
+		ratio = 2;
+	} else if (msr >= MSR_K7_EVNTSEL0 && msr <= MSR_K7_PERFCTR3) {
+		base = get_msr_base(false, type);
+		ratio = 1;
 	}
 
-	return &pmu->gp_counters[msr_to_index(msr)];
+	if (!ratio || msr < base)
+		return NULL;
+
+	pmc_idx = (unsigned int)((msr - base) / ratio);
+	return amd_pmc_idx_to_pmc(pmu, pmc_idx);
 }
 
 static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)