From patchwork Mon Nov 8 11:10:26 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608267
From: Like Xu
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li, Jim Mattson, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/7] KVM: x86: Export kvm_pmu_is_valid_msr() for nVMX
Date: Mon, 8 Nov 2021 19:10:26 +0800
Message-Id: <20211108111032.24457-2-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References: <20211108111032.24457-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

Export kvm_pmu_is_valid_msr() for nVMX, instead of exposing all of kvm_pmu_ops for this one case. The reduced access scope will help to optimize the kvm_x86_ops.pmu_ops handling later in the series.
Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/pmu.c        | 1 +
 arch/x86/kvm/vmx/nested.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 0772bad9165c..aa6ac9c7aff2 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -398,6 +398,7 @@ bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	return kvm_x86_ops.pmu_ops->msr_idx_to_pmc(vcpu, msr) ||
 		kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, msr);
 }
+EXPORT_SYMBOL_GPL(kvm_pmu_is_valid_msr);
 
 static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr)
 {
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b4ee5e9f9e20..6c6bc8b2072a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4796,7 +4796,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
 		return;
 
 	vmx = to_vmx(vcpu);
-	if (kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
+	if (kvm_pmu_is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
 		vmx->nested.msrs.entry_ctls_high |=
 			VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
 		vmx->nested.msrs.exit_ctls_high |=

From patchwork Mon Nov 8 11:10:27 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608275
From: Like Xu
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li, Jim Mattson, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/7] KVM: x86: Fix WARNING that macros should not use a trailing semicolon
Date: Mon, 8 Nov 2021 19:10:27 +0800
Message-Id: <20211108111032.24457-3-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References: <20211108111032.24457-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

scripts/checkpatch.pl complains about the KVM_X86_OP macro:

WARNING: macros should not use a trailing semicolon
+#define KVM_X86_OP(func) \
+	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm-x86-ops.h | 218 ++++++++++++++---------------
 arch/x86/include/asm/kvm_host.h    |   4 +-
 arch/x86/kvm/x86.c                 |   2 +-
 3 files changed, 112 insertions(+), 112 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index cefe1d81e2e8..8f1035cc2c06 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -12,115 +12,115 @@ BUILD_BUG_ON(1)
  * case where there is no definition or a function name that
  * doesn't match the typical naming convention is supplied.
  */
-KVM_X86_OP_NULL(hardware_enable)
-KVM_X86_OP_NULL(hardware_disable)
-KVM_X86_OP_NULL(hardware_unsetup)
-KVM_X86_OP_NULL(cpu_has_accelerated_tpr)
-KVM_X86_OP(has_emulated_msr)
-KVM_X86_OP(vcpu_after_set_cpuid)
-KVM_X86_OP(vm_init)
-KVM_X86_OP_NULL(vm_destroy)
-KVM_X86_OP(vcpu_create)
-KVM_X86_OP(vcpu_free)
-KVM_X86_OP(vcpu_reset)
-KVM_X86_OP(prepare_guest_switch)
-KVM_X86_OP(vcpu_load)
-KVM_X86_OP(vcpu_put)
-KVM_X86_OP(update_exception_bitmap)
-KVM_X86_OP(get_msr)
-KVM_X86_OP(set_msr)
-KVM_X86_OP(get_segment_base)
-KVM_X86_OP(get_segment)
-KVM_X86_OP(get_cpl)
-KVM_X86_OP(set_segment)
-KVM_X86_OP_NULL(get_cs_db_l_bits)
-KVM_X86_OP(set_cr0)
-KVM_X86_OP(is_valid_cr4)
-KVM_X86_OP(set_cr4)
-KVM_X86_OP(set_efer)
-KVM_X86_OP(get_idt)
-KVM_X86_OP(set_idt)
-KVM_X86_OP(get_gdt)
-KVM_X86_OP(set_gdt)
-KVM_X86_OP(sync_dirty_debug_regs)
-KVM_X86_OP(set_dr7)
-KVM_X86_OP(cache_reg)
-KVM_X86_OP(get_rflags)
-KVM_X86_OP(set_rflags)
-KVM_X86_OP(tlb_flush_all)
-KVM_X86_OP(tlb_flush_current)
-KVM_X86_OP_NULL(tlb_remote_flush)
-KVM_X86_OP_NULL(tlb_remote_flush_with_range)
-KVM_X86_OP(tlb_flush_gva)
-KVM_X86_OP(tlb_flush_guest)
-KVM_X86_OP(run)
-KVM_X86_OP_NULL(handle_exit)
-KVM_X86_OP_NULL(skip_emulated_instruction)
-KVM_X86_OP_NULL(update_emulated_instruction)
-KVM_X86_OP(set_interrupt_shadow)
-KVM_X86_OP(get_interrupt_shadow)
-KVM_X86_OP(patch_hypercall)
-KVM_X86_OP(set_irq)
-KVM_X86_OP(set_nmi)
-KVM_X86_OP(queue_exception)
-KVM_X86_OP(cancel_injection)
-KVM_X86_OP(interrupt_allowed)
-KVM_X86_OP(nmi_allowed)
-KVM_X86_OP(get_nmi_mask)
-KVM_X86_OP(set_nmi_mask)
-KVM_X86_OP(enable_nmi_window)
-KVM_X86_OP(enable_irq_window)
-KVM_X86_OP(update_cr8_intercept)
-KVM_X86_OP(check_apicv_inhibit_reasons)
-KVM_X86_OP(refresh_apicv_exec_ctrl)
-KVM_X86_OP(hwapic_irr_update)
-KVM_X86_OP(hwapic_isr_update)
-KVM_X86_OP_NULL(guest_apic_has_interrupt)
-KVM_X86_OP(load_eoi_exitmap)
-KVM_X86_OP(set_virtual_apic_mode)
-KVM_X86_OP_NULL(set_apic_access_page_addr)
-KVM_X86_OP(deliver_posted_interrupt)
-KVM_X86_OP_NULL(sync_pir_to_irr)
-KVM_X86_OP(set_tss_addr)
-KVM_X86_OP(set_identity_map_addr)
-KVM_X86_OP(get_mt_mask)
-KVM_X86_OP(load_mmu_pgd)
-KVM_X86_OP_NULL(has_wbinvd_exit)
-KVM_X86_OP(get_l2_tsc_offset)
-KVM_X86_OP(get_l2_tsc_multiplier)
-KVM_X86_OP(write_tsc_offset)
-KVM_X86_OP(write_tsc_multiplier)
-KVM_X86_OP(get_exit_info)
-KVM_X86_OP(check_intercept)
-KVM_X86_OP(handle_exit_irqoff)
-KVM_X86_OP_NULL(request_immediate_exit)
-KVM_X86_OP(sched_in)
-KVM_X86_OP_NULL(update_cpu_dirty_logging)
-KVM_X86_OP_NULL(pre_block)
-KVM_X86_OP_NULL(post_block)
-KVM_X86_OP_NULL(vcpu_blocking)
-KVM_X86_OP_NULL(vcpu_unblocking)
-KVM_X86_OP_NULL(update_pi_irte)
-KVM_X86_OP_NULL(start_assignment)
-KVM_X86_OP_NULL(apicv_post_state_restore)
-KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt)
-KVM_X86_OP_NULL(set_hv_timer)
-KVM_X86_OP_NULL(cancel_hv_timer)
-KVM_X86_OP(setup_mce)
-KVM_X86_OP(smi_allowed)
-KVM_X86_OP(enter_smm)
-KVM_X86_OP(leave_smm)
-KVM_X86_OP(enable_smi_window)
-KVM_X86_OP_NULL(mem_enc_op)
-KVM_X86_OP_NULL(mem_enc_reg_region)
-KVM_X86_OP_NULL(mem_enc_unreg_region)
-KVM_X86_OP(get_msr_feature)
-KVM_X86_OP(can_emulate_instruction)
-KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP_NULL(enable_direct_tlbflush)
-KVM_X86_OP_NULL(migrate_timers)
-KVM_X86_OP(msr_filter_changed)
-KVM_X86_OP_NULL(complete_emulated_msr)
+KVM_X86_OP_NULL(hardware_enable);
+KVM_X86_OP_NULL(hardware_disable);
+KVM_X86_OP_NULL(hardware_unsetup);
+KVM_X86_OP_NULL(cpu_has_accelerated_tpr);
+KVM_X86_OP(has_emulated_msr);
+KVM_X86_OP(vcpu_after_set_cpuid);
+KVM_X86_OP(vm_init);
+KVM_X86_OP_NULL(vm_destroy);
+KVM_X86_OP(vcpu_create);
+KVM_X86_OP(vcpu_free);
+KVM_X86_OP(vcpu_reset);
+KVM_X86_OP(prepare_guest_switch);
+KVM_X86_OP(vcpu_load);
+KVM_X86_OP(vcpu_put);
+KVM_X86_OP(update_exception_bitmap);
+KVM_X86_OP(get_msr);
+KVM_X86_OP(set_msr);
+KVM_X86_OP(get_segment_base);
+KVM_X86_OP(get_segment);
+KVM_X86_OP(get_cpl);
+KVM_X86_OP(set_segment);
+KVM_X86_OP_NULL(get_cs_db_l_bits);
+KVM_X86_OP(set_cr0);
+KVM_X86_OP(is_valid_cr4);
+KVM_X86_OP(set_cr4);
+KVM_X86_OP(set_efer);
+KVM_X86_OP(get_idt);
+KVM_X86_OP(set_idt);
+KVM_X86_OP(get_gdt);
+KVM_X86_OP(set_gdt);
+KVM_X86_OP(sync_dirty_debug_regs);
+KVM_X86_OP(set_dr7);
+KVM_X86_OP(cache_reg);
+KVM_X86_OP(get_rflags);
+KVM_X86_OP(set_rflags);
+KVM_X86_OP(tlb_flush_all);
+KVM_X86_OP(tlb_flush_current);
+KVM_X86_OP_NULL(tlb_remote_flush);
+KVM_X86_OP_NULL(tlb_remote_flush_with_range);
+KVM_X86_OP(tlb_flush_gva);
+KVM_X86_OP(tlb_flush_guest);
+KVM_X86_OP(run);
+KVM_X86_OP_NULL(handle_exit);
+KVM_X86_OP_NULL(skip_emulated_instruction);
+KVM_X86_OP_NULL(update_emulated_instruction);
+KVM_X86_OP(set_interrupt_shadow);
+KVM_X86_OP(get_interrupt_shadow);
+KVM_X86_OP(patch_hypercall);
+KVM_X86_OP(set_irq);
+KVM_X86_OP(set_nmi);
+KVM_X86_OP(queue_exception);
+KVM_X86_OP(cancel_injection);
+KVM_X86_OP(interrupt_allowed);
+KVM_X86_OP(nmi_allowed);
+KVM_X86_OP(get_nmi_mask);
+KVM_X86_OP(set_nmi_mask);
+KVM_X86_OP(enable_nmi_window);
+KVM_X86_OP(enable_irq_window);
+KVM_X86_OP(update_cr8_intercept);
+KVM_X86_OP(check_apicv_inhibit_reasons);
+KVM_X86_OP(refresh_apicv_exec_ctrl);
+KVM_X86_OP(hwapic_irr_update);
+KVM_X86_OP(hwapic_isr_update);
+KVM_X86_OP_NULL(guest_apic_has_interrupt);
+KVM_X86_OP(load_eoi_exitmap);
+KVM_X86_OP(set_virtual_apic_mode);
+KVM_X86_OP_NULL(set_apic_access_page_addr);
+KVM_X86_OP(deliver_posted_interrupt);
+KVM_X86_OP_NULL(sync_pir_to_irr);
+KVM_X86_OP(set_tss_addr);
+KVM_X86_OP(set_identity_map_addr);
+KVM_X86_OP(get_mt_mask);
+KVM_X86_OP(load_mmu_pgd);
+KVM_X86_OP_NULL(has_wbinvd_exit);
+KVM_X86_OP(get_l2_tsc_offset);
+KVM_X86_OP(get_l2_tsc_multiplier);
+KVM_X86_OP(write_tsc_offset);
+KVM_X86_OP(write_tsc_multiplier);
+KVM_X86_OP(get_exit_info);
+KVM_X86_OP(check_intercept);
+KVM_X86_OP(handle_exit_irqoff);
+KVM_X86_OP_NULL(request_immediate_exit);
+KVM_X86_OP(sched_in);
+KVM_X86_OP_NULL(update_cpu_dirty_logging);
+KVM_X86_OP_NULL(pre_block);
+KVM_X86_OP_NULL(post_block);
+KVM_X86_OP_NULL(vcpu_blocking);
+KVM_X86_OP_NULL(vcpu_unblocking);
+KVM_X86_OP_NULL(update_pi_irte);
+KVM_X86_OP_NULL(start_assignment);
+KVM_X86_OP_NULL(apicv_post_state_restore);
+KVM_X86_OP_NULL(dy_apicv_has_pending_interrupt);
+KVM_X86_OP_NULL(set_hv_timer);
+KVM_X86_OP_NULL(cancel_hv_timer);
+KVM_X86_OP(setup_mce);
+KVM_X86_OP(smi_allowed);
+KVM_X86_OP(enter_smm);
+KVM_X86_OP(leave_smm);
+KVM_X86_OP(enable_smi_window);
+KVM_X86_OP_NULL(mem_enc_op);
+KVM_X86_OP_NULL(mem_enc_reg_region);
+KVM_X86_OP_NULL(mem_enc_unreg_region);
+KVM_X86_OP(get_msr_feature);
+KVM_X86_OP(can_emulate_instruction);
+KVM_X86_OP(apic_init_signal_blocked);
+KVM_X86_OP_NULL(enable_direct_tlbflush);
+KVM_X86_OP_NULL(migrate_timers);
+KVM_X86_OP(msr_filter_changed);
+KVM_X86_OP_NULL(complete_emulated_msr);
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_NULL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 88fce6ab4bbd..c2a4a362f3e2 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1532,14 +1532,14 @@ extern bool __read_mostly enable_apicv;
 extern struct kvm_x86_ops kvm_x86_ops;
 
 #define KVM_X86_OP(func) \
-	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
+	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func))
 #define KVM_X86_OP_NULL KVM_X86_OP
 #include
 
 static inline void kvm_ops_static_call_update(void)
 {
 #define KVM_X86_OP(func) \
-	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
+	static_call_update(kvm_x86_##func, kvm_x86_ops.func)
 #define KVM_X86_OP_NULL KVM_X86_OP
 #include
 }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ac83d873d65b..775051070627 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -125,7 +125,7 @@ EXPORT_SYMBOL_GPL(kvm_x86_ops);
 
 #define KVM_X86_OP(func) \
 	DEFINE_STATIC_CALL_NULL(kvm_x86_##func, \
-				*(((struct kvm_x86_ops *)0)->func));
+				*(((struct kvm_x86_ops *)0)->func))
 #define KVM_X86_OP_NULL KVM_X86_OP
 #include
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);

From patchwork Mon Nov 8 11:10:28 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608269
From: Like Xu
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li, Jim Mattson, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/7] KVM: x86: Move kvm_ops_static_call_update() to x86.c
Date: Mon, 8 Nov 2021 19:10:28 +0800
Message-Id: <20211108111032.24457-4-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References: <20211108111032.24457-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

kvm_ops_static_call_update() is defined in kvm_host.h. That is completely unnecessary, as it should have exactly one caller, kvm_arch_hardware_setup(). As a prep patch, move kvm_ops_static_call_update() to x86.c, where it can then reference the kvm_pmu_ops stuff.
Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm_host.h | 8 --------
 arch/x86/kvm/x86.c              | 8 ++++++++
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c2a4a362f3e2..c2d4ee2973c5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1536,14 +1536,6 @@ extern struct kvm_x86_ops kvm_x86_ops;
 #define KVM_X86_OP_NULL KVM_X86_OP
 #include
 
-static inline void kvm_ops_static_call_update(void)
-{
-#define KVM_X86_OP(func) \
-	static_call_update(kvm_x86_##func, kvm_x86_ops.func)
-#define KVM_X86_OP_NULL KVM_X86_OP
-#include
-}
-
 #define __KVM_HAVE_ARCH_VM_ALLOC
 static inline struct kvm *kvm_arch_alloc_vm(void)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 775051070627..0aee0a078d6f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11300,6 +11300,14 @@ void kvm_arch_hardware_disable(void)
 	drop_user_return_notifiers();
 }
 
+static inline void kvm_ops_static_call_update(void)
+{
+#define KVM_X86_OP(func) \
+	static_call_update(kvm_x86_##func, kvm_x86_ops.func)
+#define KVM_X86_OP_NULL KVM_X86_OP
+#include
+}
+
 int kvm_arch_hardware_setup(void *opaque)
 {
 	struct kvm_x86_init_ops *ops = opaque;

From patchwork Mon Nov 8 11:10:29 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608271
From: Like Xu
To: Paolo Bonzini, Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li, Jim Mattson, Vitaly Kuznetsov, Joerg Roedel, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/7] KVM: x86: Copy kvm_pmu_ops by value to eliminate layer of indirection
Date: Mon, 8 Nov 2021 19:10:29 +0800
Message-Id: <20211108111032.24457-5-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References: <20211108111032.24457-1-likexu@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Like Xu

Replace the kvm_pmu_ops pointer in common x86 with an instance of the struct to save one pointer dereference when invoking functions. Copy the struct by value to set the ops during kvm_init().

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 41 ++++++++++++++++++++++-------------------
 arch/x86/kvm/pmu.h |  4 +++-
 arch/x86/kvm/x86.c |  1 +
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index aa6ac9c7aff2..353989bf0102 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -47,6 +47,9 @@
  *
  * AMD: [0 ..
AMD64_NUM_COUNTERS-1] <=> gp counters */ +struct kvm_pmu_ops kvm_pmu_ops __read_mostly; +EXPORT_SYMBOL_GPL(kvm_pmu_ops); + static void kvm_pmi_trigger_fn(struct irq_work *irq_work) { struct kvm_pmu *pmu = container_of(irq_work, struct kvm_pmu, irq_work); @@ -214,7 +217,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel) ARCH_PERFMON_EVENTSEL_CMASK | HSW_IN_TX | HSW_IN_TX_CHECKPOINTED))) { - config = kvm_x86_ops.pmu_ops->find_arch_event(pmc_to_pmu(pmc), + config = kvm_pmu_ops.find_arch_event(pmc_to_pmu(pmc), event_select, unit_mask); if (config != PERF_COUNT_HW_MAX) @@ -268,7 +271,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx) pmc->current_config = (u64)ctrl; pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE, - kvm_x86_ops.pmu_ops->find_fixed_event(idx), + kvm_pmu_ops.find_fixed_event(idx), !(en_field & 0x2), /* exclude user */ !(en_field & 0x1), /* exclude kernel */ pmi, false, false); @@ -277,7 +280,7 @@ EXPORT_SYMBOL_GPL(reprogram_fixed_counter); void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx) { - struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx); + struct kvm_pmc *pmc = kvm_pmu_ops.pmc_idx_to_pmc(pmu, pmc_idx); if (!pmc) return; @@ -299,7 +302,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu) int bit; for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) { - struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, bit); + struct kvm_pmc *pmc = kvm_pmu_ops.pmc_idx_to_pmc(pmu, bit); if (unlikely(!pmc || !pmc->perf_event)) { clear_bit(bit, pmu->reprogram_pmi); @@ -321,7 +324,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu) /* check if idx is a valid index to access PMU */ int kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx) { - return kvm_x86_ops.pmu_ops->is_valid_rdpmc_ecx(vcpu, idx); + return kvm_pmu_ops.is_valid_rdpmc_ecx(vcpu, idx); } bool is_vmware_backdoor_pmc(u32 pmc_idx) @@ -371,7 +374,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 
*data) if (is_vmware_backdoor_pmc(idx)) return kvm_pmu_rdpmc_vmware(vcpu, idx, data); - pmc = kvm_x86_ops.pmu_ops->rdpmc_ecx_to_pmc(vcpu, idx, &mask); + pmc = kvm_pmu_ops.rdpmc_ecx_to_pmc(vcpu, idx, &mask); if (!pmc) return 1; @@ -387,23 +390,23 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data) void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu) { if (lapic_in_kernel(vcpu)) { - if (kvm_x86_ops.pmu_ops->deliver_pmi) - kvm_x86_ops.pmu_ops->deliver_pmi(vcpu); + if (kvm_pmu_ops.deliver_pmi) + kvm_pmu_ops.deliver_pmi(vcpu); kvm_apic_local_deliver(vcpu->arch.apic, APIC_LVTPC); } } bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr) { - return kvm_x86_ops.pmu_ops->msr_idx_to_pmc(vcpu, msr) || - kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, msr); + return kvm_pmu_ops.msr_idx_to_pmc(vcpu, msr) || + kvm_pmu_ops.is_valid_msr(vcpu, msr); } EXPORT_SYMBOL_GPL(kvm_pmu_is_valid_msr); static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr) { struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); - struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->msr_idx_to_pmc(vcpu, msr); + struct kvm_pmc *pmc = kvm_pmu_ops.msr_idx_to_pmc(vcpu, msr); if (pmc) __set_bit(pmc->idx, pmu->pmc_in_use); @@ -411,13 +414,13 @@ static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr) int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { - return kvm_x86_ops.pmu_ops->get_msr(vcpu, msr_info); + return kvm_pmu_ops.get_msr(vcpu, msr_info); } int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { kvm_pmu_mark_pmc_in_use(vcpu, msr_info->index); - return kvm_x86_ops.pmu_ops->set_msr(vcpu, msr_info); + return kvm_pmu_ops.set_msr(vcpu, msr_info); } /* refresh PMU settings. 
This function generally is called when underlying @@ -426,7 +429,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) */ void kvm_pmu_refresh(struct kvm_vcpu *vcpu) { - kvm_x86_ops.pmu_ops->refresh(vcpu); + kvm_pmu_ops.refresh(vcpu); } void kvm_pmu_reset(struct kvm_vcpu *vcpu) @@ -434,7 +437,7 @@ void kvm_pmu_reset(struct kvm_vcpu *vcpu) struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); irq_work_sync(&pmu->irq_work); - kvm_x86_ops.pmu_ops->reset(vcpu); + kvm_pmu_ops.reset(vcpu); } void kvm_pmu_init(struct kvm_vcpu *vcpu) @@ -442,7 +445,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu) struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); memset(pmu, 0, sizeof(*pmu)); - kvm_x86_ops.pmu_ops->init(vcpu); + kvm_pmu_ops.init(vcpu); init_irq_work(&pmu->irq_work, kvm_pmi_trigger_fn); pmu->event_count = 0; pmu->need_cleanup = false; @@ -474,14 +477,14 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu) pmu->pmc_in_use, X86_PMC_IDX_MAX); for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) { - pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, i); + pmc = kvm_pmu_ops.pmc_idx_to_pmc(pmu, i); if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc)) pmc_stop_counter(pmc); } - if (kvm_x86_ops.pmu_ops->cleanup) - kvm_x86_ops.pmu_ops->cleanup(vcpu); + if (kvm_pmu_ops.cleanup) + kvm_pmu_ops.cleanup(vcpu); bitmap_zero(pmu->pmc_in_use, X86_PMC_IDX_MAX); } diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h index 0e4f2b1fa9fb..b2fe135d395a 100644 --- a/arch/x86/kvm/pmu.h +++ b/arch/x86/kvm/pmu.h @@ -17,6 +17,8 @@ #define MAX_FIXED_COUNTERS 3 +extern struct kvm_pmu_ops kvm_pmu_ops; + struct kvm_event_hw_type_mapping { u8 eventsel; u8 unit_mask; @@ -92,7 +94,7 @@ static inline bool pmc_is_fixed(struct kvm_pmc *pmc) static inline bool pmc_is_enabled(struct kvm_pmc *pmc) { - return kvm_x86_ops.pmu_ops->pmc_is_enabled(pmc); + return kvm_pmu_ops.pmc_is_enabled(pmc); } static inline bool kvm_valid_perf_global_ctrl(struct kvm_pmu *pmu, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 
0aee0a078d6f..ca9a76abb6ba 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11323,6 +11323,7 @@ int kvm_arch_hardware_setup(void *opaque)
 		return r;
 
 	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
+	memcpy(&kvm_pmu_ops, kvm_x86_ops.pmu_ops, sizeof(kvm_pmu_ops));
 	kvm_ops_static_call_update();
 
 	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))

From patchwork Mon Nov 8 11:10:30 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608273
From: Like Xu
To: Paolo Bonzini , Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li , Jim Mattson , Vitaly Kuznetsov , Joerg Roedel , linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/7] KVM: x86: Move .pmu_ops to kvm_x86_init_ops and tagged as __initdata
Date: Mon, 8 Nov 2021 19:10:30 +0800
Message-Id: <20211108111032.24457-6-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References:
<20211108111032.24457-1-likexu@tencent.com>

From: Like Xu

The pmu_ops should be moved to kvm_x86_init_ops and tagged as __initdata.
That'll save those precious few bytes, and more importantly make the
original ops unreachable, i.e. make it harder to sneak in post-init
modification bugs.

Suggested-by: Sean Christopherson
Signed-off-by: Like Xu
Reviewed-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/svm/pmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 2 +-
 arch/x86/kvm/vmx/pmu_intel.c    | 2 +-
 arch/x86/kvm/vmx/vmx.c          | 2 +-
 arch/x86/kvm/x86.c              | 2 +-
 6 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c2d4ee2973c5..00760a3ac88c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1436,8 +1436,7 @@ struct kvm_x86_ops {
 	int cpu_dirty_log_size;
 	void (*update_cpu_dirty_logging)(struct kvm_vcpu *vcpu);
 
-	/* pmu operations of sub-arch */
-	const struct kvm_pmu_ops *pmu_ops;
+	/* nested operations of sub-arch */
 	const struct kvm_x86_nested_ops *nested_ops;
 
 	/*
@@ -1516,6 +1515,7 @@ struct kvm_x86_init_ops {
 	int (*hardware_setup)(void);
 
 	struct kvm_x86_ops *runtime_ops;
+	struct kvm_pmu_ops *pmu_ops;
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index fdf587f19c5f..4554cbc3820c 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -319,7 +319,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	}
 }
 
-struct kvm_pmu_ops amd_pmu_ops = {
+struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.find_arch_event = amd_find_arch_event,
 	.find_fixed_event = amd_find_fixed_event,
 	.pmc_is_enabled = amd_pmc_is_enabled,
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 21bb81710e0f..8834d7d2b991 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4681,7 +4681,6 @@ static struct kvm_x86_ops
svm_x86_ops __initdata = {
 	.sched_in = svm_sched_in,
 
-	.pmu_ops = &amd_pmu_ops,
 	.nested_ops = &svm_nested_ops,
 
 	.deliver_posted_interrupt = svm_deliver_avic_intr,
@@ -4717,6 +4716,7 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
 	.check_processor_compatibility = svm_check_processor_compat,
 
 	.runtime_ops = &svm_x86_ops,
+	.pmu_ops = &amd_pmu_ops,
 };
 
 static int __init svm_init(void)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index b8e0d21b7c8a..c0b905d032c8 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -703,7 +703,7 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 	intel_pmu_release_guest_lbr_event(vcpu);
 }
 
-struct kvm_pmu_ops intel_pmu_ops = {
+struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.find_arch_event = intel_find_arch_event,
 	.find_fixed_event = intel_find_fixed_event,
 	.pmc_is_enabled = intel_pmc_is_enabled,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 71f54d85f104..ce787d2e8e08 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7680,7 +7680,6 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.pre_block = vmx_pre_block,
 	.post_block = vmx_post_block,
 
-	.pmu_ops = &intel_pmu_ops,
 	.nested_ops = &vmx_nested_ops,
 
 	.update_pi_irte = pi_update_irte,
@@ -7922,6 +7921,7 @@ static struct kvm_x86_init_ops vmx_init_ops __initdata = {
 	.hardware_setup = hardware_setup,
 
 	.runtime_ops = &vmx_x86_ops,
+	.pmu_ops = &intel_pmu_ops,
 };
 
 static void vmx_cleanup_l1d_flush(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ca9a76abb6ba..70dc8f41329c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11323,7 +11323,7 @@ int kvm_arch_hardware_setup(void *opaque)
 		return r;
 
 	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
-	memcpy(&kvm_pmu_ops, kvm_x86_ops.pmu_ops, sizeof(kvm_pmu_ops));
+	memcpy(&kvm_pmu_ops, ops->pmu_ops, sizeof(kvm_pmu_ops));
 	kvm_ops_static_call_update();
 
 	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))

From
patchwork Mon Nov 8 11:10:31 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608277
From: Like Xu
To: Paolo Bonzini , Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li , Jim Mattson , Vitaly Kuznetsov , Joerg Roedel , linux-kernel@vger.kernel.org
Subject: [PATCH v2 6/7] KVM: x86: Introduce definitions to support static calls for kvm_pmu_ops
Date: Mon, 8 Nov 2021 19:10:31 +0800
Message-Id: <20211108111032.24457-7-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References: <20211108111032.24457-1-likexu@tencent.com>

From: Like Xu

Use static calls to improve kvm_pmu_ops performance. Introduce the
definitions that will be used by a subsequent patch to actualize the
savings. Add a new kvm-x86-pmu-ops.h header that can be used for the
definition of static calls.
This header is also intended to be used to simplify the definition of
amd_pmu_ops and intel_pmu_ops. Like what we did for kvm_x86_ops,
'pmu_ops' can be covered by static calls in a similar manner for an
insignificant but not negligible performance impact, especially on
older models.

Signed-off-by: Like Xu
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h | 32 ++++++++++++++++++++++++++
 arch/x86/kvm/pmu.c                     |  6 +++++
 arch/x86/kvm/pmu.h                     |  5 ++++
 3 files changed, 43 insertions(+)
 create mode 100644 arch/x86/include/asm/kvm-x86-pmu-ops.h

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
new file mode 100644
index 000000000000..b7713b16d21d
--- /dev/null
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#if !defined(KVM_X86_PMU_OP) || !defined(KVM_X86_PMU_OP_NULL)
+BUILD_BUG_ON(1)
+#endif
+
+/*
+ * KVM_X86_PMU_OP() and KVM_X86_PMU_OP_NULL() are used to
+ * help generate "static_call()"s. They are also intended for use when defining
+ * the amd/intel KVM_X86_PMU_OPs. KVM_X86_PMU_OP() can be used
+ * for those functions that follow the [amd|intel]_func_name convention.
+ * KVM_X86_PMU_OP_NULL() can leave a NULL definition for the
+ * case where there is no definition or a function name that
+ * doesn't match the typical naming convention is supplied.
+ */
+KVM_X86_PMU_OP(find_arch_event);
+KVM_X86_PMU_OP(find_fixed_event);
+KVM_X86_PMU_OP(pmc_is_enabled);
+KVM_X86_PMU_OP(pmc_idx_to_pmc);
+KVM_X86_PMU_OP(rdpmc_ecx_to_pmc);
+KVM_X86_PMU_OP(msr_idx_to_pmc);
+KVM_X86_PMU_OP(is_valid_rdpmc_ecx);
+KVM_X86_PMU_OP(is_valid_msr);
+KVM_X86_PMU_OP(get_msr);
+KVM_X86_PMU_OP(set_msr);
+KVM_X86_PMU_OP(refresh);
+KVM_X86_PMU_OP(init);
+KVM_X86_PMU_OP(reset);
+KVM_X86_PMU_OP_NULL(deliver_pmi);
+KVM_X86_PMU_OP_NULL(cleanup);
+
+#undef KVM_X86_PMU_OP
+#undef KVM_X86_PMU_OP_NULL
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 353989bf0102..bfdd9f2bc0fa 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -50,6 +50,12 @@
 struct kvm_pmu_ops kvm_pmu_ops __read_mostly;
 EXPORT_SYMBOL_GPL(kvm_pmu_ops);
 
+#define KVM_X86_PMU_OP(func)					     \
+	DEFINE_STATIC_CALL_NULL(kvm_x86_pmu_##func,		     \
+				*(((struct kvm_pmu_ops *)0)->func))
+#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP
+#include <asm/kvm-x86-pmu-ops.h>
+
 static void kvm_pmi_trigger_fn(struct irq_work *irq_work)
 {
 	struct kvm_pmu *pmu = container_of(irq_work, struct kvm_pmu, irq_work);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index b2fe135d395a..40e0b523637b 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -45,6 +45,11 @@ struct kvm_pmu_ops {
 	void (*cleanup)(struct kvm_vcpu *vcpu);
 };
 
+#define KVM_X86_PMU_OP(func) \
+	DECLARE_STATIC_CALL(kvm_x86_pmu_##func, *(((struct kvm_pmu_ops *)0)->func))
+#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP
+#include <asm/kvm-x86-pmu-ops.h>
+
 static inline u64 pmc_bitmask(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);

From patchwork Mon Nov 8 11:10:32 2021
X-Patchwork-Submitter: Like Xu
X-Patchwork-Id: 12608279
From: Like Xu
To: Paolo Bonzini , Sean Christopherson
Cc: kvm@vger.kernel.org, Wanpeng Li , Jim Mattson , Vitaly Kuznetsov , Joerg Roedel , linux-kernel@vger.kernel.org
Subject: [PATCH v2 7/7] KVM: x86: Use static calls to reduce kvm_pmu_ops overhead
Date: Mon, 8 Nov 2021 19:10:32 +0800
Message-Id: <20211108111032.24457-8-likexu@tencent.com>
In-Reply-To: <20211108111032.24457-1-likexu@tencent.com>
References: <20211108111032.24457-1-likexu@tencent.com>

From: Like Xu

Convert kvm_pmu_ops to use static calls.
Here are the worst sched_clock() nanosecond numbers for the kvm_pmu_ops
functions that are called most often (up to 7 digits of calls) when
running a single perf test case in a guest on an ICX 2.70GHz host
(mitigations=on):

                 | legacy | static call
------------------------------------------------------------
 .pmc_idx_to_pmc | 10946  | 10047 (8%)
 .pmc_is_enabled | 11291  | 11175 (1%)
 .msr_idx_to_pmc | 13526  | 12346 (8%)
 .is_valid_msr   | 10895  | 10484 (3%)

Signed-off-by: Like Xu
---
 arch/x86/kvm/pmu.c | 36 +++++++++++++++++-------------------
 arch/x86/kvm/pmu.h |  2 +-
 arch/x86/kvm/x86.c |  5 +++++
 3 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index bfdd9f2bc0fa..c86ff3057e2c 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -223,7 +223,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 			  ARCH_PERFMON_EVENTSEL_CMASK |
 			  HSW_IN_TX |
 			  HSW_IN_TX_CHECKPOINTED))) {
-		config = kvm_pmu_ops.find_arch_event(pmc_to_pmu(pmc),
+		config = static_call(kvm_x86_pmu_find_arch_event)(pmc_to_pmu(pmc),
 						      event_select,
 						      unit_mask);
 		if (config != PERF_COUNT_HW_MAX)
@@ -277,7 +277,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
 	pmc->current_config = (u64)ctrl;
 	pmc_reprogram_counter(pmc, PERF_TYPE_HARDWARE,
-			      kvm_pmu_ops.find_fixed_event(idx),
+			      static_call(kvm_x86_pmu_find_fixed_event)(idx),
 			      !(en_field & 0x2), /* exclude user */
 			      !(en_field & 0x1), /* exclude kernel */
 			      pmi, false, false);
@@ -286,7 +286,7 @@ EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
 
 void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
 {
-	struct kvm_pmc *pmc = kvm_pmu_ops.pmc_idx_to_pmc(pmu, pmc_idx);
+	struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, pmc_idx);
 
 	if (!pmc)
 		return;
@@ -308,7 +308,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	int bit;
 
 	for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
-		struct kvm_pmc *pmc = kvm_pmu_ops.pmc_idx_to_pmc(pmu, bit);
+		struct kvm_pmc *pmc =
static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
 
 		if (unlikely(!pmc || !pmc->perf_event)) {
 			clear_bit(bit, pmu->reprogram_pmi);
@@ -330,7 +330,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 /* check if idx is a valid index to access PMU */
 int kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
 {
-	return kvm_pmu_ops.is_valid_rdpmc_ecx(vcpu, idx);
+	return static_call(kvm_x86_pmu_is_valid_rdpmc_ecx)(vcpu, idx);
 }
 
 bool is_vmware_backdoor_pmc(u32 pmc_idx)
@@ -380,7 +380,7 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 	if (is_vmware_backdoor_pmc(idx))
 		return kvm_pmu_rdpmc_vmware(vcpu, idx, data);
 
-	pmc = kvm_pmu_ops.rdpmc_ecx_to_pmc(vcpu, idx, &mask);
+	pmc = static_call(kvm_x86_pmu_rdpmc_ecx_to_pmc)(vcpu, idx, &mask);
 	if (!pmc)
 		return 1;
 
@@ -396,23 +396,22 @@ int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned idx, u64 *data)
 void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
 {
 	if (lapic_in_kernel(vcpu)) {
-		if (kvm_pmu_ops.deliver_pmi)
-			kvm_pmu_ops.deliver_pmi(vcpu);
+		static_call_cond(kvm_x86_pmu_deliver_pmi)(vcpu);
 		kvm_apic_local_deliver(vcpu->arch.apic, APIC_LVTPC);
 	}
 }
 
 bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 {
-	return kvm_pmu_ops.msr_idx_to_pmc(vcpu, msr) ||
-		kvm_pmu_ops.is_valid_msr(vcpu, msr);
+	return static_call(kvm_x86_pmu_msr_idx_to_pmc)(vcpu, msr) ||
+		static_call(kvm_x86_pmu_is_valid_msr)(vcpu, msr);
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_is_valid_msr);
 
 static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
-	struct kvm_pmc *pmc = kvm_pmu_ops.msr_idx_to_pmc(vcpu, msr);
+	struct kvm_pmc *pmc = static_call(kvm_x86_pmu_msr_idx_to_pmc)(vcpu, msr);
 
 	if (pmc)
 		__set_bit(pmc->idx, pmu->pmc_in_use);
@@ -420,13 +419,13 @@ static void kvm_pmu_mark_pmc_in_use(struct kvm_vcpu *vcpu, u32 msr)
 
 int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
-	return kvm_pmu_ops.get_msr(vcpu, msr_info);
+	return static_call(kvm_x86_pmu_get_msr)(vcpu,
msr_info);
 }
 
 int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	kvm_pmu_mark_pmc_in_use(vcpu, msr_info->index);
-	return kvm_pmu_ops.set_msr(vcpu, msr_info);
+	return static_call(kvm_x86_pmu_set_msr)(vcpu, msr_info);
 }
 
 /* refresh PMU settings. This function generally is called when underlying
@@ -435,7 +434,7 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
  */
 void kvm_pmu_refresh(struct kvm_vcpu *vcpu)
 {
-	kvm_pmu_ops.refresh(vcpu);
+	static_call(kvm_x86_pmu_refresh)(vcpu);
 }
 
 void kvm_pmu_reset(struct kvm_vcpu *vcpu)
@@ -443,7 +442,7 @@ void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
 	irq_work_sync(&pmu->irq_work);
-	kvm_pmu_ops.reset(vcpu);
+	static_call(kvm_x86_pmu_reset)(vcpu);
 }
 
 void kvm_pmu_init(struct kvm_vcpu *vcpu)
@@ -451,7 +450,7 @@ void kvm_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
 	memset(pmu, 0, sizeof(*pmu));
-	kvm_pmu_ops.init(vcpu);
+	static_call(kvm_x86_pmu_init)(vcpu);
 	init_irq_work(&pmu->irq_work, kvm_pmi_trigger_fn);
 	pmu->event_count = 0;
 	pmu->need_cleanup = false;
@@ -483,14 +482,13 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 		      pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
 	for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
-		pmc = kvm_pmu_ops.pmc_idx_to_pmc(pmu, i);
+		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
 
 		if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
 			pmc_stop_counter(pmc);
 	}
 
-	if (kvm_pmu_ops.cleanup)
-		kvm_pmu_ops.cleanup(vcpu);
+	static_call_cond(kvm_x86_pmu_cleanup)(vcpu);
 
 	bitmap_zero(pmu->pmc_in_use, X86_PMC_IDX_MAX);
 }
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 40e0b523637b..a4bfd4200d67 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -99,7 +99,7 @@ static inline bool pmc_is_fixed(struct kvm_pmc *pmc)
 
 static inline bool pmc_is_enabled(struct kvm_pmc *pmc)
 {
-	return kvm_pmu_ops.pmc_is_enabled(pmc);
+	return static_call(kvm_x86_pmu_pmc_is_enabled)(pmc);
 }
 
 static inline bool
kvm_valid_perf_global_ctrl(struct kvm_pmu *pmu,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 70dc8f41329c..c5db444d5f4a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11306,6 +11306,11 @@ static inline void kvm_ops_static_call_update(void)
 	static_call_update(kvm_x86_##func, kvm_x86_ops.func)
 #define KVM_X86_OP_NULL KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
+
+#define KVM_X86_PMU_OP(func) \
+	static_call_update(kvm_x86_pmu_##func, kvm_pmu_ops.func)
+#define KVM_X86_PMU_OP_NULL KVM_X86_PMU_OP
+#include <asm/kvm-x86-pmu-ops.h>
 }
 
 int kvm_arch_hardware_setup(void *opaque)