From patchwork Tue Mar 28 03:47:25 2023
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 13190434
Date: Mon, 27 Mar 2023 20:47:25 -0700
Message-ID: <20230328034725.2051499-1-reijiw@google.com>
Subject: [PATCH v1] KVM: arm64: PMU: Restore the guest's EL0 event counting after migration
From: Reiji Watanabe
To: Marc Zyngier, Oliver Upton, kvmarm@lists.linux.dev
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei, Zenghui Yu, Suzuki K Poulose, Paolo Bonzini, Ricardo Koller, Jing Zhang, Raghavendra Rao Anata, Will Deacon, Reiji Watanabe, stable@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

Currently, with VHE, KVM enables EL0 event counting for the guest on
vcpu_load(), or enables it as part of the PMU register emulation
process, when needed. However, in the migration case (with VHE), the
same handling is lacking. So, enable it on the first KVM_RUN with VHE
(after the migration) when needed.
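For context on the migration path the commit message describes: the VMM saves and restores PMCR_EL0 with the KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls, naming the register by a 64-bit sysreg ID. The sketch below shows how that ID is composed for PMCR_EL0 (Op0=3, Op1=3, CRn=9, CRm=12, Op2=0); the constants and shifts mirror the arm64 KVM UAPI encoding in `<linux/kvm.h>`, and are reproduced here only as an illustration, not as part of this patch.

```python
# Sketch: composing the KVM one-reg ID for PMCR_EL0, following the
# arm64 KVM UAPI encoding (values mirrored from <linux/kvm.h>).
KVM_REG_ARM64 = 0x6000000000000000         # arm64 register class
KVM_REG_SIZE_U64 = 0x0030000000000000      # 64-bit register
KVM_REG_ARM64_SYSREG = 0x0013 << 16        # sysreg "coprocessor" space

def arm64_sys_reg(op0, op1, crn, crm, op2):
    # Pack the system-register operands into the sysreg ID fields:
    # Op0 at bit 14, Op1 at bit 11, CRn at bit 7, CRm at bit 3, Op2 at bit 0.
    return (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM64_SYSREG |
            (op0 << 14) | (op1 << 11) | (crn << 7) | (crm << 3) | op2)

# PMCR_EL0 is op0=3, op1=3, CRn=9, CRm=12, op2=0.
PMCR_EL0_ID = arm64_sys_reg(3, 3, 9, 12, 0)
print(hex(PMCR_EL0_ID))  # 0x603000000013dce0
```

A VMM passes this ID in `struct kvm_one_reg` when restoring PMCR_EL0 on the destination; before this patch, that restore path did not re-enable the guest's EL0 event counting, which is what the change below addresses.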
Fixes: d0c94c49792c ("KVM: arm64: Restore PMU configuration on first run")
Cc: stable@vger.kernel.org
Signed-off-by: Reiji Watanabe
Reviewed-by: Marc Zyngier
---
 arch/arm64/kvm/pmu-emul.c | 1 +
 arch/arm64/kvm/sys_regs.c | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index c243b10f3e15..5eca0cdd961d 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -558,6 +558,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 		for_each_set_bit(i, &mask, 32)
 			kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
 	}
+	kvm_vcpu_pmu_restore_guest(vcpu);
 }
 
 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1b2c161120be..34688918c811 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -794,7 +794,6 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		if (!kvm_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
 		kvm_pmu_handle_pmcr(vcpu, val);
-		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
 		val = __vcpu_sys_reg(vcpu, PMCR_EL0)