From patchwork Wed Aug 25 16:18:05 2021
X-Patchwork-Submitter: Alexandru Elisei
X-Patchwork-Id: 12458339
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 29/39] KVM: arm64: Rename DEBUG_STATE_SAVE_SPE ->
	DEBUG_SAVE_SPE_BUFFER flags
Date: Wed, 25 Aug 2021 17:18:05 +0100
Message-Id: <20210825161815.266051-30-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
X-Mailer: git-send-email 2.33.0
Setting the KVM_ARM64_DEBUG_STATE_SAVE_SPE flag causes KVM to stop
profiling in order to drain the buffer, if the buffer is enabled when
switching to the guest, and then to re-enable profiling on the return
to the host. Rename the flag to KVM_ARM64_DEBUG_SAVE_SPE_BUFFER to
avoid confusion with what an SPE-enabled VCPU will do, which is to save
and restore the full SPE state on a world switch, not just part of it,
some of the time. The new name also matches the name of the function
__debug_save_host_buffers_nvhe(), which uses the flag to decide if the
buffer should be drained.

Apply the same treatment to KVM_ARM64_DEBUG_STATE_SAVE_TRBE, which is
renamed to KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER, for consistency and to
better reflect what the flag does.

CC: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
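Note (not part of the patch): below is a minimal, standalone sketch of
the behaviour the two flags gate. Only the flag values are taken from
the patch; the struct, helper names and printf placeholders are
illustrative stand-ins for the real vcpu state and sysreg accesses, not
the in-tree code.

/*
 * Standalone model of the flag handling across a world switch; builds
 * with a plain C compiler and uses no kernel headers.
 */
#include <stdbool.h>
#include <stdio.h>

#define KVM_ARM64_DEBUG_SAVE_SPE_BUFFER		(1u << 12)	/* Save SPE buffer if active */
#define KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER	(1u << 13)	/* Save TRBE buffer if active */

/* Hypothetical stand-in for the relevant part of struct kvm_vcpu_arch. */
struct vcpu_model {
	unsigned int flags;
};

/* Placeholders for the buffer drain/re-enable done via the SPE/TRBE sysregs. */
static void drain_spe_buffer(void)	{ printf("SPE: profiling stopped, buffer drained\n"); }
static void reenable_spe(void)		{ printf("SPE: host profiling re-enabled\n"); }
static void drain_trbe_buffer(void)	{ printf("TRBE: tracing stopped, buffer drained\n"); }
static void reenable_trbe(void)		{ printf("TRBE: host tracing re-enabled\n"); }

/* vcpu_load() time: latch whether the host buffers will need draining. */
static void load_debug_state_flags(struct vcpu_model *vcpu, bool spe_in_use, bool trbe_in_use)
{
	if (spe_in_use)
		vcpu->flags |= KVM_ARM64_DEBUG_SAVE_SPE_BUFFER;
	if (trbe_in_use)
		vcpu->flags |= KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER;
}

/* Before entering the guest: only the units whose flag is set are touched. */
static void save_host_buffers(struct vcpu_model *vcpu)
{
	if (vcpu->flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
		drain_spe_buffer();
	if (vcpu->flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER)
		drain_trbe_buffer();
}

/* After returning to the host: mirror image of the save path. */
static void restore_host_buffers(struct vcpu_model *vcpu)
{
	if (vcpu->flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
		reenable_spe();
	if (vcpu->flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER)
		reenable_trbe();
}

int main(void)
{
	struct vcpu_model vcpu = { .flags = 0 };

	load_debug_state_flags(&vcpu, true, false);	/* SPE buffer in use, no TRBE */
	save_host_buffers(&vcpu);			/* world switch to the guest */
	restore_host_buffers(&vcpu);			/* back in the host */

	/* vcpu_put() time: both buffer flags are cleared again. */
	vcpu.flags &= ~(KVM_ARM64_DEBUG_SAVE_SPE_BUFFER |
			KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER);
	return 0;
}

The point of the rename is visible in the names above: the flags only
guard the host buffer drain/re-enable, not a full SPE/TRBE register
save and restore.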
 arch/arm64/include/asm/kvm_host.h  | 24 ++++++++++++------------
 arch/arm64/kvm/debug.c             | 11 ++++++-----
 arch/arm64/kvm/hyp/nvhe/debug-sr.c |  8 ++++----
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 351b77dc7732..e704847a7645 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -439,18 +439,18 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
-#define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
-#define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
-#define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
-#define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
-#define KVM_ARM64_GUEST_HAS_SVE	(1 << 5) /* SVE exposed to guest */
-#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
-#define KVM_ARM64_GUEST_HAS_PTRAUTH	(1 << 7) /* PTRAUTH exposed to guest */
-#define KVM_ARM64_PENDING_EXCEPTION	(1 << 8) /* Exception pending */
-#define KVM_ARM64_EXCEPT_MASK		(7 << 9) /* Target EL/MODE */
-#define KVM_ARM64_DEBUG_STATE_SAVE_SPE	(1 << 12) /* Save SPE context if active */
-#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE	(1 << 13) /* Save TRBE context if active */
+#define KVM_ARM64_DEBUG_DIRTY			(1 << 0)
+#define KVM_ARM64_FP_ENABLED			(1 << 1) /* guest FP regs loaded */
+#define KVM_ARM64_FP_HOST			(1 << 2) /* host FP regs loaded */
+#define KVM_ARM64_HOST_SVE_IN_USE		(1 << 3) /* backup for host TIF_SVE */
+#define KVM_ARM64_HOST_SVE_ENABLED		(1 << 4) /* SVE enabled for EL0 */
+#define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
+#define KVM_ARM64_VCPU_SVE_FINALIZED		(1 << 6) /* SVE config completed */
+#define KVM_ARM64_GUEST_HAS_PTRAUTH		(1 << 7) /* PTRAUTH exposed to guest */
+#define KVM_ARM64_PENDING_EXCEPTION		(1 << 8) /* Exception pending */
+#define KVM_ARM64_EXCEPT_MASK			(7 << 9) /* Target EL/MODE */
+#define KVM_ARM64_DEBUG_SAVE_SPE_BUFFER	(1 << 12) /* Save SPE buffer if active */
+#define KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER	(1 << 13) /* Save TRBE buffer if active */
 
 #define KVM_GUESTDBG_VALID_MASK	(KVM_GUESTDBG_ENABLE | \
 				KVM_GUESTDBG_USE_SW_BP | \
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 70712cd85f32..6e5fc1887215 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -299,22 +299,23 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
 		return;
 
 	dfr0 = read_sysreg(id_aa64dfr0_el1);
+
 	/*
 	 * If SPE is present on this CPU and is available at current EL,
-	 * we may need to check if the host state needs to be saved.
+	 * we may need to check if the host buffer needs to be drained.
 	 */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) &&
 	    !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_SPE;
+		vcpu->arch.flags |= KVM_ARM64_DEBUG_SAVE_SPE_BUFFER;
 
 	/* Check if we have TRBE implemented and available at the host */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_TRBE_SHIFT) &&
 	    !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_PROG))
-		vcpu->arch.flags |= KVM_ARM64_DEBUG_STATE_SAVE_TRBE;
+		vcpu->arch.flags |= KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER;
 }
 
 void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_STATE_SAVE_SPE |
-			      KVM_ARM64_DEBUG_STATE_SAVE_TRBE);
+	vcpu->arch.flags &= ~(KVM_ARM64_DEBUG_SAVE_SPE_BUFFER |
+			      KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER);
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 6db58722f045..186b90b5fd20 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -85,10 +85,10 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu,
 				    struct kvm_cpu_context *host_ctxt)
 {
 	/* Disable and flush SPE data generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
 		__debug_save_spe(__ctxt_sys_reg(host_ctxt, PMSCR_EL1));
 	/* Disable and flush Self-Hosted Trace generation */
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER)
 		__debug_save_trace(&vcpu->arch.host_debug_state.trfcr_el1);
 }
 
@@ -100,9 +100,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu,
 				       struct kvm_cpu_context *host_ctxt)
 {
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_SPE)
+	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_SPE_BUFFER)
 		__debug_restore_spe(ctxt_sys_reg(host_ctxt, PMSCR_EL1));
-	if (vcpu->arch.flags & KVM_ARM64_DEBUG_STATE_SAVE_TRBE)
+	if (vcpu->arch.flags & KVM_ARM64_DEBUG_SAVE_TRBE_BUFFER)
 		__debug_restore_trace(vcpu->arch.host_debug_state.trfcr_el1);
 }