From patchwork Tue Oct  3 17:05:52 2017
X-Patchwork-Submitter: Julien Thierry
X-Patchwork-Id: 9983285
From: Julien Thierry <julien.thierry@arm.com>
To: linux-arm-kernel@lists.infradead.org, alex.bennee@linaro.org
Subject: [PATCH REPOST 3/3] arm64: kvm: Fix single step for guest skipped instructions
Date: Tue, 3 Oct 2017 18:05:52 +0100
Message-Id: <1507050352-15909-4-git-send-email-julien.thierry@arm.com>
In-Reply-To: <1507050352-15909-1-git-send-email-julien.thierry@arm.com>
References: <1507050352-15909-1-git-send-email-julien.thierry@arm.com>
Cc: Marc Zyngier, Catalin Marinas, Will Deacon, Christoffer Dall,
 Julien Thierry

A Software Step exception is missing after an instruction trapped from the
guest has been emulated and skipped. We need to set PSTATE.SS to 0 for the
guest vcpu before resuming guest execution.

Signed-off-by: Julien Thierry
Cc: Christoffer Dall
Cc: Marc Zyngier
Cc: Alex Bennée
Cc: Catalin Marinas
Cc: Will Deacon
---
 arch/arm64/include/asm/kvm_asm.h     |  2 ++
 arch/arm64/include/asm/kvm_emulate.h |  2 ++
 arch/arm64/kvm/debug.c               | 17 ++++++++++++++++-
 arch/arm64/kvm/hyp/switch.c          | 10 ++++++++++
 4 files changed, 30 insertions(+), 1 deletion(-)

-- 
1.9.1

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 26a64d0..398bbaa 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -32,6 +32,8 @@
 
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
 #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
+#define KVM_ARM64_DEBUG_INST_SKIP_SHIFT	1
+#define KVM_ARM64_DEBUG_INST_SKIP	(1 << KVM_ARM64_DEBUG_INST_SKIP_SHIFT)
 
 #define kvm_ksym_ref(sym)						\
 ({									\
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index e5df3fc..ee02dd2ee 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -95,6 +95,8 @@ static inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
 		kvm_skip_instr32(vcpu, is_wide_instr);
 	else
 		*vcpu_pc(vcpu) += 4;
+	/* Let debug engine know we skipped an instruction */
+	vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_INST_SKIP;
 }
 
 static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index dbadfaf..b5fcb96 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -151,12 +151,27 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 	 * debugging the system.
 	 */
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
-		*vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
+		/*
+		 * Taking a Software step exception, context being
+		 * stepped has PSTATE.SS == 0. In order to step the next
+		 * instruction, we need to reset this bit.
+		 * If we skipped an instruction while single stepping,
+		 * we want to get a software step exception for the
+		 * skipped instruction (i.e. as soon as we return to the
+		 * guest). This is obtained by returning to the guest
+		 * with PSTATE.SS cleared.
+		 */
+		if (!(vcpu->arch.debug_flags & KVM_ARM64_DEBUG_INST_SKIP))
+			*vcpu_cpsr(vcpu) |= DBG_SPSR_SS;
+		else
+			*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
 		vcpu_sys_reg(vcpu, MDSCR_EL1) |= DBG_MDSCR_SS;
 	} else {
 		vcpu_sys_reg(vcpu, MDSCR_EL1) &= ~DBG_MDSCR_SS;
 	}
 
+	vcpu->arch.debug_flags &= ~KVM_ARM64_DEBUG_INST_SKIP;
+
 	trace_kvm_arm_set_dreg32("SPSR_EL2", *vcpu_cpsr(vcpu));
 
 	/*
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 945e79c..34fe215 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 static bool __hyp_text __fpsimd_enabled_nvhe(void)
 {
@@ -276,6 +277,8 @@ static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
 	}
 
 	write_sysreg_el2(*vcpu_pc(vcpu), elr);
+
+	write_sysreg_el2(read_sysreg_el2(spsr) & ~DBG_SPSR_SS, spsr);
 }
 
 int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
@@ -343,6 +346,13 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
 		if (ret == -1) {
 			/* Promote an illegal access to an SError */
 			__skip_instr(vcpu);
+
+			/*
+			 * We're not jumping back, let debug setup know
+			 * we skipped an instruction.
+			 */
+			vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_INST_SKIP;
+
 			exit_code = ARM_EXCEPTION_EL1_SERROR;
 		}
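
Note (illustration only, not part of the patch): the behaviour the patch relies
on can be modelled with a small standalone C sketch of the AArch64 software-step
state machine. Everything in it (step_ctx, run_one_insn, the addresses) is
invented for illustration; it only shows why returning to the guest with
PSTATE.SS == 0 after KVM has skipped an instruction makes the step exception
fire before the next guest instruction executes, i.e. userspace still sees one
step for the instruction that was emulated.

/*
 * Standalone sketch, not kernel code: a toy model of AArch64 software
 * stepping. All names below are invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct step_ctx {
	bool mdscr_ss;		/* MDSCR_EL1.SS: stepping enabled */
	bool pstate_ss;		/* PSTATE.SS: 1 = execute one more insn */
	unsigned long pc;
};

/* Returns true if a step exception is taken before executing at pc. */
static bool run_one_insn(struct step_ctx *s)
{
	if (s->mdscr_ss && !s->pstate_ss)
		return true;	/* active-pending: trap before executing */
	s->pstate_ss = false;	/* active-not-pending: run the insn ... */
	s->pc += 4;		/* ... then the step becomes pending */
	return false;
}

int main(void)
{
	/* Normal step: return to the guest with PSTATE.SS == 1. */
	struct step_ctx normal = { true, true, 0x1000 };
	bool first = run_one_insn(&normal);	/* insn at 0x1000 runs */
	bool second = run_one_insn(&normal);	/* trap before 0x1004 */
	printf("normal step: 0x1000 %s, trap before 0x1004: %s\n",
	       first ? "trapped" : "ran", second ? "yes" : "no");

	/*
	 * KVM emulated the insn at 0x1000 and advanced the PC itself.
	 * Returning with PSTATE.SS == 0 traps before 0x1004 runs.
	 */
	struct step_ctx skipped = { true, false, 0x1004 };
	printf("skipped insn: trap before 0x1004: %s\n",
	       run_one_insn(&skipped) ? "yes" : "no");
	return 0;
}

Both cases print "yes" for the trap, which is the invariant the patch restores
for skipped instructions.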