From patchwork Thu Apr 30 10:15:13 2020
From: Marc Zyngier <maz@kernel.org>
To: linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
	kvmarm@lists.cs.columbia.edu
Cc: James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Will Deacon <will@kernel.org>
Subject: [PATCH] KVM: arm64: Fix 32bit PC wrap-around
Date: Thu, 30 Apr 2020 11:15:13 +0100
Message-Id: <20200430101513.318541-1-maz@kernel.org>

In the unlikely event that a 32bit vcpu traps into the hypervisor
on an instruction that is located right at the end of the 32bit
range, the emulation of that instruction is going to increment
PC past the 32bit range. This isn't great, as userspace can then
observe this value and get a bit confused.

Conversely, userspace can do things like (in the context of a 64bit
guest that is capable of 32bit EL0) setting PSTATE to AArch64-EL0,
setting PC to a 64bit value, changing PSTATE to AArch32-USR, and
observing that PC hasn't been truncated. More confusion.
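
To make the first problem concrete, here is a minimal standalone sketch
(plain userspace C, not kernel code; the PC values are picked purely for
illustration) of what happens when the emulated instruction skip is done
on the full 64bit register:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t pc = 0xfffffffcULL;	/* last A32 instruction slot in the 32bit range */

	pc += 4;			/* emulated instruction skip, no truncation */
	printf("unsanitized PC: 0x%llx\n", (unsigned long long)pc);
	/* prints 0x100000000, which is not a valid 32bit PC */

	pc = (uint32_t)pc;		/* truncate to 32 bits */
	printf("truncated PC:   0x%llx\n", (unsigned long long)pc);
	/* prints 0x0, i.e. PC stays within the 32bit range */

	return 0;
}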
Fix both by:
- truncating PC increments for 32bit guests
- sanitizing PC every time a core reg is changed by userspace and
  PSTATE indicates a 32bit mode

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
---
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
 	memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id));
+
+	if (*vcpu_cpsr(vcpu) & PSR_AA32_MODE_MASK)
+		*vcpu_pc(vcpu) = lower_32_bits(*vcpu_pc(vcpu));
+
 out:
 	return err;
 }
diff --git a/virt/kvm/arm/hyp/aarch32.c b/virt/kvm/arm/hyp/aarch32.c
index d31f267961e7..25c0e47d57cb 100644
--- a/virt/kvm/arm/hyp/aarch32.c
+++ b/virt/kvm/arm/hyp/aarch32.c
@@ -125,12 +125,16 @@ static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu)
  */
 void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
 {
+	u32 pc = *vcpu_pc(vcpu);
 	bool is_thumb;
 
 	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
 	if (is_thumb && !is_wide_instr)
-		*vcpu_pc(vcpu) += 2;
+		pc += 2;
 	else
-		*vcpu_pc(vcpu) += 4;
+		pc += 4;
+
+	*vcpu_pc(vcpu) = pc;
+
 	kvm_adjust_itstate(vcpu);
 }
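
As a quick illustration of the set_core_reg() half of the change, here is
a standalone sketch (plain userspace C with a local stand-in for the
kernel's lower_32_bits() helper; the PC value is made up) of the effect
of the sanitation once PSTATE indicates a 32bit mode:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's lower_32_bits() helper. */
static inline uint64_t lower_32_bits(uint64_t v)
{
	return v & 0xffffffffULL;
}

int main(void)
{
	/* 64bit PC written by userspace before flipping PSTATE to AArch32-USR */
	uint64_t pc = 0xcafe000012345678ULL;

	pc = lower_32_bits(pc);		/* mask PC down to its low 32 bits */
	printf("sanitized PC: 0x%llx\n", (unsigned long long)pc);
	/* prints 0x12345678 */

	return 0;
}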