From patchwork Tue Sep 16 21:09:16 2014
X-Patchwork-Submitter: Christopher Covington <cov@codeaurora.org>
X-Patchwork-Id: 4920631
From: Christopher Covington <cov@codeaurora.org>
To: Sonny Rao, Will Deacon, Catalin Marinas, Mark Rutland, Stephen Boyd,
 Marc Zyngier, linux-arm-kernel@lists.infradead.org, Doug Anderson,
 linux-kernel@vger.kernel.org
Cc: Christopher Covington
Subject: [RFC] arm: Handle starting up in secure mode
Date: Tue, 16 Sep 2014 17:09:16 -0400
Message-Id: <1410901756-20694-1-git-send-email-cov@codeaurora.org>

ARM Linux currently has the most features available to it in hypervisor
(HYP) mode, so switch to it when possible. This can also ensure that newer
registers such as CNTVOFF are properly reset. The permissions on the
Non-Secure Access Control Register (NSACR) are used to probe the current
security state while in supervisor (SVC) mode: the NSACR is writable only
from secure state, so a write that traps as an undefined instruction
indicates a non-secure start.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
---
 arch/arm/kernel/head.S     |  1 +
 arch/arm/kernel/hyp-stub.S | 71 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 664eee8..6fe2387 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -85,6 +85,7 @@ ENTRY(stext)
 THUMB(	.thumb			)	@ switch to Thumb now.
 THUMB(1:			)
 
+	bl	__mon_stub_install
 #ifdef CONFIG_ARM_VIRT_EXT
 	bl	__hyp_stub_install
 #endif
diff --git a/arch/arm/kernel/hyp-stub.S b/arch/arm/kernel/hyp-stub.S
index 2a55373..36d1a9c 100644
--- a/arch/arm/kernel/hyp-stub.S
+++ b/arch/arm/kernel/hyp-stub.S
@@ -20,6 +20,7 @@
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 #include <asm/virt.h>
+#include <asm/opcodes-sec.h>
 
 #ifndef ZIMAGE
 /*
@@ -76,6 +77,64 @@ ENTRY(__boot_cpu_mode)
 #endif	/* ZIMAGE */
 
 /*
+ * Detect whether the system is in secure supervisor mode, and if it is,
+ * switch to hypervisor mode by way of secure monitor mode.
+ */
+ENTRY(__mon_stub_install)
+	mrs	r4, cpsr
+	and	r4, r4, #MODE_MASK
+	cmp	r4, #SVC_MODE
+	movne	pc, lr
+
+	/*
+	 * Set things up so that if an NSACR access causes an undefined
+	 * instruction exception, we return. safe_svcmode_maskall, called
+	 * just after this, will get us back into supervisor mode.
+	 */
+	adr	r4, __mon_stub_vectors
+	mcr	p15, 0, r4, c12, c0, 0	@ set vector base address (VBAR)
+	mov	r4, lr
+
+	/*
+	 * Writing the NSACR will only succeed if we're in a secure mode.
+	 */
+	mrc	p15, 0, r5, c1, c1, 2	@ get non-secure access control (NSACR)
+	mcr	p15, 0, r5, c1, c1, 2	@ set non-secure access control (NSACR)
+
+	/*
+	 * If we get here, we know we're in secure supervisor mode, so make
+	 * the switch to secure monitor mode.
+	 *
+	 * TODO: make sure this doesn't trap to A64 EL3.
+	 */
+	adr	r4, __mon_stub_vectors
+	mcr	p15, 0, r4, c12, c0, 1	@ set monitor vector base (MVBAR)
+	adr	r4, mon_settings
+	__SMC(0)
+
+	/*
+	 * Now, from non-secure supervisor mode, transition to hypervisor
+	 * mode and return via the exception vector.
+	 */
+	mov	r4, lr
+	__HVC(0)
+ENDPROC(__mon_stub_install)
+
+ENTRY(mon_settings)
+	/*
+	 * Prepare for hypervisor mode by setting the HCE and NS bits.
+	 */
+	mrc	p15, 0, r4, c1, c1, 0	@ get secure configuration (SCR)
+	orr	r4, r4, #0x100		@ SCR.HCE
+	orr	r4, r4, #1		@ SCR.NS
+	mcr	p15, 0, r4, c1, c1, 0	@ set secure configuration (SCR)
+
+	adr	r4, __mon_stub_vectors
+	mcr	p15, 4, r4, c12, c0, 0	@ set hypervisor vectors (HVBAR)
+	__ERET
+ENDPROC(mon_settings)
+
+/*
  * Hypervisor stub installation functions.
  *
  * These must be called with the MMU and D-cache off.
@@ -209,6 +268,18 @@ ENDPROC(__hyp_set_vectors)
 #endif
 
 .align 5
+__mon_stub_vectors:
+__mon_stub_reset:	W(b)	.
+__mon_stub_und:		mov	pc, r4
+__mon_stub_call:	mov	pc, r4
+__mon_stub_pabort:	W(b)	.
+__mon_stub_dabort:	W(b)	.
+__mon_stub_trap:	mov	pc, r4
+__mon_stub_irq:		W(b)	.
+__mon_stub_fiq:		W(b)	.
+ENDPROC(__mon_stub_vectors)
+
+.align 5
 __hyp_stub_vectors:
 __hyp_stub_reset:	W(b)	.
 __hyp_stub_und:		W(b)	.
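
A note on the probe for readers unfamiliar with the trick: the NSACR
(CP15 c1, c1, 2) is read/write in secure state but read-only in
non-secure state, where a write raises an undefined instruction
exception. Below is a minimal, untested sketch of just the probe in
isolation; it is not part of the patch, and the probe_vectors label is
illustrative. It assumes SVC mode, MMU and caches off, and a vector
table whose undef entry returns through r4:

	adr	r4, probe_vectors	@ hypothetical temporary vector table
	mcr	p15, 0, r4, c12, c0, 0	@ VBAR <- temporary vectors
	mov	r4, lr			@ address the und entry returns to
	mrc	p15, 0, r5, c1, c1, 2	@ reading NSACR is legal either way
	mcr	p15, 0, r5, c1, c1, 2	@ the write traps if non-secure
	@ falling through means the write succeeded: secure SVC mode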
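
The magic numbers in mon_settings correspond to two SCR bits: bit 8 is
SCR.HCE (enable the HVC instruction) and bit 0 is SCR.NS (leave secure
state on the ERET). The same write with named constants would look like
the sketch below; the constant names are illustrative, not taken from a
kernel header:

	.equ	SCR_NS,  (1 << 0)	@ SCR.NS: non-secure after ERET
	.equ	SCR_HCE, (1 << 8)	@ SCR.HCE: HVC instruction enabled

	mrc	p15, 0, r4, c1, c1, 0	@ read secure configuration (SCR)
	orr	r4, r4, #SCR_HCE	@ two ORRs because 0x101 is not
	orr	r4, r4, #SCR_NS		@ encodable as one ARM immediate
	mcr	p15, 0, r4, c1, c1, 0	@ write secure configuration (SCR)

The split across two ORR instructions mirrors the patch: 0x101 cannot
be expressed as a single 8-bit-rotated ARM immediate operand.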