From patchwork Fri Oct 25 16:42:13 2019
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 11212659
From: James Morse
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 4/7] arm64: entry: convert el1_sync to C
Date: Fri, 25 Oct 2019 17:42:13 +0100
Message-Id: <20191025164216.53626-5-james.morse@arm.com>
In-Reply-To: <20191025164216.53626-1-james.morse@arm.com>
References: <20191025164216.53626-1-james.morse@arm.com>
Cc: Mark Rutland, Catalin Marinas, James Morse, Masami Hiramatsu,
    Will Deacon, Julien Thierry

From: Mark Rutland

This patch converts the EL1 sync entry assembly logic to C code.

Doing this will allow us to make changes in a slightly more readable
way. A case in point is supporting kernel-first RAS. do_sea() should
be called on the CPU that took the fault.

Largely the assembly code is converted to C in a relatively
straightforward manner.

Since all sync sites share a common asm entry point, the ASM_BUG()
instances are no longer required for effective backtraces back to
assembly, and we don't need similar BUG() entries.

The ESR_ELx.EC codes for all (supported) debug exceptions are now
checked in the el1_sync_handler's switch statement, which renders the
check in el1_dbg redundant. This both simplifies the el1_dbg handler,
and makes the EL1 exception handling more robust to
currently-unallocated ESR_ELx.EC encodings.

Signed-off-by: Mark Rutland
[split out of a bigger series, added nokprobes, moved prototypes]
Signed-off-by: James Morse
Reviewed-by: Mark Rutland
Cc: Julien Thierry
---
 arch/arm64/kernel/Makefile       |  6 +-
 arch/arm64/kernel/entry-common.c | 98 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/entry.S        | 69 +---------------------
 3 files changed, 102 insertions(+), 71 deletions(-)
 create mode 100644 arch/arm64/kernel/entry-common.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 478491f07b4f..fc6488660f64 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -13,9 +13,9 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
 
 # Object file lists.
 obj-y			:= debug-monitors.o entry.o irq.o fpsimd.o	\
-			   entry-fpsimd.o process.o ptrace.o setup.o signal.o	\
-			   sys.o stacktrace.o time.o traps.o io.o vdso.o	\
-			   hyp-stub.o psci.o cpu_ops.o insn.o	\
+			   entry-common.o entry-fpsimd.o process.o ptrace.o	\
+			   setup.o signal.o sys.o stacktrace.o time.o traps.o	\
+			   io.o vdso.o hyp-stub.o psci.o cpu_ops.o insn.o	\
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
new file mode 100644
index 000000000000..e726d1f4b9e9
--- /dev/null
+++ b/arch/arm64/kernel/entry-common.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Exception handling code
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ */
+
+#include <linux/context_tracking.h>
+#include <linux/ptrace.h>
+#include <linux/thread_info.h>
+
+#include <asm/cpufeature.h>
+#include <asm/daifflags.h>
+#include <asm/esr.h>
+#include <asm/exception.h>
+#include <asm/kprobes.h>
+#include <asm/sysreg.h>
+
+static void notrace el1_abort(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	local_daif_inherit(regs);
+	far = untagged_addr(far);
+	do_mem_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_abort);
+
+static void notrace el1_pc(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	local_daif_inherit(regs);
+	do_sp_pc_abort(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_pc);
+
+static void el1_undef(struct pt_regs *regs)
+{
+	local_daif_inherit(regs);
+	do_undefinstr(regs);
+}
+NOKPROBE_SYMBOL(el1_undef);
+
+static void el1_inv(struct pt_regs *regs, unsigned long esr)
+{
+	local_daif_inherit(regs);
+	bad_mode(regs, 0, esr);
+}
+NOKPROBE_SYMBOL(el1_inv);
+
+static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
+{
+	unsigned long far = read_sysreg(far_el1);
+
+	/*
+	 * The CPU masked interrupts, and we are leaving them masked during
+	 * do_debug_exception(). Update PMR as if we had called
+	 * local_daif_mask().
+	 */
+	if (system_uses_irq_prio_masking())
+		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
+
+	do_debug_exception(far, esr, regs);
+}
+NOKPROBE_SYMBOL(el1_dbg);
+
+asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
+{
+	unsigned long esr = read_sysreg(esr_el1);
+
+	switch (ESR_ELx_EC(esr)) {
+	case ESR_ELx_EC_DABT_CUR:
+	case ESR_ELx_EC_IABT_CUR:
+		el1_abort(regs, esr);
+		break;
+	/*
+	 * We don't handle ESR_ELx_EC_SP_ALIGN, since we will have hit a
+	 * recursive exception when trying to push the initial pt_regs.
+	 */
+	case ESR_ELx_EC_PC_ALIGN:
+		el1_pc(regs, esr);
+		break;
+	case ESR_ELx_EC_SYS64:
+	case ESR_ELx_EC_UNKNOWN:
+		el1_undef(regs);
+		break;
+	case ESR_ELx_EC_BREAKPT_CUR:
+	case ESR_ELx_EC_SOFTSTP_CUR:
+	case ESR_ELx_EC_WATCHPT_CUR:
+	case ESR_ELx_EC_BRK64:
+		el1_dbg(regs, esr);
+		break;
+	default:
+		el1_inv(regs, esr);
+	};
+}
+NOKPROBE_SYMBOL(el1_sync_handler);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e304fe04b098..5d7f42eb0e89 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -578,76 +578,9 @@ ENDPROC(el1_error_invalid)
 	.align	6
 el1_sync:
 	kernel_entry 1
-	mrs	x1, esr_el1			// read the syndrome register
-	lsr	x24, x1, #ESR_ELx_EC_SHIFT	// exception class
-	cmp	x24, #ESR_ELx_EC_DABT_CUR	// data abort in EL1
-	b.eq	el1_da
-	cmp	x24, #ESR_ELx_EC_IABT_CUR	// instruction abort in EL1
-	b.eq	el1_ia
-	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
-	b.eq	el1_undef
-	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
-	b.eq	el1_pc
-	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL1
-	b.eq	el1_undef
-	cmp	x24, #ESR_ELx_EC_BREAKPT_CUR	// debug exception in EL1
-	b.ge	el1_dbg
-	b	el1_inv
-
-el1_ia:
-	/*
-	 * Fall through to the Data abort case
-	 */
-el1_da:
-	/*
-	 * Data abort handling
-	 */
-	mrs	x3, far_el1
-	inherit_daif	pstate=x23, tmp=x2
-	clear_address_tag x0, x3
-	mov	x2, sp				// struct pt_regs
-	bl	do_mem_abort
-
-	kernel_exit 1
-el1_pc:
-	/*
-	 * PC alignment exception handling. We don't handle SP alignment faults,
-	 * since we will have hit a recursive exception when trying to push the
-	 * initial pt_regs.
-	 */
-	mrs	x0, far_el1
-	inherit_daif	pstate=x23, tmp=x2
-	mov	x2, sp
-	bl	do_sp_pc_abort
-	ASM_BUG()
-el1_undef:
-	/*
-	 * Undefined instruction
-	 */
-	inherit_daif	pstate=x23, tmp=x2
 	mov	x0, sp
-	bl	do_undefinstr
-	kernel_exit 1
-el1_dbg:
-	/*
-	 * Debug exception handling
-	 */
-	cmp	x24, #ESR_ELx_EC_BRK64		// if BRK64
-	cinc	x24, x24, eq			// set bit '0'
-	tbz	x24, #0, el1_inv		// EL1 only
-	gic_prio_kentry_setup tmp=x3
-	mrs	x0, far_el1
-	mov	x2, sp				// struct pt_regs
-	bl	do_debug_exception
+	bl	el1_sync_handler
 	kernel_exit 1
-el1_inv:
-	// TODO: add support for undefined instructions in kernel mode
-	inherit_daif	pstate=x23, tmp=x2
-	mov	x0, sp
-	mov	x2, x1
-	mov	x1, #BAD_SYNC
-	bl	bad_mode
-	ASM_BUG()
 ENDPROC(el1_sync)
 
 	.align	6
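For reference (not part of the patch): the exception-class decode that the old
assembly performs with "lsr x24, x1, #ESR_ELx_EC_SHIFT" and the new C handler
performs with ESR_ELx_EC() comes from asm/esr.h. A simplified sketch of those
definitions is below; the exact form in the tree you apply this against may
differ slightly:

	/* EC is ESR_ELx[31:26]; ESR_ELx_EC() extracts it for the switch in el1_sync_handler() */
	#define ESR_ELx_EC_SHIFT	(26)
	#define ESR_ELx_EC_MASK		(0x3FUL << ESR_ELx_EC_SHIFT)
	#define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)

This is also why the old assembly could use "b.ge el1_dbg" after comparing
against ESR_ELx_EC_BREAKPT_CUR: it relied on the numeric ordering of the EC
encodings, whereas the C switch lists each supported debug EC explicitly.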