From patchwork Thu Oct 14 08:32:53 2021
X-Patchwork-Submitter: Feiyang Chen
X-Patchwork-Id: 12558007
From: Feiyang Chen
To: tsbogend@alpha.franken.de, tglx@linutronix.de, peterz@infradead.org, luto@kernel.org, arnd@arndb.de
Cc: Feiyang Chen, linux-mips@vger.kernel.org, linux-arch@vger.kernel.org, chenhuacai@kernel.org, jiaxun.yang@flygoat.com, zhouyu@wanyeetech.com, hns@goldelico.com, chris.chenfeiyang@gmail.com, Yanteng Si
Subject: [PATCH v3 1/2] MIPS: convert syscall to generic entry
Date: Thu, 14 Oct 2021 16:32:53 +0800
Message-Id: <31a97087b56c703606b8d871ac35d2192928fe6b.1634177547.git.chenfeiyang@loongson.cn>
X-Mailer: git-send-email 2.33.0
X-Mailing-List: linux-mips@vger.kernel.org

Convert the MIPS syscall path to use the generic entry infrastructure from
kernel/entry/*. A few things are MIPS-specific:

- There is one syscall entry type on MIPS32 (scall32-o32) and three on
  MIPS64 (scall64-o32, scall64-n32 and scall64-n64). The dispatch is now
  done in C, so a single handler covers all of them.

- Some special syscalls (e.g. fork, clone, clone3 and sysmips) used the
  save_static_function() wrapper to save the static registers. handle_sys
  now does SAVE_STATIC before calling do_syscall(), so the
  save_static_function() wrapper can be removed.

- sigreturn/rt_sigreturn and sysmips used inline assembly to jump straight
  to syscall_exit, skipping the error-flag handling and the restore of all
  registers. regs->regs[27] now marks whether do_syscall() should skip the
  error flag and handle_sys should restore all registers, so these
  functions can return normally, as on other architectures. A condensed
  sketch of this convention is shown below.
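For reference, a condensed sketch of that convention, distilled from the
do_syscall() hunk in arch/mips/kernel/syscall.c later in this patch
(simplified for illustration; the real code also fetches stack arguments
and selects the per-ABI syscall table):

	/* Tail of do_syscall(), simplified. regs->regs[27] != 0 means
	 * "return directly": skip the error-flag handling here and let
	 * handle_sys restore all registers (sigreturn, sysmips).
	 */
	ret = syscall_fn(regs->regs[4], regs->regs[5], regs->regs[6],
			 regs->regs[7], regs->regs[8], regs->regs[9]);

	if (!regs->regs[27]) {			/* normal return path */
		if (ret >= (unsigned long)(-EMAXERRNO - 1)) {	/* error? */
			regs->regs[0] = nr;	/* save nr for syscall restarting */
			regs->regs[7] = 1;	/* set error flag (a3) */
			ret = -ret;
		} else {
			regs->regs[7] = 0;	/* clear error flag */
		}
	}
	regs->regs[2] = ret;			/* result in v0 */
	syscall_exit_to_user_mode(regs);
	return regs->regs[27];			/* nonzero: restore all registers */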
Signed-off-by: Feiyang Chen Signed-off-by: Yanteng Si Reviewed-by: Huacai Chen --- arch/mips/Kconfig | 1 + arch/mips/include/asm/entry-common.h | 13 ++ arch/mips/include/asm/ptrace.h | 8 +- arch/mips/include/asm/sim.h | 70 ------- arch/mips/include/asm/syscall.h | 5 + arch/mips/include/asm/thread_info.h | 17 +- arch/mips/include/uapi/asm/ptrace.h | 7 +- arch/mips/kernel/Makefile | 14 +- arch/mips/kernel/entry.S | 75 ++------ arch/mips/kernel/linux32.c | 1 - arch/mips/kernel/ptrace.c | 78 -------- arch/mips/kernel/scall.S | 137 +++++++++++++ arch/mips/kernel/scall32-o32.S | 223 ---------------------- arch/mips/kernel/scall64-n32.S | 107 ----------- arch/mips/kernel/scall64-n64.S | 116 ----------- arch/mips/kernel/scall64-o32.S | 221 --------------------- arch/mips/kernel/signal.c | 37 ++-- arch/mips/kernel/signal_n32.c | 16 +- arch/mips/kernel/signal_o32.c | 31 +-- arch/mips/kernel/syscall.c | 148 +++++++++++--- arch/mips/kernel/syscalls/syscall_n32.tbl | 8 +- arch/mips/kernel/syscalls/syscall_n64.tbl | 8 +- arch/mips/kernel/syscalls/syscall_o32.tbl | 8 +- 23 files changed, 354 insertions(+), 995 deletions(-) create mode 100644 arch/mips/include/asm/entry-common.h delete mode 100644 arch/mips/include/asm/sim.h create mode 100644 arch/mips/kernel/scall.S delete mode 100644 arch/mips/kernel/scall32-o32.S delete mode 100644 arch/mips/kernel/scall64-n32.S delete mode 100644 arch/mips/kernel/scall64-n64.S delete mode 100644 arch/mips/kernel/scall64-o32.S diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index 1291774a2fa5..debd125100ad 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -32,6 +32,7 @@ config MIPS select GENERIC_ATOMIC64 if !64BIT select GENERIC_CMOS_UPDATE select GENERIC_CPU_AUTOPROBE + select GENERIC_ENTRY select GENERIC_GETTIMEOFDAY select GENERIC_IOMAP select GENERIC_IRQ_PROBE diff --git a/arch/mips/include/asm/entry-common.h b/arch/mips/include/asm/entry-common.h new file mode 100644 index 000000000000..0fe2a098ded9 --- /dev/null +++ b/arch/mips/include/asm/entry-common.h @@ -0,0 +1,13 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef ARCH_LOONGARCH_ENTRY_COMMON_H +#define ARCH_LOONGARCH_ENTRY_COMMON_H + +#include +#include + +static inline bool on_thread_stack(void) +{ + return !(((unsigned long)(current->stack) ^ current_stack_pointer) & ~(THREAD_SIZE - 1)); +} + +#endif diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h index daf3cf244ea9..1b8f9d2ddc44 100644 --- a/arch/mips/include/asm/ptrace.h +++ b/arch/mips/include/asm/ptrace.h @@ -51,6 +51,11 @@ struct pt_regs { unsigned long __last[0]; } __aligned(8); +static inline int regs_irqs_disabled(struct pt_regs *regs) +{ + return arch_irqs_disabled_flags(regs->cp0_status); +} + static inline unsigned long kernel_stack_pointer(struct pt_regs *regs) { return regs->regs[29]; @@ -156,9 +161,6 @@ static inline long regs_return_value(struct pt_regs *regs) #define instruction_pointer(regs) ((regs)->cp0_epc) #define profile_pc(regs) instruction_pointer(regs) -extern asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall); -extern asmlinkage void syscall_trace_leave(struct pt_regs *regs); - extern void die(const char *, struct pt_regs *) __noreturn; static inline void die_if_kernel(const char *str, struct pt_regs *regs) diff --git a/arch/mips/include/asm/sim.h b/arch/mips/include/asm/sim.h deleted file mode 100644 index 59f31a95facd..000000000000 --- a/arch/mips/include/asm/sim.h +++ /dev/null @@ -1,70 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU 
General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1999, 2000, 2003 Ralf Baechle - * Copyright (C) 1999, 2000 Silicon Graphics, Inc. - */ -#ifndef _ASM_SIM_H -#define _ASM_SIM_H - - -#include - -#define __str2(x) #x -#define __str(x) __str2(x) - -#ifdef CONFIG_32BIT - -#define save_static_function(symbol) \ -__asm__( \ - ".text\n\t" \ - ".globl\t__" #symbol "\n\t" \ - ".align\t2\n\t" \ - ".type\t__" #symbol ", @function\n\t" \ - ".ent\t__" #symbol ", 0\n__" \ - #symbol":\n\t" \ - ".frame\t$29, 0, $31\n\t" \ - "sw\t$16,"__str(PT_R16)"($29)\t\t\t# save_static_function\n\t" \ - "sw\t$17,"__str(PT_R17)"($29)\n\t" \ - "sw\t$18,"__str(PT_R18)"($29)\n\t" \ - "sw\t$19,"__str(PT_R19)"($29)\n\t" \ - "sw\t$20,"__str(PT_R20)"($29)\n\t" \ - "sw\t$21,"__str(PT_R21)"($29)\n\t" \ - "sw\t$22,"__str(PT_R22)"($29)\n\t" \ - "sw\t$23,"__str(PT_R23)"($29)\n\t" \ - "sw\t$30,"__str(PT_R30)"($29)\n\t" \ - "j\t" #symbol "\n\t" \ - ".end\t__" #symbol "\n\t" \ - ".size\t__" #symbol",. - __" #symbol) - -#endif /* CONFIG_32BIT */ - -#ifdef CONFIG_64BIT - -#define save_static_function(symbol) \ -__asm__( \ - ".text\n\t" \ - ".globl\t__" #symbol "\n\t" \ - ".align\t2\n\t" \ - ".type\t__" #symbol ", @function\n\t" \ - ".ent\t__" #symbol ", 0\n__" \ - #symbol":\n\t" \ - ".frame\t$29, 0, $31\n\t" \ - "sd\t$16,"__str(PT_R16)"($29)\t\t\t# save_static_function\n\t" \ - "sd\t$17,"__str(PT_R17)"($29)\n\t" \ - "sd\t$18,"__str(PT_R18)"($29)\n\t" \ - "sd\t$19,"__str(PT_R19)"($29)\n\t" \ - "sd\t$20,"__str(PT_R20)"($29)\n\t" \ - "sd\t$21,"__str(PT_R21)"($29)\n\t" \ - "sd\t$22,"__str(PT_R22)"($29)\n\t" \ - "sd\t$23,"__str(PT_R23)"($29)\n\t" \ - "sd\t$30,"__str(PT_R30)"($29)\n\t" \ - "j\t" #symbol "\n\t" \ - ".end\t__" #symbol "\n\t" \ - ".size\t__" #symbol",. 
- __" #symbol) - -#endif /* CONFIG_64BIT */ - -#endif /* _ASM_SIM_H */ diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h index 25fa651c937d..02ca0d659428 100644 --- a/arch/mips/include/asm/syscall.h +++ b/arch/mips/include/asm/syscall.h @@ -157,4 +157,9 @@ static inline int syscall_get_arch(struct task_struct *task) return arch; } +static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs) +{ + return false; +} + #endif /* __ASM_MIPS_SYSCALL_H */ diff --git a/arch/mips/include/asm/thread_info.h b/arch/mips/include/asm/thread_info.h index 0b17aaa9e012..5a5237413065 100644 --- a/arch/mips/include/asm/thread_info.h +++ b/arch/mips/include/asm/thread_info.h @@ -29,7 +29,8 @@ struct thread_info { __u32 cpu; /* current CPU */ int preempt_count; /* 0 => preemptable, <0 => BUG */ struct pt_regs *regs; - long syscall; /* syscall number */ + unsigned long syscall; /* syscall number */ + unsigned long syscall_work; /* SYSCALL_WORK_ flags */ }; /* @@ -69,6 +70,8 @@ static inline struct thread_info *current_thread_info(void) return __current_thread_info; } +register unsigned long current_stack_pointer __asm__("$29"); + #endif /* !__ASSEMBLY__ */ /* thread information allocation */ @@ -149,22 +152,10 @@ static inline struct thread_info *current_thread_info(void) #define _TIF_MSA_CTX_LIVE (1<work - li t0, _TIF_ALLWORK_MASK - and t0, a2, t0 - bnez t0, syscall_exit_work + jal syscall_exit_to_user_mode -restore_all: # restore full frame .set noat - RESTORE_TEMP - RESTORE_AT RESTORE_STATIC -restore_partial: # restore partial frame -#ifdef CONFIG_TRACE_IRQFLAGS - SAVE_STATIC - SAVE_AT - SAVE_TEMP - LONG_L v0, PT_STATUS(sp) -#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX) - and v0, ST0_IEP -#else - and v0, ST0_IE -#endif - beqz v0, 1f - jal trace_hardirqs_on - b 2f -1: jal trace_hardirqs_off -2: + RESTORE_SOME + RESTORE_SP_AND_RET + .set at + +restore_all: # restore full frame + .set noat RESTORE_TEMP RESTORE_AT RESTORE_STATIC -#endif +restore_partial: # restore partial frame RESTORE_SOME RESTORE_SP_AND_RET .set at @@ -143,32 +126,6 @@ work_notifysig: # deal with pending signals and jal do_notify_resume # a2 already loaded j resume_userspace_check -FEXPORT(syscall_exit_partial) -#ifdef CONFIG_DEBUG_RSEQ - move a0, sp - jal rseq_syscall -#endif - local_irq_disable # make sure need_resched doesn't - # change between and return - LONG_L a2, TI_FLAGS($28) # current->work - li t0, _TIF_ALLWORK_MASK - and t0, a2 - beqz t0, restore_partial - SAVE_STATIC -syscall_exit_work: - LONG_L t0, PT_STATUS(sp) # returning to kernel mode? - andi t0, t0, KU_USER - beqz t0, resume_kernel - li t0, _TIF_WORK_SYSCALL_EXIT - and t0, a2 # a2 is preloaded with TI_FLAGS - beqz t0, work_pending # trace bit set? 
- local_irq_enable # could let syscall_trace_leave() - # call schedule() instead - TRACE_IRQS_ON - move a0, sp - jal syscall_trace_leave - b resume_userspace - #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR5) || \ defined(CONFIG_CPU_MIPSR6) || defined(CONFIG_MIPS_MT) diff --git a/arch/mips/kernel/linux32.c b/arch/mips/kernel/linux32.c index 6b61be486303..2b4b1fc1ff1b 100644 --- a/arch/mips/kernel/linux32.c +++ b/arch/mips/kernel/linux32.c @@ -38,7 +38,6 @@ #include #include -#include #include #include #include diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c index db7c5be1d4a3..04c08e41cfd3 100644 --- a/arch/mips/kernel/ptrace.c +++ b/arch/mips/kernel/ptrace.c @@ -46,9 +46,6 @@ #include #include -#define CREATE_TRACE_POINTS -#include - /* * Called by kernel/ptrace.c when detaching.. * @@ -1305,78 +1302,3 @@ long arch_ptrace(struct task_struct *child, long request, out: return ret; } - -/* - * Notification of system call entry/exit - * - triggered by current->work.syscall_trace - */ -asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall) -{ - user_exit(); - - current_thread_info()->syscall = syscall; - - if (test_thread_flag(TIF_SYSCALL_TRACE)) { - if (tracehook_report_syscall_entry(regs)) - return -1; - syscall = current_thread_info()->syscall; - } - -#ifdef CONFIG_SECCOMP - if (unlikely(test_thread_flag(TIF_SECCOMP))) { - int ret, i; - struct seccomp_data sd; - unsigned long args[6]; - - sd.nr = syscall; - sd.arch = syscall_get_arch(current); - syscall_get_arguments(current, regs, args); - for (i = 0; i < 6; i++) - sd.args[i] = args[i]; - sd.instruction_pointer = KSTK_EIP(current); - - ret = __secure_computing(&sd); - if (ret == -1) - return ret; - syscall = current_thread_info()->syscall; - } -#endif - - if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT))) - trace_sys_enter(regs, regs->regs[2]); - - audit_syscall_entry(syscall, regs->regs[4], regs->regs[5], - regs->regs[6], regs->regs[7]); - - /* - * Negative syscall numbers are mistaken for rejected syscalls, but - * won't have had the return value set appropriately, so we do so now. - */ - if (syscall < 0) - syscall_set_return_value(current, regs, -ENOSYS, 0); - return syscall; -} - -/* - * Notification of system call entry/exit - * - triggered by current->work.syscall_trace - */ -asmlinkage void syscall_trace_leave(struct pt_regs *regs) -{ - /* - * We may come here right after calling schedule_user() - * or do_notify_resume(), in which case we can be in RCU - * user mode. - */ - user_exit(); - - audit_syscall_exit(regs); - - if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT))) - trace_sys_exit(regs, regs_return_value(regs)); - - if (test_thread_flag(TIF_SYSCALL_TRACE)) - tracehook_report_syscall_exit(regs, 0); - - user_enter(); -} diff --git a/arch/mips/kernel/scall.S b/arch/mips/kernel/scall.S new file mode 100644 index 000000000000..fae8d99f0458 --- /dev/null +++ b/arch/mips/kernel/scall.S @@ -0,0 +1,137 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 1995, 96, 97, 98, 99, 2000, 01, 02 by Ralf Baechle + * Copyright (C) 1999, 2000 Silicon Graphics, Inc. + * Copyright (C) 2001 MIPS Technologies, Inc. + * Copyright (C) 2004 Thiemo Seufer + * Copyright (C) 2014 Imagination Technologies Ltd. 
+ */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + .align 5 +NESTED(handle_sys, PT_SIZE, sp) + .set noat + SAVE_SOME + SAVE_STATIC + CLI + .set at + + move a0, sp + jal do_syscall + beqz v0, 1f # restore all registers? + nop + + .set noat + RESTORE_TEMP + RESTORE_STATIC + RESTORE_AT +1: RESTORE_SOME + RESTORE_SP_AND_RET + .set at + END(handle_sys) + +#ifdef CONFIG_32BIT +LEAF(sys_syscall) + subu t0, a0, __NR_O32_Linux # check syscall number + sltiu v0, t0, __NR_O32_Linux_syscalls + beqz t0, einval # do not recurse + sll t1, t0, 2 + beqz v0, einval + lw t2, sys_call_table(t1) # syscall routine + + move a0, a1 # shift argument registers + move a1, a2 + move a2, a3 + lw a3, 16(sp) + lw t4, 20(sp) + lw t5, 24(sp) + lw t6, 28(sp) + sw t4, 16(sp) + sw t5, 20(sp) + sw t6, 24(sp) + jr t2 + /* Unreached */ + +einval: li v0, -ENOSYS + jr ra + END(sys_syscall) + +#ifdef CONFIG_MIPS_MT_FPAFF + /* + * For FPU affinity scheduling on MIPS MT processors, we need to + * intercept sys_sched_xxxaffinity() calls until we get a proper hook + * in kernel/sched/core.c. Considered only temporary we only support + * these hooks for the 32-bit kernel - there is no MIPS64 MT processor + * atm. + */ +#define sys_sched_setaffinity mipsmt_sys_sched_setaffinity +#define sys_sched_getaffinity mipsmt_sys_sched_getaffinity +#endif /* CONFIG_MIPS_MT_FPAFF */ + +#define __SYSCALL_WITH_COMPAT(nr, native, compat) __SYSCALL(nr, native) +#define __SYSCALL(nr, entry) PTR entry + .align 2 + .type sys_call_table, @object +EXPORT(sys_call_table) +#include +#endif /* CONFIG_32BIT */ + +#ifdef CONFIG_64BIT +#ifdef CONFIG_MIPS32_O32 +LEAF(sys32_syscall) + subu t0, a0, __NR_O32_Linux # check syscall number + sltiu v0, t0, __NR_O32_Linux_syscalls + beqz t0, einval # do not recurse + dsll t1, t0, 3 + beqz v0, einval + ld t2, sys32_call_table(t1) # syscall routine + + move a0, a1 # shift argument registers + move a1, a2 + move a2, a3 + move a3, a4 + move a4, a5 + move a5, a6 + move a6, a7 + jr t2 + /* Unreached */ + +einval: li v0, -ENOSYS + jr ra + END(sys32_syscall) + +#define __SYSCALL_WITH_COMPAT(nr, native, compat) __SYSCALL(nr, compat) +#define __SYSCALL(nr, entry) PTR entry + .align 3 + .type sys32_call_table,@object +EXPORT(sys32_call_table) +#include +#endif /* CONFIG_MIPS32_O32 */ + +#ifdef CONFIG_MIPS32_N32 +#undef __SYSCALL +#define __SYSCALL(nr, entry) PTR entry + .align 3 + .type sysn32_call_table, @object +EXPORT(sysn32_call_table) +#include +#endif /* CONFIG_MIPS32_N32 */ + +#undef __SYSCALL +#define __SYSCALL(nr, entry) PTR entry + .align 3 + .type sys_call_table, @object +EXPORT(sys_call_table) +#include +#endif /* CONFIG_64BIT */ diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S deleted file mode 100644 index b1b2e106f711..000000000000 --- a/arch/mips/kernel/scall32-o32.S +++ /dev/null @@ -1,223 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1995-99, 2000- 02, 06 Ralf Baechle - * Copyright (C) 2001 MIPS Technologies, Inc. - * Copyright (C) 2004 Thiemo Seufer - * Copyright (C) 2014 Imagination Technologies Ltd. 
- */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - - .align 5 -NESTED(handle_sys, PT_SIZE, sp) - .set noat - SAVE_SOME - TRACE_IRQS_ON_RELOAD - STI - .set at - - lw t1, PT_EPC(sp) # skip syscall on return - - addiu t1, 4 # skip to next instruction - sw t1, PT_EPC(sp) - - sw a3, PT_R26(sp) # save a3 for syscall restarting - - /* - * More than four arguments. Try to deal with it by copying the - * stack arguments from the user stack to the kernel stack. - * This Sucks (TM). - */ - lw t0, PT_R29(sp) # get old user stack pointer - - /* - * We intentionally keep the kernel stack a little below the top of - * userspace so we don't have to do a slower byte accurate check here. - */ - addu t4, t0, 32 - bltz t4, bad_stack # -> sp is bad - - /* - * Ok, copy the args from the luser stack to the kernel stack. - */ - - .set push - .set noreorder - .set nomacro - -load_a4: user_lw(t5, 16(t0)) # argument #5 from usp -load_a5: user_lw(t6, 20(t0)) # argument #6 from usp -load_a6: user_lw(t7, 24(t0)) # argument #7 from usp -load_a7: user_lw(t8, 28(t0)) # argument #8 from usp -loads_done: - - sw t5, 16(sp) # argument #5 to ksp - sw t6, 20(sp) # argument #6 to ksp - sw t7, 24(sp) # argument #7 to ksp - sw t8, 28(sp) # argument #8 to ksp - .set pop - - .section __ex_table,"a" - PTR load_a4, bad_stack_a4 - PTR load_a5, bad_stack_a5 - PTR load_a6, bad_stack_a6 - PTR load_a7, bad_stack_a7 - .previous - - lw t0, TI_FLAGS($28) # syscall tracing enabled? - li t1, _TIF_WORK_SYSCALL_ENTRY - and t0, t1 - bnez t0, syscall_trace_entry # -> yes -syscall_common: - subu v0, v0, __NR_O32_Linux # check syscall number - sltiu t0, v0, __NR_O32_Linux_syscalls - beqz t0, illegal_syscall - - sll t0, v0, 2 - la t1, sys_call_table - addu t1, t0 - lw t2, (t1) # syscall routine - - beqz t2, illegal_syscall - - jalr t2 # Do The Real Thing (TM) - - li t0, -EMAXERRNO - 1 # error? - sltu t0, t0, v0 - sw t0, PT_R7(sp) # set error flag - beqz t0, 1f - - lw t1, PT_R2(sp) # syscall number - negu v0 # error - sw t1, PT_R0(sp) # save it for syscall restarting -1: sw v0, PT_R2(sp) # result - -o32_syscall_exit: - j syscall_exit_partial - -/* ------------------------------------------------------------------------ */ - -syscall_trace_entry: - SAVE_STATIC - move a0, sp - - /* - * syscall number is in v0 unless we called syscall(__NR_###) - * where the real syscall number is in a0 - */ - move a1, v0 - subu t2, v0, __NR_O32_Linux - bnez t2, 1f /* __NR_syscall at offset 0 */ - lw a1, PT_R4(sp) - -1: jal syscall_trace_enter - - bltz v0, 1f # seccomp failed? Skip syscall - - RESTORE_STATIC - lw v0, PT_R2(sp) # Restore syscall (maybe modified) - lw a0, PT_R4(sp) # Restore argument registers - lw a1, PT_R5(sp) - lw a2, PT_R6(sp) - lw a3, PT_R7(sp) - j syscall_common - -1: j syscall_exit - -/* ------------------------------------------------------------------------ */ - - /* - * Our open-coded access area sanity test for the stack pointer - * failed. We probably should handle this case a bit more drastic. 
- */ -bad_stack: - li v0, EFAULT - sw v0, PT_R2(sp) - li t0, 1 # set error flag - sw t0, PT_R7(sp) - j o32_syscall_exit - -bad_stack_a4: - li t5, 0 - b load_a5 - -bad_stack_a5: - li t6, 0 - b load_a6 - -bad_stack_a6: - li t7, 0 - b load_a7 - -bad_stack_a7: - li t8, 0 - b loads_done - - /* - * The system call does not exist in this kernel - */ -illegal_syscall: - li v0, ENOSYS # error - sw v0, PT_R2(sp) - li t0, 1 # set error flag - sw t0, PT_R7(sp) - j o32_syscall_exit - END(handle_sys) - - LEAF(sys_syscall) - subu t0, a0, __NR_O32_Linux # check syscall number - sltiu v0, t0, __NR_O32_Linux_syscalls - beqz t0, einval # do not recurse - sll t1, t0, 2 - beqz v0, einval - lw t2, sys_call_table(t1) # syscall routine - - move a0, a1 # shift argument registers - move a1, a2 - move a2, a3 - lw a3, 16(sp) - lw t4, 20(sp) - lw t5, 24(sp) - lw t6, 28(sp) - sw t4, 16(sp) - sw t5, 20(sp) - sw t6, 24(sp) - jr t2 - /* Unreached */ - -einval: li v0, -ENOSYS - jr ra - END(sys_syscall) - -#ifdef CONFIG_MIPS_MT_FPAFF - /* - * For FPU affinity scheduling on MIPS MT processors, we need to - * intercept sys_sched_xxxaffinity() calls until we get a proper hook - * in kernel/sched/core.c. Considered only temporary we only support - * these hooks for the 32-bit kernel - there is no MIPS64 MT processor - * atm. - */ -#define sys_sched_setaffinity mipsmt_sys_sched_setaffinity -#define sys_sched_getaffinity mipsmt_sys_sched_getaffinity -#endif /* CONFIG_MIPS_MT_FPAFF */ - -#define __SYSCALL_WITH_COMPAT(nr, native, compat) __SYSCALL(nr, native) -#define __SYSCALL(nr, entry) PTR entry - .align 2 - .type sys_call_table, @object -EXPORT(sys_call_table) -#include diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S deleted file mode 100644 index f650c55a17dc..000000000000 --- a/arch/mips/kernel/scall64-n32.S +++ /dev/null @@ -1,107 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1995, 96, 97, 98, 99, 2000, 01 by Ralf Baechle - * Copyright (C) 1999, 2000 Silicon Graphics, Inc. - * Copyright (C) 2001 MIPS Technologies, Inc. - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#ifndef CONFIG_MIPS32_O32 -/* No O32, so define handle_sys here */ -#define handle_sysn32 handle_sys -#endif - - .align 5 -NESTED(handle_sysn32, PT_SIZE, sp) -#ifndef CONFIG_MIPS32_O32 - .set noat - SAVE_SOME - TRACE_IRQS_ON_RELOAD - STI - .set at -#endif - - dsubu t0, v0, __NR_N32_Linux # check syscall number - sltiu t0, t0, __NR_N32_Linux_syscalls - -#ifndef CONFIG_MIPS32_O32 - ld t1, PT_EPC(sp) # skip syscall on return - daddiu t1, 4 # skip to next instruction - sd t1, PT_EPC(sp) -#endif - beqz t0, not_n32_scall - - sd a3, PT_R26(sp) # save a3 for syscall restarting - - li t1, _TIF_WORK_SYSCALL_ENTRY - LONG_L t0, TI_FLAGS($28) # syscall tracing enabled? - and t0, t1, t0 - bnez t0, n32_syscall_trace_entry - -syscall_common: - dsll t0, v0, 3 # offset into table - ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0) - - jalr t2 # Do The Real Thing (TM) - - li t0, -EMAXERRNO - 1 # error? 
- sltu t0, t0, v0 - sd t0, PT_R7(sp) # set error flag - beqz t0, 1f - - ld t1, PT_R2(sp) # syscall number - dnegu v0 # error - sd t1, PT_R0(sp) # save it for syscall restarting -1: sd v0, PT_R2(sp) # result - - j syscall_exit_partial - -/* ------------------------------------------------------------------------ */ - -n32_syscall_trace_entry: - SAVE_STATIC - move a0, sp - move a1, v0 - jal syscall_trace_enter - - bltz v0, 1f # seccomp failed? Skip syscall - - RESTORE_STATIC - ld v0, PT_R2(sp) # Restore syscall (maybe modified) - ld a0, PT_R4(sp) # Restore argument registers - ld a1, PT_R5(sp) - ld a2, PT_R6(sp) - ld a3, PT_R7(sp) - ld a4, PT_R8(sp) - ld a5, PT_R9(sp) - - dsubu t2, v0, __NR_N32_Linux # check (new) syscall number - sltiu t0, t2, __NR_N32_Linux_syscalls - beqz t0, not_n32_scall - - j syscall_common - -1: j syscall_exit - -not_n32_scall: - /* This is not an n32 compatibility syscall, pass it on to - the n64 syscall handlers. */ - j handle_sys64 - - END(handle_sysn32) - -#define __SYSCALL(nr, entry) PTR entry - .type sysn32_call_table, @object -EXPORT(sysn32_call_table) -#include diff --git a/arch/mips/kernel/scall64-n64.S b/arch/mips/kernel/scall64-n64.S deleted file mode 100644 index 5d7bfc65e4d0..000000000000 --- a/arch/mips/kernel/scall64-n64.S +++ /dev/null @@ -1,116 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1995, 96, 97, 98, 99, 2000, 01, 02 by Ralf Baechle - * Copyright (C) 1999, 2000 Silicon Graphics, Inc. - * Copyright (C) 2001 MIPS Technologies, Inc. - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#ifndef CONFIG_MIPS32_COMPAT -/* Neither O32 nor N32, so define handle_sys here */ -#define handle_sys64 handle_sys -#endif - - .align 5 -NESTED(handle_sys64, PT_SIZE, sp) -#if !defined(CONFIG_MIPS32_O32) && !defined(CONFIG_MIPS32_N32) - /* - * When 32-bit compatibility is configured scall_o32.S - * already did this. - */ - .set noat - SAVE_SOME - TRACE_IRQS_ON_RELOAD - STI - .set at -#endif - -#if !defined(CONFIG_MIPS32_O32) && !defined(CONFIG_MIPS32_N32) - ld t1, PT_EPC(sp) # skip syscall on return - daddiu t1, 4 # skip to next instruction - sd t1, PT_EPC(sp) -#endif - - sd a3, PT_R26(sp) # save a3 for syscall restarting - - li t1, _TIF_WORK_SYSCALL_ENTRY - LONG_L t0, TI_FLAGS($28) # syscall tracing enabled? - and t0, t1, t0 - bnez t0, syscall_trace_entry - -syscall_common: - dsubu t2, v0, __NR_64_Linux - sltiu t0, t2, __NR_64_Linux_syscalls - beqz t0, illegal_syscall - - dsll t0, t2, 3 # offset into table - dla t2, sys_call_table - daddu t0, t2, t0 - ld t2, (t0) # syscall routine - beqz t2, illegal_syscall - - jalr t2 # Do The Real Thing (TM) - - li t0, -EMAXERRNO - 1 # error? - sltu t0, t0, v0 - sd t0, PT_R7(sp) # set error flag - beqz t0, 1f - - ld t1, PT_R2(sp) # syscall number - dnegu v0 # error - sd t1, PT_R0(sp) # save it for syscall restarting -1: sd v0, PT_R2(sp) # result - -n64_syscall_exit: - j syscall_exit_partial - -/* ------------------------------------------------------------------------ */ - -syscall_trace_entry: - SAVE_STATIC - move a0, sp - move a1, v0 - jal syscall_trace_enter - - bltz v0, 1f # seccomp failed? 
Skip syscall - - RESTORE_STATIC - ld v0, PT_R2(sp) # Restore syscall (maybe modified) - ld a0, PT_R4(sp) # Restore argument registers - ld a1, PT_R5(sp) - ld a2, PT_R6(sp) - ld a3, PT_R7(sp) - ld a4, PT_R8(sp) - ld a5, PT_R9(sp) - j syscall_common - -1: j syscall_exit - -illegal_syscall: - /* This also isn't a 64-bit syscall, throw an error. */ - li v0, ENOSYS # error - sd v0, PT_R2(sp) - li t0, 1 # set error flag - sd t0, PT_R7(sp) - j n64_syscall_exit - END(handle_sys64) - -#define __SYSCALL(nr, entry) PTR entry - .align 3 - .type sys_call_table, @object -EXPORT(sys_call_table) -#include diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S deleted file mode 100644 index cedc8bd88804..000000000000 --- a/arch/mips/kernel/scall64-o32.S +++ /dev/null @@ -1,221 +0,0 @@ -/* - * This file is subject to the terms and conditions of the GNU General Public - * License. See the file "COPYING" in the main directory of this archive - * for more details. - * - * Copyright (C) 1995 - 2000, 2001 by Ralf Baechle - * Copyright (C) 1999, 2000 Silicon Graphics, Inc. - * Copyright (C) 2001 MIPS Technologies, Inc. - * Copyright (C) 2004 Thiemo Seufer - * - * Hairy, the userspace application uses a different argument passing - * convention than the kernel, so we have to translate things from o32 - * to ABI64 calling convention. 64-bit syscalls are also processed - * here for now. - */ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - - .align 5 -NESTED(handle_sys, PT_SIZE, sp) - .set noat - SAVE_SOME - TRACE_IRQS_ON_RELOAD - STI - .set at - ld t1, PT_EPC(sp) # skip syscall on return - - dsubu t0, v0, __NR_O32_Linux # check syscall number - sltiu t0, t0, __NR_O32_Linux_syscalls - daddiu t1, 4 # skip to next instruction - sd t1, PT_EPC(sp) - beqz t0, not_o32_scall -#if 0 - SAVE_ALL - move a1, v0 - ASM_PRINT("Scall %ld\n") - RESTORE_ALL -#endif - - /* We don't want to stumble over broken sign extensions from - userland. O32 does never use the upper half. */ - sll a0, a0, 0 - sll a1, a1, 0 - sll a2, a2, 0 - sll a3, a3, 0 - - sd a3, PT_R26(sp) # save a3 for syscall restarting - - /* - * More than four arguments. Try to deal with it by copying the - * stack arguments from the user stack to the kernel stack. - * This Sucks (TM). - * - * We intentionally keep the kernel stack a little below the top of - * userspace so we don't have to do a slower byte accurate check here. - */ - ld t0, PT_R29(sp) # get old user stack pointer - daddu t1, t0, 32 - bltz t1, bad_stack - -load_a4: lw a4, 16(t0) # argument #5 from usp -load_a5: lw a5, 20(t0) # argument #6 from usp -load_a6: lw a6, 24(t0) # argument #7 from usp -load_a7: lw a7, 28(t0) # argument #8 from usp -loads_done: - - .section __ex_table,"a" - PTR load_a4, bad_stack_a4 - PTR load_a5, bad_stack_a5 - PTR load_a6, bad_stack_a6 - PTR load_a7, bad_stack_a7 - .previous - - li t1, _TIF_WORK_SYSCALL_ENTRY - LONG_L t0, TI_FLAGS($28) # syscall tracing enabled? - and t0, t1, t0 - bnez t0, trace_a_syscall - -syscall_common: - dsll t0, v0, 3 # offset into table - ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0) - - jalr t2 # Do The Real Thing (TM) - - li t0, -EMAXERRNO - 1 # error? 
- sltu t0, t0, v0 - sd t0, PT_R7(sp) # set error flag - beqz t0, 1f - - ld t1, PT_R2(sp) # syscall number - dnegu v0 # error - sd t1, PT_R0(sp) # save it for syscall restarting -1: sd v0, PT_R2(sp) # result - -o32_syscall_exit: - j syscall_exit_partial - -/* ------------------------------------------------------------------------ */ - -trace_a_syscall: - SAVE_STATIC - sd a4, PT_R8(sp) # Save argument registers - sd a5, PT_R9(sp) - sd a6, PT_R10(sp) - sd a7, PT_R11(sp) # For indirect syscalls - - move a0, sp - /* - * absolute syscall number is in v0 unless we called syscall(__NR_###) - * where the real syscall number is in a0 - * note: NR_syscall is the first O32 syscall but the macro is - * only defined when compiling with -mabi=32 (CONFIG_32BIT) - * therefore __NR_O32_Linux is used (4000) - */ - .set push - .set reorder - subu t1, v0, __NR_O32_Linux - move a1, v0 - bnez t1, 1f /* __NR_syscall at offset 0 */ - ld a1, PT_R4(sp) /* Arg1 for __NR_syscall case */ - .set pop - -1: jal syscall_trace_enter - - bltz v0, 1f # seccomp failed? Skip syscall - - RESTORE_STATIC - ld v0, PT_R2(sp) # Restore syscall (maybe modified) - ld a0, PT_R4(sp) # Restore argument registers - ld a1, PT_R5(sp) - ld a2, PT_R6(sp) - ld a3, PT_R7(sp) - ld a4, PT_R8(sp) - ld a5, PT_R9(sp) - ld a6, PT_R10(sp) - ld a7, PT_R11(sp) # For indirect syscalls - - dsubu t0, v0, __NR_O32_Linux # check (new) syscall number - sltiu t0, t0, __NR_O32_Linux_syscalls - beqz t0, not_o32_scall - - j syscall_common - -1: j syscall_exit - -/* ------------------------------------------------------------------------ */ - - /* - * The stackpointer for a call with more than 4 arguments is bad. - */ -bad_stack: - li v0, EFAULT - sd v0, PT_R2(sp) - li t0, 1 # set error flag - sd t0, PT_R7(sp) - j o32_syscall_exit - -bad_stack_a4: - li a4, 0 - b load_a5 - -bad_stack_a5: - li a5, 0 - b load_a6 - -bad_stack_a6: - li a6, 0 - b load_a7 - -bad_stack_a7: - li a7, 0 - b loads_done - -not_o32_scall: - /* - * This is not an o32 compatibility syscall, pass it on - * to the 64-bit syscall handlers. - */ -#ifdef CONFIG_MIPS32_N32 - j handle_sysn32 -#else - j handle_sys64 -#endif - END(handle_sys) - -LEAF(sys32_syscall) - subu t0, a0, __NR_O32_Linux # check syscall number - sltiu v0, t0, __NR_O32_Linux_syscalls - beqz t0, einval # do not recurse - dsll t1, t0, 3 - beqz v0, einval - ld t2, sys32_call_table(t1) # syscall routine - - move a0, a1 # shift argument registers - move a1, a2 - move a2, a3 - move a3, a4 - move a4, a5 - move a5, a6 - move a6, a7 - jr t2 - /* Unreached */ - -einval: li v0, -ENOSYS - jr ra - END(sys32_syscall) - -#define __SYSCALL_WITH_COMPAT(nr, native, compat) __SYSCALL(nr, compat) -#define __SYSCALL(nr, entry) PTR entry - .align 3 - .type sys32_call_table,@object -EXPORT(sys32_call_table) -#include diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c index c9b2a75563e1..314a6ffa0e07 100644 --- a/arch/mips/kernel/signal.c +++ b/arch/mips/kernel/signal.c @@ -32,7 +32,6 @@ #include #include #include -#include #include #include #include @@ -627,7 +626,7 @@ SYSCALL_DEFINE3(sigaction, int, sig, const struct sigaction __user *, act, #endif #ifdef CONFIG_TRAD_SIGNALS -asmlinkage void sys_sigreturn(void) +asmlinkage long sys_sigreturn(void) { struct sigframe __user *frame; struct pt_regs *regs; @@ -649,22 +648,17 @@ asmlinkage void sys_sigreturn(void) else if (sig) force_sig(sig); - /* - * Don't let your children do this ... 
- */ - __asm__ __volatile__( - "move\t$29, %0\n\t" - "j\tsyscall_exit" - : /* no outputs */ - : "r" (regs)); - /* Unreached */ + regs->regs[0] = 0; /* No syscall restarting */ + regs->regs[27] = 1; /* return directly */ + return regs->regs[2]; badframe: force_sig(SIGSEGV); + return 0; } #endif /* CONFIG_TRAD_SIGNALS */ -asmlinkage void sys_rt_sigreturn(void) +asmlinkage long sys_rt_sigreturn(void) { struct rt_sigframe __user *frame; struct pt_regs *regs; @@ -686,21 +680,16 @@ asmlinkage void sys_rt_sigreturn(void) else if (sig) force_sig(sig); + regs->regs[0] = 0; /* No syscall restarting */ if (restore_altstack(&frame->rs_uc.uc_stack)) goto badframe; - /* - * Don't let your children do this ... - */ - __asm__ __volatile__( - "move\t$29, %0\n\t" - "j\tsyscall_exit" - : /* no outputs */ - : "r" (regs)); - /* Unreached */ + regs->regs[27] = 1; /* return directly */ + return regs->regs[2]; badframe: force_sig(SIGSEGV); + return 0; } #ifdef CONFIG_TRAD_SIGNALS @@ -852,11 +841,11 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) signal_setup_done(ret, ksig, 0); } -static void do_signal(struct pt_regs *regs) +void arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal) { struct ksignal ksig; - if (get_signal(&ksig)) { + if (has_signal && get_signal(&ksig)) { /* Whee! Actually deliver the signal. */ handle_signal(&ksig, regs); return; @@ -904,7 +893,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused, /* deal with pending signal delivery */ if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) - do_signal(regs); + arch_do_signal_or_restart(regs, thread_info_flags & _TIF_SIGPENDING); if (thread_info_flags & _TIF_NOTIFY_RESUME) tracehook_notify_resume(regs); diff --git a/arch/mips/kernel/signal_n32.c b/arch/mips/kernel/signal_n32.c index 7bd00fad61af..e282f14747d2 100644 --- a/arch/mips/kernel/signal_n32.c +++ b/arch/mips/kernel/signal_n32.c @@ -19,7 +19,6 @@ #include #include #include -#include #include #include #include @@ -51,7 +50,7 @@ struct rt_sigframe_n32 { struct ucontextn32 rs_uc; }; -asmlinkage void sysn32_rt_sigreturn(void) +asmlinkage long sysn32_rt_sigreturn(void) { struct rt_sigframe_n32 __user *frame; struct pt_regs *regs; @@ -73,21 +72,16 @@ asmlinkage void sysn32_rt_sigreturn(void) else if (sig) force_sig(sig); + regs->regs[0] = 0; /* No syscall restarting */ if (compat_restore_altstack(&frame->rs_uc.uc_stack)) goto badframe; - /* - * Don't let your children do this ... 
- */ - __asm__ __volatile__( - "move\t$29, %0\n\t" - "j\tsyscall_exit" - : /* no outputs */ - : "r" (regs)); - /* Unreached */ + regs->regs[27] = 1; /* return directly */ + return regs->regs[2]; badframe: force_sig(SIGSEGV); + return 0; } static int setup_rt_frame_n32(void *sig_return, struct ksignal *ksig, diff --git a/arch/mips/kernel/signal_o32.c b/arch/mips/kernel/signal_o32.c index 299a7a28ca33..ea6d12dd15cc 100644 --- a/arch/mips/kernel/signal_o32.c +++ b/arch/mips/kernel/signal_o32.c @@ -17,7 +17,6 @@ #include #include #include -#include #include #include "signal-common.h" @@ -151,7 +150,7 @@ static int setup_frame_32(void *sig_return, struct ksignal *ksig, return 0; } -asmlinkage void sys32_rt_sigreturn(void) +asmlinkage long sys32_rt_sigreturn(void) { struct rt_sigframe32 __user *frame; struct pt_regs *regs; @@ -173,21 +172,16 @@ asmlinkage void sys32_rt_sigreturn(void) else if (sig) force_sig(sig); + regs->regs[0] = 0; /* No syscall restarting */ if (compat_restore_altstack(&frame->rs_uc.uc_stack)) goto badframe; - /* - * Don't let your children do this ... - */ - __asm__ __volatile__( - "move\t$29, %0\n\t" - "j\tsyscall_exit" - : /* no outputs */ - : "r" (regs)); - /* Unreached */ + regs->regs[27] = 1; /* return directly */ + return regs->regs[2]; badframe: force_sig(SIGSEGV); + return 0; } static int setup_rt_frame_32(void *sig_return, struct ksignal *ksig, @@ -253,7 +247,7 @@ struct mips_abi mips_abi_32 = { }; -asmlinkage void sys32_sigreturn(void) +asmlinkage long sys32_sigreturn(void) { struct sigframe32 __user *frame; struct pt_regs *regs; @@ -275,16 +269,11 @@ asmlinkage void sys32_sigreturn(void) else if (sig) force_sig(sig); - /* - * Don't let your children do this ... - */ - __asm__ __volatile__( - "move\t$29, %0\n\t" - "j\tsyscall_exit" - : /* no outputs */ - : "r" (regs)); - /* Unreached */ + regs->regs[0] = 0; /* No syscall restarting */ + regs->regs[27] = 1; /* return directly */ + return regs->regs[2]; badframe: force_sig(SIGSEGV); + return 0; } diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c index 2afa3eef486a..2653f82a8c99 100644 --- a/arch/mips/kernel/syscall.c +++ b/arch/mips/kernel/syscall.c @@ -8,6 +8,7 @@ * Copyright (C) 2001 MIPS Technologies, Inc. */ #include +#include #include #include #include @@ -35,9 +36,9 @@ #include #include #include -#include #include #include +#include #include #include @@ -79,10 +80,6 @@ SYSCALL_DEFINE6(mips_mmap2, unsigned long, addr, unsigned long, len, pgoff >> (PAGE_SHIFT - 12)); } -save_static_function(sys_fork); -save_static_function(sys_clone); -save_static_function(sys_clone3); - SYSCALL_DEFINE1(set_thread_area, unsigned long, addr) { struct thread_info *ti = task_thread_info(current); @@ -182,28 +179,11 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new) return err; regs = current_pt_regs(); - regs->regs[2] = old; - regs->regs[7] = 0; /* No error */ - - /* - * Don't let your children do this ... - */ - __asm__ __volatile__( - " move $29, %0 \n" - " j syscall_exit \n" - : /* no outputs */ - : "r" (regs)); - - /* unreached. Honestly. */ - unreachable(); + regs->regs[7] = 0; /* no error */ + regs->regs[27] = 1; /* return directly */ + return old; } -/* - * mips_atomic_set() normally returns directly via syscall_exit potentially - * clobbering static registers, so be sure to preserve them. 
- */ -save_static_function(sys_sysmips); - SYSCALL_DEFINE3(sysmips, long, cmd, long, arg1, long, arg2) { switch (cmd) { @@ -249,3 +229,121 @@ asmlinkage void bad_stack(void) { do_exit(SIGSEGV); } + +#if defined(CONFIG_32BIT) || defined(CONFIG_MIPS32_O32) +static inline int get_args(struct pt_regs *regs) +{ + int *usp = (int *)regs->regs[29]; + +#ifdef CONFIG_MIPS32_O32 + /* + * Hairy, the userspace application uses a different argument passing + * convention than the kernel, so we have to translate things from o32 + * to ABI64 calling convention. + * + * We don't want to stumble over broken sign extensions from userland. + * O32 does never use the upper half. + */ + regs->regs[4] = (int)regs->regs[4]; + regs->regs[5] = (int)regs->regs[5]; + regs->regs[6] = (int)regs->regs[6]; + regs->regs[7] = (int)regs->regs[7]; +#endif + + /* + * More than four arguments. Try to deal with it by copying the + * stack arguments from the user stack to the kernel stack. + * This Sucks (TM). + * + * We intentionally keep the kernel stack a little below the top of + * userspace so we don't have to do a slower byte accurate check here. + */ + if (!access_ok(usp, 32)) + return -1; + + get_user(regs->regs[8], usp + 4); + get_user(regs->regs[9], usp + 5); + get_user(regs->regs[10], usp + 6); + get_user(regs->regs[11], usp + 7); + + return 0; +} +#endif + +typedef long (*sys_call_fn)(unsigned long, unsigned long, + unsigned long, unsigned long, unsigned long, unsigned long); + +long noinstr do_syscall(struct pt_regs *regs) +{ + unsigned long nr; + unsigned long ret; + sys_call_fn syscall_fn = NULL; + + nr = regs->regs[2]; + current_thread_info()->syscall = nr; + nr = syscall_enter_from_user_mode(regs, nr); + + regs->cp0_epc += 4; /* skip syscall on return */ + /* skip to next instruction */ + regs->regs[26] = regs->regs[7]; /* save a3 for syscall restarting */ + regs->regs[27] = 0; /* do not return directly */ + +#ifdef CONFIG_32BIT + if (nr >= __NR_O32_Linux && nr < __NR_O32_Linux + __NR_O32_Linux_syscalls) { + if (get_args(regs) < 0) { + ret = EFAULT; + goto error; + } + syscall_fn = (sys_call_fn)sys_call_table[nr - __NR_O32_Linux]; + } +#endif + +#ifdef CONFIG_MIPS32_O32 + if (nr >= __NR_O32_Linux && nr < __NR_O32_Linux + __NR_O32_Linux_syscalls) { + if (get_args(regs) < 0) { + ret = EFAULT; + goto error; + } + syscall_fn = (sys_call_fn)sys32_call_table[nr - __NR_O32_Linux]; + } +#endif + +#ifdef CONFIG_MIPS32_N32 + if (nr >= __NR_N32_Linux && nr < __NR_N32_Linux + __NR_N32_Linux_syscalls) + syscall_fn = (sys_call_fn)sysn32_call_table[nr - __NR_N32_Linux]; +#endif + +#ifdef CONFIG_64BIT + if (nr >= __NR_64_Linux && nr < __NR_64_Linux + __NR_64_Linux_syscalls) + syscall_fn = (sys_call_fn)sys_call_table[nr - __NR_64_Linux]; +#endif + + if (unlikely(!syscall_fn)) { + ret = ENOSYS; + goto error; + } + + ret = syscall_fn(regs->regs[4], regs->regs[5], regs->regs[6], + regs->regs[7], regs->regs[8], regs->regs[9]); + + if (regs->regs[27]) /* return directly? */ + goto out; + + regs->regs[7] = 0; /* clear error flag */ + if (ret >= -EMAXERRNO - 1) { /* error? 
*/ + regs->regs[0] = nr; /* save syscall number */ + /* for syscall restarting */ + ret = -ret; + goto error; + } + + goto out; + +error: + regs->regs[7] = 1; /* set error flag */ + +out: + regs->regs[2] = ret; + syscall_exit_to_user_mode(regs); + return regs->regs[27]; +} diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl index 70e32de2bcaa..d9ae765e51f1 100644 --- a/arch/mips/kernel/syscalls/syscall_n32.tbl +++ b/arch/mips/kernel/syscalls/syscall_n32.tbl @@ -62,8 +62,8 @@ 52 n32 socketpair sys_socketpair 53 n32 setsockopt sys_setsockopt 54 n32 getsockopt sys_getsockopt -55 n32 clone __sys_clone -56 n32 fork __sys_fork +55 n32 clone sys_clone +56 n32 fork sys_fork 57 n32 execve compat_sys_execve 58 n32 exit sys_exit 59 n32 wait4 compat_sys_wait4 @@ -207,7 +207,7 @@ 196 n32 sched_getaffinity compat_sys_sched_getaffinity 197 n32 cacheflush sys_cacheflush 198 n32 cachectl sys_cachectl -199 n32 sysmips __sys_sysmips +199 n32 sysmips sys_sysmips 200 n32 io_setup compat_sys_io_setup 201 n32 io_destroy sys_io_destroy 202 n32 io_getevents sys_io_getevents_time32 @@ -373,7 +373,7 @@ 432 n32 fsmount sys_fsmount 433 n32 fspick sys_fspick 434 n32 pidfd_open sys_pidfd_open -435 n32 clone3 __sys_clone3 +435 n32 clone3 sys_clone3 436 n32 close_range sys_close_range 437 n32 openat2 sys_openat2 438 n32 pidfd_getfd sys_pidfd_getfd diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl index 1ca7bc337932..edec3e82d67a 100644 --- a/arch/mips/kernel/syscalls/syscall_n64.tbl +++ b/arch/mips/kernel/syscalls/syscall_n64.tbl @@ -62,8 +62,8 @@ 52 n64 socketpair sys_socketpair 53 n64 setsockopt sys_setsockopt 54 n64 getsockopt sys_getsockopt -55 n64 clone __sys_clone -56 n64 fork __sys_fork +55 n64 clone sys_clone +56 n64 fork sys_fork 57 n64 execve sys_execve 58 n64 exit sys_exit 59 n64 wait4 sys_wait4 @@ -207,7 +207,7 @@ 196 n64 sched_getaffinity sys_sched_getaffinity 197 n64 cacheflush sys_cacheflush 198 n64 cachectl sys_cachectl -199 n64 sysmips __sys_sysmips +199 n64 sysmips sys_sysmips 200 n64 io_setup sys_io_setup 201 n64 io_destroy sys_io_destroy 202 n64 io_getevents sys_io_getevents @@ -349,7 +349,7 @@ 432 n64 fsmount sys_fsmount 433 n64 fspick sys_fspick 434 n64 pidfd_open sys_pidfd_open -435 n64 clone3 __sys_clone3 +435 n64 clone3 sys_clone3 436 n64 close_range sys_close_range 437 n64 openat2 sys_openat2 438 n64 pidfd_getfd sys_pidfd_getfd diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl index a61c35edaa74..89a1f267da6a 100644 --- a/arch/mips/kernel/syscalls/syscall_o32.tbl +++ b/arch/mips/kernel/syscalls/syscall_o32.tbl @@ -9,7 +9,7 @@ # 0 o32 syscall sys_syscall sys32_syscall 1 o32 exit sys_exit -2 o32 fork __sys_fork +2 o32 fork sys_fork 3 o32 read sys_read 4 o32 write sys_write 5 o32 open sys_open compat_sys_open @@ -131,7 +131,7 @@ 117 o32 ipc sys_ipc compat_sys_ipc 118 o32 fsync sys_fsync 119 o32 sigreturn sys_sigreturn sys32_sigreturn -120 o32 clone __sys_clone +120 o32 clone sys_clone 121 o32 setdomainname sys_setdomainname 122 o32 uname sys_newuname 123 o32 modify_ldt sys_ni_syscall @@ -160,7 +160,7 @@ 146 o32 writev sys_writev 147 o32 cacheflush sys_cacheflush 148 o32 cachectl sys_cachectl -149 o32 sysmips __sys_sysmips +149 o32 sysmips sys_sysmips 150 o32 unused150 sys_ni_syscall 151 o32 getsid sys_getsid 152 o32 fdatasync sys_fdatasync @@ -422,7 +422,7 @@ 432 o32 fsmount sys_fsmount 433 o32 fspick sys_fspick 434 o32 pidfd_open sys_pidfd_open -435 o32 
clone3 __sys_clone3 +435 o32 clone3 sys_clone3 436 o32 close_range sys_close_range 437 o32 openat2 sys_openat2 438 o32 pidfd_getfd sys_pidfd_getfd

From patchwork Thu Oct 14 08:32:54 2021
X-Patchwork-Submitter: Feiyang Chen
X-Patchwork-Id: 12558065
From: Feiyang Chen
To: tsbogend@alpha.franken.de, tglx@linutronix.de, peterz@infradead.org, luto@kernel.org, arnd@arndb.de
Cc: Feiyang Chen, linux-mips@vger.kernel.org, linux-arch@vger.kernel.org, chenhuacai@kernel.org, jiaxun.yang@flygoat.com, zhouyu@wanyeetech.com, hns@goldelico.com, chris.chenfeiyang@gmail.com, Yanteng Si
Subject: [PATCH v3 2/2] MIPS: convert irq to generic entry
Date: Thu, 14 Oct 2021 16:32:54 +0800
Message-Id: <30df5df7baa0c4d8dbac984e2833294308b493cd.1634177547.git.chenfeiyang@loongson.cn>
X-Mailer: git-send-email 2.33.0
X-Mailing-List: linux-mips@vger.kernel.org

Convert MIPS interrupt and exception entry to use the generic entry
infrastructure from kernel/entry/*.

When entering the handler functions written in C there used to be three
possible states: STI, CLI and KMODE. CLI is now used for all handlers,
since interrupts must be disabled before calling irqentry_enter():

- Handlers that originally used STI enable interrupts after calling
  irqentry_enter().

- Handlers that originally used KMODE enable interrupts after calling
  irqentry_enter() only if they were enabled in the parent context.

- If CONFIG_HARDWARE_WATCHPOINTS is defined, interrupts are enabled after
  the watch registers have been read; interrupts are enabled manually in
  do_watch() only when it is not defined.

Use call_on_irq_stack() to invoke a function on the IRQ stack.

Signed-off-by: Feiyang Chen
Signed-off-by: Yanteng Si
Reviewed-by: Huacai Chen
---
 arch/mips/include/asm/irqflags.h | 42 ------ arch/mips/include/asm/stackframe.h | 8 + arch/mips/kernel/entry.S | 82 ----------- arch/mips/kernel/genex.S | 150 ++++--------------- arch/mips/kernel/head.S | 1 - arch/mips/kernel/r4k-bugs64.c | 14 +- arch/mips/kernel/scall.S | 1 - arch/mips/kernel/signal.c | 24 --- arch/mips/kernel/traps.c | 225 +++++++++++++++++++++-------- arch/mips/kernel/unaligned.c | 21 ++- arch/mips/mm/c-octeon.c | 15 ++ arch/mips/mm/cex-oct.S | 8 +- arch/mips/mm/fault.c | 12 +- arch/mips/mm/tlbex-fault.S | 7 +- 14 files changed, 257 insertions(+), 353 deletions(-) diff --git a/arch/mips/include/asm/irqflags.h b/arch/mips/include/asm/irqflags.h index f5b8300f4573..ee7519b0d23f 100644 --- a/arch/mips/include/asm/irqflags.h +++ b/arch/mips/include/asm/irqflags.h @@ -11,8 +11,6 @@ #ifndef _ASM_IRQFLAGS_H #define _ASM_IRQFLAGS_H -#ifndef __ASSEMBLY__ - #include #include #include @@ -142,44 +140,4 @@ static inline int arch_irqs_disabled(void) return arch_irqs_disabled_flags(arch_local_save_flags()); } -#endif /* #ifndef __ASSEMBLY__ */ - -/* - * Do the CPU's IRQ-state tracing from assembly code.
- */ -#ifdef CONFIG_TRACE_IRQFLAGS -/* Reload some registers clobbered by trace_hardirqs_on */ -#ifdef CONFIG_64BIT -# define TRACE_IRQS_RELOAD_REGS \ - LONG_L $11, PT_R11(sp); \ - LONG_L $10, PT_R10(sp); \ - LONG_L $9, PT_R9(sp); \ - LONG_L $8, PT_R8(sp); \ - LONG_L $7, PT_R7(sp); \ - LONG_L $6, PT_R6(sp); \ - LONG_L $5, PT_R5(sp); \ - LONG_L $4, PT_R4(sp); \ - LONG_L $2, PT_R2(sp) -#else -# define TRACE_IRQS_RELOAD_REGS \ - LONG_L $7, PT_R7(sp); \ - LONG_L $6, PT_R6(sp); \ - LONG_L $5, PT_R5(sp); \ - LONG_L $4, PT_R4(sp); \ - LONG_L $2, PT_R2(sp) -#endif -# define TRACE_IRQS_ON \ - CLI; /* make sure trace_hardirqs_on() is called in kernel level */ \ - jal trace_hardirqs_on -# define TRACE_IRQS_ON_RELOAD \ - TRACE_IRQS_ON; \ - TRACE_IRQS_RELOAD_REGS -# define TRACE_IRQS_OFF \ - jal trace_hardirqs_off -#else -# define TRACE_IRQS_ON -# define TRACE_IRQS_ON_RELOAD -# define TRACE_IRQS_OFF -#endif - #endif /* _ASM_IRQFLAGS_H */ diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h index aa430a6c68b2..8bc74d7950fb 100644 --- a/arch/mips/include/asm/stackframe.h +++ b/arch/mips/include/asm/stackframe.h @@ -444,6 +444,14 @@ RESTORE_SP \docfi .endm + .macro RESTORE_ALL_AND_RET docfi=0 + RESTORE_TEMP \docfi + RESTORE_STATIC \docfi + RESTORE_AT \docfi + RESTORE_SOME \docfi + RESTORE_SP_AND_RET \docfi + .endm + /* * Move to kernel mode and disable interrupts. * Set cp0 enable bit as sign that we're running on the kernel stack diff --git a/arch/mips/kernel/entry.S b/arch/mips/kernel/entry.S index 1a2aec9dab1b..c9148831d820 100644 --- a/arch/mips/kernel/entry.S +++ b/arch/mips/kernel/entry.S @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -19,55 +18,8 @@ #include #include -#ifndef CONFIG_PREEMPTION -#define resume_kernel restore_all -#else -#define __ret_from_irq ret_from_exception -#endif - .text .align 5 -#ifndef CONFIG_PREEMPTION -FEXPORT(ret_from_exception) - local_irq_disable # preempt stop - b __ret_from_irq -#endif -FEXPORT(ret_from_irq) - LONG_S s0, TI_REGS($28) -FEXPORT(__ret_from_irq) -/* - * We can be coming here from a syscall done in the kernel space, - * e.g. a failed kernel_execve(). - */ -resume_userspace_check: - LONG_L t0, PT_STATUS(sp) # returning to kernel mode? - andi t0, t0, KU_USER - beqz t0, resume_kernel - -resume_userspace: - local_irq_disable # make sure we dont miss an - # interrupt setting need_resched - # between sampling and return - LONG_L a2, TI_FLAGS($28) # current->work - andi t0, a2, _TIF_WORK_MASK # (ignoring syscall_trace) - bnez t0, work_pending - j restore_all - -#ifdef CONFIG_PREEMPTION -resume_kernel: - local_irq_disable - lw t0, TI_PRE_COUNT($28) - bnez t0, restore_all - LONG_L t0, TI_FLAGS($28) - andi t1, t0, _TIF_NEED_RESCHED - beqz t1, restore_all - LONG_L t0, PT_STATUS(sp) # Interrupts off? 
- andi t0, 1 - beqz t0, restore_all - PTR_LA ra, restore_all - j preempt_schedule_irq -#endif - FEXPORT(ret_from_kernel_thread) jal schedule_tail # a0 = struct task_struct *prev move a0, s1 @@ -92,40 +44,6 @@ FEXPORT(ret_from_fork) RESTORE_SP_AND_RET .set at -restore_all: # restore full frame - .set noat - RESTORE_TEMP - RESTORE_AT - RESTORE_STATIC -restore_partial: # restore partial frame - RESTORE_SOME - RESTORE_SP_AND_RET - .set at - -work_pending: - andi t0, a2, _TIF_NEED_RESCHED # a2 is preloaded with TI_FLAGS - beqz t0, work_notifysig -work_resched: - TRACE_IRQS_OFF - jal schedule - - local_irq_disable # make sure need_resched and - # signals dont change between - # sampling and return - LONG_L a2, TI_FLAGS($28) - andi t0, a2, _TIF_WORK_MASK # is there any work to be done - # other than syscall tracing? - beqz t0, restore_all - andi t0, a2, _TIF_NEED_RESCHED - bnez t0, work_resched - -work_notifysig: # deal with pending signals and - # notify-resume requests - move a0, sp - li a1, 0 - jal do_notify_resume # a2 already loaded - j resume_userspace_check - #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR5) || \ defined(CONFIG_CPU_MIPSR6) || defined(CONFIG_MIPS_MT) diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S index 743d75927b71..aa04eb131379 100644 --- a/arch/mips/kernel/genex.S +++ b/arch/mips/kernel/genex.S @@ -13,7 +13,6 @@ #include #include #include -#include #include #include #include @@ -182,53 +181,17 @@ NESTED(handle_int, PT_SIZE, sp) #endif SAVE_ALL docfi=1 CLI - TRACE_IRQS_OFF - LONG_L s0, TI_REGS($28) - LONG_S sp, TI_REGS($28) - - /* - * SAVE_ALL ensures we are using a valid kernel stack for the thread. - * Check if we are already using the IRQ stack. - */ - move s1, sp # Preserve the sp - - /* Get IRQ stack for this CPU */ - ASM_CPUID_MFC0 k0, ASM_SMP_CPUID_REG -#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32) - lui k1, %hi(irq_stack) -#else - lui k1, %highest(irq_stack) - daddiu k1, %higher(irq_stack) - dsll k1, 16 - daddiu k1, %hi(irq_stack) - dsll k1, 16 -#endif - LONG_SRL k0, SMP_CPUID_PTRSHIFT - LONG_ADDU k1, k0 - LONG_L t0, %lo(irq_stack)(k1) - - # Check if already on IRQ stack - PTR_LI t1, ~(_THREAD_SIZE-1) - and t1, t1, sp - beq t0, t1, 2f - - /* Switch to IRQ stack */ - li t1, _IRQ_STACK_START - PTR_ADD sp, t0, t1 - - /* Save task's sp on IRQ stack so that unwinding can follow it */ - LONG_S s1, 0(sp) -2: - jal plat_irq_dispatch - - /* Restore sp */ - move sp, s1 - - j ret_from_irq + move a0, sp + move a1, sp + jal do_int #ifdef CONFIG_CPU_MICROMIPS nop #endif + + .set noat + RESTORE_ALL_AND_RET + .set at END(handle_int) __INIT @@ -290,54 +253,13 @@ NESTED(except_vec_vi_handler, 0, sp) SAVE_TEMP SAVE_STATIC CLI -#ifdef CONFIG_TRACE_IRQFLAGS - move s0, v0 - TRACE_IRQS_OFF - move v0, s0 -#endif - - LONG_L s0, TI_REGS($28) - LONG_S sp, TI_REGS($28) - /* - * SAVE_ALL ensures we are using a valid kernel stack for the thread. - * Check if we are already using the IRQ stack. 
- */ - move s1, sp # Preserve the sp - - /* Get IRQ stack for this CPU */ - ASM_CPUID_MFC0 k0, ASM_SMP_CPUID_REG -#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32) - lui k1, %hi(irq_stack) -#else - lui k1, %highest(irq_stack) - daddiu k1, %higher(irq_stack) - dsll k1, 16 - daddiu k1, %hi(irq_stack) - dsll k1, 16 -#endif - LONG_SRL k0, SMP_CPUID_PTRSHIFT - LONG_ADDU k1, k0 - LONG_L t0, %lo(irq_stack)(k1) - - # Check if already on IRQ stack - PTR_LI t1, ~(_THREAD_SIZE-1) - and t1, t1, sp - beq t0, t1, 2f - - /* Switch to IRQ stack */ - li t1, _IRQ_STACK_START - PTR_ADD sp, t0, t1 - - /* Save task's sp on IRQ stack so that unwinding can follow it */ - LONG_S s1, 0(sp) -2: - jalr v0 - - /* Restore sp */ - move sp, s1 + move a0, sp + move a1, sp + move a2, v0 + jal do_vi - j ret_from_irq + RESTORE_ALL_AND_RET END(except_vec_vi_handler) /* @@ -462,22 +384,12 @@ NESTED(nmi_handler, PT_SIZE, sp) .set pop END(nmi_handler) - .macro __build_clear_none - .endm - - .macro __build_clear_sti - TRACE_IRQS_ON - STI - .endm - .macro __build_clear_cli CLI - TRACE_IRQS_OFF .endm .macro __build_clear_fpe CLI - TRACE_IRQS_OFF .set push /* gas fails to assemble cfc1 for some archs (octeon).*/ \ .set mips1 @@ -488,14 +400,13 @@ NESTED(nmi_handler, PT_SIZE, sp) .macro __build_clear_msa_fpe CLI - TRACE_IRQS_OFF _cfcmsa a1, MSA_CSR .endm .macro __build_clear_ade MFC0 t0, CP0_BADVADDR PTR_S t0, PT_BVADDR(sp) - KMODE + CLI .endm .macro __build_clear_gsexc @@ -507,8 +418,7 @@ NESTED(nmi_handler, PT_SIZE, sp) .set mips32 mfc0 a1, CP0_DIAGNOSTIC1 .set pop - TRACE_IRQS_ON - STI + CLI .endm .macro __BUILD_silent exception @@ -547,7 +457,7 @@ NESTED(nmi_handler, PT_SIZE, sp) __BUILD_\verbose \exception move a0, sp jal do_\handler - j ret_from_exception + RESTORE_ALL_AND_RET END(handle_\exception) .endm @@ -559,32 +469,28 @@ NESTED(nmi_handler, PT_SIZE, sp) BUILD_HANDLER ades ade ade silent /* #5 */ BUILD_HANDLER ibe be cli silent /* #6 */ BUILD_HANDLER dbe be cli silent /* #7 */ - BUILD_HANDLER bp bp sti silent /* #9 */ - BUILD_HANDLER ri ri sti silent /* #10 */ - BUILD_HANDLER cpu cpu sti silent /* #11 */ - BUILD_HANDLER ov ov sti silent /* #12 */ - BUILD_HANDLER tr tr sti silent /* #13 */ + BUILD_HANDLER bp bp cli silent /* #9 */ + BUILD_HANDLER ri ri cli silent /* #10 */ + BUILD_HANDLER cpu cpu cli silent /* #11 */ + BUILD_HANDLER ov ov cli silent /* #12 */ + BUILD_HANDLER tr tr cli silent /* #13 */ BUILD_HANDLER msa_fpe msa_fpe msa_fpe silent /* #14 */ #ifdef CONFIG_MIPS_FP_SUPPORT BUILD_HANDLER fpe fpe fpe silent /* #15 */ #endif - BUILD_HANDLER ftlb ftlb none silent /* #16 */ + BUILD_HANDLER ftlb ftlb cli silent /* #16 */ BUILD_HANDLER gsexc gsexc gsexc silent /* #16 */ - BUILD_HANDLER msa msa sti silent /* #21 */ - BUILD_HANDLER mdmx mdmx sti silent /* #22 */ + BUILD_HANDLER msa msa cli silent /* #21 */ + BUILD_HANDLER mdmx mdmx cli silent /* #22 */ #ifdef CONFIG_HARDWARE_WATCHPOINTS - /* - * For watch, interrupts will be enabled after the watch - * registers are read. 
- */ BUILD_HANDLER watch watch cli silent /* #23 */ #else - BUILD_HANDLER watch watch sti verbose /* #23 */ + BUILD_HANDLER watch watch cli verbose /* #23 */ #endif BUILD_HANDLER mcheck mcheck cli verbose /* #24 */ - BUILD_HANDLER mt mt sti silent /* #25 */ - BUILD_HANDLER dsp dsp sti silent /* #26 */ - BUILD_HANDLER reserved reserved sti verbose /* others */ + BUILD_HANDLER mt mt cli silent /* #25 */ + BUILD_HANDLER dsp dsp cli silent /* #26 */ + BUILD_HANDLER reserved reserved cli verbose /* others */ .align 5 LEAF(handle_ri_rdhwr_tlbp) @@ -678,5 +584,5 @@ isrdhwr: __INIT - BUILD_HANDLER daddi_ov daddi_ov none silent /* #12 */ + BUILD_HANDLER daddi_ov daddi_ov cli silent /* #12 */ #endif diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S index b825ed4476c7..5f028dbd5961 100644 --- a/arch/mips/kernel/head.S +++ b/arch/mips/kernel/head.S @@ -19,7 +19,6 @@ #include #include #include -#include #include #include #include diff --git a/arch/mips/kernel/r4k-bugs64.c b/arch/mips/kernel/r4k-bugs64.c index 35729c9e6cfa..0384b649877c 100644 --- a/arch/mips/kernel/r4k-bugs64.c +++ b/arch/mips/kernel/r4k-bugs64.c @@ -3,6 +3,7 @@ * Copyright (C) 2003, 2004, 2007 Maciej W. Rozycki */ #include +#include #include #include #include @@ -168,14 +169,19 @@ static __always_inline __init void check_mult_sh(void) static volatile int daddi_ov; -asmlinkage void __init do_daddi_ov(struct pt_regs *regs) +asmlinkage void noinstr __init do_daddi_ov(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); + + /* Enable interrupt if enabled in parent context */ + if (likely(!regs_irqs_disabled(regs))) + local_irq_enable(); - prev_state = exception_enter(); daddi_ov = 1; regs->cp0_epc += 4; - exception_exit(prev_state); + + local_irq_disable(); + irqentry_exit(regs, state); } static __init void check_daddi(void) diff --git a/arch/mips/kernel/scall.S b/arch/mips/kernel/scall.S index fae8d99f0458..bd2e05304e72 100644 --- a/arch/mips/kernel/scall.S +++ b/arch/mips/kernel/scall.S @@ -9,7 +9,6 @@ #include #include #include -#include #include #include #include diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c index 314a6ffa0e07..087dd3cfaafa 100644 --- a/arch/mips/kernel/signal.c +++ b/arch/mips/kernel/signal.c @@ -877,30 +877,6 @@ void arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal) restore_saved_sigmask(); } -/* - * notification of userspace execution resumption - * - triggered by the TIF_WORK_MASK flags - */ -asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused, - __u32 thread_info_flags) -{ - local_irq_enable(); - - user_exit(); - - if (thread_info_flags & _TIF_UPROBE) - uprobe_notify_resume(regs); - - /* deal with pending signal delivery */ - if (thread_info_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL)) - arch_do_signal_or_restart(regs, thread_info_flags & _TIF_SIGPENDING); - - if (thread_info_flags & _TIF_NOTIFY_RESUME) - tracehook_notify_resume(regs); - - user_enter(); -} - #if defined(CONFIG_SMP) && defined(CONFIG_MIPS_FP_SUPPORT) static int smp_save_fp_context(void __user *sc) { diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c index 6f07362de5ce..5c4be5440b15 100644 --- a/arch/mips/kernel/traps.c +++ b/arch/mips/kernel/traps.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -40,6 +41,7 @@ #include #include +#include #include #include #include @@ -438,15 +440,14 @@ static const struct exception_table_entry *search_dbe_tables(unsigned long addr) return e; } 
-asmlinkage void do_be(struct pt_regs *regs) +asmlinkage void noinstr do_be(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); const int field = 2 * sizeof(unsigned long); const struct exception_table_entry *fixup = NULL; int data = regs->cp0_cause & 4; int action = MIPS_BE_FATAL; - enum ctx_state prev_state; - prev_state = exception_enter(); /* XXX For now. Fixme, this searches the wrong table ... */ if (data && !user_mode(regs)) fixup = search_dbe_tables(exception_epc(regs)); @@ -486,7 +487,7 @@ asmlinkage void do_be(struct pt_regs *regs) force_sig(SIGBUS); out: - exception_exit(prev_state); + irqentry_exit(regs, state); } /* @@ -743,15 +744,18 @@ static int simulate_loongson3_cpucfg(struct pt_regs *regs, } #endif /* CONFIG_CPU_LOONGSON3_CPUCFG_EMULATION */ -asmlinkage void do_ov(struct pt_regs *regs) +asmlinkage void noinstr do_ov(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); + + local_irq_enable(); - prev_state = exception_enter(); die_if_kernel("Integer overflow", regs); force_sig_fault(SIGFPE, FPE_INTOVF, (void __user *)regs->cp0_epc); - exception_exit(prev_state); + + local_irq_disable(); + irqentry_exit(regs, state); } #ifdef CONFIG_MIPS_FP_SUPPORT @@ -865,13 +869,12 @@ static int simulate_fp(struct pt_regs *regs, unsigned int opcode, /* * XXX Delayed fp exceptions when doing a lazy ctx switch XXX */ -asmlinkage void do_fpe(struct pt_regs *regs, unsigned long fcr31) +asmlinkage void noinstr do_fpe(struct pt_regs *regs, unsigned long fcr31) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); void __user *fault_addr; int sig; - prev_state = exception_enter(); if (notify_die(DIE_FP, "FP exception", regs, 0, current->thread.trap_nr, SIGFPE) == NOTIFY_STOP) goto out; @@ -916,7 +919,8 @@ asmlinkage void do_fpe(struct pt_regs *regs, unsigned long fcr31) process_fpemu_return(sig, fault_addr, fcr31); out: - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); } /* @@ -1018,14 +1022,16 @@ void do_trap_or_bp(struct pt_regs *regs, unsigned int code, int si_code, } } -asmlinkage void do_bp(struct pt_regs *regs) +asmlinkage void noinstr do_bp(struct pt_regs *regs) { - unsigned long epc = msk_isa16_mode(exception_epc(regs)); - unsigned int opcode, bcode; - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); bool user = user_mode(regs); + unsigned int opcode, bcode; + unsigned long epc; + + local_irq_enable(); - prev_state = exception_enter(); + epc = msk_isa16_mode(exception_epc(regs)); current->thread.trap_nr = (regs->cp0_cause >> 2) & 0x1f; if (get_isa16_mode(regs->cp0_epc)) { u16 instr[2]; @@ -1097,7 +1103,8 @@ asmlinkage void do_bp(struct pt_regs *regs) do_trap_or_bp(regs, bcode, TRAP_BRKPT, "Break"); out: - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); return; out_sigsegv: @@ -1105,15 +1112,17 @@ asmlinkage void do_bp(struct pt_regs *regs) goto out; } -asmlinkage void do_tr(struct pt_regs *regs) +asmlinkage void noinstr do_tr(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); + bool user = user_mode(regs); u32 opcode, tcode = 0; - enum ctx_state prev_state; + unsigned long epc; u16 instr[2]; - bool user = user_mode(regs); - unsigned long epc = msk_isa16_mode(exception_epc(regs)); - prev_state = exception_enter(); + local_irq_enable(); + + epc = msk_isa16_mode(exception_epc(regs)); current->thread.trap_nr = (regs->cp0_cause >> 2) & 0x1f; if (get_isa16_mode(regs->cp0_epc)) { if 
(__get_inst16(&instr[0], (u16 *)(epc + 0), user) || @@ -1134,7 +1143,8 @@ asmlinkage void do_tr(struct pt_regs *regs) do_trap_or_bp(regs, tcode, 0, "Trap"); out: - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); return; out_sigsegv: @@ -1142,15 +1152,19 @@ asmlinkage void do_tr(struct pt_regs *regs) goto out; } -asmlinkage void do_ri(struct pt_regs *regs) +asmlinkage void noinstr do_ri(struct pt_regs *regs) { - unsigned int __user *epc = (unsigned int __user *)exception_epc(regs); + irqentry_state_t state = irqentry_enter(regs); unsigned long old_epc = regs->cp0_epc; unsigned long old31 = regs->regs[31]; - enum ctx_state prev_state; + unsigned int __user *epc; unsigned int opcode = 0; int status = -1; + local_irq_enable(); + + epc = (unsigned int __user *)exception_epc(regs); + /* * Avoid any kernel code. Just emulate the R2 instruction * as quickly as possible. @@ -1177,7 +1191,6 @@ asmlinkage void do_ri(struct pt_regs *regs) no_r2_instr: - prev_state = exception_enter(); current->thread.trap_nr = (regs->cp0_cause >> 2) & 0x1f; if (notify_die(DIE_RI, "RI Fault", regs, 0, current->thread.trap_nr, @@ -1233,7 +1246,8 @@ asmlinkage void do_ri(struct pt_regs *regs) } out: - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); } /* @@ -1393,16 +1407,17 @@ static int enable_restore_fp_context(int msa) #endif /* CONFIG_MIPS_FP_SUPPORT */ -asmlinkage void do_cpu(struct pt_regs *regs) +asmlinkage void noinstr do_cpu(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); unsigned int __user *epc; unsigned long old_epc, old31; unsigned int opcode; unsigned int cpid; int status; - prev_state = exception_enter(); + local_irq_enable(); + cpid = (regs->cp0_cause >> CAUSEB_CE) & 3; if (cpid != 2) @@ -1495,14 +1510,14 @@ asmlinkage void do_cpu(struct pt_regs *regs) break; } - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_msa_fpe(struct pt_regs *regs, unsigned int msacsr) +asmlinkage void noinstr do_msa_fpe(struct pt_regs *regs, unsigned int msacsr) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); - prev_state = exception_enter(); current->thread.trap_nr = (regs->cp0_cause >> 2) & 0x1f; if (notify_die(DIE_MSAFP, "MSA FP exception", regs, 0, current->thread.trap_nr, SIGFPE) == NOTIFY_STOP) @@ -1514,16 +1529,18 @@ asmlinkage void do_msa_fpe(struct pt_regs *regs, unsigned int msacsr) die_if_kernel("do_msa_fpe invoked from kernel context!", regs); force_sig(SIGFPE); + out: - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_msa(struct pt_regs *regs) +asmlinkage void noinstr do_msa(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); int err; - prev_state = exception_enter(); + local_irq_enable(); if (!cpu_has_msa || test_thread_flag(TIF_32BIT_FPREGS)) { force_sig(SIGILL); @@ -1535,27 +1552,39 @@ asmlinkage void do_msa(struct pt_regs *regs) err = enable_restore_fp_context(1); if (err) force_sig(SIGILL); + out: - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_mdmx(struct pt_regs *regs) +asmlinkage void noinstr do_mdmx(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); + + local_irq_enable(); - prev_state = exception_enter(); force_sig(SIGILL); - exception_exit(prev_state); + + local_irq_disable(); + 
irqentry_exit(regs, state); } /* * Called with interrupts disabled. */ -asmlinkage void do_watch(struct pt_regs *regs) +asmlinkage void noinstr do_watch(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); + +#ifndef CONFIG_HARDWARE_WATCHPOINTS + /* + * For watch, interrupts will be enabled after the watch + * registers are read. + */ + local_irq_enable(); +#endif - prev_state = exception_enter(); /* * Clear WP (bit 22) bit of cause register so we don't loop * forever. @@ -1575,15 +1604,16 @@ asmlinkage void do_watch(struct pt_regs *regs) mips_clear_watch_registers(); local_irq_enable(); } - exception_exit(prev_state); + + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_mcheck(struct pt_regs *regs) +asmlinkage void noinstr do_mcheck(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); int multi_match = regs->cp0_status & ST0_TS; - enum ctx_state prev_state; - prev_state = exception_enter(); show_regs(regs); if (multi_match) { @@ -1601,12 +1631,17 @@ asmlinkage void do_mcheck(struct pt_regs *regs) panic("Caught Machine Check exception - %scaused by multiple " "matching entries in the TLB.", (multi_match) ? "" : "not "); + + irqentry_exit(regs, state); } -asmlinkage void do_mt(struct pt_regs *regs) +asmlinkage void noinstr do_mt(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); int subcode; + local_irq_enable(); + subcode = (read_vpe_c0_vpecontrol() & VPECONTROL_EXCPT) >> VPECONTROL_EXCPT_SHIFT; switch (subcode) { @@ -1636,19 +1671,33 @@ asmlinkage void do_mt(struct pt_regs *regs) die_if_kernel("MIPS MT Thread exception in kernel", regs); force_sig(SIGILL); + + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_dsp(struct pt_regs *regs) +asmlinkage void noinstr do_dsp(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); + + local_irq_enable(); + if (cpu_has_dsp) panic("Unexpected DSP exception"); force_sig(SIGILL); + + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_reserved(struct pt_regs *regs) +asmlinkage void noinstr do_reserved(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); + + local_irq_enable(); + /* * Game over - no way to handle this if it ever occurs. Most probably * caused by a new unknown cpu type or after another deadly @@ -1657,6 +1706,9 @@ asmlinkage void do_reserved(struct pt_regs *regs) show_regs(regs); panic("Caught reserved exception %ld - should not happen.", (regs->cp0_cause & 0x7f) >> 2); + + local_irq_disable(); + irqentry_exit(regs, state); } static int __initdata l1parity = 1; @@ -1871,11 +1923,16 @@ asmlinkage void cache_parity_error(void) panic("Can't handle the cache error!"); } -asmlinkage void do_ftlb(void) +asmlinkage void noinstr do_ftlb(struct pt_regs *regs) { + irqentry_state_t state = irqentry_enter(regs); const int field = 2 * sizeof(unsigned long); unsigned int reg_val; + /* Enable interrupt if enabled in parent context */ + if (likely(!regs_irqs_disabled(regs))) + local_irq_enable(); + /* For the moment, report the problem and hang. 
*/ if ((cpu_has_mips_r2_r6) && (((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_MIPS) || @@ -1898,16 +1955,17 @@ asmlinkage void do_ftlb(void) } /* Just print the cacheerr bits for now */ cache_parity_error(); + local_irq_disable(); + irqentry_exit(regs, state); } -asmlinkage void do_gsexc(struct pt_regs *regs, u32 diag1) +asmlinkage void noinstr do_gsexc(struct pt_regs *regs, u32 diag1) { + irqentry_state_t state = irqentry_enter(regs); u32 exccode = (diag1 & LOONGSON_DIAG1_EXCCODE) >> LOONGSON_DIAG1_EXCCODE_SHIFT; - enum ctx_state prev_state; - - prev_state = exception_enter(); + local_irq_enable(); switch (exccode) { case 0x08: /* Undocumented exception, will trigger on certain @@ -1928,7 +1986,52 @@ asmlinkage void do_gsexc(struct pt_regs *regs, u32 diag1) panic("Unhandled Loongson exception - GSCause = %08x", diag1); } - exception_exit(prev_state); + local_irq_disable(); + irqentry_exit(regs, state); +} + +static void noinstr call_on_irq_stack(struct pt_regs *regs, + unsigned long sp, void (*func)(void)) +{ + int cpu; + unsigned long stack; + irqentry_state_t state = irqentry_enter(regs); + struct pt_regs *old_regs = set_irq_regs(regs); + + cpu = smp_processor_id(); + + if (on_irq_stack(cpu, sp)) { + func(); + } else { + stack = (unsigned long)irq_stack[cpu] + _IRQ_STACK_START; + + /* Save task's sp on IRQ stack so that unwinding can follow it */ + *(unsigned long *)stack = sp; + + __asm__ __volatile__( + "move $16, $29 \n" /* Preserve sp */ + "move $29, %[stack] \n" /* Switch to IRQ stack */ + "jalr %[func] \n" /* Invoke func */ + "move $29, $16 \n" /* Restore sp */ + : /* No outputs */ + : [stack] "r" (stack), [func] "r" (func) + : "$2", "$3", "$4", "$5", "$6", "$7", "$8", "$9", "$10", "$11", + "$12", "$13", "$14", "$15", "$16", "$24", "$25", "memory"); + } + + set_irq_regs(old_regs); + irqentry_exit(regs, state); +} + +asmlinkage void noinstr do_int(struct pt_regs *regs, unsigned long sp) +{ + call_on_irq_stack(regs, sp, plat_irq_dispatch); +} + +asmlinkage void noinstr do_vi(struct pt_regs *regs, unsigned long sp, + vi_handler_t handler) +{ + call_on_irq_stack(regs, sp, handler); } /* diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c index df4b708c04a9..b0fdca022b82 100644 --- a/arch/mips/kernel/unaligned.c +++ b/arch/mips/kernel/unaligned.c @@ -74,6 +74,7 @@ * Undo the partial store in this case. 
*/ #include +#include #include #include #include @@ -1472,12 +1473,15 @@ static void emulate_load_store_MIPS16e(struct pt_regs *regs, void __user * addr) force_sig(SIGILL); } -asmlinkage void do_ade(struct pt_regs *regs) +asmlinkage void noinstr do_ade(struct pt_regs *regs) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); unsigned int *pc; - prev_state = exception_enter(); + /* Enable interrupt if enabled in parent context */ + if (likely(!regs_irqs_disabled(regs))) + local_irq_enable(); + perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, regs->cp0_badvaddr); /* @@ -1512,13 +1516,13 @@ asmlinkage void do_ade(struct pt_regs *regs) if (cpu_has_mmips) { emulate_load_store_microMIPS(regs, (void __user *)regs->cp0_badvaddr); - return; + goto out; } if (cpu_has_mips16) { emulate_load_store_MIPS16e(regs, (void __user *)regs->cp0_badvaddr); - return; + goto out; } goto sigbus; @@ -1530,16 +1534,19 @@ asmlinkage void do_ade(struct pt_regs *regs) emulate_load_store_insn(regs, (void __user *)regs->cp0_badvaddr, pc); - return; + goto out; sigbus: die_if_kernel("Kernel unaligned instruction access", regs); force_sig(SIGBUS); +out: /* * XXX On return from the signal handler we should advance the epc */ - exception_exit(prev_state); + + local_irq_disable(); + irqentry_exit(regs, state); } #ifdef CONFIG_DEBUG_FS diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c index ec2ae501539a..e68c9e6c6480 100644 --- a/arch/mips/mm/c-octeon.c +++ b/arch/mips/mm/c-octeon.c @@ -5,6 +5,7 @@ * * Copyright (C) 2005-2007 Cavium Networks */ +#include #include #include #include @@ -349,3 +350,17 @@ asmlinkage void cache_parity_error_octeon_non_recoverable(void) co_cache_error_call_notifiers(1); panic("Can't handle cache error: nested exception"); } + +asmlinkage void noinstr do_cache_err(struct pt_regs *regs) +{ + irqentry_state_t state = irqentry_enter(regs); + + /* Enable interrupt if enabled in parent context */ + if (likely(!regs_irqs_disabled(regs))) + local_irq_enable(); + + cache_parity_error_octeon_recoverable(); + + local_irq_disable(); + irqentry_exit(regs, state); +} diff --git a/arch/mips/mm/cex-oct.S b/arch/mips/mm/cex-oct.S index 9029092aa740..7d39087d208b 100644 --- a/arch/mips/mm/cex-oct.S +++ b/arch/mips/mm/cex-oct.S @@ -60,11 +60,11 @@ .set noat SAVE_ALL - KMODE - jal cache_parity_error_octeon_recoverable - nop - j ret_from_exception + CLI + move a0, sp + jal do_cache_err nop + RESTORE_ALL_AND_RET .set pop END(handle_cache_err) diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c index e7abda9c013f..e55bd45a596b 100644 --- a/arch/mips/mm/fault.c +++ b/arch/mips/mm/fault.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -327,9 +328,14 @@ static void __kprobes __do_page_fault(struct pt_regs *regs, unsigned long write, asmlinkage void __kprobes do_page_fault(struct pt_regs *regs, unsigned long write, unsigned long address) { - enum ctx_state prev_state; + irqentry_state_t state = irqentry_enter(regs); + + /* Enable interrupt if enabled in parent context */ + if (likely(!regs_irqs_disabled(regs))) + local_irq_enable(); - prev_state = exception_enter(); __do_page_fault(regs, write, address); - exception_exit(prev_state); + + local_irq_disable(); + irqentry_exit(regs, state); } diff --git a/arch/mips/mm/tlbex-fault.S b/arch/mips/mm/tlbex-fault.S index 77db401fc620..e16b4aa1fcc4 100644 --- a/arch/mips/mm/tlbex-fault.S +++ b/arch/mips/mm/tlbex-fault.S @@ -15,12 +15,15 @@ .cfi_signal_frame SAVE_ALL docfi=1 MFC0 a2, CP0_BADVADDR - 
KMODE + CLI move a0, sp REG_S a2, PT_BVADDR(sp) li a1, \write jal do_page_fault - j ret_from_exception + + .set noat + RESTORE_ALL_AND_RET + .set at END(tlb_do_page_fault_\write) .endm
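
Not part of the patch: as referenced above, here is a minimal C sketch of the two conversion patterns the changelog describes, written against the generic entry API declared in <linux/entry-common.h>. The names do_sti_style_exc(), do_kmode_style_exc(), handle_sti_style() and handle_kmode_style() are hypothetical stand-ins, and regs_irqs_disabled() is assumed to be the helper this series relies on; this is an illustration of the shape of the converted handlers, not an authoritative implementation.

#include <linux/entry-common.h>	/* irqentry_enter() / irqentry_exit() */
#include <linux/irqflags.h>	/* local_irq_enable() / local_irq_disable() */
#include <linux/linkage.h>
#include <asm/ptrace.h>

/* Stand-ins for the real exception bodies (die_if_kernel(), force_sig(), ...). */
static void handle_sti_style(struct pt_regs *regs) { }
static void handle_kmode_style(struct pt_regs *regs) { }

/* Former STI handler: the body always runs with interrupts enabled. */
asmlinkage void noinstr do_sti_style_exc(struct pt_regs *regs)
{
	irqentry_state_t state = irqentry_enter(regs);

	local_irq_enable();
	handle_sti_style(regs);
	local_irq_disable();

	irqentry_exit(regs, state);
}

/* Former KMODE handler: inherit the interrupt state of the parent context. */
asmlinkage void noinstr do_kmode_style_exc(struct pt_regs *regs)
{
	irqentry_state_t state = irqentry_enter(regs);

	/* Enable interrupts only if they were enabled in the interrupted context. */
	if (likely(!regs_irqs_disabled(regs)))
		local_irq_enable();

	handle_kmode_style(regs);

	local_irq_disable();
	irqentry_exit(regs, state);
}

In the diff above, do_ov(), do_bp() and do_tr() follow the first shape, while do_page_fault(), do_ftlb() and do_ade() follow the second; do_watch() is the special case that keeps interrupts off past irqentry_enter() when CONFIG_HARDWARE_WATCHPOINTS is set, since they are only enabled once the watch registers have been read.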