From patchwork Tue Nov 5 01:58:32 2019 X-Patchwork-Submitter: Vincent Chen X-Patchwork-Id: 11226849
From: Vincent Chen To: paul.walmsley@sifive.com, mathieu.desnoyers@efficios.com Cc: linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, vincent.chen@sifive.com, Patrick Stählin Subject: [PATCH 1/3] riscv: add required functions to enable HAVE_REGS_AND_STACK_ACCESS_API Date: Tue, 5 Nov 2019 09:58:32 +0800 Message-Id: <1572919114-3886-2-git-send-email-vincent.chen@sifive.com> In-Reply-To: <1572919114-3886-1-git-send-email-vincent.chen@sifive.com> References: <1572919114-3886-1-git-send-email-vincent.chen@sifive.com> In order to select HAVE_REGS_AND_STACK_ACCESS_API, add the APIs required by kprobes to access pt_regs and stack entries to the RISC-V port. Signed-off-by: Patrick Stählin Signed-off-by: Vincent Chen Signed-off-by: Paul Walmsley --- arch/riscv/Kconfig | 1 + arch/riscv/include/asm/ptrace.h | 29 +++++++++++- arch/riscv/kernel/ptrace.c | 99 +++++++++++++++++++++++++++++++++++++++++ 3 files changed, 128 insertions(+), 1 deletion(-) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index 8eebbc8860bb..d5bbf4223fd2 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -61,6 +61,7 @@ config RISCV select SPARSEMEM_STATIC if 32BIT select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU select HAVE_ARCH_MMAP_RND_BITS + select HAVE_REGS_AND_STACK_ACCESS_API config ARCH_MMAP_RND_BITS_MIN default 18 if 64BIT diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h index d48d1e13973c..635b7b5506ec 100644 --- a/arch/riscv/include/asm/ptrace.h +++ b/arch/riscv/include/asm/ptrace.h @@ -8,6 +8,7 @@ #include #include +#include #ifndef __ASSEMBLY__ @@ -59,7 +60,7 @@ struct pt_regs { #endif #define user_mode(regs) (((regs)->sstatus & SR_SPP) == 0) - +#define MAX_REG_OFFSET offsetof(struct pt_regs, orig_a0) /* Helpers for working with the instruction pointer */ static inline unsigned long instruction_pointer(struct pt_regs *regs) @@ -79,6 +80,12 @@ static inline unsigned long user_stack_pointer(struct pt_regs *regs) { return regs->sp; } + +static inline unsigned long kernel_stack_pointer(struct pt_regs *regs) +{ + return regs->sp; +} + static inline void user_stack_pointer_set(struct pt_regs *regs, unsigned long val) { @@ -101,6 +108,26 @@ static inline unsigned long regs_return_value(struct pt_regs *regs) return regs->a0; } +int regs_query_register_offset(const char *name); + +unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n); +/** + * regs_get_register() - get register value from its offset + * @regs: pt_regs from which register value is gotten + * @offset: offset of the register. + * + * regs_get_register() returns the value of the register located at @offset + * from @regs. The @offset is the offset of the register in struct pt_regs. + * If @offset is bigger than MAX_REG_OFFSET, this returns 0.
+ */ +static inline unsigned long regs_get_register(struct pt_regs *regs, + unsigned int offset) +{ + if (unlikely(offset > MAX_REG_OFFSET)) + return 0; + + return *(unsigned long *)((unsigned long)regs + offset); +} #endif /* __ASSEMBLY__ */ #endif /* _ASM_RISCV_PTRACE_H */ diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c index 1252113ef8b2..2ae450d67659 100644 --- a/arch/riscv/kernel/ptrace.c +++ b/arch/riscv/kernel/ptrace.c @@ -125,6 +125,105 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) return &riscv_user_native_view; } +struct pt_regs_offset { + const char *name; + int offset; +}; + +#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)} +#define REG_OFFSET_END {.name = NULL, .offset = 0} + +static const struct pt_regs_offset regoffset_table[] = { + REG_OFFSET_NAME(sepc), + REG_OFFSET_NAME(ra), + REG_OFFSET_NAME(sp), + REG_OFFSET_NAME(gp), + REG_OFFSET_NAME(tp), + REG_OFFSET_NAME(t0), + REG_OFFSET_NAME(t1), + REG_OFFSET_NAME(t2), + REG_OFFSET_NAME(s0), + REG_OFFSET_NAME(s1), + REG_OFFSET_NAME(a0), + REG_OFFSET_NAME(a1), + REG_OFFSET_NAME(a2), + REG_OFFSET_NAME(a3), + REG_OFFSET_NAME(a4), + REG_OFFSET_NAME(a5), + REG_OFFSET_NAME(a6), + REG_OFFSET_NAME(a7), + REG_OFFSET_NAME(s2), + REG_OFFSET_NAME(s3), + REG_OFFSET_NAME(s4), + REG_OFFSET_NAME(s5), + REG_OFFSET_NAME(s6), + REG_OFFSET_NAME(s7), + REG_OFFSET_NAME(s8), + REG_OFFSET_NAME(s9), + REG_OFFSET_NAME(s10), + REG_OFFSET_NAME(s11), + REG_OFFSET_NAME(t3), + REG_OFFSET_NAME(t4), + REG_OFFSET_NAME(t5), + REG_OFFSET_NAME(t6), + REG_OFFSET_NAME(sstatus), + REG_OFFSET_NAME(sbadaddr), + REG_OFFSET_NAME(scause), + REG_OFFSET_NAME(orig_a0), + REG_OFFSET_END, +}; + +/** + * regs_query_register_offset() - query register offset from its name + * @name: the name of a register + * + * regs_query_register_offset() returns the offset of a register in struct + * pt_regs from its name. If the name is invalid, this returns -EINVAL; + */ +int regs_query_register_offset(const char *name) +{ + const struct pt_regs_offset *roff; + + for (roff = regoffset_table; roff->name != NULL; roff++) + if (!strcmp(roff->name, name)) + return roff->offset; + return -EINVAL; +} + +/** + * regs_within_kernel_stack() - check the address in the stack + * @regs: pt_regs which contains kernel stack pointer. + * @addr: address which is checked. + * + * regs_within_kernel_stack() checks @addr is within the kernel stack page(s). + * If @addr is within the kernel stack, it returns true. If not, returns false. + */ +static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr) +{ + return (addr & ~(THREAD_SIZE - 1)) == + (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1)); +} + +/** + * regs_get_kernel_stack_nth() - get Nth entry of the stack + * @regs: pt_regs which contains kernel stack pointer. + * @n: stack entry number. + * + * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which + * is specified by @regs. If the @n th entry is NOT in the kernel stack, + * this returns 0. 
+ */ +unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n) +{ + unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs); + + addr += n; + if (regs_within_kernel_stack(regs, (unsigned long)addr)) + return *addr; + else + return 0; +} + void ptrace_disable(struct task_struct *child) { clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
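As an aside, the accessors above follow the same contract as on other architectures that select HAVE_REGS_AND_STACK_ACCESS_API. A minimal, illustrative kprobes module (not part of this series; the probed symbol and register name are only examples) might consume them like this:

/* Illustrative only: dump the first argument register and the top kernel
 * stack slot whenever the probed symbol is hit.
 */
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>

static struct kprobe kp = {
	.symbol_name = "do_sys_open",	/* example probe point */
};

static int demo_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	int off = regs_query_register_offset("a0");	/* -EINVAL if unknown */

	if (off >= 0)
		pr_info("a0 = 0x%lx\n", regs_get_register(regs, off));

	/* Entry 0 of the kernel stack at the probe point; 0 if out of range */
	pr_info("stack[0] = 0x%lx\n", regs_get_kernel_stack_nth(regs, 0));
	return 0;
}

static int __init regs_demo_init(void)
{
	kp.pre_handler = demo_pre_handler;
	return register_kprobe(&kp);
}

static void __exit regs_demo_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(regs_demo_init);
module_exit(regs_demo_exit);
MODULE_LICENSE("GPL");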
From patchwork Tue Nov 5 01:58:33 2019 X-Patchwork-Submitter: Vincent Chen X-Patchwork-Id: 11226851 From: Vincent Chen To: paul.walmsley@sifive.com, mathieu.desnoyers@efficios.com Cc: linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, vincent.chen@sifive.com Subject: [PATCH 2/3] riscv: Add support for restartable sequence Date: Tue, 5 Nov 2019 09:58:33 +0800 Message-Id: <1572919114-3886-3-git-send-email-vincent.chen@sifive.com> In-Reply-To: <1572919114-3886-1-git-send-email-vincent.chen@sifive.com> References: <1572919114-3886-1-git-send-email-vincent.chen@sifive.com> Add calls to rseq_signal_deliver(), rseq_handle_notify_resume() and rseq_syscall() to introduce RSEQ support. 1. Call the rseq_handle_notify_resume() function on return to userspace if the TIF_NOTIFY_RESUME thread flag is set. 2. Call the rseq_signal_deliver() function to fix up the pre-signal frame when a signal is delivered on top of a restartable sequence critical section. 3. Check that system calls are not invoked from within rseq critical sections by invoking rseq_syscall() from ret_from_syscall(). With CONFIG_DEBUG_RSEQ, such behavior results in termination of the process with SIGSEGV. Signed-off-by: Vincent Chen --- arch/riscv/Kconfig | 1 + arch/riscv/kernel/entry.S | 4 ++++ arch/riscv/kernel/signal.c | 3 +++ 3 files changed, 8 insertions(+) diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig index d5bbf4223fd2..58759a4f8dff 100644 --- a/arch/riscv/Kconfig +++ b/arch/riscv/Kconfig @@ -62,6 +62,7 @@ config RISCV select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU select HAVE_ARCH_MMAP_RND_BITS select HAVE_REGS_AND_STACK_ACCESS_API + select HAVE_RSEQ config ARCH_MMAP_RND_BITS_MIN default 18 if 64BIT diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S index 8ca479831142..00e9eaa02969 100644 --- a/arch/riscv/kernel/entry.S +++ b/arch/riscv/kernel/entry.S @@ -212,6 +212,10 @@ ENTRY(handle_exception) handle_syscall: /* save the initial A0 value (needed in signal handlers) */ REG_S a0, PT_ORIG_A0(sp) +#ifdef CONFIG_DEBUG_RSEQ + move a0, sp + call rseq_syscall +#endif /* * Advance SEPC to avoid executing the original * scall instruction on sret diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c index d0f6f212f5df..f2f6017f92d0 100644 --- a/arch/riscv/kernel/signal.c +++ b/arch/riscv/kernel/signal.c @@ -219,6 +219,8 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) sigset_t *oldset = sigmask_to_save(); int ret; + rseq_signal_deliver(ksig, regs); + /* Are we from a system call?
*/ if (regs->scause == EXC_SYSCALL) { /* Avoid additional syscall restarting via ret_from_exception */ @@ -302,5 +304,6 @@ asmlinkage __visible void do_notify_resume(struct pt_regs *regs, if (thread_info_flags & _TIF_NOTIFY_RESUME) { clear_thread_flag(TIF_NOTIFY_RESUME); tracehook_notify_resume(regs); + rseq_handle_notify_resume(NULL, regs); } }
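For reference, the kernel hooks added above service userspace threads that register themselves through the rseq system call. A minimal, hypothetical registration sketch follows; the signature constant is only a placeholder (a real user must pass the same RSEQ_SIG its abort handlers are tagged with), and the libc is assumed to expose __NR_rseq:

/* Hypothetical userspace sketch: register this thread for rseq and read the
 * kernel-maintained CPU id afterwards.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/rseq.h>

#define DEMO_RSEQ_SIG	0xf1401073u	/* placeholder signature */

static __thread struct rseq rseq_area __attribute__((aligned(32)));

int main(void)
{
	memset(&rseq_area, 0, sizeof(rseq_area));
	rseq_area.cpu_id = RSEQ_CPU_ID_UNINITIALIZED;

	if (syscall(__NR_rseq, &rseq_area, sizeof(rseq_area), 0, DEMO_RSEQ_SIG)) {
		perror("rseq registration");
		return 1;
	}

	/* After registration, the kernel keeps cpu_id up to date on return to
	 * userspace via the rseq_handle_notify_resume() path wired up above.
	 */
	printf("registered, currently on cpu %u\n", rseq_area.cpu_id);
	return 0;
}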
From patchwork Tue Nov 5 01:58:34 2019 X-Patchwork-Submitter: Vincent Chen X-Patchwork-Id: 11226857 From: Vincent Chen To: paul.walmsley@sifive.com, mathieu.desnoyers@efficios.com Cc: linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, vincent.chen@sifive.com Subject: [PATCH 3/3] rseq/selftests: Add support for riscv Date: Tue, 5 Nov 2019 09:58:34 +0800 Message-Id: <1572919114-3886-4-git-send-email-vincent.chen@sifive.com> In-Reply-To: <1572919114-3886-1-git-send-email-vincent.chen@sifive.com> References: <1572919114-3886-1-git-send-email-vincent.chen@sifive.com> Add support for RISC-V in the rseq selftests, covering both the 64-bit and 32-bit ISAs in little-endian mode. Signed-off-by: Vincent Chen --- tools/testing/selftests/rseq/param_test.c | 23 ++ tools/testing/selftests/rseq/rseq-riscv.h | 622 ++++++++++++++++++++++++++++++ tools/testing/selftests/rseq/rseq.h | 2 + 3 files changed, 647 insertions(+) create mode 100644 tools/testing/selftests/rseq/rseq-riscv.h diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c index eec2663261f2..5e4e7e8fb9c2 100644 --- a/tools/testing/selftests/rseq/param_test.c +++ b/tools/testing/selftests/rseq/param_test.c @@ -205,6 +205,29 @@ unsigned int yield_mod_cnt, nr_abort; "addiu " INJECT_ASM_REG ", -1\n\t" \ "bnez " INJECT_ASM_REG ", 222b\n\t" \ "333:\n\t" +#elif defined(__riscv) + +#define RSEQ_INJECT_INPUT \ + , [loop_cnt_1]"m"(loop_cnt[1]) \ + , [loop_cnt_2]"m"(loop_cnt[2]) \ + , [loop_cnt_3]"m"(loop_cnt[3]) \ + , [loop_cnt_4]"m"(loop_cnt[4]) \ + , [loop_cnt_5]"m"(loop_cnt[5]) \ + , [loop_cnt_6]"m"(loop_cnt[6]) + +#define INJECT_ASM_REG "t1" + +#define RSEQ_INJECT_CLOBBER \ + , INJECT_ASM_REG + +#define RSEQ_INJECT_ASM(n) \ + "lw " INJECT_ASM_REG ", %[loop_cnt_" #n "]\n\t" \ + "beqz " INJECT_ASM_REG ", 333f\n\t" \ + "222:\n\t" \ + "addi " INJECT_ASM_REG "," INJECT_ASM_REG ", -1\n\t" \ + "bnez " INJECT_ASM_REG ", 222b\n\t" \ + "333:\n\t" + #else #error unsupported target diff --git a/tools/testing/selftests/rseq/rseq-riscv.h b/tools/testing/selftests/rseq/rseq-riscv.h new file mode 100644 index 000000000000..56b47db4a9a4 --- /dev/null +++ b/tools/testing/selftests/rseq/rseq-riscv.h @@ -0,0 +1,622 @@ +/* SPDX-License-Identifier: LGPL-2.1 OR MIT */ +/* + * Select the instruction "csrw mhartid, x0" as the RSEQ_SIG. Unlike + * other architectures, the ebreak instruction has no immediate field for + * distinguishing purposes. Hence, ebreak is not suitable as RSEQ_SIG. + * "csrw mhartid, x0" can also satisfy the RSEQ requirement because it + * is an uncommon instruction and will raise an illegal instruction + * exception when executed in all modes.
+ */ + +#if __ORDER_LITTLE_ENDIAN__ == 1234 +#define RSEQ_SIG 0xf1401073 /* csrw mhartid, x0 */ +#else +#error "Currently, RSEQ only supports Little-Endian version" +#endif + +#if __riscv_xlen == 64 +#define __REG_SEL(a, b) a +#elif __riscv_xlen == 32 +#define __REG_SEL(a, b) b +#endif + +#define REG_L __REG_SEL("ld ", "lw ") +#define REG_S __REG_SEL("sd ", "sw ") + +#define RISCV_FENCE(p, s) \ + __asm__ __volatile__ ("fence " #p "," #s : : : "memory") +#define rseq_smp_mb() RISCV_FENCE(rw, rw) +#define rseq_smp_rmb() RISCV_FENCE(r, r) +#define rseq_smp_wmb() RISCV_FENCE(w, w) +#define RSEQ_ASM_TMP_REG_1 "t6" +#define RSEQ_ASM_TMP_REG_2 "t5" +#define RSEQ_ASM_TMP_REG_3 "t4" +#define RSEQ_ASM_TMP_REG_4 "t3" + +#define rseq_smp_load_acquire(p) \ +__extension__ ({ \ + __typeof(*p) ____p1 = RSEQ_READ_ONCE(*p); \ + RISCV_FENCE(r, rw); \ + ____p1; \ +}) + +#define rseq_smp_acquire__after_ctrl_dep() rseq_smp_rmb() + +#define rseq_smp_store_release(p, v) \ +do { \ + RISCV_FENCE(rw, w); \ + RSEQ_WRITE_ONCE(*p, v); \ +} while (0) + + +#ifdef RSEQ_SKIP_FASTPATH +#include "rseq-skip.h" +#else /* !RSEQ_SKIP_FASTPATH */ + +#define __RSEQ_ASM_DEFINE_TABLE(label, version, flags, start_ip, \ + post_commit_offset, abort_ip) \ + ".pushsection __rseq_cs, \"aw\"\n" \ + ".balign 32\n" \ + __rseq_str(label) ":\n" \ + ".long " __rseq_str(version) ", " __rseq_str(flags) "\n" \ + ".quad " __rseq_str(start_ip) ", " \ + __rseq_str(post_commit_offset) ", " \ + __rseq_str(abort_ip) "\n" \ + ".popsection\n\t" \ + ".pushsection __rseq_cs_ptr_array, \"aw\"\n" \ + ".quad " __rseq_str(label) "b\n" \ + ".popsection\n" + +#define RSEQ_ASM_DEFINE_TABLE(label, start_ip, post_commit_ip, abort_ip) \ + __RSEQ_ASM_DEFINE_TABLE(label, 0x0, 0x0, start_ip, \ + (post_commit_ip - start_ip), abort_ip) + + +/* + * Exit points of a rseq critical section consist of all instructions outside + * of the critical section where a critical section can either branch to or + * reach through the normal course of its execution. The abort IP and the + * post-commit IP are already part of the __rseq_cs section and should not be + * explicitly defined as additional exit points. Knowing all exit points is + * useful to assist debuggers stepping over the critical section.
+ */ +#define RSEQ_ASM_DEFINE_EXIT_POINT(start_ip, exit_ip) \ + ".pushsection __rseq_exit_point_array, \"aw\"\n" \ + ".quad " __rseq_str(start_ip) ", " __rseq_str(exit_ip) "\n" \ + ".popsection\n" + +#define RSEQ_ASM_STORE_RSEQ_CS(label, cs_label, rseq_cs) \ + RSEQ_INJECT_ASM(1) \ + "la "RSEQ_ASM_TMP_REG_1 ", " __rseq_str(cs_label) "\n" \ + REG_S RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(rseq_cs) "]\n" \ + __rseq_str(label) ":\n" + +#define RSEQ_ASM_DEFINE_ABORT(label, abort_label) \ + "j 222f\n" \ + ".balign 4\n" \ + ".long " __rseq_str(RSEQ_SIG) "\n" \ + __rseq_str(label) ":\n" \ + "j %l[" __rseq_str(abort_label) "]\n" \ + "222:\n" + +#define RSEQ_ASM_OP_STORE(value, var) \ + REG_S "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n" + +#define RSEQ_ASM_OP_CMPEQ(var, expect, label) \ + REG_L RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(var) "]\n" \ + "bne "RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(expect) "] ," \ + __rseq_str(label) "\n" + +#define RSEQ_ASM_OP_CMPEQ32(var, expect, label) \ + "lw "RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(var) "]\n" \ + "bne "RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(expect) "] ," \ + __rseq_str(label) "\n" + +#define RSEQ_ASM_OP_CMPNE(var, expect, label) \ + REG_L RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(var) "]\n" \ + "beq "RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(expect) "] ," \ + __rseq_str(label) "\n" + +#define RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, label) \ + RSEQ_INJECT_ASM(2) \ + RSEQ_ASM_OP_CMPEQ32(current_cpu_id, cpu_id, label) + +#define RSEQ_ASM_OP_R_LOAD(var) \ + REG_L RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(var) "]\n" + +#define RSEQ_ASM_OP_R_STORE(var) \ + REG_S RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(var) "]\n" + +#define RSEQ_ASM_OP_R_LOAD_OFF(offset) \ + "add "RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(offset) "], " \ + RSEQ_ASM_TMP_REG_1 "\n" \ + REG_L RSEQ_ASM_TMP_REG_1 ", (" RSEQ_ASM_TMP_REG_1 ")\n" + +#define RSEQ_ASM_OP_R_ADD(count) \ + "add "RSEQ_ASM_TMP_REG_1 ", " RSEQ_ASM_TMP_REG_1 \ + ", %[" __rseq_str(count) "]\n" + +#define RSEQ_ASM_OP_FINAL_STORE(value, var, post_commit_label) \ + RSEQ_ASM_OP_STORE(value, var) \ + __rseq_str(post_commit_label) ":\n" + +#define RSEQ_ASM_OP_FINAL_STORE_RELEASE(value, var, post_commit_label) \ + "fence rw, w\n" \ + RSEQ_ASM_OP_STORE(value, var) \ + __rseq_str(post_commit_label) ":\n" + +#define RSEQ_ASM_OP_R_FINAL_STORE(var, post_commit_label) \ + REG_S RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(var) "]\n" \ + __rseq_str(post_commit_label) ":\n" + +#define RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len) \ + "beqz %[" __rseq_str(len) "], 333f\n" \ + "mv " RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(len) "]\n" \ + "mv " RSEQ_ASM_TMP_REG_2 ", %[" __rseq_str(src) "]\n" \ + "mv " RSEQ_ASM_TMP_REG_3 ", %[" __rseq_str(dst) "]\n" \ + "222:\n" \ + "lb " RSEQ_ASM_TMP_REG_4 ", 0(" RSEQ_ASM_TMP_REG_2 ")\n" \ + "sb " RSEQ_ASM_TMP_REG_4 ", 0(" RSEQ_ASM_TMP_REG_3 ")\n" \ + "addi " RSEQ_ASM_TMP_REG_1 ", " RSEQ_ASM_TMP_REG_1 ", -1\n" \ + "addi " RSEQ_ASM_TMP_REG_2 ", " RSEQ_ASM_TMP_REG_2 ", 1\n" \ + "addi " RSEQ_ASM_TMP_REG_3 ", " RSEQ_ASM_TMP_REG_3 ", 1\n" \ + "bnez " RSEQ_ASM_TMP_REG_1 ", 222b\n" \ + "333:\n" + +static inline __attribute__((always_inline)) +int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, 
current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail]) + RSEQ_INJECT_ASM(4) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2]) +#endif + RSEQ_ASM_OP_FINAL_STORE(newv, v, 3) + RSEQ_INJECT_ASM(5) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [v] "m" (*v), + [expect] "r" (expect), + [newv] "r" (newv) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2 +#endif + ); + + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, + off_t voffp, intptr_t *load, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + RSEQ_ASM_OP_CMPNE(v, expectnot, %l[cmpfail]) + RSEQ_INJECT_ASM(4) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPNE(v, expectnot, %l[error2]) +#endif + RSEQ_ASM_OP_R_LOAD(v) + RSEQ_ASM_OP_R_STORE(load) + RSEQ_ASM_OP_R_LOAD_OFF(voffp) + RSEQ_ASM_OP_R_FINAL_STORE(v, 3) + RSEQ_INJECT_ASM(5) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [v] "m" (*v), + [expectnot] "r" (expectnot), + [load] "m" (*load), + [voffp] "r" (voffp) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2 +#endif + ); + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int rseq_addv(intptr_t *v, intptr_t count, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) +#endif + RSEQ_ASM_OP_R_LOAD(v) + RSEQ_ASM_OP_R_ADD(count) + RSEQ_ASM_OP_R_FINAL_STORE(v, 3) + RSEQ_INJECT_ASM(4) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [v] "m" (*v), + [count] "r" (count) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1 + : abort +#ifdef RSEQ_COMPARE_TWICE + , error1 +#endif + ); + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int 
rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, + intptr_t *v2, intptr_t newv2, + intptr_t newv, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail]) + RSEQ_INJECT_ASM(4) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2]) +#endif + RSEQ_ASM_OP_STORE(newv2, v2) + RSEQ_INJECT_ASM(5) + RSEQ_ASM_OP_FINAL_STORE(newv, v, 3) + RSEQ_INJECT_ASM(6) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [expect] "r" (expect), + [v] "m" (*v), + [newv] "r" (newv), + [v2] "m" (*v2), + [newv2] "r" (newv2) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2 +#endif + ); + + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, + intptr_t *v2, intptr_t newv2, + intptr_t newv, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail]) + RSEQ_INJECT_ASM(4) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2]) +#endif + RSEQ_ASM_OP_STORE(newv2, v2) + RSEQ_INJECT_ASM(5) + RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3) + RSEQ_INJECT_ASM(6) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [expect] "r" (expect), + [v] "m" (*v), + [newv] "r" (newv), + [v2] "m" (*v2), + [newv2] "r" (newv2) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2 +#endif + ); + + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, + intptr_t *v2, intptr_t expect2, + intptr_t newv, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error3]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + 
RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail]) + RSEQ_INJECT_ASM(4) + RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[cmpfail]) + RSEQ_INJECT_ASM(5) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2]) + RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[error3]) +#endif + RSEQ_ASM_OP_FINAL_STORE(newv, v, 3) + RSEQ_INJECT_ASM(6) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [v] "m" (*v), + [expect] "r" (expect), + [v2] "m" (*v2), + [expect2] "r" (expect2), + [newv] "r" (newv) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2, error3 +#endif + ); + + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +error3: + rseq_bug("2nd expected value comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, + void *dst, void *src, size_t len, + intptr_t newv, int cpu) +{ + RSEQ_INJECT_C(9) + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail]) + RSEQ_INJECT_ASM(4) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2]) +#endif + RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len) + RSEQ_INJECT_ASM(5) + RSEQ_ASM_OP_FINAL_STORE(newv, v, 3) + RSEQ_INJECT_ASM(6) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [expect] "r" (expect), + [v] "m" (*v), + [newv] "r" (newv), + [dst] "r" (dst), + [src] "r" (src), + [len] "r" (len) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1, RSEQ_ASM_TMP_REG_2, + RSEQ_ASM_TMP_REG_3, RSEQ_ASM_TMP_REG_4 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2 +#endif + ); + + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +#endif +} + +static inline __attribute__((always_inline)) +int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, + void *dst, void *src, size_t len, + intptr_t newv, int cpu) +{ + RSEQ_INJECT_C(9) + + __asm__ __volatile__ goto ( + RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail]) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1]) + RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2]) +#endif + RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs) + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f) + RSEQ_INJECT_ASM(3) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail]) + RSEQ_INJECT_ASM(4) +#ifdef RSEQ_COMPARE_TWICE + RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1]) + RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2]) +#endif + RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len) + RSEQ_INJECT_ASM(5) + RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3) + 
RSEQ_INJECT_ASM(6) + RSEQ_ASM_DEFINE_ABORT(4, abort) + : /* gcc asm goto does not allow outputs */ + : [cpu_id] "r" (cpu), + [current_cpu_id] "m" (__rseq_abi.cpu_id), + [rseq_cs] "m" (__rseq_abi.rseq_cs), + [expect] "r" (expect), + [v] "m" (*v), + [newv] "r" (newv), + [dst] "r" (dst), + [src] "r" (src), + [len] "r" (len) + RSEQ_INJECT_INPUT + : "memory", RSEQ_ASM_TMP_REG_1, RSEQ_ASM_TMP_REG_2, + RSEQ_ASM_TMP_REG_3, RSEQ_ASM_TMP_REG_4 + : abort, cmpfail +#ifdef RSEQ_COMPARE_TWICE + , error1, error2 +#endif + ); + + return 0; +abort: + RSEQ_INJECT_FAILED + return -1; +cmpfail: + return 1; +#ifdef RSEQ_COMPARE_TWICE +error1: + rseq_bug("cpu_id comparison failed"); +error2: + rseq_bug("expected value comparison failed"); +#endif +} + +#endif /* !RSEQ_SKIP_FASTPATH */ diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index d40d60e7499e..bc68a6127285 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -79,6 +79,8 @@ extern int __rseq_handled; #include #elif defined(__s390__) #include +#elif defined(__riscv) +#include #else #error unsupported target #endif
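To close the loop, the fast-path helpers defined in rseq-riscv.h are consumed through the selftest harness in tools/testing/selftests/rseq/ in the same way as on other architectures. A rough usage sketch follows; rseq_register_current_thread(), rseq_unregister_current_thread(), rseq_cpu_start() and rseq_cmpeqv_storev() come from the existing harness, while the per-CPU counter and percpu_inc() are purely illustrative:

/* Illustrative per-CPU counter increment built on the selftest harness. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sched.h>		/* CPU_SETSIZE */
#include "rseq.h"		/* pulls in rseq-riscv.h on RISC-V */

struct percpu_count {
	intptr_t count;
} __attribute__((aligned(128)));	/* keep CPUs on separate cache lines */

static struct percpu_count counts[CPU_SETSIZE];

/* Retry until the cpu check, compare and final store all commit unaborted. */
static void percpu_inc(void)
{
	for (;;) {
		int cpu = rseq_cpu_start();
		intptr_t old = counts[cpu].count;

		/* 0: success, 1: *v != expect, -1: aborted by preemption/signal */
		if (!rseq_cmpeqv_storev(&counts[cpu].count, old, old + 1, cpu))
			return;
	}
}

int main(void)
{
	if (rseq_register_current_thread())
		return 1;
	percpu_inc();
	printf("count on cpu %d: %ld\n", (int)rseq_cpu_start(),
	       (long)counts[rseq_cpu_start()].count);
	return rseq_unregister_current_thread() ? 1 : 0;
}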