From patchwork Thu Jan 25 06:21:44 2024
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 13530173
From: debug@rivosinc.com
To: rick.p.edgecombe@intel.com, broonie@kernel.org, Szabolcs.Nagy@arm.com,
	kito.cheng@sifive.com, keescook@chromium.org, ajones@ventanamicro.com,
	paul.walmsley@sifive.com, palmer@dabbelt.com,
	conor.dooley@microchip.com, cleger@rivosinc.com,
	atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com,
	alexghiti@rivosinc.com
Cc: corbet@lwn.net, aou@eecs.berkeley.edu, oleg@redhat.com,
	akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com,
	shuah@kernel.org, brauner@kernel.org, debug@rivosinc.com,
	guoren@kernel.org, samitolvanen@google.com, evan@rivosinc.com,
	xiao.w.wang@intel.com, apatel@ventanamicro.com,
	mchitale@ventanamicro.com, waylingii@gmail.com,
	greentime.hu@sifive.com, heiko@sntech.de, jszhang@kernel.org,
	shikemeng@huaweicloud.com, david@redhat.com, charlie@rivosinc.com,
	panqinglin2020@iscas.ac.cn, willy@infradead.org,
	vincent.chen@sifive.com, andy.chiu@sifive.com, gerg@kernel.org,
	jeeheng.sia@starfivetech.com, mason.huo@starfivetech.com,
	ancientmodern4@gmail.com, mathis.salmen@matsal.de,
	cuiyunhui@bytedance.com, bhe@redhat.com, chenjiahao16@huawei.com,
	ruscur@russell.cc, bgray@linux.ibm.com, alx@kernel.org,
	baruch@tkos.co.il, zhangqing@loongson.cn, catalin.marinas@arm.com,
	revest@chromium.org, josh@joshtriplett.org, joey.gouly@arm.com,
	shr@devkernel.io, omosnace@redhat.com, ojeda@kernel.org,
	jhubbard@nvidia.com, linux-doc@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-kselftest@vger.kernel.org
Subject: [RFC PATCH v1 19/28] riscv: Implements arch agnostic shadow stack prctls
Date: Wed, 24 Jan 2024 22:21:44 -0800
Message-ID: <20240125062739.1339782-20-debug@rivosinc.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240125062739.1339782-1-debug@rivosinc.com>
References: <20240125062739.1339782-1-debug@rivosinc.com>
From: Deepak Gupta <debug@rivosinc.com>

Implement the architecture-agnostic prctl() interface for getting and
setting shadow stack status. The prctls implemented are
PR_GET_SHADOW_STACK_STATUS, PR_SET_SHADOW_STACK_STATUS and
PR_LOCK_SHADOW_STACK_STATUS.

As part of PR_SET_SHADOW_STACK_STATUS/PR_GET_SHADOW_STACK_STATUS, only
PR_SHADOW_STACK_ENABLE is implemented, because RISC-V allows each mode
to write to its own shadow stack using `sspush` or `ssamoswap`.

PR_LOCK_SHADOW_STACK_STATUS locks the current shadow stack enable
configuration. The sequence "enable shadow stack, then disable it, then
enable it again" is not supported; it is unclear whether such semantics
would be useful, so it's better to return an error code when that
situation arises.

Signed-off-by: Deepak Gupta <debug@rivosinc.com>
---
 arch/riscv/include/asm/usercfi.h |  12 +++-
 arch/riscv/kernel/usercfi.c      | 105 +++++++++++++++++++++++++++++++
 2 files changed, 116 insertions(+), 1 deletion(-)
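For illustration, here is a minimal userspace sketch of how these prctls
are meant to be driven (not part of the patch, and ignored by git am in
this position). The fallback PR_* values below are assumptions here --
they mirror the uapi <linux/prctl.h> additions that go with this
interface -- so the series' own header should be preferred when
compiling:

/* Hypothetical usage sketch; fallback constant values are assumptions. */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_GET_SHADOW_STACK_STATUS
#define PR_GET_SHADOW_STACK_STATUS	74
#define PR_SET_SHADOW_STACK_STATUS	75
#define PR_LOCK_SHADOW_STACK_STATUS	76
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)
#endif

int main(void)
{
	unsigned long status = 0;

	/* Enable backward-edge CFI (shadow stack) for this task. */
	if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE, 0, 0, 0))
		perror("PR_SET_SHADOW_STACK_STATUS");

	/* Read the status back; the kernel copies the flag word through
	 * the user pointer passed in arg2.
	 */
	if (prctl(PR_GET_SHADOW_STACK_STATUS, &status, 0, 0, 0))
		perror("PR_GET_SHADOW_STACK_STATUS");
	printf("shadow stack status: %#lx\n", status);

	return 0;
}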
diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index eb9a0905e72b..72bcfa773752 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -7,6 +7,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/types.h>
+#include <linux/prctl.h>
 
 struct task_struct;
 struct kernel_clone_args;
@@ -14,7 +15,8 @@ struct kernel_clone_args;
 #ifdef CONFIG_RISCV_USER_CFI
 struct cfi_status {
 	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
-	unsigned long rsvd : ((sizeof(unsigned long)*8) - 1);
+	unsigned long ubcfi_locked : 1;
+	unsigned long rsvd : ((sizeof(unsigned long)*8) - 2);
 	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
 	unsigned long shdw_stk_base; /* Base address of shadow stack */
 	unsigned long shdw_stk_size; /* size of shadow stack */
@@ -26,6 +28,9 @@ void shstk_release(struct task_struct *tsk);
 void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size);
 void set_active_shstk(struct task_struct *task, unsigned long shstk_addr);
 bool is_shstk_enabled(struct task_struct *task);
+bool is_shstk_locked(struct task_struct *task);
+
+#define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK (PR_SHADOW_STACK_ENABLE)
 
 #else
 
@@ -56,6 +61,11 @@ static inline bool is_shstk_enabled(struct task_struct *task)
 	return false;
 }
 
+static inline bool is_shstk_locked(struct task_struct *task)
+{
+	return false;
+}
+
 #endif /* CONFIG_RISCV_USER_CFI */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index 36cac0d653f5..be3a071272d8 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -24,6 +24,16 @@ bool is_shstk_enabled(struct task_struct *task)
 	return task->thread_info.user_cfi_state.ubcfi_en ? true : false;
 }
 
+bool is_shstk_allocated(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.shdw_stk_base ? true : false;
+}
+
+bool is_shstk_locked(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ubcfi_locked ? true : false;
+}
+
 void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size)
 {
 	task->thread_info.user_cfi_state.shdw_stk_base = shstk_addr;
@@ -42,6 +52,21 @@ void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
 	task->thread_info.user_cfi_state.user_shdw_stk = shstk_addr;
 }
 
+void set_shstk_status(struct task_struct *task, bool enable)
+{
+	task->thread_info.user_cfi_state.ubcfi_en = enable ? 1 : 0;
+
+	if (enable)
+		task->thread_info.envcfg |= ENVCFG_SSE;
+	else
+		task->thread_info.envcfg &= ~ENVCFG_SSE;
+}
+
+void set_shstk_lock(struct task_struct *task)
+{
+	task->thread_info.user_cfi_state.ubcfi_locked = 1;
+}
+
 /*
  * If size is 0, then to be compatible with regular stack we want it to be as big as
  * regular stack. Else PAGE_ALIGN it and return back
@@ -269,3 +294,83 @@ void shstk_release(struct task_struct *tsk)
 	vm_munmap(base, size);
 	set_shstk_base(tsk, 0, 0);
 }
+
+int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status)
+{
+	unsigned long bcfi_status = 0;
+
+	if (!cpu_supports_shadow_stack())
+		return -EINVAL;
+
+	/* this means shadow stack is enabled on the task */
+	bcfi_status |= (is_shstk_enabled(t) ? PR_SHADOW_STACK_ENABLE : 0);
+
+	return copy_to_user(status, &bcfi_status, sizeof(bcfi_status)) ? -EFAULT : 0;
+}
+
+int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status)
+{
+	unsigned long size = 0, addr = 0;
+	bool enable_shstk = false;
+
+	if (!cpu_supports_shadow_stack())
+		return -EINVAL;
+
+	/* Reject unknown flags */
+	if (status & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
+		return -EINVAL;
+
+	/* bcfi status is locked and can't be further modified by user */
+	if (is_shstk_locked(t))
+		return -EINVAL;
+
+	enable_shstk = status & PR_SHADOW_STACK_ENABLE;
+	/* Request is to enable shadow stack and it is not enabled already */
+	if (enable_shstk && !is_shstk_enabled(t)) {
+		/* Shadow stack was already allocated and an enable request
+		 * came again; no need to support such a usecase, return EINVAL.
+		 */
+		if (is_shstk_allocated(t))
+			return -EINVAL;
+
+		size = calc_shstk_size(0);
+		addr = allocate_shadow_stack(0, size, 0, false);
+		if (IS_ERR_VALUE(addr))
+			return -ENOMEM;
+		set_shstk_base(t, addr, size);
+		set_active_shstk(t, addr + size);
+	}
+
+	/*
+	 * If a request to disable shadow stack happens, go ahead and release it.
+	 * Note that if a CLONE_VFORKed child did this, we end up not releasing
+	 * the shadow stack (because it might still be needed in the parent),
+	 * although we do disable it for the VFORKed child. If the VFORKed child
+	 * then tries to enable it again, it gets an entirely new shadow stack,
+	 * because the following conditions are true:
+	 * - shadow stack was not enabled for the vforked child
+	 * - shadow stack base was anyway pointing to 0
+	 * This shouldn't be a big issue: we want the parent to retain its shadow
+	 * stack when the VFORKed child releases resources via exit or exec, but
+	 * we also want the VFORKed child to be able to break away and establish
+	 * a new shadow stack if it desires.
+	 */
+	if (!enable_shstk)
+		shstk_release(t);
+
+	set_shstk_status(t, enable_shstk);
+	return 0;
+}
+
+int arch_lock_shadow_stack_status(struct task_struct *task,
+				  unsigned long arg)
+{
+	/* If shstk not supported or not enabled on task, nothing to lock here */
+	if (!cpu_supports_shadow_stack() ||
+	    !is_shstk_enabled(task))
+		return -EINVAL;
+
+	set_shstk_lock(task);
+
+	return 0;
+}
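
As a companion illustration of the locking semantics implemented by
arch_lock_shadow_stack_status() above (again a hypothetical sketch, using
the same assumed PR_* constants as the earlier example): once the lock
prctl succeeds, ubcfi_locked is set and any further
PR_SET_SHADOW_STACK_STATUS should fail with EINVAL.

#include <assert.h>
#include <errno.h>
#include <sys/prctl.h>

/* Assumes shadow stack was already enabled on this task; the arch hook
 * above rejects the lock with EINVAL otherwise.
 */
static void lock_and_verify(void)
{
	int ret;

	/* Lock the current enable configuration. */
	ret = prctl(PR_LOCK_SHADOW_STACK_STATUS, 0, 0, 0, 0);
	assert(ret == 0);

	/* A disable request (clearing all flags) after the lock is
	 * expected to fail because the status can no longer change.
	 */
	ret = prctl(PR_SET_SHADOW_STACK_STATUS, 0, 0, 0, 0);
	assert(ret == -1 && errno == EINVAL);
}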