From patchwork Fri Mar 14 21:39:29 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 14017532
From: Deepak Gupta
Date: Fri, 14 Mar 2025 14:39:29 -0700
Subject: [PATCH v12 10/28] riscv/mm: Implement map_shadow_stack() syscall
MIME-Version: 1.0
Message-Id: <20250314-v5_user_cfi_series-v12-10-e51202b53138@rivosinc.com>
References: <20250314-v5_user_cfi_series-v12-0-e51202b53138@rivosinc.com>
In-Reply-To: <20250314-v5_user_cfi_series-v12-0-e51202b53138@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka,
 Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley,
 Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner,
 Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet,
 Shuah Khan, Jann Horn, Conor Dooley
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 devicetree@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
 alistair.francis@wdc.com, richard.henderson@linaro.org, jim.shu@sifive.com,
 andybnac@gmail.com, kito.cheng@sifive.com, charlie@rivosinc.com,
 atishp@rivosinc.com, evan@rivosinc.com, cleger@rivosinc.com,
 alexghiti@rivosinc.com, samitolvanen@google.com, broonie@kernel.org,
 rick.p.edgecombe@intel.com, Zong Li, Deepak Gupta

As discussed extensively in the changelog for the addition of this syscall
on x86 ("x86/shstk: Introduce map_shadow_stack syscall"), the existing
mmap() and madvise() syscalls do not map entirely well onto the security
requirements for shadow stack memory, since they lead to windows where
memory is allocated but not yet protected, or to stacks which are not
properly and safely initialised. Instead a new syscall, map_shadow_stack(),
has been defined which allocates and initialises a shadow stack page.

This patch implements this syscall for riscv. riscv doesn't require the
token to be set up by the kernel because user mode can do that by itself.
However, to provide compatibility and portability with other architectures,
user mode can specify the token set flag.
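
Purely for illustration (not part of this patch), a minimal user-space
sketch of calling the new syscall is shown below. It assumes the installed
uapi headers already define __NR_map_shadow_stack; the value of
SHADOW_STACK_SET_TOKEN is repeated locally in case they do not, and the
program only prints where the token ended up.

/*
 * Minimal sketch: allocate a shadow stack and ask the kernel to place a
 * restore token at the top. Error handling is kept to the bare minimum.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN	(1UL << 0)	/* matches the uapi definition */
#endif

int main(void)
{
	unsigned long size = 4 * 4096;
	long ssp;

	/* addr == 0 lets the kernel pick the location. */
	ssp = syscall(__NR_map_shadow_stack, 0UL, size, SHADOW_STACK_SET_TOKEN);
	if (ssp == -1) {
		perror("map_shadow_stack");
		return 1;
	}

	/* With SHADOW_STACK_SET_TOKEN, the return value is the token's address. */
	printf("shadow stack token at %#lx\n", (unsigned long)ssp);
	return 0;
}
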
Reviewed-by: Zong Li
Signed-off-by: Deepak Gupta
---
 arch/riscv/kernel/Makefile  |   1 +
 arch/riscv/kernel/usercfi.c | 144 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 145 insertions(+)

diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 8d186bfced45..3a861d320654 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -125,3 +125,4 @@ obj-$(CONFIG_ACPI)		+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)	+= acpi_numa.o
 
 obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += bugs.o
+obj-$(CONFIG_RISCV_USER_CFI) += usercfi.o
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
new file mode 100644
index 000000000000..24022809a7b5
--- /dev/null
+++ b/arch/riscv/kernel/usercfi.c
@@ -0,0 +1,144 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2024 Rivos, Inc.
+ * Deepak Gupta
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define SHSTK_ENTRY_SIZE	sizeof(void *)
+
+/*
+ * Writes on shadow stack can either be `sspush` or `ssamoswap`. `sspush` can happen
+ * implicitly on current shadow stack pointed to by CSR_SSP. `ssamoswap` takes pointer to
+ * shadow stack. To keep it simple, we plan to use `ssamoswap` to perform writes on shadow
+ * stack.
+ */
+static noinline unsigned long amo_user_shstk(unsigned long *addr, unsigned long val)
+{
+	/*
+	 * Never expect -1 on shadow stack. Expect return addresses and zero
+	 */
+	unsigned long swap = -1;
+
+	__enable_user_access();
+	asm goto(
+		".option push\n"
+		".option arch, +zicfiss\n"
+		"1: ssamoswap.d %[swap], %[val], %[addr]\n"
+		_ASM_EXTABLE(1b, %l[fault])
+		RISCV_ACQUIRE_BARRIER
+		".option pop\n"
+		: [swap] "=r" (swap), [addr] "+A" (*addr)
+		: [val] "r" (val)
+		: "memory"
+		: fault
+		);
+	__disable_user_access();
+	return swap;
+fault:
+	__disable_user_access();
+	return -1;
+}
+
+/*
+ * Create a restore token on the shadow stack. A token is always XLEN wide
+ * and aligned to XLEN.
+ */
+static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
+{
+	unsigned long addr;
+
+	/* Token must be aligned */
+	if (!IS_ALIGNED(ssp, SHSTK_ENTRY_SIZE))
+		return -EINVAL;
+
+	/* On RISC-V we're constructing token to be function of address itself */
+	addr = ssp - SHSTK_ENTRY_SIZE;
+
+	if (amo_user_shstk((unsigned long __user *)addr, (unsigned long)ssp) == -1)
+		return -EFAULT;
+
+	if (token_addr)
+		*token_addr = addr;
+
+	return 0;
+}
+
+static unsigned long allocate_shadow_stack(unsigned long addr, unsigned long size,
+					   unsigned long token_offset, bool set_tok)
+{
+	int flags = MAP_ANONYMOUS | MAP_PRIVATE;
+	struct mm_struct *mm = current->mm;
+	unsigned long populate, tok_loc = 0;
+
+	if (addr)
+		flags |= MAP_FIXED_NOREPLACE;
+
+	mmap_write_lock(mm);
+	addr = do_mmap(NULL, addr, size, PROT_READ, flags,
+		       VM_SHADOW_STACK | VM_WRITE, 0, &populate, NULL);
+	mmap_write_unlock(mm);
+
+	if (!set_tok || IS_ERR_VALUE(addr))
+		goto out;
+
+	if (create_rstor_token(addr + token_offset, &tok_loc)) {
+		vm_munmap(addr, size);
+		return -EINVAL;
+	}
+
+	addr = tok_loc;
+
+out:
+	return addr;
+}
+
+SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags)
+{
+	bool set_tok = flags & SHADOW_STACK_SET_TOKEN;
+	unsigned long aligned_size = 0;
+
+	if (!cpu_supports_shadow_stack())
+		return -EOPNOTSUPP;
+
+	/* Anything other than set token should result in invalid param */
+	if (flags & ~SHADOW_STACK_SET_TOKEN)
+		return -EINVAL;
+
+	/*
+	 * Unlike other architectures, on RISC-V, SSP pointer is held in CSR_SSP and is available
+	 * CSR in all modes. CSR accesses are performed using 12bit index programmed in instruction
+	 * itself. This provides static property on register programming and writes to CSR can't
+	 * be unintentional from programmer's perspective. As long as programmer has guarded areas
+	 * which perform writes to CSR_SSP properly, shadow stack pivoting is not possible. Since
+	 * CSR_SSP is writeable by user mode, it itself can setup a shadow stack token subsequent
+	 * to allocation. Although in order to provide portability with other architectures (because
+	 * `map_shadow_stack` is arch agnostic syscall), RISC-V will follow expectation of a token
+	 * flag in flags and if provided in flags, setup a token at the base.
+	 */
+
+	/* If there isn't space for a token */
+	if (set_tok && size < SHSTK_ENTRY_SIZE)
+		return -ENOSPC;
+
+	if (addr && (addr & (PAGE_SIZE - 1)))
+		return -EINVAL;
+
+	aligned_size = PAGE_ALIGN(size);
+	if (aligned_size < size)
+		return -EOVERFLOW;
+
+	return allocate_shadow_stack(addr, aligned_size, size, set_tok);
+}
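
The changelog above notes that user mode can also set up its own restore
token rather than passing SHADOW_STACK_SET_TOKEN. As a hypothetical sketch
only (not part of this patch), assuming a toolchain that accepts the
Zicfiss mnemonics and a task that already has shadow stacks enabled, a
user-space counterpart of create_rstor_token() could look roughly like:

/* Hypothetical user-space helper mirroring create_rstor_token() above. */
static int user_create_rstor_token(unsigned long ssp, unsigned long *token_addr)
{
	unsigned long prev, addr;

	/* Token must be naturally aligned, as in the kernel version. */
	if (ssp & (sizeof(unsigned long) - 1))
		return -1;

	addr = ssp - sizeof(unsigned long);

	/* ssamoswap.d rd, rs2, (rs1): store ssp into the slot just below the top. */
	asm volatile (
		".option push\n"
		".option arch, +zicfiss\n"
		"ssamoswap.d %0, %2, (%1)\n"
		".option pop\n"
		: "=r" (prev)
		: "r" (addr), "r" (ssp)
		: "memory");

	if (token_addr)
		*token_addr = addr;
	return 0;
}

Unlike the kernel helper, a fault here (for example if addr does not lie in
shadow stack memory) would presumably surface as a signal to the process
rather than being absorbed by an exception table entry.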