From patchwork Tue Jun 25 21:09:15 2024
X-Patchwork-Submitter: Samuel Holland
X-Patchwork-Id: 13712048
From: Samuel Holland
To: Palmer Dabbelt, linux-riscv@lists.infradead.org
Cc: devicetree@vger.kernel.org, Catalin Marinas, linux-kernel@vger.kernel.org,
    Anup Patel, Conor Dooley, kasan-dev@googlegroups.com, Atish Patra,
    Evgenii Stepanov, Krzysztof Kozlowski, Rob Herring, "Kirill A. Shutemov",
    Samuel Holland
Subject: [PATCH v2 04/10] riscv: Add support for userspace pointer masking
Date: Tue, 25 Jun 2024 14:09:15 -0700
Message-ID: <20240625210933.1620802-5-samuel.holland@sifive.com>
In-Reply-To: <20240625210933.1620802-1-samuel.holland@sifive.com>
References: <20240625210933.1620802-1-samuel.holland@sifive.com>

RISC-V supports pointer masking with a variable number of tag bits (called
"PMLEN" in the specification), which is configured at the next-higher
privilege level. Wire up the PR_SET_TAGGED_ADDR_CTRL and
PR_GET_TAGGED_ADDR_CTRL prctls so userspace can request a lower bound on
the number of tag bits and determine the actual number of tag bits. As
with arm64's PR_TAGGED_ADDR_ENABLE, the pointer masking configuration is
thread-scoped, inherited on clone() and fork() and cleared on execve().
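The userspace side of this prctl interface can be sketched as follows. This is an illustrative sketch, not part of the patch: the PR_PMLEN_* values mirror the include/uapi/linux/prctl.h hunk in this patch, the fallback prctl numbers 55/56 are the existing PR_{SET,GET}_TAGGED_ADDR_CTRL values, and the helper names are made up.

```c
/* Illustrative userspace sketch (not part of the patch). */
#include <assert.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_GET_TAGGED_ADDR_CTRL	56
#endif
/* Values from the include/uapi/linux/prctl.h hunk in this patch. */
#define PR_PMLEN_SHIFT	24
#define PR_PMLEN_MASK	(0x7fUL << PR_PMLEN_SHIFT)

/* Encode a requested lower bound on the number of tag bits. */
static unsigned long pmlen_request(unsigned int pmlen)
{
	return ((unsigned long)pmlen << PR_PMLEN_SHIFT) & PR_PMLEN_MASK;
}

/* Decode the effective PMLEN from a PR_GET_TAGGED_ADDR_CTRL result. */
static unsigned int pmlen_from_ctrl(long ctrl)
{
	return (ctrl & PR_PMLEN_MASK) >> PR_PMLEN_SHIFT;
}

/*
 * Ask for at least `pmlen` tag bits; returns the granted PMLEN (which may
 * be larger than requested), or -1 if the kernel or hardware does not
 * support pointer masking.
 */
static int enable_tagged_addr(unsigned int pmlen)
{
	if (prctl(PR_SET_TAGGED_ADDR_CTRL, pmlen_request(pmlen), 0, 0, 0))
		return -1;
	return pmlen_from_ctrl(prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0));
}
```

On hardware without the pointer masking extensions, or on a kernel without this patch, the PR_SET_TAGGED_ADDR_CTRL call fails with EINVAL and enable_tagged_addr() returns -1.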
Signed-off-by: Samuel Holland
---
Changes in v2:
 - Rebase on riscv/linux.git for-next
 - Add and use the envcfg_update_bits() helper function
 - Inline flush_tagged_addr_state()

 arch/riscv/Kconfig                 | 11 ++++
 arch/riscv/include/asm/processor.h |  8 +++
 arch/riscv/include/asm/switch_to.h | 11 ++++
 arch/riscv/kernel/process.c        | 99 ++++++++++++++++++++++++++++++
 include/uapi/linux/prctl.h         |  3 +
 5 files changed, 132 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index b94176e25be1..8f9980f81ea5 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -505,6 +505,17 @@ config RISCV_ISA_C
 	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_POINTER_MASKING
+	bool "Smmpm, Smnpm, and Ssnpm extensions for pointer masking"
+	depends on 64BIT
+	default y
+	help
+	  Add support for the pointer masking extensions (Smmpm, Smnpm,
+	  and Ssnpm) when they are detected at boot.
+
+	  If this option is disabled, userspace will be unable to use
+	  the prctl(PR_{SET,GET}_TAGGED_ADDR_CTRL) API.
+
 config RISCV_ISA_SVNAPOT
 	bool "Svnapot extension support for supervisor mode NAPOT pages"
 	depends on 64BIT && MMU
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index 0838922bd1c8..4f99c85d29ae 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -194,6 +194,14 @@ extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);
 #define RISCV_SET_ICACHE_FLUSH_CTX(arg1, arg2)	riscv_set_icache_flush_ctx(arg1, arg2)
 extern int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long per_thread);
 
+#ifdef CONFIG_RISCV_ISA_POINTER_MASKING
+/* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
+long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
+long get_tagged_addr_ctrl(struct task_struct *task);
+#define SET_TAGGED_ADDR_CTRL(arg)	set_tagged_addr_ctrl(current, arg)
+#define GET_TAGGED_ADDR_CTRL()		get_tagged_addr_ctrl(current)
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index 9685cd85e57c..94e33216b2d9 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -70,6 +70,17 @@ static __always_inline bool has_fpu(void) { return false; }
 #define __switch_to_fpu(__prev, __next)	do { } while (0)
 #endif
 
+static inline void envcfg_update_bits(struct task_struct *task,
+				      unsigned long mask, unsigned long val)
+{
+	unsigned long envcfg;
+
+	envcfg = (task->thread.envcfg & ~mask) | val;
+	task->thread.envcfg = envcfg;
+	if (task == current)
+		csr_write(CSR_ENVCFG, envcfg);
+}
+
 static inline void __switch_to_envcfg(struct task_struct *next)
 {
 	asm volatile (ALTERNATIVE("nop", "csrw " __stringify(CSR_ENVCFG) ", %0",
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index e4bc61c4e58a..dec5ccc44697 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -7,6 +7,7 @@
  * Copyright (C) 2017 SiFive
  */
 
+#include <linux/bitfield.h>
 #include <linux/cpu.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
@@ -171,6 +172,10 @@ void flush_thread(void)
 	memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
 	clear_tsk_thread_flag(current, TIF_RISCV_V_DEFER_RESTORE);
 #endif
+#ifdef CONFIG_RISCV_ISA_POINTER_MASKING
+	if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
+		envcfg_update_bits(current, ENVCFG_PMM, ENVCFG_PMM_PMLEN_0);
+#endif
 }
 
 void arch_release_task_struct(struct task_struct *tsk)
@@ -233,3 +238,97 @@ void __init arch_task_cache_init(void)
 {
 	riscv_v_setup_ctx_cache();
 }
+
+#ifdef CONFIG_RISCV_ISA_POINTER_MASKING
+static bool have_user_pmlen_7;
+static bool have_user_pmlen_16;
+
+long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
+{
+	unsigned long valid_mask = PR_PMLEN_MASK;
+	struct thread_info *ti = task_thread_info(task);
+	unsigned long pmm;
+	u8 pmlen;
+
+	if (is_compat_thread(ti))
+		return -EINVAL;
+
+	if (arg & ~valid_mask)
+		return -EINVAL;
+
+	pmlen = FIELD_GET(PR_PMLEN_MASK, arg);
+	if (pmlen > 16) {
+		return -EINVAL;
+	} else if (pmlen > 7) {
+		if (have_user_pmlen_16)
+			pmlen = 16;
+		else
+			return -EINVAL;
+	} else if (pmlen > 0) {
+		/*
+		 * Prefer the smallest PMLEN that satisfies the user's request,
+		 * in case choosing a larger PMLEN has a performance impact.
+		 */
+		if (have_user_pmlen_7)
+			pmlen = 7;
+		else if (have_user_pmlen_16)
+			pmlen = 16;
+		else
+			return -EINVAL;
+	}
+
+	if (pmlen == 7)
+		pmm = ENVCFG_PMM_PMLEN_7;
+	else if (pmlen == 16)
+		pmm = ENVCFG_PMM_PMLEN_16;
+	else
+		pmm = ENVCFG_PMM_PMLEN_0;
+
+	envcfg_update_bits(task, ENVCFG_PMM, pmm);
+
+	return 0;
+}
+
+long get_tagged_addr_ctrl(struct task_struct *task)
+{
+	struct thread_info *ti = task_thread_info(task);
+	long ret = 0;
+
+	if (is_compat_thread(ti))
+		return -EINVAL;
+
+	switch (task->thread.envcfg & ENVCFG_PMM) {
+	case ENVCFG_PMM_PMLEN_7:
+		ret |= FIELD_PREP(PR_PMLEN_MASK, 7);
+		break;
+	case ENVCFG_PMM_PMLEN_16:
+		ret |= FIELD_PREP(PR_PMLEN_MASK, 16);
+		break;
+	}
+
+	return ret;
+}
+
+static bool try_to_set_pmm(unsigned long value)
+{
+	csr_set(CSR_ENVCFG, value);
+	return (csr_read_clear(CSR_ENVCFG, ENVCFG_PMM) & ENVCFG_PMM) == value;
+}
+
+static int __init tagged_addr_init(void)
+{
+	if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
+		return 0;
+
+	/*
+	 * envcfg.PMM is a WARL field. Detect which values are supported.
+	 * Assume the supported PMLEN values are the same on all harts.
+	 */
+	csr_clear(CSR_ENVCFG, ENVCFG_PMM);
+	have_user_pmlen_7 = try_to_set_pmm(ENVCFG_PMM_PMLEN_7);
+	have_user_pmlen_16 = try_to_set_pmm(ENVCFG_PMM_PMLEN_16);
+
+	return 0;
+}
+core_initcall(tagged_addr_init);
+#endif /* CONFIG_RISCV_ISA_POINTER_MASKING */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 35791791a879..6e84c827869b 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -244,6 +244,9 @@ struct prctl_mm_map {
 # define PR_MTE_TAG_MASK		(0xffffUL << PR_MTE_TAG_SHIFT)
 /* Unused; kept only for source compatibility */
 # define PR_MTE_TCF_SHIFT		1
+/* RISC-V pointer masking tag length */
+# define PR_PMLEN_SHIFT			24
+# define PR_PMLEN_MASK			(0x7fUL << PR_PMLEN_SHIFT)
 
 /* Control reclaim behavior when allocating memory */
 #define PR_SET_IO_FLUSHER		57
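For context on what the granted PMLEN means to a user program: with PMLEN = N, the hardware ignores the top N bits of a virtual address used for a data access, so those bits can carry a tag. The sketch below is illustrative only and not part of the patch; the helper names are made up, and it assumes the masked bits are recovered by sign extension (as the pointer masking specification describes, so user addresses stay in the user half of the address space). It performs only the bit arithmetic and never dereferences a tagged pointer:

```c
/* Illustrative tag arithmetic for a granted PMLEN (not part of the patch). */
#include <assert.h>
#include <stdint.h>

/* Place `tag` in the top `pmlen` bits of a pointer value. */
static uint64_t tag_pointer(uint64_t ptr, uint64_t tag, unsigned int pmlen)
{
	uint64_t mask = ~0ULL << (64 - pmlen);

	return (ptr & ~mask) | ((tag << (64 - pmlen)) & mask);
}

/*
 * Recover the untagged address by sign-extending from bit 63 - pmlen.
 * (Relies on arithmetic right shift of negative values, which is
 * implementation-defined in C but universal on mainstream compilers.)
 */
static uint64_t untag_pointer(uint64_t ptr, unsigned int pmlen)
{
	int64_t v = (int64_t)(ptr << pmlen);

	return (uint64_t)(v >> pmlen);
}
```

With PMLEN = 7, for example, a heap address can carry a 7-bit tag in bits 57-63 and still round-trip back to the original address, which is what makes schemes like hardware-assisted ASAN cheap on this extension.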