From patchwork Mon Apr  7 16:19:48 2025
X-Patchwork-Submitter: Jeremy Linton <jeremy.linton@arm.com>
X-Patchwork-Id: 14041357
From: Jeremy Linton <jeremy.linton@arm.com>
To: linux-trace-kernel@vger.kernel.org
Cc: linux-perf-users@vger.kernel.org, mhiramat@kernel.org, oleg@redhat.com,
    peterz@infradead.org, acme@kernel.org, namhyung@kernel.org,
    mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
    jolsa@kernel.org, irogers@google.com, adrian.hunter@intel.com,
    kan.liang@linux.intel.com, thiago.bauermann@linaro.org,
    broonie@kernel.org, yury.khrustalev@arm.com, kristina.martsenko@arm.com,
    liaochang1@huawei.com, catalin.marinas@arm.com, will@kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Jeremy Linton <jeremy.linton@arm.com>
Subject: [PATCH v2 3/6] arm64: uaccess: Add additional userspace GCS accessors
Date: Mon, 7 Apr 2025 11:19:48 -0500
Message-ID: <20250407161951.560865-4-jeremy.linton@arm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250407161951.560865-1-jeremy.linton@arm.com>
References: <20250407161951.560865-1-jeremy.linton@arm.com>
Uprobes need more advanced read, push, and pop userspace GCS
functionality. Implement those features using the existing gcsstr()
and copy_from_user().

It's important to note that GCS pages can be read by normal
instructions, but the hardware validates that pages used by GCS-specific
operations have the GCS privilege set. We aren't validating this in
load_user_gcs() because it would require stabilizing the VMA across the
read, which may fault.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/include/asm/uaccess.h | 42 ++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..34a8b2cc8935 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -20,6 +20,7 @@
 
 #include <asm/asm-extable.h>
 #include <asm/cpufeature.h>
+#include <asm/gcs.h>
 #include <asm/mmu.h>
 #include <asm/mte.h>
 #include <asm/ptrace.h>
@@ -539,6 +540,47 @@ static inline void put_user_gcs(unsigned long val, unsigned long __user *addr,
 	uaccess_ttbr0_disable();
 }
 
+static __always_inline unsigned long __must_check
+copy_from_user(void *to, const void __user *from, unsigned long n);
+
+/*
+ * Unlike put_user_gcs() above, the use of copy_from_user() may provide
+ * an opening for non-GCS pages to be used to source data. Therefore this
+ * should only be used in contexts where that is acceptable.
+ */
+static inline u64 load_user_gcs(unsigned long __user *addr, int *err)
+{
+	unsigned long ret;
+	u64 load = 0;
+
+	gcsb_dsync();
+	ret = copy_from_user(&load, addr, sizeof(load));
+	if (ret != 0)
+		*err = ret;
+	return load;
+}
+
+static inline void push_user_gcs(unsigned long val, int *err)
+{
+	u64 gcspr = read_sysreg_s(SYS_GCSPR_EL0);
+
+	gcspr -= sizeof(u64);
+	put_user_gcs(val, (unsigned long __user *)gcspr, err);
+	if (!*err)
+		write_sysreg_s(gcspr, SYS_GCSPR_EL0);
+}
+
+static inline u64 pop_user_gcs(int *err)
+{
+	u64 gcspr = read_sysreg_s(SYS_GCSPR_EL0);
+	u64 read_val;
+
+	read_val = load_user_gcs((unsigned long __user *)gcspr, err);
+	if (!*err)
+		write_sysreg_s(gcspr + sizeof(u64), SYS_GCSPR_EL0);
+
+	return read_val;
+}
 #endif /* CONFIG_ARM64_GCS */
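
For context on how these helpers are expected to be consumed, below is a
minimal sketch (not part of this patch) of a uretprobe-style caller that
mirrors a hijacked return address onto the user GCS and later recovers
it. The function names prepare_uretprobe_gcs()/finish_uretprobe_gcs()
and the error handling are hypothetical; the sketch assumes
task_gcs_el0_enabled() from <asm/gcs.h> to check whether GCS is active
for the current task. Note that the caller must initialize err to 0,
since the accessors only write *err on failure.

/*
 * Illustrative sketch only: push the trampoline address onto the
 * user shadow stack alongside the normal return-address hijack.
 */
static int prepare_uretprobe_gcs(unsigned long trampoline_vaddr)
{
	int err = 0;

	if (!task_gcs_el0_enabled(current))
		return 0;

	/* Decrements GCSPR_EL0 and stores via gcsstr() on success. */
	push_user_gcs(trampoline_vaddr, &err);
	return err ? -EFAULT : 0;
}

/*
 * Illustrative sketch only: recover the original return address and
 * advance GCSPR_EL0 past the mirrored entry on uretprobe return.
 */
static unsigned long finish_uretprobe_gcs(void)
{
	unsigned long orig_ret;
	int err = 0;

	if (!task_gcs_el0_enabled(current))
		return 0;

	orig_ret = pop_user_gcs(&err);
	if (err)
		return 0;	/* hypothetical: caller decides how to fault */

	return orig_ret;
}

Because pop_user_gcs() is built on copy_from_user(), such a caller
inherits the caveat from the commit message: the read itself is not
restricted to pages with the GCS privilege set, so it is only
appropriate where that relaxation is acceptable.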