From patchwork Fri Aug 14 17:27:02 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11715071
Date: Fri, 14 Aug 2020 19:27:02 +0200
Message-Id: <2cf260bdc20793419e32240d2a3e692b0adf1f80.1597425745.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.28.0.220.ged08abb693-goog
Subject: [PATCH 20/35] arm64: mte: Add in-kernel MTE helpers
From: Andrey Konovalov
To: Dmitry Vyukov, Vincenzo Frascino, Catalin Marinas,
 kasan-dev@googlegroups.com
Cc: Marco Elver, Elena Petrova, Andrey Konovalov, Kevin Brodsky,
 Will Deacon, Branislav Rankov, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Alexander Potapenko, linux-arm-kernel@lists.infradead.org,
 Andrey Ryabinin, Andrew Morton, Evgenii Stepanov

From: Vincenzo Frascino

Provide helper functions to manipulate allocation and pointer tags for
kernel addresses.

Low-level helper functions (mte_assign_*, written in assembly) operate
on tag values in the [0x0, 0xF] range. High-level helper functions
(mte_get/set_*) use the [0xF0, 0xFF] range to preserve compatibility
with normal kernel pointers that have 0xFF in their top byte.

The MTE_GRANULE_SIZE definition is moved to the mte_asm.h header, which
doesn't have any dependencies and is safe to include in any low-level
header.

Signed-off-by: Vincenzo Frascino
Co-developed-by: Andrey Konovalov
Signed-off-by: Andrey Konovalov
---
 arch/arm64/include/asm/esr.h     |  1 +
 arch/arm64/include/asm/mte.h     | 46 +++++++++++++++++++++++++++++---
 arch/arm64/include/asm/mte_asm.h | 10 +++++++
 arch/arm64/kernel/mte.c          | 43 +++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S             | 41 ++++++++++++++++++++++++++++
 5 files changed, 138 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/include/asm/mte_asm.h

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 035003acfa87..bc0dc66a6a27 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -103,6 +103,7 @@
 #define ESR_ELx_FSC		(0x3F)
 #define ESR_ELx_FSC_TYPE	(0x3C)
 #define ESR_ELx_FSC_EXTABT	(0x10)
+#define ESR_ELx_FSC_MTE		(0x11)
 #define ESR_ELx_FSC_SERROR	(0x11)
 #define ESR_ELx_FSC_ACCESS	(0x08)
 #define ESR_ELx_FSC_FAULT	(0x04)
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 1c99fcadb58c..733be1cb5c95 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -5,14 +5,19 @@
 #ifndef __ASM_MTE_H
 #define __ASM_MTE_H

-#define MTE_GRANULE_SIZE	UL(16)
+#include
+
 #define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
 #define MTE_TAG_SHIFT		56
 #define MTE_TAG_SIZE		4
+#define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
+#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)

 #ifndef __ASSEMBLY__

+#include
 #include
+#include
 #include

@@ -45,7 +50,16 @@ long get_mte_ctrl(struct task_struct *task);
 int mte_ptrace_copy_tags(struct task_struct *child, long request,
			 unsigned long addr, unsigned long data);

-#else
+void *mte_assign_valid_ptr_tag(void *ptr);
+void *mte_assign_random_ptr_tag(void *ptr);
+void mte_assign_mem_tag_range(void *addr, size_t size);
+
+#define mte_get_ptr_tag(ptr)	((u8)(((u64)(ptr)) >> MTE_TAG_SHIFT))
+u8 mte_get_mem_tag(void *addr);
+u8 mte_get_random_tag(void);
+void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
+
+#else /* CONFIG_ARM64_MTE */

 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0

@@ -80,7 +94,33 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,
 	return -EIO;
 }

-#endif
+static inline void *mte_assign_valid_ptr_tag(void *ptr)
+{
+	return ptr;
+}
+static inline void *mte_assign_random_ptr_tag(void *ptr)
+{
+	return ptr;
+}
+static inline void mte_assign_mem_tag_range(void *addr, size_t size)
+{
+}
+
+#define mte_get_ptr_tag(ptr)	0xFF
+static inline u8 mte_get_mem_tag(void *addr)
+{
+	return 0xFF;
+}
+static inline u8 mte_get_random_tag(void)
+{
+	return 0xFF;
+}
+static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
+{
+	return addr;
+}
+
+#endif /* CONFIG_ARM64_MTE */

 #endif /* __ASSEMBLY__ */

 #endif /* __ASM_MTE_H */
diff --git a/arch/arm64/include/asm/mte_asm.h b/arch/arm64/include/asm/mte_asm.h
new file mode 100644
index 000000000000..aa532c1851e1
--- /dev/null
+++ b/arch/arm64/include/asm/mte_asm.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020 ARM Ltd.
+ */
+#ifndef __ASM_MTE_ASM_H
+#define __ASM_MTE_ASM_H
+
+#define MTE_GRANULE_SIZE	UL(16)
+
+#endif /* __ASM_MTE_ASM_H */
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index eb39504e390a..e2d708b4583d 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -13,8 +13,10 @@
 #include
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -72,6 +74,47 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	return ret;
 }

+u8 mte_get_mem_tag(void *addr)
+{
+	if (system_supports_mte())
+		addr = mte_assign_valid_ptr_tag(addr);
+
+	return 0xF0 | mte_get_ptr_tag(addr);
+}
+
+u8 mte_get_random_tag(void)
+{
+	u8 tag = 0xF;
+
+	if (system_supports_mte())
+		tag = mte_get_ptr_tag(mte_assign_random_ptr_tag(NULL));
+
+	return 0xF0 | tag;
+}
+
+void * __must_check mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
+{
+	void *ptr = addr;
+
+	if ((!system_supports_mte()) || (size == 0))
+		return addr;
+
+	tag = 0xF0 | (tag & 0xF);
+	ptr = (void *)__tag_set(ptr, tag);
+	size = ALIGN(size, MTE_GRANULE_SIZE);
+
+	mte_assign_mem_tag_range(ptr, size);
+
+	/*
+	 * mte_assign_mem_tag_range() can be invoked in a multi-threaded
+	 * context, ensure that tags are written in memory before the
+	 * reference is used.
+	 */
+	smp_wmb();
+
+	return ptr;
+}
+
 static void update_sctlr_el1_tcf0(u64 tcf0)
 {
 	/* ISB required for the kernel uaccess routines */
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 03ca6d8b8670..8c743540e32c 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -149,3 +149,44 @@ SYM_FUNC_START(mte_restore_page_tags)
	ret
 SYM_FUNC_END(mte_restore_page_tags)
+
+/*
+ * Assign pointer tag based on the allocation tag
+ *   x0 - source pointer
+ * Returns:
+ *   x0 - pointer with the correct tag to access memory
+ */
+SYM_FUNC_START(mte_assign_valid_ptr_tag)
+	ldg	x0, [x0]
+	ret
+SYM_FUNC_END(mte_assign_valid_ptr_tag)
+
+/*
+ * Assign random pointer tag
+ *   x0 - source pointer
+ * Returns:
+ *   x0 - pointer with a random tag
+ */
+SYM_FUNC_START(mte_assign_random_ptr_tag)
+	irg	x0, x0
+	ret
+SYM_FUNC_END(mte_assign_random_ptr_tag)
+
+/*
+ * Assign allocation tags for a region of memory based on the pointer tag
+ *   x0 - source pointer
+ *   x1 - size
+ *
+ * Note: size is expected to be MTE_GRANULE_SIZE aligned
+ */
+SYM_FUNC_START(mte_assign_mem_tag_range)
+	/* if (src == NULL) return; */
+	cbz	x0, 2f
+	/* if (size == 0) return; */
+	cbz	x1, 2f
+1:	stg	x0, [x0]
+	add	x0, x0, #MTE_GRANULE_SIZE
+	sub	x1, x1, #MTE_GRANULE_SIZE
+	cbnz	x1, 1b
+2:	ret
+SYM_FUNC_END(mte_assign_mem_tag_range)