From patchwork Tue Sep 15 21:16:09 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11777769
Date: Tue, 15 Sep 2020 23:16:09 +0200
X-Mailer: git-send-email 2.28.0.618.gf4bc123cb7-goog
Subject: [PATCH v2 27/37] arm64: mte: Switch GCR_EL1 in kernel entry and exit
From: Andrey Konovalov
To: Dmitry Vyukov, Vincenzo Frascino, Catalin Marinas,
 kasan-dev@googlegroups.com
Cc: Andrey Ryabinin, Alexander Potapenko, Marco Elver, Evgenii Stepanov,
 Elena Petrova, Branislav Rankov, Kevin Brodsky, Will Deacon, Andrew Morton,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov

From: Vincenzo Frascino

When MTE is present, the GCR_EL1 register contains the tag exclude mask
that allows tags to be excluded from random generation via the IRG
instruction.

With the introduction of the new Tag-Based KASAN API, which provides a
mechanism to reserve tags for special reasons, the MTE implementation
has to make sure that the GCR_EL1 setting for the kernel does not
affect userspace processes, and vice versa.

Save and restore the kernel/user mask in GCR_EL1 on kernel entry and
exit.
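To make the mask arithmetic concrete before the diff: the new
mte_init_tags() helper below turns the highest usable tag into a GCR_EL1
exclude mask by building an inclusion bitmask and inverting it. The
following stand-alone C sketch (not part of the patch) mirrors that
computation; genmask() and the two constant values are local stand-ins
for the kernel's GENMASK(), MTE_TAG_MAX and SYS_GCR_EL1_EXCL_MASK, and
the example max_tag value is an assumption chosen for illustration.

/* gcr_excl_demo.c - build: cc -o gcr_excl_demo gcr_excl_demo.c */
#include <stdint.h>
#include <stdio.h>

#define MTE_TAG_MAX           0xFULL    /* stand-in: 16 possible tags   */
#define SYS_GCR_EL1_EXCL_MASK 0xFFFFULL /* stand-in: one bit per tag    */

/* GENMASK(h, 0): bits 0..h set, like include/linux/bits.h */
static uint64_t genmask(unsigned int h)
{
	return ~(uint64_t)0 >> (63 - h);
}

int main(void)
{
	/* If KASAN passes max_tag = 7, tags 0..7 stay in the random
	 * pool and IRG never generates tags 8..15 in the kernel. */
	uint64_t max_tag = 7;
	uint64_t incl = genmask((unsigned int)(max_tag & MTE_TAG_MAX));
	uint64_t excl = ~incl & SYS_GCR_EL1_EXCL_MASK;

	printf("incl = 0x%04llx, gcr_kernel_excl = 0x%04llx\n",
	       (unsigned long long)incl, (unsigned long long)excl);
	return 0;
}

Running it prints incl = 0x00ff, gcr_kernel_excl = 0xff00: the exclude
mask is simply the complement of the included tags within the 16-bit
exclude field.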
Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
---
Change-Id: I0081cba5ace27a9111bebb239075c9a466af4c84
---
 arch/arm64/include/asm/mte-helpers.h |  6 ++++++
 arch/arm64/include/asm/mte.h         |  2 ++
 arch/arm64/kernel/asm-offsets.c      |  3 +++
 arch/arm64/kernel/cpufeature.c       |  3 +++
 arch/arm64/kernel/entry.S            | 26 ++++++++++++++++++++++++++
 arch/arm64/kernel/mte.c              | 19 ++++++++++++++++---
 6 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/mte-helpers.h b/arch/arm64/include/asm/mte-helpers.h
index 5dc2d443851b..60a292fc747c 100644
--- a/arch/arm64/include/asm/mte-helpers.h
+++ b/arch/arm64/include/asm/mte-helpers.h
@@ -25,6 +25,8 @@ u8 mte_get_mem_tag(void *addr);
 u8 mte_get_random_tag(void);
 void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
 
+void mte_init_tags(u64 max_tag);
+
 #else /* CONFIG_ARM64_MTE */
 
 #define mte_get_ptr_tag(ptr)	0xFF
@@ -41,6 +43,10 @@ static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	return addr;
 }
 
+static inline void mte_init_tags(u64 max_tag)
+{
+}
+
 #endif /* CONFIG_ARM64_MTE */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 82cd7c89edec..3142a2de51ae 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -15,6 +15,8 @@
 
 #include
 
+extern u64 gcr_kernel_excl;
+
 void mte_clear_page_tags(void *addr);
 unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
 				      unsigned long n);
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a..dfe6ed8446ac 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -47,6 +47,9 @@ int main(void)
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
   DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
+#endif
+#ifdef CONFIG_ARM64_MTE
+  DEFINE(THREAD_GCR_EL1_USER,	offsetof(struct task_struct, thread.gcr_user_excl));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index eca06b8c74db..3602ac45d093 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1721,6 +1721,9 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 
 	/* Enable in-kernel MTE only if KASAN_HW_TAGS is enabled */
 	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS)) {
+		/* Enable the kernel exclude mask for random tags generation */
+		write_sysreg_s((SYS_GCR_EL1_RRND | gcr_kernel_excl), SYS_GCR_EL1);
+
 		/* Enable MTE Sync Mode for EL1 */
 		sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_SYNC);
 		isb();
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ff34461524d4..79a6848840bd 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -175,6 +175,28 @@ alternative_else_nop_endif
 #endif
 	.endm
 
+	.macro mte_restore_gcr, el, tsk, tmp, tmp2
+#ifdef CONFIG_ARM64_MTE
+alternative_if_not ARM64_MTE
+	b	1f
+alternative_else_nop_endif
+	.if	\el == 0
+	ldr	\tmp, [\tsk, #THREAD_GCR_EL1_USER]
+	.else
+	ldr_l	\tmp, gcr_kernel_excl
+	.endif
+	/*
+	 * Calculate and set the exclude mask preserving
+	 * the RRND (bit[16]) setting.
+	 */
+	mrs_s	\tmp2, SYS_GCR_EL1
+	bfi	\tmp2, \tmp, #0, #16
+	msr_s	SYS_GCR_EL1, \tmp2
+	isb
+1:
+#endif
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -214,6 +236,8 @@ alternative_else_nop_endif
 
 	ptrauth_keys_install_kernel tsk, x20, x22, x23
 
+	mte_restore_gcr 1, tsk, x22, x23
+
 	scs_load tsk, x20
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -332,6 +356,8 @@ alternative_else_nop_endif
 	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
+	mte_restore_gcr 0, tsk, x0, x1
+
 	apply_ssbd 0, x0, x1
 	.endif
 
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 858e75cfcaa0..1c7d963b5038 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -18,10 +18,13 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
 
+u64 gcr_kernel_excl __ro_after_init;
+
 static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 {
 	pte_t old_pte = READ_ONCE(*ptep);
@@ -120,6 +123,13 @@ void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	return ptr;
 }
 
+void mte_init_tags(u64 max_tag)
+{
+	u64 incl = GENMASK(max_tag & MTE_TAG_MAX, 0);
+
+	gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
+}
+
 static void update_sctlr_el1_tcf0(u64 tcf0)
 {
 	/* ISB required for the kernel uaccess routines */
@@ -155,7 +165,11 @@ static void update_gcr_el1_excl(u64 excl)
 static void set_gcr_el1_excl(u64 excl)
 {
 	current->thread.gcr_user_excl = excl;
-	update_gcr_el1_excl(excl);
+
+	/*
+	 * SYS_GCR_EL1 will be set to the current->thread.gcr_user_excl value
+	 * by mte_restore_gcr() in kernel_exit.
+	 */
 }
 
 void flush_mte_state(void)
@@ -181,7 +195,6 @@ void mte_thread_switch(struct task_struct *next)
 	/* avoid expensive SCTLR_EL1 accesses if no change */
 	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
 		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
-	update_gcr_el1_excl(next->thread.gcr_user_excl);
 }
 
 void mte_suspend_exit(void)
@@ -189,7 +202,7 @@ void mte_suspend_exit(void)
 	if (!system_supports_mte())
 		return;
 
-	update_gcr_el1_excl(current->thread.gcr_user_excl);
+	update_gcr_el1_excl(gcr_kernel_excl);
 }
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
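
For reference, here is a stand-alone C model (not part of the patch) of
what the mte_restore_gcr macro above does across one kernel entry/exit
cycle: bits [15:0] of GCR_EL1 are swapped between gcr_kernel_excl and
the task's gcr_user_excl while the RRND bit is preserved, matching the
bfi instruction in the assembly. The register is modeled as a plain
variable and the constant values are stand-ins chosen for illustration.

/* restore_gcr_demo.c - build: cc -o restore_gcr_demo restore_gcr_demo.c */
#include <stdint.h>
#include <stdio.h>

#define GCR_EXCL_MASK 0xFFFFULL    /* stand-in: GCR_EL1 bits [15:0] */
#define GCR_RRND      (1ULL << 16) /* stand-in: GCR_EL1.RRND, bit 16 */

static uint64_t sys_gcr_el1;       /* stands in for the real register */

/* el == 0 loads the per-task user mask, el == 1 the global kernel
 * mask; bits [15:0] are replaced and everything above, including
 * RRND, is kept, like "bfi \tmp2, \tmp, #0, #16" in the patch. */
static void mte_restore_gcr(int el, uint64_t user_excl, uint64_t kernel_excl)
{
	uint64_t excl = (el == 0) ? user_excl : kernel_excl;

	sys_gcr_el1 = (sys_gcr_el1 & ~GCR_EXCL_MASK) | (excl & GCR_EXCL_MASK);
}

int main(void)
{
	uint64_t gcr_kernel_excl = 0xFF00; /* e.g. tags 8..15 reserved */
	uint64_t gcr_user_excl = 0x0000;   /* user allows all tags */

	sys_gcr_el1 = GCR_RRND;            /* RRND set, no exclusions */

	mte_restore_gcr(1, gcr_user_excl, gcr_kernel_excl); /* kernel_entry */
	printf("after kernel_entry: GCR_EL1 = 0x%05llx\n",
	       (unsigned long long)sys_gcr_el1);

	mte_restore_gcr(0, gcr_user_excl, gcr_kernel_excl); /* kernel_exit */
	printf("after kernel_exit:  GCR_EL1 = 0x%05llx\n",
	       (unsigned long long)sys_gcr_el1);
	return 0;
}

This prints 0x1ff00 after entry and 0x10000 after exit, which is why
mte_thread_switch() no longer needs to touch GCR_EL1: the per-task user
mask is reinstated on every return to userspace anyway.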