From patchwork Fri Jul 2 19:45:18 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12356447
Date: Fri, 2 Jul 2021 12:45:18 -0700
Message-Id: <20210702194518.2048539-1-pcc@google.com>
Subject: [PATCH v2] arm64: mte: switch GCR_EL1 on task switch rather than
 entry/exit
From: Peter Collingbourne
To: Catalin Marinas, Vincenzo Frascino, Will Deacon, Andrey Konovalov
Cc: Peter Collingbourne, Evgenii Stepanov, Szabolcs Nagy, Tejas Belagod,
 linux-arm-kernel@lists.infradead.org

Accessing GCR_EL1 and issuing an ISB can be expensive on some
microarchitectures. To avoid taking this performance hit on every
kernel entry/exit, switch GCR_EL1 on task switch rather than
entry/exit. This is essentially a revert of commit bad1e1c663e0
("arm64: mte: switch GCR_EL1 in kernel entry and exit").

This requires changing how we generate random tags for HW tag-based
KASAN, since at this point IRG would use the user's exclusion mask,
which may not be suitable for kernel use. In this patch I chose to take
the modulus of CNTVCT_EL0; however, alternative approaches are
possible.
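The tag-generation scheme described above can be sketched in plain C (a user-space model, not the kernel code; the function name `model_get_random_tag` and the fixed `mte_tag_mod` value are assumptions for illustration — in the patch the modulus comes from `mte_init_tags()` and the counter from a real `CNTVCT_EL0` read):

```c
#include <stdint.h>

/* Model of the patch's mte_tag_mod: (max_tag & 0xF) + 1, so the full
 * tag range (max_tag = 0xF) gives a modulus of 16 and every allocation
 * tag is reachable. */
static uint64_t mte_tag_mod = 16;

static uint8_t model_get_random_tag(uint64_t cntvct)
{
	/* 0xF0 fills the top-byte bits HW tag-based KASAN expects;
	 * the low nibble is the MTE allocation tag derived from the
	 * counter value modulo mte_tag_mod. */
	return 0xF0 | (cntvct % mte_tag_mod);
}
```

Because `mte_tag_mod` is a power of two here, the modulus reduces to taking the counter's low bits, which is why a free-running timer is "random enough" for tag selection.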
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75
Acked-by: Andrey Konovalov
---
v2:
- rebase onto v9 of the tag checking mode preference series

 arch/arm64/include/asm/mte-kasan.h | 15 ++++++---
 arch/arm64/include/asm/mte.h       |  2 --
 arch/arm64/kernel/entry.S          | 41 -----------------------
 arch/arm64/kernel/mte.c            | 54 +++++++++++-------------------
 4 files changed, 29 insertions(+), 83 deletions(-)

diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index ddd4d17cf9a0..e9b3c1bdbba3 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -13,6 +13,8 @@
 
 #ifdef CONFIG_ARM64_MTE
 
+extern u64 mte_tag_mod;
+
 /*
  * These functions are meant to be only used from KASAN runtime through
  * the arch_*() interface defined in asm/memory.h.
@@ -37,15 +39,18 @@ static inline u8 mte_get_mem_tag(void *addr)
 	return mte_get_ptr_tag(addr);
 }
 
-/* Generate a random tag. */
+/*
+ * Generate a random tag. We can't use IRG because the user's GCR_EL1 is still
+ * installed for performance reasons. Instead, take the modulus of the
+ * architected timer which should be random enough for our purposes.
+ */
 static inline u8 mte_get_random_tag(void)
 {
-	void *addr;
+	u64 cntvct;
 
-	asm(__MTE_PREAMBLE "irg %0, %0"
-	    : "=r" (addr));
+	asm("mrs %0, cntvct_el0" : "=r"(cntvct));
 
-	return mte_get_ptr_tag(addr);
+	return 0xF0 | (cntvct % mte_tag_mod);
 }
 
 /*
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index bc88a1ced0d7..412b94efcb11 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -16,8 +16,6 @@
 
 #include 
 
-extern u64 gcr_kernel_excl;
-
 void mte_clear_page_tags(void *addr);
 unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
 				      unsigned long n);
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ce59280355c5..c95bfe145639 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -175,43 +175,6 @@ alternative_else_nop_endif
 #endif
 	.endm
 
-	.macro mte_set_gcr, tmp, tmp2
-#ifdef CONFIG_ARM64_MTE
-	/*
-	 * Calculate and set the exclude mask preserving
-	 * the RRND (bit[16]) setting.
-	 */
-	mrs_s	\tmp2, SYS_GCR_EL1
-	bfxil	\tmp2, \tmp, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
-	msr_s	SYS_GCR_EL1, \tmp2
-#endif
-	.endm
-
-	.macro mte_set_kernel_gcr, tmp, tmp2
-#ifdef CONFIG_KASAN_HW_TAGS
-alternative_if_not ARM64_MTE
-	b	1f
-alternative_else_nop_endif
-	ldr_l	\tmp, gcr_kernel_excl
-
-	mte_set_gcr	\tmp, \tmp2
-	isb
-1:
-#endif
-	.endm
-
-	.macro mte_set_user_gcr, tsk, tmp, tmp2
-#ifdef CONFIG_ARM64_MTE
-alternative_if_not ARM64_MTE
-	b	1f
-alternative_else_nop_endif
-	ldr	\tmp, [\tsk, #THREAD_MTE_CTRL]
-
-	mte_set_gcr	\tmp, \tmp2
-1:
-#endif
-	.endm
-
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -273,8 +236,6 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 alternative_else_nop_endif
 #endif
 
-	mte_set_kernel_gcr x22, x23
-
 	scs_load tsk, x20
 .else
 	add	x21, sp, #PT_REGS_SIZE
@@ -398,8 +359,6 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 alternative_else_nop_endif
 #endif
 
-	mte_set_user_gcr tsk, x0, x1
-
 	apply_ssbd 0, x0, x1
 	.endif
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 48f218e070cc..b8d3e0b20702 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -23,7 +23,7 @@
 #include 
 #include 
 
-u64 gcr_kernel_excl __ro_after_init;
+u64 mte_tag_mod __ro_after_init;
 
 static bool report_fault_once = true;
 
@@ -98,22 +98,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
 
 void mte_init_tags(u64 max_tag)
 {
-	static bool gcr_kernel_excl_initialized;
-
-	if (!gcr_kernel_excl_initialized) {
-		/*
-		 * The format of the tags in KASAN is 0xFF and in MTE is 0xF.
-		 * This conversion extracts an MTE tag from a KASAN tag.
-		 */
-		u64 incl = GENMASK(FIELD_GET(MTE_TAG_MASK >> MTE_TAG_SHIFT,
-					     max_tag), 0);
-
-		gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
-		gcr_kernel_excl_initialized = true;
-	}
-
-	/* Enable the kernel exclude mask for random tags generation. */
-	write_sysreg_s(SYS_GCR_EL1_RRND | gcr_kernel_excl, SYS_GCR_EL1);
+	mte_tag_mod = (max_tag & 0xF) + 1;
 }
 
 static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
@@ -188,19 +173,7 @@ void mte_check_tfsr_el1(void)
 }
 #endif
 
-static void update_gcr_el1_excl(u64 excl)
-{
-
-	/*
-	 * Note that the mask controlled by the user via prctl() is an
-	 * include while GCR_EL1 accepts an exclude mask.
-	 * No need for ISB since this only affects EL0 currently, implicit
-	 * with ERET.
-	 */
-	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl);
-}
-
-static void mte_update_sctlr_user(struct task_struct *task)
+static void mte_sync_ctrl(struct task_struct *task)
 {
 	/*
 	 * This can only be called on the current or next task since the CPU
@@ -219,6 +192,17 @@ static void mte_update_sctlr_user(struct task_struct *task)
 	else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
 		sctlr |= SCTLR_EL1_TCF0_SYNC;
 	task->thread.sctlr_user = sctlr;
+
+	/*
+	 * Note that the mask controlled by the user via prctl() is an
+	 * include while GCR_EL1 accepts an exclude mask.
+	 * No need for ISB since this only affects EL0 currently, implicit
+	 * with ERET.
+	 */
+	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK,
+			   (mte_ctrl & MTE_CTRL_GCR_USER_EXCL_MASK) >>
+				   MTE_CTRL_GCR_USER_EXCL_SHIFT);
+
 	preempt_enable();
 }
 
@@ -233,13 +217,13 @@ void mte_thread_init_user(void)
 	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
 	/* disable tag checking and reset tag generation mask */
 	current->thread.mte_ctrl = MTE_CTRL_GCR_USER_EXCL_MASK;
-	mte_update_sctlr_user(current);
+	mte_sync_ctrl(current);
 	set_task_sctlr_el1(current->thread.sctlr_user);
 }
 
 void mte_thread_switch(struct task_struct *next)
 {
-	mte_update_sctlr_user(next);
+	mte_sync_ctrl(next);
 
 	/*
 	 * Check if an async tag exception occurred at EL1.
@@ -273,7 +257,7 @@ void mte_suspend_exit(void)
 	if (!system_supports_mte())
 		return;
 
-	update_gcr_el1_excl(gcr_kernel_excl);
+	mte_sync_ctrl(current);
 }
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
@@ -291,7 +275,7 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 	task->thread.mte_ctrl = mte_ctrl;
 
 	if (task == current) {
-		mte_update_sctlr_user(task);
+		mte_sync_ctrl(task);
 		set_task_sctlr_el1(task->thread.sctlr_user);
 	}
 
@@ -467,7 +451,7 @@ static ssize_t mte_tcf_preferred_show(struct device *dev,
 
 static void sync_sctlr(void *arg)
 {
-	mte_update_sctlr_user(current);
+	mte_sync_ctrl(current);
 	set_task_sctlr_el1(current->thread.sctlr_user);
 }
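The patch replaces two encodings of the same tag range: the deleted `mte_init_tags()` turned the maximum KASAN tag into a GCR_EL1 exclude mask, while the new code turns it into a modulus for the timer-based generator. The contrast can be modelled in user-space C (a sketch, not kernel code; the helper names `old_gcr_kernel_excl` and `new_mte_tag_mod` are invented for illustration, and `GENMASK`/`FIELD_GET` are expanded by hand):

```c
#include <stdint.h>

#define SYS_GCR_EL1_EXCL_MASK 0xFFFFUL

/* Model of the removed computation: build an include mask covering
 * tags 0..(max_tag & 0xF), then invert it into a GCR_EL1 exclude
 * mask (GCR_EL1 excludes tags; the kernel wants them included). */
static uint64_t old_gcr_kernel_excl(uint64_t max_tag)
{
	uint64_t incl = (1UL << ((max_tag & 0xF) + 1)) - 1; /* GENMASK(max_tag & 0xF, 0) */

	return ~incl & SYS_GCR_EL1_EXCL_MASK;
}

/* Model of the replacement: the same range expressed as a modulus,
 * as in the new mte_init_tags(). */
static uint64_t new_mte_tag_mod(uint64_t max_tag)
{
	return (max_tag & 0xF) + 1;
}
```

For the full tag range (`max_tag = 0xF`) the old path excluded nothing and the new path yields a modulus of 16, so both generators can produce all sixteen allocation tags; the difference is that the new one no longer depends on GCR_EL1, which now holds the user's exclude mask across kernel entry.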