From patchwork Thu Oct 22 20:23:54 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11851873
Date: Thu, 22 Oct 2020 13:23:54 -0700
In-Reply-To: <20201022202355.3529836-1-samitolvanen@google.com>
Message-Id: <20201022202355.3529836-2-samitolvanen@google.com>
References: <20201022202355.3529836-1-samitolvanen@google.com>
Subject: [PATCH 1/2] scs: switch to vmapped shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
 James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

The kernel currently uses kmem_cache to allocate shadow call stacks, which
means an overflow may not be detected immediately and can potentially result
in another task's shadow stack being overwritten. This change switches SCS to
use virtually mapped shadow stacks, which increases the shadow stack size to
a full page and provides more robust overflow detection, similar to
VMAP_STACK.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 include/linux/scs.h |  7 +----
 kernel/scs.c        | 63 ++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 55 insertions(+), 15 deletions(-)
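The per-CPU cache in the diff below mirrors the NR_CACHED_STACKS scheme used
by VMAP_STACK: a freed shadow stack is parked in a small per-CPU array and
reused by the next allocation before falling back to vmalloc/vfree. As a
rough illustration of that technique only -- the model_* names, the single
shared cache array, and the malloc()/free() backing are stand-ins, not part
of this patch -- here is a self-contained userspace sketch:

/*
 * Illustrative userspace model of the caching scheme below (not kernel
 * code): malloc()/free() stand in for __vmalloc_node_range()/vfree_atomic(),
 * and one global cache array stands in for the per-CPU scs_cache[].
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MODEL_SCS_SIZE	4096	/* stands in for SCS_SIZE == PAGE_SIZE */
#define NR_CACHED_SCS	2	/* matches NR_CACHED_STACKS for VMAP_STACK */

static _Atomic(void *) scs_cache[NR_CACHED_SCS];

static void *model_scs_alloc(void)
{
	void *s;

	/* Fast path: reuse a recently freed stack from the cache. */
	for (int i = 0; i < NR_CACHED_SCS; i++) {
		s = atomic_exchange(&scs_cache[i], NULL);
		if (s) {
			memset(s, 0, MODEL_SCS_SIZE);
			return s;
		}
	}

	/* Slow path: allocate a fresh stack (vmalloc in the real code). */
	return calloc(1, MODEL_SCS_SIZE);
}

static void model_scs_free(void *s)
{
	/* Try to park the stack in an empty cache slot instead of freeing. */
	for (int i = 0; i < NR_CACHED_SCS; i++) {
		void *expected = NULL;

		if (atomic_compare_exchange_strong(&scs_cache[i], &expected, s))
			return;
	}

	free(s);	/* vfree_atomic() in the real code */
}

static void model_scs_cleanup(void)
{
	/* Mirrors scs_cleanup(): drop anything still parked in the cache. */
	for (int i = 0; i < NR_CACHED_SCS; i++)
		free(atomic_exchange(&scs_cache[i], NULL));
}

int main(void)
{
	void *a = model_scs_alloc();
	void *b = model_scs_alloc();

	model_scs_free(a);				/* parked in the cache */
	printf("reused: %d\n", model_scs_alloc() == a);	/* expected: 1 */
	model_scs_free(b);
	model_scs_cleanup();
	return 0;
}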
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 6dec390cf154..86e3c4b7b714 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -15,12 +15,7 @@
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 
-/*
- * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
- * architecture) provided ~40% safety margin on stack usage while keeping
- * memory allocation overhead reasonable.
- */
-#define SCS_SIZE		SZ_1K
+#define SCS_SIZE		PAGE_SIZE
 #define GFP_SCS			(GFP_KERNEL | __GFP_ZERO)
 
 /* An illegal pointer value to mark the end of the shadow stack. */
diff --git a/kernel/scs.c b/kernel/scs.c
index 4ff4a7ba0094..2136edba548d 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -5,50 +5,95 @@
  * Copyright (C) 2019 Google LLC
  */
 
+#include <linux/cpuhotplug.h>
 #include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/scs.h>
-#include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/vmstat.h>
 
-static struct kmem_cache *scs_cache;
-
 static void __scs_account(void *s, int account)
 {
-	struct page *scs_page = virt_to_page(s);
+	struct page *scs_page = vmalloc_to_page(s);
 
 	mod_node_page_state(page_pgdat(scs_page), NR_KERNEL_SCS_KB,
 			    account * (SCS_SIZE / SZ_1K));
 }
 
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
 static void *scs_alloc(int node)
 {
-	void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			memset(s, 0, SCS_SIZE);
+			goto out;
+		}
+	}
+
+	/*
+	 * We allocate a full page for the shadow stack, which should be
+	 * more than we need. Check the assumption nevertheless.
+	 */
+	BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE);
+
+	s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL, 0,
+				 node, __builtin_return_address(0));
 
 	if (!s)
 		return NULL;
 
+out:
 	*__scs_magic(s) = SCS_END_MAGIC;
 
 	/*
 	 * Poison the allocation to catch unintentional accesses to
 	 * the shadow stack when KASAN is enabled.
 	 */
-	kasan_poison_object_data(scs_cache, s);
+	kasan_poison_vmalloc(s, SCS_SIZE);
 	__scs_account(s, 1);
 	return s;
 }
 
 static void scs_free(void *s)
 {
+	int i;
+
 	__scs_account(s, -1);
-	kasan_unpoison_object_data(scs_cache, s);
-	kmem_cache_free(scs_cache, s);
+	kasan_unpoison_vmalloc(s, SCS_SIZE);
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
 }
 
 void __init scs_init(void)
 {
-	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, 0, 0, NULL);
+	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+			  scs_cleanup);
 }
 
 int scs_prepare(struct task_struct *tsk, int node)

From patchwork Thu Oct 22 20:23:55 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11851875
Date: Thu, 22 Oct 2020 13:23:55 -0700
In-Reply-To: <20201022202355.3529836-1-samitolvanen@google.com>
Message-Id: <20201022202355.3529836-3-samitolvanen@google.com>
References: <20201022202355.3529836-1-samitolvanen@google.com>
Subject: [PATCH 2/2] arm64: scs: use vmapped IRQ and SDEI shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
 James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

Use scs_alloc() to allocate the IRQ and SDEI shadow stacks as well, instead
of using statically allocated stacks.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 arch/arm64/include/asm/scs.h | 21 ++++++++++-
 arch/arm64/kernel/entry.S    |  6 ++--
 arch/arm64/kernel/irq.c      |  2 ++
 arch/arm64/kernel/scs.c      | 67 +++++++++++++++++++++++++++++++++---
 arch/arm64/kernel/sdei.c     |  7 ++++
 include/linux/scs.h          |  8 ++---
 kernel/scs.c                 |  4 +--
 7 files changed, 101 insertions(+), 14 deletions(-)
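The key change in the diff below is that the statically sized per-CPU shadow
stacks become per-CPU pointers that scs_init_irq()/scs_init_sdei() fill from
scs_alloc(), so the entry code must load the stack base from the pointer
(ldr_this_cpu) instead of taking the address of a static array
(adr_this_cpu). A minimal userspace C analogy of that distinction -- the
names and the calloc() backing are illustrative stand-ins, not kernel code:

/*
 * Illustrative analogy only (userspace C): with a static per-CPU array the
 * stack base is the address of the symbol itself; with a vmapped stack the
 * base is a value loaded from a per-CPU pointer variable.
 */
#include <stdio.h>
#include <stdlib.h>

#define MODEL_SCS_WORDS 128

/* old scheme: DEFINE_SCS(irq_shadow_call_stack) -- a fixed-size array */
static unsigned long static_irq_scs[MODEL_SCS_WORDS];

/* new scheme: DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr) */
static unsigned long *irq_scs_ptr;

int main(void)
{
	/* filled at boot by scs_init_irq() in the real code */
	irq_scs_ptr = calloc(MODEL_SCS_WORDS, sizeof(unsigned long));
	if (!irq_scs_ptr)
		return 1;

	unsigned long *old_base = static_irq_scs; /* "adr": address of the symbol */
	unsigned long *new_base = irq_scs_ptr;    /* "ldr": load the stored pointer */

	printf("static base %p, vmapped base %p\n",
	       (void *)old_base, (void *)new_base);
	free(irq_scs_ptr);
	return 0;
}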
diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index eaa2cd92e4c1..e9d2c3e67ff9 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -24,6 +24,25 @@
 .endm
 
 #endif /* CONFIG_SHADOW_CALL_STACK */
 
-#endif /* __ASSEMBLY __ */
+#else /* __ASSEMBLY__ */
+
+#include
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+extern void scs_init_irq(void);
+
+extern void scs_free_sdei(void);
+extern int scs_init_sdei(void);
+
+#else
+
+static inline void scs_init_irq(void) {}
+static inline void scs_free_sdei(void) {}
+static inline int scs_init_sdei(void) { return -EOPNOTSUPP; }
+
+#endif
+
+#endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_SCS_H */
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index f30007dff35f..0f76fe8142e4 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -438,7 +438,7 @@ SYM_CODE_END(__swpan_exit_el0)
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* also switch to the irq shadow stack */
-	adr_this_cpu	scs_sp, irq_shadow_call_stack, x26
+	ldr_this_cpu	scs_sp, irq_shadow_call_stack_ptr, x26
 #endif
 
 9998:
@@ -1094,9 +1094,9 @@ SYM_CODE_START(__sdei_asm_handler)
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* Use a separate shadow call stack for normal and critical events */
 	cbnz	w4, 3f
-	adr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_normal, tmp=x6
+	ldr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
 	b	4f
-3:	adr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_critical, tmp=x6
+3:	ldr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
 4:
 #endif
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 9cf2fb87584a..54ba3725bc0e 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 
 /* Only access this in an NMI enter/exit */
@@ -54,6 +55,7 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	scs_init_irq();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
index e8f7ff45dd8f..f85cebf8122a 100644
--- a/arch/arm64/kernel/scs.c
+++ b/arch/arm64/kernel/scs.c
@@ -6,11 +6,70 @@
  */
 
 #include
-#include
+#include
 
-DEFINE_SCS(irq_shadow_call_stack);
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
 
 #ifdef CONFIG_ARM_SDE_INTERFACE
-DEFINE_SCS(sdei_shadow_call_stack_normal);
-DEFINE_SCS(sdei_shadow_call_stack_critical);
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
 #endif
+
+void scs_init_irq(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			scs_alloc(cpu_to_node(cpu));
+}
+
+
+void scs_free_sdei(void)
+{
+	int cpu;
+	void *s;
+
+	if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+		return;
+
+	for_each_possible_cpu(cpu) {
+		s = per_cpu(sdei_shadow_call_stack_normal_ptr, cpu);
+		if (s)
+			scs_free(s);
+
+		s = per_cpu(sdei_shadow_call_stack_critical_ptr, cpu);
+		if (s)
+			scs_free(s);
+	}
+}
+
+int scs_init_sdei(void)
+{
+	int cpu;
+	void *s;
+
+	if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+		return 0;
+
+	for_each_possible_cpu(cpu) {
+		s = scs_alloc(cpu_to_node(cpu));
+		if (!s)
+			goto err;
+		per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) = s;
+
+		s = scs_alloc(cpu_to_node(cpu));
+		if (!s)
+			goto err;
+		per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) = s;
+	}
+
+	return 0;
+
+err:
+	scs_free_sdei();
+	return -ENOMEM;
+}
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 7689f2031c0c..04519a7cb51d 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -138,6 +139,12 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 		return 0;
 	}
 
+	if (scs_init_sdei()) {
+		if (IS_ENABLED(CONFIG_VMAP_STACK))
+			free_sdei_stacks();
+		return 0;
+	}
+
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC :
 							  SDEI_EXIT_SMC;
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 86e3c4b7b714..6b35a83576d4 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -21,13 +21,11 @@
 /* An illegal pointer value to mark the end of the shadow stack. */
 #define SCS_END_MAGIC		(0x5f6UL + POISON_POINTER_DELTA)
 
-/* Allocate a static per-CPU shadow stack */
-#define DEFINE_SCS(name)						\
-	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
-
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
 
+void *scs_alloc(int node);
+void scs_free(void *s);
 void scs_init(void);
 int scs_prepare(struct task_struct *tsk, int node);
 void scs_release(struct task_struct *tsk);
@@ -56,6 +54,8 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk)
 
 #else /* CONFIG_SHADOW_CALL_STACK */
 
+static inline void *scs_alloc(int node) { return NULL; }
+static inline void scs_free(void *s) {}
 static inline void scs_init(void) {}
 static inline void scs_task_reset(struct task_struct *tsk) {}
 static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
diff --git a/kernel/scs.c b/kernel/scs.c
index 2136edba548d..8df4a92cd939 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -24,7 +24,7 @@ static void __scs_account(void *s, int account)
 #define NR_CACHED_SCS 2
 static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
 
-static void *scs_alloc(int node)
+void *scs_alloc(int node)
 {
 	int i;
 	void *s;
@@ -63,7 +63,7 @@ static void *scs_alloc(int node)
 	return s;
 }
 
-static void scs_free(void *s)
+void scs_free(void *s)
 {
 	int i;