From patchwork Mon Nov 30 23:34:41 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11941541
Date: Mon, 30 Nov 2020 15:34:41 -0800
In-Reply-To: <20201130233442.2562064-1-samitolvanen@google.com>
Message-Id: <20201130233442.2562064-2-samitolvanen@google.com>
References: <20201130233442.2562064-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog
Subject: [PATCH v3 1/2] scs: switch to vmapped shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
    James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

The kernel currently uses kmem_cache to allocate shadow call stacks, which
means an overflow may not be detected immediately and can potentially
result in another task's shadow stack being overwritten. This change
switches SCS to use virtually mapped shadow stacks for tasks, which
increases the shadow stack size to a full page and provides more robust
overflow detection, similarly to VMAP_STACK.

Signed-off-by: Sami Tolvanen
Acked-by: Will Deacon
---
 include/linux/scs.h | 12 ++++----
 kernel/scs.c        | 71 ++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 66 insertions(+), 17 deletions(-)

diff --git a/include/linux/scs.h b/include/linux/scs.h
index 6dec390cf154..2a506c2a16f4 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -15,12 +15,8 @@

 #ifdef CONFIG_SHADOW_CALL_STACK

-/*
- * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
- * architecture) provided ~40% safety margin on stack usage while keeping
- * memory allocation overhead reasonable.
- */
-#define SCS_SIZE		SZ_1K
+#define SCS_ORDER		0
+#define SCS_SIZE		(PAGE_SIZE << SCS_ORDER)
 #define GFP_SCS			(GFP_KERNEL | __GFP_ZERO)

 /* An illegal pointer value to mark the end of the shadow stack. */
@@ -33,6 +29,8 @@
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)

+void *scs_alloc(int node);
+void scs_free(void *s);
 void scs_init(void);
 int scs_prepare(struct task_struct *tsk, int node);
 void scs_release(struct task_struct *tsk);
@@ -61,6 +59,8 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk)

 #else /* CONFIG_SHADOW_CALL_STACK */

+static inline void *scs_alloc(int node) { return NULL; }
+static inline void scs_free(void *s) {}
 static inline void scs_init(void) {}
 static inline void scs_task_reset(struct task_struct *tsk) {}
 static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
diff --git a/kernel/scs.c b/kernel/scs.c
index 4ff4a7ba0094..e2a71fc82fa0 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -5,26 +5,49 @@
  * Copyright (C) 2019 Google LLC
  */

+#include <linux/cpuhotplug.h>
 #include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/scs.h>
-#include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/vmstat.h>

-static struct kmem_cache *scs_cache;
-
 static void __scs_account(void *s, int account)
 {
-	struct page *scs_page = virt_to_page(s);
+	struct page *scs_page = vmalloc_to_page(s);

 	mod_node_page_state(page_pgdat(scs_page), NR_KERNEL_SCS_KB,
 			    account * (SCS_SIZE / SZ_1K));
 }

-static void *scs_alloc(int node)
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
+static void *__scs_alloc(int node)
 {
-	void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			kasan_unpoison_vmalloc(s, SCS_SIZE);
+			memset(s, 0, SCS_SIZE);
+			return s;
+		}
+	}
+
+	return __vmalloc_node_range(SCS_SIZE, 1, VMALLOC_START, VMALLOC_END,
+				    GFP_SCS, PAGE_KERNEL, 0, node,
+				    __builtin_return_address(0));
+}

+void *scs_alloc(int node)
+{
+	void *s;
+
+	s = __scs_alloc(node);
 	if (!s)
 		return NULL;

@@ -34,21 +57,47 @@ static void *scs_alloc(int node)
 	 * Poison the allocation to catch unintentional accesses to
 	 * the shadow stack when KASAN is enabled.
 	 */
-	kasan_poison_object_data(scs_cache, s);
+	kasan_poison_vmalloc(s, SCS_SIZE);
 	__scs_account(s, 1);
 	return s;
 }

-static void scs_free(void *s)
+void scs_free(void *s)
 {
+	int i;
+
 	__scs_account(s, -1);
-	kasan_unpoison_object_data(scs_cache, s);
-	kmem_cache_free(scs_cache, s);
+
+	/*
+	 * We cannot sleep as this can be called in interrupt context,
+	 * so use this_cpu_cmpxchg to update the cache, and vfree_atomic
+	 * to free the stack.
+	 */
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
 }

 void __init scs_init(void)
 {
-	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, 0, 0, NULL);
+	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+			  scs_cleanup);
 }

 int scs_prepare(struct task_struct *tsk, int node)
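
A note on the rationale in the commit message above: with a slab-backed 1 KiB
shadow stack, an overflow runs straight into the neighbouring slab object,
while a vmapped, page-sized stack is followed by an unmapped guard page, so
the first out-of-bounds store faults immediately. The stand-alone user-space
program below is only an analogue of that behaviour and is not part of the
patch; the explicit PROT_NONE page stands in for the implicit vmalloc guard
page.

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* one writable page followed by an explicit PROT_NONE guard page */
	unsigned char *scs = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (scs == MAP_FAILED || mprotect(scs + page, page, PROT_NONE) != 0) {
		perror("mmap/mprotect");
		return 1;
	}

	scs[page - 1] = 0xaa;	/* last byte of the "stack": fine */
	printf("in-bounds store succeeded\n");

	scs[page] = 0xbb;	/* one byte past the end: faults (SIGSEGV) */
	printf("not reached\n");
	return 0;
}
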
From patchwork Mon Nov 30 23:34:42 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11941543
Date: Mon, 30 Nov 2020 15:34:42 -0800
In-Reply-To: <20201130233442.2562064-1-samitolvanen@google.com>
Message-Id: <20201130233442.2562064-3-samitolvanen@google.com>
References: <20201130233442.2562064-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.29.2.454.gaff20da3a2-goog
Subject: [PATCH v3 2/2] arm64: scs: use vmapped IRQ and SDEI shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
    James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

Use scs_alloc() to allocate the IRQ and SDEI shadow stacks as well,
instead of using statically allocated stacks.
Signed-off-by: Sami Tolvanen
Acked-by: Will Deacon
---
 arch/arm64/kernel/Makefile |  1 -
 arch/arm64/kernel/entry.S  |  6 ++--
 arch/arm64/kernel/irq.c    | 19 +++++++++++
 arch/arm64/kernel/scs.c    | 16 --------
 arch/arm64/kernel/sdei.c   | 70 ++++++++++++++++++++++++++++++++++++++
 include/linux/scs.h        |  4 ---
 6 files changed, 92 insertions(+), 24 deletions(-)
 delete mode 100644 arch/arm64/kernel/scs.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bbaf0bc4ad60..86364ab6f13f 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -58,7 +58,6 @@ obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
-obj-$(CONFIG_SHADOW_CALL_STACK)		+= scs.o
 obj-$(CONFIG_ARM64_MTE)			+= mte.o

 obj-y					+= vdso/ probes/
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b295fb912b12..5c2ac4b5b2da 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -441,7 +441,7 @@ SYM_CODE_END(__swpan_exit_el0)

 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* also switch to the irq shadow stack */
-	adr_this_cpu scs_sp, irq_shadow_call_stack, x26
+	ldr_this_cpu scs_sp, irq_shadow_call_stack_ptr, x26
 #endif

 9998:
@@ -1097,9 +1097,9 @@ SYM_CODE_START(__sdei_asm_handler)
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* Use a separate shadow call stack for normal and critical events */
 	cbnz	w4, 3f
-	adr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_normal, tmp=x6
+	ldr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
 	b	4f
-3:	adr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_critical, tmp=x6
+3:	ldr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
 4:
 #endif
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 9cf2fb87584a..5b7ada9d9559 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -17,6 +17,7 @@
 #include <linux/init.h>
 #include <linux/irqchip.h>
 #include <linux/kprobes.h>
+#include <linux/scs.h>
 #include <linux/seq_file.h>
 #include <linux/vmalloc.h>
 #include <asm/daifflags.h>
@@ -27,6 +28,22 @@ DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);

 DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
+
+DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#endif
+
+static void init_irq_scs(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			scs_alloc(cpu_to_node(cpu));
+}
+
 #ifdef CONFIG_VMAP_STACK
 static void init_irq_stacks(void)
 {
@@ -54,6 +71,8 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK))
+		init_irq_scs();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
deleted file mode 100644
index e8f7ff45dd8f..000000000000
--- a/arch/arm64/kernel/scs.c
+++ /dev/null
@@ -1,16 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Shadow Call Stack support.
- *
- * Copyright (C) 2019 Google LLC
- */
-
-#include <linux/percpu.h>
-#include <linux/scs.h>
-
-DEFINE_SCS(irq_shadow_call_stack);
-
-#ifdef CONFIG_ARM_SDE_INTERFACE
-DEFINE_SCS(sdei_shadow_call_stack_normal);
-DEFINE_SCS(sdei_shadow_call_stack_critical);
-#endif
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 7689f2031c0c..d12fd786b267 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -7,6 +7,7 @@
 #include <linux/hardirq.h>
 #include <linux/irqflags.h>
 #include <linux/sched/task_stack.h>
+#include <linux/scs.h>
 #include <linux/uaccess.h>

 #include <asm/alternative.h>
@@ -37,6 +38,14 @@ DEFINE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
 DEFINE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);
 #endif

+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+#endif
+
 static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu)
 {
 	unsigned long *p;
@@ -90,6 +99,59 @@ static int init_sdei_stacks(void)
 	return err;
 }

+static void _free_sdei_scs(unsigned long * __percpu *ptr, int cpu)
+{
+	void *s;
+
+	s = per_cpu(*ptr, cpu);
+	if (s) {
+		per_cpu(*ptr, cpu) = NULL;
+		scs_free(s);
+	}
+}
+
+static void free_sdei_scs(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		_free_sdei_scs(&sdei_shadow_call_stack_normal_ptr, cpu);
+		_free_sdei_scs(&sdei_shadow_call_stack_critical_ptr, cpu);
+	}
+}
+
+static int _init_sdei_scs(unsigned long * __percpu *ptr, int cpu)
+{
+	void *s;
+
+	s = scs_alloc(cpu_to_node(cpu));
+	if (!s)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = s;
+
+	return 0;
+}
+
+static int init_sdei_scs(void)
+{
+	int cpu;
+	int err = 0;
+
+	for_each_possible_cpu(cpu) {
+		err = _init_sdei_scs(&sdei_shadow_call_stack_normal_ptr, cpu);
+		if (err)
+			break;
+		err = _init_sdei_scs(&sdei_shadow_call_stack_critical_ptr, cpu);
+		if (err)
+			break;
+	}
+
+	if (err)
+		free_sdei_scs();
+
+	return err;
+}
+
 static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info)
 {
 	unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
@@ -138,6 +200,14 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 		return 0;
 	}

+	if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK)) {
+		if (init_sdei_scs()) {
+			if (IS_ENABLED(CONFIG_VMAP_STACK))
+				free_sdei_stacks();
+			return 0;
+		}
+	}
+
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC :
 							   SDEI_EXIT_SMC;

 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 2a506c2a16f4..18122d9e17ff 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -22,10 +22,6 @@
 /* An illegal pointer value to mark the end of the shadow stack. */
 #define SCS_END_MAGIC		(0x5f6UL + POISON_POINTER_DELTA)

-/* Allocate a static per-CPU shadow stack */
-#define DEFINE_SCS(name)						\
-	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
-
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
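
For readers of the one-line commit message above: the underlying change is
that the IRQ and SDEI shadow stacks stop being static per-CPU arrays (whose
address the entry code could form directly with adr_this_cpu) and become
per-CPU pointers filled in at boot from scs_alloc(), which the entry code
must load with ldr_this_cpu. The kernel-C sketch below is only a condensed,
illustrative view of that pattern; the "example_" identifiers are made up,
and the real symbols are irq_shadow_call_stack_ptr and the sdei_*_ptr
variables in the diff.

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/scs.h>
#include <linux/topology.h>

/* before: storage was the per-CPU symbol itself, address known at link time */
/* DEFINE_PER_CPU(unsigned long [SCS_SIZE / sizeof(long)], example_scs); */

/* after: only a per-CPU pointer remains; entry code loads the value stored here */
static DEFINE_PER_CPU(unsigned long *, example_scs_ptr);

static int __init example_init_scs(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		void *s = scs_alloc(cpu_to_node(cpu));

		if (!s)
			return -ENOMEM;
		per_cpu(example_scs_ptr, cpu) = s;
	}

	return 0;
}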