From patchwork Tue Nov 24 19:59:39 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11929801
Date: Tue, 24 Nov 2020 11:59:39 -0800
Message-Id: <20201124195940.27061-2-samitolvanen@google.com>
In-Reply-To: <20201124195940.27061-1-samitolvanen@google.com>
References: <20201124195940.27061-1-samitolvanen@google.com>
Subject: [PATCH v2 1/2] scs: switch to vmapped shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
    James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

The kernel currently uses kmem_cache to allocate shadow call stacks,
which means an overflow may not be immediately detected and can
potentially result in another task's shadow stack being overwritten.

This change switches SCS to use virtually mapped shadow stacks for
tasks, which increases shadow stack size to a full page and provides
more robust overflow detection, similarly to VMAP_STACK.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Acked-by: Will Deacon
---
 include/linux/scs.h | 12 ++++-----
 kernel/scs.c        | 66 +++++++++++++++++++++++++++++++++++++--------
 2 files changed, 61 insertions(+), 17 deletions(-)
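The stronger overflow detection claimed above comes from the guard area that
vmalloc leaves after each allocation: running off the end of a page-sized
shadow stack now faults instead of silently corrupting whatever object
happened to sit next to a 1 KiB slab slot. The sketch below is a rough
userspace analogue of that property only, not part of the patch; mmap() and
mprotect() stand in for the kernel's vmalloc guard page, and the sizes are
assumptions.

/*
 * Illustrative userspace analogue (not part of the patch): a page-sized
 * shadow stack with an inaccessible guard page directly above it, so an
 * overflow faults immediately rather than overwriting a neighbouring
 * allocation. The shadow call stack grows upwards, hence the guard sits
 * past the high end of the usable page.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Reserve two pages: [shadow stack page][guard page]. */
	unsigned char *scs = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (scs == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Make the page past the stack inaccessible, like vmalloc's guard. */
	if (mprotect(scs + page, page, PROT_NONE) != 0) {
		perror("mprotect");
		return 1;
	}

	memset(scs, 0, page);
	printf("shadow stack: %p..%p, guard page at %p\n",
	       (void *)scs, (void *)(scs + page - 1), (void *)(scs + page));

	/* Uncommenting this write would fault (SIGSEGV) right away: */
	/* scs[page] = 0xff; */

	munmap(scs, 2 * page);
	return 0;
}

In the patch itself the same role is played by the vmalloc guard page that
follows the SCS_SIZE region returned by scs_alloc() in the hunks below.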
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 6dec390cf154..2a506c2a16f4 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -15,12 +15,8 @@
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 
-/*
- * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
- * architecture) provided ~40% safety margin on stack usage while keeping
- * memory allocation overhead reasonable.
- */
-#define SCS_SIZE		SZ_1K
+#define SCS_ORDER		0
+#define SCS_SIZE		(PAGE_SIZE << SCS_ORDER)
 #define GFP_SCS			(GFP_KERNEL | __GFP_ZERO)
 
 /* An illegal pointer value to mark the end of the shadow stack. */
@@ -33,6 +29,8 @@
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
 
+void *scs_alloc(int node);
+void scs_free(void *s);
 void scs_init(void);
 int scs_prepare(struct task_struct *tsk, int node);
 void scs_release(struct task_struct *tsk);
@@ -61,6 +59,8 @@ static inline bool task_scs_end_corrupted(struct task_struct *tsk)
 
 #else /* CONFIG_SHADOW_CALL_STACK */
 
+static inline void *scs_alloc(int node) { return NULL; }
+static inline void scs_free(void *s) {}
 static inline void scs_init(void) {}
 static inline void scs_task_reset(struct task_struct *tsk) {}
 static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
diff --git a/kernel/scs.c b/kernel/scs.c
index 4ff4a7ba0094..25b0dd5aa0e2 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -5,50 +5,94 @@
  * Copyright (C) 2019 Google LLC
  */
 
+#include <linux/cpuhotplug.h>
 #include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/scs.h>
-#include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/vmstat.h>
 
-static struct kmem_cache *scs_cache;
-
 static void __scs_account(void *s, int account)
 {
-	struct page *scs_page = virt_to_page(s);
+	struct page *scs_page = vmalloc_to_page(s);
 
 	mod_node_page_state(page_pgdat(scs_page), NR_KERNEL_SCS_KB,
 			    account * (SCS_SIZE / SZ_1K));
 }
 
-static void *scs_alloc(int node)
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
+void *scs_alloc(int node)
 {
-	void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			kasan_unpoison_vmalloc(s, SCS_SIZE);
+			memset(s, 0, SCS_SIZE);
+			goto out;
+		}
+	}
+
+	s = __vmalloc_node_range(SCS_SIZE, 1, VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL, 0, node,
+				 __builtin_return_address(0));
 
 	if (!s)
 		return NULL;
 
+out:
 	*__scs_magic(s) = SCS_END_MAGIC;
 
 	/*
 	 * Poison the allocation to catch unintentional accesses to
 	 * the shadow stack when KASAN is enabled.
 	 */
-	kasan_poison_object_data(scs_cache, s);
+	kasan_poison_vmalloc(s, SCS_SIZE);
 	__scs_account(s, 1);
 	return s;
 }
 
-static void scs_free(void *s)
+void scs_free(void *s)
 {
+	int i;
+
 	__scs_account(s, -1);
-	kasan_unpoison_object_data(scs_cache, s);
-	kmem_cache_free(scs_cache, s);
+
+	/*
+	 * We cannot sleep as this can be called in interrupt context,
+	 * so use this_cpu_cmpxchg to update the cache, and vfree_atomic
+	 * to free the stack.
+	 */
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
 }
 
 void __init scs_init(void)
 {
-	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, 0, 0, NULL);
+	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+			  scs_cleanup);
 }
 
 int scs_prepare(struct task_struct *tsk, int node)
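As the comment in scs_free() explains, freeing can happen in contexts that
cannot sleep, so the patch parks up to NR_CACHED_SCS stacks per CPU with
this_cpu_cmpxchg() and defers anything beyond that to vfree_atomic(), while
scs_alloc() reclaims a cached stack with this_cpu_xchg(). The following is a
rough userspace analogue of that lock-free take/put cache, not part of the
patch; a single global cache built on C11 atomics stands in for the kernel's
per-CPU one, and malloc stands in for the vmalloc slow path.

/*
 * Illustrative analogue (not part of the patch): a small lock-free cache
 * of previously used shadow stacks, mirroring the reuse logic in
 * scs_alloc()/scs_free().
 */
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

#define SCS_SIZE	4096
#define NR_CACHED_SCS	2	/* matches the patch */

static _Atomic(void *) scs_cache[NR_CACHED_SCS];

static void *scs_alloc_demo(void)
{
	for (int i = 0; i < NR_CACHED_SCS; i++) {
		/* Take a cached stack, leaving NULL behind (cf. this_cpu_xchg). */
		void *s = atomic_exchange(&scs_cache[i], NULL);
		if (s) {
			memset(s, 0, SCS_SIZE);	/* reused stacks must be cleared */
			return s;
		}
	}
	return calloc(1, SCS_SIZE);	/* slow path: fresh allocation */
}

static void scs_free_demo(void *s)
{
	for (int i = 0; i < NR_CACHED_SCS; i++) {
		/* Park the stack in an empty slot (cf. this_cpu_cmpxchg). */
		void *expected = NULL;
		if (atomic_compare_exchange_strong(&scs_cache[i], &expected, s))
			return;
	}
	free(s);	/* cache full: really release it */
}

int main(void)
{
	void *a = scs_alloc_demo();
	void *b = scs_alloc_demo();

	scs_free_demo(a);		/* parked in the cache */
	void *c = scs_alloc_demo();	/* comes back from the cache: c == a */

	scs_free_demo(b);
	scs_free_demo(c);
	return c == a ? 0 : 1;
}

The kernel version additionally drains the per-CPU caches from the CPU
hotplug teardown callback (scs_cleanup(), registered via cpuhp_setup_state()
above), which has no equivalent in this simplified sketch.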
From patchwork Tue Nov 24 19:59:40 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11929795
Date: Tue, 24 Nov 2020 11:59:40 -0800
Message-Id: <20201124195940.27061-3-samitolvanen@google.com>
In-Reply-To: <20201124195940.27061-1-samitolvanen@google.com>
References: <20201124195940.27061-1-samitolvanen@google.com>
Subject: [PATCH v2 2/2] arm64: scs: use vmapped IRQ and SDEI shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
    James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

Use scs_alloc() to allocate the IRQ and SDEI shadow stacks as well,
instead of using statically allocated stacks.
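With statically allocated per-CPU shadow stacks the base address is the
per-CPU symbol itself, so the entry code could form it with adr_this_cpu;
once the stacks come from scs_alloc() at boot, only a per-CPU pointer is
static and its current value has to be loaded at run time, hence the switch
to ldr_this_cpu in the entry.S hunks below. A minimal C analogue of that
difference, illustrative only and not part of the patch (sizes and names are
made up):

/*
 * Illustrative analogue (not part of the patch): a statically allocated
 * stack versus a dynamically allocated one reached through a pointer.
 */
#include <stdio.h>
#include <stdlib.h>

#define SCS_WORDS (4096 / sizeof(unsigned long))

/* Old scheme: the storage itself is static; &static_scs[0] is the base. */
static unsigned long static_scs[SCS_WORDS];

/* New scheme: only the pointer is static; the storage comes from an
 * allocator at boot, so the base must be read out of the pointer. */
static unsigned long *dynamic_scs_ptr;

int main(void)
{
	dynamic_scs_ptr = calloc(SCS_WORDS, sizeof(unsigned long));
	if (!dynamic_scs_ptr)
		return 1;

	unsigned long *old_base = static_scs;		/* cf. adr_this_cpu */
	unsigned long *new_base = dynamic_scs_ptr;	/* cf. ldr_this_cpu */

	printf("static base %p, dynamic base %p\n",
	       (void *)old_base, (void *)new_base);

	free(dynamic_scs_ptr);
	return 0;
}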
Signed-off-by: Sami Tolvanen
Acked-by: Will Deacon
---
 arch/arm64/kernel/Makefile |  1 -
 arch/arm64/kernel/entry.S  |  6 ++--
 arch/arm64/kernel/irq.c    | 19 ++++++++++
 arch/arm64/kernel/scs.c    | 16 --------
 arch/arm64/kernel/sdei.c   | 71 +++++++++++++++++++++++++++++++-------
 include/linux/scs.h        |  4 ---
 6 files changed, 81 insertions(+), 36 deletions(-)
 delete mode 100644 arch/arm64/kernel/scs.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bbaf0bc4ad60..86364ab6f13f 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -58,7 +58,6 @@ obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 obj-$(CONFIG_CRASH_CORE)		+= crash_core.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
-obj-$(CONFIG_SHADOW_CALL_STACK)		+= scs.o
 obj-$(CONFIG_ARM64_MTE)			+= mte.o
 obj-y					+= vdso/ probes/
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b295fb912b12..5c2ac4b5b2da 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -441,7 +441,7 @@ SYM_CODE_END(__swpan_exit_el0)
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* also switch to the irq shadow stack */
-	adr_this_cpu	scs_sp, irq_shadow_call_stack, x26
+	ldr_this_cpu	scs_sp, irq_shadow_call_stack_ptr, x26
 #endif
 
 9998:
@@ -1097,9 +1097,9 @@ SYM_CODE_START(__sdei_asm_handler)
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 	/* Use a separate shadow call stack for normal and critical events */
 	cbnz	w4, 3f
-	adr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_normal, tmp=x6
+	ldr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
 	b	4f
-3:	adr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_critical, tmp=x6
+3:	ldr_this_cpu	dst=scs_sp, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
 4:
 #endif
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index 9cf2fb87584a..5b7ada9d9559 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -17,6 +17,7 @@
 #include <linux/init.h>
 #include <linux/irqchip.h>
 #include <linux/kprobes.h>
+#include <linux/scs.h>
 #include <linux/seq_file.h>
 #include <linux/vmalloc.h>
 #include <asm/daifflags.h>
@@ -27,6 +28,22 @@ DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
 
 DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
 
+DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#endif
+
+static void init_irq_scs(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(irq_shadow_call_stack_ptr, cpu) =
+			scs_alloc(cpu_to_node(cpu));
+}
+
 #ifdef CONFIG_VMAP_STACK
 static void init_irq_stacks(void)
 {
@@ -54,6 +71,8 @@ static void init_irq_stacks(void)
 void __init init_IRQ(void)
 {
 	init_irq_stacks();
+	if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK))
+		init_irq_scs();
 	irqchip_init();
 	if (!handle_arch_irq)
 		panic("No interrupt controller found.");
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
deleted file mode 100644
index e8f7ff45dd8f..000000000000
--- a/arch/arm64/kernel/scs.c
+++ /dev/null
@@ -1,16 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Shadow Call Stack support.
- *
- * Copyright (C) 2019 Google LLC
- */
-
-#include <linux/percpu.h>
-#include <linux/scs.h>
-
-DEFINE_SCS(irq_shadow_call_stack);
-
-#ifdef CONFIG_ARM_SDE_INTERFACE
-DEFINE_SCS(sdei_shadow_call_stack_normal);
-DEFINE_SCS(sdei_shadow_call_stack_critical);
-#endif
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index 7689f2031c0c..cbc370d3bb4f 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -7,6 +7,7 @@
 #include <linux/hardirq.h>
 #include <linux/irqflags.h>
 #include <linux/sched/task_stack.h>
+#include <linux/scs.h>
 #include <linux/uaccess.h>
 
 #include <asm/alternative.h>
@@ -37,6 +38,14 @@ DEFINE_PER_CPU(unsigned long *, sdei_stack_normal_ptr);
 DEFINE_PER_CPU(unsigned long *, sdei_stack_critical_ptr);
 #endif
 
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DECLARE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+#endif
+
 static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu)
 {
 	unsigned long *p;
@@ -48,13 +57,31 @@ static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu)
 	}
 }
 
+static void _free_sdei_scs(unsigned long * __percpu *ptr, int cpu)
+{
+	void *s;
+
+	s = per_cpu(*ptr, cpu);
+	if (s) {
+		per_cpu(*ptr, cpu) = NULL;
+		scs_free(s);
+	}
+}
+
 static void free_sdei_stacks(void)
 {
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		_free_sdei_stack(&sdei_stack_normal_ptr, cpu);
-		_free_sdei_stack(&sdei_stack_critical_ptr, cpu);
+		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
+			_free_sdei_stack(&sdei_stack_normal_ptr, cpu);
+			_free_sdei_stack(&sdei_stack_critical_ptr, cpu);
+		}
+
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK)) {
+			_free_sdei_scs(&sdei_shadow_call_stack_normal_ptr, cpu);
+			_free_sdei_scs(&sdei_shadow_call_stack_critical_ptr, cpu);
+		}
 	}
 }
 
@@ -70,18 +97,40 @@ static int _init_sdei_stack(unsigned long * __percpu *ptr, int cpu)
 	return 0;
 }
 
+static int _init_sdei_scs(unsigned long * __percpu *ptr, int cpu)
+{
+	void *s;
+
+	s = scs_alloc(cpu_to_node(cpu));
+	if (!s)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = s;
+
+	return 0;
+}
+
 static int init_sdei_stacks(void)
 {
 	int cpu;
 	int err = 0;
 
 	for_each_possible_cpu(cpu) {
-		err = _init_sdei_stack(&sdei_stack_normal_ptr, cpu);
-		if (err)
-			break;
-		err = _init_sdei_stack(&sdei_stack_critical_ptr, cpu);
-		if (err)
-			break;
+		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
+			err = _init_sdei_stack(&sdei_stack_normal_ptr, cpu);
+			if (err)
+				break;
+			err = _init_sdei_stack(&sdei_stack_critical_ptr, cpu);
+			if (err)
+				break;
+		}
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK)) {
+			err = _init_sdei_scs(&sdei_shadow_call_stack_normal_ptr, cpu);
+			if (err)
+				break;
+			err = _init_sdei_scs(&sdei_shadow_call_stack_critical_ptr, cpu);
+			if (err)
+				break;
+		}
 	}
 
 	if (err)
@@ -133,10 +182,8 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 		return 0;
 	}
 
-	if (IS_ENABLED(CONFIG_VMAP_STACK)) {
-		if (init_sdei_stacks())
-			return 0;
-	}
+	if (init_sdei_stacks())
+		return 0;
 
 	sdei_exit_mode = (conduit == SMCCC_CONDUIT_HVC) ? SDEI_EXIT_HVC :
 							  SDEI_EXIT_SMC;
diff --git a/include/linux/scs.h b/include/linux/scs.h
index 2a506c2a16f4..18122d9e17ff 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -22,10 +22,6 @@
 /* An illegal pointer value to mark the end of the shadow stack. */
 #define SCS_END_MAGIC		(0x5f6UL + POISON_POINTER_DELTA)
 
-/* Allocate a static per-CPU shadow stack */
-#define DEFINE_SCS(name)						\
-	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
-
 #define task_scs(tsk)		(task_thread_info(tsk)->scs_base)
 #define task_scs_sp(tsk)	(task_thread_info(tsk)->scs_sp)
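As the context above shows, the last word of every shadow stack still holds
the SCS_END_MAGIC sentinel written by scs_alloc(), so a stack that has been
overrun but stopped short of the guard page can still be noticed when the
task exits. The snippet below is an illustrative analogue of that sentinel
check only, not the kernel implementation; the size and the simplified magic
value (POISON_POINTER_DELTA omitted) are assumptions.

/*
 * Illustrative analogue (not part of the series): place a magic word in
 * the last slot of a shadow-stack-sized buffer and detect when it has
 * been overwritten.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define SCS_SIZE	4096UL
#define SCS_END_MAGIC	0x5f6UL		/* simplified for the demo */

static unsigned long *scs_magic(void *s)
{
	/* Last unsigned long slot of the shadow stack region. */
	return (unsigned long *)((char *)s + SCS_SIZE) - 1;
}

static bool scs_end_corrupted(void *s)
{
	return *scs_magic(s) != SCS_END_MAGIC;
}

int main(void)
{
	void *s = calloc(1, SCS_SIZE);
	if (!s)
		return 1;

	*scs_magic(s) = SCS_END_MAGIC;			/* as scs_alloc() does */
	printf("intact: %d\n", scs_end_corrupted(s));	/* prints 0 */

	*scs_magic(s) = 0;				/* simulate an overrun */
	printf("corrupted: %d\n", scs_end_corrupted(s));/* prints 1 */

	free(s);
	return 0;
}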